Texas A&M US Nuclear Data Program

Nuclear data evaluation is an independent, century-long expert activity accompanying the development of nuclear physics. Its goal is to produce periodic surveys of the world literature in order to recommend and maintain the set of best nuclear data parameters of common use in all basic and applied sciences. After WWII the effort extended and, while it became more international, it continued to be supported mainly by the US for the benefit of the whole world. The Evaluated Nuclear Structure Data File (ENSDF) is the most comprehensive nuclear structure database worldwide, maintained by the United States National Nuclear Data Center (NNDC) at Brookhaven National Laboratory (BNL) and mirrored by the IAEA Vienna Nuclear Data Services. Part of the US Nuclear Data Program since 2005, the Cyclotron Institute is one of the important contributors to ENSDF. In 2018 we became an international evaluation center, working in a consortium of peers traditionally hosted by prestigious national institutes as well as universities. In this paper the main stages of the evaluation work are presented in order to facilitate a basic understanding of the process, as a guide for our potential users. Our goals are to maintain good productivity-versus-quality performance, assuring the currency of the data, and to participate in the effort of modernizing the structure of the ENSDF databases in order to make them compatible with the data-centric paradigms of the future.


Introduction
Nuclear data evaluation fills a century-long chapter of nuclear science. A search in the Nuclear Science Reference (NSR) database maintained at the National Nuclear Data Center (NNDC) (https://www.nndc.bnl.gov/nsr/) on the author M. Curie produces a paper titled The Radioactive Constants as of 1930 [1]. The introduction to this paper states that "the need has arisen for the publication of special Tables of the Radioactive Constants" and continues: "This responsibility has been assumed by the International Radium Standards Commission chosen in Brussels in 1910 (...)." Here we have the origin of what today is known as Nuclear Data Evaluation.
After WWII, Nuclear Data Evaluation continued to be sustained mainly by the USA, having been initiated by Katharine (Kay) Way, who in the late 1940s worked on the Manhattan Project [2]. In 1948 she founded the Nuclear Data Project at the U.S. National Bureau of Standards, which in 1950 published the first Nuclear Data Report. In 1953 the Nuclear Data Project relocated to the U.S. National Academy of Sciences - National Research Council in Washington, D.C., publishing its first nuclear data AEC report in the form of loose-leaf pages called Nuclear Data Sheets. In 1964 the Nuclear Data Project moved to the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, following the need to embed the data program in an active physics research environment, thus making nuclear data evaluation a basic science domain. In February 1966 the first issue of the Nuclear Data Sheets in a journal format, rather than as loose-leaf sheets of data, appeared in Section B of the Nuclear Data journal published by Academic Press, while Section A had been released a year earlier as Atomic Data Tables. In August 1973 the two journals merged into the Atomic Data and Nuclear Data Tables, with Kay Way as the editor of both.
The main evaluation work was organized in the frame of the Nuclear Data Project (NDP) at ORNL, where the data were still entered by hand directly on sheets of paper, with drawings done by hand; these were typed and photographed, together with the professional draftsman's redrawings, in order to generate the final publishable version. An important landmark was the implementation of the so-called 80-column computer format for ENSDF, designed in 1977 by Bruce Ewbank and Marcel Schmorak of the NDP staff and described in February 1978 in the ORNL-5054/R1 report. The 80-column format was judiciously designed and, with some changes added by NNDC, has served for many years and is still in use today; it is now undergoing an extensive modernization under the lead of NNDC. The same report describes the original versions of the data computer codes (Logft, Alpha HF, GTOL, HSICC (Hager-Seltzer), Medlist, and plotting programs), which have ever since been modified and extended at NNDC, with many additional analysis and utility programs added.
The evaluation activity extended and became international with the establishment in 1974 of the Nuclear Structure and Decay Data Network, NSDD [3], under the auspices of the IAEA Nuclear Data Section, bringing together the evaluation efforts of several countries, with the US as the main contributor. Although NNDC at BNL became the US coordinator of both the national (USNDP) and the international (NSDD) efforts, the leading contribution to the editing and processing of the evaluations was still assumed by Oak Ridge. In 1981 the NNDC took over production of the Nuclear Data Sheets and completely computerized the entire operation. NDP and NNDC jointly edited the journal, with Murray Martin as Editor-in-Chief and Jagdish Tuli as Editor, Tuli taking over as sole editor after June 1988, when Martin retired. The present editor, E. A. (Libby) McCutchan, took over upon Tuli's retirement in April 2016, with NNDC still being the most prominent evaluation center in the US and abroad [4].

US Nuclear Data Program
Nuclear science research has, since its beginnings, been producing a great variety of experimental data published in a multitude of publications worldwide. However, each such piece of information is difficult for data users to use, because its context needs to be understood and quite often it can be different from, or even discrepant with, the same type of data reported by other authors. The situation is similar to a jungle with an abundance of useful fruits and seeds that are difficult to use unless a group of experts assumes the task of retrieving, categorizing, and making the data available to the basic nuclear science and applications communities.
As we saw in the Introduction, this mission was gradually undertaken by the small group of scientists constituting the nuclear data evaluation community, who developed the skills and standard procedures suitable for it.
According to the USNDP Mission Statement, our goal is to provide current, accurate, authoritative data for workers in pure and applied areas of nuclear science and engineering. This is accomplished primarily through the compilation, evaluation, dissemination, and archiving of extensive nuclear datasets. USNDP also addresses gaps in the data through targeted experimental studies and the use of theoretical models.

Figure 2 caption: Main portal page of the NNDC internet site (https://www.nndc.bnl.gov/) together with the "Evaluation Pipeline" drawing (on the left side), which illustrates the way the nuclear structure data evaluation is conducted, as described in the text.
The nuclear data evaluation community is organized in evaluation centers located at the historic BNL and ORNL national labs plus the newer ones shown in Fig. 1: in total six national labs and four universities, of which Texas A&M University (TAMU) joined in 2017. Figure 2 presents the main portal page of the NNDC internet site (https://www.nndc.bnl.gov/) together with the "Evaluation Pipeline". The group of buttons situated at the top delineates the two main components of low-energy nuclear data evaluation, one for Nuclear Structure and the other for Nuclear Reactions, each of which is maintained by a different group of specialized evaluators. As a rule the Nuclear Structure group takes care of the discrete nuclear properties, a list of which is given in the figure, while the Nuclear Reactions group evaluates the continuum properties (the cross sections). The evaluated databases are called ENSDF (Evaluated Nuclear Structure Data File) for Nuclear Structure and ENDF (Evaluated Nuclear Data File) for Nuclear Reactions, respectively. In what follows we will limit our presentation to the discrete nuclear structure properties only.
Despite the fact that nuclear data are greatly needed and extensively used, most data users have no knowledge of the evaluation process, which can result in underperformance or even wrongful usage of the databases; for this reason a short description of the progression along the evaluation pipeline is given in the next subsections.
Thus there are two important things useful for a user to know. First, there is complete transparency of the evaluation process all along the evaluation pipeline, as well as in the evaluation procedures themselves. Secondly, all the evaluation tools are equally available to the evaluator and to the data user, who can use them for scientific purposes.
In the following subsections we overview each segment of the evaluation pipeline.

Publish
The first segment of the evaluation pipeline is that of retrieving the relevant publications. Before starting the evaluation, an extensive bibliographic search is conducted by the evaluator for each nucleus in order to retrieve virtually all the articles published worldwide for that nucleus, including both primary and secondary publications.
The next step consists of a first selection of those publications that contain relevant experimental data or any other piece of relevant information, followed by sorting them by the reaction or decay process through which the nucleus of interest was populated and studied. For this purpose the best tool is the NSR bibliographic database, which was developed and is maintained on a daily basis by the NNDC and collaborators [5]. The great advantage of the NSR database is that it contains a detailed list of keywords which allows a reliable retrieval of all relevant publications.

Compile
The second segment of the evaluation pipeline is that of compilation, which consists of extracting the most relevant data from one publication and placing them in the 80-column ENSDF format. The compilation files are stored in the so-called XUNDL database (XUNDL is the acronym for Experimental Unevaluated Nuclear Data List), which has its own search engine activated by the XUNDL button of the NNDC portal. Each XUNDL file is named after the type of nuclear reaction or decay process and is a collection of data tables and comments related to the publication, together with some general descriptors of the experimental procedures. At the end of a search, a copy of the XUNDL file can be downloaded in pdf or in the ENSDF format.
The main purpose of the XUNDL compilation is to collect the data and check their correctness before the evaluation process, allowing the evaluator to concentrate on the generally more difficult operations and decisions related to the evaluation itself. In recent years the XUNDL compilation has started to be used as a tool to vet the data structure of articles submitted for publication to various scientific journals, as a preliminary step of the peer-review process, leading to a substantial increase in the quality of the published papers.
Although compilation is a specialized operation, the ENSDF format is relatively easy to deal with for non-expert users (the ENSDF manual can be downloaded from https://www-nds.iaea.org/public/documents/ensdf/), allowing researchers to use the powerful evaluation tools (see subsection 2.4) alone or under the guidance of an expert evaluator.
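To make the point concrete, a fixed-width ENSDF-style record can be read by slicing column ranges directly. The following is a minimal, illustrative Python sketch, not one of the official tools; the field positions used (NUCID in columns 1-5, record type in column 8, a first data field in columns 10-19 with its uncertainty in columns 20-21) are our reading of the ENSDF manual conventions and should be verified there, and the sample level record is schematic.

```python
# Minimal, illustrative reader for an ENSDF-style 80-column record.
# Field boundaries here are assumptions to be checked against the ENSDF manual.

def parse_record(line: str) -> dict:
    line = line.ljust(80)               # records are fixed 80-column "cards"
    return {
        "nucid": line[0:5].strip(),     # nucleus identifier, e.g. "152SM"
        "rtype": line[7].strip(),       # record type: L = level, G = gamma, ...
        "field1": line[9:19].strip(),   # first data field, e.g. an energy in keV
        "dfield1": line[19:21].strip(), # its uncertainty (in last digits)
    }

# schematic level record for illustration
rec = parse_record("152SM  L 121.7817  3")
```

Slicing by fixed columns is all that is needed, which is why researchers outside the evaluation network can also read and write the format.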

Evaluate
The evaluation segment is the most important step of the evaluation pipeline, where the evaluation proper takes place. As for the previous segments, full access to the evaluation database is given to the users through the search engine activated by the ENSDF button in Fig. 2.
The basic idea of the evaluation is prefigured by the preceding steps: once all suitable publications and XUNDL files are prepared, the evaluator builds one particular dataset for each individual reaction or decay study. For example, if ten articles were collected for a particular α-decay process, of which the two most important were compiled into XUNDL files, the evaluator uses the two XUNDL-coded data files together with the remaining eight publications to build the α-decay dataset by assembling in a coherent way all the individual pieces of data therein. When multiple data exist for one parameter, as for example several measured T 1/2 values for one particular level, a unique value is adopted by reviewing how each individual value was measured and deciding its weight in the averaging procedure. If there are several measured α energies between the same pair of parent-daughter levels, another averaging procedure is specifically required, and so on until all data are included, the discrepancies are resolved, and the dataset is fully built.
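The averaging step for multiple measured values can be sketched with the standard inverse-variance weighted mean, in which each measurement enters with a weight set by its quoted uncertainty. The Python sketch below is illustrative only and is not one of the USNDP codes; actual ENSDF practice involves additional evaluator judgment (outlier inspection, inflation of the uncertainty when the values are discrepant), and the half-life values in the usage line are hypothetical.

```python
from math import sqrt

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean of independent measurements."""
    weights = [1.0 / s ** 2 for s in sigmas]          # w_i = 1 / sigma_i^2
    wsum = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / wsum
    sigma = 1.0 / sqrt(wsum)                          # uncertainty of the mean
    # reduced chi-square: consistency check among measurements; values well
    # above 1 flag discrepant data that need closer evaluator scrutiny
    ndof = len(values) - 1
    chi2red = sum(w * (x - mean) ** 2
                  for w, x in zip(weights, values)) / ndof if ndof else 0.0
    return mean, sigma, chi2red

# hypothetical half-life measurements (ns) from three experiments
mean, sigma, chi2red = weighted_average([1.40, 1.45, 1.38], [0.02, 0.05, 0.03])
```

The reduced chi-square is what signals the discrepant cases mentioned above, where the evaluator must review how each value was measured rather than average blindly.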
The work of composing a particular dataset is similar to reconstructing an archaeological artifact from individual fragments: when many fragments are found, the reconstruction is almost certain; when fewer fragments are found, the reconstruction becomes very tentative. Thus, although the evaluation procedures are generally exact, as a rule one cannot develop fully standard methodologies; everything depends on how the evaluator can integrate redundant or incomplete pieces of information coming from diverse sources in order to make coherent decisions that help build the dataset, even in cases when data from different sources are contradictory.
Here also arises the concept of data sheets, which from the beginning conceptualized data evaluation as described in the Introduction: a data sheet hosts one particular reaction or decay dataset, built out of the "leaves" of individual data collected from publications. Once the particular data sheets are built, the second level of evaluation takes place by building up the whole "tree" of data, the Adopted Levels, Gammas dataset, out of the individual reaction and decay datasets for each nucleus. For each level and transition, data are collected from the individual datasets and interwoven to build the most comprehensive Adopted dataset, repeating the procedures used for the individual datasets. Finally follows the most difficult operation: adopting the spins and parities of the levels based on measured properties known as "strong" arguments, in combination with the more theoretical/qualitative "weak" arguments.
The most important methodological tool at the evaluator's disposal is the comment, by which different data patterns are discussed, the bibliographic references are introduced, and the way the evaluation sequence was conducted is permanently disclosed. Thus an "open standard" concept becomes operational by combining rigorous standard evaluation rules with ad hoc decisions taken consensually to solve non-standard aspects and reach an optimum decision in every context. This is the best aspect of the ENSDF evaluation, which makes the process transparent to the public down to the details by fully disclosing the data, the bibliographic references, and the decisional thinking behind each evaluation sequence.
The Adopted Levels, Gammas dataset is nuclear data evaluation's unique contribution to nuclear science: building for each nucleus the "Tree of Nuclear Knowledge" through Nuclear Structure RESTORATION.

Process & Validate
The Process & Validate segment is the computerized stage of the evaluation pipeline, where the formatted datasets are passed through computer codes in order to calculate new parameters from the evaluated data as well as to check in multiple ways, process, and prepare the data for publication. There are two families of codes, the Analysis Codes and the Utility Codes, listed in Fig. 3 where a short description is given for each. The programs initially designed and implemented in Fortran are undergoing an extensive redesign and modernization, with the newer versions implemented in Java. They are publicly available for free download from the IAEA Vienna Nuclear Data Services site, where executable versions are given for the main computing platforms, together with user manuals. The codes are highly reliable and can be run not only by evaluators but by any user, provided the data are put in the ENSDF format.

Use
Once the evaluation is completed, the data are ready to use through several dissemination channels. The main channel is the NNDC internet site, which is mirrored by the IAEA Vienna Nuclear Data Services. As already mentioned, the NNDC portal provides under the XUNDL and ENSDF buttons the search engines where the interested user can retrieve and download data in pdf format or in the database format, the latter being of interest for users who, for example, want to sample the formatted data to use as templates for their own purposes.
The Nuclear Data Sheets (NDS), an Elsevier publication, is the traditional dissemination tool of the ENSDF database and is also fully available through the internet (although free access requires an institutional subscription). The main difference is that NDS articles are usually published only for the so-called full mass-chain evaluations, in which the whole set of isobaric nuclei is re-evaluated about every ten years (so the new data published in the interval are put together with the historic data). If, after the full evaluation is published, some datasets are separately revisited and re-evaluated, the updated version is added only to the ENSDF database, which is thus maintained as the most current version of the database.
The most visited nuclear structure data dissemination tool is the NuDat interface, designed and maintained by the NNDC. This is a standalone data environment drawing on the ENSDF pool, the main characteristic of which is that it delivers only numerical data (with no comments). This makes it very popular with the majority of more casual users who need "just a number" for their applications but are not interested in following how that piece of data was obtained. The ease of delivering the data, combined with the general trend of growing needs for nuclear data, has brought the number of data retrievals to over five million per year. For expert users, however, the full ENSDF data structure is the most suitable, because it makes the evaluation process transparent: such users can evaluate by themselves the quality of the data and conduct their own private evaluation sequences once they have the data and the bibliographic references.

Data evaluation
At Texas A&M University we assumed the mission of the US Nuclear Data Program to promote and accomplish mass-chain nuclear structure data evaluation as well as to address gaps in experimental data through targeted experiments. We took on this mission in 2005, when we started to work under contract with BNL/NNDC; since 2018 we have extended our activity as an international NSDD evaluation center of the ENSDF/US Nuclear Data Program, financed by the Cyclotron Institute DOE grant. Figure 5 presents the performance of the Texas A&M Evaluation Center and our current responsibilities in the evaluation network. We covered a large region, from the relatively low mass numbers A=34, 35 and 37, through A=77 and A=97, and the rare-earth region marked distinctly in the figure, up to the superheavy mass chain A=252. Each of these regions has its own specificity (for example, abundant resonance reactions at low mass numbers, and sparse and fragmented data at the extreme of the superheavy elements). For this reason nuclear data evaluation has traditionally been done by experimentalists with a broad domain of expertise.
Our focus eventually became the rare-earth region of nuclei with A=140-160, where the abundance of data is highest, both in the number of isobars (the nuclear chart being widest in this region, typically 17 nuclei per mass chain) and in the number and size of reaction and decay datasets. The Texas A&M Center thus became specialized in dealing with some of the most complex datasets of the nuclear chart. Complexity is expressed by the number of levels and the number of connecting transitions over the number of datasets; for these mass chains the typical numbers are 1000-1500 levels, 2200-3000 gamma transitions, and 80-100 datasets. The size of the database is among the largest (typically 20,000-25,000 lines), which generates huge manuscripts: the unabridged versions run to 600-700 pages per mass chain, while the printed manuscripts typically reach 400-500 pages (generally by omitting the level schemes of the reaction datasets, considered redundant).
The mass chains in general, and especially those in the rare-earth mass region, are difficult to bring to publication because of their complexity and size, and especially because of the review process, which can spread over a couple of years (it is very difficult to review 400-500 pages of very data-dense manuscripts). Moreover, once a manuscript gets ready for publication, a couple of years of new publications must be incorporated, which can trigger another round of review and propagate the delays once more. Unfortunately the number of experienced data evaluators is close to a critically low limit, and the whole activity is further slowed down because the evaluators are getting overloaded. For these reasons the currency of the ENSDF database tends to be longer than the goal of ten years. Overall we have evaluated more than 300 nuclei in 20 mass chains, resulting in a number of Nuclear Data Sheets publications (refs. [6]-[20]).

Precise measurements for data evaluation
An important contribution to the USNDP effort was solving the important problem of the internal conversion coefficients (ICCs). ICCs are important parameters throughout the entire ENSDF database. They are calculated by the code BrIcc (see Fig. 3), which depends strongly, among other things, on the treatment of the atomic vacancy formed by the emission of the conversion electron. The only way to test which of the theoretical approaches best describes reality was to precisely measure a set of ICCs and see which theory is in best agreement with the experimental values.
We measured a series of well-selected isomeric transitions in 93Nb, 103Rh, 125Te, 127Te, 111Cd, 119Sn, 134Cs, 137Ba, 191Os, 193Ir, and 197Pt, and found that the relativistic Dirac-Fock theory with the so-called "frozen orbital" treatment of the atomic vacancy gave the best agreement with the experimental series (refs. [21]-[35]). Based on the results of our ICC measurements, this type of calculation became the standard in the ENSDF database.
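The comparison between the measured ICC series and each theoretical prescription can be quantified with a simple figure of merit; the Python sketch below is illustrative only (the actual analysis in refs. [21]-[35] is more detailed) and computes an average chi-square per data point for one theory, the prescription yielding the smallest value being the one in best agreement with experiment.

```python
def chi2_per_point(exp, dexp, theo):
    """Average chi-square between an experimental series and one theory.

    exp, dexp -- measured values and their uncertainties
    theo      -- the corresponding theoretical predictions
    """
    terms = [((e - t) / de) ** 2 for e, de, t in zip(exp, dexp, theo)]
    return sum(terms) / len(terms)
```

One would evaluate this once per vacancy treatment (e.g. "no hole" versus "frozen orbital") against the same experimental series and adopt the treatment giving the smallest value.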

New Initiatives & Directions
Derived from nuclear research, evaluated data feed back to enhance the usefulness of nuclear science and applications. What if, reversing the terms, one could do nuclear research out of evaluated nuclear data? What is the rationale for such an attempt?
Nuclear science collects data in order to add knowledge about nuclear phenomena. Motivated by novelty, basic science produces theoretical interpretations based on the properties of the data that were identified at the moment of the research. Once the results are published, the data enter the nuclear data repository, and pieces of information are then used to aid research and applications, or to study the systematic behavior of some known properties. However, it is quite possible that the stored data can be searched for new "metaproperties" that were not known at the time the data were produced, much like historical archives whose documents can sprout new knowledge after centuries of silence, or like a mine that can be reopened for exploitation many years after it was closed.
Thus, based on such a metaproperty, the concept of the level scheme can be revisited, with possibly good scientific benefits. This research is in progress.