THE MCSAFE PROJECT-HIGH-PERFORMANCE MONTE CARLO BASED METHODS FOR SAFETY DEMONSTRATION: FROM PROOF OF CONCEPT TO INDUSTRY APPLICATIONS

The increasing use of Monte Carlo methods for core analysis is fostered by the huge and cheap computing power available nowadays, e.g. in large HPC systems. Apart from the classical criticality calculations, the application of Monte Carlo methods to depletion analysis and to cross section generation for diffusion and transport core simulators is also expanding. In addition, the development of multi-physics codes coupling Monte Carlo solvers with thermal hydraulic codes (CFD, subchannel and system thermal hydraulics) to perform static full core analysis at fuel assembly or pin level has progressed in the last decades. Finally, extensions of the Monte Carlo codes to describe the behavior of prompt and delayed neutrons, control rod movements, etc. started some years ago. The recent coupling of dynamic versions of Monte Carlo codes with subchannel codes makes possible the analysis of transients, e.g. rod ejection accidents, and paves the way for the simulation of any kind of design basis accident as an alternative to diffusion and transport based deterministic solvers. The H2020 McSAFE project is focused on the improvement of depletion methods considering thermal hydraulic feedback, the extension of the coupled neutronic/thermal hydraulic codes by the incorporation of a fuel performance solver, the development of dynamic Monte Carlo codes, and the development of methods to handle large depletion problems and to reduce the statistical uncertainty. The validation of the multi-physics tools developed within McSAFE will be performed using plant data and unique tests, e.g. the SPERT III E REA test. This paper describes the main developments, solution approaches, and selected results.


INTRODUCTION
Fast running and highly accurate numerical methods and codes able to predict local safety parameters of reactor cores at steady state conditions, e.g. during the operational lifetime (BOC, EOC) or under transient situations, are urgently needed by manufacturers, utilities and regulators to optimize core designs and to assess the safety features [1]. Nowadays, efforts are underway to move from nodal diffusion-based core simulations to more detailed simulations based on higher-order transport solvers such as SP3, PN or SN in order for pin-by-pin simulations to become feasible. In this direction, new developments are emerging with focus on a new class of transport codes written from scratch to run in High Performance Computing (HPC) environments. However, it will take time until such new developments can be used in industry-like applications due to challenging problems such as acceleration techniques, efficient parallelization algorithms, reduction of memory requirements, etc. In fact, one of the main drawbacks of deterministic higher-order methods is the huge CPU time needed to solve large problems such as a whole core, even while retaining some of the approximations inherent to the solution methods. At the same time, Monte Carlo codes for core simulations have experienced a tremendous increase of usage in the nuclear community because of their flexibility in modelling any kind of geometry, the absence of angular, energy and spatial approximations, and their inherent suitability for effectively utilizing HPC architectures. In addition, efforts are underway worldwide to develop multi-physics tools based on Monte Carlo codes which are intended to have the capability of performing time-dependent solutions considering the behavior of prompt and delayed neutrons. These tools will be able to treat safety-related transients and time-dependent geometry changes such as control rod movements.
These goals were first tackled from a "proof of principle" point of view within the FP7 High Performance Monte Carlo Reactor Core Simulations (HPMC) project [2]. Based on the success of HPMC, the H2020 McSAFE project started in 2017 with the objective to further develop, improve, and validate the Monte Carlo-based coupled codes and methods already implemented within HPMC in order to facilitate the transition from "proof of concept and academic-like" core simulations to more realistic, "industry-like and safety-relevant" applications [3]. McSAFE gathers experts on code development (neutronics, thermal hydraulics and thermo-mechanics) and multi-physics coupling from universities, research centers, and industry, with a total of twelve partners from nine European countries. Under this framework, different Monte Carlo codes, e.g. TRIPOLI [4], SERPENT [5], MONK [6] and MCNP [7], subchannel codes such as SCF (SubChanFlow) [8] and the fuel performance code TRANSURANUS [9] are being improved and optimized for high fidelity simulations. In order to achieve the overall objective of the project, the R&D activities within McSAFE are focused on a) methods for full-core Monte Carlo depletion and optimized thermal-hydraulic feedback, b) multi-physics code coupling and integration into the NURESIM platform, c) developments of dynamic Monte Carlo methods for transient analysis, and d) validation of the developed multi-physics codes using both plant data (depletion problem) and experimental data of the SPERT III E REA (dynamic Monte Carlo problem). In Chapter 2, the status of the investigations related to advanced depletion calculations with the McSAFE codes is presented and selected results are discussed. In Chapter 3, the multi-physics coupling methods improved and integrated in the NURESIM platform are described and their main features highlighted. The developments related to dynamic Monte Carlo methods for transient analysis are discussed in Chapter 4.
The paper concludes with a short description of the validation work in Chapter 5.

MONTE-CARLO METHODS FOR DEPLETION CALCULATIONS
The McSAFE goal was to extend and optimize the Monte Carlo based multi-physics codes to perform depletion calculations of real core loadings taking into account the thermal hydraulic feedback. For full core depletion calculations at pin level, improvements and optimization of the convergence behaviour, the computational efficiency, the memory management on HPC systems, and the parallel scalability of the Monte Carlo codes are needed. In order to assess the current burnup capabilities, limitations and potential bottlenecks of Monte Carlo codes, e.g. SERPENT2, a mini-core using a 2D model of a 17x17-pin FA was analyzed. Axially, 34 zones of 34 cm each and no radial subdivision of the pins were considered, resulting in a SERPENT2 model with 9556 independent burnable zones [10]. It was found that a good global convergence at the low-flux locations (absorber positions, e.g. burnable poison) can be achieved under the assumption of equilibrium xenon, with a statistical uncertainty below 1% in the central zones and larger than 1% in the top and bottom zones. To improve the statistical uncertainty in the bottom and top zones, more than 2.0E8 histories are needed. Based on these results, it is estimated that more than 1.0E11 histories are needed per burnup step when solving a full PWR core depletion problem. To assess the RAM requirements of MC codes for depletion calculations (the RAM demand increases with the number of burnable materials, while in HPC architectures the RAM is limited to around 60 to 1000 GB per node), different mini-cores (5x5 FA, 8x8 FA) were investigated with SERPENT2 on an HPC architecture. In Figure 1a, the increase of RAM with the number of FAs considered is shown. In the case of the KIT HPC FH2 cluster with a 64 GB RAM limit, a maximum of 10 to 12 FAs fully subdivided into depletion zones can be modeled for the time being. An extrapolation of the RAM requirement to a typical 193-FA core with full subdivision into depletion zones yields a memory size of around 750 GB.
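The extrapolation above can be illustrated with a back-of-the-envelope sketch. The per-FA memory and overhead figures below are assumptions chosen only to reproduce the order of magnitude quoted in the text, not measured project data:

```python
# Hypothetical illustration: linear extrapolation of the RAM demand of a
# Monte Carlo depletion model with the number of fully-subdivided fuel
# assemblies (per-FA cost and overhead are assumed values).
def extrapolate_ram(ram_per_fa_gb, overhead_gb, n_fa):
    """Estimate total RAM (GB), assuming demand grows linearly with FA count."""
    return overhead_gb + ram_per_fa_gb * n_fa

# With ~3.8 GB per FA and ~15 GB fixed overhead, a 193-FA PWR core lands
# near the ~750 GB figure quoted above, while a 64 GB node accommodates
# only on the order of ten FAs.
estimate = extrapolate_ram(3.8, 15.0, 193)
print(round(estimate))  # prints 748
```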
Regarding the parallel scalability, a performance analysis was done for a mini-core with eight FAs discretized for a depletion calculation (40 GB RAM) on the HPC FH2 cluster with an increasing number of nodes (each node with 20 Intel Xeon E5-2660 v3 cores at a base clock frequency of 2.6 GHz). In Figure 1b, the calculation time for a single burnup step (5.0E8 histories), normalized to the one-node (20 processors) calculation time, is plotted against the increasing number of nodes. There, the ideal scalability per node (black curve) and the inverse of the normalized calculation time (red curve) are also shown. A good scalability is observed up to 50-100 nodes, i.e. 1000-2000 cores; this scalability can be improved by increasing the number of histories. To overcome the memory bottleneck, a special collision-based domain decomposition (CDD) scheme has been developed and implemented in the SERPENT2 code [11]. The main idea behind CDD is to partition the model in a way similar to data decomposition (DD) and to use a tracking method similar to the one used in spatial domain decomposition (SDD). Following this approach, memory-intensive materials are split among MPI tasks, enabling the memory demand to be divided among the nodes of a high-performance computer (Figure 2). A new depletion algorithm has also been developed, which automates the selection of the time step length as well as the number of criticality cycles to be run at each time step [12]. Moreover, to optimize Monte Carlo criticality calculations, a source convergence acceleration method based on a variable neutron population size over the successive criticality cycles has been implemented in the SERPENT2 code [13]. All these improved burnup-related features have been successfully tested and verified for several models representative of both PWR and VVER reactor cores. More details can be found in companion papers at the same conference [14], [15].
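The partitioning idea behind CDD can be sketched as follows. This is a minimal illustration, not the SERPENT2 implementation; the round-robin assignment and function name are assumptions:

```python
# Minimal sketch of the idea behind collision-based domain decomposition
# (CDD): memory-intensive burnable materials are partitioned among MPI
# tasks so that each node only stores its share of the depletion data.
def partition_materials(material_ids, n_tasks):
    """Assign each burnable material to one MPI task (round-robin)."""
    return {m: i % n_tasks for i, m in enumerate(material_ids)}

# e.g. the 9556 burnable zones of the mini-core split over 8 tasks:
owners = partition_materials(range(9556), 8)
per_task = [sum(1 for t in owners.values() if t == k) for k in range(8)]
print(per_task)  # prints [1195, 1195, 1195, 1195, 1194, 1194, 1194, 1194]
# A particle colliding in a material owned by another task is handed over
# to that task for cross section lookup and scoring, as in SDD tracking.
```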

MULTI-PHYSICS CODE COUPLING
Two coupling approaches are pursued within McSAFE: a master-slave approach and an object-oriented approach based on the ICoCo specifications. For the master-slave approach, SCF is modularized and integrated into SERPENT2 as a dynamic library; SERPENT2 is the master and SCF is the slave. In the object-oriented approach, the involved codes are modularized and implemented as shared libraries. Each solver is then wrapped in a corresponding C++ class derived from a common C++ base class. The latter defines in a standardized manner the calculation, the methods for the feedback exchange, and the handling of the input and output variables. A C++ supervisor program controls the execution of both codes and the data exchange between the domains. A key aspect of the ICoCo-based coupling is the need to modularize each solver into different tasks, e.g. initialization, steady-state solver, transient solver, termination of the calculation, exchange of feedbacks, etc., so that each solver becomes an independent module for which an API following the ICoCo specifications is generated. In this approach, a C++ base class is defined to represent a "generic problem" with multi-physics functionalities [16]. The data exchange between the involved solvers is done using the MEDCoupling library, which is based on unstructured meshes and associated fields and includes 1D, 2D and 3D interpolation methods between the meshes of the different solvers [17]. Therefore, for an ICoCo-based coupling each solver must have its own mesh, where e.g. the feedback parameters are stored in fields attached to the mesh. In Figure 3, the different solvers (N, TH, TM) coupled using ICoCo are shown. For the development of SCF input decks for fuel-assembly- and subchannel-level simulations of very large problems, a novel pre-processor was developed for cores consisting of both square and hexagonal FAs.
Besides, the pre-processor includes several other capabilities such as subchannel merging (for core-level models), the generation of mapping files between codes, and the indexing of channels and rods for SCF following the SERPENT2 ordering. At present, three coupled code versions have been developed within McSAFE, namely SERPENT2/SCF, TRIPOLI/SCF and MONK/SCF. In the case of SERPENT2/SCF, both coupling approaches mentioned above are implemented. Moreover, the fuel performance solver TRANSURANUS (TU) has been integrated into the two coupling approaches of SERPENT2/SCF [18], yielding the coupled code SERPENT2/SCF/TU [15]. In this approach, the fuel rod model of SCF is replaced by the TRANSURANUS solver. A first test of SERPENT2/SCF/TU was performed by solving a 360-day depletion calculation of a VVER-1000 FA at pin/subchannel level. In Figure 4, a comparison of the depletion calculations carried out with SERPENT2/SCF/TU is shown.
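The ICoCo-style coupling pattern described above can be sketched schematically. The following Python analogue is illustrative only: the class and method names do not reproduce the actual ICoCo C++ API, and the feedback model is a deliberately crude placeholder:

```python
# Schematic analogue of the ICoCo pattern: each solver is wrapped in a class
# derived from a common "generic problem" base class, and a supervisor drives
# the execution and the feedback exchange. All names here are illustrative.
class Problem:
    def initialize(self): ...
    def solve_step(self): ...
    def get_field(self, name): ...
    def set_field(self, name, field): ...

class NeutronicsSolver(Problem):
    def initialize(self): self.power = [1.0, 1.0]          # relative pin powers
    def solve_step(self): self.power = [p * 1.01 for p in self.power]
    def get_field(self, name): return self.power

class ThermalHydraulicsSolver(Problem):
    def initialize(self): self.fuel_temp = [600.0, 600.0]  # K
    def set_field(self, name, field):
        # crude stand-in for the TH feedback: temperature follows power
        self.fuel_temp = [550.0 + 50.0 * p for p in field]
    def get_field(self, name): return self.fuel_temp

# Supervisor: fixed-point (Picard) iteration over the two domains
neutronics, th = NeutronicsSolver(), ThermalHydraulicsSolver()
neutronics.initialize(); th.initialize()
for _ in range(3):
    neutronics.solve_step()
    th.set_field("power", neutronics.get_field("power"))
print(th.get_field("fuel_temp"))
```

In the real coupling, the fields exchanged by set_field/get_field live on MEDCoupling meshes, which handle the interpolation between the (generally non-matching) solver meshes.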

TIME-DEPENDENT MONTE CARLO METHODS
One of the most challenging goals of the McSAFE project is the development of dynamic versions of Monte Carlo codes such as SERPENT2, TRIPOLI-4® and MCNP6 to enable the simulation of the transient behavior of nuclear reactors. A major scientific challenge for time-dependent Monte Carlo transport simulations is posed by the very different time scales of prompt neutrons and delayed neutron precursors, which demand distinct strategies and variance reduction techniques compared to stationary simulations. Additionally, the delayed neutron fraction is very small, which might lead to serious underprediction biases. Consequently, variance reduction techniques based on stratified sampling in time and importance sampling between the neutron and precursor populations were developed. Furthermore, during time-dependent simulations one must prevent the neutron and precursor populations from dying out or growing unbounded. For this purpose, population control methods such as combing and Russian roulette/splitting [21] have been implemented in the TRIPOLI-4® code. In Figure 5, first results of the neutron flux change after a control rod extraction in the SPERT III E core as predicted by TRIPOLI-4® and by point kinetics are shown. In addition, in order to evaluate the dynamic capability of the codes under development, a mini-core consisting of nine fuel assemblies derived from the TMI-1 benchmark was defined for a code-to-code comparison within the McSAFE project. Different reactivity excursion scenarios were defined for the CR movement starting from equilibrium critical conditions. The particle source for the kinetic simulations is prepared by running a criticality calculation and appropriately sampling the neutron and precursor populations. A case was studied where, starting from the critical configuration, the control rods are extracted and later reinserted into the core.
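The combing population-control step mentioned above can be sketched in a few lines. This is a minimal generic illustration of the technique (systematic resampling to a fixed population size), not the TRIPOLI-4® implementation:

```python
import random

# Minimal sketch of particle "combing": N weighted particles are resampled
# into exactly m particles of equal weight, conserving the total weight,
# so the population can neither die out nor grow unbounded.
def comb(particles, m):
    """particles: list of (weight, state); returns m equal-weight particles."""
    total = sum(w for w, _ in particles)
    spacing = total / m
    offset = random.uniform(0.0, spacing)   # random offset of the comb teeth
    out, cum, i = [], 0.0, 0
    for w, state in particles:
        cum += w                            # running cumulative weight
        while i < m and offset + i * spacing < cum:
            out.append((spacing, state))    # each tooth selects one particle
            i += 1
    return out

random.seed(0)
pop = [(random.uniform(0.1, 2.0), k) for k in range(50)]
combed = comb(pop, 20)
assert len(combed) == 20                    # fixed population size
assert abs(sum(w for w, _ in combed) - sum(w for w, _ in pop)) < 1e-9
```

Heavy particles spanning several comb teeth are split; light particles falling between teeth are removed, which is the unbiased analogue of Russian roulette/splitting in this setting.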
The time scale was selected in such a way that both neutrons and precursors play an important role in determining the response of the mini-core. In Figure 6a, the mini-core cross section is shown, while in Figure 6b, the power change after the CR movements as predicted by SERPENT2 and TRIPOLI is shown and compared with a point kinetics solution [22]. The agreement between the power evolutions predicted by the two dynamic code versions for this scenario is excellent.
Figure 6: Cross section of the TMI-1 mini-core (a) and the power change for a control rod extraction of the central fuel assembly (b), predicted by SERPENT2 and TRIPOLI and compared to a point kinetics solution.
The development of the dynamic version of MCNP6 is close to being finalized after extensive modifications of the source code and the implementation of new routines to describe the CR movements, the behavior of prompt neutrons and precursors, and the start of a time-dependent calculation from a critical reactor state. First results for a pin cluster from which a CR is extracted have been obtained. Details on the advances of the dynamic capability of TRIPOLI are given in a companion paper at the same conference [23].
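For orientation, the kind of point kinetics reference solution the dynamic Monte Carlo results are compared against in Figures 5 and 6 can be sketched with a one-delayed-group model. The kinetic parameters below are generic illustrative values, not those of the SPERT III E core or the TMI-1 mini-core:

```python
# Hedged sketch of one-delayed-group point kinetics: n(t) is the neutron
# density, c(t) the precursor density, rho a step reactivity insertion.
# Parameter values (beta, lam, gen_time) are illustrative assumptions.
def point_kinetics(rho, beta=0.0065, lam=0.08, gen_time=2e-5,
                   t_end=1.0, dt=1e-6):
    """Integrate n'(t), c'(t) for a step reactivity rho; returns n(t_end)."""
    n = 1.0
    c = beta / (lam * gen_time)             # precursor level at critical state
    for _ in range(round(t_end / dt)):
        dn = ((rho - beta) / gen_time) * n + lam * c
        dc = (beta / gen_time) * n - lam * c
        n, c = n + dt * dn, c + dt * dc     # explicit Euler step
    return n

# A +0.1 $ step (rho = 0.1 * beta) gives a prompt jump to roughly
# beta / (beta - rho), followed by a slow rise on the precursor time scale.
print(point_kinetics(0.1 * 0.0065))
```

The very different magnitudes of the prompt term (1/gen_time) and the delayed term (lam) in these equations are exactly the time-scale disparity that makes the Monte Carlo treatment of transients challenging.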

VALIDATION
After the development phase of the methods and tools and the subsequent testing phase, the focus is currently on the verification and validation activities, which are of paramount importance to demonstrate the prediction capabilities of the developed tools for industry-relevant problems. For this purpose, benchmark problems including mini-cores built of square and hexagonal fuel assemblies were defined for analysis with the Monte Carlo based multi-physics codes (N/TH: SERPENT2/SCF; N/TH/TM: SERPENT2/SCF/TU). The depletion capabilities of the tools will be validated using plant data of a PWR Konvoi and of a VVER-1000. Finally, the validation of the dynamic MC codes will be performed using data from the SPERT III E REA experiments.

CONCLUSIONS
After twenty-four months of intensive investigations, it can be concluded that the McSAFE project is progressing as scheduled and that novel and unique developments have been performed which pave the way for the application of Monte Carlo based multi-physics codes to industry needs, e.g. transient analysis using high fidelity tools. On the one hand, the envisaged developments will permit the prediction of important local safety parameters (pin/subchannel level) with less conservatism than current state-of-the-art methods. On the other hand, they will make possible an increase in the performance and operational flexibility of nuclear reactors. Finally, these validated high fidelity tools are well suited to provide reference solutions for lower-order deterministic codes in cases where detailed experimental data are not available. Because of the flexibility of Monte Carlo codes regarding geometry, the McSAFE tools are applicable to the analysis of different types of innovative reactors, including research reactors.