Lawrence Berkeley National Laboratory » Earth Sciences Division

TOUGH2-MP Software

Summary

[Image: TOUGH2-MP model of a geothermal reservoir]

TOUGH2-MP is a massively parallel version of TOUGH2. It was developed for distributed-memory parallel computers to solve large simulation problems that cannot be handled by the standard, single-CPU TOUGH2 code. TOUGH2-MP implements an efficient massively parallel scheme while preserving the full capability and flexibility of the original TOUGH2 code. It uses the METIS software package for grid partitioning and the AZTEC package for solving linear equations, and it adopts the standard MPI message-passing interface for communication among processors. The parallel code has been successfully applied, on platforms ranging from multi-core PCs to supercomputers, to real field problems with multi-million-cell grids, simulating three-dimensional multiphase, multicomponent fluid and heat flow as well as solute transport.

Features & Capabilities

In performing a parallel simulation, the TOUGH2-MP code first subdivides the simulation domain, defined by an unstructured TOUGH2 mesh, into a number of subdomains using the partitioning algorithms of the METIS software package (special installation instructions for METIS Version 5 can be found in the User Forum). The parallel code then relies on MPI (Message-Passing Interface) for its parallel implementation. Parallel simulations run as multiple processes on a few or many processors simultaneously.
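As a rough illustration of the partitioning step, the sketch below splits a small connectivity graph into balanced subdomains using a simple breadth-first strategy. This is a toy stand-in only: TOUGH2-MP uses METIS's multilevel k-way algorithm, which also minimizes the number of cut connections between subdomains.

```python
from collections import deque

def bfs_partition(adjacency, nparts):
    """Split a connectivity graph into nparts roughly equal, mostly
    contiguous subdomains by slicing a breadth-first ordering.
    (Illustration only; METIS does this far better.)"""
    n = len(adjacency)
    target = n / nparts
    part = [-1] * n
    order, seen = [], set()
    for start in range(n):              # BFS over all components
        if start in seen:
            continue
        queue = deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    for i, v in enumerate(order):       # slice the ordering into nparts pieces
        part[v] = min(int(i / target), nparts - 1)
    return part

# 2x4 structured grid flattened to 8 cells with 4-point connectivity
adj = {0: [1, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [2, 7],
       4: [0, 5], 5: [1, 4, 6], 6: [2, 5, 7], 7: [3, 6]}
print(bfs_partition(adj, 2))
```

The resulting `part` array assigns each gridblock to a subdomain; in a parallel run, each subdomain is then owned by one processor.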

For a typical simulation with the fully implicit scheme and Newton iteration, as in a standard TOUGH2 run, the most time-consuming steps of the execution consist of three parts:

(1) updating thermophysical parameters,

(2) assembling the Jacobian matrix, and

(3) solving the linearized system of equations.
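Schematically, one fully implicit time step loops over these three parts inside the Newton iteration. The Python sketch below is illustrative, not TOUGH2-MP's actual (Fortran) code; the two-equation toy system stands in for the discretized balance equations:

```python
import numpy as np

def implicit_step(residual, jacobian, x0, tol=1e-10, max_newton=20):
    """One fully implicit time step via Newton iteration. Each pass
    runs through the three costly phases listed above; in TOUGH2-MP
    each phase executes concurrently on every subdomain."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        # (1) update thermophysical parameters from the current x
        #     (folded here into evaluating residual/jacobian)
        r = residual(x)                  # (2) assemble the residual ...
        if np.linalg.norm(r) < tol:
            break
        J = jacobian(x)                  # ... and the Jacobian matrix
        dx = np.linalg.solve(J, -r)      # (3) solve the linearized system
        x = x + dx                       #     (a parallel Krylov solve in TOUGH2-MP)
    return x

# toy stand-in for two coupled balance equations
res = lambda x: np.array([x[0]**2 + x[1]**2 - 5.0, x[0] * x[1] - 2.0])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
print(implicit_step(res, jac, [2.5, 0.5]))
```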

Consequently, one of the most important aims of a parallel simulation is to distribute the computational work of these three parts across processors. In addition, a parallel scheme must take into account domain decomposition, grid node/element reordering, optimization of data input and output, and efficient message exchange between processors. Each process/processor is in charge of one portion of the simulation domain, updating thermophysical properties, assembling mass- and energy-balance equations, solving linear equation systems, and performing other local computations. The local linear equation systems are solved in parallel by multiple processors with the AZTEC linear solver package. AZTEC includes a number of Krylov iterative methods, such as conjugate gradient (CG), generalized minimum residual (GMRES), and stabilized biconjugate gradient (BiCGSTAB). Although each processor solves the linearized equations of its subdomain independently, the entire linear equation system is solved collaboratively by all processors, via communication between neighboring processors, during each Newton iteration step.
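For reference, the simplest of these Krylov methods, conjugate gradient, fits in a few lines. The sketch below is a serial, unpreconditioned textbook version in Python; AZTEC's distributed implementations add preconditioning and exchange matrix-vector products between processors:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for a symmetric positive-definite A,
    the simplest of the Krylov methods named above. AZTEC's version
    distributes A, x, and b across processors and preconditions."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                    # initial residual
    p = r.copy()                     # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # next A-conjugate search direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))      # solves A x = b
```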

The numerical scheme of the TOUGH2 code is based on the integral finite-difference (IFD) method. In the TOUGH2 formulation, conservation equations, involving the mass of air, water, and chemical components as well as thermal energy, are discretized in space using the IFD method. Time is discretized fully implicitly using a first-order backward finite-difference scheme. The resulting discrete equations for mass and energy balances are nonlinear and are solved simultaneously using the Newton-Raphson iterative scheme. TOUGH2-MP adopts all of these numerical schemes. The parallel code also inherits all the process capabilities of the TOUGH2 code, including descriptions of the thermodynamics and thermophysical properties of the multiphase flow system.
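The time discretization can be illustrated on a single nonlinear conservation equation. The sketch below (an illustration in Python, not TOUGH2 code) applies first-order backward differencing to dM/dt = -k*M^2, solving each implicit step with Newton-Raphson:

```python
def backward_euler(m0, k, dt, nsteps, tol=1e-12):
    """Fully implicit (backward Euler) stepping for dM/dt = -k*M^2.
    Each step solves R(x) = x - M^n + dt*k*x^2 = 0 for x = M^(n+1)
    by Newton-Raphson, mirroring TOUGH2's first-order implicit
    time discretization on a scalar toy problem."""
    m = m0
    for _ in range(nsteps):
        m_prev = m
        x = m_prev                               # initial Newton guess
        for _ in range(50):
            r = x - m_prev + dt * k * x * x      # implicit residual R(x)
            if abs(r) < tol:
                break
            x -= r / (1.0 + 2.0 * dt * k * x)    # Newton update with dR/dx
        m = x
    return m

# exact solution is M(t) = M0/(1 + k*M0*t), i.e. 0.5 at t = 1;
# the first-order implicit result lands close to it
print(backward_euler(m0=1.0, k=1.0, dt=0.01, nsteps=100))
```

Because the scheme is fully implicit, each step remains stable even for large dt; accuracy, not stability, limits the step size.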

TOUGH2-MP has been tested on IBM and Cray supercomputers, Linux clusters, Macs, and multi-core PCs under different operating systems. The parallelization of TOUGH2 significantly improves modeling capabilities in terms of problem size and simulation time, and the code demonstrates excellent scalability. Test examples show that linear or super-linear speedup can be obtained on typical Linux clusters as well as on supercomputers. Using the parallel simulator, multi-million-gridblock problems can be run on a typical Linux cluster with several tens to hundreds of processors, achieving a ten- to hundred-fold improvement in computational time or problem size. The growing availability of multi-core CPUs will make parallel processing on PCs far more attractive.

The current version of TOUGH2-MP includes the following modules: EOS1, EOS2, EOS3, EOS4, EOS5, EOS7, EOS7R, EOS8, EOS9, ECO2N, EWASG, T2R3D, TMVOC, and TOUGH+HYDRATE.

Applications

See TOUGH2-MP User's Guide.

Licensing & Download

See the price list of available TOUGH2-MP modules.

Documentation

TOUGH2-MP User's Guide