@@@
it2help.txt: Last modification: May 24, 2000
@@@
> COMMAND INDEX
Command Index
=============
> PARAMETER
>> ABSOLUTE PERMEABILITY
>> BOTTOMHOLE PRESSURE
>> CAPACITY
>> CAPILLARY PRESSURE FUNCTION
>> COMPRESSIBILITY
>> CONDUCTIVITY (WET/DRY)
>> DRIFT
>> ENTHALPY
>> FACTOR
>> FORCHHEIMER
>> GUESS (FILE: file_name)
>> IFS
>> INITIAL (PRESSURE/: ipv)
>> KLINKENBERG
>> LAG
>> LIST
>> MINC
>> PARALLEL PLATE
>> POROSITY
>> PRODUCTIVITY INDEX
>> PUMPING RATIO
>> RATE
>> RELATIVE PERMEABILITY FUNCTION
>> SCALE
>> SELEC
>> SHIFT
>> SKIN
>> TIME
>> USER (: anno)
>>> DEFAULT
>>> LIST
>>> MATERIAL: mat_name (mat_name_i...) (+ iplus)
>>> MODEL
>>> NONE
>>> ROCK: mat_name (mat_name_i...) (+ iplus)
>>> SET: iset
>>> SINK: sink_name (sink_name_i ...) (+ iplus)
>>> SOURCE: source_name (source_name_i ...) (+ iplus)
>>>> ANNOTATION: anno
>>>> BOUND: lower upper
>>>> DEVIATION: sigma
>>>> FACTOR
>>>> GAUSS
>>>> GUESS: guess
>>>> INDEX: index (index_i ...)
>>>> LIST
>>>> LOGARITHM
>>>> LOG(F)
>>>> NORMAL
>>>> PARAMETER: index (index_i ...)
>>>> PERTURB: (-)alpha (%)
>>>> PRIOR: prior_info
>>>> RANGE: lower upper
>>>> STEP: max_step
>>>> UNIFORM
>>>> VALUE
>>>> VARIANCE: sigma^2
>>>> VARIATION: sigma
>>>> WEIGHT: 1/sigma
> OBSERVATION
>> CONCENTRATION (comp_name/COMPONENT: icomp)
(phase_name/PHASE: iphase)
>> CONTENT (phase_name/PHASE: iphase)
>> COVARIANCE (FILE: filename)
>> CUMULATIVE (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
>> DRAWDOWN (phase_name/PHASE: iphase)
>> ENTHALPY (phase_name/PHASE: iphase)
>> FLOW (phase_name/PHASE: iphase/HEAT)
>> GENERATION (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
>> LIST
>> MASS FRACTION (comp_name/COMPONENT: icomp)
(phase_name/PHASE: iphase)
>> MOMENT (FIRST/SECOND) (X/Y/Z) (comp_name/COMPONENT: icomp)
(phase_name/PHASE: iphase)
>> PRESSURE (CAPILLARY) (phase_name/PHASE: iphase)
>> PRODUCTION (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
>> RESTART TIME: ntime (time_unit) (NEW)
>> SATURATION (phase_name/PHASE: iphase)
>> SECONDARY (phase_name/PHASE: iphase) (: ipar)
>> TEMPERATURE
>> TIME: ntime (EQUAL/LOGARITHMIC) (time_unit)
>> TOTAL MASS (comp_name/COMPONENT: icomp)
(phase_name/PHASE: iphase) (CHANGE)
>> USER (: anno)
>> VOLUME (phase_name/PHASE: iphase) (CHANGE)
>>> CONNECTION: elem1 elem2 (elem_i elem_j ...) (++/+-/-+ iplus)
>>> ELEMENT: elem (elem_i ...) (+ iplus)
>>> GRID BLOCK: elem (elem_i ...) (+ iplus)
>>> INTERFACE: elem1 elem2 (elem_i elem_j ...) (++/+-/-+ iplus)
>>> LIST
>>> MODEL
>>> NONE
>>> SINK: sink_name (sink_namei ...) (+ iplus)
>>> SOURCE: source_name (source_namei ...) (+ iplus)
>>>> ABSOLUTE
>>>> ANNOTATION: anno
>>>> AUTO
>>>> AVERAGE (VOLUME)
>>>> COLUMN: itime idata (istd_dev)
>>>> COMPONENT comp_name/: icomp
>>>> DATA (time_unit) (FILE: file_name)
>>>> DEVIATION: sigma
>>>> FACTOR: factor
>>>> FORMAT: format
>>>> HEADER: nskip
>>>> INDEX: index (index_i ...)
>>>> LIST
>>>> LOGARITHM
>>>> MEAN (VOLUME)
>>>> PARAMETER: index (index_i ...)
>>>> PHASE phase_name/: iphase
>>>> PICK: npick
>>>> POLYNOM: idegree (time_unit)
>>>> RELATIVE: rel_err (%)
>>>> SET: iset
>>>> SHIFT: shift (TIME (time_unit))
>>>> SKIP: nskip
>>>> SUM
>>>> USER
>>>> VARIANCE: sigma^2
>>>> WEIGHT: 1/sigma
>>>> WINDOW: time_A time_B (time_unit)
> COMPUTATION
>> CONVERGE/STOP/TOLERANCE
>>> ADJUST
>>> CONSECUTIVE: max_iter1
>>> DELTFACT: deltfact
>>> DIRECT
>>> FORWARD
>>> INCOMPLETE: max_incomplete
>>> INPUT
>>> ITERATION: max_iter
>>> LEVENBERG: lambda
>>> LIST
>>> MARQUARDT: nue
>>> REDUCTION: max_red
>>> SIMULATION: mtough2
>>> STEP (UNSCALED): max_step
>>> UPHILL: max_uphill
>>> WARNING
>> ERROR
>>> ALPHA: alpha (%)
>>> EMPIRICAL ORTHOGONAL FUNCTIONS (MATRIX: ndim) (iTOUGH2) (CORRELATION)
>>> EOF (MATRIX: ndim) (iTOUGH2) (CORRELATION)
>>> FISHER
>>> FOSM (MATRIX: ndim) (iTOUGH2) (CORRELATION) (DIAGONAL)
>>> HESSIAN
>>> LATIN HYPERCUBE (MATRIX: ndim) (iTOUGH2) (CORRELATION/COVARIANCE)
>>> LINEARITY (: alpha (%))
>>> LIST
>>> MONTE CARLO (SEED: iseed) (GENERATE) (CLASS: nclass)
>>> POSTERIORI
>>> PRIORI
>>> TAU: (-)niter
>> JACOBIAN
>>> CENTER
>>> FORWARD (: iswitch)
>>> HESSIAN
>>> LIST
>>> PERTURB : (-)perturb (%)
>> OPTION
>>> ANDREW: c
>>> ANNEAL
>>>> ITERATION: max_iter
>>>> LIST
>>>> SCHEDULE: beta
>>>> STEP: max_step
>>>> TEMPERATURE: (-)temp0
>>> CAUCHY
>>> DESIGN
>>> DIRECT
>>> FORWARD
>>> GRID SEARCH (UNSORTED) (: ninval1 (ninval2 (inval3)) / FILE: filename)
>>> GAUSS-NEWTON
>>> L1-ESTIMATOR
>>> LEAST-SQUARE
>>> LEVENBERG-MARQUARDT
>>> LIST
>>> OBJECTIVE (UNSORTED) (: ninval1 (ninval2 (inval3)) / FILE: filename)
>>> QUADRATIC-LINEAR: c
>>> SELECT
>>>> CORRELATION: (-)rcorr
>>>> ITERATION: niter
>>>> LIST
>>>> SENSITIVITY: (-)rsens
>>> SIMPLEX
>>> SENSITIVITY
>>> STEADY-STATE (SAVE) (: max_time_step)
>> OUTPUT
>>> BENCHMARK
>>> CHARACTERISTIC
>>> COVARIANCE
>>> FORMAT: format (LIST)
>>> INDEX
>>> LIST
>>> JACOBIAN
>>> NEW OUTPUT
>>> OBJECTIVE
>>> PERFORMANCE
>>> PLOTFILE: format (LIST)
>>> PLOTTING: niter
>>> SENSITIVITY
>>> time_unit
>>> UPDATE
>>> RESIDUAL
>>> VERSION
@@@
> VERSION UPDATE
iTOUGH2 Updates
===============
Version 3.0: July 1996
++++++++++++++++++++++
Reference:
Finsterle, S., K. Pruess, and P. Fraser, iTOUGH2 Software Qualification,
Lawrence Berkeley National Laboratory, LBNL-39489, Berkeley, Calif., 1996.
Date: 010397
Comment: A UNIX script file "it2help" is now available for online
printing of iTOUGH2 manual pages.
Version 3.1: April 1997
+++++++++++++++++++++++
References:
Finsterle, S., iTOUGH2 Command Reference, Version 3.1, Report LBNL-40041,
Lawrence Berkeley National Laboratory, Berkeley, Calif., 1997.
Finsterle, S., iTOUGH2 Sample Problems, Report LBNL-40042,
Lawrence Berkeley National Laboratory, Berkeley, Calif., 1997.
Version 3.2: July 1998
++++++++++++++++++++++
Reference:
Finsterle, S., iTOUGH2 V3.2 Verification and Validation Report, Report LBNL-42002,
Lawrence Berkeley National Laboratory, Berkeley, Calif., 1997.
Version 3.3: October 1998
+++++++++++++++++++++++++
Reference:
Finsterle, S., Parallelization of iTOUGH2 Using PVM, Report LBNL-42261,
Lawrence Berkeley National Laboratory, Berkeley, Calif., 1997.
Version 4.0: January 1999
+++++++++++++++++++++++++
Released to Energy Science and Technology Software Center
Version 5.0: July 2002
++++++++++++++++++++++
Qualified (again!) for use within Yucca Mountain Project
@@@
Syntax: >> ABSOLUTE
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter the absolute permeability (TOUGH2 variable
PER(ISOT,NMAT)) of the specified material.
Subcommand >>>> LOGARITHM invokes estimation of a single log-value which is
assigned to all selected materials;
subcommand >>>> FACTOR invokes estimation of a common multiplication factor
which is applied to all selected permeabilities, thus maintaining the
permeability ratios between the materials;
subcommand >>>> LOG(F) should be used to estimate a lognormally distributed
factor with which all the permeabilities are multiplied.
By default, the estimate refers to all three flow directions (ISOT=1, 2, 3).
Subcommand >>>> INDEX can be used to select the permeability of a specific
flow direction ISOT.
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> MATERIAL: BOREH SKIN_
>>>> LOGARITHM
<<<<
>>> MATERIAL: SAND1 SAND2
>>>> FACTOR
>>>> INDEX: 1 2 (horizontal permeability)
<<<<
>>> MATERIAL: SAND1 SAND2
>>>> FACTOR
>>>> INDEX: 3 (vertical permeability)
<<<<
<<<
<<
See Also:

@@@
Syntax: >>>> ABSOLUTE
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command takes the absolute value of the calculated system response as
the model output to be compared to the observed data.
Example:
> OBSERVATION
>> CAPILLARY PRESSURE
>>> ELEMENT: ELM99
>>>> take ABSOLUTE value and compare to ...
>>>> ... positive DATA stored on FILE: pcap.dat
>>>> the RELATIVE error is: 5 %
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> ADJUST
Parent Command:
>> CONVERGE
Subcommand:

Description:
The initial time step size is provided through TOUGH2 variable DELTEN or
DLT(1), usually followed by automatic time stepping (see TOUGH2 variables
MOP(16) and REDLT). The initial time step may be too big (i.e., is
automatically reduced by TOUGH2) or too small (i.e., convergence is achieved
within one Newton-Raphson iteration), depending on the parameter set supplied
by iTOUGH2 during an inversion. This command allows iTOUGH2 to automatically
adjust the initial time step size so that convergence is achieved within more
than 1 but less than MOP(16) Newton-Raphson iterations.
Automatic time step adjustment may improve the speed of an inversion, but may
also make the inversion unstable if time discretization errors are significant.
Example:
> COMPUTATION
>> CONVERGE
>>> automatically ADJUST initial timestep DELTEN
<<<
<<
See Also:
>>> CONSECUTIVE, >>> REDUCTION
@@@
Syntax: >>> ALPHA: alpha (%)
Parent Command:
>> ERROR
Subcommand:

Description:
alpha is a probability used for a variety of statistical tests within iTOUGH2.
The choice of alpha does not affect the estimated parameter set, but is used
in the residual analysis, the Fisher model test, the analysis of the resulting
distribution when performing Monte Carlo simulations, and the width of the
error band when performing FOSM uncertainty propagation analysis.
alpha is expected to assume values between 0.001 and 0.200 (default: 0.01).
Instead of the risk alpha, one can also provide the confidence level (1-alpha),
in which case alpha assumes values between 0.8 and 0.999.
Use keyword % if alpha is given in percent.
Example:
> COMPUTATION
>> STOP
>>> number of TOUGH2 SIMULATIONS: 200
<<<
>> ERROR propagation analysis
>>> MONTE CARLO simulations
>>> print quantile for risk ALPHA =: 5.0 %
<<<
<<
See Also:

@@@
Syntax: >>> ANDREW: c
Parent Command:
>> OPTION
Subcommand:

Description:
This command selects the robust estimator known as the Andrews estimator.
Given this estimator, the objective function to be minimized
is the sum of the cosine functions g(y_i), where y_i is the weighted residual:
S = Sum(g(y_i)), i=1,...,m
where:
          | 1 - cos(y/c)   |y| <  c*Pi
g(y_i) = <
          | 1              |y| >= c*Pi
with:
y_i = r_i/sigma_i
This objective function does not correspond to a standard probability density
function. It has the general characteristic that the weight given individual
residuals first increases with deviation, then decreases to reduce the impact
of outliers. The parameter c indicates the deviation at which residuals are
considered to be outliers. If the measurement errors happen to be close to
a normal distribution with standard deviation sigma, then the optimal value
for the constant c is c=2.1.
Note that this objective function can be minimized using the standard
Levenberg-Marquardt algorithm which is designed for a quadratic objective
function. Since (1-cosine) can be reasonably well approximated by a quadratic
function for small g, the Levenberg-Marquardt algorithm is usually quite
efficient.
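For illustration only (this is not an iTOUGH2 command), the objective function
defined above can be sketched in Python; the function name and arguments are
hypothetical:

```python
import math

def andrews_objective(residuals, sigmas, c=2.1):
    # Sum of Andrews' g(y_i) over all m residuals, using the piecewise
    # definition given above.
    s = 0.0
    for r, sigma in zip(residuals, sigmas):
        y = r / sigma                    # weighted residual y_i
        if abs(y) < c * math.pi:
            s += 1.0 - math.cos(y / c)   # smooth, near-quadratic for small y
        else:
            s += 1.0                     # outlier: constant contribution
    return s
```

For residuals beyond c*Pi standard deviations the contribution no longer grows,
which is what limits the influence of outliers.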
Example:
> COMPUTATION
>> OPTION
>>> use the robust estimator ANDREW with a constant c : 1.5
<<<
<<
See Also:
>>> CAUCHY, >>> L1-ESTIMATOR, >>> LEAST-SQUARES, >>> QUADRATIC-LINEAR
@@@
Syntax: >>> ANNEAL
Parent Command:
>> OPTION
Subcommand:
>>>> ITERATION
>>>> LIST
>>>> SCHEDULE
>>>> STEP
>>>> TEMPERATURE
Description:
This command invokes Simulated Annealing to minimize the objective function S.
The following steps are performed by iTOUGH2, controlled by a number of
fourth-level commands:
(1) Define the range of possible parameter values using command >>>> RANGE
in block > PARAMETER.
(2) Define an initial value of the control parameter tau using command
>>>> TEMPERATURE.
(3) iTOUGH2 generates random perturbations delta(p) of the parameter vector p.
The probability density function of the perturbation is either Gaussian
or uniform; the initial standard deviations of these distributions are
given by command >>>> DEVIATION (p).
(4) The objective function S(p_(k+1)) for the new parameter set
p_(k+1) = p_k + delta(p) is evaluated.
(5) If the objective function is decreased (i.e.,
delta(S) = S(p_(k+1)) - S(p_k) < 0), the change is retained.
If the objective function is increased (i.e., delta(S) > 0),
the perturbation is accepted with probability P = exp(-delta(S)/tau).
(6) After a sufficient number of perturbations have been accepted
(see command >>>>STEP (a)), tau is lowered according to the annealing
schedule (see command >>>> SCHEDULE).
(7) Steps (3) through (6) are repeated until the maximum number of
temperature reductions (see command >>>> ITERATION (a)) is reached.
This scheme of always taking a downhill step and sometimes taking an uphill
step with probability P depending on tau is known as the Metropolis algorithm.
Simulated Annealing may be especially useful for the minimization of a
discontinuous cost function in order to optimize operational parameters.
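The acceptance rule in steps (4) through (6) can be sketched as follows
(illustrative Python, not part of iTOUGH2; all names are hypothetical):

```python
import math
import random

def metropolis_accept(delta_S, tau, rng=random.random):
    # Downhill steps (delta_S < 0) are always accepted; uphill steps
    # are accepted with probability P = exp(-delta_S/tau).
    if delta_S < 0.0:
        return True
    return rng() < math.exp(-delta_S / tau)
```

Lowering tau according to the annealing schedule makes uphill steps
progressively less likely, so the search gradually freezes into a minimum.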
Example:
> PARAMETER
>> pumping RATE
>>> SINK: EXT_1
>>>> RANGE: 1E1 1E4
>>>> LOGARITHM
<<<<
<<<
<<
> OBSERVATION
>> TIME: 1 [YEARS]
2.0
>> USER specified cost function: Extraction cost
>>> SINK: EXT_1
>>>> NO DATA
>>>> WEIGHT (=specific costs): 1.0
<<<<
<<<
<<
> COMPUTATION
>> OPTION
>>> a cost function is minimized using L1-ESTIMATOR
>>> perform minimization using Simulated ANNEALing
>>>> initial TEMPERATURE : 0.05 (=5 % of initial cost)
>>>> update after maximum : 100 STEPS
>>>> annealing SCHEDULE: 0.95
>>>> Simulated Annealing ITERATIONS: 50
<<<<
<<<
<<
See Also:
>>> GAUSS-NEWTON, >>> GRID SEARCH, >>> LEVENBERG-MARQUARDT, >>>> ITERATION (a),
>>>> SCHEDULE, >>>> STEP (a), >>>> TEMPERATURE
@@@
Syntax: >>>> ANNOTATION: anno
Parent Command:
all third-level commands in blocks > PARAMETER and > OBSERVATION
Subcommand:

Description:
A fifteen-character string anno can be provided to label parameters and
observations in the iTOUGH2 output file. The annotation does not have any
function except for making the iTOUGH2 output more readable (exceptions are
the user-specified functions; see commands >> USER (p,o)). If no annotation
is provided, iTOUGH2 internally generates a string which allows unique
identification of the parameter or observation, respectively. The internally
generated annotation can be used to check the correctness of the iTOUGH2 input.
Example:
> PARAMETER
>> van Genuchten's CAPILLARY pressure function
>>> ROCK TYPE : MATRI
>>>> ANNOTATION : AIR ENTRY PRESSURE
>>>> PARAMETER no. : 2
<<<<
>>> ROCK TYPE : FRAC1
>>>> PARAMETER no. : 2
<<<<
<<<
<<
> OBSERVATION
>> CONCENTRATION of COMPONENT No.: 3 in PHASE No.: 2
>>> ELEMENT: ELM10
>>>> NO DATA and no annotation
<<<<
<<<
<<
In the iTOUGH2 output file, the first parameter is referred to as
"AIR ENTRY PRESS". The annotation internally generated for the second
parameter reads "CAP.PR.2 FRAC1", where "2" indicates that the second
parameter of the capillary pressure ("CAP.PR.") function referring to rock
type "FRAC1" is estimated. The automatically generated annotation for the
observation reads "CONC.(3,2)ELM10".
See Also:

@@@
Syntax: >>>> AUTO
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command provides automatic weighting of observations. The standard
deviation calculated by iTOUGH2 is 10 % of the mean of the observed values
for a given data set. It is suggested, however, that the assumed measurement
error or expected standard deviation of the final residuals be explicitly
provided using command >>>> DEVIATION (o).
Example:
> OBSERVATION
>> PRESSURE
>>> ELEMENT: BH__0
>>>> AUTOmatic weighting
>>>> DATA are on FILE: pres.dat
<<<<
<<<
<<
See Also:
>>>> DEVIATION (o)
@@@
Syntax: >>>> AVERAGE (VOLUME)
or
>>>> MEAN (VOLUME)
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
If multiple elements or connections are provided to indicate the location of
a measurement point, iTOUGH2 takes the average of all calculated values as
the model output to be compared to the data. The user must ensure that the
averaging of the quantity is meaningful. If keyword VOLUME is present on
the command line, the calculated values are weighted by the grid block volume.
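A minimal sketch of this averaging (illustrative Python; the function name is
hypothetical):

```python
def observation_average(values, volumes=None):
    # Plain mean of the calculated values; if grid-block volumes are
    # given (keyword VOLUME), a volume-weighted mean is returned instead.
    if volumes is None:
        return sum(values) / len(values)
    return sum(v * w for v, w in zip(values, volumes)) / sum(volumes)
```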
Example:
> OBSERVATION
>> GAS CONTENT
>>> ELEMENTS: A1__1 A1__2 A1__3 A1__4 B1__1 B1__2 B1__3 B1__4 &
C1__1 C1__2 C1__3 C1__4 D1__1 +3
>>>> ANNOTATION: Ave. Gas Content
>>>> Take VOLUME AVERAGE
>>>> NO DATA
<<<<
<<<
<<
See Also:
>>>> SUM
@@@
Syntax: >>> BENCHMARK
Parent Command:
>> OUTPUT
Subcommand:

Description:
A short benchmark calculation is performed, the CPU time used is
compared to that of a reference machine, and a printout is
generated with information about the approximate relative computer
performance.
Example:
> COMPUTATION
>> OUTPUT
>>> perform BENCHMARK calculation
<<<
On output:
Computer is faster than a SUN ULTRA1 by a factor of: 2.1
See Also:

@@@
Syntax: >> BOTTOMHOLE PRESSURE
Parent Command:
> PARAMETER
Subcommand:
>>> SOURCE
Description:
This command selects as a parameter the bottomhole pressure for wells on
deliverability (TOUGH2 variable EX). This parameter refers to a sink/source
code name. The generation type must be DELV.
Example:
> PARAMETER
>> BOTTOMHOLE PRESSURE in well on deliverability
>>> SINK: WEL_1 + 5
>>>> ANNOTATION: wellb. pres. Pwb
>>>> estimate VALUE
>>>> RANGE : 0.5E5 5.0E5 [Pa]
<<<<
<<<
<<
See Also:

@@@
Syntax: >>>> BOUND: lower upper
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
(see command >>>> RANGE)
Example:
(see command >>>> RANGE)
See Also:
>>>> RANGE
@@@
Syntax: >> CAPACITY
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter the rock grain specific heat
(TOUGH2 variable SPHT(NMAT)).
Example:
> PARAMETER
>> heat CAPACITY
>>> ROCK type : GRANI
>>>> VALUE
>>>> RANGE : 600.0 3000.0 [J/kg/C]
<<<<
<<<
<<
See Also:

@@@
Syntax: >> CAPILLARY
Parent Command:
> PARAMETER
Subcommand:
>>> DEFAULT
>>> MATERIAL
Description:
This command selects a parameter of the capillary pressure function
(TOUGH2 variable CP(IPAR,NMAT)) of a certain rock type, or a parameter of
the default capillary pressure function (TOUGH2 variable CPD(IPAR)).
Use command >>>> INDEX to select the parameter index IPAR. The physical
meaning of the parameter depends on the type of capillary pressure function
selected in the TOUGH2 input file, variables ICP and ICPD, respectively.
The admissible range should be specified explicitly to comply with parameter
restrictions (see Pruess [1987], Appendix B).
Example:
> PARAMETER
>> parameter of CAPILLARY pressure function
>>> DEFAULT
>>>> ANNOTATION : Slr
>>>> INDEX CPD(: 2)
>>>> VALUE
>>>> RANGE : 0.01 0.99
<<<<
>>> MATERIAL: SAND1 SAND2
>>>> ANNOTATION : vG alpha [Pa^-1]
>>>> INDEX no. : 3
>>>> LOGARITHM
>>>> RANGE : -5.0 -1.0
<<<<
<<<
<<
See Also:
>> RELATIVE
@@@
Syntax: >>> CAUCHY
Parent Command:
>> OPTION
Subcommand:

Description:
This command selects an objective function that corresponds to a Cauchy or
Lorentzian distribution, i.e., the probability density function of the
residuals r reads:
phi(r_i) = (1 + 0.5*(r_i/sigma_i)^2)^(-1)
This distribution exhibits more extensive tails compared to the normal
distribution, and leads therefore to a more robust estimation if outliers
are present. The objective function to be minimized is given by the
following equation:
S = Sum(log(1 + 0.5*g_i^2)), i=1,...,m
with
g_i = r_i/sigma_i
This objective function can be minimized using the standard Levenberg-Marquardt
algorithm which is designed for a quadratic objective function.
The objective function can be reasonably well approximated by a quadratic
function, so that the Levenberg-Marquardt algorithm is usually quite efficient.
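For illustration, the objective function above written out in Python
(hypothetical names; not an iTOUGH2 command):

```python
import math

def cauchy_objective(residuals, sigmas):
    # S = Sum(log(1 + 0.5*g_i^2)) with g_i = r_i/sigma_i; the logarithm
    # grows slowly, so large outliers contribute far less than in a
    # least-squares objective function.
    return sum(math.log(1.0 + 0.5 * (r / s) ** 2)
               for r, s in zip(residuals, sigmas))
```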
Example:
> COMPUTATION
>> OPTION
>>> assume measurement errors follow a CAUCHY distribution
<<<
<<
See Also:
>>> ANDREW, >>> L1-ESTIMATOR, >>> LEAST-SQUARES, >>> QUADRATIC-LINEAR
@@@
Syntax: >>> CENTERED
Parent Command:
>> JACOBIAN
Subcommand:

Description:
This command calculates elements of the Jacobian matrix by means of centered
finite difference quotients:
J_ij = dz_i/dp_j ~= (z_i(p_j + dp_j) - z_i(p_j - dp_j)) / (2*dp_j)
The evaluation of the Jacobian thus requires (2n+1) TOUGH2 simulations,
where n is the number of parameters. The size of the perturbation dp can
be controlled using command >>> PERTURB. Centered finite differences are
more accurate than forward finite differences, but computationally twice
as expensive (also see command >>> FORWARD).
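The centered quotient can be sketched as follows (illustrative Python; `model`
stands in for one forward simulation and is hypothetical):

```python
def centered_jacobian(model, p, rel_perturb=0.005):
    # model: maps a parameter list p to a list of outputs z.
    # Each column j costs two forward runs (p_j + dp_j and p_j - dp_j),
    # i.e., 2n runs in addition to the base run.
    m = len(model(p))
    J = [[0.0] * len(p) for _ in range(m)]
    for j in range(len(p)):
        dp = rel_perturb * p[j] if p[j] != 0.0 else rel_perturb
        p_plus, p_minus = list(p), list(p)
        p_plus[j] += dp
        p_minus[j] -= dp
        z_plus, z_minus = model(p_plus), model(p_minus)
        for i in range(m):
            J[i][j] = (z_plus[i] - z_minus[i]) / (2.0 * dp)
    return J
```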
Example:
> COMPUTATION
>> JACOBIAN
>>> use CENTERED finite difference quotient
>>> PERTURBation factor is : 0.005 times the parameter value
<<<
<<
See Also:
>>> FORWARD (j), >>> PERTURB
@@@
Syntax: >>> CHARACTERISTIC
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command generates a file in the format specified by command >>> FORMAT
containing the characteristic curves (relative permeability and capillary
pressure functions) of all rock types used in the TOUGH2 model.
The file name contains the string "_ch".
Example:
> COMPUTATION
>> OUTPUT
>>> generate file with CHARACTERISTIC curves
>>> in : TECPLOT FORMAT
<<<
<<
See Also:
>>> FORMAT
@@@
Syntax: >>>> COLUMN: itime idata (istd_dev)
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command identifies the column holding the time and observed value in
the data definition block (see command >>>> DATA). By default, transient
observations are expected to be provided in two columns, where the first
column contains the observation time, and the second column contains the
corresponding measurement. Deviations from this format must be indicated by
providing the numbers of the columns, itime and idata, holding time and data
information, respectively. If a third integer istd_dev is provided, an
additional column is expected holding the standard deviation of the
corresponding measurement. This allows one to specify individual standard
deviations for each data point (the subcommand >>>> DEVIATION assigns a
single standard deviation to all data points of a data set).
The columns following command >>>> DATA are read in free format. If a
column contains non-numeric characters, command >>>> FORMAT must be used.
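The column selection can be mimicked as follows (illustrative Python; this is
not how iTOUGH2 reads its input, and all names are hypothetical):

```python
def read_columns(lines, itime=1, idata=2, istd_dev=None, nskip=0):
    # Pick the time and data (and optional standard deviation) columns
    # from whitespace-delimited lines; column numbers are 1-based.
    records = []
    for line in lines[nskip:]:
        cols = line.split()
        if not cols:
            continue
        rec = (float(cols[itime - 1]), float(cols[idata - 1]))
        if istd_dev is not None:
            rec += (float(cols[istd_dev - 1]),)
        records.append(rec)
    return records
```

In this sketch only the selected columns are converted to numbers; iTOUGH2
itself requires command >>>> FORMAT when non-numeric columns are present.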
Example:
> OBSERVATION
>> CAPILLARY PRESSURE
>>> ELEMENT : A1__1
>>>> COLUMN : 2 3
>>>> skip : 3 HEADER lines
>>>> conversion FACTOR : 100.0 [hPa] -> [Pa]
>>>> DATA (time is in MINUTES)

# Time [min] Cap. Pres. [hPa] FlowmeterX Flow Rate [mg/s]

1 5.0 0.1698331055E+02 FlowmeterA 0.9869399946E+01
2 10.0 0.2075763428E+02 FlowmeterA 0.1039689596E+02
3 15.0 0.2357142822E+02 FlowmeterA 0.1162893932E+02
4 20.0 0.2529052490E+02 FlowmeterA 0.1353439620E+02
5 25.0 0.2598133789E+02 FlowmeterA 0.1469761628E+02
. .... ................ .......... ................
>>>> standard DEVIATION: 0.5 [hPa]
<<<<
<<<
See Also:
>>>> DATA, >>>> DEVIATION (o), >>>> FORMAT, >>>> HEADER
@@@
Syntax: >>>> COMPONENT comp_name/: icomp
Parent Command:
>>> ELEMENT
>>> SOURCE (o)
Subcommand:

Description:
This command identifies a component either by its name (comp_name) or the
component number (icomp). A list of allowable component names for the
given EOS module can be obtained from the header of the iTOUGH2 output file.
Example:
> OBSERVATION
>> CONCENTRATION
>>> ELEMENT: ZZZ99
>>>> ANNOTATION: TCE concentration
>>>> COMPONENT No.: 3 (=VOC)
>>>> dissolved in LIQUID PHASE
>>>> DATA on FILE: tce.dat
>>>> standard DEVIATION: 1.0E-6
<<<<
<<<
<<
See Also:
>>>> PHASE
@@@
Syntax: >> COMPRESSIBILITY
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter the pore space compressibility c
(TOUGH2 variable COM(NMAT)) of a certain rock type.
It is suggested to estimate the logarithm of c. Under fully liquid-saturated
conditions, pore space compressibility estimates c can be converted to a
specific storage coefficient S_s [1/m] as follows:
S_s = phi*rho*g*(c_w + c)
where phi is porosity, rho is density of water, g is gravitational
acceleration, and c_w = 4.4E-10 [1/Pa] is water compressibility.
Similarly, estimation of c for grid blocks representing a well or borehole
is useful for the determination of a dimensionless wellbore storage
coefficient C_bh:
C_bh = phi_bh*rho*g*V_bh*(S_l*c_w + S_g*c_g + c)
where phi_bh and V_bh are the porosity and volume of the grid block representing
the well, S_l and S_g are the liquid and gas saturation, and c_g is gas
compressibility, which is approximately 1/p.
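The first conversion can be written out as follows (illustrative Python; the
default density and gravity values are assumptions, not taken from iTOUGH2):

```python
def specific_storage(phi, c, rho=998.0, g=9.81, c_w=4.4e-10):
    # S_s = phi*rho*g*(c_w + c): specific storage [1/m] from porosity phi,
    # pore space compressibility c [1/Pa], and water compressibility c_w.
    # rho (water density, kg/m3) and g (m/s2) are assumed values.
    return phi * rho * g * (c_w + c)
```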
Example:
> PARAMETER
>> pore space COMPRESSIBILITY
>>> MATERIAL: BOREH
>>>> ANNOTATION: Wellbore storage
>>>> LOGARITHM
>>>> RANGE : -10.00 -7.0
<<<<
>>> MATERIAL: SKIN_ ROCK_ BOUND
>>>> ANNOTATION: Storativity
>>>> LOGARITHM
<<<<
<<<
<<
See Also:

@@@
Syntax: > COMPUTATION
Parent Command:

Subcommand:
>> CONVERGE
>> ERROR
>> JACOBIAN
>> OPTION
>> OUTPUT
Description:
This is the first-level command for specifying a number of computational
parameters, convergence criteria, program options, and output formats.
The general format is as follows:
> COMPUTATION
>> specify various program OPTIONS
>> specify CONVERGEnce criteria
>> specify parameters for calculating JACOBIAN matrix
>> specify parameters for ERROR analysis
>> specify OUTPUT formats
<<
Example:
> COMPUTATION
>> CONVERGENCE criteria
>>> perform : 5 ITERATIONS
<<<
>> JACOBIAN
>>> use CENTERED finite difference quotient with
>>> a relative parameter PERTURBation of : 0.5 %
<<<
>> program OPTIONS
>>> use LEASTSQUARES objective function (default)
>>> allow the simulation to reach STEADY state
<<<
>> OUTPUT
>>> generate PLOTFILE for: TECPLOT visualization software
>>> print all times in HOURS
<<<
<<
See Also:

@@@
Syntax: >> CONCENTRATION (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects as an observation type the concentration [kg/m3] of
a component in a given phase. Concentration is defined as the product of
mass fraction of component icomp in phase iphase times density of phase iphase.
This observation type refers to an element. Component number icomp or
component name comp_name, and phase number iphase or phase name phase_name
depend on the EOS module being used. They are listed in the header of the
iTOUGH2 output file, and can be specified either on the command line or using
the two subcommands >>>> COMPONENT and >>>> PHASE, respectively.
Example:
> OBSERVATION
>> CONCENTRATION of BRINE in LIQUID
>>> ELEMENT: A1__1
or
> OBSERVATION
>> CONCENTRATION of COMPONENT No.: 2 in PHASE No.: 2
>>> ELEMENT: A1__1
or
> OBSERVATION
>> CONCENTRATION
>>> ELEMENT: A1__1
>>>> COMPONENT: 2
>>>> LIQUID PHASE
See Also:
>> MASS FRACTION
@@@
Syntax: >> CONDUCTIVITY (WET/DRY)
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter the formation heat conductivity under
fully liquid saturated (keyword WET, default, TOUGH2 variable CWET(NMAT))
or desaturated conditions (keyword DRY, TOUGH2 variable CDRY(NMAT)).
Example:
> PARAMETER
>> heat CONDUCTIVITY under DRY conditions
>>> ROCK type : GRANI
>>>> VALUE
>>>> RANGE : 0.5 5.0 [W/m/C]
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> CONNECTION: elem1 elem2 (elem_i elem_j ...) (++/+-/-+ iplus)
or
>>> INTERFACE: elem1 elem2 (elem_i elem_j ...) (++/+-/-+ iplus)
Parent Command:
>> FLOW
Subcommand:
all fourth-level commands in block > OBSERVATION
Description:
This command reads pairs of element names defining a connection.
Element names are designated by a three-character/two-integer
(FORTRAN format: AAAII) code name. Blanks in the element names as printed
in the TOUGH2 output file must be replaced by underscores (e.g., an
element name specified in the TOUGH2 input file as 'B 007' is printed as
'B 0 7' to the TOUGH2 output file. Therefore, it must be addressed in
the iTOUGH2 input file as 'B_0_7'). Multiple connections can be specified,
and iTOUGH2 calculates the sum or mean of all flow rates (see subcommands
>>>> SUM and >>>> AVERAGE, respectively).
A sequence of iplus connections can be generated where the number of the
first and/or the second element is increased by 1. If only the first (second)
element in a sequence of connections should be increased, use +- (-+).
If both elements are to be increased, use + (or ++).
The following two command lines are thus identical:
>>> CONNECTION: AA__1 BB_15 -+ 2
>>> CONNECTION: AA__1 BB_15 AA__1 BB_16 AA__1 BB_17
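The sequence generation can be sketched as follows (illustrative Python; the
helper and its name-bumping rule for AAAII code names are hypothetical):

```python
def expand_connections(elem1, elem2, iplus, bump_first=False, bump_second=True):
    # Generate iplus additional connections by incrementing the two-digit
    # number at the end of a five-character AAAII element name; underscores
    # stand for blanks and pad single-digit numbers.
    def bump(name, n):
        num = int(name[3:].replace("_", "0")) + n
        return name[:3] + str(num).rjust(2, "_")
    pairs = []
    for n in range(iplus + 1):
        pairs.append((bump(elem1, n) if bump_first else elem1,
                      bump(elem2, n) if bump_second else elem2))
    return pairs
```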
Example:
> OBSERVATION
>> LIQUID FLOW RATE
>>> list of CONNECTIONS: ELM_1 ELM_2 + 48
>>>> ANNOTATION : Boundary flux
>>>> take the SUM of the flow rates across 49 connections
>>>> DATA on FILE : flow.dat
>>>> RELATIVE error: 10 %
<<<<
<<<
<<
See Also:
>>>> AVERAGE, >>>> SUM
@@@
Syntax: >>> CONSECUTIVE: max_iter1
Parent Command:
>> CONVERGE
Subcommand:

Description:
By default, TOUGH2 simulations are stopped if 10 consecutive time steps
converge with a single NewtonRaphson iteration because no update of primary
variables occurs. This command allows changing the maximum number of
allowable time steps with ITER=1 to max_iter1.
Consecutive time steps with no update of primary variables occur if:
(1) steady state is reached;
(2) calibration or printout times are too narrowly spaced;
(3) the maximum time step size (TOUGH2 variable DELTMX) is too small;
(4) the initial time step (TOUGH2 variable DELTEN or DLT(1)) is too small;
(5) a small time step is taken to land on a calibration or printout time.
Only (1) is an acceptable TOUGH2 convergence (see command >>> STEADY-STATE).
All the other reasons may lead to premature termination of a TOUGH2 simulation.
Convergence problems are more often encountered in iTOUGH2 than in a standard
TOUGH2 simulation because many parameter combinations are submitted.
This command makes TOUGH2 more tolerant of this kind of convergence
failure. It is important, however, that max_iter1 is only increased to
overcome temporary convergence problems during the optimization, i.e., the
final parameter set should yield a TOUGH2 simulation without convergence
problems. Calibration points should not be spaced too narrowly in time
(see command >> TIME). Command >>> ADJUST can be used to overcome
problem (4). Note that a special time stepping procedure is incorporated
in iTOUGH2 to avoid problem (5).
Example:
> COMPUTATION
>> CONVERGE
>>> accept : 20 CONSECUTIVE time steps that converge on ITER=1
<<<
<<
See Also:
>> TIME, >>> ADJUST, >>> REDUCTION
@@@
Syntax: >> CONTENT (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects as an observation type the content of phase iphase.
Phase content is defined as the product of saturation
and porosity. The phase name phase_name or phase number iphase, which
depend on the EOS module being used, are listed in the iTOUGH2 header,
and can be specified either on the command line or using subcommand
>>>> PHASE. Estimating phase content and phase saturation is identical
only if porosity remains constant. Porosity can be variable (i) due to
compression of the pore space (i.e., if TOUGH2 variable COM(NMAT) is not zero),
and (ii) if porosity is one of the parameters to be estimated and is updated
during the inversion.
Example:
> OBSERVATION
>> TIME: 1 point at steady state
1.0E20
>> LIQUID CONTENT or
CONTENT in PHASE No.: 2
>>> ELEMENT: A1__1
>>>> ANNOTATION: Water content
>>>> FACTOR : 0.01 (data given in %)
>>>> one steady-state DATA point
0.0 23.0
1.0E50 23.0
>>>> DEVIATION: 5.0 %
<<<<
>>> ELEMENT: A1__1
>>>> ANNOTATION: Gas content
>>>> GAS PHASE (overwrites iphase specified on command line)
>>>> NO DATA, just for plotting
>>>> WEIGHT : 1.0E-20
<<<<
<<<
<<
See Also:

@@@
Syntax: >> CONVERGE
or
>> STOP
or
>> TOLERANCE
Parent Command:
> COMPUTATION
Subcommand:
>>> ADJUST
>>> CONSECUTIVE
>>> DELTFACT
>>> FORWARD
>>> INCOMPLETE
>>> INPUT
>>> ITERATION
>>> LEVENBERG
>>> LIST
>>> MARQUARDT
>>> REDUCTION
>>> SIMULATION
>>> STEP
>>> UPHILL
>>> WARNING
Description:
This is the parent command of a number of subcommands that deal with tolerance
measures and convergence criteria for the inversion and, to a certain extent,
the TOUGH2 simulation.
Example:
> COMPUTATION
>> CONVERGEnce criteria
>>> ignore WARNING messages, then
>>> perform : 5 ITERATIONS
>>> stop if more than : 5 unsuccessful UPHILL steps are proposed
>>> allow for : 20 CONSECUTIVE time steps converging on ITER=1
>>> and : 20 time step REDUCTIONS
>>> accept : 6 INCOMPLETE TOUGH2 runs
>>> the initial LEVENBERG parameter shall be : 0.01
>>> use the default value (=: 10.0) for the MARQUARDT parameter
<<<
See Also:
>> OPTION
@@@
Syntax: >>>> CORRELATION: ()rcorr
Parent Command:
>>> SELECT
Subcommand:

Description:
This command defines one of the criteria used for automatic parameter
selection. It examines the ratio between the apparent conditional standard
deviation sigma* and the joint standard deviation sigma as a measure of
overall parameter correlation (since the calculation is performed far from
the minimum, the standard deviations cannot be interpreted as actual estimation
uncertainties):
X = sigma*/sigma (0 < X < 1)
Those parameters with a ratio larger than rcorr, i.e., the most independent
parameters, are selected. Strongly correlated parameters are (temporarily)
excluded from the optimization process.
If a negative value is given for rcorr, the selection criterion is relaxed with
each iteration k, and reaches zero for the last iteration max_iter, i.e., all
parameters are selected for the final step.
rcorr_k = rcorr*(1 - k/max_iter)
The choice of rcorr depends on the number of parameters n specified in block
> PARAMETER. The more parameters are estimated simultaneously, the stronger
the parameter interdependencies become. This fact should be acknowledged by
specifying a smaller value for rcorr as n increases.
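The selection rule and the linear relaxation of a negative rcorr can be sketched as follows (an illustrative Python sketch; the function names are hypothetical, not iTOUGH2 internals):

```python
# Hypothetical sketch of the CORRELATION selection criterion (not iTOUGH2
# source): a parameter is retained when its ratio X = sigma*/sigma exceeds
# the threshold; a negative rcorr is relaxed linearly toward zero.

def relaxed_threshold(rcorr, k, max_iter):
    """rcorr_k = |rcorr|*(1 - k/max_iter) for negative rcorr, else rcorr."""
    if rcorr >= 0.0:
        return rcorr
    return abs(rcorr) * (1.0 - k / max_iter)

def select_parameters(ratios, rcorr, k, max_iter):
    """Indices of parameters whose ratio exceeds the (relaxed) threshold."""
    t = relaxed_threshold(rcorr, k, max_iter)
    return [i for i, x in enumerate(ratios) if x > t]
```

At the last iteration (k = max_iter) a negative rcorr yields a threshold of zero, so all parameters are selected for the final step, as stated above.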
Example:
> COMPUTATION
>> OPTION
>>> SELECT parameter automatically every
>>>> : 3 ITERATIONS
>>>> based on the CORRELATION criterion with rcorr : 0.10
<<<<
<<<
<<
See Also:
>>>> ITERATION (s), >>>> SENSITIVITY
@@@
Syntax: >> COVARIANCE (FILE: file_name)
Parent Command:
> OBSERVATION
Subcommand:

Description:
This command reads the diagonal elements of the a priori covariance matrix C_zz.
Usually the variances of the observations are specified separately for each
data set using command >>>> DEVIATION (o), or they are provided as a separate
column along with the data (see commands >>>> COLUMN and >>>> DATA).
As an alternative, one can assign or overwrite variances using the second-level
command >> COVARIANCE, followed by two columns holding index i and variance c_ii.
Since this command addresses elements of the assembled covariance matrix,
the user must provide the index that corresponds to the position of the
observation in vector z. This information is best retrieved from the iTOUGH2
output file after running one forward simulation. Note that the index changes
whenever the number of observations, parameters, or calibration times is
changed. The variances can also be read from a covariance file, which must
contain three columns holding index i, index j, and the (co)variance c_ij
(this is the same format as that of the covariance file generated by
command >>> COVARIANCE). Although two indices must be provided,
only diagonal elements, i.e., c_ii, will be accepted as input.
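The acceptance rule for the covariance file can be illustrated with a short sketch (illustrative Python, not iTOUGH2's FORTRAN reader; the helper name is hypothetical):

```python
# Illustrative Python sketch (not iTOUGH2's FORTRAN reader) of the
# covariance-file rule described above: three columns i, j, c_ij, of which
# only diagonal entries (i == j) are accepted.

def read_diagonal_variances(lines):
    """Parse 'i j c_ij' lines and return {i: c_ii}, skipping off-diagonals."""
    variances = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 3:
            continue                    # ignore blank or malformed lines
        i, j, c = int(fields[0]), int(fields[1]), float(fields[2])
        if i == j:                      # only diagonal elements are accepted
            variances[i] = c
    return variances
```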
Example:
> OBSERVATION
>> : 30 EQUALLY spaced calibration TIMES in MINUTES between
3.0 90.0
>> plus : 1 steady-state TIME near
86400.0 seconds
>> PRESSURE
>>> ELEMENT: A1__1
>>>> transient and steady-state DATA are on FILE: pres.dat
>>>> standard DEVIATION: 2000.0 Pa
<<<<
<<<
>> change one element of COVARIANCE matrix to increase its weight
32 1.0E4
(provided that 2 parameters are estimated, the steady-state data point is
observation number 32)
<<
See Also:
>>> COVARIANCE, >>>> COLUMN, >>>> DEVIATION, >>>> DATA
@@@
Syntax: >>> COVARIANCE
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command generates a file with extension ".cov" with the covariance
matrix of the calculated system response:
C_zz = J C_pp J^T
Note that C_zz is a square matrix of dimension m*m.
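The matrix product above can be illustrated with a minimal numpy sketch (the numbers are made up for illustration):

```python
# Minimal numpy sketch of the linear error propagation C_zz = J C_pp J^T;
# the Jacobian and parameter covariance values are made up for illustration.
import numpy as np

J = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])          # Jacobian: m = 3 observations, n = 2 parameters
C_pp = np.diag([0.04, 0.01])        # covariance of the estimated parameters

C_zz = J @ C_pp @ J.T               # m x m covariance of the system response
```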
Example:
> COMPUTATION
>> OUTPUT
>>> print COVARIANCE matrix of calculated system response
<<<
<<
See Also:

@@@
Syntax: >> CUMULATIVE (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> SOURCE
Description:
This command selects as an observation type the cumulative injection or
production of component icomp or phase iphase. This observation type refers
to a sink or source code name. It can be used when time-dependent generation
rates are to be estimated where the total amount of injected or produced fluid
is approximately known, or if the total generation rate is prescribed in
block GENER, but the phase composition of the produced fluid is variable and
sensitive to the parameters of interest. Finally, the cumulative amount of
injected or produced fluid can be used as an observable variable for wells
on deliverability. Note that the cumulative mass of a phase produced in an
element strongly depends on the composition of the produced fluid mixture
according to the options provided by TOUGH2 flag MOP(9). Component number
icomp or component name comp_name, and phase number iphase or phase name
phase_name depend on the EOS module being used. They are listed in the
iTOUGH2 header and can be specified either on the command line or using the
two subcommands >>>> COMPONENT and >>>> PHASE, respectively.
If neither a phase nor component number is specified, the total, cumulative
mass of all phases or components will be calculated.
Example:
> OBSERVATION
>> CUMULATIVE METHANE produced
>>> SOURCE: RW__1
>>>> ANNOTATION : Total methane [l]
>>>> FACTOR : 7.67358E-04 [l] -> [kg]
>>>> DATA from FILE: tot_ch4.dat [HOUR]
>>>> DEVIATION : 5.0 [l]
<<<<
<<<
<<
<
See Also:

@@@
Syntax: >>>> DATA (time_unit) (FILE: file_name)
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command reads a list of observation times and the corresponding data.
Each list will be referred to as a data set in the iTOUGH2 output file.
An annotation is generated for each data set, or a string can be supplied
by the user for its identification (see command >>>> ANNOTATION).
Data can be supplied either directly following the command line or on a
separate file (use keyword FILE followed by the name of the data file after
a colon). Time and data must be arranged in columns. They are read in
free format. Data are accepted until a FORTRAN input error occurs.
The total number of data points read by iTOUGH2 is printed to the iTOUGH2
output file and should be checked for consistency.
By default, the first column is expected to hold the observation times, and
the second column the data values. Deviations from this format are possible
(see commands >>>> COLUMN, >>>> FORMAT, >>>> SET (o), and >>>> HEADER for
details). Data can also be represented by a polynomial or a user-specified
function (see commands >>>> POLYNOM and >>>> USER, respectively).
Observation times do not need to coincide with the calibration times defined
by command >> TIME (o). Linear interpolation is performed for calibration
times that fall between observation times. This requires, however, that
the first observation time is earlier than the first calibration time, and
that the last observation time is later than the last calibration time.
If this condition is not met, command >>>> WINDOW should be used.
Only one time window can be specified for each data set, i.e., multiple data
sets must be provided if multiple time windows are needed.
If time is not given in seconds, the appropriate time unit (MINUTE, HOUR,
DAY, WEEK, MONTH, YEAR) must be specified on the command line.
If the units of the data points are different from the standard units used
in TOUGH2, a conversion factor must be provided through command >>>> FACTOR.
If no observed data are available (e.g., when performing design calculations
prior to testing, or when using iTOUGH2 for generating time series plots),
a dummy data set must be supplied. Alternatively, command >>>> NO DATA can
be used.
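The interpolation rule described above can be sketched as follows (illustrative Python using numpy, not iTOUGH2 source; the error message simply mirrors the >>>> WINDOW advice):

```python
# Sketch of the interpolation rule described above (illustrative only):
# calibration times must lie within the span of the observation times,
# otherwise command >>>> WINDOW is needed.
import numpy as np

def data_at_calibration_times(t_cal, t_obs, z_obs):
    """Linearly interpolate the data z_obs(t_obs) at the calibration times."""
    t_cal = np.asarray(t_cal, dtype=float)
    t_obs = np.asarray(t_obs, dtype=float)
    if t_cal.min() < t_obs[0] or t_cal.max() > t_obs[-1]:
        raise ValueError("calibration times outside data range; use >>>> WINDOW")
    return np.interp(t_cal, t_obs, z_obs)
```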
Example:
> OBSERVATION
>> GAS PRESSURE
>>> in ELEMENT: A1__1
>>>> DATA follow in default format, time (sec) vs. pressure (Pa)
1.0 100000.0
10.0 101343.8
20.0 105991.3
30.0 108965.9
60.0 115003.8
.... ........
3600.0 218762.0
>>>> standard DEVIATION: 5000.0 Pa
<<<<
<<<
>> LIQUID FLOW rate
>>> CONNECTION B1__1 B1__2
>>>> HEADER contains : 3 lines to be skipped
>>>> time and value are in COLUMNS: 3 6
>>>> conversion FACTOR is : 1.6667E-5 [ml/min] to [kg/sec]
>>>> use SET No. : 3
>>>> DATA are provided on FILE : flow.dat with time in MINUTES
>>>> a RELATIVE measurement error of : 5.0 % is assumed
<<<<
<<<
>> BRINE CONCENTRATION in LIQUID PHASE
>>> ELEMENT: C1__1
>>>> NO DATA available, just for plotting
>>>> assign small WEIGHT of : 1.0E20
<<<<
<<<
<<
See Also:
>>>> ANNOTATION, >>>> COLUMN, >>>> FACTOR, >>>> FORMAT, >>>> HEADER,
>>>> PICK, >>>> POLYNOM, >>>> SET (o), >>>> USER, >>>> WINDOW
@@@
Syntax: >>> DEFAULT
or
>>> MATERIAL: DEFAU
Parent Command:
>> CAPILLARY
>> INITIAL
>> RELATIVE
Subcommand:
all fourth-level commands in block > PARAMETER
Description:
Parameters of the default relative permeability and capillary pressure
functions (TOUGH2 block RPCAP) or default initial conditions
(TOUGH2 block PARAM.4) are addressed by command >>> DEFAULT.
Alternatively, the material name DEFAU can be provided following command
>>> MATERIAL.
Example:
> PARAMETER
>> INITIAL PRESSURE
>>> DEFAULT initial pressure (block PARAM.4)
>>>> ANNOTATION: Init. Formation Pres.
>>>> GUESS : 1.5E5
<<<<
<<<
>> RELATIVE PERMEABILITY FUNCTION
>>> MATERIAL: BOREH
>>>> ANNOTATION : Sgr borehole
>>>> PARAMETER #: 2
<<<<
>>> MATERIAL: BOUND DEFAU
>>>> ANNOTATION : Sgr elsewhere
>>>> PARAMETER #: 2
<<<<
<<<
<<
See Also:
>>> MATERIAL, >>> MODEL
@@@
Syntax: >>> DELTFACT: deltfact
Parent Command:
>> CONVERGE
Subcommand:

Description:
In TOUGH2, time step size can be controlled in various ways
(see Pruess [1987] for a description of input variables DELTEN, DLT,
DELTMX, NOITE, REDLT, and MOP(16)).
The time step is also automatically adjusted in order for the simulation
time to land on any of the specified calibration or printout times.
This may lead to very small time steps or even convergence failures
(see command >>> CONSECUTIVE).
In iTOUGH2, the proposed time step is increased up to a factor of DELTFACT
(default: 1.2) if this increase allows the simulation to reach the next
calibration or printout time.
This may lead to time stepping different from standard TOUGH2.
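The stretching rule can be sketched as follows (illustrative Python, not iTOUGH2 source; the handling of a proposed step that already overshoots the target is an assumption here):

```python
# Hedged sketch of the rule described above: the proposed step dt is
# stretched by up to a factor deltfact if that lets the simulation land
# exactly on the next calibration or printout time. Illustrative only.

def adjust_time_step(t, dt, t_target, deltfact=1.2):
    """Return the step to take from time t toward the next target time."""
    if t + dt * deltfact >= t_target:
        return t_target - t        # land exactly on the target time
    return dt                      # target not reachable: keep proposed step
```

With deltfact = 1.0, as in the example below, no stretching occurs and the time stepping matches standard TOUGH2.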
Example:
> COMPUTATION
>> CONVERGE
>>> DELTFACT: 1.0 (as in standard TOUGH2)
>>> allow : 20 CONSECUTIVE time steps converging in 1 iteration
<<<
<<
See Also:
>>> CONSECUTIVE
@@@
Syntax: >>> DESIGN
Parent Command:
>> OPTION
Subcommand:

Description:
(synonym for command >>> SENSITIVITY)
Example:
(see command >>> SENSITIVITY)
See Also:
>>> SENSITIVITY
@@@
Syntax: >>>> DEVIATION (There are two fourth-level commands >>>> DEVIATION.
Check parent command.)
Parent Command 1:
all third-level commands in block > PARAMETER
Syntax:
>>>> DEVIATION: sigma
Subcommand:

Description:
This command specifies the standard deviation sigma of the initial parameter
guess. Prior information about a model parameter is weighted by 1/sigma,
i.e., the weighted difference between the prior information value p* and the
estimate p contributes to the objective function.
Commands for specifying the standard deviation are:
>>>> DEVIATION: sigma
>>>> VARIANCE: sigma^2
>>>> WEIGHT: 1/sigma
By default, prior information is not weighted, i.e. sigma=infinity.
The standard deviation reflects the uncertainty associated with the initial
guess. If the initial guess is to be weighted, prior information should
originate from an independent source. For example, if porosity is
estimated based on transient pressure data, the prior information value
should be taken from a "direct" porosity measurement, e.g., using
mercury porosimetry or oven-drying methods. In these cases, the measured
parameter values p* are considered additional data points which serve
as a physical plausibility criterion for the estimate p. The p* values,
along with the observations of the system state z*, are then weighted
according to their uncertainties (see >>>> DEVIATION (o)).
Note that the relative weighting between prior information and the
observations z* depends on the number of calibration points selected.
If many transient data points are available, a smaller standard deviation
sigma may be specified to increase the relative weight of prior information.
In many cases, appropriately weighting the initial guess makes an ill-posed
inverse problem unique. Furthermore, the solution becomes more stable if
a parameter is not very sensitive. However, using 1/sigma as a regularization
parameter to improve the ability to obtain a unique solution with a poorly
conceptualized inverse problem is not recommended.
Erratic behavior of a parameter during the inversion should be taken as an
indication that the data do not contain sufficient information for the
determination of the parameter. Differences between parameter values that
are independently determined from laboratory experiments and inverse modeling
suggest the presence of a systematic error or scaling problem. These
inconsistencies should be resolved rather than averaged out.
The standard deviation sigma is also used to scale the columns of the
Jacobian matrix. While the solution of the inverse problem is not affected
by the choice of the scaling factor, all the qualitative sensitivity measures
are directly proportional to sigma. If prior information is not weighted,
the scaling factor is taken to be 10 % of the respective parameter value.
Command >>>> VARIATION should be used to change the default scaling factor
without concurrently assigning a weight to prior information.
When performing uncertainty propagation analyses, sigma designates the
parameter uncertainty affecting the model prediction. It is used to generate
a set of random parameter values for Monte Carlo simulations, and it represents
the standard deviation of a normal distribution if performing linear
uncertainty propagation analysis (for more details see commands
>>> MONTE CARLO and >>> FOSM, respectively).
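How a weighted prior enters the objective function can be sketched as follows (illustrative Python, not iTOUGH2 source; sigma=None represents the default of unweighted prior information):

```python
# Sketch of how weighted prior information contributes to the objective
# function: each prior acts like one extra data point with weighted
# residual (p - p*)/sigma. Illustrative only; sigma=None stands for the
# default case of unweighted prior information (sigma = infinity).

def prior_contribution(p, p_star, sigma):
    """Squared weighted residual of prior information; zero if unweighted."""
    if sigma is None:
        return 0.0
    return ((p - p_star) / sigma) ** 2
```

A smaller sigma thus increases the penalty for estimates that deviate from the independently measured value p*.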
Example:
> PARAMETER
>> POROSITY
>>> MATERIAL: TUFFn
>>>> PRIOR information : 0.38 (laboratory measurement)
>>>> standard DEVIATION: 0.04 (measurement error)
<<<<
>>> MATERIAL: ALLUV
>>>> PRIOR information : 0.30 (from experience)
>>>> VARIANCE : 0.01 (uncertainty of guess)
<<<<
>>> MATERIAL: FAULT
>>>> initial GUESS : 0.25 (no measurements available)
>>>> WEIGHT : 0.00 (default)
>>>> VARIATION : 0.10 (for scaling of Jacobian)
<<<<
<<<
<<
See Also:
>> GUESS, >>> FOSM, >>> MONTE CARLO, >>>> DEVIATION (o), >>>> PRIOR,
>>>> VARIANCE, >>>> VARIATION, >>>> WEIGHT
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
all third-level commands in block > OBSERVATION
Syntax:
>>>> DEVIATION: sigma
Subcommand:

Description:
This command specifies the standard deviation sigma of the observations.
The squares of the standard deviations constitute the diagonal elements of
the a priori covariance matrix C_zz.
The specified value sigma is assigned to all data points of the corresponding
data set. It must be given in the same units as the data (see command
>>>> FACTOR). Individual values for each calibration point can be explicitly
specified using command >>>> COLUMN or >> COVARIANCE, or are calculated as a
fraction of the measured value if using command >>>> RELATIVE.
The standard deviation should represent the expected variability of the final
residuals. In the absence of modeling errors, the standard deviation is
equivalent to the measurement error. A reasonable value can be derived by
visual examination of the data, i.e., by estimating the standard deviation
of the differences between the observed values and a line representing the
expected match; note that this procedure is based on the assumption that
time averages can be used to calculate the ensemble average, i.e., that the
data set is a result of an ergodic process.
The inverse of the a priori covariance matrix is used to weight the fitting
error. It also scales observations of different types and units.
In the framework of maximum likelihood theory, the covariance matrix
constitutes the stochastic model along with the assumption of normality
and independence.
The parameter estimates are not affected by the absolute values of sigma,
but only by the ratios sigma_i/sigma_j. It is suggested, however, to use
reasonable values that are related to the measurement error.
If the final residuals are, on average, significantly larger than the
a priori specified standard deviations, the Fisher model test fails.
The a posteriori standard deviations of the final residuals are printed in
the output for each data set for comparison purposes.
Alternative commands are:
>>>> DEVIATION: sigma
>>>> VARIANCE: sigma^2
>>>> WEIGHT: 1/sigma
The following command lines are thus equivalent:
>>>> standard DEVIATION: 0.1
>>>> VARIANCE: 0.01
>>>> WEIGHT: 10.0
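The equivalence of the three specifications can be reduced to a single standard deviation, as this small hypothetical helper illustrates:

```python
# Hypothetical helper illustrating that the three specifications above are
# interchangeable: each reduces to one standard deviation sigma.

def sigma_from(deviation=None, variance=None, weight=None):
    """Return sigma from DEVIATION (sigma), VARIANCE (sigma^2), or WEIGHT (1/sigma)."""
    if deviation is not None:
        return deviation
    if variance is not None:
        return variance ** 0.5
    if weight is not None:
        return 1.0 / weight
    raise ValueError("specify DEVIATION, VARIANCE, or WEIGHT")
```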
Example:
> OBSERVATION
>> PRESSURE
>>> ELEMENT : AA412
>>>> conversion FACTOR: 1E5 from [bar] to [Pa]
>>>> DATA on FILE: pressure.dat
>>>> standard DEVIATION: 0.05 [bar]
<<<<
<<<
<<
See Also:
>> COVARIANCE, >>>> AUTO, >>>> COLUMN, >>>> DEVIATION (p), >>>> RELATIVE,
>>>> VARIANCE, >>>> WEIGHT
@@@
Syntax: >>> DIRECT
Parent Command:
>> OPTION
or
>> CONVERGE
Subcommand:

Description:
(synonym for command >>> FORWARD (o))
Example:
(see command >>> FORWARD (o))
See Also:
>>> FORWARD (o)
@@@
Syntax: >> DRAWDOWN (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects as an observation type the pressure drawdown
during a pumping test. This observation type refers to one or more elements.
The drawdown is calculated from a reference pressure, which is the pressure
at the specified element at the time of the first active calibration point
for that data set, i.e., a calibration time greater than zero must be provided,
indicating the beginning of the pumping period; the corresponding observation
is a drawdown of zero. Note that the sensitivity of this calibration point is
zero by definition, i.e., the first data point is not used for calibration.
If drawdown is measured in meters, command >>>> FACTOR must be used to convert
the units to Pascals. The phase name phase_name or phase number iphase,
which depend on the EOS module being used, are listed in the header of the
iTOUGH2 output file. They can be specified either on the command line or
using subcommand >>>> PHASE. If no phase is specified, iTOUGH2 takes the
pressure drawdown of the first phase, which is usually the reference pressure.
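The drawdown definition can be sketched as follows (illustrative Python, not iTOUGH2 source; pressures are assumed ordered in time, starting at the first active calibration point):

```python
# Illustrative sketch of the drawdown definition above: the pressure at the
# first active calibration point serves as the reference, so the first
# drawdown value is zero by definition.

def drawdown(pressures):
    """Drawdown relative to the first (reference) pressure; first entry is 0."""
    p_ref = pressures[0]               # pressure at first active calibration point
    return [p_ref - p for p in pressures]
```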
Example:
> OBSERVATION
>> TIMES: 1 in [MINUTES]
10.0 = beginning of pumping
>> TIMES: 20 LOGARITHMICALLY spaced in [MINUTES]
11.0 120.0
>> DRAWDOWN
>>> ELEMENT: AA__1
>>>> ANNOTATION : Pressure drawdown [m]
>>>> FACTOR : 9810. [m] -> [Pa]
>>>> LOGARITHM
>>>> DATA [MINUTES]
10.0 0.0 beginning of pumping
11.0 0.1
15.0 0.3
25.0 0.5
.... .....
>>>> VARIANCE : 0.01 [m^2]
<<<<
See Also:
>> PRESSURE
@@@
Syntax: >> DRIFT
Parent Command:
> PARAMETER
Subcommand:
>>> NONE
>>> SET
Description:
This command selects as a parameter the slope of a time-dependent trend.
The trend is added to the TOUGH2 output referring to a specific data set:
z = z_TOUGH2 + drift*time
where (drift*time) is the trend added to the calculated TOUGH2 output z_TOUGH2.
The result z is compared to the measurement z* of the corresponding data set.
This option allows removal of a trend in the data (for example, a flowmeter
may exhibit an unknown offset and time-dependent trend that needs to be
estimated). A non-zero value must be provided as an initial guess through the
iTOUGH2 input file using command >>>> GUESS.
The data set is identified by number using command >>> SET (p).
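The correction applied to the TOUGH2 output can be sketched together with the related SHIFT offset (illustrative Python; the names are hypothetical, not iTOUGH2 internals):

```python
# Illustrative sketch of the data-set correction described above, shown
# together with the related SHIFT offset, i.e., dz = A + B*time with
# A = shift and B = drift. Not iTOUGH2 source code.

def corrected_output(z_tough2, time, shift=0.0, drift=0.0):
    """Apply a constant offset (SHIFT) and a linear trend (DRIFT)."""
    return z_tough2 + shift + drift * time
```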
Example:
> PARAMETER
estimate coefficients of regression dz=A+B*time to correct flowmeter data
>> SHIFT
>>> NONE
>>>> ANNOTATION: coefficient A (constant)
>>>> INDEX : 2 3 4 (identifies data sets)
>>>> GUESS : 4.0E6 [kg/sec]
<<<<
<<<
>> DRIFT
>>> NONE
>>>> ANNOTATION: coefficient B (slope)
>>>> INDEX : 2 3 4
>>>> GUESS : 1.0E9 [kg/sec/sec]
<<<<
<<<
<<
See Also:
>> FACTOR, >> LAG, >> SHIFT, >>> SET (p)
@@@
Syntax: >>> ELEMENT: eleme (eleme_i ...) (+ iplus)
or
>>> GRID BLOCK: eleme (eleme_i ...) (+ iplus)
Parent Command:
all second-level commands in block > OBSERVATION requiring element names.
Subcommand:
all fourth-level commands in block > OBSERVATION
Description:
This command reads one or more element names. Most observation types refer
to a variable that is associated with a grid block (as opposed to a connection
or sink/source name). Element names are designated by a
three-character/two-integer (FORTRAN format: AAAII) code name.
Blanks in the element names as printed in the TOUGH2 output file must be
replaced by underscores (e.g., an element name specified in the TOUGH2 input
file as 'B 007' is printed as 'B 0 7' in the TOUGH2 output file; therefore,
it must be addressed in the iTOUGH2 input file as 'B_0_7').
Multiple elements can be specified, and iTOUGH2 calculates the sum or mean
of the corresponding output variable (see subcommands >>>> SUM and >>>> MEAN,
respectively). A sequence of iplus elements can be generated by increasing
the number of the last element.
The following two command lines are equivalent:
>>> ELEMENT: AA__1 BB_15 +3
>>> ELEMENT: AA__1 BB_15 BB_16 BB_17 BB_18
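The '+ iplus' expansion can be sketched as follows (illustrative Python; it assumes the standard three-character/two-integer element names, with short numbers padded by underscores):

```python
# Illustrative sketch of the '+ iplus' expansion shown above (not iTOUGH2
# source code). Element names follow the three-character/two-integer
# convention; numbers below 10 are assumed to be padded with underscores.

def expand_elements(names, iplus):
    """Append iplus names formed by incrementing the number of the last name."""
    last = names[-1]
    prefix = last[:3]
    number = int(last[3:].replace("_", " "))          # '_' pads short numbers
    new = [prefix + ("%2d" % (number + k)).replace(" ", "_")
           for k in range(1, iplus + 1)]
    return names + new
```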
Example:
> OBSERVATION
>> GAS SATURATION
>>> ELEMENTS: ELM_0 + 99
>>>> ANNOTATION : Mean saturation
>>>> take the MEAN of the saturation in all 100 elements
>>>> DATA on FILE : Sg.dat
>>>> DEVIATION : 0.05
<<<<
<<<
<<
See Also:
>>> CONNECTION, >>> SINK, >>>> MEAN, >>>> SUM
@@@
Syntax: >> ENTHALPY (There are two second-level commands >> ENTHALPY.
Check parent command.)
Parent Command 1:
> PARAMETER
Syntax:
>> ENTHALPY
Subcommand:
>>> SOURCE
Description:
This command selects as a parameter the fixed specific enthalpy of the
injected fluid (TOUGH2 variable EX) or the time-dependent specific enthalpy of
the produced or injected fluid (TOUGH2 variable F3(L), LTAB>1 and ITAB
non-blank). This parameter refers to a sink/source code name.
Estimating a time-dependent enthalpy requires providing index L through
command >>>> INDEX.
Note that enthalpy is also an observation type (see command >> ENTHALPY (o)).
Example:
> PARAMETER
>> specific ENTHALPY
>>> SOURCE: INJ_1
>>>> ANNOTATION: fixed enthalpy
<<<<
>>> SOURCE: INJ_2 INJ_5
>>>> ANNOTATION: variable enthalpy
>>>> VALUE
>>>> INDEX : 1 2 3 8 9 10
<<<<
<<<
<<
See Also:
>> TIME (p), >> ENTHALPY (o)
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
> OBSERVATION
Syntax:
>> ENTHALPY (phase_name/PHASE: iphase)
Subcommand:
>>> SINK
Description:
This command selects as an observation type the flowing enthalpy in a
production well. This observation type refers to a sink code name.
If multiple elements are provided, the flowing enthalpy is weighted by the
individual production rates. If a phase is selected either by specifying a
valid phase_name or a phase number iphase or through command >>>> PHASE,
only the flowing enthalpy of that phase is calculated.
Note that the enthalpy of the injected fluid can also be an unknown parameter
to be estimated (see command >> ENTHALPY (p)).
Example:
> OBSERVATION
>> flowing ENTHALPY
>>> SINK: WEL_1 + 5
>>>> ANNOTATION : Flowing enthalpy
>>>> FACTOR : 1000.0 [kJ/kg] -> [J/kg]
>>>> DATA from FILE: enthalpy.dat [HOUR]
>>>> DEVIATION : 10.0 [kJ/kg]
<<<<
<<<
<<
<
See Also:
>> ENTHALPY (p)
@@@
Syntax: >> ERROR
Parent Command:
> COMPUTATION
Subcommand:
>>> ALPHA
>>> EMPIRICAL ORTHOGONAL FUNCTIONS
>>> EOF
>>> FISHER
>>> FOSM
>>> HESSIAN
>>> LINEARITY
>>> LIST
>>> MONTE CARLO
>>> POSTERIORI
>>> PRIORI
>>> TAU
Description:
This is the parent command of a number of subcommands that deal with the
a posteriori error analysis or uncertainty propagation analysis.
Example:
> COMPUTATION
>> ERROR analysis
>>> use confidence level 1-ALPHA : 95 %
>>> use sigma as determined by FISHER model test
>>> calculate finite difference HESSIAN matrix
>>> check LINEARITY assumption on : 99 % level
<<<
<<
See Also:

@@@
Syntax: >> FACTOR
Parent Command:
> PARAMETER
Subcommand:
>>> NONE
>>> SET
Description:
This command selects as a parameter a constant factor with which the calculated
TOUGH2 output will be multiplied. The factor is applied to the output that
refers to a specific data set:
z = z_TOUGH2 * factor
where factor is the multiplication factor and z_TOUGH2 is the TOUGH2 output.
The result z is compared to the measurement z* of the corresponding data set.
This option allows correcting for a systematic, but unknown relative error in
the data. The data set is identified by number using command >>> SET (p).
If the factor is known and does not need to be estimated, use command
>>>> FACTOR (o).
Example:
> PARAMETER
>> FACTOR
>>> SET No. : 1
>>>> ANNOTATION: correct amplitude
>>>> GUESS: 1.0
<<<<
<<<
<<
See Also:
>> DRIFT, >> LAG, >> SHIFT, >>> SET (p), >>>> FACTOR (o)
@@@
Syntax: >>>> FACTOR (There are two fourth-level commands >>>> FACTOR.
Check parent command.)
Parent Command 1:
all third-level commands in block > OBSERVATION
Syntax:
>>>> FACTOR: factor
Subcommand:

Description:
This command provides a conversion factor with which the data are multiplied
so they comply with standard TOUGH2 units. The conversion factor is also
applied to the standard deviation (see command >>>> DEVIATION (o)), i.e.,
the measurement error must be given in the same units as the data.
The standard TOUGH2 units are used throughout the iTOUGH2 output file, i.e.,
all observed and calculated values as well as residuals and standard
deviations have been multiplied by factor.
Example:
> OBSERVATION
>> PRESSURE
>>> ELEMENT: WEL99
>>>> conversion FACTOR: 1E5 [bar] -> [Pa]
>>>> pressure DATA in HOURS and bar on FILE: pres.dat
>>>> standard DEVIATION: 0.01 [bar]
<<<<
<<<
<<
See Also:
>> FACTOR
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
all third-level commands in block > PARAMETER
Syntax:
>>>> FACTOR
Subcommand:

Description:
The parameter to be estimated is a factor with which the initial parameter
guess is multiplied.
X=p*X0 <=> p=X/X0
Here, p is the estimated parameter, X is the TOUGH2 parameter, and X0 is the
initial value of the TOUGH2 parameter. This option is useful to estimate a
scaling factor for variable initial and boundary conditions, or to determine
the mean of a quantity while maintaining ratios (e.g., if estimating a common
factor applied to all three permeability values in a model domain, the
anisotropy ratio remains constant). If the factor is lognormally distributed,
add command >>>> LOGARITHM or use command >>>> LOG(F).
Estimating a factor is an alternative to estimating the parameter value directly
(command >>>> VALUE) or its logarithm (command >>>> LOGARITHM (p)).
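Scaling by a common factor while preserving ratios can be sketched as follows (illustrative Python):

```python
# Illustrative sketch of estimating a common multiplication factor
# (>>>> FACTOR): every initial TOUGH2 value X0 is scaled by the same p,
# so ratios between the values (e.g., anisotropy) are preserved.

def apply_factor(p, x0_values):
    """X = p * X0 for each initial value X0."""
    return [p * x0 for x0 in x0_values]
```

Because all values share the single factor p, the ratio between any two scaled values is independent of p.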
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> MATERIAL: CLAY1 SAND1
>>>> estimate multiplication FACTOR, and maintain both the
anisotropy ratio within a layer as well as the permeability
ratio between clay and sand.
>>>> INDEX : 1 2 3
>>>> initial GUESS: 1.0 (default)
>>>> RANGE : 0.01 100.0
<<<<
<<<
<<
See Also:
>>>> LOGARITHM (p), >>>> LOG(F), >>>> VALUE
@@@
Syntax: >>> FISHER
Parent Command:
>> ERROR
Subcommand:

Description:
The estimated error variance s0^2 represents the variance of the mean weighted
residual and is thus a measure of goodness-of-fit:
s0^2 = (r^T C_zz^-1 r) / (m - n)
The value s0^2 is used in the subsequent error analysis. For example, the
covariance matrix of the estimated parameters, C_pp, is directly proportional
to the scalar s0^2. Note that if the residuals are consistent with the
distributional assumption about the measurement errors (i.e., matrix C_zz),
then the estimated error variance assumes a value close to one. s0^2 is also an
estimate of the true or a priori error variance sigma0^2.
It can be shown that the ratio (s0^2/sigma0^2) follows an F-distribution with
the two degrees of freedom f1=m-n and f2=infinity. Therefore, it can be
statistically tested whether the final match deviates significantly
from the modeler's expectations, expressed by matrix C_zz.
This is called the Fisher Model Test. The user must decide
whether the error analysis should be based on the a posteriori or a priori
error variance (see commands >>> POSTERIORI and >>> PRIORI, respectively).
The decision can also be delegated to the Fisher Model Test according to
the following table:

Fisher Model Test          Error Variance   Comment
-------------------------------------------------------------------------
s0^2/sigma0^2 > F          s0^2             error either in the functional
                                            or stochastic model
1 < s0^2/sigma0^2 < F      s0^2             model test passed
s0^2/sigma0^2 < 1          sigma0^2         probably error in stochastic model

Example:
> COMPUTATION
>> ERROR
>>> let the FISHER model test decide whether the
a priori or a posteriori error variance should be used
>>> confidence level 1-ALPHA : 95 %
<<<
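The computation of s0^2 and the decision table above can be sketched as follows (hypothetical Python using numpy, not iTOUGH2 source; obtaining the critical value F from an F-distribution with (m-n, infinity) degrees of freedom is left out):

```python
# Hypothetical sketch of the estimated error variance and the Fisher Model
# Test decision table; F_crit is assumed to be supplied by the caller.
import numpy as np

def error_variance(r, C_zz, n):
    """s0^2 = r^T C_zz^-1 r / (m - n) for a residual vector r of length m."""
    m = len(r)
    return float(r @ np.linalg.solve(C_zz, r)) / (m - n)

def variance_for_error_analysis(s02, sigma02, F_crit):
    """Return (variance to use, test passed) according to the table above."""
    ratio = s02 / sigma02
    passed = ratio < F_crit            # ratio > F_crit: model test fails
    return (sigma02 if ratio < 1.0 else s02), passed
```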
See Also:
>>> ALPHA, >>> POSTERIORI, >>> PRIORI
@@@
Syntax: >> FLOW (phase_name/PHASE: iphase/HEAT)
Parent Command:
> OBSERVATION
Subcommand:
>>> CONNECTION
Description:
This command selects as an observation type the flow rate of phase iphase,
the total fluid flow rate, or the heat flux. This observation type refers
to a connection.
The phase name phase_name or phase number iphase, which depend on the EOS
module being used, are listed in the iTOUGH2 header. They can be specified
either on the command line or using the subcommand >>>> PHASE. If no phase
is specified, the total flow rate is selected. If keyword HEAT is present,
the total heat flux across the connection is selected.
Note that the sign of the calculated flow rate depends on the order of the
two elements in a connection.
Example:
> OBSERVATION
>> GAS FLOW rate
>>> CONNECTION: INJ_1 ELM_2
>>>> ANNOTATION : Gas flow
>>>> FACTOR : 16.666667 to convert from m^3/min to kg/s
>>>> DATA on FILE : inject.dat
>>>> RELATIVE error: 3.0 %
<<<<
<<<
>> FLOW rate
>>> CONNECTION: A11_1 A11_2 B11_1 B11_2 C11_1 C11_2 &
D11_1 D11_2 E11_1 E11_2 F11_1 F11_2
>>>> ANNOTATION: Flow across boundary
>>>> LIQUID PHASE flow rate
>>>> take the SUM
>>>> of the ABSOLUTE values of the 6 flow rates
>>>> DATA in [kg/s], time in [MINUTES] are on FILE: outflow.dat
>>>> standard DEVIATION: 0.1 kg/sec
<<<<
<<<
<<
See Also:

@@@
Syntax: >> FORCHHEIMER
Parent Command:
> PARAMETER
Subcommand:
>>> MODEL
Description:
This command selects a parameter of the Forchheimer nonDarcy flow
coefficient model (TOUGH2 variable FORCH(IPAR)).
Use command >>>> INDEX to select the parameter index IPAR.
Example:
> PARAMETER
>> parameter of FORCHHEIMER nonDarcy flow coefficient model
>>> MODEL
>>>> ANNOTATION : Constant Beta
>>>> INDEX : 1
>>>> LOGARITHM
>>>> RANGE : 3.0 8.0
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> FORMAT: format (LIST)
or
>>> PLOTFILE: format (LIST)
Parent Command:
>> OUTPUT
Subcommand:

Description:
iTOUGH2 does not directly generate graphs (except for the residual plot and
correlation chart). However, it does generate a plot file with data at the
calibration points and system response as calculated with the initial,
intermediate (see command >>> PLOTTING), and final parameter set. A plot
file with the relative permeability and capillary pressure curves can also
be requested (see command >>> CHARACTERISTIC). These plot files must be
processed by an external visualization package.
iTOUGH2 generates plot files in PLOPO format; PLOPO is visualization software
developed by U. Kuhlmann at ETH Zürich. The PLOPO plot file is internally
reformatted to comply with the formats of other visualization programs.
The string format identifies the visualization software (for a list of
available formats, add keyword LIST). The reformatted plot files have a file
extension specific to the chosen software (for example, ".tec" for TECPLOT).
A general format accepted by most commercially available plotting programs
is the arrangement in columns, where the first column contains the time,
and additional columns hold the data and calculated system response for
various observations. This general format can be obtained by using keyword
COLUMNS. The default format is TECPLOT; it can be changed to another format
in file it2main.f, BLOCK DATA IT, through variable IPLOTFMT.
Additional interfaces can be programmed into subroutines PLOTIF and REFORMAT.
Example:
> COMPUTATION
>> OUTPUT
>>> FORMAT of plot file : COLUMNS (print LIST of available formats)
<<<
<<
See Also:
>>> CHARACTERISTIC, >>> PLOTTING
@@@
Syntax: >>>> FORMAT: format
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command accepts a FORTRAN format statement for reading data. By default,
data are read in free format following the >>>> DATA command or directly from
a data file. If a column contains non-numeric characters, formatted input
can be invoked by providing a string format representing the corresponding
FORTRAN format statement. The format statement must be in brackets and must
not contain any blanks.
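As an aside, the fixed-width fields implied by a format such as
(3X,F11.1,30X,E17.10) can be emulated outside of FORTRAN by plain character
slicing. The following Python sketch (not part of iTOUGH2; the sample line
spacing is hypothetical) shows which columns the two fields occupy.

```python
# Illustrative sketch only (not iTOUGH2 code): the FORTRAN edit descriptors
# in "(3X,F11.1,30X,E17.10)" define fixed character columns:
#   3X -> skip cols 1-3;  F11.1 -> cols 4-14;  30X -> skip cols 15-44;
#   E17.10 -> cols 45-61.

def parse_line(line):
    """Emulate (3X,F11.1,30X,E17.10) by slicing the fixed-width fields."""
    time = float(line[3:14])     # the F11.1 field
    value = float(line[44:61])   # the E17.10 field
    return time, value

# A line shaped like the data table below (column spacing is hypothetical).
line = ("  1" + "        5.0"
        + " 0.1698331055E+02 FlowmeterA  "
        + " 0.9869399946E+01")
t, v = parse_line(line)          # t = 5.0, v is about 9.8694
```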
Example:
> OBSERVATION
>> LIQUID FLOW rate
>>> CONNECTION : ELM50 BOT_1
>>>> multiply measurements by FACTOR: 1.0E-06
>>>> FORMAT: (3X,F11.1,30X,E17.10) used to read time and value
>>>> COLUMN: 1 2 (time is in col. 1, flow rate in col. 2
of above FORMAT statement)
>>>> SKIP : 3 lines before reading values
>>>> DATA in MINUTES

# Time [min] Cap. Pres. [hPa] FlowmeterX Flow Rate [mg/s]

1 5.0 0.1698331055E+02 FlowmeterA 0.9869399946E+01
2 10.0 0.2075763428E+02 FlowmeterA 0.1039689596E+02
3 15.0 0.2357142822E+02 FlowmeterA 0.1162893932E+02
4 20.0 0.2529052490E+02 FlowmeterA 0.1353439620E+02
5 25.0 0.2598133789E+02 FlowmeterA 0.1469761628E+02
. .... ................ .......... ................
>>>> RELATIVE error is: 10.0 % of the individual measurement
<<<<
<<<
See Also:
>>>> DATA, >>>> COLUMN, >>>> HEADER, >>>> PICK
@@@
Syntax: >>> FORWARD (There are two third-level commands >>> FORWARD.
Check parent command.)
Parent Command 1:
>> CONVERGE
or
>> OPTION
Syntax:
>>> FORWARD
or
>>> DIRECT
Subcommand:

Description:
This command allows one TOUGH2 simulation to be performed in order to solve
the forward problem. It is advantageous to perform a single TOUGH2 simulation
before invoking more expensive inversions. The result from the forward run
can be used to check TOUGH2 and iTOUGH2 input. Furthermore, a plot file is
generated with the results from the simulation with the initial parameter set,
which can be compared to the observed data. The CPU time requirement for an
inversion can also be estimated from a single TOUGH2 simulation.
(Note that there is another command >>> FORWARD in block >> JACOBIAN.)
Example:
> COMPUTATION
>> STOP after
>>> solving the DIRECT problem
/*
>>> before performing : 5 iTOUGH2 ITERATIONS
*/
<<<
<<
See Also:
>>> SIMULATION
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
>> JACOBIAN
Syntax:
>>> FORWARD (: iswitch)
Subcommand:

Description:
With this command the elements of the Jacobian matrix are calculated by
means of a forward finite difference quotient:

    J_ij = dz_i/dp_j = [ z_i(p_j + dp_j) - z_i(p_j) ] / dp_j

The evaluation of the Jacobian thus requires n+1 TOUGH2 simulations, where n
is the number of parameters. The size of the perturbation dp can be controlled
using command >>> PERTURB.
The Jacobian is used in both the minimization algorithm and the a posteriori
error analysis. For the Levenberg-Marquardt minimization algorithm, the
accuracy obtained by using a forward (as opposed to centered) finite difference
quotient is usually sufficient, especially during the first few iterations far
away from the minimum. However, when approaching the minimum, and especially
for the subsequent error analysis, one might want to use a more accurate
approximation of the Jacobian. If a colon is given on the command line
followed by an integer iswitch, iTOUGH2 switches from forward to centered
finite differences after iswitch iterations.
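For illustration, the forward-difference scheme above can be sketched in a
few lines of Python (a toy only: any smooth function stands in for a TOUGH2
run, and the names forward_jacobian and rel_perturb are invented here):

```python
# Illustrative sketch only (not iTOUGH2 code): a forward finite-difference
# Jacobian, J[i][j] = (z_i(p + dp_j*e_j) - z_i(p)) / dp_j, costing n + 1
# model evaluations for n parameters. "model" stands in for a TOUGH2 run.

def forward_jacobian(model, p, rel_perturb=0.01):
    """One base run plus one perturbed run per parameter (n + 1 total)."""
    z0 = model(p)                           # base simulation
    m, n = len(z0), len(p)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):                      # one perturbed run per parameter
        dp = rel_perturb * p[j] if p[j] != 0.0 else rel_perturb
        q = list(p)
        q[j] += dp
        zj = model(q)
        for i in range(m):
            J[i][j] = (zj[i] - z0[i]) / dp
    return J

# Toy "model": z1 = p1*p2, z2 = p1 + 2*p2; analytic Jacobian at p = (2, 3)
# is [[3, 2], [1, 2]].
J = forward_jacobian(lambda p: [p[0] * p[1], p[0] + 2.0 * p[1]], [2.0, 3.0])
```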
Example:
> COMPUTATION
>> CONVERGE
>>> perform a total of : 6 iTOUGH2 ITERATIONS
<<<
>> JACOBIAN
>>> use FORWARD finite differences for : 5 iterations, i.e.
a centered finite difference quotient is used for the final
iteration and error analysis.
>>> PERTURBation factor is : 0.005 times the parameter value
<<<
<<
See Also:
>>> CENTERED, >>> PERTURB
@@@
Syntax: >>> EOF (MATRIX: ndim) (iTOUGH2) (CORRELATION)
Parent Command:
>> ERROR
Subcommand:

Description:
This command must be used in combination with command >>> MONTE CARLO
to invoke stochastic simulations of correlated parameters using
Empirical Orthogonal Functions (EOF).
EOF is a variant of Monte Carlo simulations to quantify the
uncertainty of model predictions as a result of parameter uncertainty.
Many parameter sets are generated, following the predefined covariance
matrix Cpp. The i-th parameter set Y_i is obtained as a linear combination
of the eigenvectors u_k of Cpp and stochastic coefficients phi_k(xsi_i):

    Y_i = SUM_{k=1..n} ( u_k * phi_k )

    phi_k(xsi_i) = xsi_i * SQRT( a_k )

where xsi_i is a standard normally distributed random variable, and a_k is
the k-th eigenvalue of Cpp. For more details, see Kitterod and Gottschalk
[1997].
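As an illustration of the sampling rule above, the following Python sketch
builds correlated parameter sets from the eigenpairs of a 2x2 covariance
matrix (hand-rolled eigensolver; all names and numbers are hypothetical, not
iTOUGH2 code):

```python
import math
import random

# Illustrative sketch only (not iTOUGH2 code): the EOF sampling rule
# Y = SUM_k u_k * phi_k with phi_k = xsi * SQRT(a_k), for a 2x2 covariance
# matrix Cpp.

def eig2x2_sym(c11, c22, c12):
    """Eigenpairs of the symmetric matrix [[c11, c12], [c12, c22]]."""
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    disc = math.sqrt(tr * tr / 4.0 - det)
    def pair(a):
        if c12 == 0.0:  # already diagonal: unit vectors
            return a, ((1.0, 0.0) if abs(a - c11) <= abs(a - c22) else (0.0, 1.0))
        vx, vy = c12, a - c11          # (Cpp - a*I) v = 0
        n = math.hypot(vx, vy)
        return a, (vx / n, vy / n)
    return pair(tr / 2.0 + disc), pair(tr / 2.0 - disc)

def eof_sample(mean, eig, rng):
    """One correlated parameter set from standard-normal coefficients xsi."""
    y = list(mean)
    for a, u in eig:
        phi = rng.gauss(0.0, 1.0) * math.sqrt(max(a, 0.0))
        y[0] += u[0] * phi
        y[1] += u[1] * phi
    return y

eig = eig2x2_sym(2.0, 1.0, 0.5)
sample = eof_sample([-12.0, 0.35], eig, random.Random(777))
```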
The elements of matrix Cpp can be supplied using one of the following options:
(1) Provide indices and elements of Cpp. Example:
>>> EMPIRICAL ORTHOGONAL FUNCTION
1 1 0.80643E-04
2 2 0.71921E-04
2 1 0.64412E-04
Use keyword CORRELATION if the off-diagonal term is a correlation coefficient
instead of a covariance. Example:
>>> EMPIRICAL ORTHOGONAL FUNCTION, CORRELATION
1 1 0.80643E-04
2 2 0.71921E-04
2 1 0.864
(2) Provide keyword MATRIX, followed by a colon and the dimension ndim of the
square matrix Cpp. The lower triangle of the covariance matrix is then
provided on exactly ndim additional lines.
If keyword CORRELATION is present, the off-diagonal terms represent
correlation coefficients rather than covariances. Example:
>>> EMPIRICAL ORTHOGONAL FUNCTION, dim. of CORRELATION MATRIX: 4
9.1234
0.67 0.00413
0.80 0.213 1.3E-6
0.50 0.155 0.90 4.3E-12
(3) If calculated during a previous iTOUGH2 inversion, the covariance matrix
can be taken from the iTOUGH2 output file and directly copied after the
command line. This option is invoked by keyword iTOUGH2.
The matrix will be read by formatted input, so it is crucial that the
correct format is maintained. If ndim is greater than 6, the matrix is
split in multiple submatrices. All submatrices must be copied exactly
as they were printed to the iTOUGH2 output file. Example:
>>> EOF error analysis, read MATRIX of dim.: 3 in iTOUGH2 format
log(abs. perm.) POROSITY SAND Gas entrapped
log(abs. perm.) .80643E-04 .846 .253
POROSITY SAND .64412E-04 .71921E-04 .500
Gas entrapped .52623E-05 .98296E-05 .53843E-05
Example:
> COMPUTATION
>> STOP
>>> number of Monte Carlo SIMULATIONS: 250
<<<
>> ERROR propagation analysis
>>> MONTE CARLO simulations, SEED number: 777
>>> EMPIRICAL ORTHOGONAL FUNCTION, dim. of CORRELATION MATRIX: 4
9.1234
0.67 0.00413
0.80 0.213 1.3E-6
0.50 0.155 0.90 4.3E-12
<<<
<<
See Also:
>>> FOSM, >>> MONTE CARLO
@@@
Syntax: >>> FOSM (MATRIX: ndim) (iTOUGH2) (CORRELATION) (DIAGONAL)
Parent Command:
>> ERROR
Subcommand:

Description:
This command performs First-Order-Second-Moment (FOSM) uncertainty propagation
analysis. FOSM quantifies the uncertainty of model predictions as a result of
parameter uncertainty. FOSM is the analysis of the mean and covariance of a
random function based on its first-order Taylor series expansion. FOSM
analysis presumes that the mean and covariance are sufficient to characterize
the distribution of the dependent variables, i.e., the model results are
assumed to be normally distributed, and perturbations about the mean can be
approximated by linear functions J. The covariance of the uncertain
parameters, Cpp, is translated into the covariance of the simulated system
response, Czz:

    Czz = J * Cpp * J^T

The diagonal elements of matrix Cpp, i.e., the variances of the parameters,
can be supplied by command >>>> VARIANCE (or related commands) in block
> PARAMETER. If correlations are to be taken into account, the full
covariance matrix must be provided. This is indicated by keyword MATRIX,
which is followed by a colon and the dimension ndim of the square matrix Cpp.
The elements of matrix Cpp can be supplied using one of the following options:
(1) ndim lines must be provided, each line holding the lower triangle of
the covariance matrix. If keyword CORRELATION is present, the
off-diagonal terms represent correlation coefficients rather than
covariances.
Example:
>>> FOSM error analysis, read CORRELATION MATRIX of dimension: 4
9.1234
0.67 0.00413
0.80 0.213 1.3E-6
0.50 0.155 0.90 4.3E-12
(2) If calculated during a previous iTOUGH2 inversion, the covariance matrix
can be taken from the iTOUGH2 output file and directly copied after the
command line. This option is invoked by keyword iTOUGH2. The matrix will
be read by formatted input, so it is crucial that the correct format is
maintained. If ndim is greater than 6, the matrix is split in multiple
submatrices. All submatrices must be copied exactly as they were
printed to the iTOUGH2 output file.
Example:
>>> FOSM error analysis, read MATRIX of dim.: 3 in iTOUGH2 format
log(abs. perm.) POROSITY SAND Gas entrapped
log(abs. perm.) .80643E-04 .846 .253
POROSITY SAND .64412E-04 .71921E-04 .500
Gas entrapped .52623E-05 .98296E-05 .53843E-05
If the full matrix is provided, but only the diagonal terms (variances) shall
be used in the uncertainty analysis, use keyword DIAGONAL on the command line.
This option makes it easy to study the impact of correlations on the
uncertainty propagation analysis.
The uncertainty of the model prediction as a result of parameter uncertainty
is given as a standard deviation in the residual analysis. Furthermore, the
plot file contains the system response for the mean parameter set as well as
error bands at the specified confidence level (see command >>> ALPHA).
It is suggested to also increase the perturbation factor for calculating
the Jacobian matrix, and to use a centered finite difference quotient.
This yields a more realistic error band if the model is highly
nonlinear. It should be realized, however, that Monte Carlo is the
preferred method if dealing with highly nonlinear flow systems.
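The linearized propagation rule Czz = J Cpp J^T can be sketched with plain
matrix products (illustrative Python only; the sensitivities and covariances
below are made up):

```python
# Illustrative sketch only (not iTOUGH2 code): FOSM propagation
# Czz = J Cpp J^T, using plain-Python matrix products.

def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def fosm(J, Cpp):
    """Czz = J Cpp J^T; the diagonal of Czz holds the prediction variances."""
    JT = [list(col) for col in zip(*J)]
    return matmul(matmul(J, Cpp), JT)

# Two predictions, two parameters (hypothetical values).
J = [[1.0, 2.0],
     [0.5, 0.0]]
Cpp = [[0.04, 0.0],
       [0.0, 0.01]]
Czz = fosm(J, Cpp)   # Czz is approximately [[0.08, 0.02], [0.02, 0.01]]
```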
Example:
> COMPUTATION
>> ERROR propagation analysis
>>> perform First-Order-Second-Moment (FOSM) analysis
>>> draw error bands on (1-ALPHA) =: 95 % confidence level
<<<
>> JACOBIAN
>>> use CENTERED finite difference quotient
>>> PERTURBATION factor at least: 5.0 %
<<<
<<
See Also:
>>> ALPHA, >>> MONTE CARLO
@@@
Syntax: >>>> GAUSS
or
>>>> NORMAL
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command generates normally distributed input parameters for
Monte Carlo simulations. This is the default distribution.
Parameter values will be generated following a normal distribution with
the initial guess as the mean, and the standard deviation taken from
command >>>> DEVIATION. Only values within the specified range will
be accepted. If command >>>> LOGARITHM is also present, parameters
follow a lognormal distribution.
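The sampling behavior described above (normal draws about the initial guess,
rejected if outside the admissible range) can be sketched as follows (Python
illustration, not iTOUGH2 code; the log-permeability numbers mirror the
example below):

```python
import random

# Illustrative sketch only (not iTOUGH2 code): normal draws about the mean,
# rejected until they fall inside the admissible range.

def truncated_normal(mean, sigma, lower, upper, rng):
    """Draw from N(mean, sigma), accepting only values in [lower, upper]."""
    while True:
        v = rng.gauss(mean, sigma)
        if lower <= v <= upper:
            return v

rng = random.Random(42)
# log10(k) ~ N(-12, 1) restricted to [-15, -9]; k itself is then log-normal.
draws = [truncated_normal(-12.0, 1.0, -15.0, -9.0, rng) for _ in range(1000)]
```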
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> MATERIAL: SAND1 BOUND WELLB
>>>> ANNOTATION : log(k) is uncertain
>>>> LOGARITHM
>>>> generate log-NORMAL distribution about mean...
>>>> initial GUESS : -12.0 and with...
>>>> standard DEVIATION : 1.0 within the...
>>>> admissible RANGE : -15.0 -9.0
<<<<
<<<
<<
> COMPUTATION
>> perform ERROR propagation analysis by means of...
>>> MONTE CARLO simulations
<<<
>> STOP after...
>>> : 400 TOUGH2 runs
<<<
<<
See Also:
>>> MONTE CARLO, >>>> UNIFORM
@@@
Syntax: >>> GAUSS-NEWTON
Parent Command:
>> OPTION
Subcommand:

Description:
This command performs Gauss-Newton steps to minimize the objective
function. The Gauss-Newton algorithm assumes linearity and can be
described as follows:

    dp = - (J^T * Czz^-1 * J)^-1 * J^T * Czz^-1 * r

    p^(k+1) = p^(k) + dp

Gauss-Newton steps are efficient if the model is linear (only one
iteration is required to find the minimum) or nearly linear. If the model
is highly nonlinear, Gauss-Newton steps are usually too large, leading to
an inefficient or even unsuccessful step.
By default, iTOUGH2 uses the Levenberg-Marquardt minimization algorithm,
which is a modification of the Gauss-Newton algorithm.
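A single Gauss-Newton step for a tiny linear problem can be sketched as
follows (Python illustration with the convention J = dr/dp used in the
>> JACOBIAN entry, hence the leading minus; all names are invented here):

```python
# Illustrative sketch only (not iTOUGH2 code): one Gauss-Newton step for a
# 2-parameter linear least-squares problem. W = diag(1/var) plays the role
# of Czz^-1, and J holds dr_i/dp_j, so dp = -(J^T W J)^-1 J^T W r.

def gauss_newton_step(J, r, var):
    """Gauss-Newton update for exactly two parameters (2x2 inverse by hand)."""
    m = len(r)
    A = [[sum(J[i][a] * J[i][b] / var[i] for i in range(m)) for b in range(2)]
         for a in range(2)]                      # A = J^T W J
    g = [sum(J[i][a] * r[i] / var[i] for i in range(m)) for a in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [-(A[1][1] * g[0] - A[0][1] * g[1]) / det,
            -(A[0][0] * g[1] - A[1][0] * g[0]) / det]

# Linear model z = p1 + p2*t at t = 0, 1, 2; data z* = [1, 3, 5];
# start at p = (0, 0), so r = z* - z = [1, 3, 5] and J = dr/dp = -dz/dp.
J = [[-1.0, 0.0], [-1.0, -1.0], [-1.0, -2.0]]
dp = gauss_newton_step(J, [1.0, 3.0, 5.0], [1.0, 1.0, 1.0])
# Because the model is linear, one step recovers the minimum: p + dp = (1, 2).
```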
Example:
> COMPUTATION
>> OPTION
>>> use GAUSS-NEWTON minimization algorithm
<<<
>> STOP after
>>> : 1 ITERATION
<<<
<<
See Also:
>>> ANNEAL, >>> GRID SEARCH, >>> LEVENBERG-MARQUARDT
@@@
Syntax: >> GENERATION (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
or
>> PRODUCTION (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> SINK
Description:
This command selects as an observation type the total or fractional
generation rate in a production well. This observation type refers
to a sink code name. The total generation rate is usually prescribed
in TOUGH2 block GENER, or can be considered a parameter to be estimated
(see command >> RATE). A variable generation rate suitable for calibration
is obtained only for wells on deliverability (type DELV), or if the
generation rate of a specific phase is used.
The fractional generation rate refers to the production of a specific phase
or component. The component name comp_name or component number icomp,
as well as the phase name phase_name or phase number iphase, which depend on
the EOS module being used, are listed in the iTOUGH2 header.
They can be specified either on the command line or using the subcommands
>>>> COMPONENT and >>>> PHASE, respectively.
If neither a phase nor a component is specified, the total generation rate
is calculated. If a phase but no component is selected, the generation rate
of that phase including all components is calculated. If a component but no
phase is selected, the generation rate of that component in all phases is
calculated. Finally, if a specific phase and a specific component is given,
only the generation rate of that component in the specified phase is
calculated.
Example:
> OBSERVATION
>> GAS GENERATION (includes both air and vapor)
>>> SINK: DLV_1 + 5
>>>> ANNOTATION : Gas generation [Nl/s]
>>>> FACTOR : 1000.0 [Nl/s]  [kg/s]
>>>> DATA on FILE : gasflow.dat [HOUR]
>>>> RELATIVE error: 10.0 [%]
<<<<
<<<
<<
<
See Also:
>> TOTAL MASS, >> CUMULATIVE, >> RATE, >>>> COMPONENT, >>>> PHASE
@@@
Syntax: >>> GRID BLOCK: eleme (eleme_i ...) (+ iplus)
Parent Command:
all second-level commands in block > OBSERVATION requiring element names.
Subcommand:
all fourth-level commands in block > OBSERVATION
Description:
(synonym for command >>> ELEMENT)
Example:
(see command >>> ELEMENT)
See Also:
>>> ELEMENT
@@@
Syntax: >>> GRID SEARCH (UNSORTED) (: ninval1 (ninval2 (ninval3)) / FILE: filename)
or
>>> OBJECTIVE (UNSORTED) (: ninval1 (ninval2 (ninval3)) / FILE: filename)
Parent Command:
>> OPTION
Subcommand:

Description:
This command evaluates the objective function for a number of specific parameter sets.
A list of parameter sets can be provided on file filename.
Alternatively, parameter sets are generated internally on a regular grid,
mapping out the entire parameter space.
In this case, lower and upper bounds must be defined for each
parameter using command >>>> RANGE in block > PARAMETER. This range is then
subdivided into ninval_i intervals by inserting ninval_i+1 equally spaced
points, generating parameter sets on a regular grid in the n-dimensional
parameter space. The objective function is evaluated at each grid point.
Evaluating the objective function throughout the entire parameter space is
referred to as grid searching.
The parameter set, the value of the objective function, and the contribution
of each observation type to the objective function are printed to the iTOUGH2
output file. The global minimum is likely to be in the vicinity of the
parameter set with the lowest objective function. Furthermore, the
information listed in the output file can be used to visually represent and
study the topology of the objective function. For example, one can generate
a contour plot of the objective function for n = 2, which may reveal the
presence of local minima.
If only one output variable is defined in block > OBSERVATION and no data
are provided, this option can also be used in combination with command
>>> L1-ESTIMATOR to examine the sensitivity of the output variable over an
extensive parameter range.
Keyword UNSORTED can be used in PVM applications to improve efficiency.
The output list must then be sorted in a postprocessing step.
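The grid-generation step described above can be sketched as follows (Python
illustration; grid_points is a hypothetical name, not an iTOUGH2 routine):

```python
import itertools

# Illustrative sketch only (not iTOUGH2 code): build all grid points from
# per-parameter ranges and interval counts (ninval intervals give
# ninval + 1 equally spaced points per parameter).

def grid_points(ranges, ninvals):
    """Cartesian product of per-parameter axes; len = prod(ninval_i + 1)."""
    axes = [[lo + (hi - lo) * k / n for k in range(n + 1)]
            for (lo, hi), n in zip(ranges, ninvals)]
    return list(itertools.product(*axes))

# 2-D example: 2 x 4 intervals -> 3 * 5 = 15 parameter sets to evaluate.
pts = grid_points([(-15.0, -9.0), (0.1, 0.5)], [2, 4])
```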
Example:
> COMPUTATION
>> OPTION
>>> GRID SEARCH! subdivide 2-D parameter space into: 20 10 intervals
<<<
<<
See Also:
>>> ANNEAL, >>> LEVENBERG-MARQUARDT, >>> GAUSS-NEWTON, >>> PVM, >>> SIMPLEX
@@@
Syntax: >> GUESS (FILE: file_name)
Parent Command:
> PARAMETER
Subcommand:

Description:
This command identifies initial guesses of the parameters to be estimated.
The initial guess vector p0 is the starting point of the minimization
algorithm (iteration k=0). Usually, the initial guess vector is identical
to vector p*, which holds prior information about the parameters.
By default, p0 and p* are identical, taken from the TOUGH2 input file,
and overwritten by command >>>> PRIOR. The starting point for the
minimization algorithm may, however, be different from the prior information
vector. In this case, prior information is taken from the TOUGH2 input file,
or must be provided through command >>>> PRIOR. The starting point is then
taken from this command, which overwrites the initial guess provided
through command >>>> GUESS.
A parameter is identified by an integer value indicating its position
in the parameter block:
I XIGUESS(I)
This format is identical to that of file .par.
Initial guesses can be provided either following the command line,
or read from a file if keyword FILE is present.
The filename is given after the colon. This latter option is useful
to transfer best estimates from one inversion to another if the order
of the parameters is the same.
Example:
> PARAMETER
>> GUESS
2 0.345 (initial guess for parameter no. 2)
3 16.971 (initial guess for parameter no. 3)
/* (the initial guess for parameter no. 1 is taken from the
fourth-level command >>>> PRIOR, or - if not present -
from the TOUGH2 input file) */
...
> PARAMETER
>> read GUESS from FILE: testi.par
See Also:
>>>> GUESS, >>>> PRIOR
@@@
Syntax: >>>> GUESS: guess
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command provides an initial guess of the parameter to be estimated.
If neither this command nor command >> GUESS is used, the initial guess
is taken from the TOUGH2 input file.
The initial guess is the starting point for the minimization algorithm,
to be distinguished from prior information (see command >>>> PRIOR).
The initial guess can be overwritten by the second-level command >> GUESS.
If command >>>> LOGARITHM is present, the initial guess is the logarithm
of the parameter. Similarly, if command >>>> FACTOR is present,
the initial guess should be a multiplication factor (default is 1.0).
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> ROCK types: CLAY1 CLAY2 CLAY3 BOUND
>>>> FACTOR
>>>> initial GUESS : 1.0
>>>> is not WEIGHTed : 0.0 (default)
<<<<
>>> ROCK type : SAND1
>>>> LOGARITHM
>>>> PRIOR information : -12.0
>>>> standard DEVIATION: 1.0 order of magnitude
<<<<
<<<
>> GUESS, i.e., starting point for optimization
2 -13.0
<<
See Also:
>> GUESS, >>>> PRIOR, >>>> DEVIATION, >>>> VARIATION
@@@
Syntax: >>>> HEADER: nskip
or
>>>> SKIP: nskip
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command identifies the number of lines to be skipped before reading data.
Data reading starts nskip+1 lines after the command line >>>> DATA, or on the
(nskip+1)th line of the data file (see command >>>> DATA).
Example:
> OBSERVATION
>> TEMPERATURE
>>> ELEMENTS: ELM_10 + 4
>>>> SKIP: 3 lines before reading data
>>>> DATA [HOURS]
(1) ----------------------------------------------
(2) time [h] temperature comment
(3) ----------------------------------------------
0.00 21.3 mean temperature prior to experiment
1.12 21.3 heater turned on
1.15 21.9
1.20 23.4
2.00 32.8 heater turned off
2.10 29.3
2.20 26.7
2.30 24.1
4.00 21.6 end of experiment

>>>> standard DEVIATION: 0.5 degrees C
<<<<
<<<
<<
See Also:
>>>> COLUMN, >>>> DATA, >>>> FORMAT, >>>> SET (o)
@@@
Syntax: HELP (a keyword in combination with any command)
Parent Command:

Subcommand:

Description:
A short message about the command usage is printed to the iTOUGH2 output file
if keyword HELP is present on the command line. See also LIST and >>> INDEX
for further support. If command >>> INPUT is used, the help message can be
retrieved without performing any iTOUGH2 calculations.
Example:
> COMPUTATION
>> CONVERGE (what does this command do? HELP!)
>>> while you're at it, print a LIST of available commands,
>>> then stop after INPUT is read (HELP again!)
<<<
<<
See Also:
>>> INDEX, LIST
@@@
Syntax: >>> HESSIAN
Parent Command:
>> ERROR
>> JACOBIAN
Subcommand:

Description:
This command computes a finite difference Hessian matrix H for the error
analysis following optimization. The elements of H are given by:

    H_jk = 2 * SUM_{i=1..m} (1/sigma_i^2) *
           [ (dz_i/dp_j)*(dz_i/dp_k) - (d2z_i/(dp_j dp_k))*r_i ]

The evaluation of H by means of finite differences requires 2n + n*(n-1)/2
additional TOUGH2 simulations, where n is the number of parameters.
By default, the Hessian matrix, which is the inverse of the parameter
covariance matrix, is approximated by

    J^T * Czz^-1 * J

based on the linearity assumption, i.e., the second derivative term is ignored.
Evaluating the finite difference Hessian, which takes into account the
nonlinearities, provides a means by which to check the linearity assumption
(for another approach see command >>> LINEARITY). This may lead to a more
accurate calculation of the covariance matrix of the estimated parameters.
However, inclusion of the second-derivative term may yield a Hessian matrix that
is not positive definite due to the presence of outliers, strong nonlinearities,
or the fact that the minimum has not been detected accurately, i.e., when the
positive and negative residuals r do not cancel each other. In this case,
iTOUGH2 automatically proceeds with the linearized Hessian which is positive
definite by definition.
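The quoted cost of 2n + n*(n-1)/2 additional runs corresponds to a standard
finite-difference stencil for a Hessian. A Python sketch (illustrative only,
not necessarily iTOUGH2's exact scheme) of such a stencil applied to a scalar
objective function S(p):

```python
# Illustrative sketch only: 2n one-sided evaluations for the diagonal plus
# n*(n-1)/2 cross evaluations for the off-diagonal terms, matching the
# 2n + n*(n-1)/2 count quoted above.

def fd_hessian(S, p, h=1e-4):
    """Second differences on the diagonal, cross differences elsewhere."""
    n = len(p)
    s0 = S(p)
    def at(shifts):
        q = list(p)
        for j, d in shifts:
            q[j] += d
        return S(q)
    sp = [at([(j, +h)]) for j in range(n)]        # n evaluations
    sm = [at([(j, -h)]) for j in range(n)]        # n evaluations
    H = [[0.0] * n for _ in range(n)]
    for j in range(n):
        H[j][j] = (sp[j] - 2.0 * s0 + sm[j]) / (h * h)
        for k in range(j + 1, n):                 # n*(n-1)/2 evaluations
            spp = at([(j, +h), (k, +h)])
            H[j][k] = H[k][j] = (spp - sp[j] - sp[k] + s0) / (h * h)
    return H

# Quadratic test function S = p1^2 + 3*p2^2 + p1*p2; exact Hessian [[2,1],[1,6]].
H = fd_hessian(lambda p: p[0]**2 + 3.0 * p[1]**2 + p[0] * p[1], [0.5, -0.3])
```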
Example:
> COMPUTATION
>> ERROR analysis should be based on
>>> finite difference approximation of the HESSIAN matrix
<<<
See Also:
>>> LINEARITY
@@@
Syntax: >>> INCOMPLETE: max_incomplete
Parent Command:
>> CONVERGE
Subcommand:

Description:
A successful iTOUGH2 run is based on the robustness and stability of the
underlying TOUGH2 simulation. It is therefore imperative to develop a TOUGH2
model that is capable of completing the desired simulation for a variety of
parameter combinations. The simulation must reach the time of the last
calibration point, i.e., a premature termination due to convergence failures is
not acceptable. The number of potential convergence failures or errors leading
to premature termination is large. The type of convergence failure is
indicated in the iTOUGH2 output file. It is usually impossible to continue the
optimization process after a convergence failure. However, in some cases
iTOUGH2 is able to retrieve information from a previous simulation which allows
it to continue the inversion despite an incomplete run. iTOUGH2 always
terminates if an incomplete run is encountered during the first evaluation of
the Jacobian. The maximum number of incomplete simulations to be accepted by
iTOUGH2 can be set by variable max_incomplete (default: 5). However, one should
not rely on inversions that contain incomplete TOUGH2 runs.
Note that if option >>> STEADY-STATE is invoked, all incomplete simulations are
accepted by iTOUGH2 assuming that steadystate conditions have been reached.
Example:
> COMPUTATION
>> TOLERANCE
>>> perform : 10 ITERATIONS
>>> or stop if : 250 TOUGH2 SIMULATIONS are executed
>>> accept : 10 INCOMPLETE runs, if possible
<<<
<<
See Also:
>>> STEADY-STATE
@@@
Syntax: >>> INDEX
Parent Command:
>> OUTPUT
Subcommand:

Description:
Prints the iTOUGH2 command index to the iTOUGH2 output file.
Example:
> COMPUTATION
>> OUTPUT
>>> print command INDEX
<<<
<<
See Also:
LIST, HELP
@@@
Syntax: >>>> INDEX (There are two fourth-level commands >>>> INDEX. Check parent command.)
Parent Command 1:
most third-level commands in block > PARAMETER
Syntax:
>>>> INDEX: index (index_i...)
or
>>>> PARAMETER: index (index_i...)
Subcommand:

Description:
This command provides a list of integers for further parameter specification.
The integers are usually indexes of TOUGH2 arrays, such as IPAR in arrays
CP(IPAR,NMAT) or RPD(IPAR), selecting the IPARth parameter of the capillary
pressure or default relative permeability function, respectively.
If multiple indexes are provided, a single parameter will be estimated and
assigned to all the corresponding array elements.
Example:
> PARAMETER
>> estimate 2nd parameter of default CAPILLARY pressure function
>>> DEFAULT
>>>> PARAMETER CPD: 2
<<<<
<<<
>> optimize generation RATE of alternating "huff & puff" system
>>> SOURCE: WEL_1
>>>> ANNOTATION : Injection
>>>> INDEX of array F2 : 1 3 5 7 9
<<<<
>>> SOURCE: WEL_1
>>>> ANNOTATION : Pumping
>>>> INDEX of array F2 : 2 4 6 8 10
<<<<
<<<
<<
See Also:

@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
all third-level commands in block > OBSERVATION
Syntax:
>>>> INDEX: index (index_i...)
or
>>>> PARAMETER: index (index_i...)
Subcommand:

Description:
This command provides a list of integers for further specification of
user-specified observations (see command >> USER (o)).
Example:
(see command >> USER (o))
See Also:
>> USER(o)
@@@
Syntax: >> IFS
Parent Command:
> PARAMETER
Subcommand:
>>> MODEL
Description:
This command selects as a parameter one of the IFS parameters. An index I must
be provided through command >>>> INDEX. If I is positive, the parameter is one
of the affine transform entries (variable PIFS(I)). If I is negative, the
parameter is the increment parameter for property field -I (variable TINC(-I)).
If I is lower than -100, the parameter is the smoothing parameter in direction
-I-100 (variable SMOOTH(-I-100)).
Example:
> PARAMETER
>> IFS parameter
>>> MODEL
>>>> ANNOTATION: Diag. elements of B
>>>> INDEX : 1 4
>>>> LOGARITHM
<<<<
>>> MODEL
>>>> ANNOTATION: Increment
>>>> INDEX : -1
>>>> VALUE
<<<<
>>> MODEL
>>>> ANNOTATION: Smoothing in X and Y direction
>>>> INDEX : -101 -102
>>>> VALUE
<<<<
<<<
<<
See Also:

@@@
Syntax: >> INITIAL (PRESSURE/: ipv)
Parent Command:
> PARAMETER
Subcommand:
>>> DEFAULT
>>> MATERIAL
Description:
This command selects as a parameter the initial condition for all grid blocks
associated with a certain rock type (TOUGH2 variable DEPU(ipv)) or default
initial condition (TOUGH2 variable DEP(ipv)). Since boundary conditions are
specified as initial conditions for inactive grid blocks or grid blocks with a
large volume, this command can also be used to estimate boundary conditions.
Estimates of initial conditions cannot be provided for individual elements
unless they have a unique material name associated with them. Estimating the
first primary variable can be selected using keyword PRESSURE. All the other
primary variables must be identified by number, i.e. by an integer ipv that
follows a colon on the command line. Alternatively, ipv can be provided
through command >>>> INDEX. The initial guess for the parameter is taken from
TOUGH2 block PARAM.4, or should be provided by using commands >> GUESS or
>>>> GUESS or >>>> PRIOR.
Example:
> PARAMETER
>> INITIAL PRESSURE
>>> MATERIAL: BOUND
>>>> ANNOTATION: Boundary Pressure
>>>> scale pressures on boundary by constant FACTOR
<<<<
<<<
>> INITIAL condition for primary variable No.: 2
>>> MATERIAL : SAND1 DEFAU
>>>> ANNOTATION : Initial Saturation
>>>> VALUE
>>>> initial GUESS : 10.4
>>>> admissible RANGE: 10.01 10.99
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> INPUT
Parent Command:
>> CONVERGE
Subcommand:

Description:
This command makes iTOUGH2 stop immediately after the TOUGH2 and iTOUGH2 input
files have been read and checked for consistency. This is useful to check
input before time-consuming simulations are invoked, or in combination with
LIST, HELP, and >>> INDEX.
Example:
> COMPUTATION
>> print LIST of available commands on this command level
>> CONVERGE (print HELP message to iTOUGH2 output file and
>>> stop immediately after INPUT is read)
<<<
<<
See Also:
>>> INDEX, HELP, LIST
@@@
Syntax: >>> INTERFACE: elem_1 elem_2 (elem_i elem_j ...) (+ iplus)
Parent Command:
>> FLOW
Subcommand:
all fourthlevel commands in block > OBSERVATION
Description:
(synonym for command >>> CONNECTION)
Example:
(see command >>> CONNECTION)
See Also:
>>> CONNECTION
@@@
Syntax: >>> ITERATION: max_iter
Parent Command:
>> CONVERGE
Subcommand:

Description:
This command sets the maximum number of iTOUGH2 iterations to max_iter.
If using the LevenbergMarquardt minimization algorithm, an iTOUGH2 iteration
consists of a number of TOUGH2 simulations and includes the following steps:
(1) solution of the forward problem;
(2) evaluation of the Jacobian matrix
(requiring n or 2n TOUGH2 simulations depending on whether a forward or
centered finite difference quotient is requested, see command >> JACOBIAN);
(3) updating of the parameter vector (see also command >>> STEP);
(4) check run(s) to see whether the new parameter set leads to a reduction of
the objective function; if not, go back to step 3.
If the objective function is successfully reduced, the iteration is completed,
and the last check run is used as the solution of the forward problem (step (1)
above) for the next iteration. By default, new iterations are performed until
one of the following convergence criteria is met (note that different
convergence criteria apply if options other than LevenbergMarquardt
optimization are used):
(1) the maximum number of TOUGH2 simulations is reached
(see command >>> SIMULATION);
(2) the maximum number of incomplete TOUGH2 simulations is reached
(see command >>> INCOMPLETE);
(3) the scaled step size is smaller than the minimum relative step size 1E-9;
(4) all parameters are at their userspecified bounds;
(5) the objective function is smaller than the relative function tolerance;
(6) the maximum number of unsuccessful uphill steps is exceeded
(see command >>> UPHILL);
(7) the Levenberg parameter exceeds 1E12;
(8) the norm of the gradient vector is smaller than 1E-5 (optimality criterion).
In most cases, however, it is sufficient to stop the inversion after a few
iterations because no significant fit improvement is obtained after about 5 to
15 iterations. Generally more iterations are required with increasing number
of parameters and stronger nonlinearities of the flow problem.
The progress of the objective function reduction can be observed by typing the
command prista (see unix script file prista), and the inversion can be
terminated using the kit command (see unix script file kit).
It is suggested to perform a single iTOUGH2 iteration or to use option
>> SENSITIVITY ANALYSIS prior to running a full inversion in order to check
the relative importance and sensitivity of each parameter, the parameter step
size, the initial value of the Levenberg parameter, etc.
Example:
> COMPUTATION
>> STOP after
>>> : 6 ITERATIONS
<<<
<<
See Also:
>>> SENSITIVITY, >>> CENTERED, >>> FORWARD, >>> INCOMPLETE, >>> SIMULATION,
>>> UPHILL
@@@
Syntax: >>>> ITERATION (There are two fourthlevel commands >>>> ITERATION. Check parent command.)
Parent Command 1:
>>> ANNEAL
Syntax:
>>>> ITERATION: max_iter
Subcommand:

Description:
This command limits the maximum number of iterations performed by the Simulated
Annealing minimization algorithm. An iteration is completed if:
(1) the maximum number of steps mstep on a temperature level is reached
(see command >>>> STEP (a)), or
(2) the objective function has been reduced 0.2*mstep times.
Each iteration is followed by a reduction of the control parameter tau
(temperature) according to the annealing schedule (see command >>>> SCHEDULE).
Example:
> COMPUTATION
>> OPTION
>>> Simulated ANNEALing
>>>> maximum number of ITERATIONS: 100
>>>> maximum number of STEPS per ITERATION: 50
>>>> annealing SCHEDULE: 0.95
>>>> initial TEMPERATURE: 0.02
<<<<
<<<
<<
See Also:
>>>> STEP (a), >>>> SCHEDULE
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
>>> SELECT
Syntax:
>>>> ITERATION: niter
Subcommand:

Description:
This command defines the number of iterations after which the criteria for
automatic parameter selection are reevaluated (see commands >>>> CORRELATION
and >>>> SENSITIVITY). A full Jacobian matrix is evaluated every multiple of
niter iterations. In intermediate iterations, only the columns of the Jacobian
corresponding to the selected parameters are updated.
Example:
> COMPUTATION
>> OPTION
>>> SELECT parameter automatically every
>>>> : 3 ITERATIONS
>>>> based on the SENSITIVITY criterion with rsens : 0.10
<<<<
<<<
<<
See Also:
>>>> CORRELATION, >>>> SENSITIVITY
@@@
Syntax: >> JACOBIAN
Parent Command:
> COMPUTATION
Subcommand:
>>> CENTERED
>>> FORWARD
>>> HESSIAN
>>> LIST
>>> PERTURB
Description:
This is the parent command of a number of subcommands that deal with the
calculation of the Jacobian matrix J. The elements of the Jacobian matrix,
calculated at the calibration points, are the partial derivatives of the system
response with respect to the parameters to be estimated:
J_ij = dr_i/dp_j = d(z*_i - z_i)/dp_j = -dz_i/dp_j
The Jacobian matrix discussed here must be distinguished from the one
calculated in the simulation program TOUGH2. The latter is used to solve the
set of nonlinear algebraic equations arising at each time step; its elements
are the partial derivatives of the mass residuals with respect to the primary
variables, and its numerical computation is controlled by the TOUGH2 variable
DFAC.
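The forward finite-difference approximation of the elements J_ij = -dz_i/dp_j can be sketched as follows. The model function is hypothetical; iTOUGH2 evaluates these derivatives internally by perturbing each parameter and rerunning TOUGH2:

```python
import numpy as np

def jacobian(model, p, dp_frac=0.01):
    """Forward finite-difference Jacobian J_ij = -dz_i/dp_j, where
    z = model(p) is the simulated system response at the calibration
    points. dp_frac is the relative perturbation (see >>> PERTURB)."""
    z0 = model(p)
    J = np.zeros((z0.size, p.size))
    for j in range(p.size):
        dp = dp_frac * p[j] if p[j] != 0.0 else dp_frac
        pj = p.copy()
        pj[j] += dp
        # residual r = z* - z, hence dr/dp = -dz/dp
        J[:, j] = -(model(pj) - z0) / dp
    return J

# hypothetical two-parameter model for illustration only
model = lambda p: np.array([p[0] + p[1], p[0] * p[1]])
J = jacobian(model, np.array([2.0, 3.0]))
```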
Example:
> COMPUTATION...
>> ...of the JACOBIAN matrix is performed...
>>> ...using FORWARD finite difference quotients for : 3 iterations
before switching to centered finite difference quotients.
>>> Parameter PERTURBation factor is: 1 % (default)
<<<
<<
See Also

@@@
Syntax: >>> JACOBIAN
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command prints the Jacobian matrix after each iteration. By default, the
(scaled) Jacobian matrix is printed only once at the end of the optimization.
Example:
> COMPUTATION
>> OUTPUT
>>> print JACOBIAN after each iteration.
<<<
<<
See Also:
>>> RESIDUAL
@@@
Syntax: >> KLINKENBERG
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter the Klinkenberg slip factor (TOUGH2
variable GK(NMAT)).
Example:
> PARAMETER
>> KLINKENBERG slip factor
>>> ROCK type : GRANI
>>>> LOGARITHM
>>>> RANGE : 1.0 10.0
>>>> VARIATION : 1.0
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> L1ESTIMATOR
Parent Command:
>> OPTION
Subcommand:

Description:
This command selects the L1-estimator, i.e., the objective function to be
minimized is the sum of the weighted absolute residuals:
S = Sum |r_i/sigma_i|, i = 1,...,m
Minimizing the mean absolute deviation leads to a maximum-likelihood estimate
if the errors follow a double exponential distribution.
phi(r_i) = 1/(2*sigma_i) * exp(-|r_i|/sigma_i)
The L1estimator should be used, for example, to minimize a cost function for
the optimization of a cleanup operation. Furthermore, it can be used whenever
the objective function of interest is a linear function of the model output
(e.g., in a sensitivity analysis using command >>> OBJECTIVE in block >> OPTION).
Note that this objective function is usually minimized using the
Levenberg-Marquardt algorithm, which is designed for a quadratic objective
function. Minimization is therefore rather inefficient, requiring more
iterations and a high initial Levenberg parameter. The downhill simplex
algorithm (see command >>> SIMPLEX) can be used as an alternative.
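The robustness of the L1-estimator against outliers can be illustrated with a minimal sketch, assuming hypothetical residuals and an uncorrelated (diagonal) weighting:

```python
import numpy as np

def l1_objective(r, sigma):
    """L1-estimator: sum of weighted absolute residuals."""
    return float(np.sum(np.abs(r / sigma)))

def ls_objective(r, sigma):
    """Least-squares objective for comparison (uncorrelated errors)."""
    return float(np.sum((r / sigma) ** 2))

r = np.array([0.1, -0.2, 3.0])   # the last residual is an outlier
sigma = np.ones(3)
# the outlier contributes 3.0 to the L1 objective but 9.0 to the
# least-squares objective, i.e., it dominates the latter far more strongly
S_l1, S_ls = l1_objective(r, sigma), ls_objective(r, sigma)
```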
Example:
> COMPUTATION
>> OPTION
>>> use L1ESTIMATOR, then draw contours of the
>>> OBJECTIVE function based on : 10 points in the parameter space
<<<
<<
See Also:
>>> ANDREW, >>> CAUCHY, >>> LEASTSQUARES, >>> QUADRATICLINEAR
@@@
Syntax: >> LAG
Parent Command:
> PARAMETER
Subcommand:
>>> NONE
>>> SET
Description:
This command selects as a parameter a constant lag, shifting data in time.
The time lag is applied to the output that refers to a specific data set:
z(t) = z_TOUGH2(t + lag)
Here, t is time, lag is the estimated time lag in seconds, and z_TOUGH2 is
the TOUGH2 output. The result z is compared to the measurement z* of the
corresponding data set.
A single data set is identified by number using command >>> SET (p);
multiple data sets are specified with command >>>> INDEX (p).
If the lag is known, i.e., does not need to be estimated, use command
>>>> SHIFT TIME, a subcommand of > OBSERVATION.
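The shift z(t) = z_TOUGH2(t + lag) can be mimicked by interpolating the simulated curve at shifted observation times. This is a sketch with hypothetical data; iTOUGH2 performs the shifting internally:

```python
import numpy as np

def lagged_output(t_obs, t_sim, z_sim, lag):
    """Evaluate z(t) = z_TOUGH2(t + lag) by linear interpolation of the
    simulated curve (t_sim, z_sim); lag is in seconds."""
    return np.interp(t_obs + lag, t_sim, z_sim)

# hypothetical simulation output rising linearly with time
t_sim = np.linspace(0.0, 1000.0, 101)
z_sim = 2.0 * t_sim
# with a 300 s lag, the measurement at t = 100 s is compared to the
# simulated value at t = 400 s
z_at_100 = lagged_output(np.array([100.0]), t_sim, z_sim, lag=300.0)
```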
Example:
> PARAMETER
>> LAG
>>> NONE
>>>> ANNOTATION: time lag
>>>> INDEX: 1 2 3
>>>> GUESS: 300 sec.
<<<<
<<<
<<
See Also:
>> DRIFT, >> FACTOR, >> SHIFT, >>> SET (p), >>>> INDEX (p), >>>> SHIFT
@@@
Syntax: >>> LEASTSQUARES
Parent Command:
>> OPTION
Subcommand:

Description:
This command selects least-squares optimization, i.e., the objective function
to be minimized is the sum of the squared weighted residuals:
S = r^T C_zz^-1 r
Minimizing the squared weighted residuals leads to a maximum-likelihood
estimate if the errors are normally distributed with zero mean and covariance
matrix C_zz:
phi(r) = (2*Pi)^(-m/2) * |C_zz|^(-1/2) * exp(-0.5 * r^T C_zz^-1 r)
Least-squares estimation is the default. If outliers are more prominent than
described by the tail of the normal distribution, one may want to use one of
the robust estimators to reduce the weight of outliers or even eliminate them
(see commands >>> ANDREW, >>> CAUCHY, or >>> QUADRATICLINEAR).
The least-squares objective function is a quadratic function if the residuals
depend linearly on the parameters; it is nearly quadratic for a nonlinear
model. The Levenberg-Marquardt algorithm is best suited to minimizing the
nonlinear least-squares objective function.
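The objective function S = r^T C_zz^-1 r can be evaluated with a minimal sketch, assuming hypothetical residuals and a diagonal covariance matrix:

```python
import numpy as np

def least_squares_objective(r, C_zz):
    """S = r^T C_zz^-1 r, the sum of squared weighted residuals."""
    return float(r @ np.linalg.solve(C_zz, r))

# hypothetical residuals with standard deviations 0.5 and 1.0
r = np.array([0.5, -1.0])
C_zz = np.diag([0.25, 1.0])
S = least_squares_objective(r, C_zz)   # (0.5/0.5)^2 + (1.0/1.0)^2
```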
Example:
> COMPUTATION
>> OPTION
>>> use LEASTSQUARES
<<<
<<
See Also:
>>> ANDREW, >>> CAUCHY, >>> L1ESTIMATOR, >>> QUADRATICLINEAR
@@@
Syntax: >>> LEVENBERG: lambda
Parent Command:
>> CONVERGE
Subcommand:

Description:
This command sets the initial value of the Levenberg parameter lambda
(default: 0.001). During the optimization process (see command
>>> LEVENBERGMARQUARDT), the Levenberg parameter is divided by the Marquardt
parameter nue (see command >>> MARQUARDT) after each successful iteration,
and is multiplied by nue if the new parameter set leads to an increased value
of the objective function, i.e., if an unsuccessful step was proposed.
A large value of lambda means that a small step along the steepest descent
direction is performed. A lambda value of zero is equivalent to a Gauss-Newton
step. The former is robust but inefficient; the latter has a quadratic
convergence rate but may lead to unsuccessful steps.
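The update rule described above amounts to the following sketch, where nue is the Marquardt parameter (see command >>> MARQUARDT); the function name is illustrative:

```python
def update_levenberg(lam, step_successful, nue=10.0):
    """Divide lambda by nue after a successful iteration; multiply it
    by nue if the proposed step increased the objective function."""
    return lam / nue if step_successful else lam * nue

lam = 0.001                          # default initial Levenberg parameter
lam = update_levenberg(lam, False)   # unsuccessful step: lambda grows
lam = update_levenberg(lam, True)    # successful step: lambda shrinks again
```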
Example:
> COMPUTATION
>> CONVERGE
>>> maximum number of ITERATIONS: 10
>>> set initial LEVENBERG parameter to: 0.1 to make a
safe first step
<<<
<<
See Also:
>>> MARQUARDT
@@@
Syntax: >>> LEVENBERGMARQUARDT
Parent Command:
>> OPTION
Subcommand:

Description:
This command selects the Levenberg-Marquardt algorithm to minimize the
objective function. This is the default minimization algorithm.
The Levenberg-Marquardt algorithm combines the robustness of a steepest
descent method with the efficiency of a Gauss-Newton step (see command
>>> GAUSSNEWTON):
dp = (J^T C_zz^-1 J + lambda*D)^-1 J^T C_zz^-1 r
where D is a diagonal matrix with elements D_ii = (J^T C_zz^-1 J)_ii.
The Levenberg-Marquardt method switches continuously from a gradient method
(large lambda, see command >>> LEVENBERG) far from the minimum to a
Gauss-Newton step as the minimum is approached and lambda is reduced.
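The update formula can be sketched directly; this is a minimal illustration with a hypothetical Jacobian, not iTOUGH2's implementation:

```python
import numpy as np

def lm_step(J, C_zz, r, lam):
    """One Levenberg-Marquardt update:
    dp = (J^T C_zz^-1 J + lambda*D)^-1 J^T C_zz^-1 r,
    with D_ii = (J^T C_zz^-1 J)_ii."""
    Ci = np.linalg.inv(C_zz)
    A = J.T @ Ci @ J
    D = np.diag(np.diag(A))
    return np.linalg.solve(A + lam * D, J.T @ Ci @ r)

# hypothetical 2x2 example: lam = 0 gives the pure Gauss-Newton step,
# while a large lam shortens the step (steepest-descent behavior)
J = -np.eye(2)
r = np.array([1.0, 2.0])
dp_gn = lm_step(J, np.eye(2), r, 0.0)
dp_sd = lm_step(J, np.eye(2), r, 100.0)
```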
Example:
> COMPUTATION
>> OPTION
>>> use LEVENBERGMARQUARDT minimization algorithm (default)
<<<
>> CONVERGE
>>> initial LEVENBERG parameter lambda is : 0.001 (default)
>>> MARQUARDT parameter nue is : 10.000 (default)
<<<
<<
See Also:
>>> ANNEAL, >>> GAUSSNEWTON, >>> GRID SEARCH, >>> LEVENBERG, >>> MARQUARDT
@@@
Syntax: >>> LINEARITY (: alpha (%))
Parent Command:
>> ERROR
Subcommand:

Description:
The covariance matrix of the estimated parameters, C_pp, is calculated using
linear error analysis:
C_pp = s_0^2 * (J^T C_zz^-1 J)^-1
The confidence region around the estimated parameter set for the linearized
case consists of those values p for which
(p - p_hat)^T C_pp^-1 (p - p_hat) < n * F(n, m-n, 1-alpha)
where p_hat is the parameter vector at the optimum. The covariance matrix
C_pp approximates the actual surface of the objective function at its minimum
by a tangent hyperellipsoid under the assumption of normality and linearity.
If the model is nonlinear, the coverage of the confidence region by the linear
approximation may be very poor with respect to both its size and its shape.
When using this command, it is assumed that the shape of the confidence region
is close to ellipsoidal, and that the orientation of the hyperellipsoid in the
n-dimensional parameter space is accurately obtained from the linear error
analysis. Then, by only adjusting the size of the hyperellipsoid, we can
better approximate the confidence region without losing the advantage of
producing easily understandable results which are also simple to report.
The procedure adopted here is based on a comparison of the actual objective
function with the results from the linear approximation at discrete points in
the parameter space. These test points p' are preferably located along the
main axes of the hyperellipsoid, i.e.:
p'_(+-i) = p_hat +- (n * F(n, m-n, 1-alpha))^(1/2) * a_i * u_i, i = 1,...,n
Here, p'_(+-i) are two test parameter sets on the i-th axis, the direction of
which is given by the eigenvector u_i of the covariance matrix C_pp.
Note that the distance from the optimal parameter set p_hat is selected as a
multiple of the corresponding eigenvalue a_i and the quantile of the
F-distribution. This means that the correction is tailored to approximate the
confidence region on a certain confidence level (1-alpha). The eigenvalues
a'_i, which represent the lengths of the semiaxes, are now corrected as
follows:
a'_i^2 = a_i^2 * s_0^2 * (A_+ + A_-)_i / 2
with
A_(+-i) = n * F(n, m-n, 1-alpha) / (S(p'_(+-i)) - S(p_hat))
Finally, the new covariance matrix is back-calculated from the eigenvectors
u_i and the updated eigenvalues a'_i.
This correction procedure requires 2*n additional solutions of the direct
problem and is thus relatively inexpensive. While the resulting confidence
region is ellipsoidal by definition, the differences between S(p'_+i) and
S(p'_-i) provide, as a byproduct of the correction procedure, some insight
into the asymmetry of the true confidence region. The user may specify alpha
or (1-alpha) in % or the quantile n * F(n, m-n, 1-alpha) directly (by
omitting % on the command line). Note that the correction procedure fails if
the minimum is not accurately identified. In this case iTOUGH2 automatically
proceeds with the covariance matrix from the linear error analysis.
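The back-calculation step can be sketched as follows, assuming the eigenvalues of C_pp are the squared semiaxis lengths a_i^2; the function and variable names are illustrative only:

```python
import numpy as np

def corrected_covariance(C_pp, S_plus, S_minus, S_opt, s0_sq, nF):
    """Rescale the semiaxes of the linear confidence ellipsoid using the
    objective function values S_plus[i], S_minus[i] at the two test
    points on axis i; nF = n * F(n, m-n, 1-alpha)."""
    a_sq, U = np.linalg.eigh(C_pp)      # eigenvalues a_i^2, vectors u_i
    A_plus = nF / (S_plus - S_opt)
    A_minus = nF / (S_minus - S_opt)
    a_sq_new = a_sq * s0_sq * (A_plus + A_minus) / 2.0
    return U @ np.diag(a_sq_new) @ U.T  # back-calculated covariance

# consistency check: for an exactly linear model, S rises by nF * s0^2
# at the test points, and the corrected matrix equals the original one
C_pp = np.diag([1.0, 4.0])
S_opt, s0_sq, nF = 3.0, 2.0, 5.0
S_test = np.full(2, S_opt + nF * s0_sq)
C_new = corrected_covariance(C_pp, S_test, S_test, S_opt, s0_sq, nF)
```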
Example:
> COMPUTATION
>> ERROR analysis
>>> calculate finite difference HESSIAN and then
>>> check LINEARITY assumption on the : 95 % confidence level
<<<
<<
See Also:
>>> HESSIAN
@@@
Syntax: LIST (available on all command levels)
Parent Command:

Subcommand:

Description:
This command provides a list of all available commands on the corresponding
command level. The list contains the actual upper case spelling of the command
as interpreted by the program. Potential keywords are not listed.
A complete list of all commands can be obtained with command >>> INDEX.
Example:
> LIST
> COMPUTATION
>> STOP after
>>> INPUT
<<<
<<
<
The following list is printed to the iTOUGH2 output file showing all
first-level commands:
***** LIST OF COMMANDS ***** on level 1:
* <
* > OPTION
* > COMPUTATION
* > PARAMETER
* > OBSERVATION
* > MEASUREMENT
*
***** LIST OF COMMANDS *****
See Also:
>>> INDEX, >>> INPUT
@@@
Syntax: >>>> LOGARITHM (There are two fourth-level commands >>>> LOGARITHM. Check parent command.)
Parent Command 1:
all third-level commands in block > PARAMETER
Syntax:
>>>> LOGARITHM
Subcommand:

Description:
The parameter p to be estimated is the logarithm of the TOUGH2 parameter X:
p=log10(X) <=> X=10**p
Estimation of logarithms is recommended if the parameter is expected to vary
over a large range of values, suggesting a lognormal distribution of its
estimate. Note that all quantities referring to this parameter are also in
log-space (e.g., range, standard deviation, step length, etc.). Estimating
the logarithm is an alternative to estimating the parameter value directly
(command >>>> VALUE), estimating a multiplication factor (command
>>>> FACTOR (p)), or estimating a lognormally distributed multiplication
factor (command >>>> LOG(F)).
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> MATERIAL: SAND1
>>>> estimate LOGARITHM (this is the default for this parameter)
>>>> initial GUESS : -12.0 = 1 darcy
>>>> admissible RANGE : -15.0 -9.0
>>>> standard DEVIATION : 1.0 order of magnitude
>>>> maximum STEP size : 0.5 logcycles per iteration
<<<<
<<<
<<
See Also:
>>>> FACTOR (p), >>>> VALUE
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
all third-level commands in block > OBSERVATION
Syntax:
>>>> LOGARITHM
Subcommand:

Description:
This command takes the logarithm (base 10) of an observable variable. The
corresponding measurement error is assumed to be lognormally distributed.
Taking the logarithm is suggested when the observation assumes values over
many orders of magnitude (e.g., concentration, water potential). Note that
this option emphasizes the importance of smaller values of the variable, and
is not applicable to data sets that contain zero measurements.
Example:
> OBSERVATION
>> CAPILLARY PRESSURE
>>> ELEMENT: TP__1
>>>> first take the ABSOLUTE value and
>>>> then the LOGARITHM
>>>> standard DEVIATION of log: 1.0
>>>> DATA on FILE: wp.dat
<<<<
<<<
<<
See Also:

@@@
Syntax: >>>> LOG(F)
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
The parameter to be estimated is a lognormally distributed factor with which
the initial TOUGH2 parameter value is multiplied:
p = log10(X/X0) <=> X = X0 * 10**p
Here, p is the estimated parameter, X is the TOUGH2 parameter, and X0 is the
initial value of the TOUGH2 parameter. This option is useful to determine the
mean of a lognormally distributed quantity while maintaining ratios (e.g., if
estimating a common factor applied to all three permeability values in a
model domain, the anisotropy ratio remains constant). Estimating a
lognormally distributed factor is an alternative to estimating the parameter
value directly (command >>>> VALUE) or its logarithm (command
>>>> LOGARITHM (p)).
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> MATERIAL: SAND1 SAND2 SAND3 SAND4 SAND5
>>>> estimate LOG(F)
>>>> INDEX : 1 2 3
>>>> initial GUESS: 0.0 (default)
>>>> RANGE : -3.0 3.0
<<<<
<<<
<<
See Also:
>>>> LOGARITHM (p), >>>> FACTOR, >>>> VALUE
@@@
Syntax: >>> MARQUARDT: nue
Parent Command:
>> CONVERGE
Subcommand:

Description:
This command sets the Marquardt parameter nue (default: 10.0). During the
optimization process, the Levenberg parameter lambda (see command
>>> LEVENBERG) will be divided by the Marquardt parameter nue after each
successful iteration, and it will be multiplied by nue if the new parameter
set leads to an increased value of the objective function, i.e., if an
unsuccessful step was proposed.
A large value of lambda means that a small step along the gradient direction
is performed. A lambda value of zero is equivalent to a Gauss-Newton step.
The former is robust but inefficient; the latter has a quadratic convergence
rate but may lead to unsuccessful steps.
The Marquardt parameter therefore determines how fast the step size and step
direction change from steepest descent to Gauss-Newton and vice versa.
Example:
> COMPUTATION
>> CONVERGE
>>> maximum number of ITERATIONS: 10
>>> set initial LEVENBERG parameter to: 0.1 to make a
safe first step
>>> MARQUARDT parameter : 2.0 (slow change to Gauss-Newton steps)
<<<
<<
See Also:
>>> LEVENBERG
@@@
Syntax: >> MASS FRACTION (comp_name/COMPONENT: icomp) (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects as an observation type the mass fraction of component icomp in phase
iphase. This observation type refers to one or more elements.
Component number icomp or component name comp_name, and phase number iphase or
phase name phase_name depend on the EOS module being used. They are listed in the
iTOUGH2 header, and can be specified either on the command line or using the two
subcommands >>>> COMPONENT and >>>> PHASE, respectively.
Example:
> OBSERVATION
>> MASS FRACTION of BRINE in LIQUID
>>> ELEMENT: A1__1
or
> OBSERVATION
>> MASS FRACTION of COMPONENT No.: 2 in PHASE No.: 2
>>> ELEMENT: A1__1
or
> OBSERVATION
>> MASS FRACTION
>>> ELEMENT: A1__1
>>>> COMPONENT: 2
>>>> PHASE : 2
See Also:
>> CONCENTRATION
@@@
Syntax: >>> MATERIAL: mat_name (mat_name_i ...) (+ iplus)
or
>>> ROCKS : mat_name (mat_name_i ...) (+ iplus)
Parent Command:
all second-level commands in block > PARAMETER referring to a material name
Subcommand:
all fourth-level commands in block > PARAMETER
Description:
This command identifies material names (TOUGH2 variable MAT). Most parameters refer to a
particular material, i.e., they are specified in TOUGH2 in block ROCKS. Rock types are
designated by a fivecharacter code name. Blanks in the material name must be replaced by
underscores (e.g., if MAT in TOUGH2 reads 'CLAY ', mat_name must read 'CLAY_').
If multiple material names are provided, the estimate of the corresponding parameter will be
jointly assigned to all listed materials. If distinct parameters are sought for each rock type,
separate >>> MATERIAL blocks must be defined. Default properties (i.e., for parameters
referring to TOUGH2 blocks PARAM.4 and RPCAP) can be selected either by >>> DEFAULT or >>> MATERIAL: DEFAU.
If the last two characters of the last mat_name hold an integer, a sequence
of iplus additional material names can be generated. The following two
command lines are thus identical:
>>> MATERIAL: BOREH ROC10 ROC_1 +4
>>> MATERIAL: BOREH ROC_1 ROC_2 ROC_3 ROC_4 ROC_5 ROC10
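The sequence generation can be sketched as follows. This is a hypothetical helper mirroring the expansion iTOUGH2 performs internally for 5-character material names:

```python
def expand_materials(names, iplus):
    """Append iplus material names generated by incrementing the integer
    held in the last two characters of the last 5-character name.
    Numbers below 10 keep a leading underscore (e.g., ROC_2)."""
    base, num = names[-1][:3], int(names[-1][3:].replace("_", ""))
    extra = [f"{base}{n:2d}".replace(" ", "_")
             for n in range(num + 1, num + 1 + iplus)]
    return names + extra

mats = expand_materials(["BOREH", "ROC10", "ROC_1"], 4)
# → ['BOREH', 'ROC10', 'ROC_1', 'ROC_2', 'ROC_3', 'ROC_4', 'ROC_5']
```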
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> MATERIAL: BOUND BOREH SAN_1 +2
>>>> LOGARITHM
<<<<
<<<
>> INITIAL PRESSURE
>>> MATERIAL: BOREH
>>>> ANNOTATION: Pi borehole
<<<<
>>> MATERIAL: BOUND DEFAU
>>>> ANNOTATION: Pi elsewhere
<<<<
<<<
See Also:
>>> DEFAULT, >>> MODEL
@@@
Syntax: >>>> MEAN (VOLUME)
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
(synonym for command >>>> AVERAGE)
Example:
(see command >>>> AVERAGE)
See Also:
>>>> AVERAGE
@@@
Syntax: >> MINC
Parent Command:
> PARAMETER
Subcommand:
>>> MODEL
Description:
This command selects as a parameter the fracture spacing which is a parameter of the MINC
preprocessor (TOUGH2 variable PARMINC(I)). The fracture set is identified through
command >>>> INDEX. This parameter refers to the entire model.
Estimating fracture spacing is only possible if the mesh can be generated internally without
further editing by the user. Single elements and connections can be modified, however, by
means of TOUGH2 commands ELEM2 and CONN2, respectively. For example, if a mesh
is generated using MESHMAKER and MINC, the volumes of individual boundary elements
can be adjusted in the ELEM2 block, thus avoiding the need for external editing.
Example:
> PARAMETER
>> MINC parameter
>>> MODEL
>>>> ANNOTATION: Fracture spacing
>>>> INDEX : 1 2 3 (one value for all three fracture sets)
>>>> GUESS : 50.0 [m]
>>>> DEVIATION : 10.0 [m]
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> MODEL
or
>>> NONE
Parent Command:
all secondlevel commands in block > PARAMETER or > OBSERVATION addressing
parameters or observations that refer to the entire model domain or a nonspecific item.
Subcommand:
all fourthlevel commands in block > PARAMETER or > OBSERVATION, respectively.
Description:
Most parameters refer to a rock type or code name of a sink or source. Some parameters,
however, are general or associated with the entire model domain. Similarly, while most
observations refer to an element, connection, or sink/source, there are observation types that
are not associated with a well-defined point in space. The commands >>> MODEL
or >>> NONE are dummy commands, i.e., placeholders on the third command level
for parameter and observation types not further specified by one of the other
third-level commands.
Example:
> PARAMETER
>> SELEC parameter
>>> NONE
>>>> ANNOTATION: Gel Density (EOS11)
>>>> INDEX : 2
>>>> VALUE
<<<<
<<<
<<
> OBSERVATION
>> TOTAL MASS of PHASE: 2
>>> refers to entire MODEL
>>>> ANNOTATION : Mass of liquid
>>>> DATA on FILE : liquid.dat
>>>> DEVIATION : 0.01 [kg]
<<<
<<
See Also:
>>> ELEMENT, >>> CONNECTION, >>> DEFAULT, >>> MATERIAL, >>> SET (p), >>> SOURCE
@@@
Syntax: >> MOMENT (FIRST/SECOND) (X/Y/Z) (comp_name/COMPONENT: icomp)
(phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> MODEL
Description:
This command selects as an observation type the first or second spatial moment
in X, Y, or Z direction of component icomp or phase iphase.
This observation type refers to all elements with 10,000 > SPHT > 0.
The X, Y, and Z coordinates of the grid blocks must be given in TOUGH2 block
ELEME, columns 51-60, 61-70, and 71-80, respectively.
Component number icomp or component name comp_name, and phase number iphase
or phase name phase_name depend on the EOS module being used.
They are listed in the iTOUGH2 header, and can be specified either on the
command line or using the two subcommands >>>> COMPONENT and >>>> PHASE,
respectively. If only a phase but no component is specified, the spatial
moment of the indicated phase including all components is calculated.
If only a component but no phase is given, the spatial moment of that
component in all phases is calculated. If both a component and a phase are
given, the spatial moment of the component in the specified phase is calculated.
The spatial moments are calculated as follows:
M_ijk = Sum_(beta) Sum_(kappa) Sum_(n=1..nel)
        (X^i * Y^j * Z^k * V * phi * S_beta * rho_beta * X^kappa)_n
where the first sum is taken over the selected phase(s) beta, the second sum is
taken over the selected component(s) kappa, and the third summation accumulates
the masses of component kappa in phase beta from all elements that are included
in the global mass balance calculation (i.e., elements with a rock grain
specific heat lower than 10^4 J/kg C). The mass in element n is the product of
the element volume V, the porosity phi, the saturation S, the phase density rho,
and the mass fraction X.
The first moment represents the center-of-mass coordinates, which are given by:
<X> = M_100/M_000
<Y> = M_010/M_000
<Z> = M_001/M_000
The second moment represents the variances in the three directions, given by:
sigma_X^2 = M_200/M_000 - <X>^2
sigma_Y^2 = M_020/M_000 - <Y>^2
sigma_Z^2 = M_002/M_000 - <Z>^2
This option may be especially useful for characterizing the location and
spreading of a contaminant or saturation plume. Note that boundary effects
may have a strong impact on spatial moment calculation.
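The moment definitions above reduce to the following sketch once the element masses m_n = (V*phi*S_beta*rho_beta*X^kappa)_n are known; the 1D plume is hypothetical:

```python
import numpy as np

def spatial_moment(i, j, k, x, y, z, m):
    """M_ijk = Sum_n x_n^i * y_n^j * z_n^k * m_n, summed over all
    elements n included in the mass balance; m_n is the mass of the
    selected component/phase in element n."""
    return float(np.sum(x**i * y**j * z**k * m))

# hypothetical plume: equal mass in two elements at x = 1 m and x = 3 m
x = np.array([1.0, 3.0]); y = np.zeros(2); z = np.zeros(2)
m = np.array([0.5, 0.5])
M000 = spatial_moment(0, 0, 0, x, y, z, m)          # total mass
x_c = spatial_moment(1, 0, 0, x, y, z, m) / M000    # center of mass
var_x = spatial_moment(2, 0, 0, x, y, z, m) / M000 - x_c**2
```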
Example:
> OBSERVATION
>> FIRST MOMENT (Z-coordinate) of TRACER COMPONENT in LIQUID PHASE
>>> entire MODEL
>>>> ANNOTATION: plume Z coord.
>>>> NO DATA
<<<<
<<<
>> SECOND MOMENT (X-coordinate) of TRACER COMPONENT in LIQUID PHASE
>>> entire MODEL (except boundary elements with SPHT .gt. 10000)
>>>> ANNOTATION: transversal spreading
>>>> NO DATA
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> MONTE CARLO (SEED: iseed) (GENERATE) (CLASS: nclass)
Parent Command:
>> ERROR
Subcommand:

Description:
This command performs Monte Carlo simulations to determine the uncertainty of model
predictions as a result of parameter uncertainty. The general procedure is as follows:
(1) a probability distribution is defined for each uncertain parameter;
(2) parameter values are randomly sampled from their respective distributions;
(3) sampled parameter values are randomly combined to obtain a parameter vector;
(4) a TOUGH2 simulation is performed and the results at the specified observation points are stored;
(5) steps (3) through (4) are repeated;
(6) the ensemble of the modeling results can be statistically analyzed.
The type of distribution can be specified for each parameter in the respective parameter
definition block (see command >>>> UNIFORM or >>>> NORMAL). The mean is given
by the initial guess (see command >>>> GUESS), and the standard deviation by command
>>>> DEVIATION (p). The generated parameter value is rejected if outside the specified
range (see command >>>> RANGE).
The number of Monte Carlo simulations, nMC, must be provided in block >> CONVERGE
using command >>> SIMULATION, i.e., by specifying the maximum number of TOUGH2 simulations.
The number of Monte Carlo simulations nMC can be considered sufficient if:
(1) the selected probability density function of the input parameters is reasonably well
approximated by the histogram of the randomly generated parameter values.
Keyword GENERATE can be used in combination with various seed numbers (keyword SEED)
and values for nMC (command >>> SIMULATION) to generate histograms of the
input parameters without actually performing Monte Carlo simulations. Once a
satisfactory distribution of the input parameters is achieved, the Monte Carlo
simulations can be invoked by simply deleting keyword GENERATE.
(2) the histogram of the model predictions allows for a statistical analysis. That means that
a sufficient number of realizations (= simulation results) should fall within each interval
used to calculate probabilities. For example: The probability that TCE concentrations cTCE
fall within the interval [a,b] is approximated by:
P(a < cTCE < b) = (number of realizations in interval [a,b]) /
                  (total number of Monte Carlo simulations) = n[a,b] / nMC
Therefore, the minimum number of Monte Carlo simulations, nMC(min), should be large
enough so that P remains constant, i.e., independent of nMC. This condition is
fulfilled for relatively small values of nMC in the case of intervals around the mean,
where n[a,b] is usually large due to the high probability density. However, if we are
interested in the tail of the distribution, for example to calculate the (low) risk that TCE
concentrations exceed a certain standard, then the number of Monte Carlo simulations
required is much higher.
(3) The minimum number of Monte Carlo simulations must be increased if the number of
uncertain parameters increases because more parameter combinations are possible.
(4) From experience, the number of Monte Carlo simulations can be as low as 50 and as high as 2000 or greater.
Histograms of the input parameters and output variables are printed to the
iTOUGH2 output file. By default, the interval between the smallest and largest
value is subdivided into sqrt(nMC) classes. The number of classes can be
changed using keyword CLASS.
The standard plot file contains all output sets from nMC simulations, where each set contains
the model result as a function of time. A second plot file is generated with
"_mc" in the file name. In this second plot file, each curve represents the
ensemble of the model output at one point in time. Denoting the number of
output sets for which prediction uncertainty is to be studied by nSET, and
the number of times by nTIMES, the standard plot file contains nSET*nMC
curves, each consisting of nTIMES points. The second plot file contains
nSET*nTIMES curves, each consisting of nMC points.
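The sampling procedure (steps 1 through 5) can be sketched as follows, with hypothetical distributions (a truncated normal for a log-permeability, a uniform porosity) and the TOUGH2 run left as a placeholder:

```python
import numpy as np

rng = np.random.default_rng(777)            # corresponds to keyword SEED

def truncated_normal(mean, sd, lower, upper):
    """Sample a normal deviate; values outside the admissible range are
    rejected and redrawn (see command >>>> RANGE)."""
    while True:
        v = rng.normal(mean, sd)
        if lower <= v <= upper:
            return v

nMC = 250                                   # set via >>> SIMULATION
log_k = np.array([truncated_normal(-16.0, 1.0, -18.5, -13.5)
                  for _ in range(nMC)])     # hypothetical log-permeability
phi = rng.uniform(0.02, 0.10, nMC)          # hypothetical porosity
# each (log_k, phi) pair would drive one TOUGH2 run (placeholder here);
# the default number of histogram classes is sqrt(nMC)
nclass = int(round(np.sqrt(nMC)))
```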
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> MATERIAL : ROCK_
>>>> generate NORMAL distribution
>>>> of LOGARITHM
>>>> with mean (GUESS) : -16.0
>>>> standard DEVIATION : 1.0
>>>> in the RANGE : -18.5 -13.5
<<<<
<<<
>> POROSITY
>>> MATERIAL : ROCK_
>>>> VALUE
>>>> UNIFORM distribution
>>>> RANGE : 0.02 0.10
<<<<
<<<
<<
> OBSERVATION
>> TIMES: 20 EQUALLY spaced in MINUTES
3.0 60.0
>> PRESSURE
>>> ELEMENT: PRE_0
>>>> NO DATA
>>>> WEIGHT: 1.0
<<<<
<<<
<<
> COMPUTATION
>> STOP
>>> number of Monte Carlo SIMULATIONS: 250
<<<
>> ERROR propagation analysis
>>> MONTE CARLO (SEED number is: 777)
<<<
<<
See Also:
>>> FOSM, >>> SIMULATION, >>>> NORMAL, >>>> RANGE, >>>> UNIFORM
@@@
Syntax: >>> NEW OUTPUT
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command creates a new TOUGH2 output file with unique file name
for each TOUGH2 run. By default, the TOUGH2 output file is overwritten
each time a new simulation starts.
Example:
> COMPUTATION
>> OUTPUT
>>> write to NEW OUTPUT after each TOUGH2 run.
<<<
<<
See Also:

@@@
Syntax: >>> NONE
Parent Command:
all second-level commands in block > PARAMETER or > OBSERVATION addressing
parameters or observations that refer to the entire model domain or a
non-specific item.
Subcommand:
all fourth-level commands in block > PARAMETER or > OBSERVATION, respectively.
Description:
(synonym for command >>> MODEL)
Example:
(see command >>> MODEL)
See Also:
>>> MODEL
@@@
Syntax: >>>> NORMAL
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
(synonym for command >>>> GAUSS)
Example:
(see command >>>> GAUSS)
See Also:
>>>> GAUSS
@@@
Syntax: >>> OBJECTIVE
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command prints the value of the objective function after each TOUGH2 simulation.
By default, the value of the objective function is printed to the output file only after completion of an iTOUGH2 iteration.
Example:
> COMPUTATION
>> OUTPUT
>>> always print OBJECTIVE function
<<<
<<
See Also:
>>> JACOBIAN
@@@
Syntax: >>> OBJECTIVE (UNSORTED) (: ninval1 (ninval2 (inval3)) / FILE: filename)
Parent Command:
>> OPTION
Subcommand:

Description:
(synonym for command >>> GRID SEARCH)
Example:
(see command >>> GRID SEARCH)
See Also:
>>> GRID SEARCH
@@@
Syntax: > OBSERVATION
Parent Command:

Subcommands:
>> CONCENTRATION
>> CONTENT
>> COVARIANCE
>> CUMULATIVE
>> DRAWDOWN
>> ENTHALPY
>> FLOW
>> GENERATION
>> LIST
>> MASS FRACTION
>> MOMENT
>> PRESSURE
>> PRODUCTION
>> RESTART TIME
>> SATURATION
>> SECONDARY
>> TEMPERATURE
>> TIME (o)
>> TOTAL MASS
>> USER (o)
>> VOLUME
Description:
This is the first-level command for specifying the observations available for use in parameter
estimation. It can also be used to specify the points in space and time at which output is
requested for sensitivity analysis, uncertainty propagation analysis, or plotting. The observations
are a subset of all TOUGH2 output variables. Only those variables should be specified for
which data are available. Besides the observations listed above, the user can specify
additional data types by means of command >> USER.
The second-level command >> TIME is used to select calibration points in
time. The observation type is specified by the second-level command, the
points in space are identified by the third-level command, and further
specifications as well as the data themselves are given through fourth-level
commands. The generic structure of an observation block is as follows:
> OBSERVATION
>> specify calibration points in TIME
>> specify observation type
>>> specify location
>>>> provide details
>>>> provide data
<<<<
<<<
<<
Example:
(see examples on second command level)
See Also:

@@@
Syntax: >> OPTION
Parent Command:
> COMPUTATION
Subcommand:
>>> ANDREW
>>> ANNEAL
>>> CAUCHY
>>> DIRECT
>>> FORWARD
>>> GAUSSNEWTON
>>> GRID SEARCH
>>> L1ESTIMATOR
>>> LEASTSQUARES
>>> LEVENBERGMARQUARDT
>>> LIST
>>> OBJECTIVE
>>> QUADRATICLINEAR
>>> SELECT
>>> SENSITIVITY
>>> SIMPLEX
>>> STEADYSTATE
Description:
This is the parent command of a number of subcommands for selecting iTOUGH2 program
options. By default, iTOUGH2 performs automatic model calibration using the weighted
leastsquares objective function and the LevenbergMarquardt minimization algorithm.
Example:
(see examples on third command level)
See Also:

@@@
Syntax: >> OUTPUT
Parent Command:
> COMPUTATION
Subcommand:
>>> BENCHMARK
>>> CHARACTERISTIC
>>> COVARIANCE
>>> FORMAT
>>> INDEX
>>> LIST
>>> JACOBIAN
>>> NEW OUTPUT
>>> OBJECTIVE
>>> PERFORMANCE
>>> PLOTFILE
>>> PLOTTING
>>> SENSITIVITY
>>> time_unit
>>> UPDATE
>>> RESIDUAL
>>> VERSION
Description:
This command specifies the format and amount of printout generated.
The iTOUGH2 output file contains the results usually needed for
subsequent interpretation. Additional information can be requested
which may be useful for debugging.
Example:
> COMPUTATION
>> OUTPUT
>>> Generate plot files in : TECPLOT FORMAT
>>> plot CHARACTERISTIC curves
>>> print OBJECTIVE function after each TOUGH2 run
>>> print RESIDUALS after each iTOUGH2 iteration
>>> print the iTOUGH2 command INDEX
>>> print VERSION control statements
>>> print with UPDATE information.
<<<
<<
See Also:

@@@
Syntax: >> PARALLEL PLATE
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter the aperture of a parallel plate fracture
model. The single fracture must be modeled as a 1D or 2D model of thickness
1 meter, i.e., aperture must be identical to porosity.
Furthermore, the capillary pressure model must contain a parameter CP(2,NMAT)
which represents the gas entry pressure (e.g., Brooks-Corey (ICP=10),
van Genuchten (ICP=11)). The parallel plate model relates porosity phi,
permeability k [m2], and gas entry pressure pe [Pa] to aperture a [m]
as follows:
phi = a
k = a**3/12
pe = 0.14366/a
Note that using this option overwrites the porosity, absolute permeability,
and parameter CP(2,NMAT) given in the TOUGH2 input file.
Note that the parallel plate model is not consistently applied.
Using a relative permeability and capillary pressure function presumes some
surface roughness or aperture distribution within the fracture plane.
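The three relations above can be sketched in Python (a hypothetical helper for illustration only, not part of iTOUGH2; the constant 0.14366 is taken from the text):

```python
def parallel_plate(a):
    """Map fracture aperture a [m] to the parallel-plate quantities
    quoted above: porosity phi [-], permeability k [m^2], and gas
    entry pressure pe [Pa]."""
    phi = a               # porosity equals aperture (1-m-thick model)
    k = a ** 3 / 12.0     # cubic-law permeability
    pe = 0.14366 / a      # gas entry pressure
    return phi, k, pe

phi, k, pe = parallel_plate(1.0e-4)  # a 100-micron aperture
```

Note how strongly permeability scales with aperture: halving a reduces k by a factor of eight while doubling pe.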
Example:
> PARAMETER
>> aperture in PARALLEL PLATE fracture model
>>> ROCK type : FRACT
>>>> LOGARITHM
>>>> PRIOR information : -4.0 (100 micron)
>>>> standard DEVIATION: 1.0
>>>> RANGE : -5.0 -3.0
<<<<
<<<
<<
See Also:

@@@
Syntax: > PARAMETER
Parent Command:

Subcommands:
>> ABSOLUTE PERMEABILITY
>> BOTTOMHOLE PRESSURE
>> CAPACITY
>> CAPILLARY PRESSURE FUNCTION
>> COMPRESSIBILITY
>> CONDUCTIVITY
>> DRIFT
>> ENTHALPY
>> FORCHHEIMER
>> GUESS
>> IFS
>> INITIAL
>> KLINKENBERG
>> LAG
>> LIST
>> MINC
>> PARALLEL PLATE
>> POROSITY
>> PRODUCTIVITY INDEX
>> PUMPING RATIO
>> RATE
>> RELATIVE PERMEABILITY FUNCTION
>> SCALE
>> SELEC
>> SHIFT
>> SKIN
>> TIME (p)
>> USER (p)
Description:
This is the first-level command used to specify the parameters to be estimated. The parameters
to be estimated are a subset of all TOUGH2 input parameters defined in the TOUGH2 input
file. Only those parameters should be specified that are unknown or uncertain; they will be
subjected to the estimation process or uncertainty propagation analysis. Select only those
parameters that are sensitive enough to influence the state variables for which
observations are available (see also >>> SELECT). Besides the parameters listed
above, the user can specify additional parameters by means of command >> USER.
The parameter type is specified by the second-level command, the domain it refers to is
identified by the third-level command, and further specifications must be provided through
a number of fourth-level commands. The generic structure of a parameter block is as follows:
> PARAMETER
>> specify parameter type
>>> specify parameter domain
>>>> provide details
<<<<
<<<
<<
Example:
(see examples for second-level commands)
See Also:
> OBSERVATION, > COMPUTATION
@@@
Syntax: >>>> PARAMETER: index (index_i...)
Parent Command:
all third-level commands in block > PARAMETER and > OBSERVATION
Subcommand:

Description:
(synonym for command >>>> INDEX)
Example:
(see command >>>> INDEX)
See Also:
>>>> INDEX
@@@
Syntax: >>> PERFORMANCE
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command performs a very rough benchmark analysis of computer performance and prints the
relative CPU time requirement as compared to a reference workstation.
Example:
> COMPUTATION
>> OUTPUT
>>> perform PERFORMANCE comparison
<<<
<<
See Also:

@@@
Syntax: >>> PERTURB: (-)alpha (%)
Parent Command:
>> JACOBIAN
Subcommand:

Description:
This command specifies the perturbation factor alpha for numerical computation of the Jacobian matrix.
The columns of the Jacobian matrix are calculated by perturbing the corresponding parameter p
by a small amount dp, and taking either a forward or centered finite difference quotient.
The perturbation is usually a fraction of the parameter value itself:
dp = alpha * p
The default value for alpha is 1 %, which can be changed globally for all parameters using the
third-level command >>> PERTURB, or individually for each parameter using the fourth-level
command >>>> PERTURB. A negative value can be provided to specify a constant
perturbation, independent of parameter value:
dp = alpha
This option is sometimes required for parameters having either very small or very large
values (such as initial pressure, generation times, or residual gas saturation).
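The two perturbation modes can be sketched as follows (an illustrative helper, not iTOUGH2 code; it assumes the magnitude of a negative alpha is used as the constant perturbation):

```python
def perturbation(alpha, p):
    """Perturbation dp of parameter p for a given factor alpha.
    alpha > 0: relative perturbation, dp = alpha * p.
    alpha < 0: constant perturbation, dp = |alpha|, independent of p
    (assumption: the magnitude of the negative input is used)."""
    return alpha * p if alpha > 0.0 else -alpha

dp_rel = perturbation(0.01, 2.0e5)    # 1 % of 2.0E5
dp_abs = perturbation(-3.6e3, 1.7e8)  # constant 3.6E3, regardless of p
```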
Example:
> COMPUTATION
>> JACOBIAN
>>> PERTURBation factor alpha : 0.005 of parameter value
<<<
<<
See Also:
>>> FORWARD, >>> CENTERED, >>>> PERTURB
@@@
Syntax: >>>> PERTURB: (-)alpha (%)
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command specifies the perturbation factor alpha for numerical computation of the Jacobian
matrix. The columns of the Jacobian matrix are calculated by perturbing the corresponding
parameter p by a small amount dp, and taking either a forward or centered finite difference
quotient. The perturbation is usually a fraction of the parameter value itself:
dp = alpha * p
The default value for alpha is 1 %, which can be changed globally for all parameters using the
third-level command >>> PERTURB, or individually for each parameter using the fourth-level command
>>>> PERTURB. A negative value can be provided to specify a constant perturbation,
independent of parameter value:
dp = alpha
This option is sometimes required for parameters having either very small or very large
values (such as initial pressure, generation times, or residual gas saturation).
Example:
> PARAMETER
>> TIME at which production rate changes
>>> SOURCE: WEL_3
>>>> INDEX of array F1 : 5
>>>> initial GUESS : 1.7E8 seconds
>>>> PERTURB by constantly : 3.6E3 seconds
<<<<
<<<
<<
> COMPUTATION
>> JACOBIAN
>>> all other parameters are PERTURBed by : 2 % of their values
<<<
<<
See Also:
>>> PERTURB
@@@
Syntax: >>>> PHASE phase_name/: iphase
Parent Command:
>>> ELEMENT
>>> SOURCE (o)
Subcommand:

Description:
This command identifies a phase either by its name (phase_name) or the phase number (iphase).
A list of allowable phase names for the given EOS module can be obtained from the header of the iTOUGH2 output file.
Example:
> OBSERVATION
>> CONCENTRATION
>>> ELEMENT: ZZZ99
>>>> ANNOTATION: TCE concentration
>>>> COMPONENT No.: 3
>>>> dissolved in LIQUID PHASE
>>>> DATA on FILE: tce.dat
>>>> standard DEVIATION: 1.0E-6
<<<<
<<<
<<
See Also:
>>>> COMPONENT
@@@
Syntax: >>>> PICK: npick
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command identifies the number of data points to be skipped when
reading from a long data file. Only every npick-th data point is read.
The default is npick=1, i.e., every data point is accepted.
Example:
> OBSERVATION
>> TEMPERATURE
>>> ELEMENTS: ELM_10 + 4
>>>> SKIP: 3 lines before reading data
>>>> PICK only every : 10 data point
>>>> DATA on file: temp.log
>>>> standard DEVIATION: 0.5 degrees C
<<<<
<<<
<<
See Also:
>>>> COLUMN, >>>> DATA, >>>> FORMAT, >>>> SET (o), >>>> SKIP
@@@
Syntax: >>> PLOTFILE: format (LIST)
Parent Command:
>> OUTPUT
Subcommand:

Description:
(synonym for command >>> FORMAT (o))
Example:
(see command >>> FORMAT (o))
See Also:
>>> FORMAT (o)
@@@
Syntax: >>> PLOTTING: niter
Parent Command:
>> OUTPUT
Subcommand:

Description:
By default, the plot file contains the observed data (interpolated at the calibration points), as
well as the system response calculated with the initial and final parameter set. Additional
intermediate curves can be requested for visualizing the optimization process. Curves are
generated for every multiple of niter iTOUGH2 iterations.
Example:
> COMPUTATION
>> OUTPUT
>>> PLOTTING: 1 (plots the calculated system response after each
iTOUGH2 iteration)
<<<
<<
See Also:

@@@
Syntax: >>>> POLYNOM: idegree (time_unit)
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
Represents observed data by a polynomial of degree idegree:

z(t) = Sum(A_i * t^i)    i = 0, ..., idegree

where t denotes time in time_units (default is seconds), and A_i are the idegree+1
coefficients, which are read in free format on the lines following the command line.
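The polynomial evaluation can be sketched as follows (an illustrative helper; coefficients are passed in the order A_0, ..., A_idegree, as they are read from the input):

```python
def polynom(coeffs, t):
    """Evaluate z(t) = Sum(A_i * t**i), i = 0, ..., idegree,
    where coeffs = [A_0, A_1, ..., A_idegree]."""
    return sum(a * t ** i for i, a in enumerate(coeffs))

# A linear polynomial (idegree = 1) with A0 = 2.34 and A1 = 0.03,
# evaluated at t = 10 in the selected time_unit:
z = polynom([2.34, 0.03], 10.0)
```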
Example:
> OBSERVATION
>> LIQUID FLOW
>>> CONNECTION: A1__1 A1__2
>>>> TIME SHIFT: 5 MINUTES
>>>> conversion FACTOR: 1.6667E-02 [kg/min] -> [kg/sec]
>>>> POLYNOM of degree: 1 (linear function), time in MINUTES
2.34 0.03
A0 A1 = coefficients of linear regression through
data given in [min] and [kg/min]
>>>> standard DEVIATION: 0.01 kg/min
<<<<
<<<
<<
See Also:
>>>> DATA, >>>> USER
@@@
Syntax: >> POROSITY
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter porosity (TOUGH2 variable POR(NMAT)).
This parameter refers to a rock type.
Note that the porosity specified in TOUGH2 block ROCKS (variable POR(NMAT))
may be overwritten by nonzero values given in block INCON (variable PORX).
However, porosity values are not overwritten if the element belongs to a
rock type for which porosity is a parameter to be estimated, unless a negative
value for PORX is provided.
Example:
> PARAMETER
>> POROSITY
>>> ROCK type : CLAY1
>>>> VALUE
>>>> PRIOR information : 0.28
>>>> standard DEVIATION: 0.05
>>>> RANGE : 0.10 0.50
>>>> PERTURB : 0.01
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> POSTERIORI
Parent Command:
>> ERROR
Subcommand:

Description:
The estimated error variance s0^2 represents the variance of the mean weighted residual and is thus a measure of goodness-of-fit:

s0^2 = (r^T * C_zz^-1 * r) / (m - n)

The a posteriori error variance s0^2 or the a priori error variance sigma0^2 is used in the subsequent
error analysis. For example, the covariance matrix of the estimated parameters, C_pp,
is directly proportional to the scalar s0^2 or sigma0^2, respectively. Note that if the residuals are
consistent with the distributional assumption about the measurement errors (i.e., matrix C_zz),
then the estimated error variance assumes a value close to one.
The user must decide whether the error analysis should be based on the a posteriori or a priori
error variance. The decision can also be delegated to the Fisher Model Test (see command >>> FISHER).
iTOUGH2 uses the a posteriori error variance s0^2 by default.
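The estimated error variance can be sketched with NumPy (an illustrative computation, not iTOUGH2 code; r is the residual vector, C_zz the covariance matrix of the measurement errors, m the number of data, and n the number of parameters):

```python
import numpy as np

def error_variance(r, C_zz, n):
    """s0^2 = (r^T C_zz^-1 r) / (m - n), with m = len(r)."""
    r = np.asarray(r, dtype=float)
    m = r.size
    # Solve C_zz x = r instead of forming the explicit inverse.
    return float(r @ np.linalg.solve(np.asarray(C_zz, dtype=float), r)) / (m - n)

# Three residuals, uncorrelated unit-variance errors, one parameter:
s02 = error_variance([1.0, -1.0, 2.0], np.eye(3), n=1)
```

With unit-variance errors, a value near one indicates residuals consistent with the assumed measurement error.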
Example:
> COMPUTATION
>> ERROR analysis
>>> based on A POSTERIORI error variance
<<<
See Also:
>>> FISHER, >>> PRIORI
@@@
Syntax: >> PRESSURE (CAPILLARY) (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects as an observation type the phase pressure or capillary
pressure. This observation type refers to one or more elements.
The phase name phase_name or phase number iphase, which depend on the EOS module
being used, are listed in the header of the iTOUGH2 output file.
They can be specified either on the command line or using subcommand >>>> PHASE.
If no phase is specified, iTOUGH2 takes the pressure of the first phase
which is usually the reference pressure. The pressure of a phase is calculated
as the sum of the reference pressure and the corresponding capillary pressure.
In a two-phase system, there is only one capillary pressure.
The capillary pressure can be selected using keyword CAPILLARY.
In a three-phase system (e.g., gas, NAPL, aqueous phase), there are two capillary
pressures, i.e., the capillary pressure between the NAPL and the gas phase,
and between the aqueous and the gas phase, which is the reference phase.
The name of the wetting phase must be provided to identify the particular capillary
pressure in the three-phase system.
Example:
> OBSERVATION
>> PRESSURE
>>> ELEMENT: AA__1 + 56
>>>> ANNOTATION : Mean gas pressure
>>>> take AVERAGE of pressures in all 57 elements
>>>> DATA on FILE : pres.dat
>>>> VARIANCE : 1E6 [Pa^2]
<<<<
<<<
>> NAPL CAPILLARY PRESSURE
>>> ELEMENT: BB__1
>>>> take LOGARITHM of
>>>> ABSOLUTE Pco
>>>> DATA on FILE : Pco.dat
>>>> DEVIATION : 1.0 log-cycle
<<<<
<<<
See Also:
>> DRAWDOWN
@@@
Syntax: >>>> PRIOR: prior_info
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command provides the prior information of the parameter to be estimated.
If command >>>> PRIOR is omitted, the value from the TOUGH2 input file is taken.
Prior information is only effective if a standard deviation is provided
(see command >>>> DEVIATION (p)), in which case the difference between the
parameter estimate and its prior value is penalized in the objective function.
The starting point for the optimization is given through commands >> GUESS and
>>>> GUESS.
If command >>>> LOGARITHM is present, the prior information is the logarithm of
the parameter. Similarly, if command >>>> FACTOR is present, the prior information
should be a multiplication factor (default is 1.0).
Example:
> PARAMETER
>> ABSOLUTE permeability
>>> ROCK type : SAND1
>>>> LOGARITHM
>>>> PRIOR information : -12.0
>>>> standard DEVIATION: 1.0 order of magnitude
<<<<
>>> ROCK types: CLAY1 CLAY2 CLAY3 BOUND
>>>> FACTOR
>>>> initial GUESS : 1.0
>>>> is not WEIGHTed : 0.0 (default)
<<<<
<<<
>> GUESS, i.e., starting point for optimization
1 -13.0
<<
See Also:
>> GUESS, >>>> GUESS, >>>> DEVIATION, >>>> VARIATION
@@@
Syntax: >>> PRIORI
Parent Command:
>> ERROR
Subcommand:

Description:
The estimated error variance s0^2 represents the variance of the mean weighted residual and is thus a measure of goodness-of-fit:

s0^2 = (r^T * C_zz^-1 * r) / (m - n)

The a posteriori error variance s0^2 or the a priori error variance sigma0^2 is used in the subsequent
error analysis. For example, the covariance matrix of the estimated parameters, C_pp,
is directly proportional to the scalar s0^2 or sigma0^2, respectively. Note that if the residuals are
consistent with the distributional assumption about the measurement errors (i.e., matrix C_zz),
then the estimated error variance s0^2 assumes a value close to one.
The user must decide whether the error analysis should be based on the a posteriori or a priori
error variance. The decision can also be delegated to the Fisher Model Test (see command >>> FISHER).
iTOUGH2 uses the a posteriori error variance s0^2 by default.
However, for design calculations or synthetic inversions, the error analysis should be based
on the a priori variance sigma02 which is 1 by definition.
Example:
> COMPUTATION
>> ERROR
>>> based on A PRIORI error variance
<<<
See Also:
>>> FISHER, >>> POSTERIORI
@@@
Syntax: >> PRODUCTION (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> SINK
Description:
(synonym for command >> GENERATION)
Example:
(see command >> GENERATION)
See Also:
>> GENERATION
@@@
Syntax: >> PRODUCTIVITY INDEX
Parent Command:
> PARAMETER
Subcommand:
>>> SOURCE
Description:
This command selects the productivity index for wells on deliverability (TOUGH2 variable GX) as a parameter to
be estimated. This parameter refers to a sink/source code name. The generation type must be DELV.
Example:
> PARAMETER
>> PRODUCTIVITY INDEX
>>> SINK: WEL_1
<<<<
<<<
<<
See Also:

@@@
Syntax: >> PUMPING RATIO
Parent Command:
> PARAMETER
Subcommand:
>>> SOURCE
Description:
This command selects as a parameter the pumping ratios of wells belonging to the same well
group. This option can be used to determine the distribution of generation rates among a
group of wells, e.g., to determine the optimum pumping strategy for a cleanup operation.
For example, the total extraction rate from a system of wells is often limited by the treatment
capacity. However, extraction rates of individual wells can be adjusted, optimizing the efficiency
of the cleanup operation. A well group consists of all sinks/sources with the same
code name (TOUGH2 variable SL). The sum of the constant generation rates (TOUGH2
variable GX) of all wells within the same well group remains constant, but individual rates are
adjusted. It is imperative to estimate as many pumping ratios as the number n of wells
within the well group of interest. After optimization, the actual generation rate for well i is
calculated from the prescribed total pumping rate q_tot and the estimated pumping ratio a_i:

q_i = q_tot * a_i / Sum(a_j)    j = 1, ..., n
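The normalization above can be sketched as follows (an illustrative helper; the ratios need not sum to one, since they are normalized by Sum(a_j)):

```python
def well_rates(q_tot, ratios):
    """q_i = q_tot * a_i / Sum(a_j), j = 1, ..., n.
    The individual rates always sum to the prescribed total q_tot."""
    total = sum(ratios)
    return [q_tot * a / total for a in ratios]

# Three wells sharing a total extraction rate of 2.0 kg/s:
rates = well_rates(2.0, [0.25, 0.5, 0.25])
```

Because only the ratios matter, the estimated parameters redistribute the rates among the wells while the treatment-capacity constraint on the total remains satisfied.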
Example:
> PARAMETER
>> PUMPING RATIO
>>> SOURCE: EXT_1
>>>> GUESS : 0.25
<<<<
>>> SOURCE: EXT_2
>>>> GUESS : 0.5
<<<<
>>> SOURCE: EXT_3
>>>> GUESS : 0.25
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> QUADRATICLINEAR: c
Parent Command:
>> OPTION
Subcommand:

Description:
This command selects a quadratic-linear objective function. Given this estimator, the objective function to be
minimized is a combination of least-squares for small residuals and the L1-norm for residuals larger than c times
the prior standard deviation:

S = Sum(g(y_i))    i = 1, ..., m

where

g(y_i) = y_i^2                 for |y_i| <= c
g(y_i) = c * (2*|y_i| - c)     for |y_i| > c

with

y_i = r_i / sigma_i

This objective function does not correspond to a standard probability density function.
It has the general characteristic that the weight given to individual residuals first increases
quadratically with deviation, then only linearly to reduce the impact of outliers.
For c -> infinity, the estimator is identical to least-squares; for c -> 0, it approaches the L1-estimator.
Note that this objective function is minimized using the standard Levenberg-Marquardt
algorithm, which is designed for a quadratic objective function. Since the function is
quadratic for |y_i| <= c, the Levenberg-Marquardt algorithm is usually quite efficient.
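The estimator can be sketched as follows (an illustrative helper; y is the weighted residual r/sigma, and the two branches join continuously at |y| = c):

```python
def quadratic_linear(y, c):
    """Quadratic-linear loss: y**2 for |y| <= c,
    c*(2*|y| - c) for |y| > c. Both branches equal c**2 at |y| = c."""
    return y * y if abs(y) <= c else c * (2.0 * abs(y) - c)

small = quadratic_linear(0.5, 1.0)    # quadratic branch
outlier = quadratic_linear(4.0, 1.0)  # linear branch: outlier damped
```

Compared with pure least-squares (which would assign the outlier a loss of 16), the linear branch limits its influence on the fit.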
Example:
> COMPUTATION
>> OPTION
>>> use a QUADRATICLINEAR robust estimator with a constant c : 1.0
<<<
<<
See Also:
>>> ANDREW, >>> CAUCHY, >>> L1ESTIMATOR, >>> LEASTSQUARES
@@@
Syntax: >>>> RANGE: lower upper
or
>>>> BOUND: lower upper
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command sets the admissible parameter range. It provides lower and upper bounds
of the parameter. During the optimization process, iTOUGH2 may suggest parameter values
that are physically not valid (e.g. negative porosity) or not reasonable (e.g. very high
permeability). Limiting the admissible range for the values a parameter can assume prevents
the simulation from stopping due to an unphysical or unreasonable parameter value.
However, it is strongly suggested not to specify a narrow parameter range about the initial
guess. The range should reflect physical bounds, and the expected variation of the
parameter. If prior knowledge suggests that a certain parameter varies only slightly about the
initial guess, this information should enter the inversion as a standard deviation associated
with the initial guess, i.e., prior information (see command >>>> DEVIATION (p)).
A parameter tends to greatly vary, potentially hitting its lower or upper bound, if (i) a
systematic error is present, (ii) the initial guess is far away from the best estimate, (iii) the
parameter is not sensitive, or (iv) the parameter is highly correlated to more sensitive
parameters. The final parameter set should not contain parameters at their lower or upper
bounds. If some of the estimated parameters are at the bounds, it is suggested to carefully
examine the four potential reasons mentioned above. A new inversion should be performed
after corrective actions have been taken.
Example:
> PARAMETER
>> POROSITY
>>> MATERIAL: SANDY
>>>> PRIOR information : 0.34
>>>> standard DEVIATION: 0.05
>>>> admissible RANGE : 0.01 0.99
<<<<
<<<
<<
See Also:
>>>> DEVIATION (p), >>>> STEP
@@@
Syntax: >> RATE
Parent Command:
> PARAMETER
Subcommand:
>>> SOURCE
Description:
This command selects as a parameter the constant generation rate (TOUGH2 variable GX) or a
time-dependent generation rate (TOUGH2 variable F2(L) for LTAB > 1). This parameter refers to a
sink/source code name. Estimating a time-dependent generation rate requires providing index L through command >>>> INDEX.
Example:
> PARAMETER
>> RATE
>>> SOURCE: WEL_1 + 5
>>>> ANNOTATION: const. rate
>>>> LOGARITHM
<<<<
>>> SOURCE: INJ_1
>>>> ANNOTATION: variable rate
>>>> VALUE
>>>> INDEX : 7
<<<<
<<<
<<
See Also:
>> TIME (p)
@@@
Syntax: >>> REDUCTION: max_red
Parent Command:
>> CONVERGE
Subcommand:

Description:
By default, a TOUGH2 simulation stops if 20 consecutive time step reductions have occurred
without convergence. This command changes the maximum number of allowable time step
reductions to max_red. Time step reductions occur if:
(1) the initial time step is too large (see TOUGH2 variable DELTEN or DLT(1), and command >>> ADJUST),
(2) TOUGH2 parameter NOITE is too small,
(3) a failure in the EOS module occurs,
(4) boundary conditions are changed drastically during the course of the simulation.
Variable max_red should only be increased to address problem (4), i.e., when drastic
changes in generation rates (TOUGH2 block GENER) are imposed, or if Dirichlet boundary
conditions are changed using command >> RESTART TIME or through subroutine USERBC.
Example:
> COMPUTATION
>> CONVERGE
>>> accept : 30 CONSECUTIVE time step reductions
<<<
<<
See Also:
>> RESTART TIME, >>> ADJUST, >>> CONSECUTIVE
@@@
Syntax: >> RELATIVE
Parent Command:
> PARAMETER
Subcommand:
>>> DEFAULT
>>> MATERIAL
Description:
This command selects a parameter of the relative permeability function (TOUGH2 variable
RP(IPAR,NMAT)) of a certain rock type, or a parameter of the default relative permeability
function (TOUGH2 variable RPD(IPAR)). Command >>>> INDEX must be used to
select the parameter index IPAR. The physical meaning of the parameter depends on the
type of relative permeability function selected in the TOUGH2 input file, variable IRP or
IRPD. The admissible range should be specified explicitly to comply with parameter
restrictions (see Pruess [1987], Appendix A).
Example:
> PARAMETER
>> parameter of RELATIVE permeability function
>>> DEFAULT
>>>> ANNOTATION : Resid. Gas Sat.
>>>> PARAMETER no. : 2
>>>> VALUE
>>>> RANGE : 0.01 0.99
<<<<
>>> MATERIAL: ZONE1 +3
>>>> ANNOTATION : IRP=7, RP(1)=m
>>>> PARAMETER no. : 1
>>>> FACTOR
<<<<
<<<
<<
See Also:
>> CAPILLARY
@@@
Syntax: >>>> RELATIVE: rel_err (%)
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command takes a fraction of the observed value as the standard deviation of the
measurement error (for more details see command >>>> DEVIATION (o)).
This option cannot be applied to data sets that contain zero measurements.
Example:
> OBSERVATION
>> LIQUID FLOW RATE
>>> CONNECTION INJ_1 DOM_2
>>>> DATA on FILE: flow.dat
>>>> RELATIVE measurement error of : 5 % is assumed
<<<<
<<<
<<
See Also:
>>>> DEVIATION (o)
@@@
Syntax: >>> RESIDUAL
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command prints the observed and calculated system response as well as the residuals after each iTOUGH2 iteration.
By default, the residuals are printed only once at the end of the optimization.
Example:
> COMPUTATION
>> OUTPUT
>>> print RESIDUALs after each TOUGH2 simulation.
<<<
<<
See Also:
>>> JACOBIAN
@@@
Syntax: >> RESTART TIME: ntime (time_unit) (NEW)
Parent Command:
> OBSERVATION
Subcommand:

Description:
This command selects points in time at which the boundary conditions are changed by a step
function, and reassigns primary variables and/or grid block volumes for selected elements.
Such times are termed "restart times" because in standard TOUGH2 one would have to stop
the simulation, and restart it after having changed the initial conditions of certain elements.
This option is useful for modeling well tests, where the condition in the borehole is changed
at certain points in time. For the simultaneous calibration of well tests consisting of many
test events, it is imperative to model the entire sequence in a single simulation run, i.e.,
individual events must be connected. Recall that complicated, time-varying boundary
conditions can also be programmed into subroutine USERBC.
The specification of a restart time is identical to specifying calibration points (see command
>> TIME (o)). Instead of calibration times, a restart time is indicated. Following the line
containing the restart time, a list of element names elem (or element numbers) can be given,
followed by an integer ipv indicating which primary variable is to be changed. If ipv is zero,
the volume of the corresponding element is changed. Finally, the new primary variable or
grid block volume, respectively, is given. The general format is as follows:
>> RESTART TIME: 1 (time_unit)
restart_time
elem ipv pv
elem ipv pv
.... ... ..
A comprehensive example is given below.
This option can also be used to periodically generate a SAVE file. This may be desirable for
very long TOUGH2 simulations. In case of a computer failure, the run can be restarted at the
time of the last restart time.
If keyword NEW is present, a new SAVE file with unique file name is created for each restart time.
Example:
This example demonstrates the use of restart times to connect four individual test events into
a single simulation. The test sequence starts with a shut-in recovery period, i.e., in the
TOUGH2 input file the actual interval volume is assigned to the element representing the
borehole. At time tCP, a constant pressure pumping test is initiated, i.e. the element volume
is increased to a very large number, and the prescribed interval pressure is specified as
initial conditions. A shut-in pressure recovery period starts at time tREC,
modeled by reassigning the actual interval volume. Finally, a pulse test is simulated,
assuming that all the gas that potentially accumulates in the borehole interval has been
released at time tPULSE, prior to applying a short pressure pulse. The pressure response in
the injection well is schematically shown in the figure below, followed by the appropriate
iTOUGH2 block with restart times and boundary condition specifications.
> OBSERVATION
>> RESTART TIME: 1 [HOURS]
2.56 (tCP)
BOR_1 0 1.0E+50 (large volume for const. pressure b.c.)
BOR_1 1 1.3E+05 (set constant interval pressure)
>> RESTART TIME: 1 [HOURS]
3.51 (tREC)
BOR_1 0 0.73 (actual volume for shut-in recovery)
>> RESTART TIME: 1 [HOURS]
5.44 (tPULSE)
BOR_1 2 0.00 (prescribe single-phase liquid cond.)
BOR_1 1 4.85E+05 (pressure pulse)
  
(column labels: element  ipv  pv/vol)
TOUGH2 stops at each restart time, writes a SAVE file, reads it as an INCON file, changes
initial conditions, i.e. updates the primary variables or grid block volumes of the indicated
elements, and continues the simulation.
See Also:
>> TIME (o)
@@@
Syntax: >>> ROCKS: mat_name (mat_name_i ...) (+ iplus)
Parent Command:
all second-level commands in block > PARAMETER referring to a material name
Subcommand:

Description:
(synonym for command >>> MATERIAL)
Example:
(see command >>> MATERIAL)
See Also:
>>> MATERIAL
@@@
Syntax: >> SATURATION (phase_name/PHASE: iphase)
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects phase saturation as an observation type. This observation type refers
to an element. The phase name phase_name or phase number iphase, which depend on the
EOS module being used, are listed in the iTOUGH2 header. They can be specified either on
the command line or using the subcommand >>>> PHASE.
Example:
> OBSERVATION
>> GAS SATURATION
>>> calculated for ELEMENT: AA__5
>>>> ANNOTATION: Sg during gas injection
>>>> DATA [HOURS]
0.1 0.01
0.3 0.13
0.5 0.34
0.8 0.59
1.2 0.72
1.5 0.81
2.0 0.86
2.5 0.89
5.0 0.91
>>>> DEVIATION: 0.05
<<<<
<<<
>> SATURATION
>>> ELEMENT: BB__5
>>>> NAPL PHASE saturation
>>>> NO DATA available (i.e. just for plotting)
>>>> WEIGHT: 1.0E-20 (don't weigh, just plot)
<<<<
<<<
<<
See Also:

@@@
Syntax: >> SCALE
Parent Command:
> PARAMETER
Subcommand:
>>> NONE
Description:
This command selects as a parameter a grid scaling factor.
Nodal distances, interface areas, and gridblock volumes will be scaled
accordingly. There are three grid scaling factors, referring to the
three directions specified by TOUGH2 variable ISOT. The direction is
selected through command >>>> INDEX. If all three directions are selected,
the mesh is scaled isotropically (see TOUGH2 variable SCALE).
Selecting a grid scaling factor as a parameter can be used to design and
optimize the horizontal and/or vertical spacing between injection points.
Example:
> PARAMETER
>> grid SCALEing factor
>>> NONE
>>>> ANNOTATION: horizontal well spacing
>>>> INDEX : 1 2
>>>> GUESS : 1.0
<<<<
<<<
<<
See Also:

@@@
Syntax: >>>> SCHEDULE: beta
Parent Command:
>>> ANNEAL
Subcommand:

Description:
This command defines the annealing schedule for Simulated Annealing minimization.
The annealing schedule is a function describing the temperature reduction (and thus the
probability of accepting an uphill step, see command >>> ANNEAL). The temperature is
updated after a certain number of successful steps have been performed (see command >>>> STEP (a)).
There are two functions available. If beta is in the range 0 < beta < 1 (typically 0.9), the following
annealing schedule is used:

tau_k = beta^k * tau_0

where k is the iteration index, and tau_0 is the initial temperature specified by command >>>> TEMPERATURE.
If beta is greater than one, the following annealing schedule is invoked:

tau_k = tau_0 * (1 - k/K)^beta

where K is the maximum number of iterations (see command >>>> ITERATION (a)).
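Both schedules can be sketched as follows (an illustrative helper; K is only needed for the beta > 1 case):

```python
def temperature(k, tau0, beta, K=None):
    """Annealing temperature at iteration k.
    0 < beta < 1: geometric schedule, tau_k = beta**k * tau0.
    beta > 1:     polynomial schedule, tau_k = tau0 * (1 - k/K)**beta,
                  reaching zero at the maximum iteration K."""
    if 0.0 < beta < 1.0:
        return tau0 * beta ** k
    return tau0 * (1.0 - k / K) ** beta

t_geom = temperature(2, 0.05, 0.95)          # geometric cooling
t_poly = temperature(100, 0.05, 2.0, K=200)  # halfway through K = 200
```

The geometric schedule never reaches zero, whereas the polynomial schedule forces the temperature to zero at iteration K.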
Example:
> COMPUTATION
>> OPTION
>>> Simulated ANNEALing
>>>> total number of ITERATIONS: 200
>>>> initial TEMPERATURE: 0.05
>>>> update after: 100 STEPS
>>>> annealing SCHEDULE: 0.95
<<<<
<<<
<<
See Also:
>>>> ITERATION (a), >>>> STEP, >>>> TEMPERATURE
@@@
Syntax: >> SECONDARY (phase_name/PHASE: iphase) (: ipar)
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects thermophysical properties, which are so-called
"secondary parameters" in TOUGH2, as an observation type.
This observation type refers to an element.
The secondary parameter refers to elements of TOUGH2 vector PAR
(see Pruess [1991], Figure 2).
The index ipar must be provided in the command line after a colon or through
the fourth-level command >>>> INDEX. The fluid phase must also be identified.
The phase name phase_name or phase number iphase, which depend on the EOS
module being used, are listed in the iTOUGH2 header.
They can be specified either on the command line or using the fourth-level
command >>>> PHASE.
The following table lists the thermophysical property addressed by ipar.
Some of the properties can be selected using an alternative secondlevel
command.

----------------------------------------------------------------
ipar    Secondary Parameter             Alternative Command
----------------------------------------------------------------
1       phase saturation                SATURATION
2       relative permeability           -
3       viscosity                       -
4       density                         -
5       specific enthalpy               -
6       capillary pressure              PRESSURE
NB+i    mass fraction of component i    MASS FRACTION
----------------------------------------------------------------
Example:
> OBSERVATION
>> SECONDARY parameter No: 3
>>> ELEMENT: AA__5
>>>> ANNOTATION: gas viscosity
>>>> GAS PHASE
>>>> NO DATA
<<<<
<<<
See Also:
>> MASS FRACTION, >> PRESSURE, >> SATURATION,
>>>> PHASE, >>>> INDEX
@@@
Syntax: >> SELEC
Parent Command:
> PARAMETER
Subcommand:
>>> MODEL
Description:
This command selects as a parameter one of the SELEC parameters (TOUGH2 variable
FE(I)). The physical meaning of the parameter depends on the module being used.
The index I must be provided through command >>>> INDEX.
Example:
> PARAMETER
>> SELEC parameter
>>> MODEL
>>>> ANNOTATION: Gel Density (EOS11)
>>>> INDEX : 2
>>>> VALUE
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> SELECT
Parent Command:
>> OPTION
Subcommand:
>>>> CORRELATION
>>>> ITERATION
>>>> LIST
>>>> SENSITIVITY
Description:
This command invokes the automatic parameter selection option of iTOUGH2. The
parameters defined in block > PARAMETER are screened according to two selection
criteria (see commands >>>> SENSITIVITY and >>>> CORRELATION).
Only the most sensitive and/or most independent parameters are subjected to the
optimization process. The selection procedure is repeated every few iterations.
Automatic parameter selection allows one to submit a larger set of parameters to the
estimation process. If a parameter is not sensitive enough to be estimated from the available
data, it is automatically removed from the set of parameters being updated. This makes the
inversion faster because fewer parameters have to be perturbed for calculating the Jacobian
matrix (the full Jacobian is only calculated every few iterations when the selection criteria are
reevaluated). The inversion is also more robust: parameters that are insensitive or highly
correlated tend to be changed drastically during an iTOUGH2 iteration, which may cause
unnecessary numerical difficulties. If they are (temporarily) removed from the parameter set,
they remain at their current value.
An additional advantage of using this option is the fact that sensitivities, estimation
uncertainties, and parameter correlations are calculated for all the specified parameters,
regardless of whether they are updated during the optimization.
Due to the nonlinearity of the inverse problem at hand, sensitivity coefficients and parameter
correlations constantly change during the optimization. Therefore, the selection criteria must
be reevaluated from time to time (see command >>>> ITERATION (s)), i.e., parameters may be deactivated
and reactivated during the course of an inversion.
Example:
> COMPUTATION
>> OPTION
>>> automatic parameter SELECTion
>>>> SENSITIVITY criterion: 0.1
>>>> repeat selection every : 3 rd ITERATION
<<<<
<<<
See Also:

@@@
Syntax: >>> SENSITIVITY (There are two third-level commands >>> SENSITIVITY. Check parent command.)
Parent Command 1:
>> OPTION
Syntax:
>>> SENSITIVITY
or
>>> DESIGN
Subcommand:

Description:
This command makes iTOUGH2 evaluate the sensitivity matrix without performing any optimization.
By default, the scaled Jacobian matrix, i.e., the matrix of sensitivity coefficients scaled by the
expected parameter variation and the standard deviation of the observation, is printed to the iTOUGH2 output file:
J*_ij = J_ij * (sigma_p_j / sigma_z_i) = (dz_i/dp_j) * (sigma_p_j / sigma_z_i)
In addition, the unscaled sensitivity coefficients can be printed by invoking subcommand >>> SENSITIVITY
in block >> OUTPUT. This information can be used to identify the parameters that most strongly affect the system
behavior at actual or potential observation points. Similarly, the relative information content
of actual or potential observations, i.e., the contribution of each data point to the solution of
the inverse problem can be evaluated.
Based on this command, iTOUGH2 also calculates the covariance matrix of the estimated parameters, i.e., the
estimation uncertainty under the assumption that the variances of the residuals are accurately
depicted by the prior covariance matrix C_zz. This information along with the global
sensitivity measures (sums of absolute sensitivity coefficients) can be used to optimize the design of an experiment.
It is recommended to use a relatively large perturbation factor (see command >>> PERTURB),
possibly in combination with centered finite difference quotients (see command >>> CENTERED)
for the purpose of sensitivity analysis.
Example:
> COMPUTATION
>> OPTION
>>> perform a SENSITIVITY analysis for test DESIGN
<<<
<<
See Also:
>>> CENTERED, >>> PERTURB, >>> SENSITIVITY (o)
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
>> OUTPUT
Syntax:
>>> SENSITIVITY
Subcommand:

Description:
This command prints the sensitivity matrix to the iTOUGH2 output file. The elements of the
sensitivity matrix are the partial derivatives of the system response at the observation points
with respect to the parameters:
J_ij = dz_i / dp_j
By default, iTOUGH2 prints the scaled Jacobian matrix, the elements of which are the
sensitivity coefficients scaled by the ratio of the expected parameter variation to the
standard deviation of the observation:
J*_ij = J_ij * (sigma_p_j / sigma_z_i) = (dz_i/dp_j) * (sigma_p_j / sigma_z_i)
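As a sketch only (iTOUGH2 itself is written in Fortran), the scaling above can be reproduced in Python; the Jacobian and the standard deviations below are hypothetical numbers, not iTOUGH2 output:

```python
import numpy as np

# Hypothetical 2-observation, 2-parameter problem: J[i, j] = dz_i/dp_j.
J = np.array([[2.0, 0.5],
              [1.0, 4.0]])
sigma_p = np.array([0.1, 0.2])   # expected parameter variations sigma_p_j
sigma_z = np.array([0.05, 0.5])  # standard deviations of observations sigma_z_i

# Scaled Jacobian: J*_ij = J_ij * sigma_p_j / sigma_z_i
J_scaled = J * sigma_p[np.newaxis, :] / sigma_z[:, np.newaxis]

print(J_scaled)
```

Each column is thus expressed in units of "expected parameter variations per observation standard deviation", making sensitivities of different observation types comparable.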
Example:
> COMPUTATION
>> OUTPUT
>>> print SENSITIVITY matrix in addition to the scaled Jacobian
<<<
<<
See Also:

@@@
Syntax: >>>> SENSITIVITY: (-)rsens
Parent Command:
>>> SELECT
Subcommand:

Description:
This command defines one of the criteria used for automatic parameter selection. It examines
the potential of a parameter to reduce the objective function based on the sensitivity coefficient:
dD_j = delta(S_j)
where delta(S_j) is the change of the objective function if parameter j is perturbed by a certain,
small value (see command >>> PERTURB).
The sensitivities are normalized to the maximum sensitivity:
omega_j = dD_j / dD_max
Those parameters with an omega-value larger than rsens, i.e., the most sensitive parameters, are
selected. Parameters that are unlikely to significantly reduce the objective function are
(temporarily) excluded from the optimization process.
If a negative value is given for rsens, the selection criterion is relaxed with each iteration k,
reaching zero at the last iteration max_iter, i.e., all parameters specified in block > PARAMETER
are selected for the final step:
rsens_k = rsens * (1 - k/max_iter)
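A minimal Python sketch of this criterion (the objective-function changes dD_j are hypothetical, and the relaxation rule assumes the linear form given above):

```python
# Hypothetical sensitivity-based selection for four parameters.
dD = [5.0, 0.9, 0.05, 2.5]   # objective-function changes dD_j
rsens = 0.1                  # threshold on omega_j = dD_j / dD_max

dD_max = max(dD)
omega = [d / dD_max for d in dD]
selected = [j for j, w in enumerate(omega) if w > rsens]
print(selected)              # indices of parameters kept in the optimization

# A negative rsens relaxes the threshold linearly with iteration k:
def relaxed_rsens(rsens, k, max_iter):
    return abs(rsens) * (1.0 - k / max_iter)
```

With these numbers, parameters 0, 1, and 3 pass the 10% threshold, while parameter 2 (omega = 0.01) is temporarily deactivated.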
Example:
> COMPUTATION
>> OPTION
>>> SELECT parameter automatically every
>>>> : 3 ITERATIONS
>>>> based on the SENSITIVITY criterion with rsens : 0.10
<<<<
<<<
<<
See Also:
>>>> ITERATION (s), >>>> CORRELATION
@@@
Syntax: >>> SET: iset
Parent Command:
>> DRIFT
>> FACTOR
>> LAG
>> SHIFT
Subcommand:
all fourth-level commands in block > PARAMETER
Description:
The parent command of this command identifies a parameter that refers to a data set, i.e., it
manipulates the observed data. The data set is identified by its ordering number iset as
specified in block > OBSERVATION. Alternatively, iset can be read using subcommand >>>> INDEX;
the latter command must be used if multiple sets are selected.
Example:
> PARAMETER
Estimate coefficients of regression dz=A+B*time to correct flowmeter
data. The flowmeter data are provided as data set No. 2, 3, and 4 in the
OBSERVATION block. All three data sets exhibit a constant offset to be
estimated. Data set No. 3 also shows a drift.
>> SHIFT
>>> NONE (multiple sets will be selected)
>>>> INDEX : 2 3 4
>>>> ANNOTATION: constant A (set 2, 3 and 4)
>>>> GUESS : 4.0E6 [kg/sec]
<<<<
<<<
>> DRIFT
>>> data SET No. : 3
>>>> ANNOTATION: coefficient B (set 3 only)
>>>> GUESS : 1.0E9 [kg/sec/sec]
<<<<
<<<
<<
See Also:

@@@
Syntax: >>>> SET: iset
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
Multiple data sets can be stored on a single file, separated by a single line containing
non-numeric characters. This command is used to select the iset'th data set on a data file. By default, data
are read from the first set following the header lines (see command >>>> HEADER).
Example:
> OBSERVATION
>> PRESSURE
>>> ELEMENT: AAA99
>>>> ANNOTATION: Pres. well BBB
>>>> conversion FACTOR: 1.0E5
>>>> skip: 3 HEADER lines, then
>>>> select: 2 nd SET on
>>>> COLUMNS: 2 3
>>>> DATA FILE: pres.dat [HOURS]
>>>> RELATIVE error: 0.05
<<<<
<<<
The content of file pres.dat is:
1 This is file pres.dat
Line Time Pressure
3 Data set 1: Pressure in borehole AAA
4 0.0 1.000
5 1.0 1.134
6 2.0 1.495
7 Data set 2: Pressure in borehole BBB
8 0.0 1.051
9 1.0 1.433
10 2.0 1.874
11 3.0 2.431
.. ... .....
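The parsing logic described above can be sketched in Python (this is an illustration of the convention, not iTOUGH2's actual Fortran reader; the treatment of separator lines before the first data row is an assumption):

```python
def read_data_set(lines, iset, nskip=0):
    """Return the iset'th (1-based) data set from a list of file lines:
    skip nskip header lines; sets are separated by a non-numeric line."""
    def is_numeric(line):
        tokens = line.split()
        if not tokens:
            return False
        try:
            [float(tok) for tok in tokens]
            return True
        except ValueError:
            return False

    sets, current = [], []
    for line in lines[nskip:]:
        if is_numeric(line):
            current.append([float(tok) for tok in line.split()])
        elif current:              # a text line closes the current set
            sets.append(current)
            current = []
    if current:
        sets.append(current)
    return sets[iset - 1]
```

For a file laid out like pres.dat above, read_data_set(lines, 2, nskip=1) would skip one header line and return the rows following the "Data set 2" separator.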
See Also:
>>>> HEADER
@@@
Syntax: >> SHIFT
Parent Command:
> PARAMETER
Subcommand:
>>> NONE
>>> SET
Description:
This command selects as a parameter a constant used to shift the calculated TOUGH2 output.
The constant is added to the output that refers to a specific data set:
z = z_TOUGH2 + shift
where shift is the constant added to the calculated TOUGH2 output z_TOUGH2. The result z is
compared to the measurement z* of the corresponding data set.
This option allows removal of a constant from the data. For example, if matching relative
pressure data that fluctuate around an unknown mean, or if a flowmeter exhibits an unknown
offset that needs to be estimated, the shift parameter being estimated is the mean or offset, respectively.
A nonzero value must be provided as an initial guess through the iTOUGH2 input file using command >>>> GUESS.
The data set is identified by number using command >>> SET (p) or command >>>> INDEX (p).
If the constant is known, i.e., it does not need to be estimated, use command >>>> SHIFT,
a subcommand of > OBSERVATION, to shift the data.
Example:
> PARAMETER
>> SHIFT
>>> SET No. : 1
>>>> ANNOTATION: atmos. pres.
>>>> GUESS: 1.0E5 [Pa] subtract atmospheric pressure
<<<<
>>> NONE
>>>> ANNOTATION: offset
>>>> INDEX : 2 3 4 (one constant for data sets 2, 3, and 4)
>>>> GUESS : 4.0E6 [kg/sec]
<<<<
<<<
<<
See Also:
>> DRIFT, >> FACTOR, >> LAG, >>> SET (p), >>>> SHIFT
@@@
Syntax: >>>> SHIFT: shift (TIME (time_unit))
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
This command shifts data values or observation times by a known, constant value shift. The
effect of this command is a direct data manipulation which must be distinguished from
estimating an unknown constant that shifts the model output (see command >> SHIFT).
If keyword TIME is present on the command line, the constant shift is added to all
observation times of the corresponding data set. This is useful to adjust the reference time of
the data record if it is different from the starting time of the simulation (TOUGH2 variable TSTART).
The time unit can be selected using one of the time_unit keywords.
If keyword TIME is not present, shift is added to all observed values.
The units of shift must match those of the observed data (see command >>>> FACTOR).
Example:
> OBSERVATION
>> PRESSURE
>>> ELEMENT: GUG_0
>>>> ANNOTATION: rel. pres.
>>>> conversion FACTOR: 1000.0 [kPa] -> [Pa]
>>>> SHIFT: -100.0 [kPa] subtract atmospheric pressure from data
>>>> SHIFT TIMES as well by: 1.25 DAYS
>>>> relative pressure DATA in [HOURS] and [kPa] follow:
30.0 101.314
30.5 101.789
31.0 102.113
.... .......
>>>> standard DEVIATION: 0.5 [kPa]
<<<<
<<<
<<
See Also:
>> SHIFT, >>>> FACTOR
@@@
Syntax: >>> SIMPLEX
Parent Command:
>> OPTION
Subcommand:

Description:
This command selects the downhill simplex method to minimize the objective
function. The downhill simplex method does not calculate derivatives and
can therefore be used for discontinuous objective functions.
The initial simplex is defined by the initial parameter set p0 and n
additional points given by :
p_i = p_0 + sigma_i * e_i,   i = 2, ..., (n+1)
where sigma is the parameter variation given by commands >>>> VARIATION or
>>>> DEVIATION (p), and the e's are n unit vectors along the parameter axis.
At the end of minimization by means of the simplex algorithm,
the Jacobian matrix is evaluated to allow for a standard error analysis.
The simplex algorithm is most useful in connection with the L1-estimator
and discontinuous cost functions.
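The construction of the initial simplex can be sketched in Python (the starting point p0 and the parameter variations sigma are hypothetical values):

```python
import numpy as np

# Initial simplex as described above: the starting point p0 plus n
# additional vertices, each offset along one parameter axis by the
# corresponding parameter variation sigma_i.
p0 = np.array([1.0, 10.0, 100.0])
sigma = np.array([0.1, 1.0, 5.0])   # from >>>> VARIATION / >>>> DEVIATION

n = len(p0)
vertices = np.vstack([p0] + [p0 + sigma[i] * np.eye(n)[i] for i in range(n)])
print(vertices)   # (n+1) x n array defining the initial simplex
```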
Example:
> COMPUTATION
>> OPTION
>>> use SIMPLEX algorithm to minimize objective function
<<<
<<
See Also:
>>> ANNEAL, >>> GAUSS-NEWTON, >>> GRID SEARCH,
>>> LEVENBERG-MARQUARDT
@@@
Syntax: >>> SIMULATION: mtough2
Parent Command:
>> CONVERGE
Subcommand:

Description:
This command limits the number of TOUGH2 simulations performed during an inversion.
A rough estimate of the total number of TOUGH2 simulations during an inversion can be made
given the number of iterations requested (see command >>> ITERATION) and the type of
finite difference quotients calculated (see commands >>> FORWARD and >>> CENTERED).
However, there is an unknown number of unsuccessful steps that cannot be accounted for.
The maximum number of TOUGH2 simulations may also be limited in order to stop and
examine a specific simulation that leads to convergence problems (see command >>> INCOMPLETE).
If performing Monte Carlo simulations (see command >>> MONTE CARLO), mtough2 is
the number of parameter sets generated and thus the number of realizations of the calculated system output.
Example:
> COMPUTATION
>> STOP after
>>> a total of : 500 TOUGH2 SIMULATIONS
<<<
>> ERROR analysis
>>> MONTE CARLO
<<<
<<
See Also:
>>> ITERATION, >>> MONTE CARLO, >>> INCOMPLETE
@@@
Syntax: >>> SINK: sink_name (sink_name_i ...) (+ iplus)
Parent Command:
all second-level commands in block > PARAMETER or > OBSERVATION referring to sink names
Subcommand:
all fourth-level commands in block > PARAMETER or > OBSERVATION
Description:
(synonym for command >>> SOURCE)
Example:
(see command >>> SOURCE)
See Also:
>>> SOURCE
@@@
Syntax: >> SKIN
Parent Command:
> PARAMETER
Subcommand:
>>> MATERIAL
Description:
This command selects as a parameter the skin zone radius around a well. In general, the
radial thickness of the skin zone is predefined by the discretization of the flow region.
In order to estimate the skin zone radius, the TOUGH2 mesh must be generated using the
MESHMAKER utility. Furthermore, two zones must be generated using logarithmically
increasing radial distances (option LOGAR). The skin radius is defined by the parameter
RLOG after the first appearance of keyword LOGAR. In addition, an I5 integer must be
provided after NRAD, NEQU, and NLOG, indicating the material type number for the
corresponding grid blocks (see example below). Estimation of a skin zone radius may be
unstable if changing the discretization significantly affects the modeling results.
Example:
The following MESHMAKER block is from a TOUGH2 input file. A radial mesh is
generated with a borehole of radius 0.1 m (material type 1), a skin zone of material type 2
and initial radius of 0.3 m, partitioned into 20 grid blocks, and an outer zone of material type
3 which extends to a radial distance of 10.0 m. The thickness of the layer is 1.0 m.
MESHMAKER
RZ2D
RADII
2 1
0.000E+00 0.100E+00
LOGAR
20 2 0.300E+00 0.100E01
LOGAR
80 3 1.000E+01
LAYER
1
0.100E+01
The corresponding parameter block in the iTOUGH2 input file reads:
> PARAMETER
>> SKIN zone radius
>>> MATERIAL: SKINZ (= material name 2 in block ROCKS)
>>>> ANNOTATION: Skin radius
>>>> VALUE
>>>> initial guess: 0.3
>>>> RANGE : 0.12 1.00
<<<<
<<<
<<
See Also:

@@@
Syntax: >>>> SKIP: nskip
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
(synonym for command >>>> HEADER)
Example:
(see command >>>> HEADER)
See Also:
>>>> HEADER
@@@
Syntax: >>> SOURCE: source_name (source_name_i ...) (+ iplus)
or
>>> SINK: sink_name (sink_name_i ...) (+ iplus)
Parent Command:
all second-level commands in block > PARAMETER or > OBSERVATION referring to source names.
Subcommand:
all fourth-level commands in block > PARAMETER or > OBSERVATION, respectively
Description:
This command reads one or more sink/source code names (TOUGH2 variable SOURCE) or
element names associated with a sink or source (TOUGH2 variable ELEG) as specified in the
TOUGH2 block GENER. Both parameters and observations may refer to sources.
Sources and source elements are designated by a three-character/two-integer (FORTRAN format: AAAII)
code name. Blanks in the names as printed in the TOUGH2 output file must be replaced
by underscores (e.g., a source or element name specified in the TOUGH2 input file as
'B 007' is printed as 'B 0 7' in the TOUGH2 output file. Therefore, it must be
addressed in the iTOUGH2 input file as 'B_0_7').
Multiple names can be specified. A sequence of iplus elements can be generated by
increasing the number of the last element. The following two command lines are identical:
>>> SOURCE: WEL_1 WEL15 +3
>>> SOURCE: WEL_1 WEL15 WEL16 WEL17 WEL18
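The "+ iplus" shorthand can be sketched in Python (an illustration of the naming convention, not iTOUGH2's actual parser; preserving the digit width of the trailing number is an assumption):

```python
import re

def expand(names, iplus):
    """Expand a name list by incrementing the trailing number of the
    last name iplus times, preserving its digit width."""
    stem, num = re.match(r"(.*?)(\d+)$", names[-1]).groups()
    extra = [f"{stem}{int(num) + k:0{len(num)}d}" for k in range(1, iplus + 1)]
    return list(names) + extra

print(expand(["WEL_1", "WEL15"], 3))
# ['WEL_1', 'WEL15', 'WEL16', 'WEL17', 'WEL18']
```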
Example:
> PARAMETER
>> determine ENTHALPY of injected fluid...
>>> SOURCE: WEL_1
>>>> LOGARITHM
<<<<
<<<
<< ... based on...
> OBSERVATION
>> ... the total VAPOR PRODUCTION in the extraction wells
>>> SINK : EXT_1 + 3
>>>> DATA on FILE : steam.dat
>>>> RELATIVE error: 10 %
<<<<
<<<
See Also:

@@@
Syntax: >>> STEADY-STATE (SAVE) (: max_time_step)
Parent Command:
>> OPTION
Subcommand:

Description:
This command allows TOUGH2 to reach steady state prior to or after a transient simulation.
This option can be used to calibrate against steady-state data (alone or in combination with
transient data), or to reach an equilibrium state that serves as the initial condition for a subsequent transient simulation.
Note that a transient TOUGH2 simulation must be performed to reach steady state, which is
usually indicated by one of the following convergence failures:
- too many time steps converged on a single Newton-Raphson iteration (see command >>> CONSECUTIVE);
- convergence failure followed by two time steps converging on a single Newton-Raphson iteration;
- too many time step reductions (see command >>> REDUCTION);
- maximum number of time steps reached (TOUGH2 variable MCYC);
- maximum simulation time reached (TOUGH2 variable TIMAX);
- maximum time step size reached (use a colon on the command line followed by
max_time_step to select a maximum time step size; max_time_step must be
smaller than TOUGH2 variable DELTMX).
iTOUGH2 usually stops if a TOUGH2 run terminates due to one of the above-mentioned
convergence failures (see command >>> INCOMPLETE). If command >>> STEADY-STATE is present, however,
it accepts convergence failures and proceeds with the inversion.
Inversions involving steady-state runs can be greatly accelerated by using keyword SAVE
on the command line. After the first steady-state run has been completed, iTOUGH2 takes
the steady-state conditions stored on file SAVE as the initial condition (file INCON) for the
next TOUGH2 run, sets the initial time step to the (usually large) last time step of the
previous simulation, and updates the parameters. Since the parameter set changes only slightly between
subsequent TOUGH2 runs, the new steady state is usually reached within a few additional
time steps. This option should not be used if also calibrating against transient data preceding
the steady-state data point.
If calibration against steady-state data is to be performed, a steady-state calibration time tINF
should be specified at a very late point in time (i.e., beyond the transient phase of the simulation), and the same
time should be assigned to the observed steady-state data point in block >>>> DATA. (In addition to the steady-state point,
one may also specify calibration points during the transient phase.)
The TOUGH2 simulation proceeds until a convergence failure occurs at an unknown point in time tCF.
The primary variables at that point are written to file SAVE and reused as initial conditions for the subsequent TOUGH2 run
(only if keyword SAVE is present). Then, the calculated system state at time tCF is compared to the steady-state data point
at time tINF (make sure that tINF is always greater than tCF, e.g., set tINF = 1.0E20).
The second application of the steady-state option is the following.
If a steady-state regime is to precede a transient regime within a single simulation
(e.g., to ensure that the initial conditions are at equilibrium, a state that depends on the parameters to be
estimated), a negative starting time (TOUGH2 variable TSTART) must be specified. Ensure that its
absolute value is larger than the time required to reach steady state, e.g., TSTART = -1.0E20. The TOUGH2
simulation proceeds until a convergence failure occurs at an unknown point in time tCF, establishing the steady-state regime.
The primary variables at that point are written to file SAVE to be used as initial conditions for the
subsequent TOUGH2 run (only if keyword SAVE is present). The simulation time is then set to zero, and
the transient regime of the simulation is initiated. This requires that boundary
conditions be changed at time zero, e.g., by starting injection or withdrawal, or by changing
Dirichlet-type boundary conditions (see command >> RESTART TIME).
(Note that if transient data are combined with steady-state data, the standard deviation of the
steady-state data point may have to be decreased substantially to outweigh the large number of transient points.)
Example:
> OBSERVATION
>> steady-state point in TIME: 1 (= t_inf)
1.0E20
>> PRESSURE
>>> ELEMENT: FDF76
>>>> DATA
1.0E20 1.354E5
>>>> DEVIATION: 0.05E5
<<<<
<<<
<<
> COMPUTATION
>> OPTION
>>> allow STEADY-STATE, use SAVE file for restart
<<<
>> CONVERGEnce criteria
>>> Presume steady-state if : 5 CONSECUTIVE time steps
converge on ITER=1
>>> number of ITERATIONS: 5
<<<
<<
<
See Also:
>> RESTART TIME, >>> CONSECUTIVE, >>> INCOMPLETE, >>> REDUCTION
@@@
Syntax: >>> STEP: max_step (UNSCALED)
Parent Command:
>> CONVERGENCE
Subcommand:

Description:
This command defines the maximum allowable size of the scaled (x = p, default)
or unscaled (x = 1, keyword UNSCALED) update vector delta(p) per iTOUGH2 iteration.
The step length of the scaled parameter update vector is defined as follows:
|delta(p)| = [ Sum_i (dp_i / x_i)^2 ]^(1/2)
This is a global step-size limitation, as opposed to the one specified for each individual
parameter (see command >>>> STEP). Limiting the global step size may make the
inversion more stable. Parameter max_step should be chosen large enough that the
proposed step delta(p') is reduced to the maximum length only during the first few
iterations.
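The limitation can be sketched in Python (the update vector dp and scaling x are hypothetical; note that shrinking preserves the direction of the step):

```python
import numpy as np

# Global step-size limit: if the scaled norm of the proposed update dp
# exceeds max_step, dp is shrunk while its direction is preserved
# (x = p for the scaled norm, x = 1 for keyword UNSCALED).
def limit_step(dp, x, max_step):
    norm = np.sqrt(np.sum((dp / x) ** 2))
    return dp * (max_step / norm) if norm > max_step else dp

dp = np.array([3.0, 4.0])
x = np.ones(2)                   # unscaled norm: |dp| = 5.0
print(limit_step(dp, x, 1.0))    # shrunk to unit length
```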
Example:
> COMPUTATION
>> CONVERGENCE
>>> number of ITERATIONS : 10 (may need more iterations!)
>>> limit scaled STEP size to: 1.0 [dimensionless]
<<<
<<
See Also:
>>>> STEP
@@@
Syntax: >>>> STEP (There are two fourth-level commands >>>> STEP. Check parent command.)
Parent Command 1:
all third-level commands in block > PARAMETER
Syntax:
>>>> STEP: max_step
Subcommand:

Description:
This command defines the maximum allowable step size, limiting the change in the
corresponding parameter per iTOUGH2 iteration. Large steps are usually proposed for a
parameter if (i) the initial guess is far away from the best estimate, (ii) the parameter has a
low sensitivity, or (iii) the parameter is highly correlated to more sensitive parameters. If the
system state is a strongly nonlinear function of the parameter, even a moderate step size may
make the inversion unstable. Parameter max_step should be chosen large enough that
the proposed step delta(p') is reduced to delta(pmax) only during the first few
iterations. The figure below illustrates that limiting the step size also changes the direction of
the step taken in the parameter space. A global step size limitation can also be specified (see command >>> STEP),
maintaining the direction of vector delta(p).
Example:
> PARAMETER
>> POROSITY
>>> MATERIAL: SANDY
>>>> standard DEVIATION: 0.10
>>>> max. STEP size : 0.05
<<<<
<<<
<<
See Also:
>>> STEP, >>>> DEVIATION, >>>> RANGE
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
>>> ANNEAL
Syntax:
>>>> STEP: max_step
Subcommand:

Description:
This command defines an iteration for Simulated Annealing minimization based on the number of successful steps or
total number of steps as defined by max_step. After each iteration is completed, the temperature is reduced
according to the annealing schedule (see command >>>> SCHEDULE).
An iteration is completed and the annealing temperature reduced if:
(1) the total number of successful and unsuccessful steps reaches max_step, or
(2) (0.2 * max_step) successful steps have been performed.
Example:
> COMPUTATION
>> OPTION
>>> Simulated ANNEALing
>>>> maximum number of ITERATIONS: 100
>>>> maximum number of STEPS per ITERATION: 50
>>>> annealing SCHEDULE: 0.9
>>>> initial TEMPERATURE: 0.02
<<<<
<<<
<<
See Also:
>>>> SCHEDULE
@@@
Syntax: >> STOP
Parent Command:
> COMPUTATION
Subcommand:
(see command >> CONVERGE)
Description:
(synonym for command >> CONVERGE)
Example:
(see command >> CONVERGE)
See Also:
>> CONVERGE
@@@
Syntax: >>>> SUM
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
If multiple elements or connections are provided to indicate the location of a measurement
point, this command takes the sum of all calculated values as the model output to be
compared to the data. The user must ensure that the summation of the quantity is meaningful.
Example:
> OBSERVATION
>> LIQUID FLOW rate
>>> CONNECTION: A1__1 A2__1 + 7
>>>> ANNOTATION: Liquid flow across boundary
>>>> Take SUM of flow rates across 9 connections
>>>> NO DATA
<<<<
<<<
<<
See Also:
>>>> AVERAGE
@@@
Syntax: >>> TAU: (-)niter
Parent Command:
>> ERROR
Subcommand:

Description:
If multiple observation types (e.g., pressure, flow rate, prior information, etc.) are used to
estimate model parameters, the relative weight assigned to each type is given by the ratio
lambda_ij = tau_i / tau_j
where the tau_i are scalars such that C_i = tau_i * V_i. Here, C_i is the covariance matrix of
all observations of type i (a submatrix of covariance matrix C_zz), and V_i is a positive
symmetric matrix. By default, tau_i is fixed at 1. This command allows lambda to be updated
in an iterative process, where tau_i is recalculated whenever the iteration count is a multiple of
niter and is given by:
tau_i = (r_i^T * C_i^(-1) * r_i) / m_i
where r_i is the residual vector and m_i the number of observations of type i.
If a negative number is provided for niter, prior information is excluded from the process.
Recall that tau_i refers to all data of a certain observation type (i.e., not to individual data sets).
The process assigns weights such that the relative contribution of each observation type to the
objective function tends to 1/omega, where omega is the number of observation types used in the
inversion. It should also be realized that the Fisher model test becomes meaningless if lambda is
updated during the inversion, because the test will always be fulfilled by definition.
While this option provides flexibility in reassigning relative weights to data of different types,
it is instead suggested to carefully select the standard deviations of the observations and prior
parameter estimates (see command >>>> DEVIATION (p/o)) based on potential measurement and modeling errors.
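For a single observation type, the reweighting formula can be sketched in Python (residuals and covariance matrix are hypothetical numbers):

```python
import numpy as np

# tau_i = r_i^T C_i^-1 r_i / m_i for one observation type i, with
# hypothetical residuals r and prior covariance C of the m observations.
r = np.array([0.2, -0.1, 0.3])
C = np.diag([0.04, 0.04, 0.04])     # sigma = 0.2 for each observation

m = len(r)
tau = float(r @ np.linalg.solve(C, r)) / m
print(tau)
```

A tau above 1 indicates that the residuals of this type are larger than its prior covariance suggests, so its relative weight is reduced accordingly.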
Example:
> COMPUTATION
>> ERROR
>>> update TAU every : 3 rd iteration
<<<
See Also:
>>>> DEVIATION
@@@
Syntax: >> TEMPERATURE
Parent Command:
> OBSERVATION
Subcommand:
>>> ELEMENT
Description:
This command selects temperature as an observation type. This observation type refers to an element.
Example:
> OBSERVATION
>> TEMPERATURE
>>> ELEMENT : TCP_1
deg(C) = (deg(F) - 32) * 0.55
>>>> SHIFT : -32.00
>>>> FACTOR : 0.55
>>>> temperature DATA in deg(F) on file: temp.dat
>>>> DEVIATION: 0.2 degree Fahrenheit
<<<<
<<<
<<
See Also:

@@@
Syntax: >>>> TEMPERATURE: (-)temp0
Parent Command:
>>> ANNEAL
Subcommand:

Description:
This command defines the initial temperature tau0 for Simulated Annealing minimization.
The temperature is reduced after each iteration k according to the annealing schedule (see command >>>> SCHEDULE).
The temperature tau defines the probability P with which an uphill step delta(S) is accepted:
P = exp(-delta(S)/tau)
The probability decreases with decreasing temperature. The initial temperature temp0 should
be a reasonable fraction of the objective function S. If a positive value is given, then
tau0 = temp0. If a negative value is provided, the initial temperature is internally calculated as
a fraction of the initial objective function S0:
tau0 = |temp0| * S0
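The acceptance rule implied by this probability can be sketched in Python (a generic Metropolis-style sketch, not iTOUGH2's actual implementation; the always-accept rule for downhill steps is an assumption):

```python
import math
import random

# Downhill steps (dS <= 0) are taken; an uphill step of size dS is
# accepted with probability P = exp(-dS / tau), which shrinks as the
# temperature tau is lowered by the annealing schedule.
def accept(dS, tau, rng=random.random):
    if dS <= 0.0:
        return True
    return rng() < math.exp(-dS / tau)

hot = math.exp(-0.5 / 2.0)     # high tau: uphill step often accepted
cold = math.exp(-0.5 / 0.02)   # low tau: practically never accepted
print(hot, cold)
```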
Example:
> COMPUTATION
>> OPTION
>>> Simulated ANNEALing
>>>> maximum number of ITERATIONS: 100
>>>> maximum number of STEPS per ITERATION: 50
>>>> annealing SCHEDULE: 0.95
>>>> initial TEMPERATURE: 0.01 times the initial obj. func.
<<<<
<<<
<<
See Also:
>>>> STEP (a), >>>> SCHEDULE
@@@
Syntax: >> TIME (There are two second-level commands >> TIME. Check parent command.)
Parent Command 1:
> PARAMETER
Syntax:
>> TIME
Subcommand:
>>> SOURCE
Description:
This command selects as a parameter the time of a time-dependent generation rate (TOUGH2 variable F1(L) for LTAB>1).
This parameter refers to a sink/source code name. Index L must be provided through command >>>> INDEX.
Time stepping during simulation should be small compared to the expected variation of the parameter.
Furthermore, it is suggested to prescribe the parameter perturbation for calculation of the Jacobian, i.e.,
use command >>>> PERTURB and specify a reasonably large time perturbation provided as a negative number.
Example:
> PARAMETER
>> generation TIME
>>> SOURCE: INJ_1 + 5
>>>> ANNOTATION: End of spill
>>>> INDEX : 2
>>>> VALUE
>>>> RANGE : 1E8 2E8 [sec]
>>>> PERTURB : -1E5 [sec]
>>>> max. STEP : 86400 [sec]
<<<<
<<<
<<
See Also:
>> ENTHALPY, >> RATE
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Parent Command 2:
> OBSERVATION
Syntax:
>> TIME: ntime (EQUAL/LOGARITHMIC) (time_unit)
Subcommand:

Description:
This command selects points in time during a TOUGH2 run at which the calculated system response is used for
subsequent iTOUGH2 analysis. In the case of inverse modeling, these are the calibration
times at which calculated and observed system responses are compared.
Time stepping in the TOUGH2 simulation is forced to exactly match all specified calibration
times, i.e., defining calibration times has the same effect on the simulation as specifying
printout times in the TOUGH2 block TIMES (printout can be suppressed, however, by
specifying a negative integer for TOUGH2 parameter KDATA). Thus, time stepping
considerations must be taken into account when selecting the spacing of calibration points
(see also command >>> CONSECUTIVE).
Only one set of calibration times can be specified which is then applied to all data sets (see,
however, command >>>> WINDOW).
Calibration times do not have to match the times at which measurements are available since the value
of the observed data at the calibration point will be linearly interpolated between adjacent data points.
Calibration times can be specified in three ways: (i) explicitly listed, (ii) automatically
generated with equal spacing, and (iii) automatically generated with logarithmic spacing.
The corresponding input formats are given by example below.
Admissible keywords for selecting the time units time_unit are SECOND, MINUTE, HOUR, DAY, WEEK and YEAR.
>> (i) provide a list of : 10 TIMES in [MINUTES] on the following lines
1.0 2.0 5.0 10.0 15.0 20.0
30.0 45.0 60.0 120.0 180.0 360.0
In the example above, the first ntime=10 numbers will be read in free format and interpreted
as calibration times in minutes.
>> (ii) generate :10 points TIMES, EQUALLY spaced between tA and tB
60.0 600.0
tA tB
10 equally spaced points in time will be generated between 60.0 and 600.0 seconds. Time
limits tA and tB are read in free format.
>> (iii) generate :10 , LOGARITHMICALLY spaced points TIMES in [HOURS]
0.01 12.0
10 logarithmically spaced points in time will be generated between 36 seconds and 12 hours.
The three formats can be used repeatedly, concurrently, and with overlapping time periods
(see example below). All specified times will be sorted internally.
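The two automatic generation formats can be sketched in Python (an illustration only; that both endpoints tA and tB are included among the ntime generated times is an assumption consistent with the hours-to-seconds example above):

```python
# ntime calibration times with equal or logarithmic spacing between
# tA and tB, both endpoints included.
def equal_times(ntime, tA, tB):
    dt = (tB - tA) / (ntime - 1)
    return [tA + i * dt for i in range(ntime)]

def log_times(ntime, tA, tB):
    ratio = (tB / tA) ** (1.0 / (ntime - 1))
    return [tA * ratio ** i for i in range(ntime)]

print(equal_times(10, 60.0, 600.0))          # 60, 120, ..., 600 seconds
print(log_times(10, 0.01, 12.0)[0] * 3600)   # first point: 0.01 h = 36 s
```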
Example:
The following block generates a total of 11 calibration times.
> OBSERVATION
>> : 6 LOGARITHMICALLY spaced TIMES in SECONDS between
30.0 720.0 (tA and tB)
>> TIME: 5 EQUALLY spaced [MINUTES]
10.0 30.0 (tC and tD)
See Also:
>> RESTART TIME, >>> CONSECUTIVE, >>>> WINDOW
@@@
Syntax: >>> time_unit
or as a keyword to any command requesting time units
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command or keyword specifies the time units of all times in the iTOUGH2 output and
plot files. Admissible time units are SECOND (default), MINUTE, HOUR, DAY, WEEK,
MONTH, and YEAR. The time output unit may be different from the time units specified in
the input or data files.
Example:
> OBSERVATION
>> : 7 EQUALLY spaced TIMES in [DAYS]
1.0 7.0
>> PRESSURE
>>> ELEMENT: ZZZ90
>>>> DATA in [MINUTES] on FILE: pres.dat
>>>> time WINDOW: 48.0 123.0 [HOURS]
<<<<
<<<
<<
> COMPUTATION
>> print OUTPUT times
>>> in WEEKS
<<<
<<
See Also:

@@@
Syntax: >> TOLERANCE
Parent Command:
> COMPUTATION
Subcommand:
(see command >> CONVERGE)
Description:
(synonym for command >> CONVERGE)
Example:
(see command >> CONVERGE)
See Also:
>> CONVERGE
@@@
Syntax: >> TOTAL MASS (phase_name/PHASE: iphase/ comp_name/COMPONENT: icomp) (CHANGE)
Parent Command:
> OBSERVATION
Subcommand:
>>> MATERIAL
>>> MODEL
Description:
This command selects as an observation type the total mass of phase iphase or component
icomp. This observation type refers to the entire model domain or to the subdomain
defined by a list of rock types.
Component number icomp or component name comp_name, and phase number iphase or
phase name phase_name depend on the EOS module being used. They are listed in the header
of the iTOUGH2 output file, and can be specified either on the command line or using the
two subcommands >>>> COMPONENT and >>>> PHASE, respectively.
If keyword CHANGE is present, the change of the total mass since data initialization is computed.
Example:
> OBSERVATION
>> CHANGE of TOTAL MASS in place since beginning of simulation
>>> entire MODEL
>>>> ANNOTATION : Total brine mass injected
>>>> BRINE is the COMPONENT of interest
>>>> FACTOR : 1.0E-03 [g] --> [kg]
>>>> DATA on FILE : brine.dat
>>>> DEVIATION : 10.0 [g]
<<<<
<<<
>> TOTAL MASS of PHASE: 2 (= net weight of laboratory column)
>>> MATERIAL domains : FRACT MATRI
>>>> ANNOTATION : Mass of liquid (water and brine)
>>>> DATA on FILE : liquid.dat
>>>> DEVIATION : 0.01 [kg]
<<<
<<
See Also:
>> VOLUME
@@@
Syntax: >>>> UNIFORM
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command generates uniformly distributed input parameters for Monte Carlo simulations.
Parameter values are sampled between the lower and upper bound specified by command >>>> RANGE.
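What >>>> UNIFORM implies for Monte Carlo sampling can be sketched as follows (a minimal Python illustration, not iTOUGH2 code; the function name and seed are assumptions made for this sketch):

```python
import random

def sample_uniform(lower, upper, n_runs, seed=42):
    # Draw one uniformly distributed parameter value per Monte Carlo run,
    # bounded by the limits given after command >>>> RANGE
    # (here: residual liquid saturation between 0.01 and 0.50).
    rng = random.Random(seed)
    return [rng.uniform(lower, upper) for _ in range(n_runs)]

# One sample per TOUGH2 run, e.g., 400 runs as in the example below:
samples = sample_uniform(0.01, 0.50, 400)
```

Each sampled value would then be assigned to the parameter for one forward run of the error propagation analysis.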
Example:
> PARAMETER
>> RELATIVE permeability function
>>> DEFAULT
>>>> PARAMETER RPD(: 1)
>>>> ANNOTATION : Slr is uncertain
>>>> sample from UNIFORM distribution between...
>>>> RANGE : 0.01 0.50
<<<<
<<<
<<
> COMPUTATION
>> perform ERROR propagation analysis by means of...
>>> MONTE CARLO simulations
<<<
>> STOP after...
>>> : 400 TOUGH2 runs
<<<
<<
See Also:
>>> MONTE CARLO, >>>> GAUSS
@@@
Syntax: >>> UPDATE
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command writes version update information to the iTOUGH2 output file.
Example:
> COMPUTATION
>> OUTPUT
>>> write UPDATE information to iTOUGH2 output file
<<<
<<
See Also:
>>> INDEX, >>> VERSION, HELP, LIST
@@@
Syntax: >>> UPHILL: max_uphill
Parent Command:
>> CONVERGE
Subcommand:

Description:
By default, an inversion is stopped if more than 10 consecutive unsuccessful parameter steps
have been proposed. This default can be overridden by max_uphill. Each unsuccessful step
results in an increase of the Levenberg parameter (see command >>> LEVENBERG),
leading to a smaller step size deflected towards the gradient.
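The effect of an increasing Levenberg parameter on the proposed step can be illustrated numerically (a Python/numpy sketch of the standard Levenberg-Marquardt step, not iTOUGH2 code; the Jacobian J and residual vector r are made-up values):

```python
import numpy as np

def levenberg_step(J, r, lam):
    # Proposed parameter step for Jacobian J, residual vector r, and
    # Levenberg parameter lam:  (J^T J + lam*I) dp = -J^T r
    JTJ = J.T @ J
    return np.linalg.solve(JTJ + lam * np.eye(JTJ.shape[0]), -J.T @ r)

J = np.array([[1.0, 0.2], [0.3, 2.0], [0.5, 0.1]])
r = np.array([0.4, -0.1, 0.2])

# Each unsuccessful (uphill) step increases lam; the step size shrinks
# and the step direction is deflected towards the negative gradient.
norms = [np.linalg.norm(levenberg_step(J, r, lam))
         for lam in (0.0, 1.0, 10.0, 100.0)]
```

The step norm decreases monotonically with the Levenberg parameter, which is why repeated uphill steps eventually stall the inversion and trigger the max_uphill stopping criterion.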
Example:
> COMPUTATION
>> STOP either if
>>> : 10 ITERATIONS have been completed
or
>>> : 3 TOUGH2 runs were INCOMPLETE
or
>>> : 5 unsuccessful UPHILL steps have been proposed
<<<
<<
See Also:
>>> ITERATION, >>> LEVENBERG
@@@
Syntax: >> USER (There are two second-level commands >> USER. Check parent command.)
Parent Command 1:
> PARAMETER
Syntax:
>> USER (: anno)
Subcommand:
>>> ELEMENT
>>> MATERIAL
>>> MODEL
>>> SET
>>> SOURCE
Description:
This command selects a user-specified parameter. This option enables the user to estimate any
conceivable parameter, i.e., any TOUGH2 variable or a function thereof. The user must
program the function in subroutine USERPAR, file it2user.f. Identification of and
details about the parameter can be given in the iTOUGH2 input file and will be transferred to
subroutine USERPAR. A parameter annotation anno should be provided following a colon
on the command line. This annotation (or a substring thereof) will be available in subroutine
USERPAR (variable ANNO) and can be used to identify the parameter type. The significant
part of the string should therefore not be changed by command >>>> ANNOTATION.
Furthermore, multiple material names as well as grid block or sink/source code names
defined by the corresponding third-level command will be transferred to the subroutine in
array NAMEA. Integers read after command >>>> INDEX are provided through array
IDA. A flag (variable IVLF) indicates whether the estimate is a value, logarithm, or
multiplication factor of the corresponding parameter.
The user must ensure that all TOUGH2 variables used by the function are transferred to
subroutine USERPAR via COMMON blocks. If a variable is not predefined in one of the
standard COMMON blocks, a new COMMON block must be created and added to the
include file usercom.inc.
Subroutine USERPAR has two major blocks. In the first block (IUIG=1), the value
specified in the TOUGH2 input file is transferred to iTOUGH2 as an initial guess.
Programming this first part is optional because initial guesses can also be provided through
the iTOUGH2 input file by means of command >> GUESS, >>>> PRIOR, or >>>> GUESS.
Programming the second part (IUIG=2) is mandatory. In this part, the parameter is updated,
i.e., the generic parameter value calculated by iTOUGH2, which is stored in variable XX,
must be assigned to the appropriate TOUGH2 variable.
The following is the header of subroutine USERPAR in file it2user.f, describing the transfer
variables. File it2user.f must be recompiled before the user-specified parameter becomes active.
************************************************************************
SUBROUTINE USERPAR(IUIG,XX,IVLF,IDA,NAMEA,ANNO)
************************************************************************
* User specified parameters *
* IUIG = 1 : Provide initial guess (input) *
* = 2 : Update parameter *
* XX = iTOUGH2 variable = parameter to be estimated *
* (output if IUIG=1, input if IUIG=2) *
* IVLF = 1: value (input) *
* = 2: logarithm *
* = 3: factor *
* IDA = parameter IDs (if needed) (input) *
* the number of elements in IDA is stored in IDA(MAXR1) *
* NAMEA = material or element names (if needed) (input) *
* the number of elements in NAMEA is stored in IDA(MAXR) *
* ANNO = parameter annotation as specified in iTOUGH2 input file *
************************************************************************
Example:
In this simple example, the tortuosity factor (TOUGH2 variable TORT(NMAT)) is treated as
a user-specified parameter. Variable TORT is provided through COMMON block SOLI11 in
include file rock.inc. The parameter block in the iTOUGH2 input file and subroutine
USERPAR are given below.
> PARAMETER
>> USER-specified parameter: TORTUOSITY
>>> MATERIAL: SAND1
<<<<
<<<
<<
************************************************************************
SUBROUTINE USERPAR(IUIG,XX,IVLF,IDA,NAMEA,ANNO)
************************************************************************
C
C$$$$$$$$$ COMMON BLOCKS FOR ROCK PROPERTIES $$$$$$$$$$$$$$$$$$$$$$$$$$$
INCLUDE 'rock.inc'
C
CHARACTER ANNO*(*),NAMEA*5
DIMENSION IDA(*),NAMEA(*)
C
CALL GETNMAT(NAMEA(1),NMAT)
IF (IUIG.EQ.1) XX=TORT(NMAT)
IF (IUIG.EQ.2) TORT(NMAT)=XX
END
************************************************************************
The following is a more general version of subroutine USERPAR for the determination of
tortuosity. It allows specifying multiple rock types after command >>> MATERIAL,
and offers the possibility to estimate tortuosity either as a value, a logarithm, or a
multiplication factor. Note that identifying the parameter by its annotation is only necessary
if other user-specified parameters are also addressed in the same subroutine.
************************************************************************
SUBROUTINE USERPAR(IUIG,XX,IVLF,IDA,NAMEA,ANNO)
************************************************************************
C
C$$$$$$$$$ PARAMETERS FOR SPECIFYING THE MAXIMUM PROBLEM SIZE $$$$$$$$$$
INCLUDE 'maxsize.inc'
C
C$$$$$$$$$ COMMON BLOCKS FOR ROCK PROPERTIES $$$$$$$$$$$$$$$$$$$$$$$$$$$
INCLUDE 'rock.inc'
C
CHARACTER ANNO*(*),NAMEA*5
DIMENSION IDA(*),NAMEA(*),X0(MAXROC)
C
C  Return TOUGH2 value as initial guess
IF (IUIG.EQ.1) THEN
IF (ANNO(1:10).EQ.'TORTUOSITY') THEN
DO 1000 I=1,IDA(MAXR)
CALL GETNMAT(NAMEA(I),NMAT)
IF (IVLF.EQ.1) THEN
XX=TORT(NMAT)
ELSE IF (IVLF.EQ.2) THEN
XX=LOG10(TORT(NMAT))
ELSE
XX=1.0D0
X0(NMAT)=TORT(NMAT)
ENDIF
1000 CONTINUE
ENDIF
C
C  Update parameter
ELSE IF (IUIG.EQ.2) THEN
C  Identify parameter by its annotation
IF (ANNO(1:10).EQ.'TORTUOSITY') THEN
DO 1001 I=1,IDA(MAXR)
C  Obtain material identifier NMAT
CALL GETNMAT(NAMEA(I),NMAT)
IF (IVLF.EQ.1) THEN
C  Estimate is tortuosity value
TORT(NMAT)=XX
ELSE IF (IVLF.EQ.2) THEN
C  Estimate is logarithm of tortuosity
TORT(NMAT)=10.0D0**XX
ELSE
C  Estimate is multiplication factor for initial tortuosity
TORT(NMAT)=X0(NMAT)*XX
ENDIF
1001 CONTINUE
ENDIF
ENDIF
END
************************************************************************
Additional examples can be found in file it2user.f .
See Also:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Parent Command 2:
> OBSERVATION
Syntax:
>> USER (: anno)
Subcommand:
>>> CONNECTION
>>> ELEMENT
>>> MODEL
>>> SOURCE
Description:
This command selects a user-specified observation type. This option enables the user to
introduce a new observation type, i.e., any type of data can be used for parameter estimation
by defining the corresponding model output, which is a user-specified function of TOUGH2
variables. The user must program the function in subroutine USEROBS, file it2user.f.
Identification of and details about the observation can be given in the iTOUGH2 input file
and will be transferred to subroutine USEROBS. The annotation anno (or a substring
thereof) will be available in subroutine USEROBS (variable ANNO); it can be used to
identify the observation type and data set. The significant part of the string should therefore
not be changed by subcommand >>>> ANNOTATION. Multiple grid block names or
sink/source code names defined by the appropriate third-level command will be transferred to
the subroutine in array GRIDA. The corresponding element numbers are stored in array
NECA, with INEC pointing to the currently processed array index. Integer variables read
after command >>>> INDEX are provided through array IOBSA. On output, subroutine
USEROBS returns the user-specified model result TRESULT.
The user must ensure that all TOUGH2 variables used by the function are transferred to
subroutine USEROBS via COMMON blocks. If a variable is not predefined in one of the
standard COMMON blocks, a new COMMON block must be created and added to the include file usercom.inc.
The following is the header of subroutine USEROBS, describing the transfer variables.
File it2user.f must be recompiled before the user-specified observation becomes active.
************************************************************************
SUBROUTINE USEROBS(IUSER,IOBSA,GRIDA,NECA,INEC,ANNO,TRESULT)
************************************************************************
* Provides TOUGH2 result for user-specified observation type *
* IUSER : Number of dataset (input) *
* IOBSA : Array containing user specified IDs (input) *
* GRIDA : Array containing grid block names (input) *
* NECA : Array containing index of grid block or connection (input) *
* INEC : Current pointer to GRIDA and NECA, respectively (input) *
* ANNO : Annotation (input) *
* TRESULT: User-defined TOUGH2 result (output) *
************************************************************************
Example:
In this example, the pressure difference measured between two points in a laboratory column
is defined as a user-specified observation. Two differential pressures are measured,
referring to the liquid and NAPL phase, respectively. The iTOUGH2 input block and
subroutine USEROBS are given below. Variable IOBSA(MAXR2) holds the phase number.
> OBSERVATION
>> USER-specified observation type: Pres. Diff.
>>> difference between ELEMENTs: A1112 A1152
>>>> ANNOTATION : Pres. Diff. aqueous phase
>>>> PHASE : 2
>>>> DATA FILE : dp_aqu.dat
>>>> DEVIATION : 1000.0
<<<<
>>> difference between ELEMENTs: A1112 A1152
>>>> ANNOTATION : Pres. Diff. NAPL phase
>>>> NAPL PHASE
>>>> DATA FILE : dp_napl.dat
>>>> DEVIATION : 1000.0
<<<<
<<<
************************************************************************
SUBROUTINE USEROBS(IUSER,IOBSA,GRIDA,NECA,INEC,ANNO,TRESULT)
************************************************************************
C$$$$$$$$$ PARAMETERS FOR SPECIFYING THE MAXIMUM PROBLEM SIZE $$$$$$$$$$
INCLUDE 'maxsize.inc'
C$$$$$$$$$ COMMON BLOCKS FOR PRIMARY VARIABLES $$$$$$$$$$$$$$$$$$$$$$$$$
INCLUDE 'primary.inc'
C$$$$$$$$$ COMMON BLOCK FOR SECONDARY VARIABLES $$$$$$$$$$$$$$$$$$$$$$$$
INCLUDE 'second.inc'
C  Identify observation:
IF (ANNO(1:11).EQ.'Pres. Diff.') THEN
C  Reference and capillary pressure of first element:
NEC=NECA(1)
NLOC=(NEC-1)*NK1
NLOC2=(NEC-1)*NSEC*NEQ1+(IOBSA(MAXR2)-1)*NBK
PREF1=X(NLOC+1)
PCAP1=PAR(NLOC2+6)
P1=PREF1+PCAP1
C  Reference and capillary pressure of second element:
NEC=NECA(2)
NLOC=(NEC-1)*NK1
NLOC2=(NEC-1)*NSEC*NEQ1+(IOBSA(MAXR2)-1)*NBK
PREF2=X(NLOC+1)
PCAP2=PAR(NLOC2+6)
P2=PREF2+PCAP2
C  Take the difference
TRESULT=P2-P1
ENDIF
END
C  End of USEROBS
See Also:

@@@
Syntax: >>>> USER
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
Observed data can be represented as a user-specified function of time. The function must be
coded into subroutine USERDATA, file it2user.f. The following is the header of subroutine
USERDATA, describing the transfer variables. File it2user.f must be recompiled before the
user-specified data definition becomes active.
************************************************************************
SUBROUTINE USERDATA(IDF,TIME,ANNO,F)
************************************************************************
* User specified function to represent observed data *
* IDF : data set identifier (input) *
* TIME: time at which data are to be provided (input) *
* ANNO: annotation (input) *
* F : value of observed data at time TIME (output) *
************************************************************************
Example:
In this example, an analytical solution is provided as the data to be matched (see sample problem 8).
Liquid saturation in response to a liquid pulse is calculated as a function of time and
location in subroutine USERDATA (see following page). Note that the Z-coordinate is
passed to USERDATA through the annotation.
> OBSERVATIONS
>> LIQUID SATURATION
>>> GRID BLOCK : BP1_1
>>>> ANNOTATION : ANALYT. Z=0.100
>>>> DEVIATION : 0.002
>>>> analytical solution given by USERspecified function
<<<<
>>> GRID BLOCK : BZ1_1
>>>> ANNOTATION : ANALYT. Z=0.200
>>>> DEVIATION : 0.002
>>>> analytical solution given by USERspecified function
<<<<
<<<
<<
************************************************************************
SUBROUTINE USERDATA(IDF,TIME,ANNO,F)
************************************************************************
* User specified function to represent observed data *
* IDF : data set identifier (input) *
* TIME: time at which data are to be provided (input) *
* ANNO: annotation (input) *
* F : value of observed data at time TIME (output) *
************************************************************************
C
C$$$$$$$$$ PARAMETERS FOR SPECIFYING THE MAXIMUM PROBLEM SIZE $$$$$$$$$$
INCLUDE 'maxsize.inc'
C
C$$$$$$$$$ COMMON BLOCKS FOR SIMULATION PARAMETERS $$$$$$$$$$$$$$$$$$$$$
INCLUDE 'param.inc'
C
C$$$$$$$$$ COMMON BLOCKS FOR ROCK PROPERTIES $$$$$$$$$$$$$$$$$$$$$$$$$$$
INCLUDE 'rock.inc'
C
C$$$$$$$$$ COMMON BLOCK FOR SECONDARY VARIABLES $$$$$$$$$$$$$$$$$$$$$$$$
INCLUDE 'second.inc'
C
C$$$$$$$$$ COMMON BLOCKS FOR TOTAL MASS AND VOLUMES $$$$$$$$$$$$$$$$$$$$
INCLUDE 'rmasvol.inc'
C
C$$$$$$$$$ COMMON BLOCKS FOR ELEMENTS $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
INCLUDE 'elements.inc'
CHARACTER ANNO*(*)
SAVE A
F=0.0D0
C  Analytical solution, 1D pulse
IF (ICALL.EQ.1) A=CP(1,1)
IF (ANNO(1:10).EQ.'ANALYT. Z=') THEN
READ(ANNO(11:15),'(F5.3)',IOSTAT=IOS) Z
V=PER(3,1)*PAR(4)*GF/POR(1)/PAR(3)
DIFF=A*PER(3,1)/POR(1)/PAR(3)
XM=XPVOLU0(2)/POR(1)
F=XM/(DSQRT(4.0D0*3.1416D0*DIFF*TIME))*
& DEXP(-(Z-V*TIME)**2/(4.0D0*DIFF*TIME))
ENDIF
END
C  END of USERDATA
See Also:
>>>> DATA, >>>> POLYNOM
@@@
Syntax: >>>> VALUE
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command selects the estimation of the parameter value as opposed to its logarithm
(see command >>>> LOGARITHM (p)) or a multiplication factor (commands >>>> FACTOR (p) and >>>> LOG(F)).
Example:
> PARAMETER
>> parameter of RELATIVE permeability function
>>> MATERIAL: SAND1
>>>> estimate VALUE of ...
>>>> ANNOTATION : Resid. Liq. Sat.
>>>> PARAMETER no. : 1
<<<<
<<<
<<
See Also:
>>>> FACTOR (p), >>>> LOGARITHM (p), >>>> LOG(F)
@@@
Syntax: >>>> VARIANCE: sigma^2
Parent Command:
all third-level commands in blocks > PARAMETER and > OBSERVATION
Subcommand:

Description:
(see >>>> DEVIATION (p/o))
Example:
(see >>>> DEVIATION (p/o))
See Also:
>>>> DEVIATION (p/o)
@@@
Syntax: >>>> VARIATION: sigma
Parent Command:
all third-level commands in block > PARAMETER
Subcommand:

Description:
This command specifies the expected variation sigma of a parameter p. Sigma is used to scale the
columns of the Jacobian matrix, yielding dimensionless and comparable sensitivity
coefficients. While the solution of the inverse problem is not affected by the choice of the
scaling factor, all the qualitative sensitivity measures are directly proportional to sigma:
scaled sensitivity coefficient:  S_ij = (dz_i/dp_j) * (sigma_p_j / sigma_z_i) = J_ij * (sigma_p_j / sigma_z_i)
If no standard deviation (see >>>> DEVIATION (p)) or parameter variation is specified,
the scaling factor is taken to be 10 % of the respective parameter value.
For sensitivity analyses, sigma can be taken as the perturbation one would apply to study the
effect of the parameter on the modeling result.
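The scaling above can be sketched numerically (a Python/numpy illustration with invented numbers, not iTOUGH2 code): each column j of the Jacobian is multiplied by the expected parameter variation sigma_p_j, and each row i is divided by the standard deviation sigma_z_i of the corresponding observation.

```python
import numpy as np

def scaled_sensitivity(J, sigma_p, sigma_z):
    # S_ij = J_ij * sigma_p_j / sigma_z_i : dimensionless sensitivity
    # coefficients, comparable across parameters and observations.
    return J * sigma_p[None, :] / sigma_z[:, None]

# Made-up Jacobian: two observations (rows) vs. two parameters (columns),
# e.g., d(pressure)/d(porosity) and d(pressure)/d(log-permeability).
J = np.array([[2.0e5, -4.0],
              [1.0e5,  8.0]])
sigma_p = np.array([0.10, 1.00])   # expected parameter variations
sigma_z = np.array([1.0e3, 1.0e3]) # standard deviations of observations

S = scaled_sensitivity(J, sigma_p, sigma_z)
```

With the 10 % default mentioned above, sigma_p would simply be 0.1 times the current parameter values.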
Example:
> PARAMETER
>> POROSITY
>>> MATERIAL: FAULT
>>>> expected VARIATION : 0.10
<<<<
<<<
>> ABSOLUTE permeability
>>> MATERIAL: FAULT
>>>> expected VARIATION : 1.00 (one order of magnitude)
<<<<
<<<
<<
> COMPUTATION
>> OPTION
>>> SENSITIVITY analysis
<<<
<<
See Also:
>>>> DEVIATION (p)
@@@
Syntax: >>> VERSION
Parent Command:
>> OUTPUT
Subcommand:

Description:
This command prints version control statements at the end of the iTOUGH2 output file. The
same information is always written to the *.msg file. If you make code modifications, it is
strongly suggested that you update the version control statement at the beginning of each subroutine or function.
Example:
> COMPUTATION
>> OUTPUT
>>> VERSION control statements
<<<
On output:

iTOUGH2 Current version V3.0 (JULY 12, 1996)

iTOUGH2 1.0 1 SEPTEMBER 1992 First version is an adaptation of ITOUGH V1.0
iTOUGH2 1.1 1 JANUARY 1993 INOBSDAT,INPAIRED: Flexible time specification
1 JANUARY 1993 INERROR,MTCARLO: number of classes can be specified
1 FEBRUARY 1993 INUSER,USERPAR: user specified parameter
1 FEBRUARY 1993 INUSROBS,USEROBS: user specified observations
15 FEBRUARY 1993 PLOTFI,PLOTCHAR: Reformat plotfiles, plot characteristic curves
15 FEBRUARY 1993 PLOTCHAR: Plots characteristic curves
iTOUGH2 1.2 1 APRIL 1993 Add version for IBM RS/6000
26 MAY 1993 INANNEAL,ANNEAL: Simulated Annealing minimization
iTOUGH2 2.0 12 AUGUST 1993 File t2cg1.f: Conjugate gradient solvers added
iTOUGH2 2.1 23 SEPTEMBER 1993 Rearrange parameter vector
29 SEPTEMBER 1993 Rearrange observation vector
15 FEBRUARY 1994 Add new observation types
iTOUGH2 2.2 1 FEBRUARY 1994 Steadystate data points allowed
This version is documented in
LBL34581, iTOUGH2 User's Guide, Version 2.2
iTOUGH2 2.3 10 MAY 1994 Correlation chart, performance comparison, stopping and restarting
16 JUNE 1994 New parameter types added
iTOUGH2 2.4 15 DECEMBER 1994 Sensitivity analysis
iTOUGH2 2.5 10 JANUARY 1995 Automatic parameter selection
13 JANUARY 1996 Aligned common blocks in include files
iTOUGH2 3.0 12 JULY 1996 YMP Software Qualification

WHATCOM 1.0 10 AUGUST 1993 #35: Q: WHAT COMPUTER IS USED? A: IBM
CALLSIG 2.5 20 MAY 1996 #112: SIGNAL HANDLER
CPUSEC 1.0 10 AUGUST 1993 #: RETURNS CPUTIME (VERSION IBM)
OPENFILE 2.5 4 JUNE 1996 #31: OPENS MOST OF THE FILES
LENOS 1.0 1 MARCH 1992 #28: RETURNS LENGTH OF LINE
PREC 1.0 1 AUGUST 1992 #86: CALCULATE MACHINE DEPENDENT CONSTANTS
ITHEADER 1.0 1 AUGUST 1992 #29: PRINTS iTOUGH2 HEADER
DAYTIM 1.0 10 AUGUST 1993 #32: RETURNS DATE AND TIME (VERSION IBM)
THEADER 1.1 27 MAY 1993 #30: PRINTS TOUGH2 HEADER
INPUT 2.5 15 MARCH 1996 READ ALL DATA PROVIDED THROUGH FILE *INPUT*, + SECONDARY MESH + USERX
MESHM 1.0 24 MAY 1990 EXECUTIVE ROUTINE FOR INTERNAL MESH GENERATION
...... ... .. ........ .... .....................................................................
See Also:
>>> UPDATE
@@@
Syntax: >> VOLUME (phase_name/PHASE: iphase) (CHANGE)
Parent Command:
> OBSERVATION
Subcommand:
>>> MATERIAL
>>> MODEL
Description:
This command selects as an observation type the total volume of phase iphase. This
observation type refers to the entire model domain or to the subdomain defined
by a list of rock types.
The phase name phase_name or phase number iphase, which depend on the EOS module
being used, are listed in the iTOUGH2 header. They can be specified either on the command
line or using the subcommand >>>> PHASE.
If keyword CHANGE is present, the change of the total volume since data initialization is
computed. This option can be used, for example, to calculate the cumulative volumetric
liquid flow which is equivalent to the change of the total gas volume in place.
Example:
> OBSERVATION
>> CHANGE in total GAS VOLUME
>>> entire MODEL
>>>> ANNOTATION: Cum. vol. water flux
>>>> FACTOR : 1.0E-06 [ml] --> [m^3]
>>>> DATA [DAYS]
0.00 0.00
0.10 245.16
0.25 354.84
0.50 419.35
0.75 451.61
1.00 470.97
1.60 503.23
1.95 516.13
2.10 522.58
2.50 539.35
3.00 552.26
>>>> DEVIATION : 5.0 [ml]
<<<<
<<<
<<
See Also:

@@@
Syntax: >>> WARNING
Parent Command:
>> CONVERGE
Subcommand:

Description:
iTOUGH2 checks the consistency of the TOUGH2 and iTOUGH2 input, printing
error and warning messages to the iTOUGH2 output file. The program stops if an error or
warning is encountered. Command >>> WARNING makes iTOUGH2 continue despite
the occurrence of a warning message.
It is strongly suggested to resolve all warning messages before continuing. Warning
messages should only be ignored if the reason for the message is completely understood and deemed harmless.
Example:
> COMPUTATION
>> STOP
>>> don't stop because of WARNINGS
<<<
<<
See Also:
>>> INPUT
@@@
Syntax: >>>> WEIGHT: 1/sigma
Parent Command:
all third-level commands in blocks > PARAMETER and > OBSERVATION
Subcommand:

Description:
(see >>>> DEVIATION (p/o))
Example:
(see >>>> DEVIATION (p/o))
See Also:
>>>> DEVIATION (p/o)
@@@
Syntax: >>>> WINDOW: time_A time_B (time_unit)
Parent Command:
all third-level commands in block > OBSERVATION
Subcommand:

Description:
The times at which the observed and calculated system response are compared are referred to
as calibration times. They are specified using command >> TIME. The calibration times
are defined globally, i.e., there is only one set of calibration times and it is applied to all data
sets. Calibration times and observation times may be different; observed data are linearly
interpolated between adjacent data points to obtain a value at the calibration time.
In the standard case, the first data point is observed at or before the first calibration time, and
the last data point is observed at or after the last calibration time. This configuration ensures
that a value for calibration is available or can be interpolated at every calibration point.
If, for some reason, only a subset of the available data can be used for calibration,
the user must specify the time window within which data should be processed:
time_A and time_B are the lower and upper bounds of the window. Only one time window may
be specified per data set. If more than one time window is required, new data sets containing
the same data must be generated, and a different time window specified for each data set.
The time units of time_A and time_B can be specified using one of the time_unit keywords.
If a data set does not contain data that extend over the entire range of calibration times,
iTOUGH2 automatically adjusts the time window to coincide with the first and last data point.
A warning message is printed, however, which can be avoided by explicitly specifying the time window.
If the specified time window is larger than the available data set, the value of the first and last
data point is horizontally extrapolated to time_A and time_B, respectively.
If time is shifted, the time window must be given with reference to the time system of the
data if command >>>> SHIFT TIME: tshift appears before command >>>> WINDOW.
Conversely, the window must be given in the shifted time system if the shift command appears after command >>>> WINDOW.
Example:
> OBSERVATION
>> :7 EQUALLY spaced TIMES in [MINUTES]
5.0 35.0
>> PRESSURE
>>> ELEMENT: ZZZ99
>>>> DATA [HOURS] on FILE: pres.dat
>>>> time WINDOW: 299.0 901.0 [SECONDS] (--> 3 calibration points)
>>>> DEVIATION : 1.E3
<<<<
<<<
See Also:
>> TIME, >>>> SHIFT
@@@