*********
Templates
*********

In ``flex_extract``, the Python package `genshi <https://genshi.edgewall.org/>`_ is used to create specific files from templates. Templates are the most efficient way to quickly adapt, e.g., the job scripts sent to the ECMWF batch queue system or the namelist file for the Fortran program, without the need to change the program code.

.. note::
   Do not change anything in these files unless you understand the effects!

Each template file provides the fixed frame of the file to be generated and contains so-called placeholder variables at the positions where values have to be substituted at run time. These placeholders are marked by a leading ``$`` sign. In the Korn shell job scripts, which themselves make use of (environment) variables, the ``$`` sign has to be doubled for `escaping`.
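
A minimal sketch of this substitution mechanism (the template string and the inserted value are illustrative, not taken from ``flex_extract``):

.. code-block:: python

    from genshi.template import NewTextTemplate

    # $version_number is replaced when the template is rendered, while the
    # escaped $$ in $${HOST} survives as ${HOST} for the Korn shell.
    tmpl = NewTextTemplate('export VERSION=$version_number\necho $${HOST}\n')
    print(tmpl.generate(version_number='7.1').render('text'))
    # export VERSION=7.1
    # echo ${HOST}
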
The following templates are used; they can be found in the directory ``flex_extract_vX.X/Templates``:

convert.nl
----------

    This is the template for a Fortran namelist file called ``fort.4``, which is read by ``calc_etadot``.
    It contains all the parameters that ``calc_etadot`` needs.

    .. code-block:: fortran

        &NAMGEN
          maxl = $maxl,
          maxb = $maxb,
          mlevel = $mlevel,
          mlevelist = "$mlevelist",
          mnauf = $mnauf,
          metapar = $metapar,
          rlo0 = $rlo0,
          rlo1 = $rlo1,
          rla0 = $rla0,
          rla1 = $rla1,
          momega = $momega,
          momegadiff = $momegadiff,
          mgauss = $mgauss,
          msmooth = $msmooth,
          meta = $meta,
          metadiff = $metadiff,
          mdpdeta = $mdpdeta
        /
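
    For illustration, here is a hedged sketch of how this template might be filled with ``genshi`` (the parameter values are made up; inside ``flex_extract``, the real values are derived from the ``CONTROL`` file):

    .. code-block:: python

        from genshi.template import NewTextTemplate

        # Illustrative settings for a global 1-degree grid with 137 levels.
        params = dict(maxl=360, maxb=181, mlevel=137, mlevelist='1/to/137',
                      mnauf=179, metapar=77, rlo0=-179.0, rlo1=180.0,
                      rla0=-90.0, rla1=90.0, momega=0, momegadiff=0,
                      mgauss=0, msmooth=0, meta=1, metadiff=0, mdpdeta=1)

        with open('Templates/convert.nl') as f:
            tmpl = NewTextTemplate(f.read())

        # Write the rendered namelist to the file name expected by calc_etadot.
        with open('fort.4', 'w') as f:
            f.write(tmpl.generate(**params).render('text'))
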
ecmwf_env.template
------------------

    This template is used to create the ``ECMWF_ENV`` file in the application modes **gateway** and **remote**. It contains the user credentials and gateway server settings for the file transfers.

    .. code-block:: bash

        ECUID $user_name
        ECGID $user_group
        GATEWAY $gateway_name
        DESTINATION $destination_name
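
    The rendered file is a plain key-value list, one pair per line. A minimal, hypothetical sketch of reading it back (not necessarily the actual reader used in ``flex_extract``):

    .. code-block:: python

        def read_ecmwf_env(path='ECMWF_ENV'):
            """Collect the key-value pairs of an ECMWF_ENV file in a dict."""
            settings = {}
            with open(path) as f:
                for line in f:
                    parts = line.split(None, 1)
                    if len(parts) == 2:
                        settings[parts[0]] = parts[1].strip()
            return settings

        # e.g. {'ECUID': 'km4a', 'ECGID': 'at', ...}
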
compilejob.template
-------------------

    This template is used to create the job script file called ``compilejob.ksh`` during the installation process for the application modes **remote** and **gateway**.

    At the beginning, some directives for the batch system are set.
    On the **ecgate** server, the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the individual lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
    For the high-performance computers **cca** and **ccb**, the ``PBS`` comments are necessary; for details see `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.

    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending on the ``HOST``. This part should not be changed without testing.

    Afterwards, the actual installation steps are carried out: the root directory is created, the files are put in place, the Fortran program is compiled, and a log file is sent by email. A sketch of how the placeholders might be filled follows the listing.

    .. code-block:: ksh

        #!/bin/ksh

        # ON ECGB:
        # start with ecaccess-job-submit -queueName ecgb NAME_OF_THIS_FILE  on gateway server
        # start with sbatch NAME_OF_THIS_FILE directly on machine

        #SBATCH --workdir=/scratch/ms/$usergroup/$username
        #SBATCH --qos=normal
        #SBATCH --job-name=flex_ecmwf
        #SBATCH --output=flex_ecmwf.%j.out
        #SBATCH --error=flex_ecmwf.%j.out
        #SBATCH --mail-type=FAIL
        #SBATCH --time=12:00:00

        ## CRAY specific batch requests
        ##PBS -N flex_ecmwf
        ##PBS -q ns
        ##PBS -S /usr/bin/ksh
        ##PBS -o /scratch/ms/$usergroup/$username/flex_ecmwf.$${Jobname}.$${Job_ID}.out
        # job output is in .ecaccess_DO_NOT_REMOVE
        ##PBS -j oe
        ##PBS -V
        ##PBS -l EC_threads_per_task=1
        ##PBS -l EC_memory_per_task=3200MB

        set -x
        export VERSION=$version_number
        case $${HOST} in
          *ecg*)
          module unload grib_api
          module unload eccodes
          module unload python
          module unload emos
          module load python3
          module load eccodes/2.12.0
          module load emos/455-r64
          export FLEXPART_ROOT_SCRIPTS=$fp_root_scripts
          export MAKEFILE=$makefile
          ;;
          *cca*)
          module unload python
          module switch PrgEnv-cray PrgEnv-intel
          module load python3
          module load eccodes/2.12.0
          module load emos
          echo $${GROUP}
          echo $${HOME}
          echo $${HOME} | awk -F / '{print $1, $2, $3, $4}'
          export GROUP=`echo $${HOME} | awk -F / '{print $4}'`
          export SCRATCH=/scratch/ms/$${GROUP}/$${USER}
          export FLEXPART_ROOT_SCRIPTS=$fp_root_scripts
          export MAKEFILE=$makefile
          ;;
        esac

        mkdir -p $${FLEXPART_ROOT_SCRIPTS}/flex_extract_v$${VERSION}
        cd $${FLEXPART_ROOT_SCRIPTS}/flex_extract_v$${VERSION}   # if FLEXPART_ROOT is not set this means cd to the home directory
        tar -xvf $${HOME}/flex_extract_v$${VERSION}.tar
        cd Source/Fortran
        \rm *.o *.mod $fortran_program
        make -f $${MAKEFILE} >flexcompile 2>flexcompile

        ls -l $fortran_program >>flexcompile
        if [ $$? -eq 0 ]; then
          echo 'SUCCESS!' >>flexcompile
          mail -s flexcompile.$${HOST}.$$$$ $${USER} <flexcompile
        else
          echo Environment: >>flexcompile
          env >> flexcompile
          mail -s "ERROR! flexcompile.$${HOST}.$$$$" $${USER} <flexcompile
        fi
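
    A hedged sketch of how the installation step might fill the template above (the values are illustrative; the actual code in ``flex_extract`` may differ):

    .. code-block:: python

        from genshi.template import NewTextTemplate

        with open('Templates/compilejob.template') as f:
            tmpl = NewTextTemplate(f.read())

        # One value per placeholder in the template; the escaped $${...}
        # variables are left for the Korn shell to resolve at run time.
        script = tmpl.generate(
            usergroup='at', username='km4a',   # illustrative ECMWF ids
            version_number='7.1',
            fp_root_scripts='${HOME}',         # illustrative installation path
            makefile='makefile_ecgate',        # illustrative makefile name
            fortran_program='calc_etadot',
        ).render('text')

        with open('compilejob.ksh', 'w') as f:
            f.write(script)
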
job.temp
--------

    This template is used to create the actual job script file called ``job.ksh`` for the execution of ``flex_extract`` in the application modes **remote** and **gateway**.

    At the beginning, some directives for the batch system are set.
    On the **ecgate** server, the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the individual lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
    For the high-performance computers **cca** and **ccb**, the ``PBS`` comments are necessary;
    for details see `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.

    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending on the ``HOST``. This part should not be changed without testing.

    Afterwards, the run directory and the ``CONTROL`` file are created, and ``flex_extract`` is executed. In the end, a log file is sent by email. How the ``CONTROL`` file content gets into the ``$control_content`` placeholder is sketched below the listing.

    .. code-block:: ksh

        #!/bin/ksh

        # ON ECGB:
        # start with ecaccess-job-submit -queueName ecgb NAME_OF_THIS_FILE  on gateway server
        # start with sbatch NAME_OF_THIS_FILE directly on machine

        #SBATCH --workdir=/scratch/ms/at/km4a
        #SBATCH --qos=normal
        #SBATCH --job-name=flex_ecmwf
        #SBATCH --output=flex_ecmwf.%j.out
        #SBATCH --error=flex_ecmwf.%j.out
        #SBATCH --mail-type=FAIL
        #SBATCH --time=12:00:00

        ## CRAY specific batch requests
        ##PBS -N flex_ecmwf
        ##PBS -q np
        ##PBS -S /usr/bin/ksh
        ## -o /scratch/ms/at/km4a/flex_ecmwf.$${PBS_JOBID}.out
        ## job output is in .ecaccess_DO_NOT_REMOVE
        ##PBS -j oe
        ##PBS -V
        ##PBS -l EC_threads_per_task=24
        ##PBS -l EC_memory_per_task=32000MB

        set -x
        export VERSION=7.1
        case $${HOST} in
          *ecg*)
          module unload grib_api
          module unload eccodes
          module unload python
          module unload emos
          module load python3
          module load eccodes/2.12.0
          module load emos/455-r64
          export PATH=$${PATH}:$${HOME}/flex_extract_v7.1/Source/Python
          ;;
          *cca*)
          module unload python
          module switch PrgEnv-cray PrgEnv-intel
          module load python3
          module load eccodes/2.12.0
          module load emos
          export SCRATCH=$${TMPDIR}
          export PATH=$${PATH}:$${HOME}/flex_extract_v7.1/Source/Python
          ;;
        esac

        cd $${SCRATCH}
        mkdir -p python$$$$
        cd python$$$$

        export CONTROL=CONTROL

        cat >$${CONTROL}<<EOF
        $control_content
        EOF


        submit.py --controlfile=$${CONTROL} --inputdir=./work --outputdir=./work 1> prot 2>&1

        if [ $? -eq 0 ] ; then
          l=0
          for muser in `grep -i MAILOPS $${CONTROL}`; do
              if [ $${l} -gt 0 ] ; then
                 mail -s flex.$${HOST}.$$$$ $${muser} <prot
              fi
              l=$(($${l}+1))
          done
        else
          l=0
          for muser in `grep -i MAILFAIL $${CONTROL}`; do
              if [ $${l} -gt 0 ] ; then
                 mail -s "ERROR! flex.$${HOST}.$$$$" $${muser} <prot
              fi
              l=$(($${l}+1))
          done
        fi
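
    A hedged sketch of how the submission step might inject the user's ``CONTROL`` file into the ``$control_content`` placeholder (the file names are illustrative; the actual code in ``flex_extract`` may differ):

    .. code-block:: python

        from genshi.template import NewTextTemplate

        with open('Templates/job.temp') as f:
            tmpl = NewTextTemplate(f.read())

        # The whole CONTROL file ends up verbatim inside the job script's
        # here-document, so the job is self-contained on the ECMWF server.
        with open('CONTROL_EA5') as f:      # illustrative CONTROL file name
            control_content = f.read().strip()

        with open('job.ksh', 'w') as f:
            f.write(tmpl.generate(control_content=control_content).render('text'))
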
job.template
------------

    This template is used during the installation process to create ``job.temp``, the template for the execution job script, which is described above under ``job.temp``. Several parameters are already set in this step, such as the user credentials and the ``flex_extract`` version number. Because the result of this step is itself a ``genshi`` template, every ``$`` sign that must survive into ``job.temp`` is escaped twice (hence the ``$$$$`` runs below); see the sketch after the listing.

    .. code-block:: ksh

        #!/bin/ksh

        # ON ECGB:
        # start with ecaccess-job-submit -queueName ecgb NAME_OF_THIS_FILE  on gateway server
        # start with sbatch NAME_OF_THIS_FILE directly on machine

        #SBATCH --workdir=/scratch/ms/$usergroup/$username
        #SBATCH --qos=normal
        #SBATCH --job-name=flex_ecmwf
        #SBATCH --output=flex_ecmwf.%j.out
        #SBATCH --error=flex_ecmwf.%j.out
        #SBATCH --mail-type=FAIL
        #SBATCH --time=12:00:00

        ## CRAY specific batch requests
        ##PBS -N flex_ecmwf
        ##PBS -q np
        ##PBS -S /usr/bin/ksh
        ## -o /scratch/ms/$usergroup/$username/flex_ecmwf.$$$${PBS_JOBID}.out
        ## job output is in .ecaccess_DO_NOT_REMOVE
        ##PBS -j oe
        ##PBS -V
        ##PBS -l EC_threads_per_task=24
        ##PBS -l EC_memory_per_task=32000MB

        set -x
        export VERSION=$version_number
        case $$$${HOST} in
          *ecg*)
          module unload grib_api
          module unload eccodes
          module unload python
          module unload emos
          module load python3
          module load eccodes/2.12.0
          module load emos/455-r64
          export PATH=$$$${PATH}:$fp_root_path
          ;;
          *cca*)
          module unload python
          module switch PrgEnv-cray PrgEnv-intel
          module load python3
          module load eccodes/2.12.0
          module load emos
          export SCRATCH=$$$${TMPDIR}
          export PATH=$$$${PATH}:$fp_root_path
          ;;
        esac

        cd $$$${SCRATCH}
        mkdir -p python$$$$$$$$
        cd python$$$$$$$$

        export CONTROL=CONTROL

        cat >$$$${CONTROL}<<EOF
        $$control_content
        EOF


        submit.py --controlfile=$$$${CONTROL} --inputdir=./work --outputdir=./work 1> prot 2>&1

        if [ $? -eq 0 ] ; then
          l=0
          for muser in `grep -i MAILOPS $$$${CONTROL}`; do
              if [ $$$${l} -gt 0 ] ; then
                 mail -s flex.$$$${HOST}.$$$$$$$$ $$$${muser} <prot
              fi
              l=$(($$$${l}+1))
          done
        else
          l=0
          for muser in `grep -i MAILFAIL $$$${CONTROL}`; do
              if [ $$$${l} -gt 0 ] ; then
                 mail -s "ERROR! flex.$$$${HOST}.$$$$$$$$" $$$${muser} <prot
              fi
              l=$(($$$${l}+1))
          done
        fi
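
    Since the rendered result is itself a template, the dollar signs pass through two rendering stages. A minimal sketch with one illustrative line from the listing above (the ``CONTROL`` content is made up):

    .. code-block:: python

        from genshi.template import NewTextTemplate

        line = 'cat >$$$${CONTROL}<<EOF\n$$control_content\nEOF\n'

        # Stage 1 (installation): each escaped pair $$ collapses to one $,
        # so $$$$ becomes $$ and $$control_content becomes $control_content.
        stage1 = NewTextTemplate(line).generate().render('text')
        # 'cat >$${CONTROL}<<EOF\n$control_content\nEOF\n'    -> job.temp

        # Stage 2 (job submission): $$ collapses to $ for the Korn shell,
        # and $control_content is replaced by the CONTROL file content.
        stage2 = NewTextTemplate(stage1).generate(
            control_content='START_DATE 20200101').render('text')
        # 'cat >${CONTROL}<<EOF\nSTART_DATE 20200101\nEOF\n'  -> job.ksh
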
.. toctree::
    :hidden:
    :maxdepth: 2