source: flex_extract.git/For_developers/Sphinx/source/Documentation/Input/templates.rst @ b1674ed

*********
Templates
*********

In ``flex_extract``, the Python package `genshi <https://genshi.edgewall.org/>`_ is used to create specific files from templates. This is the most efficient way to quickly adapt, for example, the job scripts sent to the ECMWF batch queue system or the namelist file for the Fortran program, without having to change the program code.

.. note::
   It is not recommended to change anything in these files unless you fully understand the effects.

Each template file provides the framework of the file to be generated and contains so-called placeholder variables at the positions where values need to be substituted at run time. These placeholders are marked by a leading ``$`` sign. In the Korn shell job scripts, where (environment) variables are used, the ``$`` sign has to be doubled to `escape` it, so that a single literal ``$`` sign is kept in the resulting file.
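
``flex_extract`` fills these placeholders with genshi's text templating. A minimal sketch of the substitution and escaping mechanism (the template string and the value ``km4a`` are made up for illustration):

.. code-block:: python

    from genshi.template.text import NewTextTemplate

    # '$user_name' is a placeholder that genshi substitutes;
    # '$$' escapes to a literal '$', so '$${HOST}' stays '${HOST}'
    # for the shell to expand at run time.
    template_text = 'ECUID $user_name\necho "running on $${HOST}"\n'

    tmpl = NewTextTemplate(template_text)
    print(tmpl.generate(user_name='km4a').render('text'))
    # ECUID km4a
    # echo "running on ${HOST}"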

The following templates are used and can be found in the directory ``flex_extract_vX.X/Templates``:

convert.nl
----------

    This is the template for a Fortran namelist file called ``fort.4``, which is read by ``calc_etadot``.
    It contains all of the parameters that ``calc_etadot`` needs.

    .. code-block:: fortran

        &NAMGEN
          maxl = $maxl,
          maxb = $maxb,
          mlevel = $mlevel,
          mlevelist = "$mlevelist",
          mnauf = $mnauf,
          metapar = $metapar,
          rlo0 = $rlo0,
          rlo1 = $rlo1,
          rla0 = $rla0,
          rla1 = $rla1,
          momega = $momega,
          momegadiff = $momegadiff,
          mgauss = $mgauss,
          msmooth = $msmooth,
          meta = $meta,
          metadiff = $metadiff,
          mdpdeta = $mdpdeta
        /

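    As a minimal sketch, this is how the template might be rendered into ``fort.4`` with genshi; the parameter values below are purely illustrative and do not come from a real extraction:

    .. code-block:: python

        from genshi.template.text import NewTextTemplate

        # Read the namelist template; in flex_extract, the actual values
        # are derived from the CONTROL file settings.
        with open('Templates/convert.nl') as f:
            tmpl = NewTextTemplate(f.read())

        namelist = tmpl.generate(
            maxl=361, maxb=181, mlevel=137, mlevelist='1/to/137',
            mnauf=319, metapar=77, rlo0=-180.0, rlo1=180.0,
            rla0=-90.0, rla1=90.0, momega=0, momegadiff=0,
            mgauss=0, msmooth=0, meta=1, metadiff=0, mdpdeta=1,
        ).render('text')

        # Write the result to the file name calc_etadot expects.
        with open('fort.4', 'w') as f:
            f.write(namelist)
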
ecmwf_env.template
------------------

    This template is used to create the ``ECMWF_ENV`` file in the application modes **gateway** and **remote**. It contains the user credentials and gateway server settings for the file transfers.

    .. code-block:: bash

        ECUID $user_name
        ECGID $user_group
        GATEWAY $gateway_name
        DESTINATION $destination_name

compilejob.template
-------------------

    This template is used to create the job script file called ``compilejob.ksh`` during the installation process for the application modes **remote** and **gateway**.

    At the beginning, some directives for the batch system are set.
    On the **ecgate** server, the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the individual lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
    For the high-performance computers **cca** and **ccb**, the ``PBS`` comments are necessary; they are described at `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.

    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending on the ``HOST``. This part should not be changed without testing.

    Afterwards, the actual installation steps are carried out: creating the root directory, putting the files in place, compiling the Fortran program, and sending a log file via email.

    .. code-block:: ksh

        #!/bin/ksh

        # ON ECGB:
        # start with ecaccess-job-submit -queueName ecgb NAME_OF_THIS_FILE  on gateway server
        # start with sbatch NAME_OF_THIS_FILE directly on machine

        #SBATCH --workdir=/scratch/ms/$usergroup/$username
        #SBATCH --qos=normal
        #SBATCH --job-name=flex_ecmwf
        #SBATCH --output=flex_ecmwf.%j.out
        #SBATCH --error=flex_ecmwf.%j.out
        #SBATCH --mail-type=FAIL
        #SBATCH --time=12:00:00

        ## CRAY specific batch requests
        ##PBS -N flex_ecmwf
        ##PBS -q ns
        ##PBS -S /usr/bin/ksh
        ##PBS -o /scratch/ms/$usergroup/$username/flex_ecmwf.$${Jobname}.$${Job_ID}.out
        # job output is in .ecaccess_DO_NOT_REMOVE
        ##PBS -j oe
        ##PBS -V
        ##PBS -l EC_threads_per_task=1
        ##PBS -l EC_memory_per_task=3200MB

        set -x
        export VERSION=$version_number
        case $${HOST} in
          *ecg*)
          module unload grib_api
          module unload eccodes
          module unload python
          module unload emos
          module load python3
          module load eccodes/2.12.0
          module load emos/455-r64
          export FLEXPART_ROOT_SCRIPTS=$fp_root_scripts
          export MAKEFILE=$makefile
          ;;
          *cca*)
          module unload python
          module switch PrgEnv-cray PrgEnv-intel
          module load python3
          module load eccodes/2.12.0
          module load emos
          echo $${GROUP}
          echo $${HOME}
          echo $${HOME} | awk -F / '{print $1, $2, $3, $4}'
          export GROUP=`echo $${HOME} | awk -F / '{print $4}'`
          export SCRATCH=/scratch/ms/$${GROUP}/$${USER}
          export FLEXPART_ROOT_SCRIPTS=$fp_root_scripts
          export MAKEFILE=$makefile
          ;;
        esac

        mkdir -p $${FLEXPART_ROOT_SCRIPTS}/flex_extract_v$${VERSION}
        cd $${FLEXPART_ROOT_SCRIPTS}/flex_extract_v$${VERSION}   # if FLEXPART_ROOT is not set this means cd to the home directory
        tar -xvf $${HOME}/flex_extract_v$${VERSION}.tar
        cd Source/Fortran
        \rm *.o *.mod $fortran_program
        make -f $${MAKEFILE} >flexcompile 2>flexcompile

        ls -l $fortran_program >>flexcompile
        if [ $$? -eq 0 ]; then
          echo 'SUCCESS!' >>flexcompile
          mail -s flexcompile.$${HOST}.$$$$ $${USER} <flexcompile
        else
          echo Environment: >>flexcompile
          env >> flexcompile
          mail -s "ERROR! flexcompile.$${HOST}.$$$$" $${USER} <flexcompile
        fi

job.temp
--------

    This template is used to create the actual job script file, called ``job.ksh``, for the execution of ``flex_extract`` in the application modes **remote** and **gateway**.

    At the beginning, some directives for the batch system are set.
    On the **ecgate** server, the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the individual lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
    For the high-performance computers **cca** and **ccb**, the ``PBS`` comments are necessary; they are described at `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.

    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending on the ``HOST``. This part should not be changed without testing.

    Afterwards, the run directory and the ``CONTROL`` file are created, and ``flex_extract`` is executed. In the end, a log file is sent via email.

    .. code-block:: ksh

        #!/bin/ksh

        # ON ECGB:
        # start with ecaccess-job-submit -queueName ecgb NAME_OF_THIS_FILE  on gateway server
        # start with sbatch NAME_OF_THIS_FILE directly on machine

        #SBATCH --workdir=/scratch/ms/at/km4a
        #SBATCH --qos=normal
        #SBATCH --job-name=flex_ecmwf
        #SBATCH --output=flex_ecmwf.%j.out
        #SBATCH --error=flex_ecmwf.%j.out
        #SBATCH --mail-type=FAIL
        #SBATCH --time=12:00:00

        ## CRAY specific batch requests
        ##PBS -N flex_ecmwf
        ##PBS -q np
        ##PBS -S /usr/bin/ksh
        ## -o /scratch/ms/at/km4a/flex_ecmwf.$${PBS_JOBID}.out
        ## job output is in .ecaccess_DO_NOT_REMOVE
        ##PBS -j oe
        ##PBS -V
        ##PBS -l EC_threads_per_task=24
        ##PBS -l EC_memory_per_task=32000MB

        set -x
        export VERSION=7.1
        case $${HOST} in
          *ecg*)
          module unload grib_api
          module unload eccodes
          module unload python
          module unload emos
          module load python3
          module load eccodes/2.12.0
          module load emos/455-r64
          export PATH=$${PATH}:$${HOME}/flex_extract_v7.1/Source/Python
          ;;
          *cca*)
          module unload python
          module switch PrgEnv-cray PrgEnv-intel
          module load python3
          module load eccodes/2.12.0
          module load emos
          export SCRATCH=$${TMPDIR}
          export PATH=$${PATH}:$${HOME}/flex_extract_v7.1/Source/Python
          ;;
        esac

        cd $${SCRATCH}
        mkdir -p python$$$$
        cd python$$$$

        export CONTROL=CONTROL

        cat >$${CONTROL}<<EOF
        $control_content
        EOF


        submit.py --controlfile=$${CONTROL} --inputdir=./work --outputdir=./work 1> prot 2>&1

        if [ $? -eq 0 ] ; then
          l=0
          for muser in `grep -i MAILOPS $${CONTROL}`; do
              if [ $${l} -gt 0 ] ; then
                 mail -s flex.$${HOST}.$$$$ $${muser} <prot
              fi
              l=$(($${l}+1))
          done
        else
          l=0
          for muser in `grep -i MAILFAIL $${CONTROL}`; do
              if [ $${l} -gt 0 ] ; then
                 mail -s "ERROR! flex.$${HOST}.$$$$" $${muser} <prot
              fi
              l=$(($${l}+1))
          done
        fi

job.template
------------

    This template is used during the installation process to create ``job.temp``, the template for the execution job script, which is described above. In this step, a couple of parameters are set, such as the user credentials and the ``flex_extract`` version number.

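    Because ``job.template`` is a template *of a template*, there are two rendering passes and thus several escaping levels: a single ``$`` (e.g. ``$version_number``) is substituted during installation, ``$$`` becomes a single ``$`` in ``job.temp`` and is substituted when the job script is created, and ``$$$$`` survives both passes as a single ``$`` for the shell. A minimal sketch of this double rendering with genshi (the one-line template and the substituted values are purely illustrative):

    .. code-block:: python

        from genshi.template.text import NewTextTemplate

        # One illustrative line of a "template of a template":
        # $version_number -> substituted in the first pass,
        # $$control_content -> substituted in the second pass,
        # $$$${HOST} -> shell variable in the final job script.
        stage1 = 'VERSION=$version_number CTRL=$$control_content HOST=$$$${HOST}\n'

        job_temp = NewTextTemplate(stage1).generate(
            version_number='7.1').render('text')
        # job_temp == 'VERSION=7.1 CTRL=$control_content HOST=$${HOST}\n'

        job_ksh = NewTextTemplate(job_temp).generate(
            control_content='START_DATE 20200101').render('text')
        # job_ksh == 'VERSION=7.1 CTRL=START_DATE 20200101 HOST=${HOST}\n'
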
    .. code-block:: ksh

        #!/bin/ksh

        # ON ECGB:
        # start with ecaccess-job-submit -queueName ecgb NAME_OF_THIS_FILE  on gateway server
        # start with sbatch NAME_OF_THIS_FILE directly on machine

        #SBATCH --workdir=/scratch/ms/$usergroup/$username
        #SBATCH --qos=normal
        #SBATCH --job-name=flex_ecmwf
        #SBATCH --output=flex_ecmwf.%j.out
        #SBATCH --error=flex_ecmwf.%j.out
        #SBATCH --mail-type=FAIL
        #SBATCH --time=12:00:00

        ## CRAY specific batch requests
        ##PBS -N flex_ecmwf
        ##PBS -q np
        ##PBS -S /usr/bin/ksh
        ## -o /scratch/ms/$usergroup/$username/flex_ecmwf.$$$${PBS_JOBID}.out
        ## job output is in .ecaccess_DO_NOT_REMOVE
        ##PBS -j oe
        ##PBS -V
        ##PBS -l EC_threads_per_task=24
        ##PBS -l EC_memory_per_task=32000MB

        set -x
        export VERSION=$version_number
        case $$$${HOST} in
          *ecg*)
          module unload grib_api
          module unload eccodes
          module unload python
          module unload emos
          module load python3
          module load eccodes/2.12.0
          module load emos/455-r64
          export PATH=$$$${PATH}:$fp_root_path
          ;;
          *cca*)
          module unload python
          module switch PrgEnv-cray PrgEnv-intel
          module load python3
          module load eccodes/2.12.0
          module load emos
          export SCRATCH=$$$${TMPDIR}
          export PATH=$$$${PATH}:$fp_root_path
          ;;
        esac

        cd $$$${SCRATCH}
        mkdir -p python$$$$$$$$
        cd python$$$$$$$$

        export CONTROL=CONTROL

        cat >$$$${CONTROL}<<EOF
        $$control_content
        EOF


        submit.py --controlfile=$$$${CONTROL} --inputdir=./work --outputdir=./work 1> prot 2>&1

        if [ $? -eq 0 ] ; then
          l=0
          for muser in `grep -i MAILOPS $$$${CONTROL}`; do
              if [ $$$${l} -gt 0 ] ; then
                 mail -s flex.$$$${HOST}.$$$$$$$$ $$$${muser} <prot
              fi
              l=$(($$$${l}+1))
          done
        else
          l=0
          for muser in `grep -i MAILFAIL $$$${CONTROL}`; do
              if [ $$$${l} -gt 0 ] ; then
                 mail -s "ERROR! flex.$$$${HOST}.$$$$$$$$" $$$${muser} <prot
              fi
              l=$(($$$${l}+1))
          done
        fi

.. toctree::
    :hidden:
    :maxdepth: 2