source: flexpart.git/src/timemanager_mpi.f90 @ f55fdce

Last change: f55fdce, checked in by Espen Sollum ATMOS <eso@…>, 9 years ago

Added a modification so that the number of particles in the simulation reported to standard output (MPI version) is exact instead of estimated

!**********************************************************************
! Copyright 1998,1999,2000,2001,2002,2005,2007,2008,2009,2010         *
! Andreas Stohl, Petra Seibert, A. Frank, Gerhard Wotawa,             *
! Caroline Forster, Sabine Eckhardt, John Burkhart, Harald Sodemann   *
!                                                                     *
! This file is part of FLEXPART.                                      *
!                                                                     *
! FLEXPART is free software: you can redistribute it and/or modify    *
! it under the terms of the GNU General Public License as published by*
! the Free Software Foundation, either version 3 of the License, or   *
! (at your option) any later version.                                 *
!                                                                     *
! FLEXPART is distributed in the hope that it will be useful,         *
! but WITHOUT ANY WARRANTY; without even the implied warranty of      *
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *
! GNU General Public License for more details.                        *
!                                                                     *
! You should have received a copy of the GNU General Public License   *
! along with FLEXPART.  If not, see <http://www.gnu.org/licenses/>.   *
!**********************************************************************

subroutine timemanager

!*****************************************************************************
!                                                                            *
! Handles the computation of trajectories, i.e. determines which             *
! trajectories have to be computed at what time.                             *
! Manages dry+wet deposition routines, radioactive decay and the computation *
! of concentrations.                                                         *
!                                                                            *
!     Author: A. Stohl                                                       *
!                                                                            *
!     20 May 1996                                                            *
!                                                                            *
!*****************************************************************************
!  Changes, Bernd C. Krueger, Feb. 2001:                                     *
!        Call of convmix when new windfield is read                          *
!------------------------------------                                        *
!  Changes Petra Seibert, Sept 2002                                          *
!     fix wet scavenging problem                                             *
!     Code may not be correct for decay of deposition!                       *
!  Changes Petra Seibert, Nov 2002                                           *
!     call convection BEFORE new fields are read in BWD mode                 *
!  Changes Caroline Forster, Feb 2005                                        *
!   new interface between flexpart and convection scheme                     *
!   Emanuel's latest subroutine convect43c.f is used                         *
!  Changes Stefan Henne, Harald Sodemann, 2013-2014                          *
!   added netcdf output code                                                 *
!  Changes Espen Sollum 2014                                                 *
!   MPI version                                                              *
!   Variables uap,ucp,uzp,us,vs,ws,cbt now in module com_mod                 *
!*****************************************************************************
!                                                                            *
! Variables:                                                                 *
! dep                .true. if either wet or dry deposition is switched on   *
! decay(maxspec) [1/s] decay constant for radioactive decay                  *
! drydep             .true. if dry deposition is switched on                 *
! ideltas [s]        modelling period                                        *
! itime [s]          actual temporal position of calculation                 *
! ldeltat [s]        time since computation of radioact. decay of depositions*
! loutaver [s]       averaging period for concentration calculations         *
! loutend [s]        end of averaging for concentration calculations         *
! loutnext [s]       next time at which output fields shall be centered      *
! loutsample [s]     sampling interval for averaging of concentrations       *
! loutstart [s]      start of averaging for concentration calculations       *
! loutstep [s]       time interval for which concentrations shall be         *
!                    calculated                                              *
! npoint(maxpart)    index of the release point from which the trajectory    *
!                    was started                                             *
! nstop              serves as indicator for fate of particles               *
!                    in the particle loop                                    *
! nstop1             serves as indicator for wind fields (see getfields)     *
! memstat            additional indicator for wind fields (see getfields)    *
! outnum             number of samples for each concentration calculation    *
! prob               probability of absorption at ground due to dry          *
!                    deposition                                              *
! wetdep             .true. if wet deposition is switched on                 *
! weight             weight for each concentration sample (1/2 or 1)         *
! uap(maxpart),ucp(maxpart),uzp(maxpart) = random velocities due to          *
!                    turbulence                                              *
! us(maxpart),vs(maxpart),ws(maxpart) = random velocities due to inter-      *
!                    polation                                                *
! xtra1(maxpart), ytra1(maxpart), ztra1(maxpart) =                           *
!                    spatial positions of trajectories                       *
!                                                                            *
! Constants:                                                                 *
! maxpart            maximum number of trajectories                          *
!                                                                            *
!*****************************************************************************

  use unc_mod
  use point_mod
  use xmass_mod
  use flux_mod
  use outg_mod
  use oh_mod
  use par_mod
  use com_mod
  use mpi_mod
  use netcdf_output_mod, only: concoutput_netcdf,concoutput_nest_netcdf,&
       &concoutput_surf_netcdf,concoutput_surf_nest_netcdf

  implicit none

  logical :: fc_mp=.true., fc_sync=.true.
  logical :: reqv_state=.false. ! .true. if waiting for an MPI_Irecv to complete
  integer :: j,ks,kp,l,n,itime=0,nstop,nstop1,memstat=0,mind
! integer :: ksp
  integer :: ip
  integer :: loutnext,loutstart,loutend
  integer :: ix,jy,ldeltat,itage,nage
  integer :: i_nan=0,ii_nan,total_nan_intl=0  !added by mc to check instability in CBL scheme
  integer :: numpart_tot_mpi ! for summing particles on all processes
  real :: outnum,weight,prob(maxspec)
  real :: decfact

  real :: drydeposit(maxspec),gridtotalunc,wetgridtotalunc
  real :: drygridtotalunc,xold,yold,zold,xmassfract
!double precision xm(maxspec,maxpointspec_act),
!    +                 xm_depw(maxspec,maxpointspec_act),
!    +                 xm_depd(maxspec,maxpointspec_act)

!open(88,file='TEST.dat')

! Measure time spent in timemanager
  if (mp_measure_time) call mpif_mtime('timemanager',0)

! First output for time 0
!************************

  loutnext=loutstep/2
  outnum=0.
  loutstart=loutnext-loutaver/2
  loutend=loutnext+loutaver/2
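
! Illustrative example (hypothetical values; the actual ones come from the
! COMMAND input): with loutstep=10800 s and loutaver=3600 s, the first output
! is centered at loutnext=5400 s and averaged over the window
! [loutstart,loutend] = [3600 s, 7200 s]; subsequent windows shift by loutstep.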


!**********************************************************************
! Loop over the whole modelling period in time steps of lsynctime seconds
!**********************************************************************

  do itime=0,ideltas,lsynctime

! Computation of wet deposition, OH reaction and mass transfer
! between two species every lsynctime seconds
! maybe wet depo frequency can be relaxed later but better be on the safe side
! wetdepo must be called BEFORE new fields are read in but should not
! be called in the very beginning before any fields are loaded, or
! before particles are in the system
! Code may not be correct for decay of deposition
! changed by Petra Seibert 9/02
!********************************************************************

    if (mp_dev_mode) write(*,*) 'myid, itime: ',mp_pid,itime

    if (wetdep .and. itime .ne. 0 .and. numpart .gt. 0) then
      if (verbosity.gt.0) then
        write (*,*) 'timemanager> call wetdepo'
      endif

      if (mp_measure_time) call mpif_mtime('wetdepo',0)

! readwind process skips this step
      if (.not.(lmpreader.and.lmp_use_reader)) then
        call wetdepo(itime,lsynctime,loutnext)
      end if

      if (mp_measure_time) call mpif_mtime('wetdepo',1)
    end if

    if (OHREA .and. itime .ne. 0 .and. numpart .gt. 0) then
! readwind process skips this step
      if (.not.(lmpreader.and.lmp_use_reader)) then
        call ohreaction(itime,lsynctime,loutnext)
      endif
    end if

    if (assspec .and. itime .ne. 0 .and. numpart .gt. 0) then
      stop 'associated species not yet implemented!'
!     call transferspec(itime,lsynctime,loutnext)
    endif


! compute convection for backward runs
!*************************************

    if ((ldirect.eq.-1).and.(lconvection.eq.1).and.(itime.lt.0)) then
      if (verbosity.gt.0) then
        write (*,*) 'timemanager> call convmix -- backward'
      endif
! readwind process skips this step
      if (.not.(lmpreader.and.lmp_use_reader)) call convmix(itime)
    endif

! Get necessary wind fields if not available
!*******************************************
    if (verbosity.gt.0 .and. lmpreader) then
      write (*,*) 'timemanager> call getfields'
    endif

! This time measure includes reading/MPI communication (for the reader process),
! or MPI communication time only (for other processes)
    if (mp_measure_time) call mpif_mtime('getfields',0)

    call getfields(itime,nstop1,memstat)

    if (mp_measure_time) call mpif_mtime('getfields',1)


! Broadcast fields to all MPI processes
! Skip if all processes have called getfields or if no new fields
!*****************************************************************

    if (mp_measure_time.and..not.(lmpreader.and.lmp_use_reader)) call mpif_mtime('getfields',0)

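! Note on memstat (set by getfields; see mpi_mod for the details): roughly,
! memstat=0 means no new fields were read this step, a small positive value
! flags one new field time in memory, and values >= 32 indicate that two
! fields were read at once (e.g. at simulation start), in which case
! asynchronous send/recv is not possible.
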
! Version 1 (lmp_sync=.true.) uses a read-ahead process where send/recv is done
! in sync at start of each new field time interval
    if (lmp_sync.and.lmp_use_reader.and.memstat.gt.0) then
      call mpif_gf_send_vars(memstat)
      call mpif_gf_send_vars_nest(memstat)
! Version 2 (lmp_sync=.false.) is also used whenever 2 new fields are read (in
! which case async send/recv is impossible)
    else if (.not.lmp_sync.and.lmp_use_reader.and.memstat.ge.32) then
      call mpif_gf_send_vars(memstat)
      call mpif_gf_send_vars_nest(memstat)
    end if

! Version 2 (lmp_sync=.false.) is for holding three fields in memory. Uses a
! read-ahead process where sending/receiving of the 3rd field is done in
! the background in parallel with performing computations with fields 1&2
!********************************************************************************
    if (.not.lmp_sync) then

! READER PROCESS:
      if (memstat.gt.0.and.memstat.lt.32.and.lmp_use_reader.and.lmpreader) then
        call mpif_gf_send_vars_async(memstat)
      end if

! COMPLETION CHECK:
! Issued at start of each new field period.
      if (memstat.ne.0.and.memstat.lt.32.and.lmp_use_reader) then
! TODO: z0(7) changes with time, so should be dimension (numclass,2) to
! allow transfer of the future value in the background
        call MPI_Bcast(z0,numclass,mp_pp,id_read,MPI_COMM_WORLD,mp_ierr)
        call mpif_gf_request
      end if

! RECEIVING PROCESS(ES):
      ! eso TODO: at this point we do not know if clwc/ciwc will be available
      ! at next time step. Issue receive request anyway, cancel at mpif_gf_request
      if (memstat.gt.0.and.lmp_use_reader.and..not.lmpreader) then
        call mpif_gf_recv_vars_async(memstat)
      end if

    end if

    if (mp_measure_time.and..not.(lmpreader.and.lmp_use_reader)) call mpif_mtime('getfields',1)


!*******************************************************************************

    if (lmpreader.and.nstop1.gt.1) stop 'NO METEO FIELDS AVAILABLE'

! Reader process goes back to top of time loop (unless simulation end)
!*********************************************************************

    if (lmpreader.and.lmp_use_reader) then
      if (itime.lt.ideltas) then
        cycle
      else
        goto 999
      end if
    end if


! Get hourly OH fields if not available
!****************************************************
    if (OHREA) then
      if (verbosity.gt.0) then
        write (*,*) 'timemanager> call gethourlyOH'
      endif
      call gethourlyOH(itime)
    endif


! Release particles
!******************

    if (verbosity.gt.0.and.lroot) then
      write (*,*) 'timemanager>  Release particles'
    endif

    if (mdomainfill.ge.1) then
      if (itime.eq.0) then
        if (verbosity.gt.0) then
          write (*,*) 'timemanager>  call init_domainfill'
        endif
        call init_domainfill
      else
        if (verbosity.gt.0.and.lroot) then
          write (*,*) 'timemanager>  call boundcond_domainfill'
        endif
        call boundcond_domainfill(itime,loutend)
      endif
    else
      if (verbosity.gt.0.and.lroot) then
        print*,'call releaseparticles'
      endif
      call releaseparticles(itime)
    endif


! Compute convective mixing for forward runs
! for backward runs it is done before next windfield is read in
!**************************************************************

    if ((ldirect.eq.1).and.(lconvection.eq.1)) then
      if (verbosity.gt.0) then
        write (*,*) 'timemanager> call convmix -- forward'
      endif
      call convmix(itime)
    endif

! If middle of averaging period of output fields is reached, accumulated
! deposited mass radioactively decays
!***********************************************************************

    if (DEP.and.(itime.eq.loutnext).and.(ldirect.gt.0)) then
      do ks=1,nspec
        do kp=1,maxpointspec_act
          if (decay(ks).gt.0.) then
            do nage=1,nageclass
              do l=1,nclassunc
! Mother output grid
                do jy=0,numygrid-1
                  do ix=0,numxgrid-1
                    wetgridunc(ix,jy,ks,kp,l,nage)= &
                         wetgridunc(ix,jy,ks,kp,l,nage)* &
                         exp(-1.*outstep*decay(ks))
                    drygridunc(ix,jy,ks,kp,l,nage)= &
                         drygridunc(ix,jy,ks,kp,l,nage)* &
                         exp(-1.*outstep*decay(ks))
                  end do
                end do
! Nested output grid
                if (nested_output.eq.1) then
                  do jy=0,numygridn-1
                    do ix=0,numxgridn-1
                      wetgriduncn(ix,jy,ks,kp,l,nage)= &
                           wetgriduncn(ix,jy,ks,kp,l,nage)* &
                           exp(-1.*outstep*decay(ks))
                      drygriduncn(ix,jy,ks,kp,l,nage)= &
                           drygriduncn(ix,jy,ks,kp,l,nage)* &
                           exp(-1.*outstep*decay(ks))
                    end do
                  end do
                endif
              end do
            end do
          endif
        end do
      end do
    endif
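
! Note: the factor exp(-1.*outstep*decay(ks)) above is the radioactive decay
! law m(t+dt) = m(t)*exp(-lambda*dt), where lambda = decay(ks) [1/s] is
! related to the species half-life by lambda = ln(2)/T_half.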


!!! CHANGE: These lines may be switched on to check the conservation
!!! of mass within FLEXPART
!   if (itime.eq.loutnext) then
!   do 247 ksp=1, nspec
!   do 247 kp=1, maxpointspec_act
!47         xm(ksp,kp)=0.

!   do 249 ksp=1, nspec
!     do 249 j=1,numpart
!          if (ioutputforeachrelease.eq.1) then
!            kp=npoint(j)
!          else
!            kp=1
!          endif
!       if (itra1(j).eq.itime) then
!          xm(ksp,kp)=xm(ksp,kp)+xmass1(j,ksp)
!         write(*,*) 'xmass: ',xmass1(j,ksp),j,ksp,nspec
!       endif
!49     continue
!  do 248 ksp=1,nspec
!  do 248 kp=1,maxpointspec_act
!  xm_depw(ksp,kp)=0.
!  xm_depd(ksp,kp)=0.
!     do 248 nage=1,nageclass
!       do 248 ix=0,numxgrid-1
!         do 248 jy=0,numygrid-1
!           do 248 l=1,nclassunc
!              xm_depw(ksp,kp)=xm_depw(ksp,kp)
!    +                  +wetgridunc(ix,jy,ksp,kp,l,nage)
!48                 xm_depd(ksp,kp)=xm_depd(ksp,kp)
!    +                  +drygridunc(ix,jy,ksp,kp,l,nage)
!             do 246 ksp=1,nspec
!46                    write(88,'(2i10,3e12.3)')
!    +              itime,ksp,(xm(ksp,kp),kp=1,maxpointspec_act),
!    +                (xm_depw(ksp,kp),kp=1,maxpointspec_act),
!    +                (xm_depd(ksp,kp),kp=1,maxpointspec_act)
!  endif
!!! CHANGE




! Check whether concentrations are to be calculated
!**************************************************

    if ((ldirect*itime.ge.ldirect*loutstart).and. &
         (ldirect*itime.le.ldirect*loutend)) then ! add to grid
      if (mod(itime-loutstart,loutsample).eq.0) then

! If we are exactly at the start or end of the concentration averaging interval,
! give only half the weight to this sample
!*****************************************************************************

        if ((itime.eq.loutstart).or.(itime.eq.loutend)) then
          weight=0.5
        else
          weight=1.0
        endif
        outnum=outnum+weight

        call conccalc(itime,weight)

      endif
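
! Illustrative example (hypothetical values): with loutaver=3600 s and
! loutsample=900 s, samples are taken at loutstart, loutstart+900, ...,
! loutend with weights 0.5,1,1,1,0.5 (a trapezoid rule), so outnum sums to 4;
! the accumulated fields are later normalized by outnum in the output routines.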

! :TODO: MPI output of particle positions; each process sequentially
!   accesses the same file
      if ((mquasilag.eq.1).and.(itime.eq.(loutstart+loutend)/2)) &
           call partoutput_short(itime)    ! dump particle positions in extremely compressed format


! Output and reinitialization of grid
! If necessary, first sample of new grid is also taken
!*****************************************************

      if ((itime.eq.loutend).and.(outnum.gt.0.)) then
        if ((iout.le.3).or.(iout.eq.5)) then

! MPI: Root process collects/sums grids
!**************************************
          call mpif_tm_reduce_grid

          if (mp_measure_time) call mpif_mtime('iotime',0)
          if (surf_only.ne.1) then
            if (lroot) then
              if (lnetcdfout.eq.1) then
                call concoutput_netcdf(itime,outnum,gridtotalunc,wetgridtotalunc,&
                     &drygridtotalunc)
              else
                call concoutput(itime,outnum,gridtotalunc,wetgridtotalunc,drygridtotalunc)
              endif
            else
              gridunc(:,:,:,:,:,:,:)=0.
            end if
          else
            if (lroot) then
              if (lnetcdfout.eq.1) then

                call concoutput_surf_netcdf(itime,outnum,gridtotalunc,wetgridtotalunc,&
                     &drygridtotalunc)
              else
                call concoutput_surf(itime,outnum,gridtotalunc,wetgridtotalunc,drygridtotalunc)
              end if
            else
              gridunc(:,:,:,:,:,:,:)=0.
            endif
          endif
          if (mp_measure_time) call mpif_mtime('iotime',1)

! :TODO: Correct calling of conc_surf above?

!   call concoutput_surf(itime,outnum,gridtotalunc,wetgridtotalunc,drygridtotalunc)
! endif

          if (nested_output.eq.1) then

! MPI: Root process collects/sums nested grids
!*********************************************
            call mpif_tm_reduce_grid_nest

            if (mp_measure_time) call mpif_mtime('iotime',0)

            if (lnetcdfout.eq.0) then
              if (surf_only.ne.1) then

                if (lroot) then
                  call concoutput_nest(itime,outnum)
                else
                  griduncn(:,:,:,:,:,:,:)=0.
                end if

              else  ! :TODO: check for zeroing in the netcdf module
                call concoutput_surf_nest(itime,outnum)

              end if

            else

              if (surf_only.ne.1) then
                if (lroot) then
                  call concoutput_nest_netcdf(itime,outnum)
                else
                  griduncn(:,:,:,:,:,:,:)=0.
                end if
              else
                if (lroot) then
                  call concoutput_surf_nest_netcdf(itime,outnum)
                else
                  griduncn(:,:,:,:,:,:,:)=0.
                end if
              endif


            end if
          end if


          outnum=0.
        endif
        if ((iout.eq.4).or.(iout.eq.5)) call plumetraj(itime)
        if (iflux.eq.1) call fluxoutput(itime)
        if (mp_measure_time) call mpif_mtime('iotime',1)


! Decide whether to write an estimate of the number of particles released,
! or the exact number (requires an MPI reduce operation)
        numpart_tot_mpi = numpart*mp_partgroup_np

        if (mp_exact_numpart.and..not.(lmpreader.and.lmp_use_reader)) then
          call MPI_Reduce(numpart, numpart_tot_mpi, 1, MPI_INTEGER, MPI_SUM, id_root, &
               & mp_comm_used, mp_ierr)
        endif
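
! Note: numpart*mp_partgroup_np is only an estimate of the global particle
! count (it assumes particles are evenly distributed over the processes);
! with mp_exact_numpart the per-process counts are summed on the root via
! MPI_Reduce, so the number reported below is exact.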

        if (lroot) then
          write(*,45) itime,numpart_tot_mpi,gridtotalunc,&
               &wetgridtotalunc,drygridtotalunc
          if (verbosity.gt.0) then
            write (*,*) 'timemanager> starting simulation'
          end if
        end if

45      format(i13,' SECONDS SIMULATED: ',i13, ' PARTICLES:    Uncertainty: ',3f7.3)
46      format(' Simulated ',f7.1,' hours (',i13,' s), ',i13, ' particles')
        if (ipout.ge.1) then
          do ip=0, mp_partgroup_np-1
            if (ip.eq.mp_partid) call partoutput(itime) ! dump particle positions
            call mpif_mpi_barrier
          end do
        end if

        loutnext=loutnext+loutstep
        loutstart=loutnext-loutaver/2
        loutend=loutnext+loutaver/2
        if (itime.eq.loutstart) then
          weight=0.5
          outnum=outnum+weight
          call conccalc(itime,weight)
        endif


! Check whether particles are to be split:
! If so, create new particles and attribute all information from the old
! particles also to the new ones; old and new particles both get half the
! mass of the old ones
!************************************************************************
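! Note: itrasplit(j)=2*(itrasplit(j)-itramem(j))+itramem(j) below doubles the
! particle age at which the next split occurs, so split intervals grow
! geometrically with particle age.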

        if (ldirect*itime.ge.ldirect*itsplit) then
          n=numpart
          do j=1,numpart
            if (ldirect*itime.ge.ldirect*itrasplit(j)) then
!                if (n.lt.maxpart) then
              if (n.lt.maxpart_mpi) then
                n=n+1
                itrasplit(j)=2*(itrasplit(j)-itramem(j))+itramem(j)
                itrasplit(n)=itrasplit(j)
                itramem(n)=itramem(j)
                itra1(n)=itra1(j)
                idt(n)=idt(j)
                npoint(n)=npoint(j)
                nclass(n)=nclass(j)
                xtra1(n)=xtra1(j)
                ytra1(n)=ytra1(j)
                ztra1(n)=ztra1(j)
                uap(n)=uap(j)
                ucp(n)=ucp(j)
                uzp(n)=uzp(j)
                us(n)=us(j)
                vs(n)=vs(j)
                ws(n)=ws(j)
                cbt(n)=cbt(j)
                do ks=1,nspec
                  xmass1(j,ks)=xmass1(j,ks)/2.
                  xmass1(n,ks)=xmass1(j,ks)
                end do
              endif
            endif
          end do
          numpart=n
        endif
      endif
    endif


    if (itime.eq.ideltas) exit         ! almost finished

! Compute interval since radioactive decay of deposited mass was computed
!************************************************************************

    if (itime.lt.loutnext) then
      ldeltat=itime-(loutnext-loutstep)
    else                                  ! first half of next interval
      ldeltat=itime-loutnext
    endif
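
! ldeltat measures the offset of the current time from the most recent
! mid-interval output time (at which the accumulated deposition fields were
! last decayed); it is used below to decay-correct freshly deposited mass so
! that it stays consistent with that periodic decay (see wetdepo).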


! Loop over all particles
!************************
    if (mp_measure_time) call mpif_mtime('partloop1',0)

!--------------------------------------------------------------------------------
! various variables for testing the CBL scheme, by mc
    well_mixed_vector=0. !erase vector to test well mixed condition: modified by mc
    well_mixed_norm=0.   !erase normalization to test well mixed condition: modified by mc
    avg_ol=0.
    avg_wst=0.
    avg_h=0.
    avg_air_dens=0.  !erase vector to obtain air density at particle positions: modified by mc
!--------------------------------------------------------------------------------

    do j=1,numpart

! If integration step is due, do it
!**********************************

      if (itra1(j).eq.itime) then

        if (ioutputforeachrelease.eq.1) then
          kp=npoint(j)
        else
          kp=1
        endif
! Determine age class of the particle
        itage=abs(itra1(j)-itramem(j))
        do nage=1,nageclass
          if (itage.lt.lage(nage)) exit
        end do

! Initialize newly released particle
!***********************************

        if ((itramem(j).eq.itime).or.(itime.eq.0)) &
             call initialize(itime,idt(j),uap(j),ucp(j),uzp(j), &
             us(j),vs(j),ws(j),xtra1(j),ytra1(j),ztra1(j),cbt(j))

! Memorize particle positions
!****************************

        xold=xtra1(j)
        yold=ytra1(j)
        zold=ztra1(j)

! Integrate Langevin equation for lsynctime seconds
!**************************************************

        mp_advance_wtime_beg = mpi_wtime()

        call advance(itime,npoint(j),idt(j),uap(j),ucp(j),uzp(j), &
             us(j),vs(j),ws(j),nstop,xtra1(j),ytra1(j),ztra1(j),prob, &
             cbt(j))

        mp_advance_wtime_end = mpi_wtime()
        mp_advance_wtime_total = mp_advance_wtime_total + (mp_advance_wtime_end - &
             & mp_advance_wtime_beg)


! Calculate the gross fluxes across layer interfaces
!***************************************************

        if (iflux.eq.1) call calcfluxes(nage,j,xold,yold,zold)


! Determine when the next time step is due
! If trajectory is terminated, mark it
!*****************************************

        if (nstop.gt.1) then
          if (linit_cond.ge.1) call initial_cond_calc(itime,j)
          itra1(j)=-999999999
        else
          itra1(j)=itime+lsynctime


! Dry deposition and radioactive decay for each species
! Also check maximum (of all species) of initial mass remaining on the particle;
! if it is below a threshold value, terminate particle
!*****************************************************************************
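! Sketch of the per-species update applied below (lambda = decay(ks)):
!   decfact    = exp(-lambda*|lsynctime|)
!   drydeposit = xmass1*prob*decfact       (fraction deposited at the ground)
!   xmass1     = xmass1*(1-prob)*decfact   (mass remaining on the particle)
! For decaying species the deposit is additionally multiplied by
! exp(+|ldeltat|*lambda), referencing it back to the output midpoint so it
! stays consistent with the periodic decay of the deposition fields above.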

          xmassfract=0.
          do ks=1,nspec
            if (decay(ks).gt.0.) then             ! radioactive decay
              decfact=exp(-real(abs(lsynctime))*decay(ks))
            else
              decfact=1.
            endif

            if (DRYDEPSPEC(ks)) then        ! dry deposition
              drydeposit(ks)=xmass1(j,ks)*prob(ks)*decfact
              xmass1(j,ks)=xmass1(j,ks)*(1.-prob(ks))*decfact
              if (decay(ks).gt.0.) then   ! correct for decay (see wetdepo)
                drydeposit(ks)=drydeposit(ks)* &
                     exp(real(abs(ldeltat))*decay(ks))
              endif
            else                           ! no dry deposition
              xmass1(j,ks)=xmass1(j,ks)*decfact
            endif


            if (mdomainfill.eq.0) then
              if (xmass(npoint(j),ks).gt.0.) &
                   xmassfract=max(xmassfract,real(npart(npoint(j)))* &
                   xmass1(j,ks)/xmass(npoint(j),ks))
            else
              xmassfract=1.
            endif
          end do

          if (xmassfract.lt.0.0001) then   ! terminate particles that have lost almost all their mass
            itra1(j)=-999999999
          endif

!        Sabine Eckhardt, June 2008
!        don't create depofield for backward runs
          if (DRYDEP.AND.(ldirect.eq.1)) then
            call drydepokernel(nclass(j),drydeposit,real(xtra1(j)), &
                 real(ytra1(j)),nage,kp)
            if (nested_output.eq.1) call drydepokernel_nest( &
                 nclass(j),drydeposit,real(xtra1(j)),real(ytra1(j)), &
                 nage,kp)
          endif

! Terminate trajectories that are older than maximum allowed age
!***************************************************************

          if (abs(itra1(j)-itramem(j)).ge.lage(nageclass)) then
            if (linit_cond.ge.1) &
                 call initial_cond_calc(itime+lsynctime,j)
            itra1(j)=-999999999
          endif
        endif

      endif

    end do ! j=1, numpart

    if(mp_measure_time) call mpif_mtime('partloop1',1)


! Added by mc: counter of "unstable" particle velocities during a time scale
!   of maxtl=20 minutes (defined in com_mod)

    total_nan_intl=0
    i_nan=i_nan+1 ! added by mc to count nan during a time of maxtl (i.e. maximum tl fixed here to 20 minutes, see com_mod)
    sum_nan_count(i_nan)=nan_count
    if (i_nan > maxtl/lsynctime) i_nan=1 !lsynctime must be <= maxtl
    do ii_nan=1, (maxtl/lsynctime)
      total_nan_intl=total_nan_intl+sum_nan_count(ii_nan)
    end do
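
! sum_nan_count acts as a circular buffer with maxtl/lsynctime slots: i_nan
! wraps around once the window is full, so total_nan_intl is a rolling sum
! of NaN counts over the last maxtl seconds.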

! Output to keep track of the numerical instabilities in the CBL simulation
! and whether they compromise the final result (or not):
    if (cblflag.eq.1) print *,j,itime,'nan_synctime',nan_count,'nan_tl',total_nan_intl


  end do ! itime=0,ideltas,lsynctime


! Complete the calculation of initial conditions for particles not yet terminated
!*****************************************************************************

! eso :TODO: this is not implemented yet (transfer particles to PID 0 or rewrite);
! the tools to do this are already in mpi_mod.f90
  if (lroot) then
    do j=1,numpart
      if (linit_cond.ge.1) call initial_cond_calc(itime,j)
    end do
  end if


  if (ipout.eq.2) then
! MPI process 0 creates the file, the other processes append to it
    do ip=0, mp_partgroup_np-1
      if (ip.eq.mp_partid) then
        !if (mp_dbg_mode) write(*,*) 'call partoutput(itime), proc, mp_partid',ip,mp_partid
        call partoutput(itime)    ! dump particle positions
      end if
      call mpif_mpi_barrier
    end do
  end if
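
! Note: the barrier inside the loop above serializes the processes in rank
! order, so rank 0 creates the file before any other process appends to it.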

! eso :TODO: MPI
  if (linit_cond.ge.1.and.lroot) call initial_cond_output(itime)   ! dump initial cond. field


! De-allocate memory and end
!***************************

999 if (iflux.eq.1) then
    deallocate(flux)
  endif
  if (OHREA) then
    deallocate(OH_field,OH_hourly,lonOH,latOH,altOH)
  endif
  if (ldirect.gt.0) then
    deallocate(drygridunc,wetgridunc)
  endif
  deallocate(gridunc)
  deallocate(xpoint1,xpoint2,ypoint1,ypoint2,zpoint1,zpoint2,xmass)
  deallocate(ireleasestart,ireleaseend,npart,kindz)
  deallocate(xmasssave)
  if (nested_output.eq.1) then
    deallocate(orooutn, arean, volumen)
    if (ldirect.gt.0) then
      deallocate(griduncn,drygriduncn,wetgriduncn)
    endif
  endif
  deallocate(outheight,outheighthalf)
  deallocate(oroout, area, volume)

  if (mp_measure_time) call mpif_mtime('timemanager',1)

end subroutine timemanager