source: branches/jerome/src_flexwrf_v3.1/timemanager.f90 @ 16

Last change on this file was changeset 16, checked in by jebri, 10 years ago

sources for flexwrf v3.1

File size: 39.1 KB
!***********************************************************************
!* Copyright 2012,2013                                                 *
!* Jerome Brioude, Delia Arnold, Andreas Stohl, Wayne Angevine,        *
!* John Burkhart, Massimo Cassiani, Adam Dingwell, Richard C Easter, Sabine Eckhardt,*
!* Stephanie Evan, Jerome D Fast, Don Morton, Ignacio Pisso,           *
!* Petra Seibert, Gerard Wotawa, Caroline Forster, Harald Sodemann,    *
!*                                                                     *
!* This file is part of FLEXPART WRF                                   *
!*                                                                     *
!* FLEXPART is free software: you can redistribute it and/or modify    *
!* it under the terms of the GNU General Public License as published by*
!* the Free Software Foundation, either version 3 of the License, or   *
!* (at your option) any later version.                                 *
!*                                                                     *
!* FLEXPART is distributed in the hope that it will be useful,         *
!* but WITHOUT ANY WARRANTY; without even the implied warranty of      *
!* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *
!* GNU General Public License for more details.                        *
!*                                                                     *
!* You should have received a copy of the GNU General Public License   *
!* along with FLEXPART.  If not, see <http://www.gnu.org/licenses/>.   *
!***********************************************************************

       subroutine timemanager(mts)

!*******************************************************************************
!                                                                              *
! Handles the computation of trajectories, i.e. determines which               *
! trajectories have to be computed at what time.                               *
! Manages dry+wet deposition routines, radioactive decay and the computation   *
! of concentrations.                                                           *
!                                                                              *
!     Author: A. Stohl                                                         *
!                                                                              *
!     20 May 1996                                                              *
!                                                                              *
!     Dec 2005, J. Fast - Only call conccalc & concoutput when (iout.ge.1)     *
!     Aug 2007, W. Wang - call KFeta convection scheme (lconvection=2 or 3)    *
!                       Note: backward mode is unavailable for lconvection=2   *
!     Mar 2012, J. Brioude: modifications to handle OpenMP and MPI             *
!*******************************************************************************
!  Changes, Bernd C. Krueger, Feb. 2001:                                       *
!        Call of convmix when new windfield is read                            *
!------------------------------------                                          *
!  Changes Petra Seibert, Sept 2002                                            *
!     fix wet scavenging problem                                               *
!     Code may not be correct for decay of deposition!                         *
!  Changes Petra Seibert, Nov 2002                                             *
!     call convection BEFORE new fields are read in BWD mode                   *
!  Changes Caroline Forster, Feb 2005                                          *
!     new interface between FLEXPART and the convection scheme                 *
!     Emanuel's latest subroutine convect43c.f is used                         *
!*******************************************************************************
!                                                                              *
! Variables:                                                                   *
! DEP                .true. if either wet or dry deposition is switched on     *
! decay(maxspec) [1/s] decay constant for radioactive decay                    *
! DRYDEP             .true. if dry deposition is switched on                   *
! ideltas [s]        modelling period                                          *
! jtime [s]          actual temporal position of calculation                   *
! ldeltat [s]        time since computation of radioact. decay of depositions  *
! loutaver [s]       averaging period for concentration calculations           *
! loutend [s]        end of averaging for concentration calculations           *
! loutnext [s]       next time at which output fields shall be centered        *
! loutsample [s]     sampling interval for averaging of concentrations         *
! loutstart [s]      start of averaging for concentration calculations         *
! loutstep [s]       time interval for which concentrations shall be calculated*
! npoint(maxpart)    index of the release point the trajectory started from    *
! nstop              serves as indicator for fate of particles                 *
!                    in the particle loop                                      *
! nstop1             serves as indicator for wind fields (see getfields)       *
! outnum             number of samples for each concentration calculation      *
! prob               probability of absorption at ground due to dry deposition *
! WETDEP             .true. if wet deposition is switched on                   *
! weight             weight for each concentration sample (1/2 or 1)           *
! uap(maxpart),ucp(maxpart),uzp(maxpart) = random velocities due to turbulence *
! us(maxpart),vs(maxpart),ws(maxpart) = random velocities due to interpolation *
! xtra1(maxpart), ytra1(maxpart), ztra1(maxpart) =                             *
!                    spatial positions of trajectories                         *
!                                                                              *
! Constants:                                                                   *
! maxpart            maximum number of trajectories                            *
!                                                                              *
!*******************************************************************************

!      include 'includepar'
!      include 'includecom'
  use unc_mod
  use point_mod
!  use xmass_mod
  use flux_mod
  use outg_mod
  use oh_mod
  use par_mod
  use com_mod
  use mpi_mod
  use mt_stream

!  use ran_mod
!  use interpol_mod

      implicit none

! include 'mpif.h'

  integer :: ix,jy,j,ks,kp,l,n,jtime,nstop,nstop1
!,MPI_COMM_WORLD
! integer :: ksp
  integer :: loutnext,loutstart,loutend,jj,chunksize
!,chunksize2
  integer :: chunksize3,omp_get_num_threads
  integer :: ldeltat,itage,nage,th_itra1,i
  real :: outnum,weight,prob(maxspec),nrand,decfact
!  real :: uap(maxpart),ucp(maxpart),uzp(maxpart)
!  real :: us(maxpart),vs(maxpart),ws(maxpart)
!  integer(kind=2) :: cbt(maxpart)
!  real,allocatable, dimension (:) :: uap,ucp,uzp
!  real,allocatable, dimension (:) :: us,vs,ws
!  integer(kind=2),allocatable, dimension (:) :: cbt
  real :: drydeposit(maxspec),gridtotalunc,wetgridtotalunc
  real :: drygridtotalunc,xold,yold,zold,xmassfract
!      integer j,k,l,n,jtime,nstop,nstop1
!      integer loutnext,loutstart,loutend
!      integer ix,jy,ldeltat,itage,nage
!      real outnum,weight,prob(maxspec)
!     real uap(maxpart),ucp(maxpart),uzp(maxpart),decfact
!     real us(maxpart),vs(maxpart),ws(maxpart),cbt(maxpart)
!     real drydeposit(maxspec),gridtotalunc,wetgridtotalunc
!      real drygridtotalunc,xold,yold,zold
!     real xm,xm1


  integer :: th_npoint,th_idt,th_itramem,jdeb,jfin,stat,th_nclass
  integer,save :: cpt(maxomp)=0
! integer,save :: cpt(24)=0
  real(kind=dp) :: th_xtra1,th_ytra1
  real :: th_ztra1,th_uap,th_ucp,th_uzp
  real :: th_us,th_vs,th_ws,ran3
  integer(kind=2) :: th_cbt
  integer :: OMP_GET_THREAD_NUM,from

  real :: p1,p2,p3,p4,ddx,ddy,rddx,rddy,dtt,dt1,dt2
  integer :: ixp,jyp,ngrid,indz,indzp,nbp,jj2,ii,offset
  logical :: depoindicator(maxspec)
  logical,save :: indzindicator(nzmax)
  real :: ust,wst,ol,h,zeta,sigu,sigv,tlu,tlv,tlw
  real :: sigw,dsigwdz,dsigw2dz,th_xmass1(maxspec)
  real :: start, finish
  real :: uprof(nzmax),vprof(nzmax),wprof(nzmax)
  real :: usigprof(nzmax),vsigprof(nzmax),wsigprof(nzmax)
  real :: rhoprof(nzmax),rhogradprof(nzmax)
  real :: tkeprof(nzmax),pttprof(nzmax)
  real :: u,v,w,usig,vsig,wsig,pvi
  integer*4 :: now(3)
  integer :: ttime,cpttra
!  integer, dimension(MPI_STATUS_SIZE) :: status
  integer :: myid,ntasks,ierr,islave,tag2,ompid,n_threads,tag3,i_omp
  type (mt_state) :: mts (0: MAX_STREAM)

!************************

!JB
!  call MPI_COMM_RANK ( MPI_COMM_WORLD, myid, ierr )
!  call MPI_COMM_SIZE ( MPI_COMM_WORLD, ntasks, ierr )
! myid gives the info on the node id
      ntasks=1
      myid=0
      loutnext=loutstep/2
      outnum=0.
      loutstart=loutnext-loutaver/2
      loutend=loutnext+loutaver/2
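
! The output bookkeeping above centers the first output field at
! loutnext=loutstep/2 and samples it over the window
! [loutnext-loutaver/2, loutnext+loutaver/2].
! Worked example (hypothetical values): with loutstep=3600 s and
! loutaver=3600 s, the first field is centered at t=1800 s and is
! built from samples taken between t=0 and t=3600 s.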

!   if (myid.eq.0) then
    allocate(uap(maxpart) ,stat=stat)
    allocate(ucp(maxpart) ,stat=stat)
    allocate(uzp(maxpart) ,stat=stat)
    allocate(us(maxpart) ,stat=stat)
    allocate(vs(maxpart) ,stat=stat)
    allocate(ws(maxpart) ,stat=stat)
    allocate(cbt(maxpart) ,stat=stat)
!   endif

!************************************************************************
! Loop over the whole modelling period in time steps of lsynctime seconds
!************************************************************************

!     print*,'time',myid,ideltas,lsynctime
      do jtime=0,ideltas,lsynctime


!         print*,'jtime',jtime
! Computation of wet deposition every lsynctime seconds
! maybe the wet depo frequency can be relaxed later, but better to be on the safe side
! wetdepo must be called BEFORE new fields are read in, but should not
! be called at the very beginning, before any fields are loaded or
! before particles are in the system
! Code may not be correct for decay of deposition
! changed by Petra Seibert 9/02
!********************************************************************

        if (WETDEP .and. jtime .ne. 0 .and. numpart .gt. 0) &
          call wetdepo(jtime,lsynctime,loutnext)

    if (OHREA .and. jtime .ne. 0 .and. numpart .gt. 0) &
         call ohreaction(jtime,lsynctime,loutnext)

! compute convection for backward runs
!*************************************

!          if ((ldirect.eq.-1).and.(lconvection.eq.1).and.(jtime.lt.0))
!    &    call convmix(jtime)

           if ((ldirect.eq.-1).and.(jtime.lt.0)) then
             if (lconvection .eq. 1) call convmix(jtime)
             if (lconvection .eq. 2 .or. lconvection .eq. 3) &
                call convmix_kfeta(jtime)
           endif

! Get necessary wind fields if not available
!*******************************************

!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
        call getfields(jtime,nstop1)
        if (nstop1.gt.1) stop 'NO METEO FIELDS AVAILABLE'
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)
!      print*,'read wind time',ttime

! Release particles
!******************

!JB
     if (myid.eq.0) then ! only the master task releases the particles and computes the output
!        call itime(now)
        call cpu_time(start)
        if (mdomainfill.ge.1) then
          if (jtime.eq.0) then
            call init_domainfill()
          else
            call boundcond_domainfill(jtime,loutend)
          endif
        else
          if (numpoint_option.eq.0) then
            call releaseparticles_irreg(jtime)
          elseif (numpoint_option.eq.1) then
!      print*,'before release'
            call releaseparticles_reg(jtime)
          endif
        endif
!           do i=1,numpart
!         print*,'ipart 2',myid,i,ztra1(i)
!            enddo
!        print*,'test rel',npoint(1),npoint(2),npoint(3)

!         print*,'test rel1',npoint(5139),npoint(6002),npoint(100003)
! Compute convective mixing for forward runs
! for backward runs it is done before the next windfield is read in
!**************************************************************

!      if ((ldirect.eq.1).and.(lconvection.eq.1)) &
!           call convmix(jtime)

          if (ldirect.eq.1) then
           if (lconvection.eq.1) call convmix(jtime)
           if (lconvection.eq.2 .or. lconvection.eq.3) &
             call convmix_kfeta(jtime)
          endif

! If the middle of the averaging period of the output fields is reached,
! the accumulated deposited mass radioactively decays
!***********************************************************************
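! Note: decay(ks) [1/s] is the decay constant (lambda = ln(2)/half-life),
! so over one output interval the accumulated deposit is multiplied by
! the survival fraction exp(-decay(ks)*outstep).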

    if (DEP.and.(jtime.eq.loutnext).and.(ldirect.gt.0)) then
      do ks=1,nspec
      do kp=1,maxpointspec_act
        if (decay(ks).gt.0.) then
          do nage=1,nageclass
            do l=1,nclassunc
  ! Mother output grid
              do jy=0,numygrid-1
                do ix=0,numxgrid-1
                  wetgridunc(ix,jy,ks,kp,l,nage)= &
                       wetgridunc(ix,jy,ks,kp,l,nage)* &
                       exp(-1.*outstep*decay(ks))
                  drygridunc(ix,jy,ks,kp,l,nage)= &
                       drygridunc(ix,jy,ks,kp,l,nage)* &
                       exp(-1.*outstep*decay(ks))
                end do
              end do
  ! Nested output grid
              if (nested_output.eq.1) then
                do jy=0,numygridn-1
                  do ix=0,numxgridn-1
                    wetgriduncn(ix,jy,ks,kp,l,nage)= &
                         wetgriduncn(ix,jy,ks,kp,l,nage)* &
                         exp(-1.*outstep*decay(ks))
                    drygriduncn(ix,jy,ks,kp,l,nage)= &
                         drygriduncn(ix,jy,ks,kp,l,nage)* &
                         exp(-1.*outstep*decay(ks))
                  end do
                end do
              endif
            end do
          end do
        endif
      end do
      end do
    endif

!!! CHANGE: These lines may be switched on to check the conservation
!!! of mass within FLEXPART

!       if (mod(jtime,loutsample).eq.0) then
!          xm=0.
!          xm1=0.
!          do 247 j=1,numpart
!47          if (itra1(j).eq.jtime) xm1=xm1+xmass1(j,1)
!          xm=xm1
!          do 248 nage=1,nageclass
!            do 248 ix=0,numxgrid-1
!              do 248 jy=0,numygrid-1
!                do 248 l=1,nclassunc
!48        xm=xm+wetgridunc(ix,jy,1,l,nage)+drygridunc(ix,jy,1,l,nage)
!          write(*,'(i6,4f10.3)') jtime,xm,xm1
!       endif
!!! CHANGE


! Check whether concentrations are to be calculated
!**************************************************

        if ((ldirect*jtime.ge.ldirect*loutstart).and. &
        (ldirect*jtime.le.ldirect*loutend)) then ! add to grid
          if (mod(jtime-loutstart,loutsample).eq.0) then

! If we are exactly at the start or end of the concentration averaging interval,
! give only half the weight to this sample
!*******************************************************************************
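! This amounts to trapezoidal-rule weighting: interior samples count
! fully, while the two endpoint samples of the averaging window count
! half, so the time average over [loutstart,loutend] stays unbiased.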

            if ((jtime.eq.loutstart).or.(jtime.eq.loutend)) then
              weight=0.5
            else
              weight=1.0
            endif
            outnum=outnum+weight
            if (iout.ge.1) then
              if (outgrid_option.eq.0) then
                call conccalc_irreg(jtime,weight)
              elseif (outgrid_option.eq.1) then
                call conccalc_reg(jtime,weight)
              endif
            endif
          endif


!         if ((mquasilag.eq.1).and.(jtime.eq.(loutstart+loutend)/2)) &
!         call partoutput_short(jtime)    ! dump particle positions in extremely compressed format


! Output and reinitialization of grid
! If necessary, first sample of new grid is also taken
!*****************************************************

          if ((jtime.eq.loutend).and.(outnum.gt.0.)) then
!            print*,'iout',iout,ipout,outgrid_option
            if ((iout.le.3).or.(iout.eq.5)) then
              if (iout.ge.1) then
                if (outgrid_option.eq.0) then
                  call concoutput_irreg(jtime,outnum,gridtotalunc, &
                    wetgridtotalunc,drygridtotalunc)
                  if (nested_output.eq.1) call concoutput_nest_irreg(jtime,outnum)
                elseif (outgrid_option.eq.1) then
                  call concoutput_reg(jtime,outnum,gridtotalunc, &
                    wetgridtotalunc,drygridtotalunc)
                  if (nested_output.eq.1) call concoutput_nest_reg(jtime,outnum)
                endif
              endif

!             if (nested_output.eq.1.and.iout.ge.1)
!    +           call concoutput_nest(jtime,outnum)
              outnum=0.
            endif
            if ((iout.eq.4).or.(iout.eq.5)) call plumetraj(jtime)
            if (iflux.eq.1) call fluxoutput(jtime)
            write(*,45) jtime,numpart,gridtotalunc,wetgridtotalunc, &
            drygridtotalunc
45          format(i9,' SECONDS SIMULATED: ',i9, &
            ' PARTICLES:    Uncertainty: ',3f7.3)
            if (ipout.ge.1) call partoutput(jtime)    ! dump particle positions
            loutnext=loutnext+loutstep
            loutstart=loutnext-loutaver/2
            loutend=loutnext+loutaver/2
            if (jtime.eq.loutstart) then
              weight=0.5
              outnum=outnum+weight
              if (iout.ge.1) then
                if (outgrid_option.eq.0) then
                  call conccalc_irreg(jtime,weight)
                elseif (outgrid_option.eq.1) then
                  call conccalc_reg(jtime,weight)
                endif
              endif
            endif


! Check whether particles are to be split:
! If so, create new particles and attribute all information from the old
! particles also to the new ones; old and new particles both get half the
! mass of the old ones
!************************************************************************
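! The next split time doubles the particle's current age:
! itrasplit(j) = itramem(j) + 2*(itrasplit(j)-itramem(j)), so successive
! splits happen after ever longer intervals.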

        if (ldirect*jtime.ge.ldirect*itsplit) then
          n=numpart
          do j=1,numpart
            if (ldirect*jtime.ge.ldirect*itrasplit(j)) then
              if (n.lt.maxpart) then
                n=n+1
                itrasplit(j)=2*(itrasplit(j)-itramem(j))+itramem(j)
                itrasplit(n)=itrasplit(j)
                itramem(n)=itramem(j)
                itra1(n)=itra1(j)
                idt(n)=idt(j)
                npoint(n)=npoint(j)
                nclass(n)=nclass(j)
                xtra1(n)=xtra1(j)
                ytra1(n)=ytra1(j)
                ztra1(n)=ztra1(j)
                uap(n)=uap(j)
                ucp(n)=ucp(j)
                uzp(n)=uzp(j)
                us(n)=us(j)
                vs(n)=vs(j)
                ws(n)=ws(j)
                cbt(n)=cbt(j)
                do ks=1,nspec
                  xmass1(j,ks)=xmass1(j,ks)/2.
                  xmass1(n,ks)=xmass1(j,ks)
                end do
              endif
            endif
          end do
          numpart=n
        endif
      endif
    endif

! Loop over all particles
!************************

      chunksize=int(numpart/ntasks)  !if sent homogeneously
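! Static MPI domain decomposition over particles: each of the ntasks
! nodes would handle about numpart/ntasks trajectories (here ntasks=1,
! so the master keeps the whole set).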
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)

!      print*,'processing time',ttime
   endif !over myid
!JB
! At this stage each node holds the same wind fields in shared memory, because all of them ran getfields.
! Now the trajectories need to be split into pieces for each node.
!   if (myid.eq.0) then

        if (jtime.eq.ideltas) exit

! Compute interval since radioactive decay of deposited mass was computed
!************************************************************************

        if (jtime.lt.loutnext) then
          ldeltat=jtime-(loutnext-loutstep)
        else                                  ! first half of next interval
          ldeltat=jtime-loutnext
        endif


!  if (myid.eq.0) then
!       call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
!   do ii=1,ntasks-1
!    call MPI_SEND(chunksize,1, MPI_INTEGER, ii,3001, MPI_COMM_WORLD, ierr)
!    call MPI_SEND(numpart,1, MPI_INTEGER, ii,3002, MPI_COMM_WORLD, ierr)
!   enddo
!   else
!    call MPI_RECV(chunksize,1, MPI_INTEGER, 0,3001, MPI_COMM_WORLD,status, ierr)
!    call MPI_RECV(numpart,1, MPI_INTEGER, 0,3002, MPI_COMM_WORLD,status, ierr)
!   endif
!  print*,'numpart',numpart
     chunksize2=chunksize
      if (chunksize2.eq.0) chunksize2=1
!    print*,'chunk',myid,chunksize2,numpart
!     if (ntasks.gt.1) then
!    allocate(mpi_npoint(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 1'
!    allocate(mpi_idt(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 2'
!    allocate(mpi_itra1(chunksize2) ,stat=stat)
!    allocate(mpi_itramem(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 3'
!    allocate(mpi_uap(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 4'
!    allocate(mpi_ucp(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 5'
!    allocate(mpi_uzp(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 6'
!    allocate(mpi_us(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 7'
!    allocate(mpi_vs(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 8'
!    allocate(mpi_ws(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 82'
!    allocate(mpi_xtra1(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not 9'
!    allocate(mpi_ytra1(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not10'
!    allocate(mpi_ztra1(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not11'
!    allocate(mpi_cbt(chunksize2) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not12'
!    allocate(mpi_xmass1(chunksize2,nspec) ,stat=stat)
!    if (stat.ne.0) write(*,*)'ERROR: could not13'
!    allocate(mpi_nclass(chunksize2) ,stat=stat)
!    chunksize2=chunksize
!     endif
!JB
! here I am going to send the info to each slave node.
!     if (numpart.gt.0 .and. ntasks.gt.1 ) then
!    call MPI_BARRIER(MPI_COMM_WORLD,ierr)
!
!    call sendint_mpi(1,npoint(1:numpart),numpart,chunksize,0)
!    call sendint_mpi(2,idt(1:numpart),numpart,chunksize,0)
!    call sendint_mpi(3,itra1(1:numpart),numpart,chunksize,0)
!    call sendreal_mpi(4,uap(1:numpart),numpart,chunksize,0)
!    call sendreal_mpi(5,ucp(1:numpart),numpart,chunksize,0)
!    call sendreal_mpi(6,uzp(1:numpart),numpart,chunksize,0)
!    call sendreal_mpi(7,us(1:numpart),numpart,chunksize,0)
!    call sendreal_mpi(8,vs(1:numpart),numpart,chunksize,0)
!    call sendreal_mpi(9,ws(1:numpart),numpart,chunksize,0)
!    call sendreal_mpi(10,ztra1(1:numpart),numpart,chunksize,0)
!    call senddouble_mpi(11,xtra1(1:numpart),numpart,chunksize,0)
!    call senddouble_mpi(12,ytra1(1:numpart),numpart,chunksize,0)
!    call sendint2_mpi(13,cbt(1:numpart),numpart,chunksize,0)
!    call sendint_mpi(14,itramem(1:numpart),numpart,chunksize,0)
!    call sendint_mpi(15,nclass(1:numpart),numpart,chunksize,0)
!    call sendreal2d_mpi(99,xmass1(1:numpart,1:nspec),numpart,nspec,chunksize,0)
!
!   if (myid.eq.0) then
!!        call itime(now)
!!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
!        call cpu_time(finish)
!
!!       print*,'sending time',ttime
!       print*,'sending time',finish-start
!    endif
!    endif !if statement on numpart and ntasks

!    sigw,dsigwdz,dsigw2dz,cpt(nbp),ompid)

! initialize the temporary dry deposition grid

  if (DRYDEP.and.ldirect.gt.0) then
  do ks=1,nspec
  do kp=1,maxpointspec_act
    do nage=1,nageclass
      do jy=0,numygrid-1
        do ix=0,numxgrid-1
          do l=1,nclassunc
            drygridunc2(ix,jy,ks,kp,l,nage)=0.
          end do
        end do
      end do
    if (nested_output.eq.1) then
      do jy=0,numygridn-1
        do ix=0,numxgridn-1
          do l=1,nclassunc
            drygriduncn2(ix,jy,ks,kp,l,nage)=0.
          end do
        end do
      end do
    endif
    end do
  end do
  end do
  endif

!JB
! Now we enter the OpenMP zone.

!        print*,'continue',myid,chunksize2
!!!$OMP PARALLEL NUM_THREADS(10) DEFAULT(SHARED) &
!$OMP PARALLEL DEFAULT(SHARED) &
!$OMP PRIVATE(jj, th_npoint, th_idt, th_uap, th_ucp,  &
!$OMP th_uzp, th_us, th_vs, th_ws, th_xtra1, th_ytra1, th_ztra1,decfact, &
!$OMP th_cbt, xold, yold, zold, kp, itage, prob, nstop, xmassfract, &
!$OMP th_nclass,chunksize3,start,finish,ngrid,ompid,depoindicator,nbp, &
!$OMP indzindicator,cpttra,th_xmass1,th_itra1,th_itramem,drydeposit,n_threads) &
!$OMP SHARED(height,rho,tt,vsetaver,dquer,xtra1,ytra1,ztra1, &
!$OMP density,cunningham,itra1,ioutputforeachrelease,cbt,iflux, &
!$OMP uun,vvn,wwn,ustar,wstar,oli,uupol,vvpol,uu,vv,ww,drhodz,ptt,tke, &
!$OMP rhon,drhodzn,pttn,tken,vdep,vdepn,itramem,nageclass,lage, &
!$OMP jtime,ldirect,memind,nglobal,switchnorthg,m_x,m_y,m_xn,m_yn, &
!$OMP switchsouthg,numbnests,xln,xrn,yln,yrn,memtime,xresoln, &
!$OMP yresoln,hmix,hmixn,tropopause, &
!$OMP tropopausen,lsynctime,dxconst,dyconst,mdomainfill, &
!$OMP turb_option,turbswitch,ifine,chunksize,chunksize2, &
!!!maxrand, &
!$OMP xmass,xmass1,DRYDEP,DRYDEPSPEC,nspec,rannumb,uniform_rannumb,cpt, &
!$OMP lwindinterv,npart,npoint,idt,uap,ucp,uzp,us,vs,ws, &
!$OMP linit_cond,decay,ldeltat,nclass,nested_output,numpart, &
!$OMP  mpi_npoint,mpi_idt, mpi_uap, mpi_ucp, mpi_uzp, mpi_us, mpi_vs, &
!$OMP  mpi_ws, mpi_xtra1, mpi_ytra1, mpi_ztra1, &
!$OMP mpi_cbt,drygridunc2,drygriduncn2, &
!$OMP  mpi_xmass1, mpi_itra1,myid,mpi_itramem,mpi_nclass, &
!$OMP  mts)

!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
!       chunksize3=int(chunksize2/omp_get_num_threads())+1
        n_threads=omp_get_num_threads()
        ompid=OMP_GET_THREAD_NUM()
        chunksize3=int(real(chunksize2)/real(n_threads)/20.)+1 !more efficient
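! With SCHEDULE(STATIC,chunksize3) below, this gives each thread roughly
! 20 chunks instead of one large block, which balances the load when
! trajectory costs differ (e.g. terminated vs. active particles).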

        if (ompid+1.gt.maxomp) then
          print*,'problem with maxomp. modify par_mod.f90',maxomp,ompid+1
          stop
        endif

        cpttra=0
!        print*,'chunksi',chunksize2,myid
        if (chunksize2.gt.0 .and. numpart.gt.0 ) then
!         print*,'test rel2',npoint(5139),npoint(6002),npoint(100003)
!!!$OMP DO  SCHEDULE(STATIC,10)
!$OMP DO  SCHEDULE(STATIC,chunksize3)
!!!$OMP DO SCHEDULE(GUIDED,1)
!!!$OMP DO
!        do jj=1,numpart
!        do jj=numpart,1,-1
!        print*,jj
        do jj=1,chunksize2

! If integration step is due, do it
!**********************************
!$OMP CRITICAL
!         if (ntasks.gt.1) then
!        th_itra1=mpi_itra1(jj)
!        th_itramem=mpi_itramem(jj)
!        th_npoint=mpi_npoint(jj)
!            else
         th_itra1=itra1(jj)
         th_itramem=itramem(jj)
         th_npoint=npoint(jj)
!        endif
!$OMP END CRITICAL
!          if (th_itra1(jj).eq.jtime) then
          if (th_itra1.eq.jtime .and. th_npoint.gt.0 .and. th_npoint.le.numpoint) then
           cpttra=cpttra+1

!       print*,'before init',j
! Initialize newly released particle
!***********************************
!$OMP CRITICAL
!          if (ntasks.eq.1) then
          th_npoint=npoint(jj)
          th_idt=idt(jj)
          th_uap=uap(jj)
          th_ucp=ucp(jj)
          th_uzp=uzp(jj)
          th_us=us(jj)
          th_vs=vs(jj)
          th_ws=ws(jj)
          th_xtra1=xtra1(jj)
          th_ytra1=ytra1(jj)
          th_ztra1=ztra1(jj)
          th_nclass=nclass(jj)
          th_cbt=cbt(jj)
          do ks=1,nspec
            th_xmass1(ks)=xmass1(jj,ks)
          enddo
!         else
!         th_npoint=mpi_npoint(jj)
!         th_idt=mpi_idt(jj)
!         th_uap=mpi_uap(jj)
!         th_ucp=mpi_ucp(jj)
!         th_uzp=mpi_uzp(jj)
!         th_us=mpi_us(jj)
!         th_vs=mpi_vs(jj)
!         th_ws=mpi_ws(jj)
!         th_xtra1=mpi_xtra1(jj)
!         th_ytra1=mpi_ytra1(jj)
!         th_ztra1=mpi_ztra1(jj)
!         th_nclass=mpi_nclass(jj)
!         th_cbt=mpi_cbt(jj)
!     do ks=1,nspec
!         th_xmass1(ks)=mpi_xmass1(jj,ks)
!        enddo
!         endif
!$OMP END CRITICAL
          if (ioutputforeachrelease.eq.1) then
            kp=npoint(jj)
          else
            kp=1
          endif

! Determine age class of the particle
!            itage=abs(itra1(jj)-itramem(jj))
          itage=abs(th_itra1-th_itramem)
          do nage=1,nageclass
            if (itage.lt.lage(nage)) exit
          enddo
!      if (jj.lt.5) print*,'xmass1',th_xmass1(1)
!            nbp=OMP_GET_THREAD_NUM()+1
          nbp=ompid+1
!         print*,th_npoint,jj,npoint(jj)
!     print*,'befo',th_xtra1,th_ytra1,th_ztra1
!          iff=0
!            if ((itramem(jj).eq.jtime).or.(jtime.eq.0)) &
!       if (jj.eq.103) print*,th_xtra1,th_ytra1,th_ztra1
          if ((th_itramem.eq.jtime).or.(jtime.eq.0)) then
!           call initialize(jtime,idt(j),uap(j),ucp(j),uzp(j), &
!           us(j),vs(j),ws(j),xtra1(j),ytra1(j),ztra1(j),cbt(j))
            call initialize(jtime,th_idt,th_uap,th_ucp,th_uzp, &
              th_us,th_vs,th_ws,th_xtra1,th_ytra1,th_ztra1,th_cbt, &
              ngrid,depoindicator,indzindicator,cpt(nbp),ompid,myid,n_threads,mts )
          endif
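! For a freshly released particle, initialize() sets the initial random
! velocity components (uap/ucp/uzp from turbulence, us/vs/ws from
! interpolation; see the variable list above) before the first advection step.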

!     print*,'after',th_xtra1,th_ytra1,th_ztra1
! Memorize particle positions
!****************************

!            xold=xtra1(j)
!            yold=ytra1(j)
!            zold=ztra1(j)
          xold=th_xtra1
          yold=th_ytra1
          zold=th_ztra1
! Integrate Langevin equation for lsynctime seconds
!**************************************************
!              write(*,*)'numpart,jtime, particle #=',numpart,jtime,j

!        call advance(jtime,npoint(j),idt(j),uap(j),ucp(j),uzp(j),us(j), &
!         vs(j),ws(j),nstop,xtra1(j),ytra1(j),ztra1(j),prob,cbt(j))
!       if (jj.eq.103) print*,'bef',th_xtra1,th_ytra1,th_ztra1
          call advance(jtime,th_npoint,th_idt,th_uap,th_ucp,th_uzp, &
            th_us,th_vs,th_ws,nstop,th_xtra1,&
            th_ytra1,th_ztra1,prob,th_cbt, &
            ngrid,depoindicator,indzindicator,cpt(nbp),ompid,myid,n_threads,mts )
!       if (jj.eq.103) print*,'aft',th_xtra1,th_ytra1,th_ztra1
! Calculate the gross fluxes across layer interfaces
!***************************************************


            if (iflux.eq.1) call calcfluxes(nage,jj,xold,yold,zold)

!      if (jj.lt.5) print*,'coord after',myid,th_itra1,th_xmass1(1),DRYDEPSPEC(ks)

! Determine when the next time step is due
! If the trajectory is terminated, mark it
!**************************************
          do ks=1,nspec
            drydeposit(ks)=0.
          enddo

          if (nstop.gt.1) then
            if (linit_cond.ge.1) call initial_cond_calc(jtime,jj)
!              itra1(jj)=-999999999
            th_itra1=-999999999
          else
!              itra1(jj)=jtime+lsynctime
            th_itra1=jtime+lsynctime


!      if (jj.lt.5) print*,'coord after2',myid,th_itra1,th_xmass1(1),DRYDEPSPEC(ks)
! Dry deposition and radioactive decay for each species
!******************************************************
            xmassfract=0.

            do ks=1,nspec
              if (decay(ks).gt.0.) then             ! radioactive decay
                decfact=exp(-real(abs(lsynctime))*decay(ks))
              else
                decfact=1.
              endif

              if (DRYDEPSPEC(ks)) then        ! dry deposition
!                  drydeposit(ks)=xmass1(j,ks)*prob(ks)*decfact
                drydeposit(ks)=th_xmass1(ks)*prob(ks)*decfact
!                  xmass1(j,ks)=xmass1(j,ks)*(1.-prob(ks))*decfact
                th_xmass1(ks)=th_xmass1(ks)*(1.-prob(ks))*decfact
                if (decay(ks).gt.0.) then   ! correct for decay (see wetdepo)
                  drydeposit(ks)=drydeposit(ks)* &
                    exp(real(abs(ldeltat))*decay(ks))
                endif
              else                           ! no dry deposition
!                  xmass1(j,ks)=xmass1(j,ks)*decfact
                th_xmass1(ks)=th_xmass1(ks)*decfact
              endif
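! The factor exp(+abs(ldeltat)*decay(ks)) pre-compensates the deposit:
! at output time all accumulated deposits are decayed over the full
! output interval, so a fresh deposit is scaled up by the decay it has
! not yet undergone (same convention as in wetdepo).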
!      if (jj.lt.5) print*,'coord after3',myid,th_itra1,th_xmass1(1),DRYDEPSPEC(ks),xmass(th_npoint,1)

              if (mdomainfill.eq.0) then
                if (xmass(th_npoint,ks).gt.0.) &
!                  xmassfract=max(xmassfract,real(npart(npoint(jj)))* &
                  xmassfract=max(xmassfract,real(npart(th_npoint))* &
!                   xmass1(j,ks)/xmass(npoint(j),ks))
                    th_xmass1(ks)/xmass(th_npoint,ks))
              else
                xmassfract=1.
              endif

            end do

            if (xmassfract.lt.0.000001) then   ! terminate all particles carrying less mass
!            itra1(jj)=-999999999
              th_itra1=-999999999
            endif

  !        Sabine Eckhardt, June 2008
  !        don't create depofield for backward runs
            if (DRYDEP.AND.(ldirect.eq.1)) then
!           call drydepokernel(nclass(jj),drydeposit,real(xtra1(jj)), &
              call drydepokernel(th_nclass,drydeposit,real(th_xtra1), &
!                real(ytra1(jj)),nage,kp)
                real(th_ytra1),itage,nage,kp)
              if (nested_output.eq.1) call drydepokernel_nest( &
!                nclass(jj),drydeposit,real(xtra1(jj)),real(ytra1(jj)), &
                th_nclass,drydeposit,real(th_xtra1),real(th_ytra1), &
                itage,nage,kp)
            endif

  ! Terminate trajectories that are older than maximum allowed age
  !***************************************************************

!          if (abs(itra1(jj)-itramem(jj)).ge.lage(nageclass)) then
            if (abs(th_itra1-th_itramem).ge.lage(nageclass)) then
              if (linit_cond.ge.1) &
                call initial_cond_calc(jtime+lsynctime,jj)
!            itra1(jj)=-999999999
              th_itra1=-999999999
            endif
          endif
!!     print*,xtra1(j),th_xtra1,OMP_GET_THREAD_NUM()
!$OMP CRITICAL
!         if (ntasks.eq.1) then
          npoint(jj)=th_npoint
          idt(jj)=th_idt
          uap(jj)=th_uap
          ucp(jj)=th_ucp
          uzp(jj)=th_uzp
          us(jj)=th_us
          vs(jj)=th_vs
          ws(jj)=th_ws
          xtra1(jj)=th_xtra1
          ytra1(jj)=th_ytra1
          ztra1(jj)=th_ztra1
          cbt(jj)=th_cbt
          do ks=1,nspec
            xmass1(jj,ks)=th_xmass1(ks)
          enddo
!    itramem(jj)=th_itramem
          itra1(jj)=th_itra1
!    else
!   mpi_npoint(jj)=th_npoint
!   mpi_idt(jj)=th_idt
!   mpi_uap(jj)=th_uap
!   mpi_ucp(jj)=th_ucp
!   mpi_uzp(jj)=th_uzp
!   mpi_us(jj)=th_us
!   mpi_vs(jj)=th_vs
!   mpi_ws(jj)=th_ws
!   mpi_xtra1(jj)=th_xtra1
!   mpi_ytra1(jj)=th_ytra1
!   mpi_ztra1(jj)=th_ztra1
!!  mpi_nclass(jj)=th_nclass
!   mpi_cbt(jj)=th_cbt
!   do ks=1,nspec
!   mpi_xmass1(jj,ks)=th_xmass1(ks)
!   enddo
!!  mpi_itramem(jj)=th_itramem
!   mpi_itra1(jj)=th_itra1
!    endif
!      if (jj.lt.5) print*,'coord cont',th_itra1

!$OMP END CRITICAL

        endif

      end do !loop over particles
!$OMP END DO
    endif
!!!$OMP
!!!$OMP FLUSH(npoint,idt,uap,ucp,uzp,us,vs,ws,xtra1,ytra1,ztra1,cbt,xmass1,itra1)
!!!$OMP FLUSH
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
    call cpu_time(finish)
!      print*,'time',ttime,cpttra,myid,OMP_GET_THREAD_NUM()
    if (option_verbose.eq.1) then
      print*,'time',finish-start,cpttra,myid,ompid
    endif
!$OMP END PARALLEL
!JB
! The OpenMP region is done. The timing printed above shows how long the
! loop over the particles took; a good benchmark.


! here we would use MPI so that the mpi_* arrays update the big ones on
! the master task

!   call MPI_REDUCE (mypi ,pi ,1, MPI_DOUBLE_PRECISION , MPI_SUM ,0, &
!    MPI_COMM_WORLD , ierr )
!  print*,'after loop',myid,chunksize
    if (chunksize.gt.0 .and. ntasks.gt.1) then

!JB
! A barrier is needed so that each node is at the same place.
! The new results are then sent back to the master task.
!    call MPI_BARRIER(MPI_COMM_WORLD,ierr)
      if (myid.eq.0) then
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
      endif
!    call sendint_mpi(1,npoint(1:numpart),numpart,chunksize,1)
!    call sendint_mpi(2,idt(1:numpart),numpart,chunksize,1)
!    call sendint_mpi(3,itra1(1:numpart),numpart,chunksize,1)
!    call sendreal_mpi(4,uap(1:numpart),numpart,chunksize,1)
!    call sendreal_mpi(5,ucp(1:numpart),numpart,chunksize,1)
!    call sendreal_mpi(6,uzp(1:numpart),numpart,chunksize,1)
!    call sendreal_mpi(7,us(1:numpart),numpart,chunksize,1)
!    call sendreal_mpi(8,vs(1:numpart),numpart,chunksize,1)
!    call sendreal_mpi(9,ws(1:numpart),numpart,chunksize,1)
!    call sendreal_mpi(10,ztra1(1:numpart),numpart,chunksize,1)
!    call senddouble_mpi(11,xtra1(1:numpart),numpart,chunksize,1)
!    call senddouble_mpi(12,ytra1(1:numpart),numpart,chunksize,1)
!    call sendint2_mpi(13,cbt(1:numpart),numpart,chunksize,1)
!    call sendint_mpi(14,itramem(1:numpart),numpart,chunksize,1)
!!   call sendint_mpi(15,nclass(1:numpart),numpart,chunksize2,1)
!    call sendreal2d_mpi(99,xmass1(1:numpart,1:nspec),numpart,nspec,chunksize,1)

      if (myid.eq.0) then
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)
!      print*,'receiving time',ttime
        if (option_verbose.eq.1) then
          print*,'receiving time',finish-start
        endif
      endif
    endif !finished transferring between nodes
!    if (ntasks.gt.1) then
!!  print*,'before dealloc',myid
!   deallocate(mpi_npoint,mpi_idt,mpi_itra1)
!!  print*,'after dealloc, part1',myid
!   deallocate(mpi_uap,mpi_ucp,mpi_uzp)
!!  print*,'after dealloc, part12',myid
!   deallocate(mpi_us,mpi_vs,mpi_ws,mpi_ztra1)
!!  print*,'after dealloc, part2',myid
!   deallocate(mpi_xtra1,mpi_ytra1)
!!  print*,'after dealloc, part3',myid
!   deallocate(mpi_cbt)
!!  print*,'after dealloc, part4',myid
!   deallocate(mpi_xmass1)
!!  print*,'after dealloc',myid
!   endif
! update the dry deposition grid with this interval's contribution
  if (DRYDEP.and.ldirect.gt.0) then
  do ks=1,nspec
  do kp=1,maxpointspec_act
    do nage=1,nageclass

      do jy=0,numygrid-1
        do ix=0,numxgrid-1
          do l=1,nclassunc
            drygridunc(ix,jy,ks,kp,l,nage)=drygridunc(ix,jy,ks,kp,l,nage) &
                 +drygridunc2(ix,jy,ks,kp,l,nage)
          end do
        end do
      end do
    if (nested_output.eq.1) then
      do jy=0,numygridn-1
        do ix=0,numxgridn-1
          do l=1,nclassunc
            drygriduncn(ix,jy,ks,kp,l,nage)=drygriduncn(ix,jy,ks,kp,l,nage) &
                 +drygriduncn2(ix,jy,ks,kp,l,nage)
          end do
        end do
      end do
    endif

    end do
  end do
  end do
  endif

  end do !loop over time


  ! Complete the calculation of initial conditions for particles not yet terminated
  !*********************************************************************************

  do j=1,numpart
    if (linit_cond.ge.1) call initial_cond_calc(jtime,j)
  end do

  if (ipout.eq.2) call partoutput(jtime)     ! dump particle positions

  if (linit_cond.ge.1) call initial_cond_output(jtime)   ! dump initial cond. field

  close(104)

  ! De-allocate memory and end
  !***************************

  if (iflux.eq.1) then
    deallocate(flux)
  endif
  if (OHREA.eqv..TRUE.) then
    deallocate(OH_field,OH_field_height)
  endif
  deallocate(gridunc)
  deallocate(xpoint1,xpoint2,ypoint1,ypoint2,zpoint1,zpoint2,xmass)
  deallocate(ireleasestart,ireleaseend,npart,kindz)
!  deallocate(xmasssave)
  if (myid.eq.0) then
    if (nested_output.eq.1) then
      deallocate(orooutn, arean, volumen)
      if (ldirect.gt.0) then
        deallocate(griduncn,drygriduncn,wetgriduncn,drygriduncn2)
      endif
    endif
    if (ldirect.gt.0) then
      if (allocated(drygridunc)) deallocate(drygridunc)
      if (allocated(wetgridunc)) deallocate(wetgridunc)
      if (allocated(drygridunc2)) deallocate(drygridunc2)
      if (allocated(drygriduncn2)) deallocate(drygriduncn2)
    endif
    deallocate(outheight,outheighthalf)
    deallocate(oroout, area, volume)
  endif
end subroutine timemanager