source: branches/jerome/src_flexwrf_v3.1/timemanager_mpi.f90 @ 16

Last change on this file since 16 was 16, checked in by jebri, 10 years ago

sources for flexwrf v3.1

File size: 42.5 KB
!***********************************************************************
!* Copyright 2012,2013                                                 *
!* Jerome Brioude, Delia Arnold, Andreas Stohl, Wayne Angevine,        *
!* John Burkhart, Massimo Cassiani, Adam Dingwell, Richard C Easter,   *
!* Sabine Eckhardt, Stephanie Evan, Jerome D Fast, Don Morton,         *
!* Ignacio Pisso, Petra Seibert, Gerard Wotawa, Caroline Forster,      *
!* Harald Sodemann                                                     *
!*                                                                     *
!* This file is part of FLEXPART WRF                                   *
!*                                                                     *
!* FLEXPART is free software: you can redistribute it and/or modify    *
!* it under the terms of the GNU General Public License as published by*
!* the Free Software Foundation, either version 3 of the License, or   *
!* (at your option) any later version.                                 *
!*                                                                     *
!* FLEXPART is distributed in the hope that it will be useful,         *
!* but WITHOUT ANY WARRANTY; without even the implied warranty of      *
!* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *
!* GNU General Public License for more details.                        *
!*                                                                     *
!* You should have received a copy of the GNU General Public License   *
!* along with FLEXPART.  If not, see <http://www.gnu.org/licenses/>.   *
!***********************************************************************

       subroutine timemanager_mpi(mts)

!*******************************************************************************
!                                                                              *
! Handles the computation of trajectories, i.e. determines which               *
! trajectories have to be computed at what time.                               *
! Manages dry+wet deposition routines, radioactive decay and the computation   *
! of concentrations.                                                           *
!                                                                              *
!     Author: A. Stohl                                                         *
!                                                                              *
!     20 May 1996                                                              *
!                                                                              *
!     Dec 2005, J. Fast - Only call conccalc & concoutput when (iout.ge.1)     *
!     Aug 2007, W. Wang - call KFeta convection scheme (lconvection=2 or 3);   *
!                         note: backward mode is unavailable for lconvection=2 *
!     Mar 2012, J. Brioude: modifications to handle OpenMP and MPI             *
!*******************************************************************************
!  Changes, Bernd C. Krueger, Feb. 2001:                                       *
!        Call of convmix when new windfield is read                            *
!------------------------------------                                          *
!  Changes Petra Seibert, Sept 2002:                                           *
!     fix wet scavenging problem                                               *
!     Code may not be correct for decay of deposition!                         *
!  Changes Petra Seibert, Nov 2002:                                            *
!     call convection BEFORE new fields are read in BWD mode                   *
!  Changes Caroline Forster, Feb 2005:                                         *
!     new interface between FLEXPART and the convection scheme;                *
!     Emanuel's latest subroutine convect43c.f is used                         *
!*******************************************************************************
!                                                                              *
! Variables:                                                                   *
! DEP                .true. if either wet or dry deposition is switched on     *
! decay(maxspec) [1/s] decay constant for radioactive decay                    *
! DRYDEP             .true. if dry deposition is switched on                   *
! ideltas [s]        modelling period                                          *
! jtime [s]          actual temporal position of calculation                   *
! ldeltat [s]        time since computation of radioact. decay of depositions  *
! loutaver [s]       averaging period for concentration calculations           *
! loutend [s]        end of averaging for concentration calculations           *
! loutnext [s]       next time at which output fields shall be centered        *
! loutsample [s]     sampling interval for averaging of concentrations         *
! loutstart [s]      start of averaging for concentration calculations         *
! loutstep [s]       time interval for which concentrations shall be calculated*
! npoint(maxpart)    index of the release point each trajectory started from   *
! nstop              serves as indicator for fate of particles                 *
!                    in the particle loop                                      *
! nstop1             serves as indicator for wind fields (see getfields)       *
! outnum             number of samples for each concentration calculation      *
! prob               probability of absorption at ground due to dry deposition *
! WETDEP             .true. if wet deposition is switched on                   *
! weight             weight for each concentration sample (1/2 or 1)           *
! uap(maxpart),ucp(maxpart),uzp(maxpart) = random velocities due to turbulence *
! us(maxpart),vs(maxpart),ws(maxpart) = random velocities due to interpolation *
! xtra1(maxpart), ytra1(maxpart), ztra1(maxpart) =                             *
!                    spatial positions of trajectories                         *
!                                                                              *
! Constants:                                                                   *
! maxpart            maximum number of trajectories                            *
!                                                                              *
!*******************************************************************************

!      include 'includepar'
!      include 'includecom'
  use unc_mod
  use point_mod
!  use xmass_mod
  use flux_mod
  use outg_mod
  use oh_mod
  use par_mod
  use com_mod
  use mpi_mod
  use mt_stream

!  use ran_mod
!  use interpol_mod

      implicit none

  include 'mpif.h'

  integer :: ix,jy,j,ks,kp,l,n,jtime,nstop,nstop1
!  integer :: MPI_COMM_WORLD
! integer :: ksp
  integer :: loutnext,loutstart,loutend,jj,chunksize
!,chunksize2
  integer :: chunksize3,omp_get_num_threads
  integer :: ldeltat,itage,nage,th_itra1,i
  real :: outnum,weight,prob(maxspec),nrand,decfact
!  real :: uap(maxpart),ucp(maxpart),uzp(maxpart)
!  real :: us(maxpart),vs(maxpart),ws(maxpart)
!  integer(kind=2) :: cbt(maxpart)
! real,allocatable, dimension (:) :: uap,ucp,uzp
! real,allocatable, dimension (:) :: us,vs,ws
! integer(kind=2),allocatable, dimension (:) :: cbt
  real :: drydeposit(maxspec),gridtotalunc,wetgridtotalunc
  real :: drygridtotalunc,xold,yold,zold,xmassfract
!      integer j,k,l,n,jtime,nstop,nstop1
!      integer loutnext,loutstart,loutend
!      integer ix,jy,ldeltat,itage,nage
!      real outnum,weight,prob(maxspec)
!     real uap(maxpart),ucp(maxpart),uzp(maxpart),decfact
!     real us(maxpart),vs(maxpart),ws(maxpart),cbt(maxpart)
!     real drydeposit(maxspec),gridtotalunc,wetgridtotalunc
!      real drygridtotalunc,xold,yold,zold
!     real xm,xm1


  integer :: th_npoint,th_idt,th_itramem,jdeb,jfin,stat,th_nclass
! integer,save :: cpt(24)=0
  integer,save :: cpt(maxomp)=0

  real(kind=dp) :: th_xtra1,th_ytra1
  real :: th_ztra1,th_uap,th_ucp,th_uzp
  real :: th_us,th_vs,th_ws,ran3
  integer(kind=2) :: th_cbt
  integer :: OMP_GET_THREAD_NUM,from

  real :: p1,p2,p3,p4,ddx,ddy,rddx,rddy,dtt,dt1,dt2
  integer :: ixp,jyp,ngrid,indz,indzp,nbp,jj2,ii,offset
  logical :: depoindicator(maxspec)
  logical,save :: indzindicator(nzmax)
  real :: ust,wst,ol,h,zeta,sigu,sigv,tlu,tlv,tlw
  real :: sigw,dsigwdz,dsigw2dz,th_xmass1(maxspec)
  real :: start, finish
  real :: uprof(nzmax),vprof(nzmax),wprof(nzmax)
  real :: usigprof(nzmax),vsigprof(nzmax),wsigprof(nzmax)
  real :: rhoprof(nzmax),rhogradprof(nzmax)
  real :: tkeprof(nzmax),pttprof(nzmax)
  real :: u,v,w,usig,vsig,wsig,pvi
  integer*4 :: now(3)
  integer :: ttime,cpttra
  integer, dimension(MPI_STATUS_SIZE) :: status
  integer :: myid,ntasks,ierr,islave,tag2,ompid,n_threads,tag3,i_omp
  type (mt_state) :: mts (0: MAX_STREAM)
!************************
!JB
  call MPI_COMM_RANK ( MPI_COMM_WORLD, myid, ierr )
  call MPI_COMM_SIZE ( MPI_COMM_WORLD, ntasks, ierr )
! myid identifies this MPI task (rank 0 is the master); ntasks is the number of tasks

      loutnext=loutstep/2
      outnum=0.
      loutstart=loutnext-loutaver/2
      loutend=loutnext+loutaver/2
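
! Illustrative arithmetic for the output window above (example values, not
! taken from any particular run): with loutstep=3600 s and loutaver=1800 s
! the first output is centered at loutnext=1800 s and sampled over
! [loutstart,loutend]=[900,2700] s; after each output the window advances:
!     loutnext  = loutnext + loutstep      ! 1800 -> 5400
!     loutstart = loutnext - loutaver/2    ! -> 4500
!     loutend   = loutnext + loutaver/2    ! -> 6300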

!   if (myid.eq.0) then
    allocate(uap(maxpart) ,stat=stat)
    allocate(ucp(maxpart) ,stat=stat)
    allocate(uzp(maxpart) ,stat=stat)
    allocate(us(maxpart) ,stat=stat)
    allocate(vs(maxpart) ,stat=stat)
    allocate(ws(maxpart) ,stat=stat)
    allocate(cbt(maxpart) ,stat=stat)
!   endif
!     if (chunksize2.eq.0) chunksize2=1
      chunksize2=int(maxpart/ntasks)+1  !if particles are distributed homogeneously
!    print*,'chunk',myid,chunksize2,numpart
     if (ntasks.gt.1) then
    allocate(mpi_npoint(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_npoint'
    allocate(mpi_idt(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_idt'
    allocate(mpi_itra1(chunksize2) ,stat=stat)
    allocate(mpi_itramem(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_itramem'
    allocate(mpi_uap(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_uap'
    allocate(mpi_ucp(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_ucp'
    allocate(mpi_uzp(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_uzp'
    allocate(mpi_us(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_us'
    allocate(mpi_vs(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_vs'
    allocate(mpi_ws(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_ws'
    allocate(mpi_xtra1(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_xtra1'
    allocate(mpi_ytra1(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_ytra1'
    allocate(mpi_ztra1(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_ztra1'
    allocate(mpi_cbt(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_cbt'
    allocate(mpi_xmass1(chunksize2,nspec) ,stat=stat)
!   allocate(mpi_drydep1(chunksize2,nspec) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate mpi_xmass1'
    allocate(mpi_nclass(chunksize2) ,stat=stat)
    allocate(dummyi2(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate dummyi2'
    allocate(dummyr2(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate dummyr2'
    allocate(dummyi22(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate dummyi22'
    allocate(dummyr22(chunksize2) ,stat=stat)
    if (stat.ne.0) write(*,*)'ERROR: could not allocate dummyr22'
    chunksize2=chunksize
     endif
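
! Illustrative sketch of the decomposition implied by chunksize2 (example
! numbers, not from any run): with maxpart=2000000 and ntasks=4 each task
! reserves room for int(2000000/4)+1 = 500001 particles. Under a
! homogeneous split, rank myid would own roughly (sketch only; the actual
! partitioning lives in the send*_mpi routines of mpi_mod):
!    jdeb = myid*chunksize2+1                  ! first particle of this rank
!    jfin = min((myid+1)*chunksize2,numpart)   ! last particle of this rank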

!**********************************************************************
! Loop over the whole modelling period in time steps of lsynctime seconds
!**********************************************************************

!     print*,'time',myid,ideltas,lsynctime
      do jtime=0,ideltas,lsynctime


!         print*,'jtime',jtime
! Computation of wet deposition every lsynctime seconds
! (the wet deposition frequency may be relaxed later, but better to stay on
! the safe side). wetdepo must be called BEFORE new fields are read in, but
! should not be called at the very beginning, before any fields are loaded
! or before particles are in the system.
! Code may not be correct for decay of deposition
! changed by Petra Seibert 9/02
!********************************************************************

        if (WETDEP .and. jtime .ne. 0 .and. numpart .gt. 0) &
          call wetdepo(jtime,lsynctime,loutnext)

    if (OHREA .and. jtime .ne. 0 .and. numpart .gt. 0) &
         call ohreaction(jtime,lsynctime,loutnext)

! Compute convection for backward runs
!*************************************

!          if ((ldirect.eq.-1).and.(lconvection.eq.1).and.(jtime.lt.0))
!    &    call convmix(jtime)

           if ((ldirect.eq.-1).and.(jtime.lt.0)) then
             if (lconvection .eq. 1) call convmix(jtime)
             if (lconvection .eq. 2 .or. lconvection .eq. 3) &
                call convmix_kfeta(jtime)
           endif

! Get necessary wind fields if not available
!*******************************************

!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
        call getfields(jtime,nstop1)
        if (nstop1.gt.1) stop 'NO METEO FIELDS AVAILABLE'
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)
!      print*,'read wind time',ttime
        if (option_verbose.eq.1) then
          print*,'read wind time',finish-start
        endif
! Release particles
!******************

!JB
! Only the master task releases the particles and computes the output
     if (myid.eq.0) then
!        call itime(now)
        call cpu_time(start)
        if (mdomainfill.ge.1) then
          if (jtime.eq.0) then
            call init_domainfill()
          else
            call boundcond_domainfill(jtime,loutend)
          endif
        else
          if (numpoint_option.eq.0) then
            call releaseparticles_irreg(jtime)
          elseif (numpoint_option.eq.1) then
!      print*,'before release'
            call releaseparticles_reg(jtime)
          endif
        endif
!           do i=1,numpart
!         print*,'ipart 2',myid,i,ztra1(i)
!            enddo
!        print*,'test rel',npoint(1),npoint(2),npoint(3)

!         print*,'test rel1',npoint(5139),npoint(6002),npoint(100003)
! Compute convective mixing for forward runs;
! for backward runs it is done before the next windfield is read in
!**************************************************************

!      if ((ldirect.eq.1).and.(lconvection.eq.1)) &
!           call convmix(jtime)

          if (ldirect.eq.1) then
           if (lconvection.eq.1) call convmix(jtime)
           if (lconvection.eq.2 .or. lconvection .eq. 3) &
             call convmix_kfeta(jtime)
          endif
!      print*,'intermediate'
! If the middle of the averaging period of the output fields is reached,
! the accumulated deposited mass decays radioactively
!***********************************************************************

    if (DEP.and.(jtime.eq.loutnext).and.(ldirect.gt.0)) then
      do ks=1,nspec
      do kp=1,maxpointspec_act
        if (decay(ks).gt.0.) then
          do nage=1,nageclass
            do l=1,nclassunc
  ! Mother output grid
              do jy=0,numygrid-1
                do ix=0,numxgrid-1
                  wetgridunc(ix,jy,ks,kp,l,nage)= &
                       wetgridunc(ix,jy,ks,kp,l,nage)* &
                       exp(-1.*outstep*decay(ks))
                  drygridunc(ix,jy,ks,kp,l,nage)= &
                       drygridunc(ix,jy,ks,kp,l,nage)* &
                       exp(-1.*outstep*decay(ks))
                end do
              end do
  ! Nested output grid
              if (nested_output.eq.1) then
                do jy=0,numygridn-1
                  do ix=0,numxgridn-1
                    wetgriduncn(ix,jy,ks,kp,l,nage)= &
                         wetgriduncn(ix,jy,ks,kp,l,nage)* &
                         exp(-1.*outstep*decay(ks))
                    drygriduncn(ix,jy,ks,kp,l,nage)= &
                         drygriduncn(ix,jy,ks,kp,l,nage)* &
                         exp(-1.*outstep*decay(ks))
                  end do
                end do
              endif
            end do
          end do
        endif
      end do
      end do
    endif
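
! Sketch of the decay arithmetic above (example numbers, not from the code):
! decay(ks) is the first-order decay constant lambda = ln(2)/T_half [1/s].
! For a hypothetical tracer with T_half = 1 day and outstep = 3600 s, each
! output step multiplies the deposited mass by
!     exp(-3600.*log(2.)/86400.) ~ 0.9715
! i.e. the factor exp(-1.*outstep*decay(ks)) removes about 2.8% per step.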

!!! CHANGE: These lines may be switched on to check the conservation
!!! of mass within FLEXPART

!       if (mod(jtime,loutsample).eq.0) then
!          xm=0.
!          xm1=0.
!          do 247 j=1,numpart
!47          if (itra1(j).eq.jtime) xm1=xm1+xmass1(j,1)
!          xm=xm1
!          do 248 nage=1,nageclass
!            do 248 ix=0,numxgrid-1
!              do 248 jy=0,numygrid-1
!                do 248 l=1,nclassunc
!48        xm=xm+wetgridunc(ix,jy,1,l,nage)+drygridunc(ix,jy,1,l,nage)
!          write(*,'(i6,4f10.3)') jtime,xm,xm1
!       endif
!!! CHANGE


! Check whether concentrations are to be calculated
!**************************************************

        if ((ldirect*jtime.ge.ldirect*loutstart).and. &
        (ldirect*jtime.le.ldirect*loutend)) then ! add to grid
          if (mod(jtime-loutstart,loutsample).eq.0) then

! If we are exactly at the start or end of the concentration averaging interval,
! give only half the weight to this sample
!*******************************************************************************

            if ((jtime.eq.loutstart).or.(jtime.eq.loutend)) then
              weight=0.5
            else
              weight=1.0
            endif
!      print*,'before conccalc'
            outnum=outnum+weight
            if (iout.ge.1) then
              if (outgrid_option.eq.0) then
                call conccalc_irreg(jtime,weight)
              elseif (outgrid_option.eq.1) then
                call conccalc_reg(jtime,weight)
              endif
            endif
          endif
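
! The half-weights at the interval edges implement trapezoidal time
! averaging. Sketch with example numbers (not from any run): loutaver=3600 s
! and loutsample=900 s give samples at 0,900,...,3600 s after loutstart, so
! the accumulated sample count is
!     outnum = 0.5 + 1. + 1. + 1. + 0.5 = 4.
! which concoutput later uses to normalize the averaged concentrations.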

!      print*,'after conccalc'

!         if ((mquasilag.eq.1).and.(jtime.eq.(loutstart+loutend)/2)) &
!         call partoutput_short(jtime)    ! dump particle positions in extremely compressed format


! Output and reinitialization of grid
! If necessary, first sample of new grid is also taken
!*****************************************************

          if ((jtime.eq.loutend).and.(outnum.gt.0.)) then
!            print*,'iout',iout,ipout,outgrid_option
            if ((iout.le.3).or.(iout.eq.5)) then
              if (iout.ge.1) then
                if (outgrid_option.eq.0) then
                  call concoutput_irreg(jtime,outnum,gridtotalunc, &
                       wetgridtotalunc,drygridtotalunc)
                  if (nested_output.eq.1) call concoutput_nest_irreg(jtime,outnum)
                elseif (outgrid_option.eq.1) then
                  call concoutput_reg(jtime,outnum,gridtotalunc, &
                       wetgridtotalunc,drygridtotalunc)
                  if (nested_output.eq.1) call concoutput_nest_reg(jtime,outnum)
                endif
              endif

!      print*,'after concout'
!             if (nested_output.eq.1.and.iout.ge.1)
!    +           call concoutput_nest(jtime,outnum)
              outnum=0.
            endif
            if ((iout.eq.4).or.(iout.eq.5)) call plumetraj(jtime)
            if (iflux.eq.1) call fluxoutput(jtime)
            write(*,45) jtime,numpart,gridtotalunc,wetgridtotalunc, &
            drygridtotalunc
45          format(i9,' SECONDS SIMULATED: ',i9, &
            ' PARTICLES:    Uncertainty: ',3f7.3)
            if (ipout.ge.1) call partoutput(jtime)    ! dump particle positions
            loutnext=loutnext+loutstep
            loutstart=loutnext-loutaver/2
            loutend=loutnext+loutaver/2
            if (jtime.eq.loutstart) then
              weight=0.5
              outnum=outnum+weight
              if (iout.ge.1) then
                if (outgrid_option.eq.0) then
                  call conccalc_irreg(jtime,weight)
                elseif (outgrid_option.eq.1) then
                  call conccalc_reg(jtime,weight)
                endif
              endif
            endif


! Check whether particles are to be split:
! If so, create new particles and attribute all information from the old
! particles also to the new ones; old and new particles both get half the
! mass of the old ones
!************************************************************************

        if (ldirect*jtime.ge.ldirect*itsplit) then
          n=numpart
          do j=1,numpart
            if (ldirect*jtime.ge.ldirect*itrasplit(j)) then
              if (n.lt.maxpart) then
                n=n+1
                itrasplit(j)=2*(itrasplit(j)-itramem(j))+itramem(j)
                itrasplit(n)=itrasplit(j)
                itramem(n)=itramem(j)
                itra1(n)=itra1(j)
                idt(n)=idt(j)
                npoint(n)=npoint(j)
                nclass(n)=nclass(j)
                xtra1(n)=xtra1(j)
                ytra1(n)=ytra1(j)
                ztra1(n)=ztra1(j)
                uap(n)=uap(j)
                ucp(n)=ucp(j)
                uzp(n)=uzp(j)
                us(n)=us(j)
                vs(n)=vs(j)
                ws(n)=ws(j)
                cbt(n)=cbt(j)
                do ks=1,nspec
                  xmass1(j,ks)=xmass1(j,ks)/2.
                  xmass1(n,ks)=xmass1(j,ks)
                end do
              endif
            endif
          end do
          numpart=n
        endif
      endif
    endif
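
! Sketch of the splitting schedule above (example numbers, not from the
! code): the update
!     itrasplit(j) = 2*(itrasplit(j)-itramem(j)) + itramem(j)
! doubles the particle age at which the next split is due. A particle
! released at itramem=0 with a first split at 6 h splits again at 12 h,
! 24 h, 48 h, ..., each time leaving half the mass on each twin.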


! Loop over all particles
!************************


!     chunksize=int(numpart/ntasks)+1  !if sent homogeneously
      chunksize=int(numpart/ntasks)  !if particles are distributed homogeneously
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)

!      print*,'processing time',ttime
        if (option_verbose.eq.1) then
          print*,'processing time',finish-start
        endif
   endif !over myid
!JB
! At this stage each node holds the same meteorological fields in memory,
! because every node runs getfields. Now the trajectories are split into
! pieces, one piece per node.
!   if (myid.eq.0) then

        if (jtime.eq.ideltas) exit

! Compute interval since radioactive decay of deposited mass was computed
!************************************************************************

        if (jtime.lt.loutnext) then
          ldeltat=jtime-(loutnext-loutstep)
        else                                  ! first half of next interval
          ldeltat=jtime-loutnext
        endif


   if (myid.eq.0) then
!       call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
   do ii=1,ntasks-1
    call MPI_SEND(chunksize,1, MPI_INTEGER, ii,3001, MPI_COMM_WORLD, ierr)
    call MPI_SEND(numpart,1, MPI_INTEGER, ii,3002, MPI_COMM_WORLD, ierr)
   enddo
   else
    call MPI_RECV(chunksize,1, MPI_INTEGER, 0,3001, MPI_COMM_WORLD,status, ierr)
    call MPI_RECV(numpart,1, MPI_INTEGER, 0,3002, MPI_COMM_WORLD,status, ierr)
   endif
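
! The pairwise MPI_SEND/MPI_RECV above pushes two scalars from the master
! to every slave. A collective broadcast is an equivalent, more compact
! pattern (sketch; same communicator, no tags needed):
!    call MPI_BCAST(chunksize,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
!    call MPI_BCAST(numpart,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)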
!  print*,'numpart',numpart
!    chunksize2=chunksize
!        print*,'chunksize 0',chunksize2,ntasks
!JB
! Send the particle arrays to each slave node
     if (numpart.gt.0 .and. ntasks.gt.1 ) then
    call MPI_BARRIER(MPI_COMM_WORLD,ierr)

    call sendint_mpi(1,numpart,chunksize,0)      ! npoint
    call sendint_mpi(2,numpart,chunksize,0)      ! idt
    call sendint_mpi(3,numpart,chunksize,0)      ! itra1
    call sendreal_mpi(4,numpart,chunksize,0)     ! uap
    call sendreal_mpi(5,numpart,chunksize,0)     ! ucp
    call sendreal_mpi(6,numpart,chunksize,0)     ! uzp
    call sendreal_mpi(7,numpart,chunksize,0)     ! us
    call sendreal_mpi(8,numpart,chunksize,0)     ! vs
    call sendreal_mpi(9,numpart,chunksize,0)     ! ws
    call sendreal_mpi(10,numpart,chunksize,0)    ! ztra1
    call senddouble_mpi(11,numpart,chunksize,0)  ! xtra1
    call senddouble_mpi(12,numpart,chunksize,0)  ! ytra1
    call sendint2_mpi(13,numpart,chunksize,0)    ! cbt
    call sendint_mpi(14,numpart,chunksize,0)     ! itramem
    call sendint_mpi(15,numpart,chunksize,0)     ! nclass
    call sendreal2d_mpi(20,numpart,nspec,chunksize,0)  ! xmass1
   if (myid.eq.0) then
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)

!       print*,'sending time',ttime
        if (option_verbose.eq.1) then
          print*,'sending time',finish-start
        endif
    else
    chunksize2=chunksize
    endif
    endif !if statement on numpart and ntasks
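
! Each send*_mpi call above splits one master array over the slave nodes
! (the first argument selects the array, see mpi_mod; the last argument
! appears to select the direction, 0 = distribute, 1 = gather back). Under
! a homogeneous split the effect resembles a collective scatter; an
! illustrative sketch for itra1 (sketch only; the real routines also keep
! rank 0's share in place and handle the remainder particles):
!    call MPI_SCATTER(itra1,chunksize,MPI_INTEGER, &
!         mpi_itra1,chunksize,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)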

!   print*,'start chunksize',chunksize,chunksize2,myid
!    sigw,dsigwdz,dsigw2dz,cpt(nbp),ompid)

! Initialize the temporary dry deposition grid

  if (DRYDEP.and.ldirect.gt.0) then
  do ks=1,nspec
  do kp=1,maxpointspec_act
    do nage=1,nageclass
      do jy=0,numygrid-1
        do ix=0,numxgrid-1
          do l=1,nclassunc
            drygridunc2(ix,jy,ks,kp,l,nage)=0.
          end do
        end do
      end do
    if (nested_output.eq.1) then
      do jy=0,numygridn-1
        do ix=0,numxgridn-1
          do l=1,nclassunc
            drygriduncn2(ix,jy,ks,kp,l,nage)=0.
          end do
        end do
      end do
    endif
    end do
  end do
  end do
  endif
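
! drygridunc2/drygriduncn2 act as per-timestep accumulators that are added
! into drygridunc/drygriduncn after the particle loop. The explicit loops
! above could equally be written with Fortran 90 array syntax, e.g.
!    if (DRYDEP.and.ldirect.gt.0) then
!      drygridunc2=0.
!      if (nested_output.eq.1) drygriduncn2=0.
!    endif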

!
!JB
! Now we enter the OpenMP zone
!    if (myid.eq.0) then
!    print*,'itra11',numpart,itra1(numpart-2:numpart+2)
!    print*,'itra12',chunksize2,mpi_itra1(chunksize2-2:chunksize2+1)
!    else
!    print*,'itra13',mpi_itra1(chunksize-2:chunksize+1)
!    endif

!        print*,'continue',myid,chunksize2
!!!$OMP PARALLEL NUM_THREADS(10) DEFAULT(SHARED) &
!$OMP PARALLEL DEFAULT(SHARED) &
!$OMP PRIVATE(jj, th_npoint, th_idt, th_uap, th_ucp,  &
!$OMP th_uzp, th_us, th_vs, th_ws, th_xtra1, th_ytra1, th_ztra1,decfact, &
!$OMP th_cbt, xold, yold, zold, kp, itage, prob, nstop, xmassfract, &
!$OMP th_nclass,chunksize3,start,finish,ngrid,ompid,depoindicator,nbp, &
!$OMP indzindicator,cpttra,th_xmass1,th_itra1,th_itramem,drydeposit,n_threads) &
!$OMP SHARED(height,rho,tt,vsetaver,dquer,xtra1,ytra1,ztra1, &
!$OMP density,cunningham,itra1,ioutputforeachrelease,cbt,iflux, &
!$OMP uun,vvn,wwn,ustar,wstar,oli,uupol,vvpol,uu,vv,ww,drhodz,ptt,tke, &
!$OMP rhon,drhodzn,pttn,tken,vdep,vdepn,itramem,nageclass,lage, &
!$OMP jtime,ldirect,memind,nglobal,switchnorthg,m_x,m_y,m_xn,m_yn, &
!$OMP switchsouthg,numbnests,xln,xrn,yln,yrn,memtime,xresoln, &
!$OMP yresoln,hmix,hmixn,tropopause, &
!$OMP tropopausen,lsynctime,dxconst,dyconst,mdomainfill, &
!$OMP turb_option,turbswitch,ifine,chunksize,chunksize2, &
!!!maxrand, &
!$OMP xmass,xmass1,DRYDEP,DRYDEPSPEC,nspec,rannumb,uniform_rannumb,cpt, &
!$OMP lwindinterv,npart,npoint,idt,uap,ucp,uzp,us,vs,ws, &
!$OMP linit_cond,decay,ldeltat,nclass,nested_output,numpart, &
!$OMP  mpi_npoint,mpi_idt, mpi_uap, mpi_ucp, mpi_uzp, mpi_us, mpi_vs, &
!$OMP  mpi_ws, mpi_xtra1, mpi_ytra1, mpi_ztra1, &
!$OMP mpi_cbt,drygridunc2,drygriduncn2, &
!$OMP  mpi_xmass1, mpi_itra1,myid,mpi_itramem,mpi_nclass, &
!$OMP  mts)

!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
!       chunksize3=int(chunksize2/omp_get_num_threads())+1
        n_threads=omp_get_num_threads()

!       ompid=omp_get_num_threads()
        ompid=OMP_GET_THREAD_NUM()
        chunksize3=int(real(chunksize2)/real(n_threads)/20.)+1 !small chunks are more efficient here

        if (ompid+1.gt.maxomp) then
          print*,'problem with maxomp. Modify par_mod.f90:',maxomp,ompid+1
          stop
        endif
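
! Sketch of the chunking arithmetic above (example numbers, not from the
! code): with chunksize2=400000 particles on this node and n_threads=8,
!     chunksize3 = int(400000/8/20.)+1 = 2501
! so SCHEDULE(STATIC,chunksize3) deals out about 20 round-robin blocks per
! thread, which smooths the load when active particles cluster in jj.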
        cpttra=0
!        print*,'chunksi',chunksize2,chunksize3,myid
        if (chunksize2.gt.0 .and. numpart.gt.0) then
!         print*,'test rel2',npoint(5139),npoint(6002),npoint(100003)
!!!$OMP DO  SCHEDULE(STATIC,10)
!$OMP DO  SCHEDULE(STATIC,chunksize3)
!!!$OMP DO SCHEDULE(GUIDED,1)
!!!$OMP DO
!        do jj=1,numpart
!        do jj=numpart,1,-1
!        print*,jj
        do jj=1,chunksize2

! If integration step is due, do it
!**********************************
!$OMP CRITICAL
          if (ntasks.gt.1) then
            th_itra1=mpi_itra1(jj)
            th_itramem=mpi_itramem(jj)
            th_npoint=mpi_npoint(jj)
          else
            th_itra1=itra1(jj)
            th_itramem=itramem(jj)
            th_npoint=npoint(jj)
          endif

!$OMP END CRITICAL
!          if (th_itra1(jj).eq.jtime) then
!        if (mod(jj,30000).eq.1) print*,'middle',jj,th_itra1,jtime
!       if (th_itra1.lt.-9999) print*,'middle',jj,th_itra1,jtime
!         if (th_itra1.eq.jtime) then
!         if (th_itra1.eq.jtime .and. th_npoint.gt.0 .and. th_npoint.le.numpoint) then
          if (th_itra1.eq.jtime) then
           cpttra=cpttra+1

!       print*,'before init',j
! Initialize newly released particle
!***********************************
!$OMP CRITICAL
          if (ntasks.eq.1) then
!         th_npoint=npoint(jj)
          th_idt=idt(jj)
          th_uap=uap(jj)
          th_ucp=ucp(jj)
          th_uzp=uzp(jj)
          th_us=us(jj)
          th_vs=vs(jj)
          th_ws=ws(jj)
          th_xtra1=xtra1(jj)
          th_ytra1=ytra1(jj)
          th_ztra1=ztra1(jj)
          th_nclass=nclass(jj)
          th_cbt=cbt(jj)
          do ks=1,nspec
            th_xmass1(ks)=xmass1(jj,ks)
          enddo
          if (ioutputforeachrelease.eq.1) then
            kp=npoint(jj)
          else
            kp=1
          endif
         else
!        th_npoint=mpi_npoint(jj)
         th_idt=mpi_idt(jj)
         th_uap=mpi_uap(jj)
         th_ucp=mpi_ucp(jj)
         th_uzp=mpi_uzp(jj)
         th_us=mpi_us(jj)
         th_vs=mpi_vs(jj)
         th_ws=mpi_ws(jj)
         th_xtra1=mpi_xtra1(jj)
         th_ytra1=mpi_ytra1(jj)
         th_ztra1=mpi_ztra1(jj)
         th_nclass=mpi_nclass(jj)
         th_cbt=mpi_cbt(jj)
         do ks=1,nspec
           th_xmass1(ks)=mpi_xmass1(jj,ks)
         enddo
         if (ioutputforeachrelease.eq.1) then
            kp=mpi_npoint(jj)
         else
            kp=1
         endif
         endif
!$OMP END CRITICAL

! Determine age class of the particle
!            itage=abs(itra1(jj)-itramem(jj))
            itage=abs(th_itra1-th_itramem)
            do nage=1,nageclass
              if (itage.lt.lage(nage)) exit
            enddo
!      if (jj.lt.5) print*,'xmass1',th_xmass1(1)
!            ompid=OMP_GET_THREAD_NUM()
            nbp=ompid+1
!         print*,th_npoint,jj,npoint(jj)
!     print*,'before',th_xtra1,th_ytra1,th_ztra1
!          iff=0
!            if ((itramem(jj).eq.jtime).or.(jtime.eq.0)) &
            if ((th_itramem.eq.jtime).or.(jtime.eq.0)) then
!           call initialize(jtime,idt(j),uap(j),ucp(j),uzp(j), &
!           us(j),vs(j),ws(j),xtra1(j),ytra1(j),ztra1(j),cbt(j))
              call initialize(jtime,th_idt,th_uap,th_ucp,th_uzp, &
                th_us,th_vs,th_ws,th_xtra1,th_ytra1,th_ztra1,th_cbt, &
                ngrid,depoindicator,indzindicator,cpt(nbp),ompid,myid,n_threads,mts )
            endif

!     print*,'after',th_xtra1,th_ytra1,th_ztra1
! Memorize particle positions
!****************************
!         if (mod(jj,100000).eq.1) print*,'middle',jj,myid,ompid
!            xold=xtra1(j)
!            yold=ytra1(j)
!            zold=ztra1(j)
            xold=th_xtra1
            yold=th_ytra1
            zold=th_ztra1

! Integrate Langevin equation for lsynctime seconds
!**************************************************
!              write(*,*)'numpart,jtime, particle #=',numpart,jtime,j

!        call advance(jtime,npoint(j),idt(j),uap(j),ucp(j),uzp(j),us(j), &
!         vs(j),ws(j),nstop,xtra1(j),ytra1(j),ztra1(j),prob,cbt(j))
!     if ( abs(xold).gt.1000. .or. xold.ne.xold .or. th_xtra1.ne.th_xtra1 )  &
!    print*,'problem before 0',xold,yold,th_itramem, &
!      jtime,jj,chunksize2
!        print*,'npoint',mpi_npoint(jj),th_npoint,jj
            call advance(jtime,th_npoint,th_idt,th_uap,th_ucp,th_uzp, &
                th_us,th_vs,th_ws,nstop,th_xtra1,&
                th_ytra1,th_ztra1,prob,th_cbt, &
                ngrid,depoindicator,indzindicator,cpt(nbp),ompid,myid,n_threads,mts )
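
! advance integrates the Langevin equation over lsynctime. Schematically,
! for the vertical turbulent velocity (notation only, not the code's exact
! discretization):
!     w(t+dt) = w(t)*exp(-dt/tlw) + sigw*sqrt(1.-exp(-2.*dt/tlw))*r
!     z(t+dt) = z(t) + w(t+dt)*dt
! where tlw is the Lagrangian timescale, sigw the turbulent velocity
! standard deviation, and r a standard normal random number (rannumb).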

!     if ( abs(xold).gt.1000. .or. xold.ne.xold .or. th_xtra1.ne.th_xtra1 )  &
!    print*,'problem before 1',xold,yold,th_itramem, &
!      jtime,jj,chunksize2
! Calculate the gross fluxes across layer interfaces
!***************************************************


            if (iflux.eq.1) call calcfluxes(nage,jj,xold,yold,zold)

!      if (jj.lt.5) print*,'coord after',myid,th_itra1,th_xmass1(1),DRYDEPSPEC(ks)

! Determine when the next time step is due;
! if the trajectory is terminated, mark it
!**************************************
!!!$OMP CRITICAL
            do ks=1,nspec
              drydeposit(ks)=0.
            enddo
            if (nstop.gt.1) then
              if (linit_cond.ge.1) call initial_cond_calc(jtime,jj)
!              itra1(jj)=-999999999
              th_itra1=-999999999
              if (option_verbose.gt.1) print*,'out of domain',th_xtra1,th_ytra1,th_ztra1

            else
!              itra1(jj)=jtime+lsynctime
              th_itra1=jtime+lsynctime


!      if (jj.lt.5) print*,'coord after2',myid,th_itra1,th_xmass1(1),DRYDEPSPEC(ks)
! Dry deposition and radioactive decay for each species
!******************************************************
              xmassfract=0.

              do ks=1,nspec
                if (decay(ks).gt.0.) then             ! radioactive decay
                  decfact=exp(-real(abs(lsynctime))*decay(ks))
                else
                  decfact=1.
                endif

                if (DRYDEPSPEC(ks)) then        ! dry deposition
!                  drydeposit(ks)=xmass1(j,ks)*prob(ks)*decfact
                  drydeposit(ks)=th_xmass1(ks)*prob(ks)*decfact
!                  xmass1(j,ks)=xmass1(j,ks)*(1.-prob(ks))*decfact
                  th_xmass1(ks)=th_xmass1(ks)*(1.-prob(ks))*decfact
                  if (decay(ks).gt.0.) then   ! correct for decay (see wetdepo)
                    drydeposit(ks)=drydeposit(ks)* &
                         exp(real(abs(ldeltat))*decay(ks))
                  endif
                else                           ! no dry deposition
!                  xmass1(j,ks)=xmass1(j,ks)*decfact
                  th_xmass1(ks)=th_xmass1(ks)*decfact
                endif
!      if (jj.lt.5) print*,'coord after3',myid,th_itra1,th_xmass1(1),DRYDEPSPEC(ks),xmass(th_npoint,1)

                if (mdomainfill.eq.0) then
                  if (xmass(th_npoint,ks).gt.0.) &
!                  xmassfract=max(xmassfract,real(npart(npoint(jj)))* &
                       xmassfract=max(xmassfract,real(npart(th_npoint))* &
!                   xmass1(j,ks)/xmass(npoint(j),ks))
                       th_xmass1(ks)/xmass(th_npoint,ks))
                else
                  xmassfract=1.
                endif

              end do
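
! Sketch of the per-step mass bookkeeping above (example numbers, not from
! the code): with prob(ks)=0.01 (absorption probability from dry deposition)
! and a non-decaying species (decfact=1.), one synchronization step moves
!     drydeposit(ks) = th_xmass1(ks)*0.01
! to the deposition grid and leaves th_xmass1(ks)*0.99 on the particle; the
! exp(+abs(ldeltat)*decay(ks)) factor back-dates decaying deposits to the
! center of the current output interval.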

         if (ipin.eq.0 .and. xmassfract.lt.0.000001) then   ! terminate particles carrying almost no mass
!           itra1(jj)=-999999999
           th_itra1=-999999999
         endif

  !        Sabine Eckhardt, June 2008
  !        don't create depofield for backward runs
          if (DRYDEP.AND.(ldirect.eq.1)) then
!           call drydepokernel(nclass(jj),drydeposit,real(xtra1(jj)), &
            call drydepokernel(th_nclass,drydeposit,real(th_xtra1), &
!                real(ytra1(jj)),nage,kp)
                 real(th_ytra1),itage,nage,kp)
            if (nested_output.eq.1) call drydepokernel_nest( &
!                nclass(jj),drydeposit,real(xtra1(jj)),real(ytra1(jj)), &
              th_nclass,drydeposit,real(th_xtra1),real(th_ytra1), &
                 itage,nage,kp)
          endif

  ! Terminate trajectories that are older than maximum allowed age
  !***************************************************************

!          if (abs(itra1(jj)-itramem(jj)).ge.lage(nageclass)) then
          if (abs(th_itra1-th_itramem).ge.lage(nageclass)) then
            if (linit_cond.ge.1) &
                  call initial_cond_calc(jtime+lsynctime,jj)
!            itra1(jj)=-999999999
            th_itra1=-999999999
          endif
      endif
!!     print*,xtra1(j),th_xtra1,OMP_GET_THREAD_NUM()
!$OMP CRITICAL
          if (ntasks.eq.1) then
    npoint(jj)=th_npoint
    idt(jj)=th_idt
    uap(jj)=th_uap
    ucp(jj)=th_ucp
    uzp(jj)=th_uzp
    us(jj)=th_us
    vs(jj)=th_vs
    ws(jj)=th_ws
    xtra1(jj)=th_xtra1
    ytra1(jj)=th_ytra1
    ztra1(jj)=th_ztra1
    cbt(jj)=th_cbt
    do ks=1,nspec
      xmass1(jj,ks)=th_xmass1(ks)
!     drydep1(jj,ks)=drydeposit(ks)
    enddo
!    itramem(jj)=th_itramem
    itra1(jj)=th_itra1
     else
    mpi_npoint(jj)=th_npoint
    mpi_idt(jj)=th_idt
    mpi_uap(jj)=th_uap
    mpi_ucp(jj)=th_ucp
    mpi_uzp(jj)=th_uzp
    mpi_us(jj)=th_us
    mpi_vs(jj)=th_vs
    mpi_ws(jj)=th_ws
    mpi_xtra1(jj)=th_xtra1
    mpi_ytra1(jj)=th_ytra1
    mpi_ztra1(jj)=th_ztra1
!   mpi_nclass(jj)=th_nclass
    mpi_cbt(jj)=th_cbt
    do ks=1,nspec
      mpi_xmass1(jj,ks)=th_xmass1(ks)
!     mpi_drydep1(jj,ks)=drydeposit(ks)
    enddo
!   mpi_itramem(jj)=th_itramem
    mpi_itra1(jj)=th_itra1
     endif
!      if (jj.lt.5) print*,'coord cont',th_itra1

!$OMP END CRITICAL

      endif

    end do !loop over particles
!$OMP END DO
    endif
!!!$OMP
!!!$OMP FLUSH(npoint,idt,uap,ucp,uzp,us,vs,ws,xtra1,ytra1,ztra1,cbt,xmass1,itra1)
!!!$OMP FLUSH
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)
!      print*,'time',ttime,cpttra,myid,OMP_GET_THREAD_NUM()
        if (option_verbose.eq.1) then
          print*,'time',finish-start,cpttra,myid,ompid
        endif
!$OMP END PARALLEL
!JB
! The OpenMP work is done. The timing printed above shows how long the loop
! over the particles took -- a useful benchmark.

! Here MPI is used to update the big arrays on the master task from the
! per-node mpi_* arrays.

!   call MPI_REDUCE (mypi ,pi ,1, MPI_DOUBLE_PRECISION , MPI_SUM ,0, &
!    MPI_COMM_WORLD , ierr )
!  print*,'after loop',myid,chunksize
   if (chunksize.gt.0 .and. ntasks.gt.1) then

!JB
! A barrier ensures that every node has reached the same point;
! the new results are then sent back to the master task.
    call MPI_BARRIER(MPI_COMM_WORLD,ierr)
   if (myid.eq.0) then
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)
        call cpu_time(start)
   endif
    call sendint_mpi(1,numpart,chunksize,1)
    call sendint_mpi(2,numpart,chunksize,1)
    call sendint_mpi(3,numpart,chunksize,1)
    call sendreal_mpi(4,numpart,chunksize,1)
    call sendreal_mpi(5,numpart,chunksize,1)
    call sendreal_mpi(6,numpart,chunksize,1)
    call sendreal_mpi(7,numpart,chunksize,1)
    call sendreal_mpi(8,numpart,chunksize,1)
    call sendreal_mpi(9,numpart,chunksize,1)
    call sendreal_mpi(10,numpart,chunksize,1)
    call senddouble_mpi(11,numpart,chunksize,1)
    call senddouble_mpi(12,numpart,chunksize,1)
    call sendint2_mpi(13,numpart,chunksize,1)
    call sendint_mpi(14,numpart,chunksize,1)
!   call sendint_mpi(15,nclass(1:numpart),numpart,chunksize2,1)
    call sendreal2d_mpi(99,numpart,nspec,chunksize,1)
!   if (DRYDEP) call sendreal2d_mpi(21,numpart,nspec,chunksize,1)
    if (DRYDEP.and.(ldirect.eq.1)) call senddrydep_mpi(numxgrid*numygrid)
    if (DRYDEP.and.(ldirect.eq.1).and.nested_output.eq.1)  &
      call senddrydep_nest_mpi(numxgridn*numygridn)
!   print*,'size of vector',chunksize,chunksize2,myid,numpart
!
!    if (myid.eq.0) then
!    print*,'itra1',numpart,itra1(numpart-2:numpart+2)
!    print*,'itra2',chunksize2,mpi_itra1(chunksize2-2:chunksize2+1)
!    else
!    print*,'itra3',mpi_itra1(chunksize-2:chunksize+1)
!    endif
   if (myid.eq.0) then
!        call itime(now)
!        ttime=now(1)*3600+now(2)*60+now(3)-ttime
        call cpu_time(finish)
!      print*,'receiving time',ttime
        if (option_verbose.eq.1) then
          print*,'receiving time',finish-start
        endif
    endif
    endif !finished transferring between nodes
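
! Reverse of the earlier distribution: direction flag 1 gathers each node's
! chunk back into the master arrays, and senddrydep_mpi appears to reduce
! the per-node deposition increments onto rank 0. Conceptually (sketch only,
! with a hypothetical contiguous 2-D buffer depbuf/depbuf_sum):
!    call MPI_REDUCE(depbuf,depbuf_sum,numxgrid*numygrid,MPI_REAL, &
!         MPI_SUM,0,MPI_COMM_WORLD,ierr)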

! Update the dry deposition grid
  if (DRYDEP.and.ldirect.gt.0) then
  do ks=1,nspec
  do kp=1,maxpointspec_act
    do nage=1,nageclass

      do jy=0,numygrid-1
        do ix=0,numxgrid-1
          do l=1,nclassunc
            drygridunc(ix,jy,ks,kp,l,nage)=drygridunc(ix,jy,ks,kp,l,nage) &
                 +drygridunc2(ix,jy,ks,kp,l,nage)
          end do
        end do
      end do
    if (nested_output.eq.1) then
      do jy=0,numygridn-1
        do ix=0,numxgridn-1
          do l=1,nclassunc
            drygriduncn(ix,jy,ks,kp,l,nage)=drygriduncn(ix,jy,ks,kp,l,nage) &
                 +drygriduncn2(ix,jy,ks,kp,l,nage)
          end do
        end do
      end do
    endif
    end do
  end do
  end do

   endif
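
! As with the zeroing earlier, the accumulation could be written with
! Fortran 90 array syntax, e.g.
!    if (DRYDEP.and.ldirect.gt.0) then
!      drygridunc=drygridunc+drygridunc2
!      if (nested_output.eq.1) drygriduncn=drygriduncn+drygriduncn2
!    endif
! Keeping the per-timestep increments in drygridunc2 lets the OpenMP threads
! and MPI nodes accumulate deposition without touching the output grid itself.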

!          if (DRYDEP.AND.(ldirect.eq.1)) then
!!           call drydepokernel(nclass(jj),drydeposit,real(xtra1(jj)), &
!            call drydepokernel(th_nclass,drydeposit,real(th_xtra1), &
!!                real(ytra1(jj)),nage,kp)
!                 real(th_ytra1),itage,nage,kp)
!            if (nested_output.eq.1) call drydepokernel_nest( &
!!                nclass(jj),drydeposit,real(xtra1(jj)),real(ytra1(jj)), &
!              th_nclass,drydeposit,real(th_xtra1),real(th_ytra1), &
!                 itage,nage,kp)
!          endif


  end do !loop over time


  ! Complete the calculation of initial conditions for particles not yet terminated
  !*****************************************************************************
    call MPI_BARRIER(MPI_COMM_WORLD,ierr)

  do j=1,numpart
    if (linit_cond.ge.1) call initial_cond_calc(jtime,j)
  end do

  if (ipout.eq.2) call partoutput(jtime)     ! dump particle positions

  if (linit_cond.ge.1) call initial_cond_output(jtime)   ! dump initial cond. field

  close(104)

  ! De-allocate memory and end
  !***************************

  if (iflux.eq.1) then
      deallocate(flux)
  endif
    if (ntasks.gt.1) then
!  print*,'before dealloc',myid
   deallocate(mpi_npoint,mpi_idt,mpi_itra1)
!  print*,'after dealloc, part1',myid
   deallocate(mpi_uap,mpi_ucp,mpi_uzp)
!  print*,'after dealloc, part12',myid
   deallocate(mpi_us,mpi_vs,mpi_ws,mpi_ztra1)
!  print*,'after dealloc, part2',myid
   deallocate(mpi_xtra1,mpi_ytra1)
!  print*,'after dealloc, part3',myid
   deallocate(mpi_cbt)
!  print*,'after dealloc, part4',myid
!  deallocate(mpi_xmass1,mpi_drydep1,mpi_nclass)
   deallocate(mpi_xmass1,mpi_nclass)
!  print*,'after dealloc',myid
   deallocate(dummyi2)
   deallocate(dummyr2)
   deallocate(dummyi22)
   deallocate(dummyr22)

   endif
  if (OHREA.eqv..TRUE.) then
      deallocate(OH_field,OH_field_height)
  endif
  deallocate(gridunc)
  deallocate(xpoint1,xpoint2,ypoint1,ypoint2,zpoint1,zpoint2,xmass)
  deallocate(ireleasestart,ireleaseend,npart,kindz)
!  deallocate(xmasssave)
  if (myid.eq.0) then
  if (nested_output.eq.1) then
     deallocate(orooutn, arean, volumen)
     if (ldirect.gt.0) then
     deallocate(griduncn,drygriduncn,wetgriduncn,drygriduncn2)
     endif
  endif
  if (ldirect.gt.0) then
      if (allocated(drygridunc)) deallocate(drygridunc)
      if (allocated(wetgridunc)) deallocate(wetgridunc)
      if (allocated(drygridunc2)) deallocate(drygridunc2)
      if (allocated(drygriduncn2)) deallocate(drygriduncn2)
  endif
  deallocate(outheight,outheighthalf)
  deallocate(oroout, area, volume)
  endif
end subroutine timemanager_mpi