
Documentation on the fp format and the FLEXPART workflow for meteorological input data

For diagrams of routines and dependencies, see: http://borealscicomp.com/CTBTO_FLEXPART/flexdoxygen/20151015/

In order to facilitate the conversion of processed met data into a form that can be stored in a file and then read by another process, we have defined an “fp format.” As with any such design decision, there are tradeoffs between ease of use and performance. One option, which parts of the community have favoured, would be to store the data in NetCDF format. This would allow relatively easy access to specific data, and the data would be “self describing.” This may still be an option in the future, but we opted for a straight binary dump/load of variables, which Fortran can do very efficiently. With this approach, a full array may be efficiently dumped to or loaded from a file with simple WRITE/READ statements. For example (this program was used to test the concept initially):

PROGRAM t2

  IMPLICIT NONE

  INTEGER, PARAMETER :: N = 2
  REAL, DIMENSION(N,N) :: A, B

  ! Create simple test arrays
  A = 3.14
  B = 2.0*A

  WRITE(6,*) 'ORIGINAL A:', A
  WRITE(6,*) 'ORIGINAL B:', B
  WRITE(6,*)

  ! Dump the data to a .fp file
  OPEN(3, file='temp.fp', form="unformatted", access="sequential")
  WRITE(3) A, B
  CLOSE(3)

  ! Overwrite the original A and B, so we can be sure they are loaded correctly
  A = -9999.99
  B = -9999.99

  WRITE(6,*) 'MODIFIED A:', A
  WRITE(6,*) 'MODIFIED B:', B
  WRITE(6,*)

  ! Load the data back from the .fp file
  OPEN(4, file='temp.fp', form="unformatted", access="sequential")
  READ(4) A, B
  CLOSE(4)

  WRITE(6,*) 'NEWLY LOADED A:', A
  WRITE(6,*) 'NEWLY LOADED B:', B

END PROGRAM t2

The met fields are all stored in large multi-dimensional arrays defined in com_mod.f90 - for example:

  real :: uu(0:nxmax-1,0:nymax-1,nzmax,2)
  real :: vv(0:nxmax-1,0:nymax-1,nzmax,2)
  real :: uupol(0:nxmax-1,0:nymax-1,nzmax,2)
  real :: vvpol(0:nxmax-1,0:nymax-1,nzmax,2)

Once we understood which arrays were important, we could build dump and load routines by ensuring that the WRITE and READ statements appear in the same order (with unformatted sequential I/O, each WRITE produces one record, and the READ that consumes it must request matching data in the same order). The difficult part was determining which arrays and variables actually matter. We could have chosen simply to dump all of com_mod (which is what we did initially), but we wanted to be more efficient. What we dump now is more than sufficient. There are some variables we are not sure about, particularly ones that may not be necessary now but might be in the future; whenever we were in doubt, we chose to keep them in the fp file.

Most of the arrays have one dimension of extent 2, because FLEXPART always needs to hold data from two met files in memory so that it can interpolate between them efficiently. For a number of reasons - efficiency being one of them - we wanted to make sure that we dumped only one of the two sets into a file, and that on load it went back into the correct index of the arrays. FLEXPART plays tricks with index pointers to minimise copying of fields when a new one is read in, but we were eventually able to understand how this works and to build a system that dumps and loads correctly.
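
To make the pointer mechanism concrete, here is a small self-contained sketch. The index array that plays this role in com_mod.f90 is called memind, but the logic below is deliberately simplified for illustration and is not the actual FLEXPART code:

PROGRAM two_slot_demo

  IMPLICIT NONE
  INTEGER :: memind(2), itmp
  REAL    :: field(4,4,2)       ! last dimension holds the two time slots

  ! Initially slot 1 holds the older field, slot 2 the newer one
  memind = (/ 1, 2 /)
  field(:,:,memind(1)) = 10.0   ! field valid at the earlier time
  field(:,:,memind(2)) = 20.0   ! field valid at the later time

  ! A new met file arrives: swap the pointers so the previously "newer"
  ! slot becomes the "older" one, then overwrite the freed slot
  itmp      = memind(1)
  memind(1) = memind(2)
  memind(2) = itmp
  field(:,:,memind(2)) = 30.0

  WRITE(6,*) 'older field now holds', field(1,1,memind(1))   ! prints 20.0
  WRITE(6,*) 'newer field now holds', field(1,1,memind(2))   ! prints 30.0

END PROGRAM two_slot_demo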

The fp format is best illustrated by showing the code of the fpmetbinary_dump() routine. In the array slices below, cm_index is the time index (1 or 2) of the com_mod.f90 arrays being dumped.

            ! Scalar values
            WRITE(iounit) nx, ny, nxmin1, nymin1, nxfield
            WRITE(iounit) nuvz, nwz, nz, nmixz, nlev_ec
            WRITE(iounit) dx, dy, xlon0, ylat0, dxconst, dyconst

            ! Fixed fields, static in time
            WRITE(iounit) oro, excessoro, lsm, xlanduse, height

            ! 3d fields
            WRITE(iounit) uu(:,:,:,cm_index)
            WRITE(iounit) vv(:,:,:,cm_index)
            WRITE(iounit) uupol(:,:,:,cm_index)
            WRITE(iounit) vvpol(:,:,:,cm_index)
            WRITE(iounit) ww(:,:,:,cm_index)
            WRITE(iounit) tt(:,:,:,cm_index)
            WRITE(iounit) qv(:,:,:,cm_index)
            WRITE(iounit) pv(:,:,:,cm_index)
            WRITE(iounit) rho(:,:,:,cm_index)
            WRITE(iounit) drhodz(:,:,:,cm_index)
            WRITE(iounit) tth(:,:,:,cm_index)
            WRITE(iounit) qvh(:,:,:,cm_index)
            WRITE(iounit) pplev(:,:,:,cm_index)
            WRITE(iounit) clouds(:,:,:,cm_index)
            WRITE(iounit) cloudsh(:,:,cm_index)

            ! 2d fields
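            ! (these 2d fields are declared in com_mod.f90 with an additional
            !  third dimension, which is why they are sliced with four indices)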
            WRITE(iounit) ps(:,:,:,cm_index)
            WRITE(iounit) sd(:,:,:,cm_index)
            WRITE(iounit) msl(:,:,:,cm_index)
            WRITE(iounit) tcc(:,:,:,cm_index)
            WRITE(iounit) u10(:,:,:,cm_index)
            WRITE(iounit) v10(:,:,:,cm_index)
            WRITE(iounit) tt2(:,:,:,cm_index)
            WRITE(iounit) td2(:,:,:,cm_index)
            WRITE(iounit) lsprec(:,:,:,cm_index)
            WRITE(iounit) convprec(:,:,:,cm_index)
            WRITE(iounit) sshf(:,:,:,cm_index)
            WRITE(iounit) ssr(:,:,:,cm_index)
            WRITE(iounit) surfstr(:,:,:,cm_index)
            WRITE(iounit) ustar(:,:,:,cm_index)
            WRITE(iounit) wstar(:,:,:,cm_index)
            WRITE(iounit) hmix(:,:,:,cm_index)
            WRITE(iounit) tropopause(:,:,:,cm_index)
            WRITE(iounit) oli(:,:,:,cm_index)
            WRITE(iounit) diffk(:,:,:,cm_index)
            WRITE(iounit) vdep(:,:,:,cm_index)

            ! 1d fields
            WRITE(iounit) z0(:)
            WRITE(iounit) akm(:)
            WRITE(iounit) bkm(:)
            WRITE(iounit) akz(:)
            WRITE(iounit) bkz(:)
            WRITE(iounit) aknew(:)
            WRITE(iounit) bknew(:)

            ! Nested, scalar values (for each nest)
            WRITE(iounit) nxn(:)
            WRITE(iounit) nyn(:)
            WRITE(iounit) dxn(:)
            WRITE(iounit) dyn(:)
            WRITE(iounit) xlon0n(:)
            WRITE(iounit) ylat0n(:)

            ! Nested fields, static over time
            WRITE(iounit) oron, excessoron, lsmn, xlandusen

            ! 3d nested fields
            WRITE(iounit) uun(:,:,:,cm_index,:)
            WRITE(iounit) vvn(:,:,:,cm_index,:)
            WRITE(iounit) wwn(:,:,:,cm_index,:)
            WRITE(iounit) ttn(:,:,:,cm_index,:)
            WRITE(iounit) qvn(:,:,:,cm_index,:)
            WRITE(iounit) pvn(:,:,:,cm_index,:)
            WRITE(iounit) cloudsn(:,:,:,cm_index,:)
            WRITE(iounit) cloudsnh(:,:,cm_index,:)
            WRITE(iounit) rhon(:,:,:,cm_index,:)
            WRITE(iounit) drhodzn(:,:,:,cm_index,:)
            WRITE(iounit) tthn(:,:,:,cm_index,:)
            WRITE(iounit) qvhn(:,:,:,cm_index,:)

            ! 2d nested fields
            WRITE(iounit) psn(:,:,:,cm_index,:)
            WRITE(iounit) sdn(:,:,:,cm_index,:)
            WRITE(iounit) msln(:,:,:,cm_index,:)
            WRITE(iounit) tccn(:,:,:,cm_index,:)
            WRITE(iounit) u10n(:,:,:,cm_index,:)
            WRITE(iounit) v10n(:,:,:,cm_index,:)
            WRITE(iounit) tt2n(:,:,:,cm_index,:)
            WRITE(iounit) td2n(:,:,:,cm_index,:)
            WRITE(iounit) lsprecn(:,:,:,cm_index,:)
            WRITE(iounit) convprecn(:,:,:,cm_index,:)
            WRITE(iounit) sshfn(:,:,:,cm_index,:)
            WRITE(iounit) ssrn(:,:,:,cm_index,:)
            WRITE(iounit) surfstrn(:,:,:,cm_index,:)
            WRITE(iounit) ustarn(:,:,:,cm_index,:)
            WRITE(iounit) wstarn(:,:,:,cm_index,:)
            WRITE(iounit) hmixn(:,:,:,cm_index,:)
            WRITE(iounit) tropopausen(:,:,:,cm_index,:)
            WRITE(iounit) olin(:,:,:,cm_index,:)
            WRITE(iounit) diffkn(:,:,:,cm_index,:)
            WRITE(iounit) vdepn(:,:,:,cm_index,:)

            ! Auxiliary variables for nests
            WRITE(iounit) xresoln(:)
            WRITE(iounit) yresoln(:)
            WRITE(iounit) xln(:)
            WRITE(iounit) yln(:)
            WRITE(iounit) xrn(:)
            WRITE(iounit) yrn(:)

            ! Variables for polar stereographic projection
            WRITE(iounit) xglobal, sglobal, nglobal
            WRITE(iounit) switchnorthg, switchsouthg
            WRITE(iounit) southpolemap(:)
            WRITE(iounit) northpolemap(:)

            ! Variables declared in conv_mod (convection)
            WRITE(iounit) pconv(:)
            WRITE(iounit) phconv(:)
            WRITE(iounit) dpr(:)
            WRITE(iounit) pconv_hpa(:)
            WRITE(iounit) phconv_hpa(:)
            WRITE(iounit) ft(:)
            WRITE(iounit) fq(:)
            WRITE(iounit) fmass(:,:)
            WRITE(iounit) sub(:)
            WRITE(iounit) fmassfrac(:,:)
            WRITE(iounit) cbaseflux(:,:)
            WRITE(iounit) cbasefluxn(:,:,:)
            WRITE(iounit) tconv(:)
            WRITE(iounit) qconv(:)
            WRITE(iounit) qsconv(:)
            WRITE(iounit) psconv, tt2conv, td2conv
            WRITE(iounit) nconvlev, nconvtop
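
On the load side, fpmetbinary_load() must issue its READ statements in exactly the same order as the WRITE statements above, filling the requested time slot of each array. A shortened sketch of the pattern (the actual routine runs through the full variable list shown above):

            ! Scalar values - read back in exactly the order they were written
            READ(iounit) nx, ny, nxmin1, nymin1, nxfield
            READ(iounit) nuvz, nwz, nz, nmixz, nlev_ec
            READ(iounit) dx, dy, xlon0, ylat0, dxconst, dyconst

            ! Fixed fields, static in time
            READ(iounit) oro, excessoro, lsm, xlanduse, height

            ! 3d fields - each READ fills the requested time slot (cm_index)
            READ(iounit) uu(:,:,:,cm_index)
            READ(iounit) vv(:,:,:,cm_index)
            ! ... and so on, one READ per WRITE, in the same order, through
            ! the nested, polar-projection and convection variables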

The implementation of the above was fairly straightforward, but it took some time to understand how to make sure we dumped and loaded the correct cm_index. Once this was worked out, our tests all appeared to pass: we could generate fp data from a forward run, then start another run that read the .fp data and produced identical results.

A difference between forward and backward met fields

However, in one of our tests we generated fp data for a forward run and tried to use it in both forward and backward runs, and the backward run produced errors relative to our control output data. After a lot of experimentation and investigation, it was finally found that FLEXPART itself does not use exactly identical fields in forward and backward runs. Specifically, in a forward run the gridcheck() routine uses the earliest met file to set up the vertical coordinate system, which then remains constant for the rest of the simulation. In a backward run, gridcheck() uses the latest met file to set up the vertical coordinate system. This means that if we want “traditional” FLEXPART behaviour, we need to generate a set of fp files in which every file has the same vertical coordinate system, and that system will differ between forward and backward runs.
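
Schematically, the difference comes down to which met file gridcheck() reads when it builds the vertical coordinate system. The snippet below is a simplified paraphrase of that selection as we understand it (variable names follow gridcheck.f90, but this is not the verbatim code):

  ! ideltas > 0 indicates a forward simulation, ideltas < 0 a backward one
  if (ideltas.gt.0) then
    ifn = 1          ! forward run: use the earliest met file
  else
    ifn = numbwf     ! backward run: use the latest met file
  endif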

Recognising that this is an issue that will need to be discussed over time, our team has constructed a solution that allows a variety of fp files to be generated with grib2flexpart. The fp files can be generated so that they contain exactly the same data that a traditional FLEXPART run would contain. This means, for example, that in the case of CTBTO, 15 days of fp files would be generated, and they would all use the vertical coordinate system defined by the latest met file, as FLEXPART currently does. This, of course, means all 15 days have to be processed every day, but only once - all 90 runs could then use these preprocessed files, and results would be identical to those from traditional FLEXPART. Alternatively, fp files may also be generated one at a time, or in a specified sequence, where each fp file contains the vertical coordinate system defined by the met file that produced it. This means that during a simulation the vertical coordinate system will vary with time. It is not yet clear to us whether this is a good or a bad thing, so we have chosen to make grib2flexpart capable of multiple output configurations, which may be modified in the future if desired.

A potentially confusing downside of our raw dump/load procedure is that the arrays filled by fpmetbinary_load() must be dimensioned (as defined in par_mod.F90) exactly like the arrays that were used in fpmetbinary_dump(). Since GRIB2FLEXPART and FLEXPART are compiled in the same distribution, with the same par_mod.F90, there is no problem as long as the two binaries are used together. However, we all know how people love to copy binaries to different places, and it is entirely feasible that someone ends up using a GRIB2FLEXPART from one configuration and a FLEXPART from another; this simply will not work, even if the dimensions differ only slightly.

We could have partially alleviated this by dumping the dimensions to the fp files and filling the arrays based on them, but, again, the overhead of filling arrays in this manner would probably be excessive. However, it might be useful in the future to add these dimensions to the fp format purely as an error-checking tool: a process that wants to load the fp data into its variables could at least check that its own dimensions match those of the fp file and issue a warning if they do not, as sketched below. Some of the dimensions are already dumped, just to be safe, but not all of them.
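
A minimal sketch of what such a check could look like follows. It is purely illustrative: the dimension header is not part of the current fp format, and the fnxmax, fnymax, etc. locals are hypothetical; only nxmax, nymax, nzmax, nuvzmax and nwzmax are actual par_mod.F90 parameters.

            ! Hypothetical dimension header: the dump side would first write
            ! the compile-time dimensions from par_mod.F90, e.g.
            !     WRITE(iounit) nxmax, nymax, nzmax, nuvzmax, nwzmax
            ! and the load side would verify them before reading any arrays:
            READ(iounit) fnxmax, fnymax, fnzmax, fnuvzmax, fnwzmax
            IF (fnxmax /= nxmax .OR. fnymax /= nymax .OR. fnzmax /= nzmax .OR. &
                fnuvzmax /= nuvzmax .OR. fnwzmax /= nwzmax) THEN
              WRITE(6,*) 'WARNING: fp file was written with different par_mod.F90 dimensions'
            END IF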

Preliminary performance testing suggests that dumping and loading fp files is quick. For a 0.5 degree global ECMWF dataset, the actual processing of the GRIB met data (getting it into the arrays) takes about 160 seconds on a specific machine, while dumping it to an fp file takes about 3 seconds. Loading the file as input into FLEXPART takes about 10-16 seconds on our test platform.

If we chose to use NetCDF as the fp format in the future, we could do so simply by modifying the new fpmetbinary_mod.F90, but our suspicion is that anything other than a raw binary dump/load would introduce significant overhead. Testing this, however, would be an interesting exercise.
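
For reference, writing even a single field through the standard netCDF Fortran 90 interface would look roughly like the sketch below (illustrative only; the file, dimension and variable names are our own). Every field needs its dimensions and variable defined before it can be written, which is where the extra overhead compared to a single raw WRITE would come from.

            ! Illustrative only: writing one time slice of uu via netCDF
            USE netcdf
            INTEGER :: ncid, dimids(3), varid, status

            status = nf90_create('temp_fp.nc', NF90_CLOBBER, ncid)
            status = nf90_def_dim(ncid, 'x', nx, dimids(1))
            status = nf90_def_dim(ncid, 'y', ny, dimids(2))
            status = nf90_def_dim(ncid, 'z', nz, dimids(3))
            status = nf90_def_var(ncid, 'uu', NF90_REAL, dimids, varid)
            status = nf90_enddef(ncid)
            status = nf90_put_var(ncid, varid, uu(0:nx-1,0:ny-1,1:nz,cm_index))
            status = nf90_close(ncid)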
