                        README - 16 July 2002

This README file outlines the steps necessary to build and run CMAQ
models. The code has been tested on a variety of platforms, but the
build and run scripts included in the tar files are set up to compile
and run on Linux (we tested on Redhat Linux 2.1 with the Portland Group
F90 compiler, pgf90 version 3.2) and Sun workstations (we tested on
SunOS 5.7 sparc Ultra-30 with f90 compiler: Sun WorkShop 6 update 1
Fortran 95 6.1 2000/09/11). The C-shell scripts are easy to modify for
any Unix platform.

The Stand-Alone CMAQ package contains data and default setups to execute
a series of tutorial runs to demonstrate the usage of the scripts and
data. The tutorial produces concentrations for a 32 km horizontal
resolution grid (coarse domain) and a nested 8 km grid (fine domain).
The package will have datasets in the $M3DATA input directories only;
the user must work through the system of CMAQ models to produce the
inputs required for the downstream model(s). These inputs (preprocessor
model outputs) must reside in the specified $M3DATA subdirectories (see
item 13 below). We have provided model-generated datasets for comparison
with your outputs in a separate tar file, M3DATA_REF_LINUX.tar
(M3DATA_REF_SUN.tar). The run scripts and comparison output data reflect
the scenario period for the tutorial runs: two 24 hour periods starting
0Z:1999183 (2 July 1999) and 0Z:1999184 (3 July 1999).

The distribution package consists of the following files:

 o README                   - this readme text file
 o RELEASE_NOTES            - text file listing the technical upgrades of
                              CMAQ for this release
 o ERRATA                   - text file describing corrections to the
                              Linux 27 June 2002 release
 o CVS_NETCDF               - text file explaining the CVS configuration
                              management system and the netCDF data system
                              and how to set them up
 o SMOKE                    - text file describing the SMOKE emissions
                              system and how to set it up (not required
                              for the tutorial)
 o MODELS.tar.gz            - gzipped tar file (~3.1 Mb) containing model,
                              tools and libraries source code CVS archives
 o M3DATA.tar.gz            - gzipped tar file (~125 Mb) containing the
                              required datasets not produced by this
                              package
 o SCRIPTS.tar.gz           - gzipped tar file (~15 Kb) containing Linux
                              C-Shell scripts to build and execute the
                              CMAQ models
 o TUTORIAL_PROCEDURE_LINUX - text file describing how to run the tutorial
                              on the Linux platform
 o M3DATA_REF_LINUX.tar.gz  - gzipped tar file (~304 Mb) containing
                              reference data to compare with datasets
                              produced by the tutorial on a Linux
                              workstation
 o TUTORIAL_PROCEDURE_SUN   - text file describing how to run the tutorial
                              on the Sun workstation
 o M3DATA_REF_SUN.tar.gz    - gzipped tar file (~301 Mb) containing
                              reference data to compare with datasets
                              produced by the tutorial on a Sun workstation

TUTORIAL_PROCEDURE_LINUX and M3DATA_REF_LINUX.tar.gz are located in the
LINUX subdirectory. TUTORIAL_PROCEDURE_SUN and M3DATA_REF_SUN.tar.gz are
located in the SUN subdirectory.

** NOTE: You must have CVS and netCDF (see the CVS_NETCDF readme for
         help).

The sequence of things you must do:

 1) Set the environment variable (path) for M3HOME, e.g. user "yoj"
    could set:

       setenv M3HOME /project/cmaq/yoj

 2) Set environment variables (paths) for M3MODEL, M3TOOLS and M3DATA
    as:

       setenv M3MODEL $M3HOME/models
       setenv M3TOOLS $M3HOME/tools  (you may need to create this
                                      subdirectory)
       setenv M3DATA  $M3HOME/data

 3) cd to $M3HOME and gunzip and untar the data tar file, M3DATA.tar.
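    For example, items 1) through 3) amount to the following csh sketch
    (the M3HOME path shown is only an illustration, as in item 1):

       setenv M3HOME  /project/cmaq/yoj
       setenv M3MODEL $M3HOME/models
       setenv M3TOOLS $M3HOME/tools
       setenv M3DATA  $M3HOME/data
       cd $M3HOME
       gunzip M3DATA.tar.gz
       tar xvf M3DATA.tar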
    This will produce the following subdirectories:

       data/
           cctm/   <<<<<<< empty, to be filled by the user
           bcon/   <<<<<<< empty, to be filled by the user
           icon/   <<<<<<< empty, to be filled by the user
           jproc/  <<<<<<< empty, to be filled by the user
           mcip2/
               M_32_99NASH/
               M_08_99NASH/
           emis/
               tut02/
           raw/
               phot/
               icon/
               bcon/

 4) Create (mkdir) the subdirectory $M3TOOLS and the following
    subdirectories under $M3TOOLS:

       build/
       stenex/
       m3io/
       netCDF/

    Concerning netCDF: The scripts assume that netCDF resides in the
    $M3TOOLS path as $M3TOOLS/netCDF. You need to install your own
    netCDF libraries built for Linux (SunOS5) in this directory (see
    CVS_NETCDF) or symbolically link to the existing netCDF on your
    system.

    Example for Linux:
       mkdir -p $M3TOOLS/netCDF/Linux
       cd $M3TOOLS/netCDF/Linux
       ln -s /home/showard/netcdf-3.4_linux/lib/libnetcdf.a libnetcdf.a

    Example for Sun:
       mkdir -p $M3TOOLS/netCDF/SunOS5
       cd $M3TOOLS/netCDF/SunOS5
       ln -s /project/m3test/tools/netCDF/SunOS5/libnetcdf.a libnetcdf.a

 5) In $M3HOME untar the models archive tar file, MODELS.tar. This will
    produce the following subdirectories:

       models/
           CCTM/
           m3io/
           include/
           BUILD/
           DYNMEM/  <<<<<<< not used for the tutorial
           PARIO/   <<<<<<< not used for the tutorial
           STENEX/
           PROCAN/  <<<<<<< not used for the tutorial
           JPROC/
           ICON/
           BCON/

 6) Make a working directory (NOT in any of the $M3MODEL, $M3TOOLS or
    $M3DATA trees), cd there, and untar SCRIPTS.tar. This will produce
    the following subdirectories, which contain "bldit" csh scripts for
    Linux and Sun, "run" scripts, and a GRIDDESC file (see item c.
    under "other details" below):

       bcon/
       icon/
       cctm/
       build/
       stenex/
       jproc/
       m3io/    <<<<<<< initially empty
       GRIDDESC1

    This is not required, but for the sake of the discussion below,
    create an environment variable, $WORK, for the top level of your
    working directory.

 7) Start by creating m3bld, the tool required to build all the other
    executables (a condensed command summary for items 7 through 9
    follows item 9 below).

       cd $WORK/build
       Execute (type) bldit.m3bld.linux (bldit.m3bld.sun)

 8) Next create the libraries required for the models. First, the
    Models-3 ioapi:

       cd $WORK/m3io

    Untar the Models-3 ioapi tar file:

       tar xvf $M3MODEL/m3io/m3io_xtr.tar

    Make the m3io library:

       Execute (type) make -f Makefile.linux (Makefile.sun)

    After the object files (*.o) and library (libm3io.a) have been
    created, move them to the tools directory:

       mkdir $M3TOOLS/m3io/Linux
       mv libm3io.a $M3TOOLS/m3io/Linux
       mv *.o $M3TOOLS/m3io/Linux
    -or-
       mkdir $M3TOOLS/m3io/SunOS5
       mv libm3io.a $M3TOOLS/m3io/SunOS5
       mv *.o $M3TOOLS/m3io/SunOS5

    Note: To create a debug version of this library, edit the Makefile:
    for Linux, comment lines 1, 34 and 37 and uncomment lines 2, 35 and
    38; for Sun, comment lines 1, 26 and 28 and uncomment lines 2, 27
    and 29.

    Finally, copy three include files to the $M3TOOLS/m3io directory:

       cp FDESC3.EXT   $M3TOOLS/m3io
       cp IODECL3.EXT  $M3TOOLS/m3io
       cp PARMS3.EXT   $M3TOOLS/m3io

    A note concerning the Models-3 ioapi (m3io): Future releases of
    CMAQ will not include m3io. Rather, the user is encouraged to
    download the standard IOAPI released by the MCNC Environmental
    Modeling Center.

 9) Next create the stencil exchange library (required for parallel
    processing - the no-op version used here is required for serial
    processing):

       cd $WORK/stenex
       Execute (type) bldit.se_noop.linux (bldit.se_noop.sun)
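    For convenience, here is a condensed sketch of items 7) through 9)
    for Linux (substitute the Sun script and Makefile names, and the
    SunOS5 directories, on a Sun workstation); it simply collects the
    commands given above:

       # item 7) build the m3bld tool
       cd $WORK/build
       bldit.m3bld.linux

       # item 8) build the m3io library and install it under $M3TOOLS
       cd $WORK/m3io
       tar xvf $M3MODEL/m3io/m3io_xtr.tar
       make -f Makefile.linux
       mkdir $M3TOOLS/m3io/Linux
       mv libm3io.a *.o $M3TOOLS/m3io/Linux
       cp FDESC3.EXT IODECL3.EXT PARMS3.EXT $M3TOOLS/m3io

       # item 9) build the no-op stencil exchange library
       cd $WORK/stenex
       bldit.se_noop.linux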
 10) Now create the model executables. jproc is created and run only
    once for the tutorial; icon and bcon need to be compiled and run
    separately for profile data (coarse grid) and for nest data (fine
    grid); cctm is compiled only once. See the TUTORIAL_PROCEDURE_LINUX
    (TUTORIAL_PROCEDURE_SUN) readme file for details.

    Generally, you will need to get the MCIP2 code and run it to create
    met data from MM5 for cctm. MCIP2 can be downloaded from the same
    site as this distribution package. And of course, you will need
    "model-ready" emissions data - presumably from SMOKE. See the SMOKE
    readme file included with this package. For this tutorial release
    we have provided the model-ready emissions and met data.

    Start with jproc (cd to $WORK/jproc). Invoke "bldit.jproc.linux"
    ("bldit.jproc.sun"). A lot of text will be displayed to standard
    out (which you can, of course, capture by redirecting it to a
    file). The process should end with a JPROC executable, which is
    invoked in the second script, "run.jproc", producing output data
    files. These data files will be written to the path predefined in
    the run script, $M3DATA/jproc.

    Note: The "run.jproc" script is set up to produce daily J-value
    tables for the cb4_ae3_aq mechanism from 30 June to 14 July 1999.
    This works as long as you're not using TOMS data, in which case you
    would need to run one day at a time.

    Note: It's always a good idea to capture in a log file the text
    written to standard out when running these models. In each "run"
    script, near the top, is a suggested method, e.g. for jproc:

       run.jproc >&! jproc.log &

 11) Check the jproc log file to ensure complete and correct execution.
    Then cd to $WORK/icon and follow the same procedure: invoke
    "bldit.icon.linux" ("bldit.icon.sun"), followed by
    "run.icon >&! icon.log &". This will produce the first (profile)
    dataset for the first run of cctm on the coarse domain. After cctm
    finishes, you will need to generate a nest dataset for the fine
    domain. See the TUTORIAL_PROCEDURE_LINUX (TUTORIAL_PROCEDURE_SUN)
    readme file for details.

 12) Follow this procedure for each of the model subdirectories after
    icon/ (the order is not mandatory). If you are running through the
    tutorial, see the TUTORIAL_PROCEDURE_LINUX (_SUN) readme file.

 13) After finishing with cctm, you should have a complete collection
    of datasets, which you can compare with the reference datasets from
    M3DATA_REF_LINUX.tar (M3DATA_REF_SUN.tar). Unless you modify the
    run scripts, the output data from all the models will reside in the
    following (automatically generated) paths:

       $M3DATA/
           jproc/
           icon/
           bcon/
           cctm/

 14) Other details:

    a. You can check output ioapi file headers (and data) using ncdump.
       This utility will be located in the same place as netCDF,
       mentioned in item 4) above. An example is given after item d.
       below.

    b. You can run a model using the required inputs from the reference
       datasets, instead of your preprocessor outputs. You might want
       to do this, for example, to run just the cctm compiled with a
       different set of module options.

    c. Running cctm, icon and bcon requires a GRIDDESC file that
       contains horizontal projection and grid domain definitions. The
       run scripts contain environment variables that point to this
       file, which contains the user's horizontal grid definitions. The
       horizontal grid definition can be set to window from the met and
       emissions input files. However, the window must be a proper
       subset - it must not lie along any of the boundaries of the
       domain represented by the input files. Note: The domains
       represented by the met and emissions data must be the same. Of
       course, you don't have to window, and the domain represented by
       the input files is a valid cctm domain.

    d. Running cctm for a windowed domain or a higher resolution nested
       domain from larger or coarser met and emissions datasets still
       requires creating initial and boundary data for the target
       domain using ICON and BCON.
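    Example for item 14 a. (a sketch only - the file name below is a
    placeholder for one of your actual cctm output files; ncdump comes
    with your netCDF installation):

       # print only the ioapi/netCDF header of an output file
       ncdump -h $M3DATA/cctm/your_conc_file | more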