Summary:\n This workshop consisted largely of discussion of definitions: what is interoperability, a sensor, an instrument, a platform, a system? Nearly everyone there had a different idea of what these terms mean, and of the purpose and goals of the workshop. The stated goals were to summarize existing technologies for sensor interoperability in observing systems and to identify what is needed to realize sensor interoperability, but because nobody was really clear about what sensor interoperability is, little progress was made toward them. There were two fairly distinct groups using different approaches: bottom-up and top-down design. There was also a split between folks involved in large observatories, who weren't concerned with power issues (or cost), and those with smaller existing or experimental systems, who often did have power or cost limitations. Somehow, out of a lot of discussion, a report will be generated to present the group's findings. \n\nThe industry folks want to help implement interoperability and asked for guidance about what was important to add, but there was limited consensus about what key elements were required to move toward interoperability.\n\n The few concepts that met with fairly wide agreement are discussed below. Here, consider the sensor to be a transmissometer, and the instrument to be a logger (which may or may not be a node of an observatory).\nTo be interoperable, these conditions need to be met:\n* the sensor must identify itself (I am a transmissometer)\n* the sensor must supply some limited metadata\n* one of N communication protocols must be used\n* one of N hardware connectors must be used\n \nBeyond this, there were many desirables, and differing opinions about where the smarts to drive self-identification should go and about implementation strategies. Most participants thought a specialist group should be assembled to take the meeting recommendations further and work toward a standard for observatories. 
However, the cyberinfrastructure recommendations for ORION are fairly well advanced, and perhaps should simply be adopted.\n\nOne of the issues discussed was which other organizations are working on parts of this puzzle. Everyone from the Open Geospatial Consortium - OGC (with SensorML) to IEEE, MMI, FGDC, and NBII was cited. An interesting connectivity standard that was discussed is IEEE 1451, with the accompanying IEEE 1588. JDDAC (Java Distributed Data Acquisition Code) is open-source software spun off from that work which may have future applicability for us. There's more about this in some slides on \n\n\n\nPoints of interest:\n Many of the data distribution systems are employing XML, web services, and Java. Some are also serving netCDF via OPeNDAP. Most of the web-based data selectors are very granular: you get temperature or salinity in response to a query, not all the variables collected by the CTD.\n\n Data quality was of less interest than getting it on the web for some participants.\n\n The Neptune observatory site has a lot of useful information on it - they offer data as images (pdf & jpg), .mat files, and csv ascii.\n\n The concept of a "plug-fest", held partway through a development project, which gets different groups working on a thin-slice project together to test how things are working, seems potentially useful.\n\n\n\n\n
Step 1. Get all the toolkits from the WHSC SVN:\n*Linux: \n{{{\nsvn co svn:// /home/rps/m\n}}}\n*Windows: Install TortoiseSVN, then in File Explorer, right click and choose "SVN Checkout", and use the above URL (I put all this stuff at c:\rps\m)\n\nStep 2. Get the netcdf toolkit, snctools and rslice from the other SVN:\n*Linux: \n{{{\nsvn co /usr/local/matlab/m_other/rslice\nsvn co /usr/local/matlab/m_other/snctools\nsvn co /usr/local/matlab/m_other/netcdf_toolkit\n}}}\n*Windows: In File Explorer, right click and choose "SVN Checkout", and use the above URLs (I put my stuff at c:\rps\m_other\rslice, c:\rps\m_other\snctools, etc.)\n\nStep 3. To add all these directories to your MATLABPATH, get this file and put it somewhere in your Matlab path. (I put it in the toolbox\local directory, which on my PC with Matlab 7.1 is "c:\program files\matlab71\toolbox\local".) Then edit this file to point to the proper directories on your machine. When Matlab starts, it will automatically run "startup.m", which will then add these directories to your path.
Okay, so I'm logged on and see 3 people logged in as root. Who are they? I can type: \n{{{\n$ who \n}}}\nto see the ttyname, then \n{{{\n$ write user [ttyname]\n}}}\nto communicate with that user. You just type, and the convention is to type "-o" when you are done "writing" and ready to listen for a response. Press <ctrl-c> to exit.
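A small shell sketch of the first step: pull the tty names for a given user out of `who` output, so you know what to hand to `write`. The function name `who_ttys` is my own throwaway wrapper, not a standard command.

```shell
#!/bin/bash
# who_ttys: list the ttys where a given user is logged in,
# by filtering the first column of `who` output
who_ttys() {
  who | awk -v u="$1" '$1 == u {print $2}'
}

# e.g. message the first root session found:
#   write root "$(who_ttys root | head -n 1)"
who_ttys root
```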
The tinyurl for this page is <>\n\nThis has been superseded by my post on gis stackexchange: \n\n\n!Introduction\nThis is the approach that I've been using to initially set up THREDDS Data Server (TDS) catalogs for regional oceanographic modeling providers to serve their model results. It is not necessarily the best practice, merely a practice that works reasonably well. There are four basic types of catalogs we have been setting up:\n*A top level catalog that points to other catalogs that you want exposed\n*An "all" catalog that automatically scans a directory tree for netcdf (and grib, etc) files\n*Catalogs that aggregate regional model results by concatenating along the time dimension\n*Catalogs that aggregate forecast model results by using the special Forecast Model Run Collection feature of the TDS.\nWe'll go through each type. But before modifying any catalogs, verify that TDS is up and running with the test catalog and datasets. Go to http://localhost:8080/thredds and drill down on one of the test data sets to the OpenDAP service to make sure everything looks okay in the OPeNDAP Data Access page. \n\n!Top level catalog (catalog.xml)\nI use the top level catalog as a table of contents whose sole purpose is to point to other catalogs that you want to advertise. 
The following example is Ruoying He's catalog.xml, where he is simply pointing to two regional modeling catalogs:\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<catalog xmlns=""\n xmlns:xlink=""\n name="THREDDS Top Catalog, points to other THREDDS catalogs" version="1.0.1">\n\n <dataset name="NCSU MEAS THREDDS catalogs">\n <catalogRef xlink:href="gomtox_catalog.xml" xlink:title="GOMTOX (Gulf of Maine) Ocean Model" name=""/>\n <catalogRef xlink:href="sabgom_catalog.xml"\n xlink:title="SABGOM (South Atlantic Bight and Gulf of Mexico) Ocean Model" name=""/>\n </dataset>\n\n</catalog>\n}}}\n\n!The "All" Catalog\nIt is quite convenient to have a catalog that automatically lets you access all the data files in a particular directory tree via the TDS services. The datasetScan feature in the TDS scans a specified directory tree for files matching certain patterns or file extensions. \nThis could be your whole disk, or just a particular directory. In the following example, the TDS will scan the /data1/models directory for all NetCDF, Grib, or HDF files, sort them in alphabetical order, and include the file size. 
The data will be served via OpenDAP and HTTP, with HTTP just allowing people to download the existing file in its native format.\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<catalog xmlns=""\n xmlns:xlink=""\n name="THREDDS Catalog for NetCDF Files" version="1.0.1">\n <service name="allServices" serviceType="Compound" base="">\n <service name="ncdods" serviceType="OpenDAP" base="/thredds/dodsC/"/>\n <service name="HTTPServer" serviceType="HTTPServer" base="/thredds/fileServer/"/>\n <service name="wcs" serviceType="WCS" base="/thredds/wcs/"/>\n <service name="ncss" serviceType="NetcdfSubset" base="/thredds/ncss/grid/"/>\n <service name="wms" serviceType="WMS" base="/thredds/wms/"/>\n <service name="iso" serviceType="ISO" base="/thredds/iso/"/>\n <service name="ncml" serviceType="NCML" base="/thredds/ncml/"/>\n <service name="uddc" serviceType="UDDC" base="/thredds/uddc/"/>\n </service>\n \n <datasetScan name="Model Data" ID="models" path="models" location="/data1/models">\n <metadata inherited="true">\n <serviceName>allServices</serviceName>\n <publisher>\n <name vocabulary="DIF">USGS/ER/WHCMSC/Dr. Richard P. Signell</name>\n <contact url="" email=""/>\n </publisher>\n </metadata>\n <filter>\n <include wildcard="*.ncml"/>\n <include wildcard="*.nc"/>\n <include wildcard="*.grd"/>\n <include wildcard="*.nc.gz"/>\n <include wildcard="*.cdf"/>\n <include wildcard="*.grib"/>\n <include wildcard="*.grb"/>\n <include wildcard="*.grb2"/>\n <include wildcard="*.grib2"/>\n </filter>\n <sort>\n <lexigraphicByName increasing="true"/>\n </sort>\n <addDatasetSize/>\n </datasetScan>\n\n</catalog>\n}}}\nYou could reference this catalog in your catalog.xml file, or you might feel that advertising a link to all your data files would be confusing to some users. If you don't put the catalog in catalog.xml, you must add a reference to it in the threddsConfig.xml file in order for it to be read by the TDS. 
So if your catalog is called "all.xml", you would need a line in threddsConfig.xml that looks like this:\n{{{\n<catalogRoot>all.xml</catalogRoot>\n}}}\n\n!Regional model catalogs\n\nI suggest that you use a separate catalog for each model domain so that others can link to your catalogs in their own THREDDS catalogs in a more flexible way (e.g. your catalog for Boston Harbor could be referenced in a regional catalog for the Gulf of Maine). \n\nFor regional model results, there are typically two types of aggregation datasets that are useful. One aggregates along an existing time dimension, and so uses type="joinExisting":\n{{{\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/media/1tb/MABGOM/Jun292008_Feb282009" regExp=".*mabgom_avg_[0-9]{4}\$"/>\n </aggregation>\n}}}\nwhere you can use a (Java-style) regular expression to match only certain files in a directory. Here we are matching files that look like "". The "." means any character, so ".*" means any number of any characters, followed by "mabgom_avg_", followed by exactly 4 digits between 0 and 9, followed by exactly ".nc". So the entire catalog might look like:\n\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<catalog name="MABGOM Catalog"\n xmlns=""\n xmlns:xlink="">\n <service name="allServices" serviceType="Compound" base="">\n <service name="wms" serviceType="WMS" base="/thredds/wms/"/>\n <service name="iso" serviceType="ISO" base="/thredds/iso/"/>\n <service name="ncml" serviceType="NCML" base="/thredds/ncml/"/>\n <service name="uddc" serviceType="UDDC" base="/thredds/uddc/"/>\n </service>\n\n <dataset name="MABGOM Runs">\n\n <metadata inherited="true">\n <serviceName>allServices</serviceName>\n <creator>\n <name vocabulary="DIF">Dr. 
Ruoying He</name>\n <contact url="" email=""/>\n </creator>\n <documentation xlink:href=""\n xlink:title="MABGOM Circulation"/>\n <documentation type="Summary"> Hydrodynamic simulations for the Mid-Atlantic Bight and Gulf of\n Maine </documentation>\n <documentation type="Rights"> This model data was generated as part of an academic research\n project, and the principal investigators: Ruoying He ( ask to be informed of\n intent for scientific use and appropriate acknowledgment given in any publications arising\n therefrom. The data is provided free of charge, without warranty of any kind.\n </documentation>\n </metadata>\n\n <dataset name="Tide-Averaged Data">\n <dataset name="Jun292008_Feb282009" ID="MABGOM/Jun292008_Feb282009/avg"\n urlPath="MABGOM/Jun292008_Feb282009/avg">\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/media/1tb/MABGOM/Jun292008_Feb282009"\n regExp=".*mabgom_avg_[0-9]{4}\$"/>\n </aggregation>\n </netcdf>\n </dataset>\n </dataset>\n\n <dataset name="History Data">\n <dataset name="Jun292008_Feb282009" ID="MABGOM/Jun292008_Feb282009/his"\n urlPath="MABGOM/Jun292008_Feb282009/his">\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/media/1tb/MABGOM/Jun292008_Feb282009"\n regExp=".*mabgom_his_[0-9]{4}\$"/>\n </aggregation>\n </netcdf>\n </dataset>\n </dataset>\n\n </dataset>\n</catalog>\n}}}\n\n!Forecast model catalogs\nThe other type of very useful catalog is a Forecast Model Run Collection (FMRC), which aggregates forecast files that have overlapping time records (e.g. 3-day forecasts, issued once a day). For this type of catalog, we use the FMRC FeatureCollection, which creates a "best time series" view, using the most recent data from each forecast to construct a continuous aggregated time series. 
The files to be scanned are specified in the `collection` tag, and when the files are scanned is specified by either a `recheckAfter` tag in the collection tag, or in the `update` tag.\n\nHere's a full example:\n{{{\n<catalog xmlns:xsi=""\n xsi:schemaLocation=""\n xmlns=""\n xmlns:xlink="" name="OPeNDAP Data Server" version="1.0.3">\n\n <!-- \n Specify the data and metadata services for this catalog\n -->\n <service name="allServices" serviceType="Compound" base="">\n <service name="ncdods" serviceType="OPENDAP" base="/thredds/dodsC/"/>\n <service name="ncss" serviceType="NetcdfSubset" base="/thredds/ncss/grid/"/>\n <service name="wms" serviceType="WMS" base="/thredds/wms/"/>\n <service name="iso" serviceType="ISO" base="/thredds/iso/"/>\n <service name="ncml" serviceType="NCML" base="/thredds/ncml/"/>\n <service name="uddc" serviceType="UDDC" base="/thredds/uddc/"/>\n </service>\n <!-- \n Create a folder for all the FMRC Feature Collections\n -->\n <dataset name="COAWST Model Runs">\n <metadata inherited="true">\n <serviceName>allServices</serviceName>\n <authority></authority>\n <dataType>Grid</dataType>\n <dataFormat>NetCDF</dataFormat>\n <creator>\n <name vocabulary="DIF">OM/WHSC/USGS</name>\n <contact url="" email=""/>\n </creator>\n <publisher>\n <name vocabulary="DIF">OM/WHSC/USGS</name>\n <contact url="" email=""/>\n </publisher>\n <documentation xlink:href=""\n xlink:title="Carolinas Coastal Change Program"/>\n <documentation xlink:href=""\n xlink:title="ReadMe.txt"/>\n </metadata>\n <!-- \n First FMRC Feature Collection\n -->\n <featureCollection name="coawst_4_use" featureType="FMRC" harvest="true" path="coawst_4/use/fmrc">\n <metadata inherited="true">\n <documentation type="summary">ROMS Output from COAWST</documentation>\n <serviceName>allServices</serviceName>\n </metadata>\n <!-- \n Inside the featureCollection, but outside the protoDataset, we define the NcML that happens\n before the aggregation. 
To get aggregated, we must have grids, so we turn the bed params\n into grids by giving them a pseudo coordinate in Z. If we don't do this, they will not be \n aggregated. \n -->\n <netcdf xmlns="">\n <variable name="Nbed" shape="Nbed" type="double">\n <attribute name="long_name" value="pseudo coordinate at seabed points"/>\n <attribute name="standard_name" value="ocean_sigma_coordinate"/>\n <attribute name="positive" value="up"/>\n <attribute name="formula_terms" value="sigma: Nbed eta: zeta depth: h"/>\n <values start="-1.0" increment="-0.01"/>\n </variable>\n <attribute name="Conventions" value="CF-1.0"/>\n </netcdf>\n\n <!-- \n Specify which files to scan for the collection, and say when to scan them.\n (here we scan at 3:30 and 4:30 every morning. 4:30 is just in case the model\n finishes late)\n -->\n <collection spec="/usgs/vault0/coawst/coawst_4/Output/use/$"\n olderThan="10 min"/>\n <update startup="true" rescan="0 30 3,4 * * ? *" trigger="allow"/>\n\n <!-- \n Specify the dataset to use for non-aggregated variables and \n global attributes. NcML changes here are applied after the data\n has been aggregated. \n -->\n <protoDataset choice="Penultimate">\n <netcdf xmlns="">\n <variable name="temp">\n <attribute name="_FillValue" type="float" value="0.0"/>\n </variable>\n <variable name="salt">\n <attribute name="_FillValue" type="float" value="0.0"/>\n </variable>\n <variable name="Hwave">\n <attribute name="_FillValue" type="float" value="0.0"/>\n </variable>\n <variable name="zeta">\n <attribute name="_FillValue" type="float" value="0.0"/>\n </variable>\n </netcdf>\n </protoDataset>\n <!-- \n Specify what datasets the user will access. Usually we just \n want the "best time series" aggregation. 
\n -->\n <fmrcConfig regularize="false" datasetTypes="Best"/>\n </featureCollection>\n\n </dataset>\n</catalog>\n\n}}}\n\nThe best place to find more information on setting up the TDS is usually the documents linked from the latest TDS tutorial from Unidata.\n\nAs I type this, the most recent is:\n\n\nwhich links to:\n\n\n\n\nSteve Baum has some tutorials also.\n\nseems to have been superseded by\n\nbut this also points to now-out-of-date pages at Unidata.\n
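One last tip: a joinExisting regExp like the mabgom_avg one in the regional catalogs above is easy to sanity-check from the shell before bouncing the TDS. Here grep -E stands in for Java's regex engine (they agree for patterns this simple), the filenames are invented for the test, and I've written the pattern out in full with the trailing ".nc" the text describes:

```shell
#!/bin/bash
# any characters, then mabgom_avg_, then exactly 4 digits, then .nc at the end
pattern='.*mabgom_avg_[0-9]{4}\.nc$'

# made-up filenames: only the two avg files with 4-digit numbers should match
for f in mabgom_his_0001.nc mabgom_avg_12.nc mabgom_avg_0001.nc mabgom_avg_0002.nc; do
  echo "$f" | grep -Eq "$pattern" && echo "matched: $f"
done
```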
Working on the ADCIRC high-res grid developed for VDATUM in the Gulf of Maine\n\nMade a movie of currents over the tidal cycle from the forecast run\nusing "tide_movie.m" in \nc:\rps\maine\models\adcirc\vdatum\tide_movie.m\n\nExtracted elevation and velocity time series for the MVCO 12-m node location using\nc:\rps\maine\models\adcirc\vdatum\adcirc_tseries.m\n\nCompared this to actual data at MVCO\nc:\rps\vs\mvco used ADCIRC ver. 44.19T for the Vdatum GOM simulation. I'm trying to use ADCIRC 46.32. Here's what I had to do to get the new version going:\n\nThe spatially varying friction used to be in fort.21, but now it's in fort.13.\n\n-Rich
To access THREDDS-served data, go to the THREDDS server top level catalog '''', navigate to the dataset you want, and then click on the ''OpenDAP'' access link. You will see an OPeNDAP Dataset Access Form, which shows the variables in the dataset, along with their types and dimensions. You can peruse the variables in the dataset, and then cut and paste the OPeNDAP URL that is shown near the top of the OPeNDAP Dataset Access Form into Matlab.\n\nIf you have the OPeNDAP-enabled mexnc and the netcdf toolkit, you can use the OPeNDAP URL just as you would a local NetCDF file name. For example:\n{{{ \n >>nc=netcdf('');\n >>t=nc{'zeta'}(:,30,40);\n >>close(nc);\n}}} \n\nto load a time series of surface elevation values at grid cell j=30, i=40. \n \nYou can also use the Matlab RSLICE GUI tool, by typing\n\n{{{ \n >>rslice('url','');\n}}} \n\nIf the data is CF compliant, you can use the CF toolbox to do things like retrieve the lon, lat, time and z values automatically, regardless of whether the data is from ROMS, SWAN, ECOM, POM, etc. See the instructions at\n'''' to set up Matlab to use these tools.\n
You can browse archived NEXRAD Level II and III data using NOAA's Weather and Climate Toolkit.\n\nYou first use the tool to "order" data, which generates an NCDC order number. You can then view the data you ordered by entering that order number into the space provided under NCDC data. You can then animate to KMZ (but watch the memory -- about 80 frames is the max on my PC).
Run anaconda.bat and see what it spits back.\nIt says it put something in the path, but it didn't, so you have to add it at the top.\nI added this:\n{{{\nset path=%path%;C:\programs\anaconda\;c:\programs\anaconda\scripts\n}}}\n
To assist in creating a dataset entry for ERDDAP, Bob Simons wrote two little utilities: one to create a partially completed dataset XML that you then edit to fill in some missing entries, and a checker to make sure that ERDDAP is handling the new dataset correctly:\n{{{\nrsignell@igsagiegltrsix0 /cygdrive/c/programs/tomcat6/webapps/erddap/WEB-INF\n$ ./GenerateDatasetsXml.bat EDDGridFromDap\ndds/dodsC/gom_interop/bio/ww3/forecast/fine > test.xml\n}}}\n(This example, as the name suggests, is for creating an ERDDAP grid dataset from DAP. To see the other options, just type "./GenerateDatasetsXml.bat" without any arguments and you will get a listing of the different types, as well as prompts and step-by-step instructions.)\n\nThen edit "test.xml" and replace the "???" entries with actual values. You don't actually have to change all of them, but the important ones are:\n*associate the name of the longitude variable (e.g. "lon") with the name "longitude".\n*associate the name of the latitude variable (e.g. "lat") with the name "latitude".\n*associate the name of the time variable (e.g. "ocean_time") with the name "time".\n*change the default colorbar min and max for each variable to something other than "NaN".\n*pick a relatively simple ID (not a path name with slashes)\n\nWhen you think you are done, try doing \n{{{\n./DasDds.bat <dataset_ID>\n}}}\nand if you get an OPeNDAP DAS back, you should be good to go.
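For orientation, the edited entries end up looking roughly like the sketch below. The variable names (lon, ocean_time, zeta) and colorbar values here are placeholders for a hypothetical ROMS-style dataset, not copied from a real datasets.xml:

```xml
<!-- map the source coordinate names onto ERDDAP's standard names -->
<axisVariable>
    <sourceName>lon</sourceName>
    <destinationName>longitude</destinationName>
</axisVariable>
<axisVariable>
    <sourceName>ocean_time</sourceName>
    <destinationName>time</destinationName>
</axisVariable>
<dataVariable>
    <sourceName>zeta</sourceName>
    <addAttributes>
        <!-- replace the "NaN" defaults with a sensible range -->
        <att name="colorBarMinimum" type="double">-2.0</att>
        <att name="colorBarMaximum" type="double">2.0</att>
    </addAttributes>
</dataVariable>
```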
{{{\nsudo /usr/sbin/useradd barmstrong\nsudo passwd barmstrong\nsudo /usr/sbin/usermod -G tomcat barmstrong\nsudo mkdir /home/barmstrong/.ssh\nsudo chown barmstrong /home/barmstrong/.ssh\nsudo mkdir /blackburn/d2/barmstrong\nsudo chown barmstrong:tomcat /blackburn/d2/barmstrong\nsudo cp authorized_keys2 /home/barmstrong/.ssh\nsudo chown barmstrong:barmstrong /home/barmstrong/.ssh/authorized_keys2\n}}}
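A quick way to double-check that an account created this way actually picked up its groups (barmstrong is the account from the commands above; the snippet defaults to the current user so it runs anywhere):

```shell
#!/bin/bash
# id -nG prints all group names for a user -- after the usermod above,
# the list for barmstrong should include tomcat
user="${1:-$(id -un)}"
id -nG "$user"
```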
What was once at:\n\nricsigdtlx:/home/rsignell/p/adriatic/meteo\n\nis now at:\n\ncapecodder:/Volumes/archives/mudpile/adria/meteo\n\nRunning out of disk space on the desktop!
I was struggling a bit until I realized that usgs-dev, the user running the development tomcat, needed to be able to write to the directory with the grib files in order to create the grib index files (*.gbx). These directories were owned by user "usgs", and even though group write privileges were enabled, user "usgs-dev" couldn't write, because each user was not a member of the other's group. So I added each user to the other's group:\n{{{\nsudo usermod -g usgs usgs-dev\nsudo usermod -g usgs-dev usgs\n}}}\nand that fixed the problem. Do we really need both usgs and usgs-dev, though?\n
Short URL for this page is <>.\n\nHere's what I did to grab the NAVO NCOM regional relocatable model results from NCEP and make them available as aggregated data on our THREDDS Data Server \n\n\n1. Every day at 5:20pm local time, I run "do_get_ncom" on\n{{{\n$ crontab -l\n\nSHELL=/bin/bash\nPATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin\nMAILTO=rsignell\n20 17 * * * /usgs/data1/rsignell/models/navo/do_get_ncom\n}}}\n\n2. Here's what do_get_ncom looks like. I use the "date" command in Linux to get today's date, access just that file from the NCEP ftp server, then untar, unzip, and convert the NetCDF3 files to NetCDF4 using "ncks" to save space. I also move the previous day's files to a directory called "yesterday", just in case something goes wrong and I want to recover them. \n\n{{{\nday=`date +%Y%m%d`\ncd /usgs/data1/rsignell/models/navo\n\nncftpget${day}00.tar.gz\n\ntar xvfz ncom_relo_fukushima_1km_${day}00.tar.gz\nrm ncom_relo_fukushima_1km_${day}00.tar.gz\nrm ./yesterday/*.nc\nmv ./ncom_relo_fukushima/*.nc ./yesterday\nfor file in *.nc\ndo\n echo $file\n ncks -4 -L 1 -O $file ./ncom_relo_fukushima/$file\n rm $file\ndone\n}}}\n\n3. 
I aggregate the data with this THREDDS catalog:\n{{{\n<catalog xmlns:xsi=""\n xsi:schemaLocation=""\n xmlns=""\n xmlns:xlink="" name="OPeNDAP Data Server" version="1.0.3">\n <service name="allServices" serviceType="Compound" base="">\n <service name="ncdods" serviceType="OPENDAP" base="/thredds/dodsC/"/>\n <service name="ncss" serviceType="NetcdfSubset" base="/thredds/ncss/grid/"/>\n <service name="wms" serviceType="WMS" base="/thredds/wms/"/>\n <service name="iso" serviceType="ISO" base="/thredds/iso/"/>\n <service name="ncml" serviceType="NCML" base="/thredds/ncml/"/>\n <service name="uddc" serviceType="UDDC" base="/thredds/uddc/"/>\n </service>\n\n <dataset name="NCOM Relocatable Runs" ID="NCOM">\n <metadata inherited="true">\n <serviceName>allServices</serviceName>\n\n<creator>\n <name vocabulary="DIF">NAVO/NAVY/MIL/US</name>\n <contact url="" email=""/>\n </creator>\n <documentation xlink:href="" xlink:title="ReadMe.txt"/>\n\n <documentation type="Summary"> Preliminary NCOM Relocatable 1km forecast model for Fukushima Region. The data here was obtained from the site (, untarred, unzipped, converted to NetCDF4 to save space, and then aggregated to facilitate access by Rich Signell ( in the hopes that this will make the forecast data more accessible to scientists who are trying to adaptively sample the plume. 
</documentation>\n </metadata>\n\n <dataset name="Fukushima" ID="ncom_relo/fukushima" urlPath="ncom_relo/fukushima">\n\n <netcdf xmlns="">\n <remove type="attribute" name="field"/>\n <aggregation dimName="time" type="joinExisting" recheckEvery="1 hour">\n <scan location="/usgs/data1/rsignell/models/navo/ncom_relo_fukushima" regExp=".*ncom_relo_fuku.*\$" olderThan="5 min"/>\n </aggregation>\n <variable name="water_u">\n <attribute name="standard_name" value="eastward_sea_water_velocity"/>\n </variable>\n <variable name="water_v">\n <attribute name="standard_name" value="northward_sea_water_velocity"/>\n </variable>\n </netcdf>\n </dataset>\n</dataset>\n</catalog>\n}}}\n
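As an aside, the only date-sensitive part of the do_get_ncom script above is building the dated filename; a minimal sketch of just that step (the trailing 00 in the tarball name is presumably the 00Z cycle):

```shell
#!/bin/bash
# `date +%Y%m%d` gives today's date as e.g. 20110415;
# the NAVO tarball name appends "00" to it
day=$(date +%Y%m%d)
file="ncom_relo_fukushima_1km_${day}00.tar.gz"
echo "$file"
```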
config.options.chkHttpReadOnly = false;\n
{{{\n mencoder "mf://*.jpg" -of rawvideo -mpegopts format=mpeg1:tsaf:muxrate=2000 -o output.mpg -oac lavc -lavcopts acodec=mp2:abitrate=224 -ovc lavc\n -lavcopts vcodec=mpeg2video:vbitrate=1152:keyint=15:mbd=2:aspect=4/3\n}}}\n\nThis converts a bunch of jpg files to mpeg1 in a form that will play on windows.\n\nmencoder is a command line tool that can work in windows, and comes with the "mplayer" package at:\n\n
Chen's group is running a 30 year hindcast of the Gulf of Maine using the gom3 grid, and we want to compare to all available current meter data that has been collected. The run starts in 1978, and is available via OPeNDAP at:\n\n\nWe can use the NCTOOLBOX for Matlab \n\nto easily perform the model/data comparison. \n\nMake sure you have the latest version that has Alex Crosby's latest cool contributions. Either this version:\n\nor the latest distribution obtained via Mercurial (hg)\n{{{\nhg clone nctoolbox\n}}}\nThen get this zip file\n\nthat has three m-files to be run in this order:\n\nStep 1. Run "hindcast_gom3_uv.m". This searches for all the time series data from NMFS, WHOI and USGS in the model space and time domain, reads the observed data via OPeNDAP, and then finds the closest model grid point (z,lon,lat) and interpolates both model and data to an hourly time base. \nStep 2. Run "hindcast_stats.m". This does tidal analysis, low-passed filtering, and complex correlation analysis on the model and data time series.\nStep 3. Run "hindcast_plot.m". This plots up some of the statistics.\n\nYou should be able to modify these to do just about any kind of time series analysis that is desired.\n\n[img[image 1|]]\n[img[image 1|]]\n[img[image 1|]]\n[img[image 1|]]\n
We want to assess the quality of the RADARSAT winds, so we are comparing to winds measured over water at various locations. But how do we know if the wind data is okay? Some of these sensors, especially the AGIP wind sensors might not be maintained very well. So one way, although indirect and using model results, is to compare to a met model. Of course, the MET model might not work very well in some regions either, but just as a check, let's compare the COAMPS model results to the buoy and platform data. All these observations were reduced from recorded anemometer heights to 10 m height using the neutral stability assumption.\n\nFor the period of Jan 20 - Feb 20, 2003, here's how the hourly data compares, with \nSeries 1 = DATA\nSeries 2 = MODEL\n|!Sta |!mean1| !theta| !std1 |!mean2 |!theta |!std2 | !corr |!theta | !transfn |!theta|\n| Ada | 5.29| 264.2| 6.47| 4.33| 221.9| 6.12| 0.70| -39.8| 0.66| -39.8|\n| Amelia | 2.58| 184.8| 5.37| 3.65| 198.7| 6.52| 0.65| 5.9| 0.79| 5.9|\n| Annabella | 4.86| 246.8| 4.23| 4.55| 202.8| 6.89| 0.36| 89.9| 0.58| 89.9|\n| Barbara C | 3.24| 194.8| 5.86| 3.38| 210.8| 6.40| 0.73| 36.7| 0.80| 36.7|\n| Fratello | 3.87| 132.6| 5.13| 4.36| 133.9| 6.16| 0.66| 7.8| 0.79| 7.8|\n| Garibaldi A | 1.79| 169.1| 4.64| 3.04| 194.8| 6.33| 0.60| -0.9| 0.82| -0.9|\n| Giovanna | 3.42| 251.9| 6.34| 4.22| 168.9| 6.81| 0.54| -57.8| 0.58| -57.8|\n| Pennina | 3.06| 163.8| 5.99| 2.91| 165.1| 7.44| 0.64| 7.6| 0.79| 7.6|\n| Acqua Alta | 5.81| 232.7| 6.33| 5.29| 225.0| 6.21| 0.71| -5.4| 0.70| -5.4|\n| Piran | 6.46| 241.5| 6.51| 6.70| 251.1| 6.04| 0.76| 8.8| 0.70| 8.8|\n| Senigallia | 4.02| 166.9| 6.78| 2.93| 151.5| 5.72| 0.64| -5.6| 0.54| -5.6|\n\nIt looks like Annabella is the only one we can clearly throw out. It's interesting that all the model standard deviations except for Ada and Senigallia are larger than observed, suggesting that the modeled winds are too strong. 
It also seems like there might be compass problems at Ada and Barbara C, with the Ada data rotated 39 degrees to the right (clockwise) of the model, and the Barbara C data rotated 37 degrees to the left (counterclockwise) of the model.\n\n\n\n
{{{\nroot@gam:/usgs/data0/Carolinas# du -h --max-depth=1\n}}}\n\n
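A variant I find handy when hunting for the space hog (assumes GNU coreutils, where sort -h understands the human-readable suffixes du -h emits):

```shell
#!/bin/bash
# per-directory totals, smallest first, so the biggest ends up at the bottom
du -h --max-depth=1 | sort -h
```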
__SBE37-IM__\nDownloaded Seabird "Seasoft2" \n\nHad to "run as administrator"; it installed in c:\Program Files (x86)\Sea-Bird,\nwhich contains\nCnv37IMHex.exe\nThis program converts Hex to ASCII (Format=0 to Format=1)
{{{\n#!/bin/bash\n# strip the .bdf.Z suffix to get the base name, then convert each\n# compressed BDF font to a compressed PCF\nfor font in *bdf.Z ;\ndo\n base=`echo $font | sed -e 's/\.bdf\.Z//'`\n zcat $font | dxfc > $base.pcf\n compress $base.pcf\ndone\n}}}
Need to use the Acronis Recovery CD to boot, then select Acronis True Image (full version).\n\nTakes about 4 hours to back up 180 GB.\n
|!Volume |!Size|\n| |2.6TB |\n|capecodder:/Volumes/archives3/cccp/ |3.4TB |\n|capecodder:/Volumes/archives3/mudpile |0.6TB |\n|capecodder:/Volumes/archives |[nothing of interest] |\n|capecodder:/Volumes/archives2 |[nothing of interest] |\n| |1.9T |\n| |2.0T |\n\nWe generated a special ssh keypair for access to pikmin and executed this command in Cygwin:\n{{{\ntime rsync -a -e "ssh -i /home/etwomey/.ssh/jeffisone" /cygdrive/j\n}}}\nwhere /cygdrive/j is where the 3TB USB drive showed up on Erin's Windows machine.
{{{\n1/2 cup mayonnaise\n1 large egg, beaten\n1 tablespoon Dijon mustard\n1 tablespoon Worcestershire sauce\n1/2 teaspoon hot sauce\n1 pound jumbo lump crab meat, picked over\n20 saltine crackers, finely crushed\n1/4 cup canola oil\nLemon wedges, for serving\n}}}\nIn a small bowl, whisk the mayonnaise with the egg, mustard, Worcestershire sauce and hot sauce until smooth.\nIn a medium bowl, lightly toss the crabmeat with the cracker crumbs. Gently fold in the mayonnaise mixture. Cover and refrigerate for at least 1 hour.\nScoop the crab mixture into eight 1/3-cup mounds; lightly pack into 8 patties, about 1 1/2 inches thick. In a large skillet, heat the oil until shimmering. Add the crab cakes and cook over moderately high heat until deeply golden and heated through, about 3 minutes per side. Transfer the crab cakes to plates and serve with lemon wedges.\nMAKE AHEAD\nThe crab cakes can be prepared through Step 2 and refrigerated overnight.
The tinyUrl of this page is <>.\n\nThis is a short tutorial on how to use Matlab to easily extract a geographic range of bathymetry and topography data you want from web services, create nice color-shaded relief maps, and output to a highly interactive 3D visualization tool. \n\nFirst install NCTOOLBOX, the toolbox for Matlab that lets you access netcdf files, OPeNDAP datasets and other gridded data using the same syntax. \nFollow the download instructions at\nThen try this:\n{{{\nncRef =''; % Access an OpenDAP Data URL\n[data,geo]=nj_subsetGrid(ncRef,'topo',[-70.9 -70.1 41.15 41.65]); % Select a lon/lat subset of variable "topo"\nimagesc(geo.lon,,data); axis xy % Plot it up\n}}}\n\nThere are a lot of bathymetry datasets at\n\nYou want to pick a dataset, then choose the OpenDAP service, and then do "select all" and "copy" commands in the OPeNDAP Data URL window to copy the OpenDAP Data URL to the clipboard. Then create a string with this URL in Matlab (I start a string, then "paste" the URL, and close the string).\n\nNow let's try accessing some NOAA tsunami inundation DEM data for Nantucket. \n\nFirst let's get an overall view of what the grid looks like, so we'll subsample every 10 points to make it fast:\n{{{\nurl='';\nnc=ncgeodataset(url);\nz=nc{'topo'}(1:10:end,1:10:end);\ng=nc{'topo'}(1:10:end,1:10:end).grid;\nimagesc(g.lon,,z);axis xy\n}}}\nnow zoom into the region you really want, and then do:\n{{{\nax=axis;\nax=[-70.5356 -70.2343 41.2162 41.4639]; %for example\n}}}\nto store the map limits to the variable "ax". \n\nTo extract data from a specified geographic bounds with subsetting, you need to first create a "geovariable" and then use the "geosubset" method, which includes the ability to subset by geographic range as well as striding. 
\n{{{\nzvar=nc.geovariable('topo');\ns.lon=ax(1:2);\;\ns.h_stride=[2 2];\nstruc=zvar.geosubset(s); % returns data and grid\ng=struc.grid;\;\nimagesc(g.lon,,z);axis xy\n}}}\nThere is a very nice package for working with bathymetry in Matlab called Mirone: After installing Mirone, go to the Mirone directory in Matlab and type:\n{{{\ngrid2mirone(z,g);\n}}}\nYou can then do sun illumination, do interactive profiles, export to Google Earth (if coordinates are lon/lat), and export to Fledermaus files, which you can then view with the free Iview4D from\nAn example file exported from Mirone you can load in Iview4D is here:\n\n
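For folks without Matlab, here is a rough numpy sketch of what "geosubset" is doing under the hood — bounding-box index selection plus striding. The function name and the synthetic grid are mine, not part of nctoolbox:

```python
import numpy as np

def geosubset(lon, lat, z, bbox, stride=1):
    """Subset a gridded field to bbox = [lon_min, lon_max, lat_min, lat_max],
    with optional striding -- roughly what nctoolbox's geosubset does.
    lon, lat are 1-D axes; z is 2-D with shape (len(lat), len(lon))."""
    ii = np.where((lon >= bbox[0]) & (lon <= bbox[1]))[0][::stride]
    jj = np.where((lat >= bbox[2]) & (lat <= bbox[3]))[0][::stride]
    return lon[ii], lat[jj], z[np.ix_(jj, ii)]

# synthetic 0.1-degree "topo" grid, just to show the call pattern
lon = np.round(np.linspace(-71.0, -70.0, 11), 2)
lat = np.round(np.linspace(41.0, 42.0, 11), 2)
topo = np.add.outer(lat, lon)          # topo[j, i] = lat[j] + lon[i]
slon, slat, sz = geosubset(lon, lat, topo, [-70.9, -70.1, 41.15, 41.65], stride=2)
```

The stride of 2 here plays the role of h_stride=[2 2] in the Matlab version.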
Note: Original recipe published in NY Times: May 25, 2010, adapted from a recipe from Zingerman's Bakehouse, Ann Arbor, MI, with further slight modifications by me.\n\nTime: About 2 hours \n*1 tablespoon packed brown sugar (or barley malt syrup)\n*2 tablespoons softened unsalted butter (or lard)\n*2 tablespoons instant yeast \n*5 1/2 - 6 cups bread flour \n*2 cups water\n*1 tablespoon kosher salt \n*2 cups lye solution for dipping (1 Tbsp NaOH pellets + 2 cups water) \n\nCoarse sea salt or pretzel salt, for sprinkling (do not substitute kosher salt). \n1. In a mixing bowl, stir together sugar, salt, butter, 2 cups 110 deg warm water and 1 cup of flour. Heat mixture if necessary to 100 degrees in microwave. Mix yeast into mixture and let stand for 5 min. Add 3 more cups of flour and stir just until mixture comes together in a shaggy mass, adding as much flour as necessary. \n2. Turn out onto counter (or attach dough hook to mixer) and knead for 8 to 10 minutes, until smooth and supple, adding more of the remaining flour as necessary. Flatten into a fat disk and cut like a pizza into 12 pie shaped wedges. Let rest 5 minutes. \n3. Roll out each piece into a rope about 22 inches long. (For traditional shape, the ends should be thin and the center fat.) An easy way to do this is to separate the top and bottom of the wedge with your fingers, starting from the pointy end, sort of "unfolding" the wedge so that it has two pointy ends and is fat in the middle. Then roll it out! Then lift both ends, twist them around each other once, then bring ends back and press them on either side of fat “belly,” at about 4 o’clock and 8 o’clock. Transfer shaped pretzels to a baking sheet lined with parchment paper.\n4. Let rise at room temperature for 20 minutes, then put in freezer for 20 min to make them easier to handle when you dip them in lye. Instead of freezing, you can also refrigerate for at least one hour and up to overnight. \n5. Heat oven to 425 degrees. \n6. 
In a deep bowl, wearing rubber or latex gloves, make a 3% lye solution by pouring 1 Tbsp (1/2 oz = 15 g) NaOH pellets into 2 cups (1/2 liter = 500 g) water (pour lye carefully into water to avoid splashing). Dip each pretzel in solution, turning it over for 10 to 15 seconds, and place back on baking sheet. \n7. Cut an arc on the fat part of the pretzel about 1/4 inch deep with a razor blade.\n8. Sprinkle pretzels with salt. Bake on a greased baking sheet about 15 minutes or until deep brown. Remove to a rack and serve warm with butter.\nYield: 12 pretzels. \n
The Best-Ever Lentil Salad\n2 ¼ cups (1 lb.) Du Puy lentils, rinsed and drained\n1 medium red onion, diced\n1 cup dried currants (you can also use raisins or other dried fruit)\n1/3 cup capers\nVinaigrette:\n1/3 cup cold-pressed, extra-virgin olive oil\n¼ cup apple cider vinegar\n1 Tbsp maple syrup\n1 Tbsp Dijon mustard\n2 tsp sea salt\n2 tsp freshly ground pepper\n\n3 ½ tsp spices:\n1 tsp ground cumin\n½ tsp ground turmeric\n½ tsp ground coriander\n½ tsp ground cardamom\n¼ tsp cayenne pepper\n¼ tsp ground cloves\n¼ tsp freshly grated nutmeg\n¼ tsp ground cinnamon \n\nOptional add-ins:\nArugula\nWalnuts\nFresh goat cheese\nFresh herbs, such as flat-leaf parsley, cilantro, or basil\nSprouts\nCrispy seasonal veggies\nDirections:\n1. In a pot, bring lentils to a boil, reduce to a simmer and cook until al dente, 15 to 20 minutes. Remove from heat, drain, and run under cold water. Once cooled slightly, place lentils in a large serving bowl.\n2. While the lentils are simmering, make the dressing: Place all ingredients in a jar with a tight-fitting lid and shake vigorously to combine.\n3. Toss lentils with dressing. Add onion, currants, and capers. Add optional ingredients, such as herbs, greens, and cheese, just before serving.\nThis salad can hang out in the fridge for a couple days.\n
* 2 T olive oil\n* 2 cups chopped onion\n* 3 T finely chopped cilantro stems\n* 4 cloves minced garlic\n* 1 T minced chipotle pepper in adobo\n* 1 T ground cumin\n* 1 lb dried black beans\n* 3 cups water\n* 1 15 oz can tomatoes (whole, diced, crushed, doesn't matter)\n* Juice from 1 lime\n* chicken bouillon cubes (I use chicken "Better than Bouillon")\n* greek yogurt\n\nSoak beans in water for 7-8 hours (e.g. overnight or during the workday)\nHeat oil in pressure cooker over medium heat, add onion and cumin and cook till softened, 3-4 mins. Turn to medium low, add garlic, cook for 2 mins. Drain the soaked beans and add to cooker with 3 cups water. When up to pressure, time 9 mins. Turn off and run under water to cool. Open and add drained tomatoes. Add water if necessary. Add chicken bouillon until salty enough. Add lime juice and serve with a big dollop of greek yogurt, cilantro leaves, and sriracha if desired.\n
Please join our model interoperability group. Note that although you need a Google Account to access Google Groups, you can use your work e-mail for your Google Account. Unless you use gmail all the time, I recommend that you make a new Google Account associated with your work e-mail. If you already have a Google Account, you just have to "sign out" of your personal Google Account first. This way you will get e-mail from Google Groups delivered to your work e-mail, and you will be able to post or reply to messages from the group from your work e-mail.
According to Briggs' thesis, the median grain size of the 41 ebb-dominated samples (north side of Middle Ground) is 710 microns, while that of the 2 symmetric flood-dominated samples is 340 microns.\n\nCould try a ROMS run with 710 and 340 micron sand, perhaps 1 m of each.
Pound, Brine and Grill Fast:\nPound: discard tender, pound to 1/2 inch thick\nBrining: To brine the chicken, dissolve 1-1/2 tablespoons of un-iodized table salt (or 1/4 cup of kosher salt) with 1/4 cup of sugar in 8 cups of cold water. This will make enough brine for 4 chicken breasts. If you are making more or less, adjust the amount of brine accordingly. The sugar in the brine will caramelize on the surface of the chicken as it cooks, giving it a nice, grilled coloring. To help dissolve the sugar and salt, simply add them to 1 cup of boiling water, stir until dissolved and add mixture to the remaining water. Make sure the brine has cooled before adding the chicken. You can brine in a shallow, covered baking dish or a large zip lock bag. Make sure to brine for at least 30 minutes. It is important that you give the brine enough time to work, but don't overdo it. \nGrill: Directly from Brine to Grill -> grill on hot, 2 minutes a side.
After searching around for the download link on, I realized that you have to send an e-mail to get the source code. I sent mine to "Fulcher, Crystal W" <> and she sent me a file adc51_12.tar. Then, following the ADCIRC developers guide at, I did:\n\n{{{\ncd /peach/data1/rsignell/Adcirc\ntar xvf adc51_12.tar\ncd v51.12\nfind . -name \*.bz2 -exec bunzip2 \{\} \;\ncd work\n\n# add -lnetcdff (Fortran lib) before -lnetcdf in the makefile\nsed 's/-lnetcdf/-lnetcdff -lnetcdf/' makefile > foo\nmv foo makefile\n\nmake adcirc compiler=pgi NETCDF=enable NETCDFHOME=/share/apps/netcdf NETCDF4=enable NETCDF4_COMPRESSION=enable\nmake padcirc compiler=pgi NETCDF=enable NETCDFHOME=/share/apps/netcdf NETCDF4=enable NETCDF4_COMPRESSION=enable\nmake adcprep compiler=pgi NETCDF=enable NETCDFHOME=/share/apps/netcdf NETCDF4=enable NETCDF4_COMPRESSION=enable\n\n}}}\n\nI built only adcirc, padcirc and adcprep, as I didn't need SWAN support.\n\nRunning ADCIRC:\n{{{\nadcprep [select option 1]\nadcprep [select option 2]\n}}}\nStep 2 initially bombed because NWS=1 was set in fort.15, which in older versions signified reading the fort.13 file for spatially varying parameters. In the v51.12 version, if NWS=1 you need to specify on the next line a parameter name for the data in the fort.13 file (e.g. Manning's n, Chezy, etc.). Since the fort.13 file was not provided, I changed to NWS=0 and then adcprep ran.
In the \n{{{\n./esmf/build_config\n}}}\n directory, I made a new sub-directory\n{{{\n./esmf/build_config/CYGWIN_NT-5.1.gfortran.default\n}}}\nand copied into it all the files from \nLinux.g95.default\n\nI then made these modifications to CYGWIN_NT-5.1.gfortran.default/\n{{{\nESMF_F90DEFAULT = gfortran (instead of g95)\nESMF_F90COMPILEOPTS += (removed -fno-second-underscore)\nESMF_F90LINKOPTS += (removed -fno-second-underscore)\n}}}\n\nI then set:\n{{{\n$ env | grep ESMF\nESMF_DIR=/cygdrive/c/rps/models/esmf\nESMF_INSTALL_PREFIX=/usr/local/esmf\nESMF_COMM=mpiuni\nESMF_ABI=32\nESMF_COMPILER=gfortran\n}}}\n\nTyping "make" produced
When I tried building GDAL using ./configure on with NetCDF4, I ran into problems at the linking stage because the HDF5 libraries weren't being linked. \n\nThe solution I found was to first do:\n{{{\nrsignell@gam:~$ nc-config --all\n\nThis netCDF 4.1.1 has been built with the following features:\n\n --cc -> gcc\n --cflags -> -I/usr/local/netcdf-4.1.1/include\n --libs -> -L/usr/local/netcdf-4.1.1/lib -lnetcdf -lhdf5_hl -lhdf5 -lz -lm -lcurl\n}}}\nwhich showed me what libraries need to be linked when building applications using netCDF4. So I then did this:\n{{{\n export LIBS='-lnetcdf -lhdf5_hl -lhdf5 -lz -lm -lcurl'\n ./configure --prefix=$HOME --with-netcdf=/usr/local/netcdf-4.1.1\nmake install\n}}}\nand it worked.\n
The problem: building mexnc from source with netcdf and opendap libraries seems to work fine with R14 on RHEL4, but then the applications fail.\n\nThe cause:\nIt turns out that the default compiler on RHEL4 is gcc version 3.4.4 (gcc -v), but Matlab R14 only supports 3.2.3, so the shared libraries are incorrect.\n\nThe solution: install the gcc 3.2.3 compatibility libraries from the Redhat network if they are not already installed.\n\nThen do:\n{{{\ncd $MATLAB/sys/os/glnx86\ncp -p\nrm -f\nln -s /usr/lib/ .\n}}}\n\n(1) For non-DAP Mexnc:\nModify the "" file so that in the branch for your architecture (in my case glnx86)\n\nCC="gcc32"\n\nMake sure that the MATLAB environment variable is set correctly for the version you are building, and then just type "make". \n\n(2) For DAP-enabled Mexnc:\n\nmodify so that\nCC="gcc32"\nthen type "make -f makefile_dap"\n\nTESTING:\nAfter these tiny changes, "mexnc/tests/test_mexnc.m" worked with *no errors* for both the regular netcdf version of mexnc (with large file support) and the opendap version (without large file support).\nImportant note: when testing using "test_mexnc.m" in the "mexnc/tests" directory, make sure there are no existing .nc files in the "tests" directory before you start the test. Also, when testing the opendap version, if you get "too many connects", try restarting Matlab -- I think some of the tests forget to close the file after opening.\n
Wow, here is the new easy way to build NCVIEW on cygwin (thanks to Ward Fisher at Unidata!). It's quite a few optional cygwin packages to locate and install using setup.exe, but it beats building the packages from source!\n\n1. Install NetCDF, HDF5, Curl, libXaw, libICE, udunits, libexpat and libpng by using the Cygwin setup.exe, searching for "netcdf", "hdf5", "curl", "Xaw", "libICE", "udunits", "libexpat" and "libpng" and installing these packages:\n{{{\nNetCDF libnetcdf-devel, libnetcdf7, netcdf\nHDF5 1.8.9-1: hdf5, libhdf5-devel, libhdf5_7 \ncurl 7.27.0-1: libcurl4\nlibXaw 1.0.11-1: libXaw-devel, libXaw7\nlibICE 1.0.8-1: libICE-devel, libICE6\nlibpng 1.5.12-1: libpng-devel, libpng15, libpng\nudunits 2.1.24-1: libudunits-devel, libudunits0, udunits\nlibexpat 2.1.0-1: libexpat1, libexpat1-devel\n}}}\n\n2. Build NCVIEW\n{{{\nwget\ntar xvfz ncview-2.1.1.tar.gz\ncd ncview-2.1.1\n./configure --prefix=/home/rsignell\nmake install\n}}}\n\nSuccess: /home/rsignell/bin/ncview works!\n\n\n\nCompare this to the old way:\nBefore running configure:\n{{{\nexport FC=gfortran\nexport CPPFLAGS='-DNDEBUG -Df2cFortran'\nexport CFLAGS='-O -fno-builtin'\n./configure --prefix=/usr/local\n}}}\nI edited the Makefile and replaced the NETCDFLIB and NETCDFLIBDIR lines with:\n{{{\nNETCDFLIB = -lnetcdf -lhdf5_hl -lhdf5 -lz -lm -lsz\nNETCDFLIBDIR = -L/usr/local/netcdf/lib -L/usr/local/hdf5-1.8.1/lib -L/usr/local/szip-2.1/lib\n}}}\nThen typed "make" and everything worked. I was able to view both "classic" and "NetCDF-4" files!\n\nNOTE: I don't think the FC=gfortran did anything, and I don't even know whether the CPPFLAGS or CFLAGS options were necessary, but that's just what I had set previously to build some other stuff, and the build worked, so I'm reporting them here.
On RICSIGDTLX\n{{{\nexport FC=ifort\nexport CPPFLAGS="-fPIC -DpgiFortran"\nexport FFLAGS="-i-static"\n./configure --prefix=/usr/local/netcdf/ifort\nmake \nmake check\nmake install\n}}}
Note: the first time I untarred the {{{HDF5189-win32-cygwin.tar.gz}}} file, I untarred it in my home directory without noticing that it didn't make a directory for itself. So I ended up with a bunch of stuff in my bin, lib, etc. that I didn't want there. So I used this slick command to remove all the files that got untarred:\n{{{\ntar tfz HDF5189-win32-cygwin.tar.gz | sort -r | xargs rm -f\n}}}\nBut here's how not to screw up:\n{{{\ncd $HOME\nwget\nmkdir hdf5\ntar xvfz HDF5189-win32-cygwin.tar.gz -C $HOME/hdf5\n}}}\n\n\n
I pretty much just followed this "porting guide":\n\n\n{{{\n$ export FC=gfortran\n$ export F90=gfortran\n$ gfortran --version\nGNU Fortran (GCC)
!!Why rebuild the mex files?\n\nThere are two reasons to build Seagrid mex files -- either you need bigger arrays or you have a Machine/Matlab version for which the Seagrid mex files have not been built.\n\n!!Changing the array sizes\n\nThe maximum size of the grid that MEXSEPELI can handle is set in Because SEAGRID effectively doubles the grid for computational purposes, if you need a final grid that is 400x400, you need to set NX and NY in to something greater than 800.\n\nAlso the maximum size of the boundary that MEXRECT can handle is set in mexrect.c. You need to increase the size of the Z array in the main routine, and also the size of R and T in the RECT subroutine.\n\n!!Building the mex files\n\nThere are two Fortran mex files and a C mex file\n{{{\nmexrect.F\nmexsepeli.F\nmexinside.c\n}}}\nthat need to be built.\n\nCheck which Fortran compilers work with your platform and version of Matlab at\n
Building udunits 1.12.4 on cygwin\n{{{\nexport FC=gfortran\nexport CPPFLAGS='-DNDEBUG -Df2cFortran'\nexport CFLAGS='-O -fno-builtin'\n./configure --prefix=/usr/local\nmake install\n}}}
Building udunits 1.12.4 on RICSIGDTLX\n{{{\nexport CC=gcc\nexport FC=ifort\nexport CPPFLAGS='-DNDEBUG -DpgiFortran'\nexport CFLAGS='-O -fno-builtin'\n./configure --prefix=/usr/local\nsudo make install\n}}}
!Building NetCDF\nFirst I built zlib and udunits2 following the instructions on the NetCDF4 quick instructions (building with HDF5 section) at\nJeff had already built HDF5, which is in /share/apps/hdf5 on\n\n{{{\n cd /pikmin/raid1/home/rsignell/nco/netcdf4\n wget\n tar xvfz netcdf-4-daily.tar.gz\n cd netcdf-4.1-beta2-snapshot2009092500/ \n export FC=pgf90\n export CPPFLAGS=-DpgiFortran\n ./configure --prefix=/pikmin/raid1/home/rsignell --with-hdf5=/share/apps/hdf5 --enable-netcdf-4 --enable-dap --with-udunits --with-libcf --with-zlib=/pikmin/raid1/home/rsignell\n make check install\n}}}\nAll passed!\n\n!Building NCO\nGot the latest NCO from CVS, and first tried configure, but for some reason it tried to build without netcdf-dap, even though netCDF was successfully built with DAP, so I instead resorted to the Makefile approach (which I think I've always ended up using in the past). With some help from Dave Robertson, who used the same approach, here is what worked:\n{{{\ncd /pikmin/raid1/home/rsignell/nco\nrsignell@marlin:/pikmin/raid1/home/rsignell$ cvs -z3 co -kk nco\ncd ./nco/bld\nmake NETCDF4=Y OPENDAP=Y DAP_NETCDF=Y NETCDF_ROOT=/pikmin/raid1/home/rsignell HDF5_ROOT=/share/apps/hdf5 UDUNITS2=Y UDUNITS_LIB=/pikmin/raid1/home/rsignell/lib UDUNITS_INC=/pikmin/raid1/home/rsignell/include\n}}}\n\nThe new NCO binaries are sitting in:\n{{{\nmarlin:/pikmin/raid1/home/rsignell/nco/nco/bin\n}}}\nThis builds without errors, but for some reason, even though I specified the UDUNITS path above, I need to set\n{{{\nLD_LIBRARY_PATH=/pikmin/raid1/home/rsignell/lib\n}}}\nto get the NCO tools to work.\n\nI think everything got built with "-g -O2". I'm not sure what this does, so will ask Jeff Dusenberry if this will make things run slower than they should. \n\nAnyway, the result was that we now have a version of ncks that can do \n{{{\nncks -4 -L 1\n}}}\non a single line. 
I tested it on a vineyard sound run (/pikmin/raid1/home/rsignell/vs/vs011/ => and it deflated a 1.1GB file down to about 0.4 GB. This is a grid with very little masked region, so I would expect regions with much masking to deflate even more.\n
{{{\nexport LDFLAGS="-L/home/tomcat/lib"\nexport CPPFLAGS="-I/home/tomcat/include"\n./configure --enable-dap --enable-shared --prefix=/home/tomcat --enable-netcdf-4\n}}}
Started with RHEL4 system (kernel 2.6.9-67.ELsmp) with gcc 3.4.6 and all the standard -devel packages installed.\n\nI set these flags: \n{{{\nexport CPPFLAGS='-DNDEBUG -DpgiFortran -Drestrict='\nexport CC=gcc\nexport FC=ifort\nexport CFLAGS='-O -fno-builtin'\n}}}\n\nand then built:\n{{{\n/usr/local/udunits-1.12.4\n/usr/local/libdap-3.7.3\n/usr/local/libnc-dap-3.7.0\n/usr/local/antlr-2.7.7\n/usr/local/nco-3.9.2\n}}}\n\nIn each directory, I built with this sequence: \n{{{\n./configure\nmake\nsudo make install\n}}}\nexcept that I used\n{{{\n./configure --prefix=/usr/local/nco \n}}}\nfor nco since I didn't want it to overwrite my old nco binaries in /usr/local/bin\n\nNotes: I first built with the most recent libdap (3.7.10), but although libnc-dap-3.7.0 was fine with this, nco-3.9.2 was NOT, and failed the opendap test during configure. So I then cleaned all the *dap* stuff out of /usr/local/lib and removed the /usr/local/include/libdap and /usr/local/include/libnc-dap directories. Also, I used the "-Drestrict=" parameter in CPPFLAGS, since NCO said that this is needed if the compiler is not C99 compliant. I didn't know if gcc 3.4.6 was C99 compliant or not, so I defined "-Drestrict=". Also, the "-DpgiFortran" is necessary for the Intel Fortran compiler (ifort), and also for the pgi fortran compiler, while "-Df2cFortran" is set for the G77 compiler. This should be automatically detected by the configure scripts, but I just wanted to make sure.\n
To get ncSOS working, you need to get your time series in NetCDF using CF-1.6 conventions.\n\nWe tried for a long time to convert our 4D (ntimes,1,1,1) time series data using NcML, but in the end we had to convert the actual netCDF files.\nThe ipython notebook 'cmg_ts2cf_no_stationdim' on shows a working example. \n\nI learned a few things:\n* you need the global attribute "naming_authority", because that gets used in ncSOS.\n* ncSOS doesn't seem to work with (ntimes, nstations) files, even if nstations=1. When I removed the station dimension, ncSOS worked. \n\nThis retrieves data:\n
Source: Nick Napoli\nScott Gallagher needs to associate bottom photographs with sediment type and sidescan for species estimates.
Click Windows "Start" button\n Type "Accounts" in the Start search box, and then click User Accounts under Programs.\n\n In the User Accounts dialog box, click Change my environment variables under Tasks.\n Make the changes that you want to the user environment variables for your user account, and then click OK.
Get the PID for tomcat, then look at /proc/PID/fd\n{{{\n[rsignell@blackburn ~]$ ps -ef | grep tomcat2\nusgs
Delft3D OpenSource\n\n\nSVN:\nsvn co delft3d_mom\nrsignell\ntooker1831
1 quart chocolate Haagen Dazs\n1/2 cup heavy cream\n4 oz semi-sweet chocolate\n1 habanero\n\nPut cream in pyrex measuring cup, cut 4 vertical slits in habanero and immerse in cream. Squeeze habanero against side of cup to get cream inside. Microwave until the cream begins to boil, 1-1.5 minutes. Squeeze the habanero again. You should see some oil floating on top of the cream. Taste a tiny bit of the cream -- it should taste very spicy. If not, let the pepper rest some more, and microwave a bit longer. Mix in 4 oz chopped chocolate until melted and uniform consistency. Cool in freezer until cool. Microwave ice cream for 30 seconds. Mix cream/chocolate/habanero mixture into ice cream and refreeze.
*AJAX (web pages don't need to reload on every action -- can have one element of the page change)\n*Canvas/SVG (images rendered directly in browser) (SVG similar to 2D VRML)\n*jQuery/Prototype (JavaScript stopped sucking in browser)\n*REST - (URLs can be data accessors!) API in distributed manner (evolution of Ajax idea)\n*Google/Microsoft/Apple/Mozilla all have very fast javascript compilers (way faster than Python for everything besides number crunching)\n\n*jQuery (used almost everywhere)\n*Backbone (decouple data from the DOM) (DOM = Document Object Model (what drives the "HTML"))\n*D3 (data transformation, primarily for vis)\n\n\none page web server using tornado\n\n
I bought a 4GB USB stick with a LiveUSBPendrivePersistent Ubuntu Linux from Dragon Technology Ltd:\n\nI installed Intel Fortran Compiler 9.1, MPI, ncview and the latest version of ROMS.\n\nI then tried to clone this drive onto a faster 4GB Corsair Flash Voyager GT. This didn't go so well, because it turned out that the Corsair was a *tiny* bit smaller -- by 16 MB. The result was that the dd command \n{{{\ndd if=/dev/sdd of=/dev/sde bs=10M\n}}}\ntried to make partitions on the Target Stick that were the same as on the Source Stick:\n\nSource Stick (ByteStor)\n{{{\n[root@ricsigdtlx rsignell]# fdisk -l /dev/sdd\n\nDisk /dev/sdd: 4126 MB,
/***\n| Name|CloseOnCancelPlugin|\n| Description|Closes the tiddler if you click new tiddler then cancel. Default behaviour is to leave it open|\n| Version|3.0 ($Rev: 1845 $)|\n| Date|$Date: 2007-03-16 15:19:22 +1000 (Fri, 16 Mar 2007) $|\n| Source||\n| Author|Simon Baird <>|\n| License||\n***/\n//{{{\nmerge(config.commands.cancelTiddler,{\n\n handler_orig_closeUnsaved: config.commands.cancelTiddler.handler,\n\n handler: function(event,src,title) {\n this.handler_orig_closeUnsaved(event,src,title);\n if (!store.tiddlerExists(title) && !store.isShadowTiddler(title))\n story.closeTiddler(title,true);\n return false;\n }\n\n});\n\n//}}}\n\n
Notes from reading the Dec 9, 2009 report "Interim Framework for Effective Coastal and Marine Spatial Planning".\n\nMarine Spatial Planning is one of 9 priority objectives of the Interagency Ocean Policy Task Force (under the White House Council on Environmental Quality). The nine are:\n* Ecosystem-based management\n* Coastal and Marine Spatial Planning\n* Inform Decisions and Improve Understanding\n* Coordinate and Support\n* Climate Change and Ocean Acidification\n* Regional Ecosystem Protection and Restoration\n* Water Quality and Sustainable Practices on Land (that affect health of oceans)\n* Arctic\n* Observations and Infrastructure\n\nFor CMSP, the goal is to come up with a system that optimizes decisions across all sectors: economic, environmental, cultural and security.\n\nAfter reading, I'm still not quite sure how the authority for CMSP planning works. \n\nCMS Plans would be adaptive and flexible, open, based on best scientific information, would be evaluated periodically, and would adopt Principle 15 of Rio: "Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation".\n\nGeographic scope: high water line to edge of Continental Shelf or EEZ (whichever is further).\n\nConsistent planning scale for CMSP should be the established Large Marine Ecosystem (LME) scales. These are described at . There are 5 in Alaska, plus the California Current, Gulf of Mexico, Caribbean Sea, Southwest US, Northeast US, and Hawaii.\n
edit ~/.hgrc to look like this:\n{{{\n# This is a Mercurial configuration file.\n[ui]\nusername = Rich Signell <>\npassword = e8d3w4q8 <=replace with real google code password from google code account settings (note: this is *not* your google account password)\n}}}\nthen do \n{{{\nhg commit -m 'made changes'\nhg push\n}}}\n\n
From MWRA reports (for example, the 2010 Outfall Monitoring Report): the total nitrogen loading from the outfall (which is secondary with a little primary mixed in during storms) is about 12,000 Metric Tons/Year. The average discharge rate is 350 MGD * (0.043) = 15 m3/s.\n This means an average total nitrogen concentration of 12000/15/(3600*24*365) = 25 ppm. This goes into a water column that is about 30 m deep, where tidal currents are about 10 cm/s and the tidally-averaged currents are about 10 cm/s (10 km/day).\n\nFor Falmouth, from the West Falmouth Mass Estuaries Project report, the waste water treatment facility (WWTF) discharges about 0.46 MGD (319 gallons/minute), which equals 0.46 MGD * (0.043 cms/mgd) = 0.02 m3/s. The concentration of total nitrogen from the town waste water site was about 30 ppm in 2004, but is about 3 ppm now. The estimated annual load of total nitrogen from the existing treatment plant, if discharged to the ocean, would therefore be about 0.02 m3/s * 3e-6 * 3600*24*365 = 1.8 Metric Tons/Year (1 Metric Ton = 1000 kg = 1 m3 water)\n\nCurrently only 3% of Falmouth is sewered, but if we assume that all of Falmouth were sewered and multiply this number by 30, we get 30*1.8 = 54 Metric Tons/year. That would mean that the Boston Outfall puts (12,000/54 = 222) more than 200 times as much nitrogen into Mass Bay as the Falmouth Outfall would!\n\nThe population of Falmouth is 30,000; the population sewered by MWRA is about 2 million (and 5,500 businesses), so only ~60x more people. So why 200 times as much nitrogen? Partly because Falmouth has better treatment. Plus some error?
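Here's the same back-of-envelope arithmetic in Python, with the unit conversions spelled out (the input numbers are from the text above; variable names and rounding are mine, and 1 ppm is treated as roughly 1 g/m3):

```python
SECONDS_PER_YEAR = 3600 * 24 * 365
MGD_TO_CMS = 0.043                    # million gallons/day -> m3/s (approximate)

# MWRA outfall: 12,000 metric tons N/yr diluted into ~350 MGD of effluent
mwra_q = 350 * MGD_TO_CMS             # ~15 m3/s
mwra_conc = 12000e6 / (mwra_q * SECONDS_PER_YEAR)   # g/yr over m3/yr -> g/m3 ~ ppm

# Falmouth WWTF: 0.46 MGD at ~3 ppm (~3 g/m3) total nitrogen
fal_q = 0.46 * MGD_TO_CMS             # ~0.02 m3/s
fal_load = fal_q * 3 * SECONDS_PER_YEAR / 1e6       # g/yr -> metric tons/yr
town_load = 30 * fal_load             # if all of Falmouth, not just 3%, were sewered
ratio = 12000 / town_load             # Boston outfall load vs. hypothetical Falmouth load
```

Without the intermediate rounding used in the text, this gives ~25 ppm, ~56 Metric Tons/year, and a ratio of ~214 -- same story as the 25/54/222 figures above.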
From the NCO manual:\nIt is possible to use a combination of these operations to compute the variance and standard deviation of a field stored in a single file or across multiple files. The procedure to compute the temporal standard deviation of the surface pressure at all points in a single file involves three steps.\n{{{\n ncwa -O -v prs_sfc -a time\n ncbo -O -v prs_sfc\n ncra -O -y rmssdn\n}}}\nFirst construct the temporal mean of prs_sfc in the file Next overwrite with the anomaly (deviation from the mean). Finally overwrite with the root-mean-square of itself. Note the use of ‘-y rmssdn’ (rather than ‘-y rms’) in the final step. This ensures the standard deviation is correctly normalized by one fewer than the number of time samples. The procedure to compute the variance is identical except for the use of ‘-y var’ instead of ‘-y rmssdn’ in the final step. \n\nHere's what I actually did:\n{{{\nssh\ncd /http/www/tomcat/apache-tomcat-7.0.22/data/gom3_hindcast\n}}}\nStep 1. Compute the mean for each month. I ran the script "do_mean":\n{{{\nmore do_mean\n\n#!/bin/bash\n# create the monthly means from the gom3 data\nfor file in /http/www/CODFISH/Data/FVCOM/NECOFS/Archive/gom3_*.nc\ndo\n ext=`echo $file | sed -e 's/.*\///'`\n outf=`echo $ext | sed -e 's/gom3/gom3_mean/'`\n if [ -f $outf ]\n then\n echo $outf exists\n else\n echo processing $outf\n ncra -O $file $outf\n fi\ndone\n}}}\nThis took 4 days to complete! \n\nStep 2. Compute the standard deviation. In Step 1 I computed the means using "ncra" instead of "ncwa" to be left with a time dimension of 1 that could be easily time aggregated. Unfortunately, for calculating the standard deviation we can't have the time dimension in there because "ncbo" will get upset about differencing files with two different time dimensions. Luckily, it's easy to remove this singleton dimension using "ncwa" as shown below, creating a file "". The "" file, without time dimension, is then used to compute the anomaly. 
\n{{{\nmore do_std\n#!/bin/bash\n# create the monthly standard deviation from the gom3 data\nfor file in /http/www/CODFISH/Data/FVCOM/NECOFS/Archive/gom3_*.nc\ndo\n ext=`echo $file | sed -e 's/.*\///'`\n meanf=`echo $ext | sed -e 's/gom3/gom3_mean/'`\n outf=`echo $ext | sed -e 's/gom3/gom3_std/'`\n if [ -f $outf ]\n then\n echo $outf exists\n else\n echo processing $outf\n# remove time dimension from $meanf so that ncbo will work\n ncwa -O -a time $meanf\n# compute anomaly (difference from mean)\n ncbo -O --op_typ=subtraction $file $outf\n# compute rmssdn from anomalies\n ncra -O -y rmssdn $outf $outf\n fi\ndone\n}}}\n
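The three NCO steps (time mean, anomaly, rmssdn) add up to the ordinary sample standard deviation over time. Here's a numpy sketch of the same arithmetic on a fake field (the array name and sizes are made up):

```python
import numpy as np

# Fake "prs_sfc" field with dimensions (time, y, x)
rng = np.random.default_rng(0)
prs_sfc = rng.normal(1000.0, 5.0, size=(24, 3, 4))

mean = prs_sfc.mean(axis=0)           # step 1: the ncra/ncwa time mean
anom = prs_sfc - mean                 # step 2: the ncbo anomaly
n = prs_sfc.shape[0]
rmssdn = np.sqrt((anom ** 2).sum(axis=0) / (n - 1))   # step 3: ncra -y rmssdn
```

The divide-by-(n-1) is exactly why the manual says to use '-y rmssdn' rather than '-y rms'.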
//{{{\nconfig.options.chkHttpReadOnly = false; // means web visitors can experiment with your site by clicking edit\nconfig.options.chkInsertTabs = true; // tab inserts a tab when editing a tiddler\nconfig.views.wikified.defaultText = ""; // don't need message when a tiddler doesn't exist\nconfig.views.editor.defaultText = ""; // don't need message when creating a new tiddler \n//}}}\n
You need to fork the main matplotlib repo, so you have your own copy\nassociated with your github account:\n\n\nOnce you've forked it, clone it and create a branch:\n\ngit clone my-forked-repo-url\ncd matplotlib\ngit checkout -b my_awesome_new_feature\n# ... hack hack hack ...\ngit commit -am "Useful commit message here"\ngit push origin my_awesome_new_feature\n\nOnce you've done that, make a pull request by following the\ninstructions here:\n\n\nOnce you've done that, congratulations!\n
The tinyURL of this page is <>\n\nArt deGaetano provided daily weather data (5km grid) stretching back to 1970 for the NE. The data is written in an ASCII file, one file per day, and looks like this:\n{{{\n$ head DEM_NE_5.0_2005_01_01.txt\n dem0 48.00 -84.00 275.8 255.0\n dem1 48.00 -83.96 275.9 255.1\n dem2 48.00 -83.92 275.6 255.2\n dem3 48.00 -83.88 275.9 255.4\n dem4 48.00 -83.83 275.9 255.0\n}}}\nThe first column can be ignored, and then we have lat, lon, tmax, tmin.\n\nI wanted to see what this looked like, so I loaded it into Matlab. First, on the Linux (or Cygwin under Windows) command line, I did\n{{{\n$ cut -c11- DEM_NE_5.0_2005_01_01.txt > foo.txt \n}}}\nthen in Matlab\n{{{\n>> load foo.txt\n>> lat=foo(:,1); lon=foo(:,2);\n\n>> min(lon(:))\nans = -84\n>> max(lon(:))\nans = -66.0400\n>> min(lat(:))\nans = 36.0400\n>> max(lat(:))\nans = 48\n>> ind=find(lon>-83.97&lon<=-83.95);\n>> mean(diff(lat(ind)))*60\nans = -2.5003\n>> ind=find(lat>36.05 & lat<=36.10);\n>> mean(diff(lon(ind)))*60\nans = 2.4995\n}}}\nSo I'm pretty sure the dx and dy are supposed to be 2.5 minutes, even though there weren't enough significant digits written in the text file to determine the exact spacing.\n\nArmed with the interval, max and min of lat and lon, we can use the GMT (Generic Mapping Tools) program "xyz2grd" to create the NetCDF grid files from the text files. In the directory where the text files are, I created and ran this bash script "do_dem2nc" that calls xyz2grd to convert each file. We have to do tmin and tmax separately because "xyz2grd" can only handle one variable, unfortunately. But we'll fix this later with NcML. 
So here is "do_dem2nc":\n\n{{{\n#!/bin/bash\n## DO_DEM2NC script converts DEM text files to NetCDF using GMT's "xyz2grd" routine\nfor file in *.txt\ndo \n base=`echo $file | sed -e 's/\.txt//'`\n echo "creating ${base}_tmax, ${base}_tmin"\n## keep column 4 (tmax)\n cut -f2,3,4 $file | xyz2grd -: -G${base}_tmax -I2.5m -R-84/-66.041667/36.041667/48\n## keep column 5 (tmin)\n cut -f2,3,5 $file | xyz2grd -: -G${base}_tmin -I2.5m -R-84/-66.041667/36.041667/48\ndone\n}}}\n\nSo we run it:\n{{{\n./do_dem2nc\n}}}\nwhich produces a bunch of NetCDF files.\n\nThis produces the netcdf files, but to give them time values (extracted from the file names) and make them CF compliant, we use this NcML:\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<netcdf xmlns="">\n <dimension name="lon" orgName="x"/>\n <dimension name="lat" orgName="y"/>\n <variable name="lon" orgName="x">\n <attribute name="long_name" value="longitude"/>\n <attribute name="units" value="degrees_east"/>\n </variable>\n <variable name="lat" orgName="y">\n <attribute name="long_name" value="latitude"/>\n <attribute name="units" value="degrees_north"/>\n </variable>\n <aggregation type="union">\n <netcdf xmlns="">\n <variable name="tmin" orgName="z">\n <attribute name="long_name" value="minimum temperature"/>\n <attribute name="units" value="kelvin"/>\n <attribute name="_FillValue" type="float" value="NaN"/>\n </variable>\n <aggregation dimName="time" type="joinNew">\n <variableAgg name="z"/>\n <scan location="c:/RPS/dem/" regExp=".*DEM.*tmin\$"\n dateFormatMark="DEM_NE_5.0_#yyyy_MM_dd"/>\n </aggregation>\n </netcdf>\n <netcdf xmlns="">\n <variable name="tmax" orgName="z">\n <attribute name="long_name" value="maximum temperature"/>\n <attribute name="units" value="kelvin"/>\n <attribute name="_FillValue" type="float" value="NaN"/>\n </variable>\n <aggregation dimName="time" type="joinNew">\n <variableAgg name="z"/>\n <scan location="c:/RPS/dem/" regExp=".*DEM.*tmax\$"\n dateFormatMark="DEM_NE_5.0_#yyyy_MM_dd"/>\n </aggregation>\n 
</netcdf>\n </aggregation>\n</netcdf>\n}}}\n\nSo to use this on another system, you just need to:\n1) Install GMT on Linux (or Cygwin on Windows)\n2) Run the do_dem2nc script\n3) Change the "location" in the NcML to point to your local directory.\n\nThen it can be added as a THREDDS dataset in a TDS catalog, just like the PRISM data.\n
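As a sanity check on the -R bounds used with xyz2grd above: at 2.5-minute spacing, the lon and lat ranges should span a whole number of intervals. A quick check in Python:

```python
# Sanity-check that the xyz2grd -R bounds span whole 2.5-minute cells.
dx = 2.5 / 60.0                       # 2.5 arc-minutes in degrees
lon_min, lon_max = -84.0, -66.041667  # from -R-84/-66.041667/36.041667/48
lat_min, lat_max = 36.041667, 48.0

nx = (lon_max - lon_min) / dx  # number of lon intervals
ny = (lat_max - lat_min) / dx  # number of lat intervals
print(round(nx, 4), round(ny, 4))  # both come out as integers
```

Both ratios land on integers (431 and 287 intervals), so the registration is consistent.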
Ellyn generated .ncml files for each region, and here's how I converted them to .xml catalogs.\n{{{\nssh\ncd /usr/local/usgs-dev/tomcat-thredds/content/thredds\nmv ts old_ots_xml\nmkdir new_ots_xml\nchmod 775 new_ots_xml\ncd new_ots_ncml\n./do_ncml2xml\ncd ..\nln -s new_ots_xml ts\n}}}\n\nmore do_ncml2xml\n{{{\nfor file in *.ncml\ndo\n base=`echo $file | sed -e 's/\.ncml//'`\n echo $base\n head -n 47 ../old_ots_xml/$base.xml > head.txt\n cat head.txt $base.ncml tail.txt > ../new_ots_xml/$base.xml\ndone\n}}}\n\nmore tail.txt\n{{{\n </dataset>\n</catalog>\n}}}
{{{\nc:\RPS\bathy\seth>gdalwarp vs_draft.asc -s_srs "+proj=utm +zone=19 +datum=WGS84"\n -t_srs "+proj=latlong +datum=NAD83" vs_geo.tif\n\nc:\RPS\bathy\seth>gdalwarp BB_ALL_DRAFT.asc -s_srs "+proj=utm +zone=19 +datum=WGS84"\n -t_srs "+proj=latlong +datum=NAD83" buzzbay_geo.tif\n}}}\nThen I fired up ArcGIS 10.1 and ran the "Raster to NetCDF" tool to convert the tif to netCDF (I could have used GDAL, but the GDAL netCDF output is not CF compliant, requires making NcML, and this was just 2 datasets).\n
I had some previous COAWST calculation stored as\ndata: hsum\ngrid: g\nHere's how I converted it to ArcGIS for Brad to consume:\n\n{{{\n%grid2regular.m\ncd c:\rps\cf\coawst\nload str_max_hours_0.2.mat\nhsum=hsum*100; % convert [0-1] fraction => [0-100] percent\nax=[ -72.5 -65 39.5 46.0];\n\n% determine model index ranges within lon/lat bounding box\n[jj,ii]=lonlat2ij(g.lon,,ax);\n\nlon=g.lon(jj,ii);\,ii);\nhsum=hsum(jj,ii);\n\n% regular grid at 0.02 degree spacing over the bounding box\ndx=0.02;\ndy=0.02;\nx=ax(1):dx:ax(2);\ny=ax(3):dy:ax(4);\n[xi,yi]=meshgrid(x,y);\nzi=griddata(lon(:),lat(:),hsum(:),xi,yi);\ngz.lon=x;\;\n\n%%\ncd c:/rps/m/mirone200\nmirone\ngrid2mirone(zi,gz);\n}}}\n\nIn Mirone, I did "Save Grid As => ESRI .hdr labeled" and then zipped up the 3 files that were generated.\n\nWhen Brad brought them into ArcGIS, the map was all black because of the 1e36 values where NaN should have been, but he saved as an Arc GRID, and then reloaded and it was okay.\n
{{{\nC:\RPS\vs\grids>gdal_translate -a_srs "+proj=utm +zone=19 +datum=NAD83" MiddleGround_110706_1m.asc -of GMT Mid_110706_1m.grd\nInput file size is 1781, 1515\n\nC:\RPS\vs\grids>gdal_translate -a_srs "+proj=utm +zone=19 +datum=NAD83" MiddleGround_1.0_rtk.asc -of GMT Mid_092706_1m.grd\nInput file size is 1781, 1515\n\nC:\RPS\vs\grids>gdal_translate -a_srs "+proj=utm +zone=19 +datum=NAD83" MiddleGround_1.0_rtk.asc Mid_092706_1m.tif\nInput file size is 1781, 1515\n0...10...20...30...40...50...60...70...80...90...100 - done.\n}}}\n
Step 1. Convert EGM96 Geoid to WGS84 Ellipsoid using interactive calculator:\n\n\nStep 2. Convert WGS84 Ellipsoid (G1150) to NAD83 (CORS96) using interactive calculator:\n\n\nStep 3. Convert NAD83 to NAVD88 using GEOID09 interactive calculator:\n\n\nTry low-lying area at Scituate:\nlat
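The three steps just chain vertical offsets: each calculator reports the height of one surface relative to another, and the corrections add (with a sign flip going from ellipsoidal back to orthometric). A sketch with placeholder numbers (every offset value below is hypothetical, not output from the calculators):

```python
# Chain vertical datum offsets: orthometric height (EGM96) -> NAVD88.
# All offset values are hypothetical placeholders; the real numbers come
# from the interactive calculators for the site of interest.
h_egm96 = 2.00          # height above the EGM96 geoid (m)
N_egm96 = -28.80        # EGM96 geoid height above the WGS84 ellipsoid (m)
dh_wgs84_nad83 = -1.00  # WGS84(G1150) -> NAD83(CORS96) ellipsoidal shift (m)
N_geoid09 = -27.50      # GEOID09 geoid height above the NAD83 ellipsoid (m)

h_ellips_wgs84 = h_egm96 + N_egm96                # Step 1: ellipsoidal, WGS84
h_ellips_nad83 = h_ellips_wgs84 + dh_wgs84_nad83  # Step 2: ellipsoidal, NAD83
h_navd88 = h_ellips_nad83 - N_geoid09             # Step 3: orthometric, NAVD88
print(round(h_navd88, 2))
```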
Create an NcML file that looks something like this:\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<netcdf xmlns=""\n location="/usgs/data0/bathy/GOM_010510_3sec.grd">\n <dimension name="lon" orgName="x"/>\n <dimension name="lat" orgName="y"/>\n <variable name="topo" orgName="z">\n <attribute name="units" value="meters"/>\n <attribute name="long_name" value="Topography"/>\n </variable>\n <variable name="lon" orgName="x">\n <attribute name="units" value="degrees_east"/>\n <attribute name="long_name" value="Longitude"/>\n </variable>\n <variable name="lat" orgName="y">\n <attribute name="units" value="degrees_north"/>\n <attribute name="long_name" value="Latitude"/>\n </variable>\n</netcdf>\n}}}\nLog in to the ncWMS admin page:\nGo to the first empty line, and enter the full path name to the NcML file in the "Location" box. Make up a name and an ID and then click "Save configuration". Then go to Godiva2 and browse the grid to make sure it looks okay.\n\nIf it looks okay, modify the THREDDS Data Server bathy catalog\nand add a new dataset with the above NcML cut-and-pasted into it.
Use the Windows program "mov2avi" to convert QuickTime .mov files from GNOME to an .avi that can be played in PowerPoint under Windows.\n{{{\nc:\rps\programs\mov2avi -c19\n}}}\nwill make a file called "gnome_movie.avi" with no compression.\n{{{\nmov2avi -?c\n}}}\nwill show all the CODECS. Unfortunately there isn't a "Microsoft RLE", which is perfect for GNOME movies. But once you've got an uncompressed .avi, you can convert it to RLE using VideoMach.
I used to use "gribtocdl" and "gribtonc" to convert GRIB files to NetCDF. These tools no longer work for GRIB2 files.\n\nBut here are two easy ways to convert GRIB2 files (and GRIB1 for that matter) to NetCDF:\n\n* Use WGRIB2's NetCDF output option: \n{{{\nwgrib2 ds.wspd.bin -netcdf\n}}}\n* Use NetCDF-Java: \n{{{\njava -classpath ~/java/jar/toolsUI-2.2.18.jar ucar.nc2.iosp.grib.Grib2Netcdf ds.wspd.bin\n}}}\n\nWGRIB2 writes lon/lat values, while NetCDF-Java just writes the projected coordinates.\n\nOf course, the best thing would be to ask whoever is producing the GRIB files to put up a GDS server so that they could be accessed as NetCDF to begin with!
Dave Schwab gave me the bathymetry for the Great Lakes as 162 tiles in the form of GEODAS ".g98" format files. These are simple binary files with a 128-byte header (see documentation at <> for more information). After perusing the g98 format doc, I wrote a Matlab program to input g98 files and output NetCDF files. Because I wanted to use "gdal_merge" to merge the NetCDF files into a single tile, and GDAL plots files with decreasing lat upside down, I wrote the NetCDF files "upside down". After using "gdal_merge" (version from FWTOOLS 2.2.8), the resulting single tile 800MB GeoTIFF looked fine.\n\n{{{\nStep 1. In Matlab, run "c:\rps\cf\glos\geodas-great-lakes\read_g98.m". This produces 161 NetCDF files. I had to skip the 5th .g98 file, "gna42080.g98" as it would not read correctly, despite having the same file size on disk as the other tiles. \n\nStep 2. In the FWTOOLS command shell, run "do_merge.bat", which looks like this:\n\ngdal_merge.bat -o big1.tif ... ^\ ... ^\\n\nI had to do this because *.nc does not work on the PC. The "caret symbols" are continuation symbols for Windows Batch Files.\n\nStep 3. Convert the GeoTIFF back to NetCDF.\n\nC:\RPS\cf\glos\geodas-great-lakes>gdal_translate big1.tif -of NetCDF\n\nUnfortunately, this creates just a "Band1" variable with no metadata.\n\nStep 4. 
Generate the NcML to put it in standard COARDS-compliant topo form:\n <netcdf xmlns=""\n location="/var/local/glos/thredds/Bathymetry/">\n <dimension name="lon" orgName="x"/>\n <dimension name="lat" orgName="y"/>\n <variable name="lon" shape="lon" type="double">\n <attribute name="units" value="degrees_east"/>\n <values start="-93.00" increment="0.0008333333333333333"/>\n </variable>\n <variable name="lat" shape="lat" type="double">\n <attribute name="units" value="degrees_north"/>\n <values start="50.00" increment="-0.0008333333333333333"/>\n </variable>\n <variable name="topo" orgName="Band1">\n <attribute name="units" value="meters"/>\n <attribute name="long_name" value="Topography"/>\n </variable>\n <attribute name="Conventions" value="COARDS"/>\n </netcdf>\n\n}}}\n
in OSGeo4W shell:\n{{{\ne:\DEMS_NELiDAR\dems\n\ -o maine.tif 19_04534856.img 19_04534854.img 19_04534852.img 19_04534851.img 19_04544856.img 19_04544854.img 19_04544852.img 19_04544851.img 19_04564856.img 19_04564854.img 19_04564852.img\n}}}
MassGIS data is all in NAD83 Mass State Plane Coordinates, in meters. So that is EPSG:26986.\n\nThe 1:25000 coastline for Massachusetts is available as an Arc Shapefile from MassGIS at (self-extracting zip which has all the components of the shapefile, including PRJ)\n\nSo on Windows, I fire up the "FWTools Shell" and use "ogr2ogr" from the FWTOOLS set to convert from Arc Shapefile to KML in one step!\n\n{{{\nogr2ogr -f KML -s_srs EPSG:26986 -t_srs EPSG:4326 mass_coast_25k.kml OUTLINE25K_ARC.shp\n}}}\n\nYou can convert to lots of other format types too. Just type "ogr2ogr" to see the list!\n\nFWTOOLS is available for Linux and Windows at\n\nHere's how I converted the coastline shapefile to Matlab:\nFirst I converted to GMT format:\n{{{\nogr2ogr -f GMT -s_srs EPSG:26986 -t_srs EPSG:4326 coast.gmt OUTLINE25K_ARC.shp\n}}}\nThen I did a global replace: "#" to "%" and ">" to "NaN NaN NaN".\nThen I was able to just load the file into Matlab:\n{{{\ncoast=load('coast.gmt');\n% join segments to make a continuous coastline:\nnew=join_cst(coast,0.00001); % with 1e-5 degrees tolerance (1 m)\ncoast=new;\ncomment='Converted from MassGIS 25K coast, NAD83';\nsave coast_mass_25k.mat coast comment\n}}}
Here's a non-ArcGIS method for converting images and grids from the Mass Bay Open-File DS99 ( to self-describing GeoTIFF images and grids, and to Google Earth.\n\nConversion to GeoTIFF:\n\n1. Install FWTOOLS, as we will use the "gdal_translate" program to convert to Mercator GeoTIFF (with coordinate system fully specified internally so that no additional files are required), and the "gdalwarp" program to warp the Mercator GeoTIFF to a Geographic (EPSG:4326) coordinate system that Google Earth requires. \n\n2. From the command line (On a PC, start the FWTools Shell first) convert to Mercator GeoTIFF using\n{{{\ngdal_translate -a_srs "+proj=merc +lat_ts=41.65 +lon_0=-
Several years ago I wrote some bash scripts that used GDAL to convert the Arc ASCII grids into hundreds of individual NetCDF files (using gdal_translate) and then munged the whole thing together with a union aggregation of three different joinNew aggregations plus a NetCDF file containing time. It was a tour de force of NcML aggregation, but difficult to update, and a bit inefficient. \n\nSo instead, I just wrote some Python scripts to download the ARC ASCII .gz files from PRISM, and convert them to NetCDF using Python, creating single NetCDF files for each decade that contain temp_min, temp_max, precip_mean, lon, lat and time. I put these files on, but here they are. They require the GDAL and NetCDF4-Python packages, which are supplied in Python(x,y) [windows 32-bit only] and in the Enthought Python Distribution (EPD) [mac, linux 32, linux 64, win32, win64]. I used EPD for Linux 64 bit (geoport is a 64-bit machine running Ubuntu).\n\nFirst I wrote a function and script to download the files by decade,\n{{{\ndef get_prism_files(decades,vars):\n\n from ftplib import FTP\n import os\n #decades=['1890-1899','2000-2009']\n #vars=['ppt','tmax','tmin','tdmean']\n #vars=['ppt','tmax','tmin']\n ftp=FTP('')\n ftp.login()\n for decade in decades:\n try:\n os.mkdir(decade)\n except:\n print('Directory ' + decade + ' already exists')\n \n for var in vars:\n ftp.cwd('/pub/prism/us/grids/' + var + '/' + decade)\n for file in ftp.nlst():\n print(file)\n ftp.retrbinary('RETR ' + file,open('./'+decade+'/'+file,'wb').write)\n \n\n os.system('gunzip ./'+decade+'/*.gz')\n \n return\n}}}\nwhich I called using "", which creates directories of uncompressed ARC ascii files, one for each decade:\n{{{\nimport get_prism_files as 
rps\n\n#decades=['1890-1899']\n#decades=['1900-1909','1910-1919','1920-1929','1930-1939','1940-1949']\n#decades=['1950-1959','1960-1969','1970-1979','1980-1989']\ndecades=['1990-1999','2000-2009','2010-2019']\n\n#vars=['ppt','tmax','tmin','tdmean']\nvars=['ppt','tmax','tmin']\n\nrps.get_prism_files(decades,vars)\n}}}\n\nI then called a script to read the data and chunk it into NetCDF files:\n{{{\nimport gdal\nimport os\nimport datetime as dt\nimport netCDF4\nimport numpy as np\nimport re\n\ndecades=['1890-1899','1900-1909','1910-1919','1920-1929','1930-1939','1940-1949',\n '1950-1959','1960-1969','1970-1979','1980-1989','1990-1999',\n '2000-2009','2010-2019']\n\n#vars=['ppt','tmax','tmin','tdmean']\nvars=['ppt','tmax','tmin']\n\n#rps.get_prism_files(decades,vars)\n\n# read 1 sample dataset to get lon/lat \ndataset=gdal.Open('us_tmin_1895.01') # sample file\na=dataset.ReadAsArray() #data\nnlat,nlon=np.shape(a)\nb=dataset.GetGeoTransform() #bbox, interval\nlon=np.arange(nlon)*b[1]+b[0]\nlat=np.arange(nlat)*b[5]+b[3]\n\nbasedate=dt.datetime(1858,11,17,0,0,0)\n\nfor decade in decades:\n \n #create netCDF4 file\n nco = netCDF4.Dataset('./chunk/prism_'+decade+'.nc','w',clobber=True)\n chunk_lon=16\n chunk_lat=16\n chunk_time=12\n #sigdigits=4\n nco.createDimension('lon',nlon)\n nco.createDimension('lat',nlat)\n nco.createDimension('time',None)\n timeo=nco.createVariable('time','f4',('time'))\n lono=nco.createVariable('lon','f4',('lon'))\n lato=nco.createVariable('lat','f4',('lat'))\n # 16 MB for one year:\n tmno = nco.createVariable('tmn', 'i2', ('time', 'lat', 'lon'), \n zlib=True,chunksizes=[chunk_time,chunk_lat,chunk_lon],fill_value=-9999)\n tmxo = nco.createVariable('tmx', 'i2', ('time', 'lat', 'lon'), \n zlib=True,chunksizes=[chunk_time,chunk_lat,chunk_lon],fill_value=-9999)\n ppto = nco.createVariable('ppt', 'i4', ('time', 'lat', 'lon'), \n zlib=True,chunksizes=[chunk_time,chunk_lat,chunk_lon],fill_value=-9999)\n #attributes\n timeo.units='days since 1858-11-17 
00:00:00'\n lono.units='degrees_east'\n lato.units='degrees_north'\n \n tmno.units='degC'\n tmno.scale_factor = 0.01\n tmno.add_offset = 0.00\n tmno.long_name='minimum monthly temperature'\n tmno.set_auto_maskandscale(False)\n \n tmxo.units='degC'\n tmxo.scale_factor = 0.01\n tmxo.add_offset = 0.00\n tmxo.long_name='maximum monthly temperature'\n tmxo.set_auto_maskandscale(False)\n \n ppto.units='mm/month'\n ppto.scale_factor = 0.01\n ppto.add_offset = 0.00\n ppto.long_name='mean monthly precipitation'\n ppto.set_auto_maskandscale(False)\n \n nco.Conventions='CF-1.4'\n \n #write lon,lat\n lono[:]=lon\n lato[:]=lat\n \n pat=re.compile('us_tmin_[0-9]{4}\.[0-9]{2}')\n itime=0\n #step through data, writing time and data to NetCDF\n #for root, dirs, files in os.walk('mintemp'):\n\n for root, dirs, files in os.walk(decade):\n dirs.sort()\n files.sort()\n for f in files:\n if re.match(pat,f):\n year=int(f[8:12])\n mon=int(f[13:15])\n if mon <= 12 :\n date=dt.datetime(year,mon,1,0,0,0)\n print(date)\n dtime=(date-basedate).total_seconds()/86400.\n timeo[itime]=dtime\n # min temp\n tmn_path = os.path.join(root,f)\n print(tmn_path)\n tmn=gdal.Open(tmn_path)\n a=tmn.ReadAsArray() #data\n tmno[itime,:,:]=a\n # max temp\n tmax_path=tmn_path.replace('tmin','tmax')\n print(tmax_path)\n tmx=gdal.Open(tmax_path)\n a=tmx.ReadAsArray() #data\n tmxo[itime,:,:]=a\n # mean precip\n ppt_path=tmn_path.replace('tmin','ppt')\n print(ppt_path)\n ppt=gdal.Open(ppt_path)\n a=ppt.ReadAsArray() #data\n ppto[itime,:,:]=a\n \n itime=itime+1\n\n nco.close()\n}}}\n\n
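The 'i2' variables with scale_factor=0.01 store each value as a packed 16-bit integer; with set_auto_maskandscale(False) the script writes the PRISM integers directly, and CF-aware readers apply the scaling on the way out. The round trip looks like this (pure-Python sketch of the CF packing convention; pack/unpack are illustrative helpers, not library calls):

```python
# CF-style packing: packed = round((value - add_offset) / scale_factor),
# stored as int16; readers recover value = packed * scale_factor + add_offset.
scale_factor, add_offset, fill = 0.01, 0.0, -9999

def pack(value):
    p = int(round((value - add_offset) / scale_factor))
    assert -32768 <= p <= 32767, 'out of int16 range'
    return p

def unpack(p):
    return None if p == fill else p * scale_factor + add_offset

t = 21.37          # degC
p = pack(t)        # stored in the file as an int16
print(p, unpack(p))
```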
Convert NOAA XYZ swath bathy text files from Kate McMullen to 32-bit GeoTIFF and then to ArcGIS grid\n\nIssues: the original xyz file was more than 2 Gb, so we had to split it, create two .grd files, and then use grdpaste to put them together. I also discovered that specifying WKT was easier than looking up the EPSG codes for UTM. But EPSG:4326 worked better for going to geographic. The "+proj=latlong" string didn't specify any ellipsoid, even though the UTM has a NAD83 ellipsoid defined.\n\n\n{{{\n#!/bin/bash\n\n# Convert NOAA XYZ swath bathy text files from Kate McMullen to GeoTIFF\n\n# This script uses tools from\n# GMT:\n# and\n# FWTOOLS:\n\n# ------------------------CONVERT THE METER-RESOLUTION GRID \n\n# set GMT with sufficient precision for big UTM numbers\ngmtset D_FORMAT %15.4f\nminmax H11310_1m_UTM19NAD83.txt\n\n# the results of "minmax" are used here:\n\nxyz2grd -R298076.7/309351.7/4599754.86/4607312.86 \\n-Gkate_1m.grd -I1.0 H11310_1m_UTM19NAD83.txt\n\n# Specify 1/2 grid cell (0.5 m) larger on each end (point=>raster)\n# and specify output projection as NAD83(CSRS98) / UTM zone 19N (EPSG:2037)\ngdal_translate -a_ullr 298076.2 4607313.36 309352.2 4599754.36 \\n-a_srs "+proj=utm +zone=19 +datum=NAD83" kate_1m.grd kate_1m.tif\n\n# convert UTM to Geographic also\n\ngdalwarp kate_1m.tif -rb -t_srs "EPSG:4326" kate_1m_geo.tif\n\n\n# ------------CONVERT THE HALF-METER-RESOLUTION GRID --------------------\n\n# The 0.5 m file is too big for "minmax" to handle, so split\n# in files with
For the 30 year FVCOM/NECOFS archive run at SMAST, we were using aggregation=union to join together an aggregation=joinExisting with two additional netcdf files: 1) a new file containing lon/lat coordinates, and 2) an auxiliary file containing z0. \n\nThe problem was that when wrapped in a union aggregation, the joinExisting wasn't automatically updating as new files were added, and the metadata was not being cached.\n\nSo instead, we decided to modify the 1st file of the aggregation to include the correct lon/lat and add the extra auxiliary information. \n\nTo do this, we first used ncks to grab the 1st time step from the union aggregation. This seg faulted on the local smast machine, but worked okay on\n\n{{{\n ncks -d time,0\n}}}\nThen in Python, we did\n{{{\nIn [10]: nci=netCDF4.Dataset('','r+')\nIn [11]: time=nci.variables['time'][:]\nIn [12]: time\nOut[12]: array([ 43509.], dtype=float32)\nIn [13]: nci.variables['time'][0]=time-1/24.\nIn [14]: time=nci.variables['time'][:]\nIn [15]: time\nOut[15]: array([ 43508.95703125], dtype=float32)\nIn [16]: nci.close()\n}}}\n\nto write a new time value that is one hour (1/24 days) earlier. We then renamed this file to be "", since the first real file in "http/www/CODFISH/Data/FVCOM/NECOFS" that is being aggregated is named "".\n\nWe then moved the existing catalog to \n
Converting the Mass Bay 10 m bathy from Arc Binary (Mercator) to ASCII XYZ (geographic)\n{{{\ngdal_translate
The old NOAA smooth sheets are available as TIFF or Mr. Sid. This one is 88 MB as a TIFF, and 8 MB as a Mr. Sid. Can Matlab compress the TIFF to JPEG 2000 (which, like Mr. Sid, also uses wavelet compression) with comparable quality? Let's try making an 8MB JPEG 2000 file to find out.\n\nFirst get the Mr. Sid and TIFF image from NOAA:\n{{{\nwget\nwget\n}}}\nafter ungzipping them, read the tiff into Matlab, convert from an indexed image to a true color image (needed for JPEG 2000), and then compress to an 8MB file:\n{{{\n[a,map] = imread('c:\downloads\h01832.tif'); % a is 144 MB\nb = ind2rgb8(a,map); % b is 432 MB (3 times bigger than a)\noutput_file_size = 8e6; % size in bytes (8MB)\ns = whos('b');\nratio = s.bytes/output_file_size;\n imwrite(b,'c:\downloads\h01832.jp2','compressionratio',ratio); % results in 8MB file that looks as good as Mr. Sid\n}}}\n
Login to Wakari Enterprise at\ngo to your project, and open a terminal. Then type:\n{{{\nconda config --add channels wakari\nconda create -n my_root --clone=/opt/wakari/anaconda\nconda config --add channels\nconda install iris pyoos netcdf4\n}}}\nthen for your project, go to "compute resource config" and enter the default python environment for that project. For example:\n{{{\n/projects/rsignell/test/envs/my_root\n}}}\n\n
To create aggregations of USGS model data on blackburn served by OPeNDAP, there are several steps: copying the files, making the aggregation catalog, and reloading the THREDDS Data Server (TDS) so that it will see the new catalog.\n* The first step is to copy the files from marlin to blackburn. We have set up a directory on that is writable by group "usgs". All USGS users will be part of the group "usgs", therefore any USGS user should be able to create directories and scp files to blackburn from other machines. In the example below, we copy some ROMS and WRF output files from a coupled simulation 117 for Hurricane Isabel, making a new directory first using a one-line ssh command. Commands like these could be appended to the run script to automatically transfer the files on run completion. You shouldn't have to supply a password if you've appended your public key from marlin onto your ~/.ssh/authorized_keys file on blackburn.\n{{{\nssh mkdir -p /blackburn/usgs/jcwarner/models/Isabel/r117\nscp -p ocean_his_*.nc\nscp -p wrfout_*.nc\n}}}\n* The next step is to make an aggregation catalog for the TDS on blackburn. Normally we create aggregations with all the time records. If the ROMS files looked like "" with dimension "ocean_time" you could aggregate them using this NcML snippet:\n{{{\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/Volumes/models/carolinas/Isabel/r117"\n regExp=".*ocean_his_[0-9]{4}\.nc$"/>\n </aggregation>\n </netcdf>\n}}}\nThe Java regular expression syntax is a bit unusual. The ".*" matches anything and is necessary at the beginning to match the beginning of the full path name that the <scan> function is operating on. See <> for more info. \n\nIf the files were local, you could open this virtual dataset's NcML snippet the same way you would open a NetCDF file in any application that used NetCDF-Java. 
But since these files are remote, we will add the NcML to a TDS catalog so that we can serve this virtual dataset via OPeNDAP.\n\nIn the example below we make a catalog that has both the ROMS and WRF aggregations. If the WRF files looked like "" with dimension "time", the whole catalog might look like: \n{{{\n<catalog xmlns=""\n xmlns:xlink="" name="OPeNDAP Data Server" version="1.0.1">\n\n <service name="ncdods" serviceType="OpenDAP" base="/thredds/dodsC/"/>\n <dataset name="Carolinas Coastal Change Project" ID="carolinas">\n <metadata inherited="true">\n <serviceName>gridServices</serviceName>\n <authority></authority>\n <dataType>Grid</dataType>\n <dataFormat>NetCDF</dataFormat>\n <creator>\n <name vocabulary="DIF">OM/WHSC/USGS</name>\n <contact url="" email=""/>\n </creator>\n <publisher>\n <name vocabulary="DIF">OM/WHSC/USGS</name>\n <contact url="" email=""/>\n </publisher>\n\n <documentation xlink:href=""\n xlink:title="Carolinas Coastal Change Program"/>\n </metadata>\n\n <dataset name="Isabel Coupled runs">\n\n <dataset name="Run 117">\n <dataset name="roms" ID="carolinas/Isabel/r117/roms"\n urlPath="carolinas/Isabel/r117/roms">\n <metadata>\n <documentation type="summary">ROMS output from R117</documentation>\n </metadata>\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/Volumes/models/carolinas/Isabel/r117"\n regExp=".*ocean_his_[0-9]{4}\.nc$"/>\n </aggregation>\n </netcdf>\n </dataset>\n <dataset name="wrf" ID="carolinas/Isabel/r117/wrf"\n urlPath="carolinas/Isabel/r117/wrf">\n <metadata>\n <documentation type="summary">WRF output from R117</documentation>\n </metadata>\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="time" type="joinExisting">\n <scan location="/Volumes/models/carolinas/Isabel/r117"\n regExp=".*wrfout_[0-9]{4}\.nc$"/>\n </aggregation>\n </netcdf>\n </dataset>\n </dataset>\n </dataset>\n </dataset>\n</catalog>\n}}}\n\n* The 
last step is to add the catalog to the TDS and reload the server. First copy the catalog to the TDS catalog location on blackburn: \n{{{\n/usr/local/tomcat2/content/thredds\n}}}\nSimply copying the file to the directory is not sufficient to get the TDS to recognize it. There are two ways to add the catalog to the TDS. If you want the new datasets to appear in the top level TDS catalog, you add a link to the catalog in the file\n{{{\n/usr/local/tomcat2/content/thredds/catalog.xml\n}}}\nSo if your catalog is named "carolinas_catalog.xml", the catalog.xml file might look like:\n{{{\n <?xml version="1.0" encoding="UTF-8"?>\n<catalog name="USGS SedTrans THREDDS Server"\n xmlns=""\n xmlns:xlink="">\n\n <catalogRef xlink:title="Bathymetry" xlink:href="bathy_catalog.xml" name=""/>\n <catalogRef xlink:title="Carolinas Project" xlink:href="carolinas_catalog.xml" name=""/>\n</catalog>\n}}}\nIf you don't want the catalog listed in the top level TDS catalog, you can add a line like\n{{{\n<catalogRoot>carolinas_catalog.xml</catalogRoot>\n}}}\nto the threddsConfig.xml file (also in /usr/local/tomcat2/content/thredds). After either of these methods, you need to reload the TDS, which you can do through the Tomcat Manager app. Log in as "admin" to:\n{{{\n\n}}}\nScroll down to "thredds" and click on the "reload" command. Your catalog will then be available at:\n{{{\n\n}}}\nTo use the data via OpenDAP (e.g. from the Matlab NJ Toolbox at, drill down to the OpenDAP data URL and copy it so you can paste it into Matlab or IDV.
Getting ROMS going with rotated and cell-centered vectors:\n\n1. Install F-TDS, following instructions at: <>\n\n2. Create a clean.ncml file that removes most of the variables, leaving only the ones we want to process with FERRET:\n{{{\n<netcdf xmlns="">\n <remove type="variable" name="Akk_bak"/>\n <remove type="variable" name="Akp_bak"/>\n <remove type="variable" name="Akt_bak"/>\n <remove type="variable" name="Akv_bak"/>\n <remove type="variable" name="Cs_r"/>\n <remove type="variable" name="Cs_w"/>\n <remove type="variable" name="FSobc_in"/>\n <remove type="variable" name="FSobc_out"/>\n <remove type="variable" name="Falpha"/>\n <remove type="variable" name="Fbeta"/>\n <remove type="variable" name="Fgamma"/>\n <remove type="variable" name="M2nudg"/>\n <remove type="variable" name="M2obc_in"/>\n <remove type="variable" name="M2obc_out"/>\n <remove type="variable" name="M3nudg"/>\n <remove type="variable" name="M3obc_in"/>\n <remove type="variable" name="M3obc_out"/>\n <remove type="variable" name="Tcline"/>\n <remove type="variable" name="Tnudg"/>\n <remove type="variable" name="Tobc_in"/>\n <remove type="variable" name="Tobc_out"/>\n <remove type="variable" name="Znudg"/>\n <remove type="variable" name="Zob"/>\n <remove type="variable" name="Zos"/>\n <remove type="variable" name="bustr"/>\n <remove type="variable" name="bvstr"/>\n <remove type="variable" name="dstart"/>\n <remove type="variable" name="dt"/>\n <remove type="variable" name="dtfast"/>\n <remove type="variable" name="el"/>\n <remove type="variable" name="f"/>\n <remove type="variable" name="gamma2"/>\n <remove type="variable" name="gls_Kmin"/>\n <remove type="variable" name="gls_Pmin"/>\n <remove type="variable" name="gls_c1"/>\n <remove type="variable" name="gls_c2"/>\n <remove type="variable" name="gls_c3m"/>\n <remove type="variable" name="gls_c3p"/>\n <remove type="variable" name="gls_cmu0"/>\n <remove type="variable" name="gls_m"/>\n <remove type="variable" name="gls_n"/>\n <remove 
type="variable" name="gls_p"/>\n <remove type="variable" name="gls_sigk"/>\n <remove type="variable" name="gls_sigp"/>\n <!-- <remove type="variable" name="h"/>\n <remove type="variable" name="hc"/>-->\n <remove type="variable" name="lat_psi"/>\n <remove type="variable" name="lat_u"/>\n <remove type="variable" name="lat_v"/>\n <remove type="variable" name="lon_psi"/>\n <remove type="variable" name="lon_u"/>\n <remove type="variable" name="lon_v"/>\n <remove type="variable" name="mask_psi"/>\n <remove type="variable" name="pm"/>\n <remove type="variable" name="pn"/>\n <remove type="variable" name="rdrg"/>\n <remove type="variable" name="rdrg2"/>\n <remove type="variable" name="rho"/>\n <remove type="variable" name="rho0"/>\n <remove type="variable" name="s_w"/>\n <remove type="variable" name="shflux"/>\n <remove type="variable" name="spherical"/>\n <remove type="variable" name="sustr"/>\n <remove type="variable" name="svstr"/>\n<!-- <remove type="variable" name="theta_b"/>\n <remove type="variable" name="theta_s"/>-->\n <remove type="variable" name="w"/>\n <remove type="variable" name="xl"/>\n<!-- <remove type="variable" name="zeta"/>-->\n\n <!--<remove type="variable" name="salt"/> -->\n <variable name="salt">\n <attribute name="missing_value" type="float" value="0.0"/>\n </variable>\n\n <!--<remove type="variable" name="temp"/> -->\n <variable name="temp">\n <attribute name="missing_value" type="float" value="0.0"/>\n </variable>\n\n <aggregation dimName="ocean_time" type="joinExisting" timeUnitsChange="true">\n <scan location="/data/ftp/upload/Estuarine_Hypoxia/umces/chesroms/synoptic/output/history_output/" suffix=".nc" subdirs="true"/>\n </aggregation>\n</netcdf>\n}}}\nAn example "clean.ncml" dataset can be seen at <>\n\n3. Create a Ferret "vectors.jnl" file that points to the "clean.ncml" URL. See the FERRET documentation for syntax (\n{{{\n[tomcat@testbedapps dynamic]$ more vectors.jnl\n\n!use ""\n!use "\nc"\n!use ""\nuse ""\n\n! 
We define a new axis for the rotated data. We are going to use an average of points i and i + 1 to move the data to the center of the cell.\n! This means there will be fewer points in the centered grid than the original so we need a new set of slightly smaller axis.\ndefine axis/x=1:98:1 xrho\ndefine axis/y=1:148:1 yrho\n\n! Define masked variables if mask eq 1 then var\n! Then u_masked will be used in place of u below, etc.\n\nlet u_masked = if mask_u eq 1 then u\nlet v_masked = if mask_v eq 1 then v\nlet ubar_masked = if mask_u eq 1 then ubar\nlet vbar_masked = if mask_v eq 1 then vbar\n\n! These lines produce new variables for the grid using the ferret shf operator which\n! in this case is effectively subsetting the array and eliminating the first item\n! Note the "let/d=1/units" syntax. This instructs ferret to create a new variable (let)\n! and store it in dataset 1 (d=1). The units are to come from the existing lon_rho variable.\n! Without the d=1 the new variables would not be visible to the TDS/NCML subsequent processing.\n! The second and fourth lines of this code block are defining new grids.\n! IMPORTANT NOTE: It doesn't appear that these four lines are currently used in further calculations\n! so they could presumably be removed.\nlet/d=1/units="`lon_rho,return=units`" lon_rho_p_0 = lon_rho[i=@shf:+1, j=@shf:+1]\nlet/d=1/units="`lon_rho,return=units`" lon_rho_p = lon_rho_p_0[gx=xrho@asn,gy=yrho@asn]\nlet/d=1/units="`lat_rho,return=units`" lat_rho_p_0 = lat_rho[i=@shf:+1, j=@shf:+1]\nlet/d=1/units="`lat_rho,return=units`" lat_rho_p = lat_rho_p_0[gx=xrho@asn,gy=yrho@asn]\n\n\n! These lines use the shift and grid transform operators to produce the needed angles on the new grid.\nlet/d=1 angle_p_0 = angle[i=@shf:+1, j=@shf:+1]\nlet/title="angle centered" angle_p = angle_p_0[gx=xrho@asn,gy=yrho@asn]\n\n! 
These lines use the shift and grid transform operators to produce a mask on the new grid.\nlet/d=1 mask_p_0 = mask_rho[i=@shf:+1, j=@shf:+1]\nlet/d=1 mask_p = mask_p_0[gx=xrho@asn,gy=yrho@asn]\n\n! These lines average the data to the centers of the cells.\nlet/d=1/units="`u,return=units`" u_p_0 = 0.5*(u_masked[j=@shf:+1]+u_masked[i=@shf:+1,j=@shf:+1])\nlet/title="U centered"/units="`u,return=units`" u_p = u_p_0[gx=xrho@asn,gy=yrho@asn]\nlet/d=1/units="`v,return=units`" v_p_0 = 0.5*(v_masked[i=@shf:+1]+v_masked[i=@shf:+1,j=@shf:+1])\nlet/title="V centered"/units="`v,return=units`" v_p = v_p_0[gx=xrho@asn,gy=yrho@asn]\n\n! These lines average the masked data to the centers of the cells.\nlet/d=1/units="`ubar,return=units`" ubar_p_0 = 0.5*(ubar_masked[j=@shf:+1]+ubar_masked[i=@shf:+1,j=@shf:+1])\nlet/title="UBAR centered"/units="`ubar,return=units`" ubar_p = ubar_p_0[gx=xrho@asn,gy=yrho@asn]\nlet/d=1/units="`vbar,return=units`" vbar_p_0 = 0.5*(vbar_masked[i=@shf:+1]+vbar_masked[i=@shf:+1,j=@shf:+1])\nlet/title="VBAR centered"/units="`vbar,return=units`" vbar_p = vbar_p_0[gx=xrho@asn,gy=yrho@asn]\n! ==========\n\n! Finally we use trig to transform the centered data to the new grid\nLET/d=1 urot = u_p*COS(angle_p) - v_p*SIN(angle_p)\nLET/d=1 vrot = u_p*SIN(angle_p) + v_p*COS(angle_p)\n\n! This transforms the masked data.\nLET/d=1 ubarrot = ubar_p*COS(angle_p) - vbar_p*SIN(angle_p)\nLET/d=1 vbarrot = ubar_p*SIN(angle_p) + vbar_p*COS(angle_p)\n}}}\n\n4. In the THREDDS catalog, create a dataset and reference the vectors.jnl as the data location. 
Remove all the FERRET intermediate variables and specify coordinates for the new cell-centered and rotated velocities using NcML:\n{{{\n<dataset name="ChesROMS - Synoptic (Rotated vectors)" ID="estuarine_hypoxia/chesroms/vectors"\n urlPath="estuarine_hypoxia/chesroms/">\n <serviceName>agg</serviceName>\n <netcdf xmlns=""\n location="/var/www/tomcat/content/las/conf/server/data/dynamic/vectors.jnl">\n <!--location="/data/ftp/upload/Estuarine_Hypoxia/umces/chesroms/synoptic_vectors/chesroms_rot_step2.jnl">-->\n <remove type="variable" name="LON_RHO_P_0"/>\n <!--<remove type="variable" name="lon_rho_p"/>-->\n <remove type="variable" name="LAT_RHO_P_0"/>\n <!-- <remove type="variable" name="lat_rho_p"/>-->\n <remove type="variable" name="ANGLE_P_0"/>\n <remove type="variable" name="ANGLE_P"/>\n <remove type="variable" name="MASK_P_0"/>\n <remove type="variable" name="MASK_P"/>\n <remove type="variable" name="U_P_0"/>\n <remove type="variable" name="U_P"/>\n <remove type="variable" name="V_P_0"/>\n <remove type="variable" name="V_P"/>\n <remove type="variable" name="UBAR_P_0"/>\n <remove type="variable" name="UBAR_P"/>\n <remove type="variable" name="VBAR_P_0"/>\n <remove type="variable" name="VBAR_P"/>\n <attribute name="wms-link" value=""/>\n <attribute name="wms-layer-prefix" value="chesroms-vectors"/>\n <attribute name="title" value="ChesROMS (UMCES) - ROMS-2.2"/>\n <attribute name="id" value="eh.umces.chesroms.synoptic_vectors"/>\n <attribute name="naming_authority" value="noaa.ioos.testbed"/>\n <attribute name="summary"\n value="Chesapeake Bay Application of ROMS/TOMS 2.2 with rotated vectors and no hypoxic variables"/>\n <attribute name="creator_name" value="Wen Long"/>\n <attribute name="creator_email" value=""/>\n <attribute name="creator_url" value=""/>\n <attribute name="cdm_data_type" value="Grid"/>\n\n <variable name="ZETA_P">\n <attribute name="coordinates" value="OCEAN_TIME LAT_RHO_P LON_RHO_P"/>\n </variable>\n <variable name="H_P">\n <attribute 
name="coordinates" value="LAT_RHO_P LON_RHO_P"/>\n </variable>\n <variable name="UROT">\n <attribute name="units" value="m/s"/>\n <attribute name="coordinates" value="OCEAN_TIME S_RHO LAT_RHO_P LON_RHO_P"/>\n <attribute name="standard_name" value="eastward_sea_water_velocity"/>\n <attribute name="missing_value" type="float" value="-1.0e34"/>\n <attribute name="wms-layer" value="chesroms-vectors/sea_water_velocity"/>\n </variable>\n <variable name="VROT">\n <attribute name="coordinates" value="OCEAN_TIME S_RHO LAT_RHO_P LON_RHO_P"/>\n <attribute name="standard_name" value="northward_sea_water_velocity"/>\n <attribute name="missing_value" type="float" value="-1.0e34"/>\n <attribute name="units" value="m/s"/>\n <attribute name="wms-layer" value="chesroms-vectors/sea_water_velocity"/>\n </variable>\n <variable name="UBARROT">\n <attribute name="coordinates" value="OCEAN_TIME LAT_RHO_P LON_RHO_P"/>\n <attribute name="missing_value" type="float" value="-1.0e34"/>\n <attribute name="standard_name" value="barotropic_eastward_sea_water_velocity"/>\n <attribute name="units" value="m/s"/>\n <attribute name="wms-layer" value="chesroms-vectors/barotropic_sea_water_velocity"/>\n </variable>\n <variable name="VBARROT">\n <attribute name="coordinates" value="OCEAN_TIME LAT_RHO_P LON_RHO_P"/>\n <attribute name="missing_value" type="float" value="-1.0e34"/>\n <attribute name="standard_name" value="barotropic_northward_sea_water_velocity"/>\n <attribute name="units" value="m/s"/>\n <attribute name="wms-layer" value="chesroms-vectors/barotropic_sea_water_velocity"/>\n </variable>\n <variable name="S_RHO">\n <attribute name="formula_terms"\n value="s: S_RHO eta: ZETA_P depth: H_P a: THETA_S b: THETA_B depth_c: HC"/>\n </variable>\n <attribute name="Conventions" value="CF-1.0"/>\n </netcdf>\n </dataset>\n}}}
Jane Dunworth-Baker <> asked me if I knew where to get a lon,lat text file of the 1000 m isobath for the region [-
I used the\nc:\rps\m\roms\rk4_run2.m\nfile to produce those cool plots. The rk4 works much better than the "stream2" function in Matlab, but it's an interactive thing, so it's tricky to recreate these plots.
Shell script to cut a ROMS file:\n{{{\n# Cut a valid ROMS file out of another valid ROMS file.\n# Can be used on history, grid or averages files.\n# Fixed, and checked against grid file on July 15, 2010\n\n# Usage: do_cut_roms_grid file_in file_out istart istop jstart jstop (1-based indexing)\n\ndeclare -i XI_RHO_START=$3\ndeclare -i XI_RHO_STOP=$4 \ndeclare -i ETA_RHO_START=$5\ndeclare -i ETA_RHO_STOP=$6\n\ndeclare -i ETA_U_START=ETA_RHO_START\ndeclare -i ETA_U_STOP=ETA_RHO_STOP\n\ndeclare -i XI_U_START=XI_RHO_START\ndeclare -i XI_U_STOP=XI_RHO_STOP-1\n\ndeclare -i ETA_V_START=ETA_RHO_START\ndeclare -i ETA_V_STOP=ETA_RHO_STOP-1\n\ndeclare -i XI_V_START=XI_RHO_START\ndeclare -i XI_V_STOP=XI_RHO_STOP\n\ndeclare -i XI_PSI_START=XI_RHO_START\ndeclare -i XI_PSI_STOP=XI_RHO_STOP-1\n\ndeclare -i ETA_PSI_START=ETA_RHO_START\ndeclare -i ETA_PSI_STOP=ETA_RHO_STOP-1\n\nncks -F -d xi_rho,$XI_RHO_START,$XI_RHO_STOP \\n -d eta_rho,$ETA_RHO_START,$ETA_RHO_STOP \\n -d eta_u,$ETA_U_START,$ETA_U_STOP \\n -d xi_u,$XI_U_START,$XI_U_STOP \\n -d eta_v,$ETA_V_START,$ETA_V_STOP \\n -d xi_v,$XI_V_START,$XI_V_STOP \\n -d eta_psi,$ETA_PSI_START,$ETA_PSI_STOP \\n -d xi_psi,$XI_PSI_START,$XI_PSI_STOP \\n $1 $2\n}}}
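The index bookkeeping here follows from the ROMS staggered (Arakawa C) grid: u points have one fewer point than rho points in xi, v points one fewer in eta, and psi points one fewer in both. A small pure-Python sketch of the same 1-based arithmetic, handy for sanity-checking the `declare` lines (the function name is mine):

```python
def cut_indices(i1, i2, j1, j2):
    # Given a 1-based rho-point range (xi: i1..i2, eta: j1..j2),
    # return the matching u, v, and psi ranges for cutting a ROMS subgrid.
    return {
        "rho": (i1, i2, j1, j2),
        "u":   (i1, i2 - 1, j1, j2),      # u grid: one fewer point in xi
        "v":   (i1, i2, j1, j2 - 1),      # v grid: one fewer point in eta
        "psi": (i1, i2 - 1, j1, j2 - 1),  # psi grid: one fewer in both
    }
```

E.g. cutting rho points 10..20 in xi and 5..15 in eta gives u points 10..19 / 5..15 and psi points 10..19 / 5..14.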
If you want to cut out a spatiotemporal chunk of NCOM output, you can\njust do it using the NCO tools. For example, if you compile and build\nNCO with OPeNDAP support (easy now that NetCDF includes a native\nOPeNDAP client) and UDUNITS, you can create a local netcdf file directly from\nyour best time series aggregation. Here's an example of cutting just\na chunk near the BP site for a 4 day period:\n{{{\nrsignell@gam:~$ ncks -d time,"2010-08-01 00:00","2010-08-05 00:00" -d lat,27.0,30.0 -d lon,-90.0,-87.0 -v water_temp,salinity,water_u,water_v,surf_el ''\n\n}}}
{{{\nncks -O -F -d time,"2010-04-11 00:00","2010-08-04 00:00" -d lon,-88.37,-85.16 -d lat,29.31,30.4\n}}}\n
geoportal 1.2.4 supports DCAT\n\n"Geoportal Server (1.2.4) now support DCAT (Data Catalog Vocabulary) outputs. The DCAT output is in json format and is available through url pattern http://servername:port/geoportal/rest/find/document?f=dcat, it is possible to add additional parameters to the url as well, please refer to REST API Syntax for additional parameters. "
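Tacking the extra REST parameters onto that URL pattern is just query-string assembly; a sketch (the server name and parameter names here are made up for illustration — check the Geoportal REST API Syntax page for the real ones):

```python
from urllib.parse import urlencode

def dcat_url(server, **params):
    # Assemble a Geoportal REST "find document" URL with f=dcat
    # plus any extra REST parameters passed as keyword arguments.
    query = {"f": "dcat"}
    query.update(params)
    return "http://%s/geoportal/rest/find/document?%s" % (server, urlencode(query))
```

Fetching the resulting URL returns the catalog records as DCAT JSON.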
I just had this problem again: I submit a parallel job (8 cpus) and nothing seems to be happening. I try submitting a serial job in debug mode to see what's going on and it works fine.\n\nThe problem: stray processes that didn't get killed by "qdel".\n\nSolution: \n* Check which nodes the job is running on by doing a "qstat -f"\n* Fire up "konqueror" and see if any of the 4 cpu nodes have more than 4 processes running and if any of the 2 cpu nodes have more than 2 processes running.\n* Kill all the "oceanM" jobs via "rcom killall oceanM"\n* Resubmit the job\nHappiness!
From John Caron:\n\nI've also added the "_CoordinateSystemFor" to be able to assign coordinate systems to a class of variables, in case you don't want to (or can't) tag each data variable.\nNote this is not a CF convention. Docs here:\n\n\nBrand new and little tested. Banging on it and feedback would be appreciated. I'll let you decide if/when to share with others.\n <variable name="coordSysVar4D" type="int" shape="">\n <attribute name="_CoordinateAxes" value="x y zpos time"/>\n <attribute name="_CoordinateTransforms" value="zpos"/>\n <attribute name="_CoordinateSystemFor" value="xpos ypos zpos time"/>\n </variable>\n\n <variable name="coordSysVar3D" type="int" shape="">\n <attribute name="_CoordinateAxes" value="x y time"/>\n <attribute name="_CoordinateSystemFor" value="xpos ypos time"/>\n </variable>\n\n The names "coordSysVar4D", "coordSysVar3D" are not important.\n netCDF-Java just searches for variables that contain "_CoordinateSystemFor"\n attributes and uses those coordinate axes for those dimensions.\n
Ken (and interested UAF folks)\n\nYou guys probably know this, but chunking can really speed up the performance of time series extraction at a point from remote sensing data, yet still yield good performance when returning full scenes at specific times.\n\nWe experimented with 450 images of 1km AVHRR SST in the Gulf of Maine from 2009. The image size is 1222x1183, and we tried chunking at 50x50.\n| Type of File | Size on Disk |Time series (s) |Single Full Scene (s) |\n| NetCDF3 | 7,400M| 0.07| 1.49|\n| NetCDF4+deflation (no chunking) | 420M| 15.63| 0.92|\n| NetCDF4+deflation (chunksize=50) | 400M| 0.31| 0.89|\n\nThese are the median values for 10 extractions of each type. There are a lot of missing values in this dataset because there are a lot of land points, but it would be interesting to try this on the GHRSST data. The script I used was:\n\n{{{\n#!/bin/bash\nfor file in *.nc\ndo \n /usr/bin/ncks -4 -L 1 -O $file netcdf4a/$file\n /usr/bin/ncks -4 -L 1 --cnk_dmn lat,50 --cnk_dmn lon,50 -O $file netcdf4b/$file\ndone \n}}}\n\nIf you want to play with these 3 datasets yourselves, they are here:\n{{{\n\n}}}
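The timings make sense if you count how much compressed data a single-point time series has to pull: with no chunking, each deflated record is effectively read and decompressed whole, while 50x50 chunks touch only one small tile per time step. A rough back-of-envelope sketch (assumes 4-byte values and that the chunk containing the point is read whole):

```python
def bytes_per_point_read(nlat, nlon, chunk=None, itemsize=4):
    # Bytes touched per time step to extract one grid point, assuming
    # the chunk holding that point must be read (and decompressed) whole.
    if chunk is None:
        # no chunking: effectively the whole 2-D scene per time step
        return nlat * nlon * itemsize
    return chunk * chunk * itemsize

full = bytes_per_point_read(1222, 1183)      # unchunked scene
tile = bytes_per_point_read(1222, 1183, 50)  # 50x50 chunk
```

For the 1222x1183 scenes above, that's roughly a 578x reduction per time step, which is in the ballpark of the 15.63s-to-0.31s improvement in the table.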
Look at the end of these logs:\n{{{\\\n}}}
Matlab:\n{{{\nx=urlread('')\na=regexp(x,'bare(.*?)tif.bz2','match');\npart1=''\nfor i=1:2:length(a);\n url=[part1 char(a(i))];\n urlwrite(url,char(a(i)));\nend\n}}}
download all the netcdf-files from a directory:\n\nwget -nc -r -l2 -I /thredds/fileServer/,/thredds/catalog/\n'http://dev-vm188/thredds/catalog/osisaf/'\n\nI use the existing datasetScan catalog.xml file here, and fetch all\nnc-files up to two links away. Besides the nc-file, I get the\ncatalog-file of the nc-file (e.g.\nhttp://dev-vm188/thredds/catalog/osisaf/,\ntoo.\n\nA catalog-file in the fileServer would be safer, since the 2 levels\n(parent and child) might include other information, but at least I can\noffer our users something already now.
Here's how I keep up to date with NCTOOLBOX on my Windows box.\n\n1. I downloaded the Windows binary command line tool for Mercurial from:\n\n\n2. I run the following batch script (c:\RPS\m_contrib\trunk\do_update_nctoolbox.bat) for Windows, which removes the existing toolbox and then clones the latest one from the repository:\n{{{\nREM Batch script to update NCTOOLBOX\nrmdir /s /q nctoolbox\nhg clone nctoolbox\n}}}\n\nUpdate: once the source has been cloned, it's not necessary to clone it again. To update, cd to the nctoolbox directory and execute these two commands:\n{{{\ncd ~/nctoolbox\nhg pull\nhg update\n}}}\n\n\nNote: this script will only complete successfully if the NCTOOLBOX jar files are not in use, which means you need to close Matlab first if you've been using NCTOOLBOX (or perhaps clear the jar files from memory somehow, but I just close Matlab).
It appears that all you have to do now is install EPD as "install as administrator".\nAnd then add one line to the end of the file:\nc:\python27\ArcGIS10.1\lib\site-packages\desktop10.1.pth\n\nso that this file contains these lines:\n{{{\nC:\ArcGIS\Desktop10.1\bin\nC:\ArcGIS\Desktop10.1\arcpy\nC:\ArcGIS\Desktop10.1\ArcToolbox\Scripts\nc:\python27_epd32\lib\site-packages\n}}}\n\nCan we get the Enthought Python Distribution (EPD), NetCDF4-Python and ArcGIS 10.1 to play nice? ArcGIS is 32 bit, and version 10.1 uses Python 2.7. ESRI says ArcGIS requires numpy 1.5.0 and matplotlib.\n\nAccording to ESRI, these Windows registry keys need to be found in order for ArcGIS 10.1 to use an existing Python that has been installed on the system instead of installing its own:\n{{{\n> From: Chris Whitmore <>\n> To: Curtis V Price <>, Laurene Koman <>\n>\n> Date: 01/25/2012 01:30 PM\n> Subject: RE: python installer\n\n> The Desktop setup checks for the existence of these two registry\n> keys (both must be there):\n>\n>\n> Python: HKLM\SOFTWARE\Python\PythonCore\2.7\n> \InstallPath\InstallGroup (the default reg key)\n>\n> Numerical Python: HKLM\SOFTWARE\Microsoft\Windows\n> \CurrentVersion\Uninstall\numpy-py2.7 (the DisplayName reg key).\n>\n> If the setup finds those two keys, it won't install python/numpy.\n}}}\nFor our 64 bit Win 7 system running 32 bit Python, these keys would be:\n{{{\nHKLM\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath\InstallGroup (the default reg key)\nHKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\numpy-py2.7 (the DisplayName reg key).\n}}}\nSo before I installed ArcGIS 10.1, I changed these registry keys from Python27 to Python27_EPD32:\n{{{\nHKLM\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath\nData: C:\Python27_EPD32\nHKLM\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\PythonPath\nData: 
\nC:\Python27_EPD32\Lib;C:\Python27_EPD32\DLLs;C:\Python27_EPD32\Lib\lib-tk\n}}}\n\n{{{\n> From: Christoph Gohlke <>\n> Date: Mon, Aug 22, 2011 at 3:59 PM\n> Subject: Re: [netcdfgroup] OPeNDAP support for the Python netcdf4\n> package on Windows\n> To: Rich Signell <>\n> Cc: John O'Malley <>\n>\n\n> On 8/22/2011 11:41 AM, Rich Signell wrote:\n>>\n>> Christoph,\n>> And that worked! See attached screen grab where a chunk of\n>> topo/bathymetry data for the Cape Cod region is extracted from the\n>> NOAA Coastal Relief Model via OpenDAP and a raster is created.\n>>\n>> So just to make sure I understand:\n>>\n>> if ESRI grabs your code\n>>\n>>\n>>\n>> and they build it using their ArcGIS10.0 environment, it should work\n>> with the DLLs they already ship, right?\n>\n>\n> It is more complicated since ArcGIS10 ships with old netcdf3 and hdf DLLs.\n>\n> 1) First, I would have to update netcdf-4.1.3-msvc9-source with DAP\n> support. This version is not supported by Unidata and requires curl\n> libraries. I don't know what ESRI's policy is on unsupported code.\n>\n> 2) Netcdf.dll needs to be built against a static or custom named\n> version of HDF5-1.8.7 to avoid conflicts with the existing ArcGIS10\n> HDF DLLs. Still, this might not work depending on whether different\n> versions of HDF5 libraries can be loaded/used at runtime.\n>\n> 3) chances are that the netcdf4-python 0.9.7 source distribution is\n> not compatible with numpy 1.3. This can probably be fixed by invoking\n> Cython to generate new source files with numpy 1.3 installed.\n>\n> Depending how useful this would be to others I could try to build\n> ArcGIS10 compatible netcdf4-python binaries.\n>\n> Christoph\n}}}
C code: \nPython-wrapper:
A sample CSW query/response\n{{{\nwget --header "Content-Type:text/xml" --post-file=p.xml '' -O response.txt\n}}}\np.xml looks like\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<csw:GetRecords\n xmlns:csw=""\n xmlns:ogc=""\n xmlns:gmd=""\n xmlns:apiso=""\n xmlns:ows=""\n xmlns:xsd=""\n xmlns:gml=""\n xmlns:xsi=""\n service="CSW"\n version="2.0.2"\n resultType="results"\n outputFormat="application/xml"\n xsi:schemaLocation="\n\n\n\n\n "\n outputSchema=""\n startPosition="1"\n maxRecords="5"\n>\n <csw:Query typeNames="gmd:MD_Metadata">\n <csw:ElementSetName typeNames="gmd:MD_Metadata">full</csw:ElementSetName>\n <csw:Constraint version="1.1.0">\n <ogc:Filter>\n <ogc:PropertyIsEqualTo>\n <ogc:PropertyName>gmd:code</ogc:PropertyName>\n <ogc:Literal>imeds/inundation/simulation/</ogc:Literal>\n </ogc:PropertyIsEqualTo>\n </ogc:Filter>\n </csw:Constraint>\n </csw:Query>\n</csw:GetRecords>\n}}}
Search for recent data containing humidity:\n<,0,180,90&rel=&loc=&ts=2011-03-05T00:00:00&te=2011-03-27T00:00:00=&outputFormat=application/atom.xml>
I grabbed "s3 backup" from A bit terse, but seems to work fine. Started backing up my whole C drive, but that was going to take 14 hours, so ditched that idea. Tried just backing up m_cmg/trunk/cf instead. Make sure to hit the refresh button to see the folder you added. Tried the "usage report" and that worked fine too, showing me that so far I've spend $0.04 on Amazon Web Services.\n\nAfter reading a review in Laptop magazine, they mention some other web-backup services, including HP Upline ($59/year for *unlimited storage*!) I wonder what that really means! They mention also Xdrive ( and LinkUp (, but their favorite is SugarSync ( Said it cost $24.99 in the magazine, but it looked expensive if you had $100GB.
\nI was interested in the new NetCDF capabilities in ArcGIS 9.2, so I asked John O'Malley to install it on my notebook Windows XP computer. Took a couple of hours. \n\nHere's what I found:\n \nTo read NetCDF files, you can't just click on the "add" button like you would if you were dealing with geotiffs or Arc Grids. You have to use the new "multidimensional tools" in the Arc Toolbox. \n\nArc can read COARDS/CF compliant NetCDF grids that are evenly spaced (by clicking on "make NetCDF raster layer", kind of nonintuitive). It can read the "new style" GMT 4.1 NetCDF grid files, for instance. The caveat is that GMT apparently stores coordinate variables (e.g. x,y lon,lat) as floats, and if you have spacing like 3 arc seconds ( 8.3333e-004 degrees), ArcGIS will complain that the spacing is not uniform and refuse to read the data, even though it's a uniformly spaced grid. ESRI should probably be notified that they should interpret these grids as uniform if the deviation in dx and dy is less than 1.0e-5.\n\nArc can read other types of data from NetCDF files (other than uniformly-spaced grids) via the "make NetCDF feature layer". I tried loading one of our "EPIC" style NetCDF time series files, and I was able to select the "temperature" variable and plot a dot on a map, but there were lots of options on the Gui that I didn't fill in or understand, so I don't know what its capabilities are for time series data. \n\nThere is a very nice cookbook procedure for using ArcGIS 9.2 to animate time dependent NetCDF model data at so there certainly is some time series capability. I tried following the procedure. It all worked. \n\nFor NetCDF output, I tried output of an Arc Grid as NetCDF via "Raster to NetCDF". You do this by clicking on the Command Line Window icon (just to the left of the ?) near the right end of the standard toolbar. 
Then you type "RasterToNetCDF <layer name> <netcdf_file_name>".\nI discovered that it writes the georeferencing information into a character attribute called "esri_pe_string" thusly:\n\n{{{\n$ ncdump -h\nnetcdf test36_arc {\ndimensions:\n        lon = 151 ;\n        lat = 80 ;\nvariables:\n        double lon(lon) ;\n                lon:long_name = "longitude coordinate" ;\n                lon:standard_name = "longitude" ;\n                lon:units = "degrees_east" ;\n        double lat(lat) ;\n                lat:long_name = "latitude coordinate" ;\n                lat:standard_name = "latitude" ;\n                lat:units = "degrees_north" ;\n        float topo(lat, lon) ;\n                topo:long_name = "topo" ;\n                topo:esri_pe_string = "GEOGCS[\"GCS_WGS_1984\",DATUM[\"D_WGS_1984\",SPHEROID[\"WGS_1984\",6378137.0,298.257223563]],PRIMEM[\"Greenwich\",0.0],UNIT[\"Degree\",0.0174532925199433]]" ;\n                topo:coordinates = "lon lat" ;\n                topo:units = "Degree" ;\n                topo:missing_value = 0.f ;\n\n// global attributes:\n                :Conventions = "CF-1.0" ;\n                :Source_Software = "ESRI ArcGIS" ;\n}\n}}}\n\nFor non-geographic projections, if known to CF, it adds the "mapping" variable. 
So for UTM, we get:\n\n{{{\n$ ncdump -h\nnetcdf test36_utm_arc {\ndimensions:\n x = 141 ;\n y = 101 ;\nvariables:\n double x(x) ;\n x:long_name = "x coordinate of projection" ;\n x:standard_name = "projection_x_coordinate" ;\n x:units = "Meter" ;\n double y(y) ;\n y:long_name = "y coordinate of projection" ;\n y:standard_name = "projection_y_coordinate" ;\n y:units = "Meter" ;\n float topo(y, x) ;\n topo:long_name = "topo" ;\n topo:esri_pe_string = "PROJCS[\"NAD_1983_UTM_Zone_19N\",GEOGCS[\"GCS_North_American_1983\",DATUM[\"D_North_American_1983\",SPHEROID[\"GRS_1980\",6378137.0,298.257222101]],PRIMEM[\"Greenwich\",0.0],UNIT[\"Degree\",0.0174532925199433]],PROJECTION[\"Transverse_Mercator\"],PARAMETER[\"False_Easting\",500000.0],PARAMETER[\"False_Northing\",0.0],PARAMETER[\"Central_Meridian\",-69.0],PARAMETER[\"Scale_Factor\",0.9996],PARAMETER[\"Latitude_Of_Origin\",0.0],UNIT[\"Meter\",1.0]]" ;\n topo:coordinates = "x y" ;\n topo:grid_mapping = "transverse_mercator" ;\n topo:units = "Meter" ;\n topo:missing_value = 0.f ;\n int transverse_mercator ;\n transverse_mercator:grid_mapping_name = "transverse_mercator" ;\n transverse_mercator:longitude_of_central_meridian = -69. ;\n transverse_mercator:latitude_of_projection_origin = 0. ;\n transverse_mercator:scale_factor_at_central_meridian = 0.9996 ;\n transverse_mercator:false_easting = 500000. ;\n transverse_mercator:false_northing = 0. 
;\n\n// global attributes:\n :Conventions = "CF-1.0" ;\n :Source_Software = "ESRI ArcGIS" ;\n}\n}}}\n\nIf we have a projection that isn't geographic, and isn't one of the accepted "grid_mapping" projections defined in CF, it writes just the esri_pe_string, as in this Miller projection:\n\n{{{\n$ ncdump -h\nnetcdf test36_miller_arc {\ndimensions:\n x = 144 ;\n y = 92 ;\nvariables:\n double x(x) ;\n x:long_name = "x coordinate of projection" ;\n x:standard_name = "projection_x_coordinate" ;\n x:units = "Meter" ;\n double y(y) ;\n y:long_name = "y coordinate of projection" ;\n y:standard_name = "projection_y_coordinate" ;\n y:units = "Meter" ;\n float topo(y, x) ;\n topo:long_name = "topo" ;\n topo:esri_pe_string = "PROJCS[\"Miller Cylindrical\",GEOGCS[\"GCS_North_American_1983\",DATUM[\"D_North_American_1983\",SPHEROID[\"GRS_1980\",6378137.0,298.257222101]],PRIMEM[\"Greenwich\",0.0],UNIT[\"Degree\",0.0174532925199433]],PROJECTION[\"Miller_Cylindrical\"],PARAMETER[\"False_Easting\",0.0],PARAMETER[\"False_Northing\",0.0],PARAMETER[\"Central_Meridian\",-70.0],UNIT[\"Meter\",1.0]]" ;\n topo:coordinates = "x y" ;\n topo:units = "Meter" ;\n topo:missing_value = 0.f ;\n\n// global attributes:\n :Conventions = "CF-1.0" ;\n :Source_Software = "ESRI ArcGIS" ;\n}\n}}}\n\n\nWe should find out if other tools (such as the latest version of GDAL/FWTools) can make use of this string to preserve georeferencing info. Okay, I've checked, and apparently GDAL is doing something slightly different. This same file converted to NetCDF with gdal_translate\n\n{{{\n gdal_translate test36_miller.tif -of netCDF\n}}}\n\nproduces this:\n\n{{{\nnetcdf test36_miller_gdal {\ndimensions:\n x = 144 ;\n y = 92 ;\nvariables:\n char miller_cylindrical ;\n miller_cylindrical:Northernmost_Northing = 4911333.35025183 ;\n miller_cylindrical:Southernmost_Northing = 4804259.39790056 ;\n miller_cylindrical:Easternmost_Easting =
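One quick way to see what georeferencing actually survives in the esri_pe_string is to pull the PARAMETER pairs back out of it; a minimal regex-based sketch (not a real WKT parser — in practice something like GDAL's OSR is the robust way to interpret these strings, and the trimmed-down WKT below is just for illustration):

```python
import re

def wkt_parameters(wkt):
    # Extract PARAMETER["name",value] pairs from an ESRI WKT string
    # into a dict of floats.
    pairs = re.findall(r'PARAMETER\["([^"]+)",([-0-9.]+)\]', wkt)
    return {name: float(val) for name, val in pairs}

# Trimmed-down version of the Miller esri_pe_string above:
wkt = ('PROJCS["Miller Cylindrical",PROJECTION["Miller_Cylindrical"],'
       'PARAMETER["False_Easting",0.0],PARAMETER["False_Northing",0.0],'
       'PARAMETER["Central_Meridian",-70.0],UNIT["Meter",1.0]]')
```

On the string above this recovers the central meridian of -70 and the zero false easting/northing.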
/***\n| Name:|ExtentTagButtonPlugin|\n| Description:|Adds a New tiddler button in the tag drop down|\n| Version:|3.0 ($Rev: 1845 $)|\n| Date:|$Date: 2007-03-16 15:19:22 +1000 (Fri, 16 Mar 2007) $|\n| Source:||\n| Author:|Simon Baird <>|\n| License||\n***/\n//{{{\n\n// can't hijack a click handler. must redefine this entirely.\n// would be good to refactor in the core...\n// this version copied from 2.1.3 core\n\n// Event handler for clicking on a tiddler tag\nfunction onClickTag(e)\n{\n if (!e) var e = window.event;\n var theTarget = resolveTarget(e);\n var popup = Popup.create(this);\n var tag = this.getAttribute("tag");\n var title = this.getAttribute("tiddler");\n if(popup && tag)\n {\n var tagged = store.getTaggedTiddlers(tag);\n var titles = [];\n var li,r;\n for(r=0;r<tagged.length;r++)\n if(tagged[r].title != title)\n titles.push(tagged[r].title);\n var lingo = config.views.wikified.tag;\n\n wikify("<<newTiddler label:'New tiddler' tag:"+tag+">>",createTiddlyElement(popup,"li")); // <---- the only modification\n\n if(titles.length > 0)\n {\n var openAll = createTiddlyButton(createTiddlyElement(popup,"li"),lingo.openAllText.format([tag]),lingo.openAllTooltip,onClickTagOpenAll);\n openAll.setAttribute("tag",tag);\n createTiddlyElement(createTiddlyElement(popup,"li",null,"listBreak"),"div");\n for(r=0; r<titles.length; r++)\n {\n createTiddlyLink(createTiddlyElement(popup,"li"),titles[r],true);\n }\n }\n else\n createTiddlyText(createTiddlyElement(popup,"li",null,"disabled"),lingo.popupNone.format([tag]));\n createTiddlyElement(createTiddlyElement(popup,"li",null,"listBreak"),"div");\n var h = createTiddlyLink(createTiddlyElement(popup,"li"),tag,false);\n createTiddlyText(h,lingo.openTag.format([tag]));\n }\n,false);\n e.cancelBubble = true;\n if (e.stopPropagation) e.stopPropagation();\n return(false);\n}\n\n//}}}\n\n
For the NCOM IASNFS data, I wanted every 4th time step, since the data were saved at 6 hour intervals but I wanted only the values at 00:00 hours. So I used the command:\n{{{\nncks -O -F -d Time,"2010-05-18","2010-05-28",4 -v Surface_Elevation\n}}}\nThis takes a long time on my windows cygwin ncks: 12 minutes to get a 12MB file! Something must be working really hard, I guess.\n\nFor the HYCOM 1/25 degree NCODA, the output is daily, so I want every time step. So I did:\n{{{\nncks -O -F -d MT,"2010-05-18","2010-05-28" -v ssh\n}}}\nWhich takes about 10 seconds. Go figure.
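With `-F` (1-based Fortran indexing), the trailing `,4` keeps every 4th record, which is what picks out the daily 00:00 values from 6-hourly output. The index arithmetic is just a strided range; a tiny sketch (function name is mine):

```python
def strided_indices(start, stop, stride=1):
    # 1-based, inclusive index selection, as in "ncks -F -d dim,start,stop,stride".
    return list(range(start, stop + 1, stride))

# 6-hourly output: records 1..13 cover three days; every 4th record,
# starting from a 00:00 record, picks out the daily 00:00 values.
daily = strided_indices(1, 13, 4)
```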
FVCOM: All work is GOM3 model for MOP (500 m - 1km in coastal area, highest res in Nantucket Sound). No wetting and drying. Estuaries on south side of Cape Cod look very bad (way too big). Looks like poor coastline was used.\n\nOriginally FVCOM-GOM1-GOM2 open boundary had tides and climatology, but no transport. (clamped low-frequency elevation to zero). Thus the surface mean flow was not too bad (because largely wind driven), but deeper mean flow and gulf stream region were very bad.\n\nOriginal GOM1-GOM2 had 300m cutoff.\nExisting NECOFS forecast from GOM3 use 1500 m cutoff.\n\nSo to assess importance of these, check drifter comparison:\n684 Drifting buoy dataset (Globec and NEFSC 1995-2008)\nTried resetting each 6, 12, 24, 72 hours to see drifter forecast error over these time scales\n\nNew plan: use global FVCOM ocean model. Main problem is forcing of the open boundary condition (temperature and salinity structure). Looked at Global HYCOM and NCOM, and density structure was insufficient to drive Scotian Shelf boundary. Global model is 5-50 km, runs faster than regional model (2.5 days to run 1 year). Data assimilation every day. [we've seen this before for John Warner running his own global WW3, Harvard running the North Atlantic, ADCIRC running the Western North Atlantic, etc... Are global models for driving regional models useful? ]\n\nWave model is currently not being run for hindcast period, could do, but wave model takes: \n1 month = 3 days of run time\n\nCoupled current/wave takes 8 times longer\nwave takes 4 times longer than FVCOM\n\nSo maybe run one year of wave data, so we can compare to other wave models?\nWhat year should we pick? 
A year with big storms, or a year with lots of data?\nPerhaps when we had our Scituate and Boston Buoy tripods?\n\nIf model assessment and model/data comparisons could be done with a Matlab toolbox accessing OPeNDAP data, it would facilitate model analysis and utilization by other groups.\n\nGlobal model is running: 1978-present (only 5 years so far...)\nWill run FVCOM-GOM3 for 1978-2009. \nContacts: Global (Lai and Chen), FVCOM-GOM3 (Zhao, Sun & Chen)\nQC/Accuracy: Chen, Sun & Beardsley\nNOP Database on TDS: Xu\nRequests from MOP, MOP Consultants:\n\nCharge to Advisory Group:\n\nDan: Important to CZM:\nHabitat mapping of seafloor. Sediment, currents, temperatures + biotic communities\n\nseamex classification FGDC standard habitat model: seabed, geoforms (deep valleys, sloping, trench)\n\nWRF: Triple nested: need Domain 1 only for hindcasting (western Atlantic), Domain 3 (Mass Bay/Western Gulf of Maine) is 9km\n\nUMASSD has new server with 240TB (production grade, RAID system) with TDS.\n\nWant to compare wave data for last two months to COAWST and to wave data.\n\n2009 GOM3 without boundary condition is online.\n\nJim will look at bottom temperature and compare with lobster trap data.\n\nFortran program for regridding to regular grid. \nTurn into a web service so that we can get a grid in the resolution and domain we want.
\n\nGOM3, 48149 nodes, 40 levels: 523GB/year (without overlapping time)\nGOM2, 32649 nodes, 31 levels: 264GB/year\nMBAY: 98432 nodes, 11 layers: 250GB/year\n\nGOM3 sample file at:\n\n\nThis file is 4.3GB, with 72 hourly time steps, 48149 nodes, 40 vertical levels\n{{{\nsource: "FVCOM_3.0"\nhistory: "Sun Aug 2 11:08:33 2009: ncrcat -v x,y,lat,lon,xc,yc,lonc,latc,siglay,siglev,nv,nbe,h,temp,salinity,u,v,ww,zeta,Times -d time,3120,3192 -o\n}}}\nand this history shows that the model run since March 2009 is sitting in a single file called that must be (4.3GB/72steps)*3192steps=190GB (for these 4.4 months).\nSo \n\nGOM2 is 2.17GB for 72 hourly data, 32649 nodes, 31 levels.\n{{{\nsource: "FVCOM_2.6"\nhistory: "Thu Feb 18 08:30:26 2010: ncrcat -O -v x,y,lat,lon,xc,yc,lonc,latc,siglay,siglev,nv,nbe,h,temp,salinity,u,v,ww,zeta,Times -d time,744,816 /data01/necofs/FVCOM/RESULTS/ -o /data01/necofs/NECOFS_NC/\n}}}\nand this history shows that the data for the past 30 days or so is in a file\n\nMBAY, 2.07GB file for 72 hourly data, 98432 nodes, 11 layers.\n{{{\nsource: "FVCOM_3.0"\nhistory: "Thu Feb 18 19:42:25 2010: ncrcat -O -v x,y,lat,lon,xc,yc,lonc,latc,siglay,siglev,nv,nbe,h,temp,salinity,u,v,ww,zeta,Times -d time,240,312 /data01/necofs/FVCOM/output_mbn_layer11\n}}}
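The archive-size estimate above is simple proportionality (uniform bytes per time step); a quick sketch reproducing the ~190GB number:

```python
def run_size_gb(sample_gb, sample_steps, total_steps):
    # Scale one output file's size up to a whole run,
    # assuming uniform bytes per time step.
    return sample_gb / sample_steps * total_steps

# The 4.3GB, 72-step GOM3 sample scaled to 3192 steps (~4.4 months):
gom3 = run_size_gb(4.3, 72, 3192)
```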
Started 1978, run for 10 years. Don't have SSH and SST before 1981. Looks better after assimilation. Fresh water on Scotian slope comes from Gulf of St Lawrence, not Arctic. Expect 30 year run complete in 2-3 months. \n\nGOM3 run did not do assimilation right. Results not good. For multi-cpu run, locking up, found problem. \n\nGlobal model\n24 nodes x 8 cpu = 3 days wall clock for one year simulation\n\nGOM3\n24 nodes x 8 cpu = 1 day wall clock for one month simulation\n\nNECOFS\n24 nodes x 4 cpu\n\nUpgraded system 3 times, keep adding 20-24 nodes, but each set of nodes has different speeds, so really a bunch of mini 20-24 node systems.\n\nChen will ask about the status of the wave run. \n\nRich will ask Aron Roland about whether WWM can be run for GOM and whether WebEx can be shared with Chen.\n\n
1 cup chickpea flour\n1/2 tsp sea salt\n1/2 tsp freshly ground black pepper\n1 ¼ cup lukewarm water\n3 Tbsp extra-virgin olive oil\nGhee or oil, for pan\n1. In a large bowl, sift chickpea flour, salt, and pepper. Whisk in warm water and olive oil. Let sit, covered, for as many hours as possible (making this before you leave the house in the morning is perfect for making socca for dinner), but at least 30 minutes.\n2. Place heavy (preferably cast-iron) skillet in oven and preheat to 450 F.\n3. Remove skillet from oven. Add a small amount of oil or ghee to the hot pan, and pour batter in a steady stream until it reaches the edges of the pan. Bake for 8 to 10 minutes or until the pancake is firm and the edges are set.\n
{{{\nroot@gam:/home/rsignell# cd /\nroot@gam:/# find . -xdev -name '*' -size +1G\n./usr/local/usgs/tomcat-ncwms/content/ncWMS/tilecache/\n}}}\n\n{{{\nHouseKeeping:\nsudo apt-get update # resync package index\nsudo apt-get upgrade # newest versions of all packages; update must be run first\nsudo apt-get autoremove # removes dependencies no longer needed\n# removes .deb\nsudo apt-get autoclean # only removes files that cannot be downloaded anymore (obsolete)\nSometimes this does more?\nsudo apt-get dist-upgrade # updates dependencies\n\n# empty trash\n# remove log files if no issues\ncd /var/log\nsudo rm -f messages.*\nsudo rm -v /var/log/*.gz\n\n# check for large files:\nsudo du -h --max-depth=1 / | grep '[0-9]G\>' # folders larger than 1GB\nsudo find / -name '*' -size +1G # files larger than 1GB\ndpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n\ngksudo nautilus /root/.local/share/Trash/files # Be sure to enable viewing of hidden files.\n}}}
{{{\nIn [105]: import sys\n\nIn [106]: sys.path\nOut[106]:\n['',\n '',\n 'c:\\python27',\n 'C:\\Python27\\',\n 'C:\\Python27\\DLLs',\n 'C:\\Python27\\lib',\n 'C:\\Python27\\lib\\plat-win',\n 'C:\\Python27\\lib\\lib-tk',\n 'C:\\Python27\\lib\\site-packages',\n 'C:\\Python27\\lib\\site-packages\\PIL',\n 'C:\\Python27\\lib\\site-packages\\win32',\n 'C:\\Python27\\lib\\site-packages\\win32\\lib',\n 'C:\\Python27\\lib\\site-packages\\Pythonwin',\n 'C:\\Python27\\lib\\site-packages\\IPython\\extensions']\n}}}
The MEST will work with JRE 1.6. Make sure you have it installed and make sure it is first in the system PATH environment variable; e.g., some versions of the 10g Oracle client have a habit of installing the JRE 1.4.2 into the Windows system PATH, which causes the MEST to fail on startup. To check which version the MEST will use:\n\n * Start a Command window from the Start menu\n * Type:\n\njava -version\n\n * If the version is 1.4.2 or earlier then you must install a 1.6 JRE or JDK from Sun (e.g. and set the system environment PATH variable as follows:\n o Right-click on My Computer and choose Properties\n o Select the "Advanced" tab and press the "Environment Variables" button\n o Choose the Path variable and press "Edit"\n o Add the path to your JRE 1.6 Java install to the front of the PATH variable so it looks something like the following:\n\nC:\Program Files\Java\JDK1.6.10\bin;C:\...\n\n o Save and close, then verify that the correct version of the JRE is being found using the steps described in the first two dot points above.\n
After upgrading Salamander from 2.5 to 2.51, I found that my "User Menu" options (activated via F9) stopped working. I found I had to edit the Visual Basic script c:\rps\src\salhotmenu.vbs and replace \n{{{\nbFound = sh.AppActivate("Salamander 2.5")\n}}}\nwith\n{{{\nbFound = sh.AppActivate("Salamander 2.51")\n}}}\n
Worked on Baum's TGLO files a bit. Tricky.\nThis is the aggregation that was having problems:\n{{{\n\n}}}\nSteve provided me access to his system, which was handy, because I discovered that one problem was some missing files in the aggregation:\n{{{\n[root@csanady HIS]# pwd\n/data1/TGLO/HIS\n[root@csanady HIS]# ls -s TGLO* | sort | head\n 0\n 0\n 0\n 0\n 4\n71548\n71548\n71548\n71548\n...\n}}}\nso I deleted them. Then things worked better. But not great, because this is an old version of ROMS (2.1), before the CF-compliant part got added. So we had to get time, lon and lat recognized as coordinate variables, and here's what we came up with:\n{{{\n<catalog name="TGLO Catalog" xmlns=""\n xmlns:xlink="">\n <service name="allServices" base="" serviceType="compound">\n <service name="dapService" serviceType="OpenDAP" base="/thredds/dodsC/"/>\n </service>\n <dataset name="TGLO ROMS History Files" ID="models/tglo/roms/his" serviceName="allServices"\n urlPath="models/tglo/roms/his">\n <netcdf xmlns="">\n <aggregation\n dimName="time" type="joinExisting" recheckEvery="15min">\n <scan location="/data1/TGLO/HIS"\n regExp=".*TGLO-his-[0-9]{2}-[0-9]{2}-[0-9]{2}-[0-9]{2}-[0-9]{2}\$" olderThan="5 min"/>\n <variable name="zeta">\n <attribute name="coordinates" value="lat_rho lon_rho"/>\n </variable>\n <variable name="s_rho" orgName="sc_r">\n <attribute name="positive" value="up"/>\n <attribute name="units" value="1"/>\n <attribute name="standard_name" value="ocean_s_coordinate"/>\n <attribute name="formula_terms"\n value="s: s_rho eta: zeta depth: h a: theta_s b: theta_b depth_c: Tcline"/>\n </variable>\n <variable name="u">\n <attribute name="coordinates" value="s_rho lat_u lon_u time"/>\n </variable>\n <variable name="ubar">\n <attribute name="coordinates" value="lat_u lon_u time"/>\n </variable>\n <variable name="bustr">\n <attribute name="coordinates" value="lat_u lon_u time"/>\n </variable>\n <variable name="v">\n <attribute name="coordinates" value="s_rho lat_v lon_v 
time"/>\n </variable>\n <variable name="vbar">\n <attribute name="coordinates" value="lat_v lon_v time"/>\n </variable>\n <variable name="bvstr">\n <attribute name="coordinates" value="lat_v lon_v time"/>\n </variable>\n <variable name="time" orgName="ocean_time"/>\n </aggregation>\n <attribute name="Conventions" type="String" value="CF-1.0"/>\n </netcdf>\n </dataset>\n</catalog>\n}}}\nBut we had one additional problem: "lon_u, lat_u, lon_v, lat_v" were not in the history files. By looking at the global attributes, I found the ROMS grid file at \n{{{\n/home/baum/TGLO/SCRIPTS/\n}}}\nbut then the problem was how to add these variables in. Because the aggregation should look like gridded data, I wanted to add these variables inside the aggregation itself, so I physically modified the 1st file in the aggregation (/data1/TGLO/HIS/, adding in the info from the grid file. I did this by making a tiny union aggregation in NcML:\n{{{\n<netcdf xmlns="">\n <aggregation type="union" >\n <netcdf location="c:/rps/cf/tamu/tglo/"/>\n <netcdf location="c:/rps/cf/tamu/tglo/"/>\n </aggregation>\n <attribute name="Conventions" type="String" value="CF-1.0"/>\n</netcdf>\n}}}\nbringing this up in the ToolsUI-GUI and clicking on the "ring" icon to write the NcML virtual aggregation to a physical NetCDF file. I then moved this file back to /data1/TGLO/HIS. \n\nThe one remaining step that screwed me up was that it turns out that the default behavior in TDS 4.1 is to *randomly* pick a file in the aggregation to use as a prototype. I was going crazy because I didn't know this, and had a test aggregation with two datasets where I had modified the 1st one. I kept reloading and seeing the variables in the aggregation continually switching back and forth! Luckily, there is a way to set the default behavior. 
\n\nSo I added these lines to \n/usr/local/apache-tomcat-6.0.18/content/thredds/threddsConfig.xml\n{{{\n <Aggregation>\n <typicalDataset>first</typicalDataset>\n </Aggregation>\n}}}\nThen everything worked!!!!!\n\n
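As an aside, the regExp in the <scan> element is easy to sanity-check outside the TDS. A quick sketch in Python (the example filenames are hypothetical, just shaped to match the pattern):

```python
import re

# The <scan> element's regExp, written as a Python regex (the $ anchor
# keeps stray files like *.bak or *.tmp out of the aggregation).
pat = re.compile(r".*TGLO-his-[0-9]{2}-[0-9]{2}-[0-9]{2}-[0-9]{2}-[0-9]{2}$")

def in_aggregation(fname):
    """True if the TDS scan would pick this file up."""
    return pat.match(fname) is not None
```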
I installed Matt Wilkie's (and flip_raster.bat) in my FWTOOLS1.3.6 directory on my PC, so it's not necessary to bring GMT stuff into Mirone and save to GeoTIFF (but Mirone adds the EPSG:4326 as well, so that's nice).\n\nUsage is simple:\n{{{\n flip_raster.bat bathy.grd bathy.tif\n}}}
Brian Eaton proposed "grid_description" container for all the specific attributes needed to\ndescribe connections and boundary nodes for a particular type of mesh.\n\n{{{\nnetcdf umesh_cf {\ndimensions:\n node = 9700 ;\n nele = 17925 ;\n nbnd = 1476 ;\n nface = 3 ;\n nbi = 4 ;\n sigma = 1 ;\n time = UNLIMITED ; // (0 currently)\nvariables:\n float time(time) ;\n time:long_name = "Time" ;\n time:units = "days since 2003-01-01 0:00:00 00:00" ;\n time:base_date = 2003, 1, 1, 0 ;\n time:standard_name = "time" ;\n float lon(node) ;\n lon:long_name = "Longitude" ;\n lon:units = "degrees_east" ;\n lon:standard_name = "longitude" ;\n float lat(node) ;\n lat:long_name = "Latitude" ;\n lat:units = "degrees_north" ;\n lat:standard_name = "latitude" ;\n float depth(node) ;\n depth:long_name = "Bathymetry" ;\n depth:units = "meters" ;\n depth:positive = "down" ;\n depth:standard_name = "depth" ;\n depth:grid = "grid_description";\n char grid_description\n mesh:grid_name = "triangular_mesh";\n mesh:Horizontal_Triangular_Element_Incidence_List = "ele";\n mesh:Boundary_Segment_Node_List = "bnd";\n mesh:index_start = 1;\n int ele(nele, nface) ;\n int bnd(nbnd, nbi) ;\n}}}\n\n\nRich thinks this idea would work well for staggered structured grids as well:\n\n{{{\nnetcdf c_grid_example { \n.... \n u:grid_staggering = "roms_stagger"\n v:grid_staggering = "roms_stagger"\n \n\n char roms_stagger\n roms_stagger:grid_name = "Arakawa_C_grid"\n roms_stagger:u_relative_to_pressure = "left"\n roms_stagger:v_relative_to_pressure = "below"\n}\n \n}}}\n where the last two attributes would signify that u(i,j) is "left" of p(i,j) and v(i,j) is "below" p(i,j). \n\nThe idea is that there are only a few common staggering arrangements, which would be described in the CF document, much like the vertical coordinate transformations. 
So "Arakawa_C_grid" would have specific rules that would tell applications how to do things like find the vertical coordinate at U(i,j) points by averaging the Z(i,j) and Z(i-1,j) points.\n\nBTW, The "C" grid is by far the most popular, at least according to Google: We searched A-E grids, and here's the result\n\n\n|Arakawa Type | Number of pages | Percent of total |\n| C | 50,500| 65%|\n| B | 13,500| 17%|\n| E | 7,490| 10%|\n| A | 5,650| 7%|\n| D | 501| 1%|\n\n\n\nSteve Hankin and Karl Taylor say "be careful adding new stuff" and Steve says trying to handle all the permutations of staggered grids gets too complicated. (But I don't agree)\n\nJohn Caron likes the idea of a container variable, but wants one container for the entire "coordinate system":\n\n{{{\nFrom my POV, both are properties of a "coordinate system" object, \nso i prefer a design that attaches the grid mapping and description (and \nwhatever else will eventually be needed) to a coordinate system \n"container variable"; the dependent variables then only have to point to \nthis, and all the information is contained in one place. I think i can \nalso incorporate "dimensionless vertical coordinates" in the same \nframework: rather than having a seperate mechanism for specifying \n"grid_mappings" and "dimensionless vertical coordinates", both are kinds \nof "coordinate system transformations". \n}}}\n\nHowever, taking a look at:\n\nit seems that for now, at least, the "vertical" and "projection" transforms are handled in separate "containers".\n\nJonathan Gregory endorses the container idea, but suggests using "grid_topology" to describe the connections of a grid (e.g. unstructured triangular grid), and wants a different term for describing the relationship between data variables on a grid (e.g. staggered C grid). 
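The kind of rule the "Arakawa_C_grid" name would imply for applications can be sketched in a few lines. This is a hedged illustration — the function name and index convention are mine, following the averaging rule stated above; boundaries and masking are ignored:

```python
def z_at_u(z, i, j):
    """Vertical coordinate at the u-point (i, j) of an Arakawa C grid,
    taken as the mean of the pressure-point values Z(i, j) and Z(i-1, j),
    per the u(i,j)-is-"left"-of-p(i,j) convention."""
    return 0.5 * (z[i][j] + z[i - 1][j])
```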
\n\nBalaji's GridSpec suggested a "staggering" attribute \n\n{{{\ndimensions:\nnx = 46;\nny = 45;\nvariables:\nint nx_u(nx);\nint ny_u(ny);\nfloat u(ny,nx);\nu:standard_name = "grid_eastward_velocity";\nu:staggering = "c_grid_symmetric";\nu:coordinate_indices = "nx_u ny_u";\nGLOBAL ATTRIBUTES:\ngridspec = "/foo/";\nnx_u = 1,3,5,...\nny_u = 2,4,6,...\n}}}\n\nBut instead of a simple attribute, should we point to a grid_staggering container?
Different types of call forwarding for the iPhone and all other AT&T phones:\n\nTo forward all calls:\n\n On your phone's calling screen, dial: *21*xxx-xxx-xxxx# and press send\n Your phone will now provide feedback that all calls are forwarded\n\nTo ring for XX seconds, then forward:\n\n On your phone's calling screen, dial: *004*xxx-xxx-xxxx*11*time# ("time" is your desired ring duration in seconds; must be 5, 10, 15, 20, 25, or 30) and press send\n Your phone will now provide feedback that calls are forwarded\n\nTo forward unanswered calls:\n\n Dial EXACTLY: *61*xxx-xxx-xxxx# and press send\n Your phone will now provide feedback that unanswered calls are forwarded\n\nTo forward calls when you are busy or decline a call:\n\n On your phone's calling screen, dial: *67*xxx-xxx-xxxx# and press send\n Your phone will now provide feedback that busy and declined calls are forwarded\n\nTo forward calls when the phone is off or in airplane mode/no service:\n\n On your phone's calling screen, dial: *62*xxx-xxx-xxxx# and press send\n Your phone will now provide feedback that calls are forwarded when your phone is unavailable\n\nTo END all call forwarding:\n\n On your phone's calling screen, dial: #002# and press send\n Your phone will now provide feedback that call forwarding has been deactivated\n\n
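These are standard GSM supplementary-service strings, so building them programmatically is trivial. A sketch (the function names are mine; only the *21*/*004*/*61*/#002# codes come from the list above):

```python
# Builders for the forwarding dial strings listed above.
VALID_RING_TIMES = (5, 10, 15, 20, 25, 30)

def forward_all(number):
    """Forward all calls to number."""
    return "*21*%s#" % number

def forward_after_ring(number, seconds):
    """Ring for `seconds` (5-30 s in 5 s steps), then forward."""
    if seconds not in VALID_RING_TIMES:
        raise ValueError("ring time must be one of %s" % (VALID_RING_TIMES,))
    return "*004*%s*11*%d#" % (number, seconds)

def forward_unanswered(number):
    """Forward only unanswered calls."""
    return "*61*%s#" % number

CANCEL_ALL_FORWARDING = "#002#"
```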
Here is an ERDDAP RESTful query to get the surface layer velocity for 3 days over the entire domain, but subsampled by 2 in lon/lat dimensions for speed:\n\n[(2011-04-07):1:(2011-04-09)][(0.0):1:(0)][(34.0):1:(
GDAL would like to write a CDM-compliant NetCDF file, but it doesn't currently get it quite right. It doesn't create x,y coordinate variables, and it doesn't get the attribute specification quite right. All these are easy to fix in NcML, but also should be easy to fix in the GDAL NetCDF writer.\n\nHere's an example of converting an Arc GRID file to NetCDF:\n{{{\ngdal_translate -of netcdf de.grd\n}}}\nand we also get an extra file\n\nThe GDAL-produced NetCDF file looks like this in NcML form:\n\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n <netcdf xmlns=""\n location="">\n <dimension name="x" length="7000" />\n <dimension name="y" length="6971" />\n <attribute name="Conventions" value="CF-1.0" />\n <variable name="albers_conical_equal_area" shape="" type="char">\n <attribute name="Northernmost_Northing" type="double" value="1970917.9911730457" />\n <attribute name="Southernmost_Northing" type="double" value="1901207.9911730457" />\n <attribute name="Easternmost_Easting" type="double" value="1800692.5938176773" />\n <attribute name="Westernmost_Easting" type="double" value="1730692.5938176773" />\n <attribute name="spatial_ref" value="PROJCS[&quot;unnamed&quot;,GEOGCS[&quot;NAD83&quot;,DATUM[&quot;North_American_Datum_1983&quot;,SPHEROID[&quot;GRS 1980&quot;,6378137,298.257222101,AUTHORITY[&quot;EPSG&quot;,&quot;7019&quot;]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[&quot;EPSG&quot;,&quot;6269&quot;]],PRIMEM[&quot;Greenwich&quot;,0,AUTHORITY[&quot;EPSG&quot;,&quot;8901&quot;]],UNIT[&quot;degree&quot;,0.0174532925199433,AUTHORITY[&quot;EPSG&quot;,&quot;9108&quot;]],AUTHORITY[&quot;EPSG&quot;,&quot;4269&quot;]],PROJECTION[&quot;Albers_Conic_Equal_Area&quot;],PARAMETER[&quot;standard_parallel_1&quot;,29.5],PARAMETER[&quot;standard_parallel_2&quot;,45.5],PARAMETER[&quot;latitude_of_center&quot;,23],PARAMETER[&quot;longitude_of_center&quot;,-96],PARAMETER[&quot;false_easting&quot;,0],PARAMETER[&quot;false_northing&quot;,0],UNIT[&quot;METERS&quot;,1]]" />\n <attribute name="GeoTransform" 
value="1.73069e
The issue with "point" and "area" registrations between GMT and GeoTIFF that I reported on here:\n\nno longer exists in FWTOOLS 1.3.4.\n\nBoth point and area registered grids give uniform 5 m resolution now for:\n{{{\ncd /home/rsignell/p/seth\ngdal_translate -a_srs EPSG:32618 point.grd point.tif\ngdal_translate -a_srs EPSG:32618 area.grd area.tif\n}}}\n
Bring up MapSource on PC and click on "receive from device" and select tracks. In MapSource, save as "gpx" format, to the My Documents\sMy Garmin\stracks directory. Fire up Google Earth Plus and change the file type to GIS files and load the track.
{{{\n$ cd c:/programs/tomcat6/webapps/erddap/WEB-INF\n$ ./GenerateDatasetsXml.bat EDDGridFromDap http://localhost:8080/thredds/dodsC/hydro/national/4km > foo.xml\n}}}
GeoTIFF images can be loaded onto the HYPACK nav system on the Rafael and be used as background images. This can be quite handy for cruise planning.\n\nThe HYPACK nav system can, however, only read 8-bit UTM GeoTIFFs.\n\nSo if you have a 24 bit GeoTIFF in Geographic Coords, you have to:\n\n1. Convert from Geographic to UTM:\n{{{\ngdalwarp -t_srs '+proj=utm +zone=19 +datum=NAD83' 24bit_geo.tif 24bit_utm.tif\n}}}\n2. Convert from 24 bit to 8 bit: \n{{{\nrgb2pct.bat 24bit_utm.tif 8bit_utm.tif\n}}}\n\n3. Make background transparent. Bring up the 8 bit image in OpenEV so you can see what the background value is. The value for my sample image was 255, so I then did:\n{{{\ngdal_translate -a_nodata 255 8bit_utm.tif 8bit_utm_transparent.tif\n}}}\n\n
How I used FWTOOLS (1.0.5) to transform a regular TIF (a bathymetry smooth sheet from GEODAS) into a georeferenced geotiff:\n\n1. Open up the tif (e.g. mytif.tif) in OpenEV. Under "Preferences", change "Lat/Lon Format" to "ddd.ddddd" (decimal degrees).\n\n2. Open an ASCII editor and create a new text file.\n\n3. Map known lon/lat points to pixel coordinates. OpenEV displays pixel locations in the lower left corner, so move the tip of the pointer to a known lon,lat location (like the intersection of two lon/lat graticules). Write the pixel location into the text file, then on the next line, enter the lon/lat coordinates. Put a zero on the end of both lines. For example, if your cursor is on the known lon,lat point -130.5, 41.5, and OpenEV is telling you that this point is located at (3000.24P, 1500.25L), then the two lines in the text file should look like:\n{{{\n3000.
The TinyURL of this page is <>\n\nI want to get rid of the pesky 8080 URL for thredds and ncWMS. In other words I want to use URLs like:\n{{{\n\n\n}}}\ninstead of\n{{{\n\n \n}}}\nThis is a requirement at some sites that can't allow access on port 8080, and it's desirable anyway since some internet providers block you from even *accessing* URLs with explicit port numbers in them (I've had this happen more than once by an internet provider in a hotel. The first time I freaked out because I thought our servers were down and I was going to give a live demo in two hours...).\n\nSo I'm going to use Apache's mod_proxy ProxyPass.\nDownloaded and installed apache (mirror url obtained from:\n{{{\n wget\n tar xvfz httpd-2.2.16.tar.gz\n cd httpd-2.2.16\n ./configure --enable-proxy --enable-proxy-ftp --enable-proxy-http\n make >& make.log &\n tail -f make.log\n sudo make install\n}}}\nCheck to see if mod_proxy stuff is there:\n{{{\n /usr/local/apache2/bin/httpd -l\n}}}\nAdd proxy configuration lines to end of the httpd.conf file:\n{{{\nsudo vi /usr/local/apache2/conf/httpd.conf\n}}}\nand insert at the end:\n{{{\n<IfModule mod_proxy.c>\nProxyRequests Off\nProxyPreserveHost On\n\n <Location /thredds>\n ProxyPass\n ProxyPassReverse\n </Location>\n\n <Proxy>\n AllowOverride None\n Order allow,deny\n Allow from all\n </Proxy>\n\n <Location /ncWMS>\n ProxyPass\n ProxyPassReverse\n </Location>\n\n <Proxy>\n AllowOverride None\n Order allow,deny\n Allow from all\n </Proxy>\n\n</IfModule>\n}}}\nThen start apache:\n{{{\nsudo /usr/local/apache2/bin/apachectl start\n}}}
The Rutgers THREDDS server is at\n\nIf you want to explore the spatial extents and time extents of the data sets, a convenient way is via the EDC (Environmental Data Connector), built by ASA for NOAA, obtainable from\n\nIn the EDC, just cut-and-paste \n\ninto the spot for "Catalog URL" and then browse through the datasets, eventually selecting the OpenDAP access link. Then you will see the lon/lat range and time extents listed.\n\nThe Meteorology directory contains met model output converted to comply with ROMS forcing file naming conventions.\n\nNavigate the directory structure down to the dataset you want, and then click on the "Access=>OPENDAP" link to open up the OPeNDAP Dataset Access Form. Then cut-n-paste the URL found in the "Data URL" box. For example, the OPeNDAP URL for the 3 hour NAM Uwind field is:\n\n{{{\n\n}}}\n\nArmed with this URL, you can then use the NCO tools "ncks" command to extract just the lon/lat range and time range you are interested in.\n\nOn pikmin, the NCO tools are in /usr/local/nco/bin, so make sure this is in your path. Then make a get_forcing script like this:\n\n{{{\n#!/bin/bash\n\n# get ROMS met forcing from Rutgers\n\nfor var in Pair Uwind Vwind Tair lwrad_down lwrad Qair swrad rain\ndo \n echo $var\n ncks -d lon,-71.5,-68.0 -d lat,41.5,42.5 -d time,"2007-06-15 00:00","2007-11-15 00:00" \s\n"$var" "$var"\ndone\n}}}\n\nMake it executable and run it:\n{{{\nchmod +x get_forcing\n./get_forcing\n}}}
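The loop in the script just stamps each variable name into a fixed ncks subset command. A sketch of that command construction in Python (the base_url below is a placeholder — the real OPeNDAP endpoint is elided in the note):

```python
# Build the per-variable ncks subsetting commands used in the script above.
def ncks_cmd(var, base_url, outdir="."):
    return ("ncks -d lon,-71.5,-68.0 -d lat,41.5,42.5 "
            '-d time,"2007-06-15 00:00","2007-11-15 00:00" '
            "%s/%s %s/%s.nc" % (base_url, var, outdir, var))

cmds = [ncks_cmd(v, "http://example.org/thredds/dodsC/met")
        for v in "Pair Uwind Vwind Tair lwrad_down lwrad Qair swrad rain".split()]
```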
Note: these are now outdated instructions. See instructions for gfortran and ROMS 3.0 at:\n\n\n\nBut for historical interest...\n\nFollowing Sachin Kumar Bhate's instructions:\n\n#Get cygwin from, and select {{{}}} when cygwin_setup asks you to pick a site. Make sure you check 'gcc core module' 3.4.4 under 'Devel' where you get package listings.\n#Go to \n##Get the Cygwin x86 tarball (stable version). \n##Unpack in the root directory. i.e. /cygdrive/c/cygwin. It automatically unpacks into usr/local/bin.\n#Netcdf installation and compilation with g95.\n##Get the netcdf library 3.6.1/3.6.2 from\n##untar.\n##change to the src directory where you have just unpacked the netcdf source.\n##run this command at the prompt: {{{ CC=gcc F90=g95 F90FLAGS='-O -Wno-globals' CPPFLAGS='-Df2cFortran' ./configure}}}\n##type 'make'\n##type 'make check'\n##type 'make install'. \n#Get ROMS.\n##I checked out v2.2.4 from\n##Go to the ROMS installation directory now.\n##Change 'FORT ?= g95' in makefile\n##Open Master/ Uncomment line #11, and comment line #13. Save it.\n##set $NETCDF_INCDIR and $NETCDF_LIBDIR environmental variables with the path where you have just built netcdf.\n##Copy the attached with this email to the Compilers directory. Then open the and change NETCDF_LIBDIR and NETCDF_INCDIR paths. (you don't need to if you have already defined these as environmental variables).\n##Edit ROMS/Include/cppdefs.h and #define UPWELLING. \n##Try to compile it.\n#Test the resulting executable: {{{ ./OceanS < ROMS/External/ }}}\n\nThe UPWELLING test case with g95 took 570 s on my notebook PC (2.2GHz T2600 ). With the Intel Fortran compiler (ifort) it takes 470 s on my Linux desktop PC (3.0GHz Xeon).\n
First I downloaded the SFTP plugin for Total Commander. There are several to choose from, but I picked the one that had the highest revision number and most recent revision date:\n{{{\\n}}}\nI unzipped this, and then followed the Total Commander Plugin instructions for WFX files (go to Configuration=>Options=>Plugins=>File System Plugins). \n* I made a new connection, and answered the questions, specifying port 22 for both Stellwagen and Capecodder, but ignoring the password since these systems use private/public key pair authentication. After you are done with the setup, you have to edit the connection, and change use_key_auth to 1 and specify the private key location. On my system, it looks like this:\n{{{\nuse_key_auth=1\nkeyfilename=c:\scygwin\shome\srsignell\s.ssh\sricsigltxp.ppk\n}}}\n* If you don't have a Putty-generated SSH private/public key pair, you need to create the key pair using Puttygen. So find a copy of Puttygen on the internet, and then generate a key pair, selecting: SSH2, DSA (not RSA, as recommended). As you can see from above, I chose to store my keys in the same directory as my Cygwin/OpenSSH keys: c:\scygwin\shome\srsignell\s.ssh\n* For this to work, the machines you want to connect to have to have the public keys installed, so send your putty-generated public key to [[Greg|]] ([[capecodder]], [[stellwagen]]) and to [[Jeff|]] for [[pikmin]].\n* For TortoiseSVN, just specify the location of the private key file in the options menu. It also uses Putty.\n \n
{{{\n/data/ftp/upload/Inundation/vims/selfe_tropical/runs/Ike/2D_varied_manning_windstress\n\n <ns0:variable name="elev" shape="time node" type="float">\n <ns0:attribute name="coverage_content_type" value="modelResult" />\n}}} is a 64-bit \nCentOS 5.0 Linux machine.\n\nI followed the installation directions on the LDM 6.8.1 page:\n<>\nwithout any problems. \n\nBut then I found the request syntax documentation totally baffling. I knew I wanted the NAM12 CONUS and GFS 0.5deg GLOBAL, and I didn't want to do any decoding, just drop the files in a directory and clean them out every three days. So I called Unidata and got Steve Emmerson and Jeff Weber on the line. First they tried to figure out what feed the data I wanted was on, using commands like:\n{{{\n notifyme -vl- -o 36000 -f CONDUIT -p NAM_212 -h\n}}}\nThis didn't work very well, so in the end Steve had the good idea to just look at what motherlode was doing, since I wanted our site to act like a mini-motherlode, just handling a few model products.\n\nSo in the end, after looking at motherlode's ldmd.conf and pqact.conf files, here's what we set in the configuration files:\n\nIn ~ldm/etc/ldmd.conf\n{{{\n\n# NCEP NAM 12 km CONUS:\nREQUEST CONDUIT "^data/nccf/com/nam/prod/nam........./nam.t..z.awip12.*grib2"\nREQUEST CONDUIT "^data/nccf/com/nam/prod/nam........./nam.t..z.awip12.*grib2"\n\n# NCEP GFS 0.5 degree GLOBAL:\nREQUEST CONDUIT "^data/nccf/com/gfs/prod/gfs\s............*pgrb2f"\nREQUEST CONDUIT "^data/nccf/com/gfs/prod/gfs\s............*pgrb2f"\n}}}\nwhich means, put the requested streams into the queue (called "data") without decoding or anything. 
Penn State ( is my primary feed, and ( is my secondary feed.\n\nIn ~ldm/etc/pqact.conf\n{{{\n# NCEP NAM 12 km CONUS:\nCONDUIT ^data/nccf/com/nam/prod/nam.(........)/nam.t(..)z.awip12.*grib2\n FILE -close archive/pub/native/grid/NCEP/NAM/CONUS_12km_conduit/NAM_CONUS_12km_\s1_\s200.grib2\n\n# NCEP GFS 0.5 degree GLOBAL:\nCONDUIT ^data/nccf/com/gfs/prod/gfs\s.(........)(..).*pgrb2f\n FILE -close archive/pub/native/grid/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_\s1_\s200.grib2\n}}}\nwhich means, move the files from the queue (called "data") to archive/pub/native...\n\nIn ~ldm/etc/scour.conf\n{{{\n# Directory Days-old Optional-filename-pattern\n\n~ldm/logs 1 *.stats\narchive 3\n}}}\nwhich means, clean out everything in the directory "archive" more than 3 days old.\n\nThe resulting files are picked up by blackburn's THREDDS Data Server, appearing on:\n<>\n\nAfter blackburn was shutdown and moved, the LDM was not running. I needed to do\n{{{\nldmadmin clean\nldmadmin start\n}}}
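To make the pqact pattern/action pair concrete: the two capture groups in the NAM pattern (the run date and the cycle hour) are spliced into the archive filename by the \1 and \2 references in the FILE action. A Python sketch of the same logic (the product id in the test is a hypothetical example shaped like real CONDUIT ids):

```python
import re

# The CONDUIT pattern for NAM from pqact.conf, with its two capture
# groups: (........) = YYYYMMDD run date, (..) = two-digit cycle hour.
pat = re.compile(r"^data/nccf/com/nam/prod/nam\.(........)/nam\.t(..)z\.awip12.*grib2")

def archive_path(product_id):
    """Where LDM would file a matching product (None if no match)."""
    m = pat.match(product_id)
    if m is None:
        return None
    date, cycle = m.groups()
    return ("archive/pub/native/grid/NCEP/NAM/CONUS_12km_conduit/"
            "NAM_CONUS_12km_%s_%s00.grib2" % (date, cycle))
```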
To get started with this blank TiddlyWiki, you'll need to modify the following tiddlers:\n* SiteTitle & SiteSubtitle: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)\n* MainMenu: The menu (usually on the left)\n* DefaultTiddlers: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened\nYou'll also need to enter your username for signing your edits: <<option txtUserName>>\n\nSee also MonkeyPirateTiddlyWiki.
This will return a subsetted 32-bit Geotiff\n{{{\nwget -O test.tif ",41.10,-70.0,41.70"\n}}}\n\nI believe that curl used to work, but now this returns a complaint about using "POST". So I tried "wget" instead, and it worked. I guess wget uses "GET"!
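The GET-vs-POST distinction is easy to make explicit in a script with Python's urllib. A hedged sketch — the host is a placeholder for the elided WCS URL, and the parameter names and bbox values are typical WCS GetCoverage assumptions, not copied from the note:

```python
import urllib.parse
import urllib.request

# Encode the query parameters into the URL itself, so the request goes
# out as a GET (like wget) rather than a POST.  Values are placeholders.
params = urllib.parse.urlencode({
    "service": "WCS", "request": "GetCoverage", "format": "geotiff",
    "bbox": "-70.8,41.10,-70.0,41.70",
})
req = urllib.request.Request("http://example.org/wcs?" + params)
# urllib sends GET unless a data= body is supplied (which forces POST)
```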
Christoph originally said:\n\nGetting NetCDF4-Python to work is tricky since ArcGIS10 ships with old netcdf3 and hdf DLLs.\n\nThe only clean solution is to build netcdf4-python against the python, numpy, netcdf, and hdf5 libraries that ship with ArcGIS10, and use the same compiler & C runtime that was used to build ArcGIS10-python.\n\nHere's what I would need to do to get something to work without messing with ArcGIS dlls:\n\n1) First, I would have to update netcdf-4.1.3-msvc9-source with DAP support. This version is not supported by Unidata and requires curl libraries. I don't know what ESRI's policy is on unsupported code.\n\n2) Netcdf.dll needs to be built against a static or custom named version of HDF5-1.8.7 to avoid conflicts with the existing ArcGIS10 HDF DLLs. Still, this might not work depending on whether different versions of HDF5 libraries can be loaded/used at runtime.\n\n3) Chances are that the netcdf4-python 0.9.7 source distribution is not compatible with numpy 1.3. This can probably be fixed by invoking Cython to generate new source files with numpy 1.3 installed.\n\n\n\nBut then he did it: \n\n" Please find the updated netCDF4-0.9.7-ArcGIS10.win32-py2.6.exe at\n<>.\n\nThis version of netcdf4-python was built against numpy 1.3 and uses a\nnetcdf.dll, which was linked to static libraries of hdf5, zlib, szlib,\nlibcurl and libxdr. It should work on ArcGIS10 without upgrading\nnumpy or moving DLLs. Just make sure there is no other netcdf.dll in\nthe Windows DLL search path."\n\nAnd indeed, it does work! My test script worked like a charm:\n\n{{{\nimport netCDF4\nimport numpy as np\nimport arcpy\nurl=''\nnc=netCDF4.Dataset(url);\nlon=nc.variables['lon'][:]\nlat=nc.variables['lat'][:]\nbi=(lon>=-71.2)&(lon<=-70.2)\nbj=(lat>=41)&(lat<=42)\nz=nc.variables['topo'][bj,bi]\ngrid1=arcpy.NumPyArrayToRaster(z)\n}}}
Voucher=> Edit=> Digital Signature=> Submit Completed Document
When I update to a new kernel from RHEL, here's what I have to do to get the graphics card going. \n\nGet the latest video driver for Linux from NVIDIA:\n\n\nClick on the latest driver and then click on "supported products list" to make sure that the graphics card is supported.\n\nRICSIGDTLX has this graphics card:\n128MB PCI Express™ x16 (DVI/VGA) Quadro FX1300\n\nELLMONDTLX has this graphics card:\n256MB Quadro FX2650\n\nDownload the latest driver to \n/home/rsignell/downloads\n\nBefore rebooting with the new kernel, edit /etc/inittab and switch the init level to 3 so that it will boot to a text console without starting graphics\n{{{\nid:3:initdefault:\n}}}\n\nIMPORTANT: make a soft link in the kernel source tree from arch/i386 to arch/intel because this is where the NVIDIA module build script looks for it.\n{{{\ncd /usr/src/kernels/
How well does GridFTP via Globusconnect work in a real modeling testbed data transfer case? We did a few tests, first moving thousands of files totaling 365GB from the testbed to a local server in Woods Hole, and just to test the ability to restart transfers automatically, we restarted the globusconnect service and the testbed server in the middle of the transfer. The transfer proceeded flawlessly, moving the data in about 14 hours, at a rate of 7.32MB/s. About 1/2 of this transfer occurred during working hours on the East Coast. To see if we got faster rates during non-work hours, and to compare against sftp, we did a second small test, moving 600MB from the testbed server to the Woods Hole machine at 7am. We got a data rate of 10.5MB/s for GridFTP, and 4.1MB/s for sftp, a factor of 2.5 speedup.\n\nDetails:\n\nTest
{{{\n#!/bin/csh\n\ngrdgradient gom15.grd -A315 -Ggom15_grad.grd -N -M -V\ngrdhisteq gom15_grad.grd -Ggom15_grad_eq.grd -V -N\ngrdmath gom15_grad_eq.grd 6.0 / = gom15_intens.grd\ngrdimage gom15.grd -Cgom4.cpt -P -K -V -JM22 -Igom15_intens.grd >\nconvert ppm:- | pnmtotiff -lzw > gom15a.tif\nconvert ppm:- | cjpeg > gom15a.jpg\n}}}\n\nmore gom4.cpt\n{{{\n\n -7000 0 0 255 -2000 0 0 255 \n -2000 0 50 255 -200 0 50 255 \n -200 0 110 255 -100 0 110 255 \n -100 0 170 255 -60 0 170 255 \n -60 0 215 255 -30 0 215 255 \n -30 40 255 255 0 40 255 255 \n 0 50 150 50 500 200 200 50\n 500 200 200 50 2000 200 0 200\n}}}
Working on
If I calculate the H-Index for R. Signell at Google Scholar\n\nI get:\n\nCitations for 'R. Signell' : 2761\nCited Publications: 135\nH-Index: 25\n\nTurns out the ISI Web of Knowledge has a better tool for the H-Index. You can select only certain subject areas. \n\nUse "search" on the "author" line\nSignell R*\nand then "create citation report" (little link on the right side of the page).\nH-Index using Web of Knowledge is 20.\n\nAccording to the h-index Wikipedia entry:\n\n\nHirsch, the physicist who came up with the idea of this factor, said that "for physicists, a value for h of about 10–12 might be a useful guideline for tenure decisions at major research universities. A value of about 18 could mean a full professorship, 15–20 could mean a fellowship in the American Physical Society, and 45 or higher could mean membership in the United States National Academy of Sciences."\n\nTo test this out, I tried some physical oceanography National Academy members:\n H-Index\nJ.C. McWilliams: 61\nCarl Wunsch: 55\nChris Garrett: 48\nRuss Davis: 42\nJ. Pedlosky: 35\n\nSeems to work pretty well!\n\n
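For reference, the statistic itself is simple to compute from a list of per-paper citation counts; a minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h
```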
/***\n| Name|HideWhenPlugin|\n| Description|Allows conditional inclusion/exclusion in templates|\n| Version|3.0 ($Rev: 1845 $)|\n| Date|$Date: 2007-03-16 15:19:22 +1000 (Fri, 16 Mar 2007) $|\n| Source||\n| Author|Simon Baird <>|\n| License||\nFor use in ViewTemplate and EditTemplate. Example usage:\n{{{<div macro="showWhenTagged Task">[[TaskToolbar]]</div>}}}\n{{{<div macro="showWhen tiddler.modifier == 'BartSimpson'"><img src="bart.gif"/></div>}}}\n***/\n//{{{\n\nwindow.removeElementWhen = function(test,place) {\n if (test) {\n removeChildren(place);\n place.parentNode.removeChild(place);\n }\n};\n\nmerge(config.macros,{\n\n hideWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( eval(paramString), place);\n }},\n\n showWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( !eval(paramString), place);\n }},\n\n hideWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( tiddler.tags.containsAll(params), place);\n }},\n\n showWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( !tiddler.tags.containsAll(params), place);\n }},\n\n hideWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( tiddler.tags.containsAny(params), place);\n }},\n\n showWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( !tiddler.tags.containsAny(params), place);\n }},\n\n hideWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( tiddler.tags.containsAll(params), place);\n }},\n\n showWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( !tiddler.tags.containsAll(params), place);\n }},\n\n hideWhenExists: { handler: 
function(place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0]), place);\n }},\n\n showWhenExists: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {\n removeElementWhen( !(store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0])), place);\n }}\n\n});\n\n//}}}\n\n
The Hilbert transform can be used to get the envelope of a modulated signal. 'hilbert' is part of the Matlab Signal Processing Toolbox. You can use hilb_envel.m in RPSstuff to calculate the envelope.\n
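If Matlab isn't handy, the same idea is a few lines of code: build the analytic signal by zeroing the negative frequencies, and take its magnitude. A pure-Python O(N^2) DFT sketch, just to show the construction (use scipy.signal.hilbert for real work):

```python
import cmath
import math

def envelope(x):
    """Amplitude envelope of a real signal: |analytic signal|,
    hilbert-style, via a brute-force DFT (fine for short records)."""
    n = len(x)
    # forward DFT
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    # analytic-signal spectrum: keep DC (and Nyquist), double the
    # positive frequencies, zero the negative ones
    for k in range(1, (n + 1) // 2):
        X[k] *= 2.0
    for k in range(n // 2 + 1, n):
        X[k] = 0.0
    # inverse DFT; the envelope is the magnitude of the analytic signal
    return [abs(sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                    for k in range(n)) / n) for t in range(n)]
```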
Short on disk space? Compress those netcdf3 files using netcdf4 with deflation and chunking instead of gzipping! With ROMS and other ocean model output, my experience has been that deflation at least cuts the file size in half, and if you are willing to sacrifice a bit of precision (say you don't need currents saved to be more accurate than 0.001 m/s, for example), you can save even more. If you have large masked regions (land in ROMS, clouds in remote sensing imagery), you can save even more. We reduced some AVHRR data by a factor of 20!\n\nHere are a number of ways you can do it:\n\n1) nc3tonc4 (Jeff Whitaker):\nThis command line tool comes with NetCDF4-Python.\nI like this tool because it gives you the ability to specify the number of significant digits, which can improve compression by quite a bit. FYI, this tool is part of the Enthought Python Distribution, which is a one click install on 32/64 bit Mac, PC and Linux. If you are academic, you can get the EPD for free (, and if not, it's $200 (well worth it, IMHO). \n\nFor example, to keep 3 digits (0.001m/s) accuracy on velocity, and 4 on temperature, just do:\n{{{\n nc3tonc4 --quantize='u=3,v=3,temp=4'\n}}}\n\n2) nccopy (Unidata):\nWith versions higher than netCDF-4.1.2, you can do deflation and chunking.\n\nFor example, to convert netCDF-3 data to netCDF-4\ndata compressed at deflation level 1 and using 10x20x30 chunks for\nvariables that use (time,lon,lat) dimensions:\n{{{\n nccopy -d1 -c time/10,lon/20,lat/30\n}}}\n\n3) ncks (Charlie Zender):\nAlso lets you specify chunk sizes along any dimension:\n{{{\nncks -4 -L 1 --cnk_dmn lat,50 --cnk_dmn lon,50 -O\n}}}
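The precision-vs-size trade works because rounding makes the low-order mantissa bits repetitive, which deflate then squeezes out. A sketch of the idea behind --quantize (nc3tonc4's actual scheme rounds in powers of two; plain decimal rounding is shown here for clarity):

```python
def quantize(values, decimal_digits):
    """Round values to a fixed number of decimal digits (e.g. 3 digits
    ~ 0.001 m/s for currents) so compressed files shrink further."""
    scale = 10.0 ** decimal_digits
    return [round(v * scale) / scale for v in values]
```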
{{{\ncat /proc/cpuinfo | grep processor | wc -l\n}}}
Hit the snack machine today, and the 2.07oz (59g) Snickers Bar was $1 and the 1.75 oz (49.6g) of Tom's peanuts was $0.75. So I was wondering how many oz of peanuts the snickers bar had? And how many peanuts? \n\nWe assume that all the calcium and protein in Snickers comes from skim milk and peanuts. \n\nFrom wikipedia (\n{{{\n100g of peanuts has 62mg calcium and 25g protein.\n}}}\nFrom webmd (\n{{{\n 1 cup of skim milk (244g) has 301mg calcium and 8.4g protein. \n}}}\n\n1 cup of skim milk = 244g\n\nJust as a check that the protein and calcium don't come from other ingredients, let's see if the protein and calcium in Milky Way have a ratio similar to that of milk (because Milky Way is like Snickers without peanuts). From this site:, 100g of Milky Way has 115mg calcium, 4g protein, a ratio of 115/4 = 28.7 calcium(mg)/protein(g), and milk has a ratio of 301/8.4 = 35.83 calcium(mg)/protein(g), so pretty close.\n\nSo onward to Snickers. From\n{{{\n7.53g protein/100g snickers = 4.4g protein/59 g snickers bar\n93mg calcium/100g snickers = 55mg calcium/59 g snickers bar\n}}}\nSo we have two equations with two unknowns:\n{{{\nprotein: x*(25g protein/100g peanut) + y*(8.4g protein/244g milk) = 4.4g protein\ncalcium: x*(62mg calcium/100g peanut) + y*(301mg calcium/244g milk) = 55mg calcium\n}}}\nwhere: \n{{{\nx=mass of peanuts(g)\ny=mass of skim milk(g)\n}}}\nI could solve by substitution, but I have Matlab, and if A is an N-by-N matrix and B is a column vector with N components, or a matrix with several such columns, then X = A\sB is the solution to the equation A*X = B.\n\nSo I have: \n{{{\n>> A=[25/100 8.4/244;62/100 301/244]\nA =\n 0.2500 0.0344\n 0.6200 1.2336\n\n>> B=[4.4; 55]\nB =\n 4.4000\n 55.0000\n\n>> X=A\sB\nX =\n 12.3126\n 38.3965\n}}}\n\nSo solving the two equations yielded:\n{{{\nX(1) = 12.3g peanuts (0.42 oz, or about 20% peanuts by weight)\nX(2) = 38.4g milk (equivalent of 1.25 oz skim milk)\n}}}\nTo convert 12.3g peanuts, we (thanks Michael!) 
took 10 Tom's peanuts and weighed them, which came out to 5.7g. So if Tom's peanuts are the same as Snickers peanuts (and a casual visual inspection supports that they are close), and my other assumptions are correct, there are \n{{{\n12.3g * (10 peanuts/5.7g) = 21.6 peanuts per Snickers Bar\n}}}\nAnd definitely way more peanuts (49.6g/12.3g = 4 times as many) in the bag of peanuts. But I still like Snickers better. ;-)\n\nHere's a picture of 21 peanuts:\n<html><img src="" width="400" /> </html>\n\n
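The two-equation solve above doesn't actually need Matlab; a minimal pure-Python cross-check using Cramer's rule on the same coefficients (a sketch, not from the original page):

```python
# Cross-check of the Matlab A\B solve using Cramer's rule.
# Rows: protein (g), calcium (mg); columns: per gram of peanut, per gram of milk.
a11, a12 = 25 / 100.0, 8.4 / 244.0    # protein per gram of peanut, milk
a21, a22 = 62 / 100.0, 301 / 244.0    # calcium per gram of peanut, milk
b1, b2 = 4.4, 55.0                    # protein (g) and calcium (mg) in a 59g bar

det = a11 * a22 - a12 * a21
x = (b1 * a22 - a12 * b2) / det   # grams of peanuts
y = (a11 * b2 - b1 * a21) / det   # grams of skim milk
print(round(x, 1), round(y, 1))   # about 12.3 g peanuts, 38.4 g milk
```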
1. Create animation in ncWMS/Godiva2\n2. Right click on the animated gif and select "copy image location". \n3. Drop the URL into TextPad, and replace\n{{{\nFORMAT=image/gif\n}}}\nwith\n{{{\nFORMAT=application/\n}}}\nSample URL:\n,2010-05-29T00:00:00.000Z,2010-05-30T00:00:00.000Z,2010-05-31T00:00:00.000Z,2010-06-01T00:00:00.000Z,2010-06-02T00:00:00.000Z,2010-06-03T00:00:00.000Z,2010-06-04T00:00:00.000Z&TRANSPARENT=true&STYLES=vector%2Frainbow&CRS=EPSG%3A4326&COLORSCALERANGE=0%2C2&NUMCOLORBANDS=254&LOGSCALE=false&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&EXCEPTIONS=XML&FORMAT=application/,23.230544403196,-81.293768882752,31.799870654942&WIDTH=512&HEIGHT=400\n
{{{\n#!/usr/bin/perl\n# Concatenate NDBC stdmet files into a single NetCDF file\n\n# Rich Signell\n\n# $buoy = 44005; # Gulf of Maine offshore\n# $start_year = 1978;\n# $stop_year = 2006;\n\n$buoy = 44013; # Mass Bay\n$start_year = 1984;\n$stop_year = 2006;\n\n# Use Curl to grab yearly NetCDF files\n$year = $start_year;\nwhile ($year <= $stop_year) {\n system("curl -o ${buoy}_${year}.nc${buoy}/${buoy}h${year}.nc");\n $year++;\n}\n$nyears = $stop_year - $start_year + 1;\n\n# Use NCO "ncrcat" to join yearly NetCDF files :\nsystem("ncrcat -O -n ${nyears},4,1 ${buoy}_${start_year}.nc ${buoy}.nc");\n}}}
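For reference, the `-n ${nyears},4,1` option tells ncrcat to generate the remaining input filenames itself from the first one: number of files, digits in the numeric part, and increment. The list of per-year files the script downloads and joins can be sketched in Python (buoy and year values taken from the script; the NDBC URL itself is omitted here):

```python
# Build the per-year filenames that the Perl script fetches and ncrcat joins,
# for buoy 44013 (Mass Bay) from 1984 through 2006.
buoy, start_year, stop_year = 44013, 1984, 2006
files = ["%d_%d.nc" % (buoy, year) for year in range(start_year, stop_year + 1)]
nyears = len(files)
print(nyears, files[0], files[-1])
```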
Bare earth digital elevation model (DEM) data from the NED, previously available through the Seamless Server Viewer, is accessed in The National Map Viewer with the 'Download Data' tool. Use the following procedures to download DEM data:\n\n*Zoom to your area of interest.\n*Click the Download Data tool near the top right corner of the viewer banner.\n*Use the current map extent, choose a reference area polygon from the Download options dropdown menu, or create your own custom polygon.\n*Select the data theme of Elevation and product format.\n*Select available NED products, such as 2 arc-second (Alaska only), 1 arc-second (conterminous U.S., Hawaii, Puerto Rico, portions of Alaska, western & southern Canada along U.S. border, and all of Mexico), 1/3 arc-second (conterminous U.S., Hawaii, and portions of Alaska), or 1/9 arc-second (in limited areas of U.S. only).\n*Add selected products to the Cart.\n*Checkout and enter your e-mail address twice to place your order.\nNote: you will not initially see the Elevation theme in The National Map Viewer's Overlays table of contents, although we are working on offering more visualization options in the near future. Also, be aware that elevation products are available either as 'staged data' in pre-packaged 1x1 degree cell extents in either ArcGrid or GridFloat formats, or through 'dynamic data extracts' in user defined reference area polygons with additional formats of GeoTIFF or BIL_16INT (besides ArcGrid or GridFloat).
!!!1. Create a directory\nThe entire geoport system uses RAID6, but only /usgs/vault0 is backed up (nightly, on tape). Most simulation output, since it can be regenerated, is on /usgs/data0, /usgs/data1 or /usgs/data2. \n\nSo create a new directory on one of these disks, making sure to set permissions so it is executable by tomcat (chmod 755). \n\nFor example:\n{{{\nssh\ncd /usgs/data1/rsignell\nmkdir run01\nchmod 755 run01\n}}}\n!!!2. Transfer your files\nCopy your netcdf files to your new directory. If you have a bunch of history files that you would like to access as an aggregation using a single URL, add an NcML file to this directory that looks like this:\n{{{\n<netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="." suffix=".nc"/> \n </aggregation>\n</netcdf>\n}}}\n\nNote that the scan above just aggregates all the files in the directory ending with ".nc".\n\nIf you need to be more specific about the files you want to aggregate, you can use a "regExp" expression rather than "suffix". Say you had files that looked like this:\n{{{\\\\\n}}}\nTo specify history files that start with "vs_his", contain exactly 4 digits, and end in ".nc", you can use:\n{{{\n <scan location="." regExp=".*vs_his_[0-9]{4}\.nc$"/>\n}}}\nAnd to specify only the first two history files:\n{{{\n <scan location="." regExp=".*vs_his_00(01|02)\.nc$"/>\n}}}\nYou can see lots of regExp options at:\n\n\nMake sure your files are readable by group. Do a chmod 644 if you are not sure. \nFor example:\n{{{\nchmod 644 *.nc\n}}}\n!!!3. Access your data\nYour files should now be accessible to you and others via THREDDS at:\n\nJust navigate down to the directory where you placed the files. You can click on the .ncml file to obtain the aggregation. Using this procedure you don't need to modify any THREDDS catalogs or reload the THREDDS server.\n
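The regExp attribute takes a Java regular expression matched against the file path. Python's `re` syntax is compatible for patterns of this form, so you can test a candidate pattern against some filenames before editing the NcML (the filenames below are hypothetical, since the actual names from this run aren't shown above):

```python
import re

# Hypothetical ROMS history filenames (assumed for illustration)
names = ["vs_his_0001.nc", "vs_his_0002.nc", "vs_his_0003.nc",
         "vs_avg_0001.nc", "vs_his_001.nc"]

# "vs_his" followed by exactly 4 digits, ending in ".nc"
four_digit = re.compile(r".*vs_his_[0-9]{4}\.nc$")
# only the first two history files
first_two = re.compile(r".*vs_his_00(01|02)\.nc$")

print([n for n in names if four_digit.match(n)])  # the three vs_his_000X files
print([n for n in names if first_two.match(n)])   # only 0001 and 0002
```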
Charlie's comments:\nWhat is different this year?\nDIF in spin-down, synchronizing IOOS regions this year\nDIF Projects (4): Hurricane Intensity (NCEP), Coastal Inundation (NWS), harmful algal blooms (NCOS), integrated ecosystem assessments (PFEG)\nDIF Providers (3): NDBC, COOPS, Coastwatch\nIOOS-CI project is well defined\nHassan BIO is different\nModeling test bed \nBrink of next phase of DMAC\n\nCarmel's comments:\nSkill assessment was difficult for DIF projects\nMost of the metrics were looking at the benefit of new data into the modeling system, rather than looking at the impact of standardizing the existing data flows\n\nWebalizer - open source processor for web logs\n\nSam's comments:\nICOOS Act of 2009\nSteve Ramp workshop in May?\n\n SOS type \nAOOS NDBC\nCARA NDBC\nGCOOS OosTethys, NDBC, DIF\nGLOS NDBC\nMACOORA testing OosTethys\nNANOOS OosTethys python, but modified to deliver DIF, Cothran's tools for glider data\nNERACOOS Latest OosTethys perl package supports SWE and DIF\nPACIOOS\n\ngeoext\njson values, flot to plot\nSECOORA\nSCCOOS NDBC\n\nJeff DLB:\nGML \nNDBC doing CSV, SWE Common\n\n\n\nNANOOS visualization system\n\n*** I should make a before and after slide for CF compliant ocean model output available.\n\nIWGOO ==> IOOC Interagency Ocean Observing Committee\n\nInteragency collaboration\nOOI-CI = IOOS collaboration\n\nKen Casey:\nArchiving NDBC Data at NODC\n1. This year, C-MAN, TAO, CTDs during maintenance of TAO buoy\n2. Next year, partner buoys, HF Radar, OceanSites, Ocean Current profiling at Oil and Gas Platforms (ADCP data)\n\nFGDC, ISO 19115\nCF-Compliant NetCDF\n\nMight take the NECOFS forecast -- should discuss with NODC!\n\nKyle Wilcox\nAdd legends to ncWMS?\n\nEric Bridger\n\n\nLisa:\ncordc.usdc\nembedded hfradar data\n\n\n\n
login to geoport\n{{{\nssh\n}}}\nlogin to cluster as root:\n{{{\nstarcluster sshmaster rps_cluster\n}}}\nchange to sgeadmin user:\n{{{\nsu - sgeadmin\n}}}\nkill existing server\n{{{\nps aux | grep notebook | grep sgeadmin | grep -v parent\n kill -9 <pid>\n}}}\nstart new server\n{{{\nnohup ipython notebook --no-browser --notebook-dir='notebooks' &\n}}}\n\nnotebook server on EC2 is:\n\n\n{{{\nrsignell@gam:~$ starcluster listclusters\nStarCluster - ( (v. 0.94.3)\nSoftware Tools for Academics and Researchers (STAR)\nPlease submit bug reports to\n\n---------------------------------------------\nrps_cluster (security group: @sc-rps_cluster)\n---------------------------------------------\nLaunch time: 2014-01-17 16:28:19\nUptime: 3 days, 21:41:41\nZone: us-east-1a\nKeypair: mykey2\nEBS volumes: N/A\nCluster nodes:\n master running i-b4b11c9a\n node001 running i-b7b11c99\nTotal nodes: 2\n}}}
{{{\nsudo add-apt-repository "deb precise/"\ngpg --keyserver --recv-key E084DAB9\ngpg -a --export E084DAB9 | sudo apt-key add -\nsudo apt-get update\nsudo apt-get install r-base\npip install rpy2\n}}}\n
{{{\ncd $HOME/python\ngit clone git://\ncd Fiona\npython build_ext -I$HOME/include -L$HOME/lib -lgdal install\n}}}\n(my GDAL libraries and include files are in $HOME/lib and $HOME/include)\n
update: I removed the https: => http: by commenting out the statement\n{{{\n#c.NotebookApp.certfile = u'/home/tomcat/.config/ipython/mycert.pem'\n}}}\nin\n/home/tomcat/.config/ipython/profile_default/\n\nthen restart:\n{{{\nps aux | grep notebook | grep -v parent\nkill -9 pid\ncd /home/tomcat/python\n\nnohup ipython notebook --pylab=inline &\n}}}\n{{{\ncd /home/tomcat/python\ngit clone git://\ncd ipython\npython build\npython -c "import setuptools;execfile('')" bdist_egg\negginst dist/\n\ncd /home/tomcat/.config/ipython\nopenssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem\n\nipython\nIn [1]: from IPython.lib import passwd\nIn [2]: passwd()\nEnter password:\nVerify password:\nOut[2]: 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed' (not this exact code)\n\n[tomcat@testbedapps ipython]$ ifconfig -a | grep 'inet addr'\n inet addr:
Starting with Iris Canopy 1.0.3 on 64 bit Ubuntu 12.04 LTS\n\n\nNON-PYTHON DEPENDENCIES:\n\nInstall libproj:\nsudo apt-get install libproj-dev\n\nInstall UDUNITS:\nsudo apt-get install libudunits2-dev\n\nInstall G++:\nsudo apt-get install g++\n\nInstall LibGeoS:\nnote: the version of geos with 12.04 is too old, so we have to use Ed Campbell's PPA:\nsudo add-apt-repository ppa:drescambell/ppa\nsudo apt-get update\nsudo apt-get install libgeos-dev\n\nPYTHON DEPENDENCIES:\n\nInstall these supported packages using the Canopy installer:\nInstall NetCDF4, Cython, Shapely\n\nInstall Pyke:\nwget\npython build\npython -c "import setuptools;execfile('')" bdist_egg\negginst dist/pyke-1.1.1-py2.7.egg\n\nInstall Cartopy:\ncd $HOME/python\ngit clone\npython build\npython -c "import setuptools;execfile('')" bdist_egg\negginst dist/Cartopy-0.9.x-py2.7-linux-x86_64.egg\n\nInstall Iris:\ncd $HOME/python\ngit clone\npython bdist_egg\negginst dist/Iris-1.5.0_dev-py2.7.egg\n\n
Java 8:\n{{{\nsudo add-apt-repository ppa:webupd8team/java\nsudo apt-get update\nsudo apt-get install oracle-java8-installer\n}}}\ncheck:\n{{{\njava -version\n}}}\n\nTomcat 8:
This bizarre first step seems to be necessary before the aptitude stuff will work:\n{{{\n1. Open console and type: ln -s / /cow\n\nMore info:\n\n}}}\n{{{\nsudo aptitude update\nsudo aptitude install sun-java6-bin sun-java6-plugin sun-java6-font\njava -version\n}}}
The TinyURL of this page is <>\n\nThis is the step-by-step procedure I used on to install the THREDDS Data Server, just walking in cold to the machine, but with sudo rights.\n\nHmmm... What kind of system do we have here?\n{{{\n$ ls /etc/*release*\n$ more /etc/redhat-release\nCentOS release 5 (Final)\n}}}\nSo we have a Red Hat Enterprise Linux 5 compatible system (CentOS release 5).\n\nWhat kernel are we running?\n{{{\n$ uname -a\nLinux blackburn 2.6.18-53.1.6.el5 #1 SMP Wed Jan 23 11:28:47 EST 2008 x86_64 x86_64 x86_64 GNU/Linux\n}}}\nSo we have an AMD 64-bit system.\n\nDo we have Java? We need to have Sun Java 1.6 or higher.\n{{{\n$ java -version\njava: command not found\n}}}\nIt appears we have no java. Really?\n{{{\n$ sudo /usr/sbin/alternatives --config java\n}}}\nReally. We have no java. \n\nSo let's go to Sun and get the latest Java. I went straight to <> and grabbed the appropriate JDK for my system (latest AMD64 JDK binary rpm).\n{{{\n$ wget\n$ chmod 755 jdk-6u14-ea-bin-b06-linux-amd64-06_may_2009-rpm.bin\n$ ./jdk-6u14-ea-bin-b06-linux-amd64-06_may_2009-rpm.bin\n$ ls /usr/java\ndefault jdk1.6.0_14 latest\n}}}\nOkay, so we now have java installed, but let's set it up in "alternatives" and make it the default choice. 
If we had multiple versions of java, we would manage them with "alternatives".\n{{{\n$ sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_14/bin/java 1\n$ sudo /usr/sbin/alternatives --config java\n$ java -version\njava version "1.6.0_14-ea"\nJava(TM) SE Runtime Environment (build 1.6.0_14-ea-b06)\nJava HotSpot(TM) 64-Bit Server VM (build 14.0-b15, mixed mode)\n}}}\n\nGrabbed Tomcat version 6.0.20 and THREDDS Data Server version 4.1:\n{{{\n$ wget\n$ wget\n}}}\n\nUnpack tomcat and move it to /usr/local/tomcat:\n{{{\n$ tar xvfz apache-tomcat-6.0.20.tar.gz\n$ sudo mv apache-tomcat-6.0.20 /usr/local/tomcat\n}}}\n\nAdded a user "tomcat", made it the owner of the tomcat directory, and gave the tomcat group write access to anything in the tomcat directory.\n{{{\n$ sudo /usr/sbin/useradd tomcat\n$ sudo passwd tomcat\n$ sudo chown -R tomcat:tomcat /usr/local/tomcat\n$ sudo chmod -R g+rwX /usr/local/tomcat\n}}}\nFor each user who you want to be able to modify content for tomcat, add them to the "tomcat" group, making the "tomcat" group the initial group ("usgs" for the usgs group). 
The capital "G" adds the group; the lower case "g" makes it also the initial group.\n{{{\nsudo /usr/sbin/usermod -g tomcat kestons\n\nsudo /usr/sbin/usermod -g tomcat mcgillic\n\nsudo /usr/sbin/usermod -g usgs rsignell\nsudo /usr/sbin/usermod -G tomcat,rsignell rsignell\nsudo /usr/sbin/usermod -g usgs nganju\n\n}}}\n\nBlock robots:\n\n{{{\n$ cd <tomcat>/webapps/ROOT\n$ cat robots.txt\nUser-agent: *\nDisallow: / \n}}}\n\nEdited /usr/local/tomcat/conf/tomcat-users.xml to add the Tomcat GUI manager:\n\n{{{\n<?xml version='1.0' encoding='utf-8'?>\n<tomcat-users>\n <role rolename="manager"/>\n <role rolename="admin"/>\n <user username="admin" password="XXXXXX" roles="admin,manager"/>\n</tomcat-users>\n}}}\n\nAdded the file "" to /usr/local/tomcat/bin to set JAVA options:\n{{{\n$ more\nJAVA_HOME=/usr/java/default\nJAVA_OPTS="-server -Djava.awt.headless=true -Xms512m -Xmx2048m"\n}}}\nOn a machine with only 2GB RAM, scale -Xmx2048m back to -Xmx1024m.\n\nStart Tomcat:\n{{{\ncd /usr/local/tomcat\nsudo -u tomcat ./bin/\n}}}\n\nTo ensure that tomcat starts at boot time, we followed the instructions at:\n{{{\n\n}}}\nchanging CATALINA_HOME to "/usr/local/tomcat" and adding JAVA_HOME="/usr/java/default". \nSo the resulting "/etc/init.d/tomcat" file looks like this:\n{{{\n#!/bin/bash\n#\n# tomcat\n#\n# chkconfig: 35 98 01\n# description: Start/stop the Tomcat servlet engine gracefully on boot/shutdown. \s\n# The numbers say start tomcat in run levels 3 or 5 \s\n# and use priority 98 for start, and priority 01 for stop\n# Source function library.\n. 
/etc/init.d/functions\n\n\nRETVAL=$?\nCATALINA_HOME="/usr/local/tomcat"\nJAVA_HOME="/usr/java/default"\ncase "$1" in\n start)\n if [ -f $CATALINA_HOME/bin/ ];\n then\n echo $"Starting Tomcat"\n /bin/su tomcat $CATALINA_HOME/bin/\n fi\n ;;\n stop)\n if [ -f $CATALINA_HOME/bin/ ];\n then\n echo $"Stopping Tomcat"\n /bin/su tomcat $CATALINA_HOME/bin/\n fi\n ;;\n *)\n echo $"Usage: $0 {start|stop}"\n exit 1\n ;;\nesac\n\nexit $RETVAL\n\n}}}\n\nTo manually start/stop tomcat as user tomcat, do this:\n{{{\nsudo -u tomcat /usr/local/tomcat/bin/\nsudo -u tomcat /usr/local/tomcat/bin/\n}}}\n\nGet sample catalogs at\n{{{\n\n}}}\n
Hmm. What kind of system do we have?\n{{{\nrsignell@gam:~$ uname -a\nLinux gam 2.6.31-16-server #52-Ubuntu SMP Thu Dec 3 23:03:41 UTC 2009 x86_64 GNU/Linux\n}}}\nOkay, Ubuntu running on 64 bit AMD (x86_64).\n\nWhat version of Ubuntu?\n{{{\nrsignell@gam:~$ cat /etc/lsb-release\nDISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=9.10\nDISTRIB_CODENAME=karmic\nDISTRIB_DESCRIPTION="Ubuntu 9.10"\n}}}\n\nWhat version of Java?\n{{{ \nrsignell@gam:~$ java -version\njava version "1.6.0_0"\nOpenJDK Runtime Environment (IcedTea6 1.6.1) (6b16-1.6.1-3ubuntu1)\nOpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)\n}}}\nOkay, Java 1.6, but we need Sun java 1.6, because we've been burned before with non-Sun java.\n\n{{{\nsudo apt-get install sun-java6-jdk sun-java6-plugin sun-java6-fonts\n}}}\n\nAdd to alternatives:\n{{{\nrsignell@gam:~$ sudo update-java-alternatives -l\njava-6-openjdk 1061 /usr/lib/jvm/java-6-openjdk\n\n\n
To install the OpenCV Python interface on Windows, I downloaded the binary from \n{{{\n\n\n}}}\nand then unzipped it into\n{{{\nC:\sRPS\spython\sepd32\sopencv\n}}}\nthen added a path file called "opencv.pth" that contains the single line:\n{{{\nc:\srps\spython\sepd32\sopencv\sbuild\spython\s2.7\n}}}\n\nI can then import OpenCV as\n{{{\nimport cv2\n}}}\n\nThis binary works on 32 bit python (it does not work on 64 bit python).\n-Rich\n
\nNote: Permalink to this page is {{{<>}}}.\n\nUpgraded Java, as Java 1.6 was not installed:\n{{{\nsudo /usr/sbin/alternatives --config java\n}}}\nrevealed that only 1.4 was available from the 3 choices available.\n\nSo we got java jdk from, version 6, update 12, installed it as the 4th choice, and then selected it:\ncheck out the latest binaries from:\n{{{\nrpm -iv j2re-1_6_13-linux-i586.rpm (one option, if you use the RPM)\nor\nsudo mv jdk-6u12-linux-x64-rps.bin /usr/java (2nd option, I had previously downloaded the jdk to my pc and renamed it)\ncd /usr/java\nsudo sh jdk-6u12-linux-x64-rps.bin \nsudo /usr/sbin/alternatives --install /usr/bin/java java /usr/java/jre1.6.0_07/bin/java 4\nsudo /usr/sbin/alternatives --config java\n}}}\nGrabbed tomcat and thredds:\n{{{\nwget\nwget\n}}}\n{{{\nsudo mv apache-tomcat-6.0.18 /usr/local/tomcat\nsudo chown -R rsignell /usr/local/tomcat\ncd /usr/local/tomcat/conf\nvi tomcat-users.xml\n}}}\nEdited /usr/local/tomcat/conf/tomcat-users.xml to look like:\n{{{\n<?xml version='1.0' encoding='utf-8'?>\n<tomcat-users>\n <role rolename="manager"/>\n <role rolename="admin"/>\n <user username="admin" password="XXXXXX" roles="admin,manager"/>\n</tomcat-users>\n}}}\n\nEdited /usr/local/tomcat/bin/ to insert the line\n{{{\nJAVA_HOME=/usr/java/jre1.6.0_07\n}}}\njust before tomcat executes (around line 189).\n\nStart Tomcat:\n{{{\ncd /usr/local/tomcat\n./bin/\n}}}\n\nTo ensure that tomcat starts at boot time, we followed the instructions at:\n{{{\n\n}}}\nmodifying the /etc/init.d/tomcat script to start tomcat as "rsignell" instead of user "tomcat" since we didn't see the need to add a tomcat user since rsignell was already present and served the same function (not having tomcat started as superuser).\n\nWe then did\n{{{\ncd /etc/rc5.d\nsudo ln -s ../init.d/tomcat S71tomcat\ncd /etc/rc3.d\nsudo ln -s ../init.d/tomcat S71tomcat\n}}}\nto add tomcat to both level 3 and level 5 boots.\n\nGet sample catalogs at\n{{{\n\n}}}\n
{{{\ncd /peach/data1/rsignell/models/wrf\nwget\nwget\nwget\n\nexport NETCDF=/share/apps/netcdf\nexport WRF_EM_CORE=1\n\nin namelist.wps, changed one line:\ngeog_data_path = '/peach/data1/rsignell/models/wrf/geog'\n\nDid ./configure 3 1 for WRF\nDid ./configure 5 for WPS\n\n\n}}}
I didn't even really get out of the starting blocks on the install of\nWakari Enterprise.\n\nI'm following the instructions here:\n\n\nBut when I did\n{{{\n./\n}}}\nit said I need to do "-w " and specify the IP\n\nso then I did:\n{{{\n./ -w\n}}}\n\nand I got:\n{{{\nERROR: File or directory already exists: /opt/wakari/wakari-server\n}}}\nTroy: Looks like it left the install directory in place when it dropped out of the install. Just do a:\n{{{\nrm -rf /opt/wakari\n}}}\nand try the install script again:\n{{{\n./ -w\n}}}\n\nOkay, now I get:\n\n{{{ \nPREFIX=/opt/wakari/wakari-server\nChecking server name\nReady for preinstall\nDoing preinstall\nNo need to install sudo.\nNo need to install setfacl.\nInstalling nginx\n\nPlease install missing dependencies or use the -a flag to attempt\nauto-installing dependencies via yum or apt-get\n\nsudo-1.8.6p3-12.el6.x86_64\nsudo : Found\nsetfacl : Found\nfilespace : Found 83GB\nnginx : Not satisfied - Need 1.4.0\nmongo : Not satisfied - Need 2.2.7\nERROR: running pre install script\n/opt/wakari/wakari-server/tmp/ -w -a\nfailed\n\nmore /etc/redhat-release\nRed Hat Enterprise Linux Server release 6.5 (Santiago)\n}}}\n\nTroy Powell: What version of mongo and nginx are you running?\nNginx and Mongo are not installed by default; you will need to install them first if you have not already done so.\n\nSo then I installed nginx following the instructions here: for Red Hat. 
This consists of creating the file /etc/yum.repos.d/nginx.repo with this info:\n{{{\n[nginx]\nname=nginx repo\nbaseurl=$basearch/\ngpgcheck=0\nenabled=1\n}}}\nand then doing \n{{{\nsudo yum install nginx\n}}}\n\nI then installed Mongo following the instructions here: which consisted of creating /etc/yum.repos.d/mongodb.repo with the following content:\n{{{\n[mongodb]\nname=MongoDB Repository\nbaseurl=\ngpgcheck=0\nenabled=1\n}}}\n\n{{{\nPATH=/opt/wakari/wakari-gateway/bin:$PATH /opt/wakari/wakari-gateway/bin/wk-gateway-configure --server --host --port 7512 --name Gateway --protocol http --summary Your First Gateway --username wakari --password password\n\n/opt/wakari/wakari-publisher/bin/wk-publisher-register --username wakari --password password --wakari-server-url --publisher-server-url --outputfile /opt/wakari/wakari-publisher/etc/wakari/wakari-publisher.json\n}}}\n\n{{{\nmore /opt/wakari/wakari-server/etc/wakari/config.json\n{\n "WAKARI_SERVER": "",\n "USE_SES": false,\n "CDN": ""\n}\n\nmore /opt/wakari/wakari-gateway/etc/wakari/wk-gateway-config.json\n{\n "CDN": "",\n "SUBDOMAIN_ROUTING": false,\n "port": 7512,\n "WAKARI_SERVER": "",\n "client_id": "5447f077b2690264939a109f",\n "client_secret": "b19c11b5-9ddc-4ccc-ac48-4b09491e5c43"\n}\n\n\nmore /etc/nginx/conf.d/publisher.location\nlocation /publisher {\n proxy_pass;\n proxy_set_header Host $host:3690;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n}\n\n\nThis should get created automagically by the publisher script:\nmore /opt/wakari/wakari-publisher/etc/wakari/wakari-publisher.json\n{"flask_secret": "79a3f6d3-b733-49a9-8cd4-e9e9b9cfb727", "client_secret": "8dba72e1-7cc8-42cf-9831-988a536afb8e", "wakari_server_url": "", "client_id": "54480f67b2690278ba90df87"}\n\n\n[root@testbed2 wakari]# /opt/wakari/wakari-server/bin/wk-server-admin add-user hdean -p xxxxxx -e\n\n 882 vi /opt/wakari/wakari-server/etc/wakari/config.json\n 883 vi 
/etc/nginx/conf.d/www.enterprise.conf\n 884 server nginx stop\n 885 service nginx stop\n 886 service nginx start\n 887 vi /etc/nginx/conf.d/www.enterprise.conf\n 888 service nginx stop\n 889 service nginx start\n 890 vi /etc/nginx/conf.d/www.enterprise.conf\n 891 service nginx start\n 892 vi /etc/nginx/conf.d/www.enterprise.conf\n 893 service nginx start\n 894 ps aux | grep myproxy\n 895 vi /etc/nginx/conf.d/www.enterprise.conf\n 896 service nginx start\n 897 vi /etc/nginx/conf.d/www.enterprise.conf\n 898 service nginx start\n 899 vi /etc/nginx/conf.d/www.enterprise.conf\n 900 service nginx start\n 901 service nginx stop\n 902 history\n 903 vi /opt/wakari/wakari-server/etc/wakari/config.json\n 904 service nginx start\n 905 /opt/wakari/wakari-server/bin/supervisorctl restart all\n 906 exit\n 907 vi /etc/nginx/conf.d/www.enterprise.conf\n 908 service nginx start\n 909 /opt/wakari/wakari-server/bin/supervisorctl restart all\n 910 exit\n 911 lsof -i :587\n 912 lsof -i :3283\n 913 lsof -i :2401\n 914 nslookup testbed2\n 915 nmap testbed2\n 916 lsof -i :3690\n 917 exit\n 918 service nginx stop\n 919 vi /etc/nginx/conf.d/www.enterprise.conf\n 920 service nginx start\n 921 history\n 922 /opt/wakari/wakari-server/bin/supervisorctl restart all\n 923 pwd\n 924 ls *.sh\n 925 ./\n 926 ./ -w\n 927 rm -rf /opt/wakari/wakari-compute\n 928 ./ -w\n 929 ./ -w\n 930 ./ -w\n 931 ./ -w\n 932 more ~/.bashrc\n 933 vi /opt/wakari/wakari-gateway/etc/wakari/wk-gateway-config.json\n 934 vi /opt/wakari/wakari-gateway/etc/wakari/\n 935 vi /opt/wakari/wakari-gateway/etc/wakari/\n 936 ls /opt/wakari/wakari-gateway/etc/wakari\n 937 cd /opt/wakari/wakari-gateway/\n 938 find . - name '*.json' -print\n 939 find . -name wk-gateway-config.json -print\n 940 find . -name '*config.json' -print\n 941 find . -name '*.json' -print\n 942 cd config\n 943 cd wakari/config\n 944 pwd\n 945 find . 
-name '*.json' -print | grep config\n 946 ls /opt/wakari/wakari-comput/etc\n 947 ls /opt/wakari/wakari-compute/etc\n 948 ls /opt/wakari/wakari-compute/etc/wakari\n 949 vi /opt/wakari/wakari-compute/etc/wakari/config.json\n 950 vi /opt/wakari/wakari-publisher/etc/wakari/wakari-publisher.json\n 951 cd /opt/wakari/wakari-publisher/\n 952 ls\n 953 find . -name 'wakari-publisher.json' -print\n 954 pwd\n 955 cd /opt/wakari/wakari-publisher\n 956 find . -name 'wakari-publisher.json' -print\n 957 find . -name '*.json' -print\n 958 find . -name 'gmp-4.3.2-0.json' -print\n 959 history\n 960 netstat | grep 80\n 961 netstat | grep 80 | grep httpd\n 962 netstat -anp grep httpd\n 963 netstat -anp | grep httpd\n 964 vi /opt/wakari/wakari-server/etc/wakari/config.json\n 965 vi /opt/wakari/wakari-server/etc/wakari/config.json\n 966 vi /opt/wakari/wakari-compute/etc/wakari/config.json\n 967 vi /etc/nginx/conf.d/www.enterprise.conf\n 968 service nginx restart\n 969 /opt/wakari/wakari-server/bin/supervisorctl restart all\n 970 /opt/wakari/wakari-compute/bin/supervisorctl restart all\n 971 history\n 972 history\n 973 /etc/init.d/myproxy-server stop\n 974 exit\n 975 cd /etc/httpd/\n 976 ls\n 977 cd conf.d/\n 978 ld\n 979 ls\n 980 ls -ltr\n 981 more production.conf\n 982 pwd\n 983 vi -R production.conf\n 984 PATH=/opt/wakari/wakari-gateway/bin:$PATH /opt/wakari/wakari-gateway/bin/wk-gateway-configure --server --host --port 7512 --name Gateway --protocol http --summary Your First Gateway --username wakari --password password\n 985 cat /opt/wakari/wakari-gateway/etc/wakari/wk-gateway-config.json\n 986 vi /opt/wakari/wakari-gateway/etc/wakari/wk-gateway-config.json\n 987 sudo -u wakari bash -c /opt/wakari/wakari-gateway/bin/supervisord\n 988 cd /opt/wakari/wakari-server/\n 989 ls\n 990 cd etc/wakari/\n 991 ls\n 992 more config.json\n 993 more wk-server-config.json\n 994 /opt/wakari/wakari-server/bin/supervisorctl\n 995 /opt/wakari/wakari-server/bin/supervisorctl restart\n 996 
/opt/wakari/wakari-server/bin/supervisorctl restart all\n 997 more /etc/httpd/conf.d/production.conf\n 998 more /etc/httpd/conf.d/production.conf\n 999 vi /etc/httpd/conf.d/production.conf\n 1000 exit\n 1001 vi /etc/nginx/conf.d/www.enterprise.conf\n 1002 service nginx restart\n 1003 more /opt/wakari/wakari-gateway/\n 1004 ls\n 1005 cd /opt/wakari/wakari-gateway/\n 1006 ls\n 1007 cd var\n 1008 ls\n 1009 cd log\n 1010 ls\n 1011 cd wakari\n 1012 ls\n 1013 tail gateway.application.log\n 1014 /opt/wakari/wakari-compute/bin/supervisorctl status\n 1015 /opt/wakari/wakari-compute/bin/supervisorctl restart all\n 1016 cat /etc/nginx/conf.d/www.enterprise.conf\n 1017 cat /opt/wakari/wakari-server/etc/wakari/wk-server-config.json\n 1018 cat /opt/wakari/wakari-server/etc/wakari/config.json\n 1019 /opt/wakari/wakari-server/bin/supervisorctl restart all\n 1020 ls -hal /opt/wakari/wakari-server/etc/wakari/config.json\n 1021 ps auxww | grep gunicorn\n 1022 ps auxww | grep wk_server\n 1023 ps auxwww | grep supervisord\n 1024 ls /opt/wakari/wakari-server/var/log/wakari\n 1025 /opt/wakari/wakari-server/bin/supervisorctl stop all\n 1026 cat /opt/wakari/wakari-server/etc/supervisord/conf.d/wk-server.conf\n 1027 ps auxwwww | grep wk-server\n 1028 pkill -f wk-server\n 1029 /opt/wakari/wakari-server/bin/supervisorctl start all\n 1030 cat /opt/wakari/wakari-gateway/etc/supervisord/conf.d/wk-gateway.conf\n 1031 /opt/wakari/wakari-gateway/bin/supervisorctl stop all\n 1032 ps auxwww | grep wk-gateway\n 1033 /opt/wakari/wakari-compute/bin/supervisorctl stop all\n 1034 cat /opt/wakari/wakari-compute/etc/supervisord/conf.d/wk-compute-launcher.conf\n 1035 ps auxw | grep wk-compute\n 1036 ps auxww | grep mtq-worker\n 1037 kill 25739\n 1038 /opt/wakari/wakari-gateway/bin/supervisorctl start all\n 1039 /opt/wakari/wakari-compute/bin/supervisorctl start all\n 1040 tail -n 1000 /opt/wakari/wakari-compute/var/log/wakari/compute-launcher.application.log\n 1041 tail -n 1000 
/opt/wakari/wakari-gateway/var/log/wakari/gateway.application.log\n 1042 vim /opt/wakari/wakari-gateway/etc/wakari/wk-gateway-config.json\n 1043 history\n 1044 /opt/wakari/wakari-gateway/bin/supervisorctl start all\n 1045 /opt/wakari/wakari-server/bin/wk-admin\n 1046 /opt/wakari/wakari-server/bin/wk-server-admin superuser -h\n 1047 /opt/wakari/wakari-server/bin/wk-server-admin superuser -a rsignell\n 1048 /opt/wakari/wakari-server/bin/wk-server-admin -h\n 1049 /opt/wakari/wakari-server/bin/wk-server-admin add-user -h\n 1050 /opt/wakari/wakari-server/bin/wk-server-admin add-user hdean -p hdean=1 -e\n 1051 s/opt/wakari/wakari-publisher/bin/wk-publisher-register --username wakari --password password --wakari-server-url --publisher-server-url --outputfile /opt/wakari/wakari-publisher/etc/wakari/wakari-publisher.json\n 1052 /opt/wakari/wakari-publisher/bin/wk-publisher-register --username wakari --password password --wakari-server-url --publisher-server-url --outputfile /opt/wakari/wakari-publisher/etc/wakari/wakari-publisher.json\n 1053 /opt/wakari/wakari-publisher/bin/supervisord\n 1054 ps\n 1055 ps aux | grep supervisor\n 1056 history\n 1057 vi /etc/nginx/conf.d/publisher.location\n 1058 service nginx restart\n 1059 /opt/wakari/wakari-publisher/bin/supervisorctl restart all\n 1060 vi /opt/wakari/wakari-publisher/etc/wakari/wakari-publisher.json\n 1061 history\n}}}\n
Here's how I installed Java 1.6 on my RHEL4 box:\n\n* Go to\n* Click on 'Download' button for 'JDK6 update 3' (first one).\n* Choose your platform (linux) and download the rpm file.\n* Install by typing\n{{{\nsh jdk-6u3-linux-i586-rpm.bin\n}}}\n* See how many java alternatives exist on your system already \n{{{\n/usr/sbin/alternatives --config java\n}}}\nIn my case there were 5 choices.\n* I want to add my new java as alternative 6, so I type: \n{{{\n/usr/sbin/alternatives --install /usr/bin/java java /usr/java/jre1.6.0_01/bin/java 6\n}}}\n* Now I select my new java from the alternatives list\n{{{\n/usr/sbin/alternatives --config java\n}}}\nand choose number 6.\n* Done!\n\n\n
Greg Miller showed me how to add a 2nd tomcat to blackburn:\n\n{{{\ncd /usr/local\ncp -R tomcat tomcat2\nchown -R rsignell tomcat2\ncd tomcat2/conf\nvi server.xml\n}}}\nIn server.xml, we merely incremented all the port numbers by 1:\n{{{\n[rsignell@blackburn local]$ pwd\n/usr/local\n[rsignell@blackburn local]$ diff tomcat/conf/server.xml tomcat2/conf/server.xml\n22c22\n< <Server port="8005" shutdown="SHUTDOWN">\n---\n> <Server port="8006" shutdown="SHUTDOWN">\n65c65\n< Define a non-SSL HTTP/1.1 Connector on port 8080\n---\n> Define a non-SSL HTTP/1.1 Connector on port 8081\n67c67\n< <Connector port="8080" protocol="HTTP/1.1"\n---\n> <Connector port="8081" protocol="HTTP/1.1"\n69c69\n< redirectPort="8443" />\n---\n> redirectPort="8444" />\n73c73\n< port="8080" protocol="HTTP/1.1"\n---\n> port="8081" protocol="HTTP/1.1"\n75c75\n< redirectPort="8443" />\n---\n> redirectPort="8444" />\n77c77\n< <!-- Define a SSL HTTP/1.1 Connector on port 8443\n---\n> <!-- Define a SSL HTTP/1.1 Connector on port 8444\n82c82\n< <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"\n---\n> <Connector port="8444" protocol="HTTP/1.1" SSLEnabled="true"\n87,88c87,88\n< <!-- Define an AJP 1.3 Connector on port 8009 -->\n< <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />\n---\n> <!-- Define an AJP 1.3 Connector on port 8010 -->\n> <Connector port="8010" protocol="AJP/1.3" redirectPort="8444" />\n[rsignell@blackburn local]$\n}}}\n\nThen we just have to ask to open port 8081 on the firewall (for tomcat2) in addition to port 8080 (for tomcat). Jeez. That was easy.
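Since the only change was bumping each port number by one, the server.xml edit is mechanical enough to script. A sketch (not how we actually did it; we hand-edited) that increments the standard Tomcat ports in a config string:

```python
import re

# Standard Tomcat ports from the server.xml diff above
# (8005 shutdown, 8080 HTTP, 8443 SSL redirect, 8009 AJP)
PORTS = ["8005", "8080", "8443", "8009"]

def bump_ports(xml):
    """Increment each known Tomcat port by one for a second instance."""
    for p in PORTS:
        # \b keeps us from touching digits embedded in longer numbers
        xml = re.sub(r'\b%s\b' % p, str(int(p) + 1), xml)
    return xml

snippet = '<Server port="8005" shutdown="SHUTDOWN">\n<Connector port="8080" redirectPort="8443" />'
print(bump_ports(snippet))
```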
1. make CGAL 4.0.2\n{{{\ncd /home/epifanio/rps\nwget\nbzip2 -d CGAL-4.0.2.tar.bz2\ntar xvf CGAL-4.0.2.tar\ncd CGAL-4.0.2/\ncmake -DCMAKE_INSTALL_PREFIX=/home/epifanio .\ncd build\nmake install\n}}}\n2. make cgal-bindings\n{{{\ncd /home/epifanio/rps\ngit clone\ncd cgal-bindings\nmkdir -p build/CGAL-4.02_release\ncd build/CGAL-4.02_release\ncmake -DCGAL_DIR=/home/epifanio/rps/CGAL-4.0.2 -DJAVA_OUTDIR_PREFIX=../../examples/java -DPYTHON_OUTDIR_PREFIX=../../examples/python ../..\nmake -j 4\n}}}\nNote: this last step (make -j 4) takes quite a while. Time to make an espresso. The result of all this is:\n{{{\n/home/epifanio/rps/examples/python \n/home/epifanio/rps/examples/python/CGAL \n}}}\nThere is no, so the examples contain "import CGAL" which just imports the functions from the subdirectory below. This is pretty lame, but the examples run. I'll see about generating a file.
Tried to download grib_api using Iris instructions at\n\nbut the link\n\nno longer works, redirecting to\n\nwhich times out for me.\nI eventually downloaded this copy from github:\n\nThis package requires openjpeg or jasper, so first I installed openjpeg:\n{{{\nsudo apt-get install openjpeg-tools libopenjpeg-dev\ngit clone\ncd grib_api\nexport CFLAGS="-fPIC"\n./configure --prefix=$HOME --enable-python\nmake test\nmake install\ncd python\npython build\n\n}}}\n
Followed these excellent instructions for installing Java 8:\n
1. Install "lxml" (available in the optional EPD packages from Enthought):\n{{{\nenpkg lxml \n}}}\n2. Install "OWSLib":\n{{{\ngit clone\ncd OWSLib\npython build\npython -c "import setuptools;execfile('')" bdist_egg\negginst dist/OWSLib*.egg\n}}}\n3. Install "pyGDP":\n{{{\ngit clone\ncd pyGDP\npython build\npython -c "import setuptools;execfile('')" bdist_egg\negginst dist/pyGDP*.egg\n}}}\n\n
{{{\nssh\n\ncd /usgs/data1/rsignell/ooici/\nsudo apt-get install git-core\ngit clone git://\n\ncd dispatcher_deployment\npython\n./bin/buildout dispatcher-config:sysname=R1_TEST_SYSTEM2\n\nsudo apt-get install python-dev\nsudo apt-get install swig\n\necho /DC=org/DC=cilogon/C=US/O=Google/CN=Richard Signell A687 >\n./bin/dispatcher-supervisord\n\ntouch dispatcher.log\n}}}
Github page at\n{{{\n git clone\n}}}\nThe README says that these packages are required:\nhdf5, netcdf4, libevent, libyaml, zeromq, couchdb, rabbitmq, pkg-config, virtualenv, virtualenvwrapper\n\nWith EPD, \nhdf5, netcdf4 and libyaml were already installed, and couchdb and zeromq are part of the EPD and can be installed easily:\n{{{\nrsignell@gam:~/python$ enpkg couchdb\nprefix: /home/rsignell/epd-7.2-1\nCouchDB-0.8-1.egg [downloading]\n 117 KB [.................................................................]\nCouchDB-0.8-1.egg [installing]\n 361 KB [.................................................................]\n\nrsignell@gam:~/python$ enpkg zeromq\nprefix: /home/rsignell/epd-7.2-1\nzeromq-3.2.0-1.egg [downloading]\n 561 KB [.................................................................]\nzeromq-3.2.0-1.egg [installing]\n 2.54 MB [.................................................................]\n\nrsignell@gam:~/python$ enpkg virtualenv\nprefix: /home/rsignell/epd-7.2-1\nvirtualenv-1.7-1.egg [downloading]\n 1.09 MB [.................................................................]\nvirtualenv-1.7-1.egg [installing]\n 1.21 MB [.................................................................]\n\nrsignell@gam:~/python$ enpkg pip\nprefix: /home/rsignell/epd-7.2-1\npip-1.0.2-1.egg [downloading]\n 192 KB [.................................................................]\npip-1.0.2-1.egg [installing]\n 529 KB [.................................................................]\n\nrsignell@gam:~/python$ enpkg virtualenvwrapper\nprefix: /home/rsignell/epd-7.2-1\nvirtualenvwrapper-2.11.1-1.egg [downloading]\n 23 KB [.................................................................]\nvirtualenvwrapper-2.11.1-1.egg [installing]\n 68 KB [.................................................................]\n}}}\n\n\nSo that leaves only rabbitmq, pkg-config and libevent to install.\n{{{\nsudo apt-get install libevent-dev rabbitmq-server pkg-config\n}}}\nNo, wait, it seems
we will need to upgrade couchdb, because we need at least v1.1\n{{{\nsudo apt-get install couchdb\n}}}\nStill no go, because Ubuntu 12.04 is currently at couchdb 1.0.1\n\nSo now we need to build couchdb.\n\nPrerequisites from this page: \n\n\n{{{\nsudo apt-get install libmozjs185-dev libicu-dev\n}}}\nnow build couchdb:\n{{{\nwget\n\ntar xfvz apache-couchdb-1.2.1.tar.gz\ncd apache-couchdb-1.2.1/\n ./configure\nmake\nsudo make install\n}}}\nand start it up:\n{{{\nsudo couchdb\n}}}\n\nConfigure virtualenvwrapper:\n{{{\n export WORKON_HOME=~/Envs\n mkdir -p $WORKON_HOME\n source /usr/local/python27_epd/bin/\n}}}\n I also added \nsource /usr/local/python27_epd/bin/\nto the end of my ~/.bashrc as suggested\n\n\n
Download new version of Linux Tar file to /usr/local\n\n{{{\ncd /usr/local\ntar xzvf FWTools-linux-1.3.6.tar.gz\nrm fwtools\nln -s FWTools-1.3.6 fwtools\ncd fwtools\n./\n}}}\n
Go to "register and invite" and invite a new reviewer. Let Ocean Dynamics send them the form letter that says they are registered, setting them as a "reviewer" for default role. Then invite them, but click on the "customize" button and remove the stupid intro line, and change "With kind regards" to "Sincerely". There will now be an asterisk by their name which indicates that the invitation letter has been customized, which means it's ok to send.\n
thinking about putting my "projects" (things with 3 or more steps) into Tiddlyspot
Sometimes when we shut down tomcat on geoport, it still leaves a process running. \nMake sure you check by doing:\n{{{\n ps aux | grep tomcat-thredds | grep usgs-dev\n}}}\nor\n{{{\n ps aux | grep tomcat-thredds | grep usgs\n}}}\nThere should NOT be a process running before restarting. If there is, then do:\n{{{\nsudo kill -9 <pid>\n}}}\n
/***\n|''Name:''|LegacyStrikeThroughPlugin|\n|''Description:''|Support for legacy (pre 2.1) strike through formatting|\n|''Version:''|1.0.1|\n|''Date:''|Jul 21, 2006|\n|''Source:''||\n|''Author:''|MartinBudden (mjbudden (at) gmail (dot) com)|\n|''License:''|[[BSD open source license]]|\n|''CoreVersion:''|2.1.0|\n|''Browser:''|Firefox 1.0.4+; Firefox 1.5; InternetExplorer 6.0|\n\n***/\n\n//{{{\n\n// Ensure that the LegacyStrikeThrough Plugin is only installed once.\nif(!version.extensions.LegacyStrikeThroughPlugin)\n {\n version.extensions.LegacyStrikeThroughPlugin = true;\n\nconfig.formatters.push(\n{\n name: "legacyStrikeByChar",\n match: "==",\n termRegExp: /(==)/mg,\n element: "strike",\n handler: config.formatterHelpers.createElementAndWikify\n});\n\n} // end of "install only once"\n//}}}\n
I followed this recipe:\n\n\nusing my old Toastmaster Belgian Waffler (model 230). It gets up to 355 degrees, which is just a bit less than the optimal 360 degrees. It melts and lightly caramelizes the pearl sugar without burning, so I don't have to use any of the work-around methods listed on the web site above. I just waited until the light turned off, just like a normal waffle, and it worked fine! Instead of Belgian Pearl Sugar, I used "Nordzucker Hagel Zucker" made in Germany by "Sweet Family", which cost 0.99 euro for a 250g pack in Germany. It worked great.
linear algebra\n*PCA using scipy.linalg.svd\n*sklearn.decomposition.PCA\n*or use mdp.pca\n*using Numpy with MKL makes a huge difference. \n\ninterpolation\nlinear\ntripo\n\nscipy.signal.fftconvolve can give lots of speedups, even for linear operations\n\nscipy.signal.lfilter\n{{{\nmyfilter = np.arange(1.0, 4.0)\nmyfilter\narray([1., 2., 3.])\nsp.signal.lfilter(np.array([1., 0, 0, 0]), 1., np.array([2, 3, 4, 5]))\n}}}\norigin of filter is the first number of the filter
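To make the lfilter note concrete, here is a self-contained check using numpy only: for an FIR filter, scipy.signal.lfilter(b, 1.0, x) returns the convolution of x with b truncated to len(x), with the first filter tap aligned with the current sample.

```python
import numpy as np

b = np.arange(1.0, 4.0)             # filter taps: array([1., 2., 3.])
x = np.array([2.0, 3.0, 4.0, 5.0])  # input signal

# scipy.signal.lfilter(b, 1.0, x) would return the same thing; the
# origin of the filter is its first tap.
y = np.convolve(x, b)[: len(x)]
print(y)  # [ 2.  7. 16. 22.]
```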
To see what hardware is on your RedHat system, you can use "dmesg".\n\nIf you want to see what CPUs your machine has, for example, do\n\n{{{dmesg | grep CPU }}}
| | !N4_amp | !N4_pha | !N5_amp | !N5_pha |\n| !ROMS | 1.09 | 24.6 | 1.10 | 17.9 |\n| !DATA | 1.12 | 26.8 | 0.98 | 18.7 |
Avijit - 15 km grid, going to 5 km grid, currently GFS forcing, open boundaries climatology \n\nForecast once a week, 3-day composite used twice a week (Johns Hopkins)\nRadiation outflow\nFeature model for Gulf Stream inflow, rings, Gulf of Maine to create initial conditions\nOI assimilation\n16 double sigma layers
Morphos-3D meeting (Nov 1-2, 2006)\nWaterways Experiment Station, Vicksburg, MS \nCoastal Hydraulics Lab\n\nTom Richardson\nCoastal Hydraulics Lab director Tom Richardson said that Morphos3D is a prototype for a new model of how the USACE wants to conduct business, in a collaborative, open environment. He guaranteed gap funding for FY07, and said a new Morphos3D Phase 2 should be a slam dunk; starting in FY08, Morphos3D will be part of the USACE budget. He mentioned that Maj. Gen. Riley, head of the USACE, is also behind it.\n\nMorphos-3D will be renamed MORPHOS-ER (Modeling of Relevant Physics of Systems for Evaluating Risk) to reflect the new scope of Morphos to help evaluate risk.\n\nSaid there is a new "United Coastal Flooding Methodology" developed between USACE and FEMA, involving ADCIRC, STWAVE, etc., that will eliminate intra-state differences in 100 year storm height, etc.\n\nJeff Hansen\nListed USGS-WHSC as a Morphos3D partner. Said that Morphos3D would be using ESMF.\n\n\nJohannes Westerink/Rick Luettich\nADCIRC-CG grid for Katrina using 2.1 million nodes.\n\nADCIRC-CG (traditional) is working for some 3D problems, but having some issues with wind stress.\n\nADCIRC-DG (discontinuous Galerkin) is starting to work very well for 2D. It features high accuracy only where you need it via "p-adaptation", where the order of the element can vary in space and time. Results with flow through an inlet looked impressive indeed. ADCIRC-DG in 3D is not even on the horizon.\n\nResio asked me if I would serve as a USGS rep on the Morphos steering committee. I said I thought I would.\n\nDave Froelich, Woolpert\nWants to use FMS for now, moving to ESMF. I said I would check with Cecelia DeLuca to see if we can just start with ESMF instead.\n\nWants to use NetBeans for development, which could take advantage of the NetCDF java library. 
If Morphos goes with XMDF, then they could use SMS, but could also build a module for NetCDF-Java that would allow them to plug into the Common Data Model when the CF standards have been developed. We might find that there are a few additional things we need to add to the XMDF file, such as specification of the formula for the vertical coordinate.\n\n\nDano Roelvink\n\nHad a nice little model "XBEACH" of dune erosion which involved (1) a wave action balance equation for a directionally spread, single frequency wave (infragravity wave); (2) shallow water equations; (3) the Stelling and Duinmeijer wetting and drying scheme; (4) an avalanching mechanism (1.0 above water, 0.3 below water); and (5) a sediment conservation equation. Gave me the write-up and the Matlab version (he also has an F90 version). Note that this model does not have any onshore transport of sediment, however, so can't predict dune recovery.\n\nHendrik Tolman\nDemoed a preview of the coming multi-grid version of WW3. Seamless multiple resolution two-way tiles, nested or otherwise. Very cool. Also working with NRL to make an ESMF version of WW3. Said that an unstructured grid version of SWAN is being developed by Germans & Koreans, and that he is working on an unstructured version of WW3. Also the "TOMAHAWK" model is emerging for unstructured grids.\n\n\n\n\n\n-Rich\n\n
I always have half-and-half and skim milk around. If I want to make 1% milk or whole milk, how much of each do I need to mix together to make 1 quart?\n\nFrom the web site:\nskim milk: Fat 0g/8oz = 0.0g/oz\n1% milk: Fat 2.5g/8oz = 0.3 g/oz\n2% milk: Fat 5.0g/8oz = 0.6 g/oz\nwhole milk: Fat 8g/8oz = 1.0g/oz\nhalf-and-half: Fat 3.5 g/2T = 3.5g/oz \nlight cream: Fat 3g/T = 6.0g/oz\nheavy cream: Fat 5g/T = 10.0g/oz\n\nFor 32 oz of whole milk, we need 32oz*(1g/oz)=32 g fat, so for half-and-half, we need 32g*(1oz/3.5g) = 9.1oz ~= 1 cup half & half. So 1 cup half & half + 3 cups skim ~= 1 quart of whole milk.\n\nFor 32 oz of 1% milk, we need 32oz*(0.3g/oz)=10 g fat, so 10g*(1oz/3.5g)=2.8oz ~= 1/3 cup half-and-half. So 1/3 cup half and half + 3 2/3 cups skim ~= 1 quart of 1% milk.\n
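The mixing arithmetic above is easy to check in a few lines of Python (a back-of-the-envelope sketch; the function name is mine, fat contents per ounce are from the table above):

```python
# Fat per ounce (grams), from the table above; skim contributes ~0,
# so all the fat in the mix comes from the half-and-half.
FAT_PER_OZ = {"skim": 0.0, "1%": 0.3, "2%": 0.6, "whole": 1.0,
              "half-and-half": 3.5}

def oz_half_and_half(target, total_oz=32.0):
    """Ounces of half-and-half (topped up with skim) to hit a fat target."""
    fat_needed = total_oz * FAT_PER_OZ[target]       # grams of fat in the batch
    return fat_needed / FAT_PER_OZ["half-and-half"]  # oz of half-and-half

print(round(oz_half_and_half("whole"), 1))  # 9.1 oz, about 1 cup
print(round(oz_half_and_half("1%"), 1))     # 2.7 oz, about 1/3 cup
```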
The best way to do this is to remove EPD's netCDF4 module like so:\n{{{\n $ enpkg --remove netCDF4\n}}}\nThen build your netCDF4 module into an egg. Since the netCDF4 sources\ndon't natively support setuptools, you have to do a slightly awkward,\nbut totally boilerplate command:\n\n # First build it.\n{{{\n$ python build --whatever --flags --you --need\n}}}\n # Now make the egg.\n{{{\n $ python -c "import setuptools;execfile('')" bdist_egg\n}}}\n\nNow use egginst to install the egg:\n{{{\n $ egginst dist/netCDF4-*.egg\n}}}\nBy building an egg and installing it with egginst, enpkg will have the\nmetadata necessary to uninstall it without any dangling files.\n\nOn Windows, "egginst" is in the "Scripts" directory.
I love the ease of Jing, but it can make only 5 minute long videos. \nHey, wait, perhaps that has advantages!\n- takes less time to record\n- forces you to be more concise\n- more likely to be watched\n- easier to rerecord if information becomes dated\n\nBecause I want to capture the screen, I want the text to be crisp as possible. For HD on YouTube, you want 1280x720. You can easily set your browser (IE, Firefox) window to 1280x720 by typing this into the address bar:\n{{{\njavascript:window.resizeTo(1280,720)\n}}}\nYou might as well create a bookmark if you are doing this a lot (I did).\n\nOnce you've saved your HD video to youtube, here's a cool tip. Instead of passing around the usual URL that looks like\n \nand then telling people "pop to full screen for HD", you can just give them this URL\n\nwhich will pop to full screen automagically!\n\n
WRF NetCDF files are not CF-Compliant. They work with NetCDF-Java applications because there is a special "WRF I/O Service Provider" written, but I was curious whether we could also make a WRF NetCDF CF-Compliant by just creating some NcML that adds the missing metadata. This would allow folks with collections of existing WRF files to make them CF-compliant simply by creating some NcML that they could put in a THREDDS catalog and then serve WRF as CF compliant data. \n\nI started with a sample WRF file containing 1 time step from Cindy Bruyere ( What I found was that I needed to do:\n\n1. Change the file name so that it had a ".nc" extension and remove the ":" from the time stamp: (e.g. wrfout_d01_2000-01-25_00:00:00 => I'm not sure this was completely necessary, but without the .nc extension, my THREDDS server was not picking up this file in the datasetScan, and also there seemed to be some issues with the ":". Perhaps the ":" just needed to be escaped, but in any case, it seems easy to rename.\n\n2. Add the "Conventions: CF-1.6" to the global attributes. \n\n3. Add the specification of the vertical coordinate in the "ZNU" coordinate variable using these attributes: \nstandard_name="atmosphere_sigma_coordinate", formula_terms="ptop: P_TOP sigma: ZNU ps: PSFC" and positive="down"\n\n4. Add the specification of the vertical coordinate in the "ZNW" coordinate variable using these attributes: \nstandard_name="atmosphere_sigma_coordinate", formula_terms="ptop: P_TOP sigma: ZNW ps: PSFC" and positive="down"\n\n5. Remove the time dimension from the coordinate variables: ZNU, ZNW, XLONG, XLAT, XLAT_U, XLONG_U, XLAT_V, XLONG_V. This was simple in this single time step file because I could just remove the "Time" dimension in the NcML. For example: XLAT in the NetCDF file has dimensions {"Time south_north west_east"} and I just changed this to "south_north west_east" since Time here is a singleton dimension. But in general we will need another approach. 
If there are 20 time steps, we could clip out the coord vars from the first time step, union in that dataset along with an aggregation that removes the coord vars from the aggregation.\n\n6. Add valid "units" to the time variable "XTIME": units="minutes since 2000-01-24 12:00:00"\n\nHere's the resulting NcML, which seems to work both in ToolsUI and in IDV:\n\n{{{\n<netcdf xmlns:xsi=""\n xsi:schemaLocation=""\n xmlns=""\nlocation="dods://">\n <attribute name="Conventions" value="CF-1.6"/>\n\n <variable name="ZNU" shape="bottom_top" type="float">\n <attribute name="positive" value="down"/>\n <attribute name="standard_name" value="atmosphere_sigma_coordinate"/>\n <attribute name="formula_terms" value="ptop: P_TOP sigma: ZNU ps: PSFC"/>\n <attribute name="units" value="layer"/>\n </variable>\n\n <variable name="ZNW" shape="bottom_top_stag" type="float">\n <attribute name="positive" value="down"/>\n <attribute name="standard_name" value="atmosphere_sigma_coordinate"/>\n <attribute name="formula_terms" value="ptop: P_TOP sigma: ZNW ps: PSFC"/>\n <attribute name="units" value="level"/>\n </variable>\n\n <variable name="U" shape="Time bottom_top south_north west_east_stag" type="float">\n <attribute name="coordinates" value="XLONG_U XLAT_U ZNU XTIME"/>\n </variable>\n\n <variable name="V" shape="Time bottom_top south_north_stag west_east" type="float">\n <attribute name="coordinates" value="XLONG_V XLAT_V ZNU XTIME"/>\n </variable>\n\n <variable name="W" shape="Time bottom_top_stag south_north west_east" type="float">\n <attribute name="coordinates" value="XLONG XLAT ZNW XTIME"/>\n </variable>\n\n <variable name="T" shape="Time bottom_top south_north west_east" type="float">\n <attribute name="coordinates" value="XLONG XLAT ZNU XTIME"/>\n </variable>\n\n <variable name="PSFC" shape="Time south_north west_east" type="float">\n <attribute name="coordinates" value="XLONG XLAT XTIME"/>\n </variable>\n\n <variable name="U10" shape="Time south_north west_east" type="float">\n 
<attribute name="coordinates" value="XLONG XLAT XTIME"/>\n </variable>\n\n <variable name="V10" shape="Time south_north west_east" type="float">\n <attribute name="coordinates" value="XLONG XLAT XTIME"/>\n </variable>\n\n <variable name="XTIME" shape="Time" type="float">\n <attribute name="units" value="minutes since 2000-01-24 12:00:00"/>\n </variable>\n\n <variable name="XLAT" shape="south_north west_east" type="float">\n <remove type="attribute" name="coordinates"/>\n </variable>\n\n <variable name="XLONG" shape="south_north west_east" type="float">\n <remove type="attribute" name="coordinates"/>\n </variable>\n\n <variable name="XLAT_U" shape="south_north west_east_stag" type="float">\n <remove type="attribute" name="coordinates"/>\n </variable>\n\n <variable name="XLONG_U" shape="south_north west_east_stag" type="float">\n <remove type="attribute" name="coordinates"/>\n </variable>\n\n <variable name="XLAT_V" shape="south_north_stag west_east" type="float">\n <remove type="attribute" name="coordinates"/>\n </variable>\n\n <variable name="XLONG_V" shape="south_north_stag west_east" type="float">\n <remove type="attribute" name="coordinates"/>\n </variable>\n\n</netcdf>\n}}}
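The per-variable blocks in the NcML above are repetitive enough to be worth generating. A minimal standard-library sketch (namespace declarations omitted; the variable shapes and coordinate lists are copied from the example above):

```python
import xml.etree.ElementTree as ET

# (shape, coordinates) per data variable, copied from the NcML above
VARS = {
    "U": ("Time bottom_top south_north west_east_stag", "XLONG_U XLAT_U ZNU XTIME"),
    "V": ("Time bottom_top south_north_stag west_east", "XLONG_V XLAT_V ZNU XTIME"),
    "W": ("Time bottom_top_stag south_north west_east", "XLONG XLAT ZNW XTIME"),
    "T": ("Time bottom_top south_north west_east", "XLONG XLAT ZNU XTIME"),
}

root = ET.Element("netcdf")
for name, (shape, coords) in VARS.items():
    # each entry becomes <variable ...><attribute name="coordinates" .../></variable>
    var = ET.SubElement(root, "variable", name=name, shape=shape, type="float")
    ET.SubElement(var, "attribute", name="coordinates", value=coords)

print(ET.tostring(root).decode())
```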
Note: If you do not have a My Network Places icon on your Desktop, right click on the Desktop, select Properties, then select the Desktop tab. Click the Customize Desktop button and place a checkmark next to My Network Places. Click OK.\n\n 1. Double-click on the My Network Places icon on the Desktop.\n 2. Click on Add Network Place from the Network Tasks menu on the left side of the window.\n 3. The Add a Network Place wizard will open. Click Next.\n 4. Select the Choose another network location option. Click Next.\n 5. In the Internet or network address field enter and click Next.\n 6. Enter your CSTMS/ROMS Username and Password when prompted. Any prompts for a username and password throughout the setup process will require entry of your Username and Password as well.\n 7. Enter a name for the link, such as CSTMS . Click Next.\n 8. Click Finish.\n 9. You may need to re-enter your Username and Password.\n 10. A Windows Explorer window will appear at the top-most level of file space in CSTMS. You may then click on the links to reach your desired area.\n
{{{\n$ cat *.png | ffmpeg -r 24 -y -f image2pipe -c:v png -i - -c:v libx264 \s\n-preset ultrafast -qp 0 -movflags +faststart -pix_fmt yuv420p test.mkv\n}}}\nThen in IPython notebook, do this:\n{{{\nfrom IPython.core.display import HTML\nvideo = open("test.mkv", "rb").read()\nvideo_encoded = video.encode("base64")\nvideo_tag = '<video autoplay loop controls alt="test"\nsrc="data:video/x-m4v;base64,{0}">'.format(video_encoded)\nHTML(data=video_tag)\n}}}\n
These are easy to assemble. The only hard part is waiting out the full two-week maceration period.\n\n1 pound ripe, thoroughly washed cherries (stems and pits intact)\n\nRatio:\n2/3 Luxardo Maraschino liqueur\n1/3 Cherry Heering liqueur\n\nPlace the cherries in a glass Mason jar or other container with a lid. Pour the liqueurs in the ratio above over the cherries. The goal is to add enough liqueur to immerse the cherries, but they will bob to the top of the liquid anyway.\n\nRefrigerate for at least 2 weeks. Gently swirl the container every 2 to 3 days to immerse the cherries in the liqueur.\n
\n\nThis workshop included many of the same participants as the preceding ACT workshop, but had a broader scope. I participated in the "learners" section, where an intro to XML and a tool called Oxygen for editing it were presented. Users’ needs and objectives were also discussed. Break-out groups intending to define, constrain, and build consensus between and among several emerging standards for metadata met simultaneously. A combined session in the last couple hours of the meeting let everyone hear what the others had been working on. The agenda and results are available at this site:\n\nThe following is my impression of what’s important. I learned that there's a perception that there are two kinds of metadata: one for discovery, and one for sensor specs and calibration. The former allows catalogues like Geospatial One Stop and GCMD to index and search for data matching a query. FGDC and DIF are the big gorillas in this arena for the US, with ISO 19115, MarineML and several others in use internationally. The other type is more for an observatory manager to identify and communicate with sensors installed in their systems. SensorML is the emerging standard for the instrument nuts and bolts. TransducerML is similar, but doesn't have as much support. Neither type is fully defined yet, but the discovery end of things is more mature. The funny thing is that with all the concern about the metadata, the measurements themselves seem to get lost- the best way to serve them wasn't discussed.\n\nAnother thing that was abundantly clear was that the vehicle for communicating metadata of both kinds is XML, and that web services were rapidly out-pacing SOAP for protocols. Easy conversion tools to instantiate oceanographic metadata in XML don't exist yet, but may in the next year or so. 
Eventually the user won't need to know XML to write metadata to fit one of the current models; at the moment, however, it's mandatory.\n\nOur metadata, the global attributes in particular, blends the two kinds of information, so at present we don't fit the existing models. Plus our main users are the modeling community, so we're not working with most of the constraints of metadata from an observing system monitoring surface waves in the Chesapeake Bay. It seems that having data accessible via OpenDap is still a good way to go, but it also depends on who the intended users are. There are plenty of GUI web interfaces that allow the user to select from time, region, variable and depth, and then extract records from some database. They aren't the best choice for obtaining multiple variables simultaneously, though. \n\nThe largest obstacle to fitting our data into one of the existing metadata models in XML is mapping our terms into what they're called in the "standard". What does "units" mean? Can we estimate measurement accuracy for each of our sensors? What do we do with the metadata fields we think will be helpful but that don't exist in the "standard"? My thought is to keep what we've got, and continue moving towards conversion to CF (we're going to have to rename everything for that effort anyhow). In the future, we will have to create XML to contain a subset of our metadata to submit to FGDC for "discovery" purposes, and if methods to do more in XML exist then, we should implement what seems practical.\n\n\nThe to-do list that developed from this workshop is listed below:\n\n* Follow up with Melanie Meux at NASA/FGDC to re-do our metadata in FGDC to use more dictionary terms to enhance discoverability. I couldn't find our Mass Bay data by starting with searching for “ocean current measurements”, and trying to narrow the search region and time, but learned they could be found if one knew they were collected by USGS/CMG/WHSC. 
I'd like to improve on this less-than-good discoverability.\n\n* Look into Unidata's NCO tools that may help with an alternate route for our metadata into XML (Thank you Gerry at TAMU).\n\n* Learn XML (a 10-step program?)\n\n* Communicate with engineers at Satlantic to learn more about how they encapsulate their measurements (data) into XML. I believe data can be defined as a vector, so that each measurement for each sensor does not have to be surrounded by a lot of extra XML code. They've done the LEO-15 and several other systems, so have pertinent experience and seemed willing to help. I don't know if this will buy us much, but working towards an integrated solution is more appealing than dealing with two. \n\n* Incorporate Open Geospatial Consortium's (OGC) library terms into our metadata vocabulary.\n\n* Evaluate implementing a web based GUI chooser interface for our datasets (along with OpenDap). \n\n* Think about data and metadata management best practices.
On a tip from Dave Ralston, I took a look at the MassGIS 30m bathymetry raster grid:\n\nDownloaded and ran the .exe, then converted .\sbathymetry30m\simg_bathym30m\sw001001.adf from ESRI raster in the existing coordinate system (Mass State Plane Coordinates) to a lon/lat (EPSG:4326) GeoTIFF using the gdalwarp command from the FWTools command shell:\n{{{\nC:\sRPS\sbathy\sbathymetry30m\simg_bathym30m>gdalwarp w001001.adf -r bilinear -t_srs EPSG:4326 mass30m_geo.tif\nCreating output file that is 9841P x 6604L.\nProcessing input file w001001.adf.\nUsing internal nodata values (eg. -32768) for image w001001.adf.\n0...10...20...30...40...50...60...70...80...90...100 - done.\n}}}\n
"bench" timing (smaller better) in Matlab 2009a:\nLU FFT ODE Sparse 2-D 3-D Machine\n0.2623
This is the tinyurl for the Matlab Interoperability Demo: \n{{{\n\n}}}
In Matlab 2010a or higher, you can increase the Java memory thusly:\nFile -> Preferences -> General -> Java Heap Memory
Step 1. Get all the toolkits from the SVN:\n*Linux: \n{{{\nsvn co /home/rps/m_cmg\nsvn co /home/rps/m_contrib\n}}}\n*Windows: Install TortoiseSVN, then bring up the Windows File Explorer. \n** Make a new directory called "m_cmg" where you keep your matlab toolkits (mine are in c:\srps\sm_cmg). \n** Right click on each directory; if TortoiseSVN is installed correctly, you should see an "SVN Checkout" option. \n** Click on "SVN Checkout" and then type in "" as the URL for the repository\n** Repeat this process, making a new directory called "m_contrib" and using "" as the URL for SVN Checkout.\n\n\nStep 2. Get the netcdf toolkit, snctools and rslice from the other SVN:\n*Linux: \n{{{\nsvn co /usr/local/matlab/m_other/rslice\nsvn co /usr/local/matlab/m_other/snctools\nsvn co /usr/local/matlab/m_other/netcdf_toolkit\n}}}\n*Windows: In File Explorer, right click and choose "SVN Checkout", and use the above URLs (I put my stuff at c:\srps\sm_other\srslice, c:\srps\sm_other\ssnctools, etc.)\n\nStep 3. To add all these directories to your MATLABPATH, get this file and put it somewhere in your matlab path. (I put it in the toolboxes\slocal directory, which on my PC with Matlab 7.1 is "c:\sprogram files\smatlab71\stoolbox\slocal".) Then edit this file to point to the proper directories on your machine. When Matlab starts, it will automatically run "startup.m", which will then add these directories to your path.
We wanted to pass data with lon/lat values into Mirone from the matlab workspace. We got some "undocumented info" from J. Luis about how to do this (which follows) and based on this, I wrote a function called "grid2mirone.m" (svn update m_cmg/trunk/RPSstuff) which accesses the results of "nj_subsetGrid" thusly:\n{{{\n>> uri=''\n>> [d,g]=nj_subsetGrid(uri,'topo',[-71.5 -63 39 46]);\n>> grid2mirone(d,g);\n}}}\n\nHere is J. Luis's info:\n{{{\nOne can invoke Mirone in several ways:\n\n- mirone file_name (this works with many different formats,\nas long as it is possible to find out what is in "file_name")\n\n- mirone(Z) Z is an array - the one that started this thread\n\n- mirone(Z, struc) Z as above and "struc" is a structure with\ncoordinates information\n\nthe "struc" structure has optional and mandatory fields. Below is a\ndescription of those fields and what they should contain depending on the\ncase of use.\n\nHope that it is clear enough.\n\n\n---- OPTIONAL fields\n\n- To inform if coordinates are geographical or other\nstruct.geog = 1 (geog) or = 0 (other coords)\n\n- Color map\nstruc.cmap -> contains a Matlab [Mx3] colormap\n\n- Figure name\ = 'Whatever you like';\n\n\n- If you know the projection in terms of a WKT (Well Known (by who?)\nText)\nstruc.srsWKT = a projection WKT string as the ones used by GDAL\n\n(related note, the ogrproj MEX file lets you convert a Proj4\nprojection string into a WKT string\nexample: strWKT = ogrproj('+proj=merc') )\n\n\n---- MANDATORY fields\nhead\nX\nY\n\nA header [1 x 9] array whose contents are:\nstruc.head = [x_min x_max y_min y_max z_min z_max 0 x_inc y_inc];\nThe zero (7th element) indicates that grid registration is being used\n\nNow we have two cases:\n\n-1 case: the input array is of type uint8 and contains an image\n\nstruc.X = [x_min x_max]\nstruc.Y = [y_min y_max]\n\n-2 case: the input array is of type int16 or float (single or double)\n\nIf [n_rows, n_columns] = size(Z);\nstruc.X = linspace(x_min, x_max, 
n_columns)\nstruc.Y = linspace(y_min, y_max, n_rows)\n\n}}}\n\n
Getting the correct spatial extents for unstructured grid data:\n\nThe problem: ncISO was returning incorrect lon/lat ranges for unstructured grids, because it was using a netcdf-java routine that took short cuts (e.g. first/last value) instead of reading the entire lon or lat variable, which of course is required for unstructured grids.\n\nDave made a new version of the ncISO for TDS jar file, called threddsIso-2.22.jar, which we placed in:\n/var/www/tomcat-threddsdev/webapps/thredds/WEB-INF/lib/threddsIso-2.22.jar (and removed the old threddsIso jar file)\n\nThis version is a jar that will read from the array of data values only when the cdm_data_type attribute value is not null and not a GRID. \n\nSee link below for possible cdm_data_type values:\n\n\nUnstructured grid is not currently an option, so I suggest specifying a global attribute called "cdm_data_type" with value "any" until this is resolved. This will trigger the reading of the lon/lat variables to determine the true extents. \n\nThis is easy for us to do with the unstructured grid model datasets, since we are already modifying the metadata for all datasets via a python script. \n\nThe script \\nis the main script, which calls "", which in turn uses Alex Crosby's "ncml" routine to modify metadata in a NcML file.\n\nWe are using this to modify the ID and title based on the google spreadsheet values, so I just modified the file so that it reads:\n{{{\n ncmlFile = ncml.Dataset.NcmlDataset(ncmlFile)\n ncmlFile.addDatasetAttribute('id',datasetID)\n ncmlFile.addDatasetAttribute('cdm_data_type','any')\n ncmlFile.addDatasetAttribute('title',datasetName)\n}}}\n\nI then reharvested the "testing.xml" catalog with GI-CAT using the THREDDS-NCISO service, and now the unstructured grid models are there!\n\nThe google doc is at:\n\nThis google doc points to NcML files contributed by modelers that specify aggregations, and modifies the metadata. \n\n\n\n
All times UTC\n\n0835 UTC: Met Bill Danforth, Chuck Worley, Barry Irwin at Harbor\n\nBill & Chuck working on Ethernet interface to swath mapping system as opposed to serial port.\n\nTalked to engineer Matt in UK using Skype through The Unwired Village wireless connection using laptop on fantail. \npicked up in harbor (Island Queen point?). Cool!\n \n~1400 Left Bill Danforth behind and headed out toward Middle Ground. Overcast skies. 1-2 ft seas, light wind 5-10 knots from east.\n\n1441 On station at beginning of zig-zag survey, but GPS not receiving internally. Needed to reset on receiver.\n\n~1600 Swath bathy system not removing pitch & roll, nothing seems to help, so start doing zig-zag survey with single beam echosounder, hoping that swath system will work by the time we reach the hi-res survey area.\n\n1700 Switched back to the old ISA Tem's, and that solved the pitch & roll problem. The USB Tem's apparently has a timing problem. Chuck asks: Could it be related to desktop computer not being USB2.0? \n\nLine 27. Start mowing the lawn from SE corner on Hypack line 27. Started with swath of 30 m on a side, but then changed to 50 m on a side 100 m into the line or so, hoping to be able to skip a few lines. \n\nDid not skip line 26, however, since we want to avoid a hole at the beginning of line 27 when the width was only 30 on a side.\n\nSkipped line 25.\n\nLine 24. Went back to 30 m on a side and doing every line, because the lines go from shallow (6 m to 20 m). Should really adjust the survey lines to try to follow contours of the bank.\n\nLine 23\nLine 22\nLine 21\nLine 20 \n\n1809 Beginning line 18. heading 238\n\n1904 begin line 11\n\n1939 big rollers (during 30 seconds along line 7)\n\n2019 Water is noticeably rougher on the south side of Middle Ground\n\nBarry suggests running along "the wall" instead of just across it. 
Running along the crest seems like it would be a good idea also.\n\nHeading over to N5 to do a quick bathy and to check the RTK relative to the N5 pressure sensor (after N5 is recovered).\n\n1839 finished with N5 survey\n\nChuck is trying new system again.
Resources for model data interoperability:\n*Matlab NetCDF Java Toolbox: <>\n*Catalog of IOOS THREDDS Servers: <>\n*Gulf of Maine Model Interoperability Project: <>\n*THREDDS Data Server Installation and Configuration: An excellent tutorial that covers A-Z of the TDS installation, security and configuration is at:<>. If you want to see the actual steps I used for a complete installation (starting from scratch) on a Linux machine, see <>\n
[[MonkeyPirateTiddlyWiki|]] is a distribution of [[TiddlyWiki|]] created by Simon Baird. See [[the web site|]] for more information.\n!!Upgrading ~MonkeyPirateTiddlyWiki\nThis "empty" ~MonkeyPirateTiddlyWiki file comes pre-installed with the core ~MonkeyPirateTiddlyWiki plugins. You can upgrade these core plugins to the latest version by doing the following:\n* Click ImportTiddlers\n* Click "Choose..." and select "~MptwUpgradeURL"\n* Click "fetch"\n* Click the checkbox in the first column heading to select all tiddlers\n* Click "More actions..." and select "Import these tiddlers"\n* Click "OK" to confirm you want to overwrite the tiddlers\n* Save and reload\n
{{{\n cd /home/old_dir\n tar cf - . | (cd /usr/new_dir; tar xvf -)\n}}}
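A self-contained demonstration of the trick with throwaway directories (the real paths above swapped for mktemp ones); the first tar streams an archive to stdout and the parenthesized subshell extracts it after cd'ing to the destination, preserving the tree:

```shell
# Throwaway source and destination directories
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub"
echo "hello" > "$src/sub/file.txt"

# Same pipe as above, but with the cd done inside subshells
# so the caller's working directory is untouched
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)

cat "$dst/sub/file.txt"   # hello
```

One nice property of the tar pipe over a plain recursive copy is that it carries symlinks and timestamps along by default.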
The WRF NetCDF output files on were called:\n{{{\nwrfout_d01_2003-01-20_00:00:00\nwrfout_d02_2003-01-20_00:00:00\nwrfout_d03_2003-01-20_00:00:00\n}}}\nand I couldn't figure out any syntax for ssh that would allow them to be transferred. None of these worked:\n{{{\nscp -p wrfout_d02_2003-01-20_00:00:00 blackburn:/blackburn/d2/rsignell/models/wrf\nscp -p wrfout_d02_2003-01-20_00\s:00\s:00 blackburn:/blackburn/d2/rsignell/models/wrf\nscp -p 'wrfout_d02_2003-01-20_00:00:00' blackburn:/blackburn/d2/rsignell/models/wrf\nscp -p "'"wrfout_d02_2003-01-20_00\s:00\s:00"'" blackburn:/blackburn/d2/rsignell/models/wrf\nscp -p wrfout_d02_2003-01-20_00* blackburn:/blackburn/d2/rsignell/models/wrf\n}}}\nSo in the end, I just renamed them, removing the minutes and seconds, and adding a .nc for good measure (just to remind me that these were NetCDF files):\n{{{\nmv wrfout_d02_2003-01-20_00:00:00\n}}}\nThey then transferred fine:\n{{{\n scp -p blackburn:/blackburn/d2/rsignell/models/wrf\n}}}\nThe inner nest (d03) was a whopper, about 40GB, which took about 30 minutes to transfer at 23MB/s.
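One commonly cited workaround for local file names containing colons is to prefix them with `./` (scp treats an argument with a `/` before the first `:` as a local path), though I have not verified that against this scp version. The rename route actually taken above can also be scripted; a sketch with bash parameter expansion, using dummy files in place of the real WRF output:

```shell
# Work in a throwaway directory with dummy files mimicking the WRF names
cd "$(mktemp -d)"
touch wrfout_d01_2003-01-20_00:00:00 wrfout_d02_2003-01-20_00:00:00

# Drop the trailing ":00:00" (minutes and seconds) and add .nc,
# so scp no longer mistakes the name for a host:path pair
for f in wrfout_d0?_2003-01-20_00:00:00; do
  mv "$f" "${f%:00:00}.nc"
done

ls
```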
The "Models" directory that served coast-enviro:/thredds is now on geoport at /usgs/data1/Models\n
/***\n| Name|MptwLayoutPlugin|\n| Description|A package containing templates and css for the MonkeyPirateTiddlyWiki layout|\n| Version|3.0 ($Rev: 1845 $)|\n| Source||\n| Author|Simon Baird <>|\n| License||\n!Notes\nPresumes you have TagglyTaggingPlugin installed. To enable this you should have a PageTemplate containing {{{[[MptwPageTemplate]]}}} and similar for ViewTemplate and EditTemplate.\n***/\n//{{{\n// used in MptwViewTemplate\nconfig.mptwDateFormat = 'DD/MM/YY';\nconfig.mptwJournalFormat = 'Journal DD/MM/YY';\n//config.mptwDateFormat = 'MM/0DD/YY';\n//config.mptwJournalFormat = 'Journal MM/0DD/YY';\n\nconfig.shadowTiddlers.GettingStarted += "\sn\snSee also MonkeyPirateTiddlyWiki.";\n\n//}}}\n\n//{{{\nmerge(config.shadowTiddlers,{\n\n'MptwEditTemplate':[\n "<!--{{{-->",\n "<!--- ($Rev: 1829 $) --->",\n "<div class=\s"toolbar\s" macro=\s"toolbar +saveTiddler saveCloseTiddler closeOthers -cancelTiddler cancelCloseTiddler deleteTiddler\s"></div>",\n "<div class=\s"title\s" macro=\s"view title\s"></div>",\n "<div class=\s"editLabel\s">Title</div><div class=\s"editor\s" macro=\s"edit title\s"></div>",\n "<div class=\s"editLabel\s">Tags</div><div class=\s"editor\s" macro=\s"edit tags\s"></div>",\n "<div class=\s"editorFooter\s"><span macro=\s"message views.editor.tagPrompt\s"></span><span macro=\s"tagChooser\s"></span></div>",\n "<div macro=\s"showWhenExists EditPanelTemplate\s">[[EditPanelTemplate]]</div>",\n "<div class=\s"editor\s" macro=\s"edit text\s"></div>",\n "<!--}}}-->"\n].join("\sn"),\n\n'MptwPageTemplate':[\n "<!--{{{-->",\n "<!-- ($Rev: 1829 $) -->",\n "<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>",\n " <div class='headerShadow'>",\n " <span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;",\n " <span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>",\n " </div>",\n " <div class='headerForeground'>",\n " <span class='siteTitle' refresh='content' 
tiddler='SiteTitle'></span>&nbsp;",\n " <span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>",\n " </div>",\n "</div>",\n "<!-- horizontal MainMenu -->",\n "<div id='topMenu' refresh='content' tiddler='MainMenu'></div>",\n "<!-- original MainMenu menu -->",\n "<!-- <div id='mainMenu' refresh='content' tiddler='MainMenu'></div> -->",\n "<div id='sidebar'>",\n " <div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>",\n " <div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>",\n "</div>",\n "<div id='displayArea'>",\n " <div id='messageArea'></div>",\n " <div id='tiddlerDisplay'></div>",\n "</div>",\n "<!--}}}-->"\n].join("\sn"),\n\n'MptwStyleSheet':[\n "/*{{{*/",\n "/* ($Rev: 1860 $) */",\n "",\n "/* a contrasting background so I can see where one tiddler ends and the other begins */",\n "body {",\n " background: [[ColorPalette::TertiaryLight]];",\n "}",\n "",\n "/* sexy colours and font for the header */",\n ".headerForeground {",\n " color: [[ColorPalette::PrimaryPale]];",\n "}",\n ".headerShadow, .headerShadow a {",\n " color: [[ColorPalette::PrimaryMid]];",\n "}",\n "",\n "/* separate the top menu parts */",\n ".headerForeground, .headerShadow {",\n " padding: 1em 1em 0;",\n "}",\n "",\n ".headerForeground, .headerShadow {",\n " font-family: 'Trebuchet MS' sans-serif;",\n " font-weight:bold;",\n "}",\n ".headerForeground .siteSubtitle {",\n " color: [[ColorPalette::PrimaryLight]];",\n "}",\n ".headerShadow .siteSubtitle {",\n " color: [[ColorPalette::PrimaryMid]];",\n "}",\n "",\n "/* make shadow go and down right instead of up and left */",\n ".headerShadow {",\n " left: 1px;",\n " top: 1px;",\n "}",\n "",\n "/* prefer monospace for editing */",\n ".editor textarea {",\n " font-family: 'Consolas' monospace;",\n "}",\n "",\n "/* sexy tiddler titles */",\n ".title {",\n " font-size: 250%;",\n " color: [[ColorPalette::PrimaryLight]];",\n " font-family: 'Trebuchet MS' sans-serif;",\n "}",\n 
"",\n "/* more subtle tiddler subtitle */",\n ".subtitle {",\n " padding:0px;",\n " margin:0px;",\n " padding-left:0.5em;",\n " font-size: 90%;",\n " color: [[ColorPalette::TertiaryMid]];",\n "}",\n ".subtitle .tiddlyLink {",\n " color: [[ColorPalette::TertiaryMid]];",\n "}",\n "",\n "/* a little bit of extra whitespace */",\n ".viewer {",\n " padding-bottom:3px;",\n "}",\n "",\n "/* don't want any background color for headings */",\n "h1,h2,h3,h4,h5,h6 {",\n " background: [[ColorPalette::Background]];",\n " color: [[ColorPalette::Foreground]];",\n "}",\n "",\n "/* give tiddlers 3d style border and explicit background */",\n ".tiddler {",\n " background: [[ColorPalette::Background]];",\n " border-right: 2px [[ColorPalette::TertiaryMid]] solid;",\n " border-bottom: 2px [[ColorPalette::TertiaryMid]] solid;",\n " margin-bottom: 1em;",\n " padding-bottom: 2em;",\n "}",\n "",\n "/* make options slider look nicer */",\n "#sidebarOptions .sliderPanel {",\n " border:solid 1px [[ColorPalette::PrimaryLight]];",\n "}",\n "",\n "/* the borders look wrong with the body background */",\n "#sidebar .button {",\n " border-style: none;",\n "}",\n "",\n "/* this means you can put line breaks in SidebarOptions for readability */",\n "#sidebarOptions br {",\n " display:none;",\n "}",\n "/* undo the above in OptionsPanel */",\n "#sidebarOptions .sliderPanel br {",\n " display:inline;",\n "}",\n "",\n "/* horizontal main menu stuff */",\n "#displayArea {",\n " margin: 1em 15.7em 0em 1em; /* use the freed up space */",\n "}",\n "#topMenu br {",\n " display: none;",\n "}",\n "#topMenu {",\n " background: [[ColorPalette::PrimaryMid]];",\n " color:[[ColorPalette::PrimaryPale]];",\n "}",\n "#topMenu {",\n " padding:2px;",\n "}",\n "#topMenu .button, #topMenu .tiddlyLink, #topMenu a {",\n " margin-left: 0.5em;",\n " margin-right: 0.5em;",\n " padding-left: 3px;",\n " padding-right: 3px;",\n " color: [[ColorPalette::PrimaryPale]];",\n " font-size: 115%;",\n "}",\n "#topMenu .button:hover, 
#topMenu .tiddlyLink:hover {",\n " background: [[ColorPalette::PrimaryDark]];",\n "}",\n "",\n "/* for Tagger Plugin, thanks sb56637 */",\n ".popup li a {",\n " display:inline;",\n "}",\n "",\n "/* make it print a little cleaner */",\n "@media print {",\n " #topMenu {",\n " display: none ! important;",\n " }",\n " /* not sure if we need all the importants */",\n " .tiddler {",\n " border-style: none ! important;",\n " margin:0px ! important;",\n " padding:0px ! important;",\n " padding-bottom:2em ! important;",\n " }",\n " .tagglyTagging .button, .tagglyTagging .hidebutton {",\n " display: none ! important;",\n " }",\n " .headerShadow {",\n " visibility: hidden ! important;",\n " }",\n " .tagglyTagged .quickopentag, .tagged .quickopentag {",\n " border-style: none ! important;",\n " }",\n " .quickopentag a.button, .miniTag {",\n " display: none ! important;",\n " }",\n "}",\n "/*}}}*/"\n].join("\sn"),\n\n'MptwViewTemplate':[\n "<!--{{{-->",\n "<!--- ($Rev: 1830 $) --->",\n "",\n "<div class='toolbar'>",\n " <span macro=\s"showWhenTagged systemConfig\s">",\n " <span macro=\s"toggleTag systemConfigDisable . 
'[[disable|systemConfigDisable]]'\s"></span>",\n " </span>",\n " <span style=\s"padding:1em;\s"></span>",\n " <span macro='toolbar closeTiddler closeOthers +editTiddler deleteTiddler undoChanges permalink references jump'></span>",\n " <span macro='newHere label:\s"new here\s"'></span>",\n " <span macro='newJournalHere {{config.mptwJournalFormat?config.mptwJournalFormat:\s"MM/0DD/YY\s"}}'></span>",\n "</div>",\n "",\n "<div class=\s"tagglyTagged\s" macro=\s"tags\s"></div>",\n "",\n "<div class='titleContainer'>",\n " <span class='title' macro='view title'></span>",\n " <span macro=\s"miniTag\s"></span>",\n "</div>",\n "",\n "<div class='subtitle'>",\n " <span macro='view modifier link'></span>,",\n " <span macro='view modified date {{config.mptwDateFormat?config.mptwDateFormat:\s"MM/0DD/YY\s"}}'></span>",\n " (<span macro='message views.wikified.createdPrompt'></span>",\n " <span macro='view created date {{config.mptwDateFormat?config.mptwDateFormat:\s"MM/0DD/YY\s"}}'></span>)",\n "</div>",\n "",\n "<div macro=\s"showWhenExists ViewPanelTemplate\s">[[ViewPanelTemplate]]</div>",\n "",\n "<div macro=\s"hideWhen tiddler.tags.containsAny(['css','html','pre','systemConfig']) && !tiddler.text.match('{{'+'{')\s">",\n " <div class='viewer' macro='view text wikified'></div>",\n "</div>",\n "<div macro=\s"showWhen tiddler.tags.containsAny(['css','html','pre','systemConfig']) && !tiddler.text.match('{{'+'{')\s">",\n " <div class='viewer'><pre macro='view text'></pre></div>",\n "</div>",\n "",\n "<div macro=\s"showWhenExists ViewDashboardTemplate\s">[[ViewDashboardTemplate]]</div>",\n "",\n "<div class=\s"tagglyTagging\s" macro=\s"tagglyTagging\s"></div>",\n "",\n "<!--}}}-->"\n].join("\sn")\n\n});\n//}}}\n
For upgrading directly from tiddlyspot. See [[ImportTiddlers]].\nURL: /proxy/\n
For upgrading. See [[ImportTiddlers]].\nURL:\n
\nTDS:\nOpenDAP URL:\n\nThis dataset has over 170,000 time records, spanning from 1979-01-01\nto 2010-07-28 (currently)!!!\n\nThis dataset is being served by TDS Version 4.1.20100520.1554\n\nI was able to bring this OpenDAP URL into ToolsUI in the\nFeatureTypes tab, but it took over 20 minutes before the datasets\nappeared in the list! Once they appeared, the 2D fields plot rapidly\nand look great (see attached).\n\nThe 3D fields plot okay too, but only the latest 3D field is in the\naggregation (only 1 value instead of 170,000+).\n\nI downloaded one time step from this dataset to gam, using\nncks -d time,0\nand it's 163MB! So written as NetCDF, this would be 163MB/step * 173,304 steps * 1TB/1e6MB = 28.2TB!!
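A quick sanity check of the back-of-envelope storage arithmetic:

```python
mb_per_step = 163          # size of one downloaded time step, MB
steps = 173_304            # time records in the aggregation

total_mb = mb_per_step * steps
total_tb = total_mb / 1e6  # 1 TB ~ 1e6 MB, as in the note above

print(round(total_tb, 1))
```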
In my cygwin window, I just did:\n{{{\ncd /usr/local/bin\nwget\n( see for latest version)\ntar xvfz nco-4.0.6.win32.cygwin.tar.gz\n}}}\nand because I installed UDUNITS2 in a non-standard location (not in\n/usr/local/share), I needed to do:\n{{{\nexport UDUNITS2_XML_PATH=/home/rsignell/share/udunits/udunits2.xml\n}}}\nNote that you don't have to install the UDUNITS package -- you can just get the udunits2.xml file, put it somewhere, and then point to it using the environment variable. The UDUNITS package is already built into the cygwin binary for nco.\n\nI had to also install the curl library for cygwin. If you run cygwin's setup.exe, you will find this in the "Web" directory.\n\nOnce this was done, NCO with OpenDAP and UDUNITS2 worked like a champ:\n{{{\n/usr/local/bin/ncks -O -F -d time,"2010-08-31 00:00","2010-08-31\n12:00" -d lon,-88.37,-85.16 -d lat,29.31,30.4\n""\\n}}}\nYou can run "cygcheck -srv" to see the details of your cygwin installation.\n\n
Getting NCO going on Cygwin is now much easier, thanks to the binary tarball provided by Charlie Zender. But there are still a few steps to get NCO fully functional on Cygwin. Here's what they are:\n\n1. Download and unpack the Cygwin binary distribution of NCO from \n\n2. Make sure that "curl" is installed. Type "which curl" in a cygwin shell. If you don't see "/usr/bin/curl", then you need to run Cygwin setup.exe and install the "curl" package from the "Web" section of the installer.\n\n3. For UDUNITS support (which you want so you can extract data based on Gregorian times and such), download and unpack the UDUNITS version 2 or higher distribution from\n\nNCO just needs the XML files defining the units, so you don't need to build the distribution. You can just copy all the XML files from the ./lib subdirectory to a suitable place like /usr/local/share/udunits. Make sure you copy them all, because udunits2.xml references the other XML files.\n\n4. Specify the location of the udunits2.xml file in this environment variable:\n{{{ \nexport UDUNITS2_XML_PATH=/usr/local/share/udunits/udunits2.xml \n}}}\nYou probably want to put this in your .bashrc or .bash_profile so you always have it defined.\n\n5. Try it out. See if this works:
FYI, here's what I found out about wave and met reporting times for NDBC buoys. Met data are stamped with the time that the measurements ended (starting in 2004; starting in 1993 for archived data). Wave data are reported to the nearest hour or half-hour following the end of measurements. For example, for 44008, which records data from 20-40 min past the hour, wave measurements are actually centered on the half hour, but are reported on the hour.\n\nStart time and duration of wave measurements vary by buoy; a table is found here:\n\nMore about acquisition times at:\n\n
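A sketch of shifting a reported wave timestamp back to the center of the acquisition window for a buoy like 44008 (recording 20-40 min past the hour, centered on the half hour, reported on the following hour); the 30-minute offset is specific to that acquisition schedule and would differ for other buoys:

```python
from datetime import datetime, timedelta

def wave_measurement_center(reported, offset_minutes=30):
    """Shift a reported (top-of-hour) wave timestamp back to the
    center of the acquisition window; the offset is buoy-specific."""
    return reported - timedelta(minutes=offset_minutes)

reported = datetime(2010, 8, 31, 15, 0)     # stamped 1500
print(wave_measurement_center(reported))    # 2010-08-31 14:30:00
```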
{{{\nGOM3, 48149 nodes, 40 levels: 523GB/year (without overlapping time)\nGOM2, 32649 nodes, 31 levels: 264GB/year\nMBAY: 98432 nodes, 11 layers: 250GB/year\n}}}\nFrom Qichun Xu <>:\n\nThe entire forecast system uses:\n{{{\n WRF hindcast met model: 5*12 processors\n WRF forecast met model: 3*12 processors\n gom2 hindcast :1*12 processors\n gom2 forecast :1*12 processors\n gom3 hindcast :4*12 processors\n gom3 forecast :4*12 processors\n gom3 wave :4*12 processors\n MassBay forecast :4*12 processors\n Scituate forecast: 6*8 processors\n}}}\n\n(26 nodes * 12 CPU/node) + (6 nodes * 8 CPU/node) = 360 CPUs\nrun takes 6 hours \n6 hours * 360 = 2160 CPU hours\n\n\n\n\nCloud Options: Amazon, Rackspace, ProfitBricks (infiniband)\nDNS Failover: Amazon Route 53\n\nRequired: \nExperience running FVCOM\nExperience running ocean models on the CLOUD \nExperience maintaining operational systems\n\nPreferred: Experience running FVCOM in a cloud environment\n\nPerformance comparison between ProfitBricks, Amazon EC2, Rackspace Cloud:\n\n\n
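The CPU count and CPU-hour figures can be re-derived from the processor table above:

```python
# (nodes, cores_per_node) for each job in the table above
jobs = [(5, 12), (3, 12), (1, 12), (1, 12),   # WRF hindcast/forecast, gom2 hindcast/forecast
        (4, 12), (4, 12), (4, 12), (4, 12),   # gom3 hindcast/forecast/wave, MassBay
        (6, 8)]                               # Scituate

cpus = sum(n * c for n, c in jobs)
cpu_hours = cpus * 6       # the run takes 6 hours

print(cpus, cpu_hours)     # 360 2160
```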
{{{\n>> url='';\n>> nc=ncgeodataset(url);\n>> jd=nc.time('time');\n>> datestr(jd([1 end]))\n31-Dec-1977 22:58:07\n01-Aug-2010 00:00:00\n\n>> url='';\n>> nc=ncgeodataset(url);\n>> jd2=nc.time('time');\n>> datestr(jd2([1 end]))\n01-Apr-2010 00:00:00\n31-Oct-2011 22:58:07\n}}}\n
Here's how to extract data from NOAA's ERDDAP for use in NOAA's GNOME (oil-spill and particle tracking freeware):\n
\nThe NOAA Estuarine Bathymetry \n\ncan easily be viewed and converted into lon/lat grids using Mirone\n\n\nExample: Barnegat Bay\n\nDownloaded the 1 arc second zip file, which unpacks into a DEM file.\n\nThe DEM loaded okay in Mirone 2.0. \n\nIn Matlab, go to the mirone directory, type "mirone" and then "File=>Open Grid/Image=>Try Luck with GDAL", choose "all file types" and then select the DEM file (M070_39074G2_BIG.dem in this case).\n\nThen to convert this UTM grid to a uniformly spaced geographic grid in Mirone, choose:\n"Projections=>GDAL Project" and choose "EPSG:4326" for output (the EPSG code for uniform lon/lat)\n\nThen save the grid as type "GMT", which is a NetCDF file.\n\nYou can also choose "File=>Workspace=>Grid/Image=>Workspace" and then you will find X,Y,Z variables in your matlab workspace.\n\nIn addition to the NOAA Estuarine Bathymetry, there are some very nice merged bathy/topo grids at:\n\n
NOAA's GNOME program can do 2D tracking with structured or unstructured grid NetCDF files, but requires certain conventions. We want to figure out how to get FVCOM results into GNOME.\n\nThe NetCDF file in each download below is the same, but it is packaged with other files that are Mac or Windows specific. \n\nWindows:\n\n[[ GNOME for Windows | ]]\n[[ Sample Unstructured Grid w/NetCDF file |]]\n\nMac:\n[[ GNOME for Mac | ]]\n[[ Sample Unstructured Grid w/NetCDF file |]]\n\nSee pages 30 and 31 of this document:\n[[ GNOME Data formats | ]] \n\nComments: the 4-column BND variable is required (I tried deleting it, but GNOME then bombed). It is used to create the GNOME "map" file, which describes the boundary; the nature of this variable is described on page 31. An issue for FVCOM is that velocities are on element centers, while GNOME expects u,v,lon,lat to be on nodes. GNOME also expects the boundary list to be a list of nodes. \n\n
{{{\n <netcdf xmlns=""\n location="">\n <variable name="theta_s" shape="" type="double">\n <values>5.0</values>\n </variable>\n <variable name="theta_b" shape="" type="double">\n <values>0.0</values>\n </variable>\n <variable name="Tcline" shape="" type="double">\n <values>10.0</values>\n </variable>\n <variable name="s_rho" shape="s_rho" type="double">\n <values start="-0.9875" increment="0.025"/>\n <attribute name="positive" value="up"/>\n <attribute name="standard_name" value="ocean_s_coordinate"/>\n <attribute name="formula_terms"\n value="s: s_rho eta: zeta depth: h a: theta_s b: theta_b depth_c: Tcline"/>\n </variable>\n <variable name="s_w" shape="s_w" type="double">\n <values start="-1" increment="0.025"/>\n <attribute name="standard_name" value="ocean_s_coordinate"/>\n <attribute name="positive" value="up"/>\n <attribute name="formula_terms"\n value="s: s_w eta: zeta depth: h a: theta_s b: theta_b depth_c: Tcline"/>\n </variable>\n <variable name="temp">\n <attribute name="coordinates" value="time s_rho eta_rho xi_rho"/>\n </variable>\n <variable name="u">\n <attribute name="coordinates" value="time s_rho eta_rho xi_u"/>\n </variable>\n <variable name="ubar">\n <attribute name="coordinates" value="time eta_rho xi_u"/>\n </variable>\n <variable name="v">\n <attribute name="coordinates" value="time s_rho eta_v xi_rho"/>\n </variable>\n <variable name="omega">\n <attribute name="coordinates" value="time s_w eta_rho xi_rho"/>\n </variable>\n <variable name="AKv">\n <attribute name="coordinates" value="time s_w eta_rho xi_rho"/>\n </variable>\n <variable name="vbar">\n <attribute name="coordinates" value="time eta_v xi_rho"/>\n </variable>\n <variable name="zeta">\n <attribute name="coordinates" value="time eta_rho xi_rho"/>\n </variable>\n <variable name="time" orgName="scrum_time">\n <attribute name="units" value="seconds since 2002-01-01 00:00 UTC"/>\n </variable>\n <variable name="eta_rho" shape="eta_rho" type="double">\n <attribute name="units" 
value="degrees_north"/>\n <values start="32.45" increment="
I tried using NetCDF java with NCML to extract just two time steps from a remote OpenDAP URL:\n\n{{{\njava -classpath toolsUI-4.0.jar ucar.nc2.dataset.NetcdfDataset -in test.ncml -out\n}}}\n\n$> more test.ncml\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <netcdf location=""\n coordValue="
!!Examples of Coupled + Refined ROMS + SWAN (no WRF coupling, met forcing provided via NetCDF files)\n\nThese currently have to be telescoping grids (like Russian dolls), and must use either 3x3 or 5x5 refinement (e.g. not 5x3).\n\n!!!Maitane: triple nested run\n!!!Kumar: quad nested run\n\n./Grids: Grids (ROMS+SWAN). I am using 4 nested grids, starting from and followed by Carolinas, Diamond and Dshoal. \n\n./Forcing: Forcing, boundary, climatology and initial files. I am not using the atmospheric model (WRF), but provide wind forcing in the form of wind stress to ROMS. \n\n./Case1/CODE: Source Code for run.\n\n./Case1/RUNFILES: Include (.h) and Input (.in) files for ROMS, SWAN and Coupling. I am using: cphat.h,,,,, and \n\n./Case1/RUNFILES/SWAN_Wind: Swan wind input (.wnd) files\n\n!!! Working on Vineyard Sound\nran create_nested_grid_rps.m on laptop: c:\srps\svs\smodels\scoawst\n\n\n\nfinished editing\nrsignell@peach:/peach/data1/rsignell/Projects/VS_NEST/\n!!! Working on Adriatic Sea\n\n
{{{\nrsignell@gam:/usgs/data1/rsignell/adria$ grdinfo AD_canyon.grd\nAD_canyon.grd: Title: AD_canyon.grd\nAD_canyon.grd: Command: xyz2grd -R15.896/18.0695/40.8445/41.8855 -I0.0005 -GAD_canyon.grd\nAD_canyon.grd: Remark:\nAD_canyon.grd: Gridline node registration used\nAD_canyon.grd: Grid file format: nf (# 18) GMT netCDF format (float) (COARDS-compliant) [DEFAULT]\nAD_canyon.grd: x_min: 15.896 x_max: 18.0695 x_inc: 0.0005 name: x nx: 4348\nAD_canyon.grd: y_min: 40.8445 y_max: 41.8855 y_inc: 0.0005 name: y ny: 2083\nAD_canyon.grd: z_min: -1204.92004395 z_max: 10 name: z\nAD_canyon.grd: scale_factor: 1 add_offset: 0\nrsignell@gam:/usgs/data1/rsignell/adria$ grdsample AD_canyon.grd -GAD_canyon2.grd -I0.003\ngrd2xyz AD_canyon2.grd >\n\n}}}
Using deflation level 1 reduced 4.5 GB to 2.1 GB, so I tried packing as well, and got more than another factor of 2: only about 1/7 of the original size!\n\n{{{\n$ cd /usgs/data0/rsignell/wbm\n$ ~/netcdf/nco/bin/ncpdq -4 -L 2\n\n$ ls -sailrt\n712\n}}}
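The factor-of-two gain from packing comes from storing floats as 16-bit integers with a scale_factor/add_offset pair; a generic sketch of the idea (not NCO's exact algorithm):

```python
def pack(values, nbits=16):
    """Map floats onto signed nbits integers via scale_factor/add_offset."""
    vmin, vmax = min(values), max(values)
    scale = (vmax - vmin) / (2**nbits - 2) or 1.0   # avoid zero scale for constant fields
    offset = (vmin + vmax) / 2
    packed = [round((v - offset) / scale) for v in values]
    return packed, scale, offset

def unpack(packed, scale, offset):
    """Reverse the packing: value = packed * scale_factor + add_offset."""
    return [p * scale + offset for p in packed]

vals = [0.0, 2.5, 7.25, 10.0]
packed, scale, offset = pack(vals)
restored = unpack(packed, scale, offset)

# Round-trip error is bounded by half the quantization step
assert max(abs(a - b) for a, b in zip(vals, restored)) <= scale / 2
```

The price of packing is quantization: the round-trip values differ from the originals by up to half the scale factor, which is why it is reserved for fields where that precision loss is acceptable.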
/***\n| Name:|NewHerePlugin|\n| Description:|Creates the new here and new journal macros|\n| Version:|3.0 ($Rev: 1845 $)|\n| Date:|$Date: 2007-03-16 15:19:22 +1000 (Fri, 16 Mar 2007) $|\n| Source:||\n| Author:|Simon Baird <>|\n| License||\n***/\n//{{{\nmerge(config.macros, {\n newHere: {\n handler: function(place,macroName,params,wikifier,paramString,tiddler) {\n wikify("<<newTiddler "+paramString+" tag:[["+tiddler.title+"]]>>",place,null,tiddler);\n }\n },\n newJournalHere: {\n handler: function(place,macroName,params,wikifier,paramString,tiddler) {\n wikify("<<newJournal "+paramString+" tag:[["+tiddler.title+"]]>>",place,null,tiddler);\n }\n }\n});\n\n//}}}\n\n
Pre meeting:\nIdeas for ocean representation on OGC Met Ocean Working Group, which seems dominated by UK Met Office and other met folks. We need some more oceanographers!\n\nFrom Roger Proctor: Richard Lampitt (NOC)\nAdam Leadbetter (BODC)\nLesley Rickards (BODC)\n\nCharlie: \nIOOC (interagency ocean observations committee) oversees DMAC; IOOC is co-chaired by NASA/NOAA/NSF\nDavid Legler\n\nJeff: NOAA Data Architect\nRoger: Australian Integrated Marine Observing System \nJenifer: working with Walter, BOEM\n\nKrish: DARWIG ?\nJulie Thomas: SEADIP, executive director of SCOOS\nRu Morrison: NERACOOS, former optical oceanographer\nSam Simmons: IOOC member, marine mammals, elephant seals, tagged\nEoin Howlett: MARACOOS, OOI, IOOS Modeling Testbed\nMark Fornwall: OBIS USA\nKrisa (sub for Ken Casey)\nJanet Fredericks (Data Quality: Cabled observatory MVCO, publishing once)\nCarl Benedict: ESIP (\nMichelle Gierach: Lead Scientist for PO.DAAC\n\n
Jet Stream position:\nJet Stream position is critical for conditions on the US East Coast, and the climate models are all over the map:\nGISS is terrible\nGFDL is pretty good\n
TIGGE: 48 delay to 34 weather products from all over the world. Not yet accessible via TDS, but now with authentication, should become possible.\n\nAWIPS II can download data from LDM, but has decoders to convert to HDF5; not sure what its data model is, or how it relates to the CDM. \n\nBen mentions a project called F5 that is trying to invent a CDM-like approach to HDF5.\n\nRick Anthes stepping down as head of UCAR after 22 years.\n\nFlight simulator for IDV:\n\n\n
Wednesday\nENKI: get quote to ENKI, ask WHOI and NFRA group if they want a presentation\n\nChris Little: asking for fixed level could be very expensive.\n\nGeorge Percival "Environmental modeling"\n\nUnCertML available as an OGC discussion paper (for uncertainty in models)\nWPS: GetCapabilities,DescribeProcess,Execute\n\nM. Gould has used WPS to "run" a Hydrological Model
\n “In my opinion, tequila is misunderstood and underappreciated, but it’s one of the best spirits for cocktails. I love mezcal, too, and I’ve had some really nice cocktails with mezcal and pineapple, so I started there. Then I thought about what else pairs well with tequila and mezcal, and I just kept adding layers. Drinks aren’t one-dimensional to me; I try to add as many different notes as I can without letting one overpower another. Unfortunately, when some people think punch they think of the trash-can variety they had in college. But this is for graduates of that. There’s something so wonderful about sitting around drinking out of a big, beautiful punch bowl.”\nWho Jeret Peña, tavern keeper\nWhere Esquire Tavern, in San Antonio\n{{{\nIngredients:\n8 ounces tequila\n1 ounce mezcal \n3 ounces tawny port\n2 ounces St-Germain elderflower liqueur\n6 ounces pineapple juice \n4 ounces water \n2 ounces lime juice\n2 ounces simple syrup\n10 drops Bittermens Hellfire Habanero Shrub \ncinnamon stick\nlime wheel\npineapple wedge\n}}}\n Directions:\n{{{\n1. Chill ingredients, then stir together in a punch bowl or pitcher. \n2. Top with grated cinnamon. Garnish with a lime wheel and pineapple wedge. Serves five.\n}}}\n
ssh to\nssh to\ncd '/home/rsignell/.thunderbird/dnowsvyq.Default User/Mail/Local Folders/Incoming (old).sbd'
Here are the os versions of the Linux machines we use at WHOI:\n|! machine name |! Operating System|\n| |~SuSE 9.1 |\n| |Debian 5.0 Lenny |\n| |Ubuntu 8.04 Hardy Heron (Long Term Support release) |\n| |Ubuntu 8.04 Hardy Heron (Long Term Support release) |\n| |Ubuntu 12.04 Precise Pangolin (Long Term Support release) |\n\n
The PSDEM_2000 data were supplied in Washington State Plane North coordinates NAD83, with elevations in feet relative to the NAVD88 vertical datum. The data was downloaded as "" a zipped ARC ASCII grid format from\n\nWe converted from state plane coordinates in feet with elevations in feet to geographic coordinates (lon,lat) with height in meters. The original grid spacing was 30 ft, and we used the slightly smaller 0.0001 arc degree spacing, using bilinear interpolation to interpolate the height values.\n\nStep 1. Convert Arc/ASCII to GeoTIFF using information in metadata document in the zip file. The key was realizing that the false_easting mentioned in the metadata
Performance Plans are DI-3100 forms\n
omni graffle\ncmapTools COE\nProtege\nSkype + dimdim MediaWiki\ngoogle web toolkit\nJena/TDB joseki triple store and SPARQL endpoint server
Here's a "Polpette" (meatball) recipe that seeks to recreate those served at \n"Osteria Ca D'Oro alla Vedova" in Venice\nSee this page for pics:\n\nIngredients: \n{{{\n1.5lbs veal/beef/pork mixture\n1/3 lb mortadella\n2 medium russet potatoes\n2 cloves garlic\n1 egg\n1 1/2 t salt\n1/2 cup parmesean or grana cheese\n1/2 cup parsley\n1/2 cup bread crumbs (only if necessarily)\n}}}\nBake the potatos at 400 for 1 hour or until done, and while still hot (or at least warm ) put through a ricer, or mash by hand.\nMince the garlic, parsley and mortadella very fine.\nMix all ingredients and form in to small balls (about 3/4 inch diameter). \nOnly add bread crumbs if necessary to get the balls to hang together. You want these to be nice and soft.\n\nFry in hot (350 degree) oil until golden brown, or fry in a pan over medium heat until cooked all the way through.
When Salamander complains about a directory having a "@" symbol, ssh to the machine and cd to the directory above the one with the problem, then use xattr -d to get rid of the offending attribute.\n\nExample:\n{{{\nTo fix the coast-enviro:/Volumes/models/gom_interop directory:\n\nssh coast-enviro\ncd /Volumes/models\nxattr -d gom_interop\n}}}\n\nTo get rid of the @ on files, we need to do:\n{{{\nxattr -d *\nxattr -d .??*\n}}}\n\nNote: you will get back messages like \n{{{\nNo such xattr:\n}}}\nfrom files that don't have the problem. Fine. Don't worry about it!
* convert the CEOS to calibrated backscatter using the free "rsat" program from Juha-Petri Kärnä ( This program expects the CEOS files to be named like this: \n dat_01.001\n lea_01.001\n nul_01.001\n tra_01.001\n vdf_01.001\nso I make a subdirectory for each date (e.g. "jan_26") containing these files. Then I make a subdirectory below this called "rs1". cd to the "rs1" subdirectory and run the "rsat" program: \n{{{\nrsat -d ../ -c -l -o ./\n}}}\nthis will create a bunch of "rs1_XXX" files in the ER Mapper (.ers) format. \n\n * We want to preserve the GCP information in the original CEOS file, which is not maintained by Juha-Petri's program. If we convert the original CEOS file to ER Mapper format using gdal_translate, we can use the .ers file generated for the original image for the calibrated image instead, provided we change \n{{{\n CellType = Unsigned8BitInteger\n}}}\nto\n{{{\n CellType = IEEE4ByteReal\n}}}\n\nThen we can convert the ER Mapper images to a uniformly spaced image in any coordinate system we want, for example UTM zone 33, using gdalwarp.\n
Step 1: Run this grab & convert script:\n{{{\n#!/bin/bash\n#\n# DO_MERGE_SRTM30+\n#\n# Grab the latest SRTM30+ data (33 tiles in SRTM30 format)\n# from UCSD ftp site and merge into a single 16-bit GeoTIFF.\n# Grabbing the ER Mapper headers from UCSD allows ""\n# from the FWTOOLS ( to merge them\n# into single GeoTIFF with global extent with a single command.\n#\nwget*.ers\nwget*.srtm\n\n# Merge all 33 tiles into one global GeoTIFF\ -o srtm30plus_v6.tif *.ers\n\n# Convert GeoTIFF to NetCDF for distribution via OpenDAP\ngdal_translate srtm30plus_v6.tif -of NetCDF\n}}}\n\nNote to self: I've been doing this on my linux system in the directory ~/bathy/strm30plus\nwith subdirectories v5.0,v6.0 and script name "do_merge_srtm30plus". The "wget" command takes about 45 minutes to grab all the data.\n\nStep 2: Modify the metadata to make it recognized as a GridDataset by NetCDF-Java\nGDAL's NetCDF driver yields:\n{{{\n netcdf srtm30plus_v5 {\ndimensions:\n x = 43200 ;\n y = 21600 ;\nvariables:\n char GDAL_Geographics ;\n GDAL_Geographics:Northernmost_Northing = 90. ;\n GDAL_Geographics:Southernmost_Northing = -\n}}}\n\nThe wind map code (wind-bundle.js) is using an Albers projection on a unit sphere, with variable names following Snyder, repeated here by Wolfram:\n\nThese parameters are set early in wind-bundle.js, and look like this:\n{{{\n var phi1 = radians(29.5);\n var phi2 = radians(45.5);\n var n = .5 * (phi1 + phi2);\n var C = Math.cos(phi1) * Math.cos(phi1) + 2 * n * Math.sin(phi1);\n var phi0 = radians(38);\n var lambda0 = radians(-98);\n var rho0 = Math.sqrt(C - 2 * n * Math.sin(phi0)) / n;\n}}}\nThese coordinates on a unit sphere are then multiplied by a scale factor to get pixels\n{{{\n var mapProjection = new ScaledAlbers(\n 1111 * 0.85, -75, canvas.height - 100, -130.1, 20.2);\n}}}\nwhere 1111*0.85 is the number of pixels per unit distance on the unit radius sphere (pi/2 = distance from pole to equator). 
This distance is measured from the lower left corner (-130.1,20.2), and an offset is included as canvas.height because we are measuring pixels from the top instead of pixels from the bottom. I would get rid of the offsets (-75 and -100).
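For reference, the forward Albers projection that these parameters feed can be sketched on the unit sphere. This follows Snyder's standard formulas, in which the cone constant n is defined from the sines of the standard parallels; only the parameter values below are taken from the quoted wind-bundle.js snippet:

```python
import math

def radians(d):
    return math.pi * d / 180.0

# Albers equal-area conic on the unit sphere (Snyder, eqs. 14-1 to 14-6)
phi1, phi2 = radians(29.5), radians(45.5)        # standard parallels
n = 0.5 * (math.sin(phi1) + math.sin(phi2))      # cone constant
C = math.cos(phi1) ** 2 + 2.0 * n * math.sin(phi1)
phi0, lambda0 = radians(38), radians(-98)        # projection origin
rho0 = math.sqrt(C - 2.0 * n * math.sin(phi0)) / n

def albers(lon_deg, lat_deg):
    """Forward projection: (lon, lat) in degrees -> (x, y) on the unit sphere."""
    lam, phi = radians(lon_deg), radians(lat_deg)
    rho = math.sqrt(C - 2.0 * n * math.sin(phi)) / n
    theta = n * (lam - lambda0)
    return rho * math.sin(theta), rho0 - rho * math.cos(theta)
```

The origin (-98, 38) maps to (0, 0); multiplying by the pixels-per-unit-distance scale and flipping y (pixels count down from the top) then gives canvas pixels, which appears to be what ScaledAlbers does.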
In cygwin, I edited my ~/.hgrc file:\n{{{\n$ more ~/.hgrc\n[ui]\nusername = Rich Signell <>\n[auth]\ngoogle.prefix =\ngoogle.username = rsignell\ngoogle.password = xxxxxxxxxx (login to google code to find this out)\n}}}\nWith this, after making changes, I can commit to my local repository and then push to Google Code with these commands in cygwin:\n{{{\nhg commit -m 'new changes'\nhg push\n}}}
Quick Pasta e Fagioli (serves 4 -- can easily double)\nPreparation time: 25 minutes!\n\nIngredients\n\n 1 tablespoon olive oil\n 1/2 cup chopped onion\n 2 slices of bacon, diced\n 1 stalk celery chopped \n 2 garlic cloves, minced\n 1/4 teaspoon crushed red pepper\n 1 teaspoon chopped fresh rosemary\n 1 (19-ounce) can cannellini beans with liquid\n 3 cups chicken broth\n 1/2 cup canned diced tomatoes with liquid\n 1/2 cup ditalini (very short tube-shaped macaroni)\n 1/4 cup finely shredded Parmesan cheese\n\nPreparation\n\n1. Cook bacon in a large saucepan over medium heat. Remove bacon and drain fat from pan (but don't clean it).\n Add oil, celery, onion and garlic; cook 5 minutes or until golden, stirring frequently.\n\n2. Stir in pepper, crumbled bacon and everything else except: 1/2 the beans, the tomatoes, the cheese and the pasta.\n Bring to a boil.\n\n3. Reduce heat, add tomatoes and simmer 10 minutes.\n\n4. Puree until smooth with an immersion blender.\n\n5. Add pasta and the rest of the beans, and cook 7 minutes or until done. \n Turn off heat, mix in 1/3 cup Parmesan cheese.\n Sprinkle each serving with 1 tablespoon more cheese.
/***\n| Name|QuickOpenTagPlugin|\n| Description|Changes tag links to make it easier to open tags as tiddlers|\n| Version|3.0 ($Rev: 1845 $)|\n| Date|$Date: 2007-03-16 15:19:22 +1000 (Fri, 16 Mar 2007) $|\n| Source||\n| Author|Simon Baird <>|\n| License||\n***/\n//{{{\nconfig.quickOpenTag = {\n\n dropdownChar: (document.all ? "\su25bc" : "\su25be"), // the little one doesn't work in IE?\n\n createTagButton: function(place,tag,excludeTiddler) {\n // little hack so we can to <<tag PrettyTagName|RealTagName>>\n var splitTag = tag.split("|");\n var pretty = tag;\n if (splitTag.length == 2) {\n tag = splitTag[1];\n pretty = splitTag[0];\n }\n \n var sp = createTiddlyElement(place,"span",null,"quickopentag");\n createTiddlyText(createTiddlyLink(sp,tag,false),pretty);\n \n var theTag = createTiddlyButton(sp,config.quickOpenTag.dropdownChar,\n config.views.wikified.tag.tooltip.format([tag]),onClickTag);\n theTag.setAttribute("tag",tag);\n if (excludeTiddler)\n theTag.setAttribute("tiddler",excludeTiddler);\n return(theTag);\n },\n\n miniTagHandler: function(place,macroName,params,wikifier,paramString,tiddler) {\n var tagged = store.getTaggedTiddlers(tiddler.title);\n if (tagged.length > 0) {\n var theTag = createTiddlyButton(place,config.quickOpenTag.dropdownChar,\n config.views.wikified.tag.tooltip.format([tiddler.title]),onClickTag);\n theTag.setAttribute("tag",tiddler.title);\n theTag.className = "miniTag";\n }\n },\n\n allTagsHandler: function(place,macroName,params) {\n var tags = store.getTags();\n var theDateList = createTiddlyElement(place,"ul");\n if(tags.length == 0)\n createTiddlyElement(theDateList,"li",null,"listTitle",this.noTags);\n for (var t=0; t<tags.length; t++) {\n var theListItem = createTiddlyElement(theDateList,"li");\n var theLink = createTiddlyLink(theListItem,tags[t][0],true);\n var theCount = " (" + tags[t][1] + ")";\n theLink.appendChild(document.createTextNode(theCount));\n var theDropDownBtn = createTiddlyButton(theListItem," " +\n 
config.quickOpenTag.dropdownChar,this.tooltip.format([tags[t][0]]),onClickTag);\n theDropDownBtn.setAttribute("tag",tags[t][0]);\n }\n },\n\n // todo fix these up a bit\n styles: [\n"/*{{{*/",\n"/* created by QuickOpenTagPlugin */",\n".tagglyTagged .quickopentag, .tagged .quickopentag ",\n" { margin-right:1.2em; border:1px solid #eee; padding:2px; padding-right:0px; padding-left:1px; }",\n".quickopentag .tiddlyLink { padding:2px; padding-left:3px; }",\n".quickopentag a.button { padding:1px; padding-left:2px; padding-right:2px;}",\n"/* extra specificity to make it work right */",\n"#displayArea .viewer .quickopentag a.button, ",\n"#displayArea .viewer .quickopentag a.tiddlyLink, ",\n"#mainMenu .quickopentag a.tiddlyLink, ",\n"#mainMenu .quickopentag a.tiddlyLink ",\n" { border:0px solid black; }",\n"#displayArea .viewer .quickopentag a.button, ",\n"#mainMenu .quickopentag a.button ",\n" { margin-left:0px; padding-left:2px; }",\n"#displayArea .viewer .quickopentag a.tiddlyLink, ",\n"#mainMenu .quickopentag a.tiddlyLink ",\n" { margin-right:0px; padding-right:0px; padding-left:0px; margin-left:0px; }",\n"a.miniTag {font-size:150%;} ",\n"#mainMenu .quickopentag a.button ",\n" /* looks better in right justified main menus */",\n" { margin-left:0px; padding-left:2px; margin-right:0px; padding-right:0px; }", \n"#topMenu .quickopentag { padding:0px; margin:0px; border:0px; }",\n"#topMenu .quickopentag .tiddlyLink { padding-right:1px; margin-right:0px; }",\n"#topMenu .quickopentag .button { padding-left:1px; margin-left:0px; border:0px; }",\n"/*}}}*/",\n ""].join("\sn"),\n\n init: function() {\n // we fully replace these builtins. 
can't hijack them easily\n window.createTagButton = this.createTagButton;\n config.macros.allTags.handler = this.allTagsHandler;\n config.macros.miniTag = { handler: this.miniTagHandler };\n config.shadowTiddlers["QuickOpenTagStyles"] = this.styles;\n store.addNotification("QuickOpenTagStyles",refreshStyles);\n }\n}\n\nconfig.quickOpenTag.init();\n\n//}}}\n
Nobuhito Mori: \nOsaka Bay: Osaka City has warmed by 2 degrees over the last 50 years due to land use changes.\nWith WRF, using urban land use and measured SST in Osaka Bay gets close, but modeled nighttime temperatures are still 1.5 deg too cool.\nSpeculates that heating of Osaka Bay due to sewage (7 deg warmer) results in higher atmospheric temps, but this gives only a 0.07 deg increase.\n\nVHF Radar: 500 m resolution. Measured residual circulation shows patterns similar to obs.\n\nTidal residual circulation is sensitive to vertical mixing; he says MY2.5 gives a better result than GLS. Why? \nAnd bottom friction was not tested?
PV003\n\nIn "Include/cppdefs.h"\n{{{\n#define SOUTHERN_WALL\n#define NORTHERN_WALL\n#define EAST_M3GRADIENT\n#define WEST_M3GRADIENT\n#define EAST_TGRADIENT\n#define WEST_TGRADIENT\n#define ANA_FSOBC\n#define ANA_M2OBC\n#define FSOBC_REDUCED\n\n#define WEST_FSCHAPMAN\n#define WEST_M2REDUCED\n#define EAST_FSCLAMPED\n#define EAST_M2REDUCED\n}}}\n\nForced with Kelvin Wave on Eastern Side:\n{{{\n IF (EASTERN_EDGE) THEN\n! kelvin wave structure across boundary\n DO j=JstrR,JendR\n val=cff1*fac*EXP(-GRID(ng)%f(Iend,j)* &\n & (GRID(ng)%yp(Iend,JendR)-GRID(ng)%yp(Iend,j))/ &\n & SQRT(g*GRID(ng)%h(Iend,j)))\n BOUNDARY(ng)%zeta_east(j)=val*SIN(omega*time(ng))\n END DO\n END IF\n}}}\n\nPV004\n{{{\nSOUTH FSCLAMPED\nSOUTH M2REDUCED\nEAST M2CLAMPED\nEAST FSGRADIENT\n}}}\nPV005\nPV006\n{{{\nWEST_FSRADIATION\nWEST_M2RADIATION\n}}}\nPV007\n{{{\n -.12 amp velocity, 36 hours\n}}}\nPV008\n{{{\n72 hours\n}}}\nPV009\n{{{\n WEST_FSCHAPMAN \n WEST_M2REDUCED\n}}}\nPV010\n{{{\n WEST_FSRADIATION\n WEST_M2RADIATION\n EAST FSCHAPMAN\n}}}
DEF_HIS - creating history file:\noceanO: string.c:42: NC_check_name: Assertion `name != ((void *)0)' failed.\nAborted\n
First downloaded Padman's TMD Matlab toolbox from\n\nUnzipped this to c:\srps\sm\stides\stxpo\n\nDownloaded the latest TPXO data files via the links found on \n\nUpdate: they now have the data in netCDF, so the rest of this page may no longer be necessary!\n\nUncompressed to \nc:\srps\sm\stides\stxpo\sDATA\n\nMoved the ASCII "pointer" file "Model_tpxo7.1" up a level to\nc:\srps\sm\stides\stxpo\nsince this file contains these 3 lines:\n{{{\nDATA/h_tpxo7.1\nDATA/u_tpxo7.1\nDATA/grid_tpxo7.1\n}}}\nI then wrote two m-files to convert the coefficients in the binary files to NetCDF:\ntpxo_uv_2nc.m\ntpxo_z_2nc.m\n\nBefore you can run these scripts to create the NetCDF, you need to create the empty NetCDF files by editing the CDL templates "z_template.cdl" and "uv_template.cdl" and then doing:\n{{{\n$ ncgen -o < uv_template.cdl\n$ ncgen -o < z_template.cdl\n}}}\n\nThese programs simply read the list of constituents, then extract the amp and phase for each constituent and write to the NetCDF file. I also switch from 0:360 to -180:+180 just to be consistent with our other tidal databases.\n\nFinally, I wrote a program based on roms_tri_tides.m called roms_tides.m, which reads and interpolates from gridded tidal models (specifically TPXO in this case) to a ROMS grid. I checked it into m_cmg/trunk/adcirc_tides (now a bit of a misnomer, unfortunately).\n\n
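The 0:360 to -180:+180 switch mentioned above is just a matter of wrapping the longitude axis and rolling the field so the axis stays monotonic. A sketch (variable names are made up; the actual conversion lives in tpxo_uv_2nc.m / tpxo_z_2nc.m):

```python
import numpy as np

def wrap_lon(lon, field):
    """Convert a 0..360 longitude axis (and a field whose last axis is
    longitude) to -180..+180, keeping longitude monotonic."""
    lon = np.asarray(lon)
    lon180 = np.where(lon >= 180.0, lon - 360.0, lon)
    order = np.argsort(lon180)  # bring the western hemisphere to the front
    return lon180[order], field[..., order]

# demo on a coarse synthetic grid (one latitude row, 90-degree spacing)
lon = np.arange(0.0, 360.0, 90.0)       # 0, 90, 180, 270
amp = np.array([[1.0, 2.0, 3.0, 4.0]])  # e.g. an amplitude field
lon2, amp2 = wrap_lon(lon, amp)
```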
{{{\nInstructions how to use Makefile to build ROMS/UCLA model.\n============ === == === ======== == ===== ========= ======\nThere are three types of makefile-related files associated with the\nROMS/UCLA building procedure:\n\n i. Makefile -- a universal machine independent makefile. This\n file contains the list of source code files which determines the\n particular model configuration to be built. The user is free to\n add or delete files from this configuration list at his/her\n own discretion without any restrictions, depending on\n physical formulation of the problem.\n\n ii. Makedefs.machine_type (e.g., Makedefs.sgi, Makedefs.Linux):\n These files contain definitions of rules, compilers and compiler\n options, which are generally machine dependent. These files may\n be edited by the user in order to ensure optimal usage of the\n compiler flags for a particular machine type or compiler.\n\niii. Make.depend -- an automatically generated list of dependencies.\n Usually this list contains the names and dependencies of ALL\n source codes in the directory regardless of whether they are\n actually needed in the present configuration or not. This file\n is practically machine independent. This file should not be\n edited by the user under any circumstances.\n\n\nHow to make Makefile work:\n=== == ==== ======== =====\n\n 1. On a particular machine, for example a Sun, establish symbolic\n link:\n ln -s Makedefs.sun Makedefs\n\n (If the file for the particular type of machine is not available\n create it, using one of the existing "Makedefs.machine" files\n as a template. Define appropriate compiler options.)\n\n 2. Check if the file "Make.depend" exists in the present directory.\n If it does not exist, create an EMPTY file and call it\n "Make.depend".\n\n 3. After steps 1 and 2 your Makefile should be able to work.\n Type\n make tools\n\n This will create two auxiliary utility executable files named\n "cross_matrix" and "mpc". 
The first one, "cross_matrix", is a\n tool to analyze dependencies and build "Make.depend"; the\n second one is an auxiliary multifunctional precompiler designed\n to make .f files generated by CPP more human readable by\n cleaning them of blank lines and comments, as well as to\n perform certain code transformations and optimizations\n (optionally). Read the headers of files "mpc.F" and "cross_matrix.F"\n for more details. Once the tools are built, it is not necessary\n to rebuild them every time the model is compiled, unless\n files "cross_matrix.F" and "mpc.F" were modified.\n\n 4. Type\n make depend\n\n This will update/create the file "Make.depend" consistent with the\n content of all *.F files in the current working directory. All\n source code *.F files will be included in the dependency list,\n regardless of whether they are actually used or not. The user has to\n update "Make.depend" only if\n\n (A) a brand new source code file is introduced into the\n working directory and it participates in the SRCS list\n in the "Makefile" to build the model,\n or\n (B) a new #include statement is added to a source code file,\n including a file previously not included.\n\n It is not necessary to type make depend every time after\n changing the SRCS list in the "Makefile", say switching from\n "prsgrd.F" to "prsgrd3.F" back and forth, as long as neither\n (A) nor (B) happens.\n\n 5. After step 4 the Makefile becomes fully operational.\n Type\n make\n or\n smake (SGI machines only)\n or\n smake -J 8 (SGI machines only)\n\n to build the model. (Here smake will make individual targets\n in parallel if multiple processors are available. -J specifies\n the desired number of processors, overriding the default --\n for example, 8.)\n\nFinal remark:\n===== =======\n\n iv. Once steps 1 and 2 are performed, one can simply type\n\n make all\n\n instead of steps 3,4,5. 
However, doing it in parallel, that\n is, "smake all", is not recommended, since the dependency file,\n "Make.depend", is being modified during this procedure.\n\n v. The command "make clean" is recommended when compiler options are\n changed. Otherwise it is unnecessary. "make depend" is\n sufficient most of the time after model source\n codes and .h files have been edited.\n}}}\n\n
{{{\nThe strict answer is "yes" and there should be no \nUNEXPLAINED differences.\n\nThis code has the capability of self-verification,\nwhich saves strings from the output from previous\nruns and AUTOMATICALLY compares them with new runs.\n\nThe most typical use of this capability is to verify\nthat there are no parallel bugs. To do so one needs to:\n\n 1. Execute the code on a single CPU, setting 1x1\n partition: NSUB_X = NSUB_E = 1 in file param.h\n\n 2. Save some of the lines of the output into the\n file named\n etalon_data.APPLICATION_NAME\n\n [see several actual files which are already\n available with the code for their specific\n format. Also briefly glance at "diag.F" where\n these files are included.]\n\n Typically I save output after the 1st, 2nd,\n 4th, 8th, 16th, 32nd, 64th, 96th, 128th, ....\n etc time step.\n\n 3. Recompile and execute the code again, still\n 1 CPU, 1x1 partition, and observe that lines\n looking like \n\n PASSED_ETALON_CHECK \n \n appear after computation passes the check\n points.\n\n\n 4. Introduce a partition, recompile the whole code \n and execute it again. If everything is correct,\n the PASSED_ETALON_CHECK lines should still be\n present.\n\n Basically this means that global integrals,\n like kinetic energy, are kept the same between\n the control "etalon" run and the test run. The\n accuracy is 11 decimal places, which is close\n to double-precision accuracy.\n\n\n 5. If something goes wrong and the results of the \n two runs differ, the difference is printed using\n a special format in which leading zeros are\n replaced with dots and the decimal point with\n a colon. 
This is done for quick visual reading:\n one can see the magnitude of the difference at a glance\n without reading it.\n\nMost of the time the difference indicates an inconsistency\nbetween the non-partitioned single-processor run and the\npartitioned one, which is most likely explained by a\nparallel bug, especially at coarse resolution.\n\n\n\nThere are, however, other reasons: for example, different\ncompiler versions may produce slightly different\nresults, \nand different implementations of intrinsic functions and \neven the optimization level may change the roundoff-level\nbehavior.\n\nThe etalon check is extremely sensitive, and actually\nspecial measures are taken to ensure consistent\ncomputation of global sums regardless of the\npartition.\nFor example, a naive code to compute the sum of squares\nof the elements of an array\n\n sum=0\n do i=1,N\n sum=sum+A(i)**2\n enddo\n\nresults in adding a small number to a large number,\nbecause sum may grow and be much larger than the \nindividual contributions. To avoid this problem,\na special algorithm --- summation by pairs --- is\nused in "diag.F" and elsewhere in this code where\nglobal summation takes place. The idea is that\nsuppose one needs to compute the sum of the elements of\narray A(i) in such a way that only comparable numbers\nare added at every stage. To do so we first add\n\n A(1) to A(2) and put the result into A(1)\n A(3) + A(4) ---> A(3)\n .....\n A(i) + A(i+1) ---> A(i) for all odd i<N\n\nThen we have an array of A(1) A(3) A(5) ... etc\nwhich is half the size. Do the same thing again:\n\n A(1) + A(3) ---> A(1)\n A(5) + A(7) ---> A(5)\netc\n\nThe result is the array A(1) A(5) A(9) ... 
etc., one\nquarter of the original size of A.\nRepeat the above again and again, until it boils\ndown to just a single point.\n\nObviously the above works with the array size being\na power of 2, but actually the algorithm can be\ngeneralized to any number of points ---- there will\nbe just a few "defects" in the reduction tree.\n\ndiag.F contains a 2D version of the reduction\nalgorithm above.\n\nHint: if the dimensions are powers of 2 and the number\nof CPUs used is also a power of 2, the above algorithm\nis ENTIRELY DETERMINISTIC, since the order of\nsummation does not depend on the number of CPUs.\n\n}}}\n
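The summation-by-pairs scheme in the quoted answer can be sketched in a few lines (a recursive 1-D version for illustration; diag.F implements a 2-D, in-place variant):

```python
def pairwise_sum(a):
    """Sum a list by repeatedly adding neighbouring pairs, so only
    partial sums of comparable magnitude are ever combined. Works for
    any length, not just powers of 2: an odd tail element passes
    through unchanged (a "defect" in the reduction tree)."""
    n = len(a)
    if n == 1:
        return a[0]
    half = [a[i] + a[i + 1] if i + 1 < n else a[i]
            for i in range(0, n, 2)]
    return pairwise_sum(half)

total = pairwise_sum([1.0, 2.0, 3.0, 4.0, 5.0])  # 15.0
```

For a fixed array size the order of additions is fixed, which is the determinism property noted in the hint.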
The ROMS/UCLA code came as roms.tar.\n\nThe steps:\n\n\n 1. Edit "cppdefs.h" and make sure ISWAKE is the only case defined\n 2. Edit "param.h" and set the grid size (360x160x20 or 720x320x20) and the number of tiles (~NP_XI and ~NP_ETA for MPI runs, ~NSUB_X and ~NSUB_E for ~OpenMP or Scalar runs)\n 3. Edit "ana_grid.F" and define ~GAUSSIAN_SLOPE_ISLAND if desired (if not defined, the default cylinder-shaped island is used). Also set the island radius to either 10 or 5 km. \n 4. Follow the instructions for building ROMS/UCLA below -- basically just edit or create an appropriate file and then edit the Makefile to include that file.\n 5. Run roms: ~OpenMP: {{{roms <}}}; MPI: {{{roms}}}\n\n\nAccording to Sasha:\n\n{{{\n\nI encorporated Charles contributions related to\nthe Island Wake problem and is able to reproduce\nhis results, while maintaining full computational\nefficiency (since I gave my code to Charles back\nin February, a lot of changes took place on both\nends, so that, for example my code now has small-\nsize boundary arrays for off-line nesting\ncapability, which it did not have at the time I\ngave it to Charles. This specifically affects the\nIsland Wake problem, because not inflow--outflow\nboundaries for Island Wake problem are implemented\nusing this capability.)\n\n...So if you interested to setup a problem run\njust to have your dual-Xeon machine running over\nweekend, we can arrange it now.\n\n\nIndependently of this, I can still work on the code\nfor some time pursuing two goals: making cleaner and\nmore flexible CPP-configuration control (purely\nlogistical task); looking to improve downstream\nboundary (i do not thing that it will change anthing\nsignificantly, but I do see some oscillations in\nvorticity field near the donstream boundary, which\nI hope I can avoid); also I think I can do something\nin spirit of partial-cells to make numerically cleaner\nimplementation of masking boundaries. 
Flow around the\ncylinder provides a nice testing problem for this.\n\nPlease let me know.\n\nWith the code I have right now I can fit 720x320x20\ncomputiational grid running it on a single-processor\n3.2 GHz Pentium 4, 1GByte memory. I am getting\nslightly more that 1 model day per 2CPU hours for this\ngrid. Scientifically interesting runs for this\nproblem are about 30 model days, or 2 days of\ncomputing.\n\n\nSasha\n}}}\n
|!machine | !cpus | !partition | !run | !time | !compiler | !flags |\n|ricsigdtlx | 1 Xeon | 1x1 | 19200 steps | 13:55 hours| ifort | FFLAGS = -O3 -ip -pc80 -tpp7 -axN -xN -auto -stack_temps -fpp2 -openmp |\n|pikmin | 1 Opteron | 1x1 | 19200 steps |14:14 hours| pgf90 | FFLAGS = -tp k8-64 -Bstatic -fastsse -Munroll=n:4 -Mipa=fast,inline |\n|pikmin | 4 Opteron | 2x2 | 19200 steps |6:16 hours| ifort | FFLAGS = -fpp2 -openmp -xW -auto -stack_temps -O3 -ip |\n|pikmin | 4 Opteron | 2x2 | 19200 steps |xxyy hours| mpif90 | |
On the NCEP RTOFS site\n\nOn the Ocean Nomads RTOFS site:\nIt directs the user to \n\nBut GRIB1 files start off the archive on 2006-05-31:\n\n\nThese early files have lon,lat as 1d vectors with uniform spacing, which is incorrect.\n\nBeginning with 2007-06-06 the file size goes from 15MB to 42MB:\n\nand we find that NetCDF-Java can't read the files, with the error:\nUnknown Grid Type : 204\nucar.grib.NoValidGribException: GDS: Unknown Grid Type : 204) is not supported.\n\n\nGRIB2 files (that NetCDF-Java reads okay) don't start until 2008-01-09:\n\n\n
NATO NURC located all of the original 10 RADARSAT CEOS images, with the exception of the image from Feb 12. Frustrating, since the Feb 12 image is one of the original 3 that we used to showcase the Bora in presentations and in the NATO internal report by Askari & Signell.\n\nLuckily, we have the original (unfiltered) NetCDF file from Feb 12 that Farid processed containing "longitude","latitude", "sigma0" and "Inc_angl". So the question is how to convert these sigma0 and incidence angles to be compatible with the ones that Jochen Horstmann is producing.\n\nFirst we compare the Jan 26 image, since we have both the unfiltered NetCDF from Farid, and the CEOS file.\n\nFor Jan 26, the dimensions of the CEOS image are: 10976, 12064, with a grid spacing of 50 m. There seems to be only 1 variable, which ranges from about 15 (black) to 255 (white).\n\nFor Jan 26, the dimensions of the NetCDF file are: 5000,5000 (darn!). But the grid spacing is 50 m, so this just must be a clipped version. The values of sigma0 range from 0 to 2.0235. The values of incidence_angle range from 23.6 to 41.7 (degrees).\n\nFor Jan 26, Jochen's file is 1799x1666, which is not 10976x12064/6. So I guess he clipped a bit also. The range of "nrcs" is from 0 to 2.2914, which seems pretty close to the NetCDF file, so that is good.\n\nTalked to Jochen finally, and he said it would be hard to believe that Farid's sigma0 is incorrect, because that's very easy to extract and calculate from the CEOS files. We decided that we will block average sigma0 to the 300 m grid (6x6 averaging), even though Jochen averaged before he calculated sigma0. Jochen says this difference should be very small. 
Then Jochen will calculate the satellite look angle and generate a file that looks like the others he produced from CEOS format.\n\n|!Date|!Quality|\n|Jan 23| Good|\n|Jan 26| Excellent (after surgery)|\n|Jan 29| Poor|\n|Jan 30| OK |\n|Feb 2| Excellent (after surgery)|\n|Feb 5| OK |\n|Feb 9| Good, but very complicated to South|\n|Feb 15| Strange|\n|Feb 16| Excellent (after surgery)|\n
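The agreed 6x6 block average of sigma0 (50 m pixels down to the 300 m grid) can be sketched as follows; trimming any ragged edge so the shape divides evenly is my assumption about handling dimensions that are not multiples of 6:

```python
import numpy as np

def block_average(a, nb=6):
    """Average a 2-D array over nb x nb blocks (50 m -> 300 m pixels
    for nb=6), trimming any ragged edge first."""
    ny, nx = (s - s % nb for s in a.shape)
    a = a[:ny, :nx]
    # fold each axis into (blocks, within-block) and average the
    # within-block axes
    return a.reshape(ny // nb, nb, nx // nb, nb).mean(axis=(1, 3))

# demo on a small synthetic sigma0 field
sigma0 = np.arange(144.0).reshape(12, 12)
sigma0_300m = block_average(sigma0)   # shape (2, 2)
```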
20,904 entries harvested by "Import THREDDS catalog"\n\n37,000 harvested when I tried creating a scheduled harvester, but metadata didn't come through.\n\nTrying again with a normal harvester:\n 0 harvest started at 8:45 am EST\n13,414 harvested by 9:51 am EST
Becca was having some trouble reading Landsat geotiff data from USGS into Matlab on a Mac, so I took a look. She was hoping to use "read_geotiff.m", but that was failing. So she gave me some files to take a look.\n\nI unzipped the 270MB (which came from into c:/rps/landsat, which made a directory with 9 geotiffs. \n\nUsing the FWTOOLS shell, I took a look at the metadata for one of the images using "gdalinfo". Note: Although FWTOOLS isn't available for MacOS, it appears that binaries for GDAL (which is all one would need) are at: <>\n{{{\nc:\sRPS\slandsat\sLE70100112010188EDC00>gdalinfo L71010011_01120100707_B10.TIF\nDriver: GTiff/GeoTIFF\nFiles: L71010011_01120100707_B10.TIF\nSize is 8851, 8281\nCoordinate System is:\nPROJCS["WGS 84 / UTM zone 22N",\n GEOGCS["WGS 84",\n DATUM["WGS_1984",\n SPHEROID["WGS 84",6378137,298.257223563,\n AUTHORITY["EPSG","7030"]],\n AUTHORITY["EPSG","6326"]],\n PRIMEM["Greenwich",0],\n UNIT["degree",0.0174532925199433],\n AUTHORITY["EPSG","4326"]],\n PROJECTION["Transverse_Mercator"],\n PARAMETER["latitude_of_origin",0],\n PARAMETER["central_meridian",-51],\n PARAMETER["scale_factor",0.9996],\n PARAMETER["false_easting",500000],\n PARAMETER["false_northing",0],\n UNIT["metre",1,\n AUTHORITY["EPSG","9001"]],\n AUTHORITY["EPSG","32622"]]\nOrigin = (369600.000000000000000,7846500.000000000000000)\nPixel Size = (30.000000000000000,-30.000000000000000)\nMetadata:\n AREA_OR_POINT=Point\nImage Structure Metadata:\n INTERLEAVE=BAND\nCorner Coordinates:\nUpper Left ( 369600.000, 7846500.000) ( 54d32'5.33"W, 70d41'20.10"N)\nLower Left ( 369600.000, 7598070.000) ( 54d11'3.95"W, 68d27'53.64"N)\nUpper Right ( 635130.000, 7846500.000) ( 47d20'14.25"W, 70d41'11.06"N)\nLower Right ( 635130.000, 7598070.000) ( 47d42'1.06"W, 68d27'45.61"N)\nCenter ( 502365.000, 7722285.000) ( 50d56'21.05"W, 69d36'32.82"N)\nBand 1 Block=8851x1 Type=Byte, ColorInterp=Gray\n}}}\nSo this is byte data, with 30 m spacing in UTM meters, so no big surprise that it didn't work 
with read_geotiff.m, which requires uniform lat/lon spacing. So let's warp the image to uniform lat/lon spacing (EPSG code 4326), which is also what Google Earth likes, using "gdalwarp":\n{{{\nc:\sRPS\slandsat\sLE70100112010188EDC00>gdalwarp L71010011_01120100707_B10.TIF -t_srs EPSG:4326 foo.tif\n}}}\nBy default, this uses "nearest" interpolation, which is usually what I want, but you can do "gdalwarp --help" to see other options.\n\nThe file "foo.tif" can then be loaded into Matlab using read_geotiff.m:\n{{{\n>> [lon,lat,z]=read_geotiff('foo.tif');\n>> imagesc(lon,lat,z)\n>> set(gca,'DataAspectRatio',[1 cos(mean(lat)*pi/180) 1]); %approx aspect ratio\n}}}\n\n\n[img[Matlab plot|]]\n[img[Google Earth overview|]]\n[img[Google Earth zoom|]]\n\n
If you have the NetCDF Toolbox installed you can run this test as is to grab the topo data from opendap, then write deflated, chunked output to NetCDF4 and read it back in.\n\nIf you want to just load a .mat file instead and test NetCDF4 writing and reading, grab the script and .mat file from \n\nand then type "netcdf4_test".\n\n{{{\n% netcdf4_test.m\n% Test writing and reading NetCDF4 with chunking and compression using\n% Native Matlab routines (tested in Matlab 2010b)\n% Rich Signell (\n\nfilename=''\nif 1\n % read data from OpenDAP using NJ Toolbox (\n url='';\n nc=mDataset(url);\n topo=nc{'topo'}(1:12:end,1:12:end);\n g=nc{'topo'}(1:12:end,1:12:end).grid;\n topo(topo<0)=0;\n lon=g.lon;\n;\n save topo.mat topo lon lat\nend\n% or load previously saved mat file\nload topo.mat\n[ny,nx]=size(topo);\nsubplot(211);pcolor(lon,lat,double(topo));shading flat;caxis([0 5000])\ntitle('Topo from Mat file');\n%%\n% write NetCDF4 with chunking & compression (deflation)\n\nncid = netcdf.create(filename,'NETCDF4');\nlatdimid = netcdf.defDim(ncid,'lat',ny);\nlondimid = netcdf.defDim(ncid,'lon',nx);\nvarid = netcdf.defVar(ncid,'topo','short',[latdimid londimid]);\nlonid = netcdf.defVar(ncid,'lon','float',[londimid]);\nlatid = netcdf.defVar(ncid,'lat','float',[latdimid]);\n\nnetcdf.defVarChunking(ncid,varid,'CHUNKED',[180 360]);\nnetcdf.defVarDeflate(ncid,varid,true,true,5);\nnetcdf.putAtt(ncid,latid,'units','degrees_north');\nnetcdf.putAtt(ncid,lonid,'units','degrees_east');\nnetcdf.putAtt(ncid,varid,'units','m');\n%netcdf.putAtt(ncid,varid,'missing_value',int16(-32767));\n\nnetcdf.putVar(ncid,lonid,[0],[nx],lon(1:nx));\nnetcdf.putVar(ncid,latid,[0],[ny],lat(1:ny));\nnetcdf.putVar(ncid,varid,[0 0],[ny nx],topo(1:ny,1:nx));\n\nnetcdf.close(ncid);\n\n% read NetCDF4 file\nncid =,'nowrite');\nvarid=netcdf.inqVarID(ncid,'topo');\ntopo2=netcdf.getVar(ncid,varid,[0 0],[ny nx]);\nnetcdf.close(ncid);\nsubplot(212);pcolor(lon,lat,double(topo2));shading flat;caxis([0 5000])\ntitle('Topo from 
NetCDF4 file');\n}}}\n\n\n
WebEx Recording and Playback\n\nPlease feel free to add your knowledge to this page, correct errors, and/or improve clarity.\n\nWebEx's network-based recording feature enables a meeting host to capture WebEx session content, both visual and audio, for later playback. This page describes the simplest (of multiple) techniques to record (at no direct cost and with no additional hardware or software) both the WebEx "shared desktop" activity and teleconference audio by "connecting WebEx" to the audio/phone bridge.\n\nThe recordings are stored on WebEx servers in arf (Advanced Recording Format), a proprietary format. They can be played back by anyone having the URL to the recording. The host is e-mailed a URL to the recording within 30 minutes after the recording is ended. This URL can be forwarded or posted to a Web site or wiki site. The host can access all recordings by logging into WebEx, clicking on the "Meeting Center" tab, and clicking on "My Recorded Meetings."\n\nArf files can be converted to Windows Media File (.wmv) format or to Flash (.swf) format through the WebEx Network Recording Player. The Player is available by logging into WebEx, clicking on the "Meeting Center" tab, and clicking on "Support", "Downloads", and "Recording and Playback".\nRecording\n\nThese instructions should work for both the RestonTalk Audio Bridge and audio bridge services.\n\n 1. Start the WebEx session\n 2. Click on "Record" in the WebEx meeting window, or click on "Meeting" and "Start Recording."\n 3. ...\n 4. On the WebEx Recorder (Record on Server) Setup window, enter the blue responses to the prompts:\n * Dial-in number: enter your audio bridge number, with any preceding 1, 8, or 9.\n (Signell free conference:
Use to reduce color depth of geotiff and other gdal files, e.g. 24 bit geotiff to 8 bit:\n\nWindows: From the FWTOOLS shell:\n{{{\nrgb2pct.bat vs_bathy_shaded.tif vs_bathy_shaded_8bit.tif\n}}}\n
Middle Ground is simply tidal forcing -- no wind, waves, etc. So refined grid should be easy. In the COAWST distribution, the closest example to this is "refined_chan". Looking at refined_chan, /raid4/rsignell/Projects/COAWST/Refined_chan\, I see:\n{{{\n Lm == 100 100 ! Number of I-direction INTERIOR RHO-points\n Mm == 5 15 ! Number of J-direction INTERIOR RHO-points\n}}}\nand \n{{{\nrsignell@nemo:/raid4/rsignell/Projects/COAWST/Refined_chan$ ncdump -h | head\n\nnetcdf refined_chan_grid {\ndimensions:\n xi_rho = 102 ;\n eta_rho = 7 ;\n}}}\n{{{\nrsignell@nemo:/raid4/rsignell/Projects/COAWST/Refined_chan$ ncdump -h | head\n\nnetcdf refined_chan_grid_ref5 {\ndimensions:\n xi_rho = 107 ;\n eta_rho = 22 ;\n}}}\nSo indeed, for parent: \n{{{\nLm = xi_rho - 2\nMm = eta_rho - 2\n}}}\nand for child it's always -7 (regardless of grid_refinement=3,5)\n{{{\nLm = xi_rho - 7\nMm = eta_rho - 7 \n}}}\n\nSo for middle ground, we have:\n{{{\nrsignell@nemo:/raid4/rsignell/Projects/COAWST/VS$ ncdump -h | head\nnetcdf mid_grid3 {\ndimensions:\n xi_rho = 256 ;\n eta_rho = 96 ;\n}}}\n{{{\nnetcdf mid_grid3_nest {\ndimensions:\n xi_rho = 490 ;\n eta_rho = 100 ;\n}}}\n\nso subtracting 2 from the parent, and 7 from the child leaves:\n{{{\n Lm == 254 483 ! Number of I-direction INTERIOR RHO-points\n Mm == 94 93 ! Number of J-direction INTERIOR RHO-points\n N == 20 20 ! Number of s_rho values (layers)\n}}}\n\n\nTo run the parent only, we must:\n1. edit vs002.h and "undef REFINED_GRID"\n2. edit the file and comment out the double grid, forcing, avg, etc files\nso we specify\n3. edit coawst.bash and change from "NestedGrids=2" to "NestedGrids=1"\n\n\n\n
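The dimension bookkeeping above can be captured in a tiny helper (the offsets, 2 for a parent grid and 7 for a refined child, are as inferred from the refined_chan example, independent of the refinement ratio):

```python
def interior_points(xi_rho, eta_rho, child=False):
    """Lm, Mm for the ROMS .in file from the grid-file dimensions:
    subtract 2 for a parent grid, 7 for a refined child (as inferred
    from the refined_chan example above)."""
    off = 7 if child else 2
    return xi_rho - off, eta_rho - off

# refined_chan: parent 102x7 -> (100, 5); child 107x22 -> (100, 15)
# middle ground: parent 256x96 -> (254, 94); child 490x100 -> (483, 93)
parent = interior_points(256, 96)
child = interior_points(490, 100, child=True)
```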
from Norm Vine:\n\nI can mod a few files to give you my additions for histogram massaging etc\nI can get you setup sometime \n\nbut you can play as is see the "-align similarity" command \nbut probably want to play with the affine and perspective options too\n\n\nbut don't add the images together and always allign against the same image\ne.g.\n{{{\nImageStack -load a.jpg -load b.jpg -align similarity -save b_X.jpg\n}}}\n\nyou can use attached as a guide to how I balanced the 'palette' as\nyou don't have my histoadapt operator so you can use something like\n \n-gamma X.X \s\n-eval "Y.Y*[c]" \s\n-eval "(0.5/mean())*[c]"\n\nwhere X.X and Y.Y are appropriate scalars\nuse your imagination imagestack is a cool toy\n\nuse -display instead of -save while debugging\n\n\n\nI use python to generate the scripts that get passed to ImageStack using\nstring templates
!~ImageStack\n\nNorm first pointed me at ImageStack (, which is not python, but a command line driven executable that can align images via commands like this:\n{{{\nc:\sprograms\sImageStack\sImagestack.exe -load b105\sBTV_TIFF_001.tif \s\n-load b105\sBTV_TIFF_020.tif -align similarity -save .\sfix\sBTV_TIFF_020.tif\n}}}\nThis worked great for the 5 images that Norm tried, but not so great for the 20 images I tried where there was a cable waving around. Plus I'd like a python based approach where I know what I'm doing.\n\n!ICP\n\nNorm also pointed me toward ICP:\n\nwhich seeks to register two images on disk. This is pure python, but requires loading images from disk, which seems stupid, and also doesn't seem to output the transformation matrix, which we need because we are going to register images that have all the ripples masked out, so that only the outlines of the images remain, and then use the transformation matrices to affine transform the original non-masked images.\n\n!ITK \nGoogled this one. The C++ based ITK ( and Python bindings in WrapITK ( are extensive image processing tools by Kitware, with code to do affine registration here:\n\n!IRTK (Image Registration Toolkit)\n\nThis isn't python at all, and has no C bindings. But here the transformation matrix can be obtained as an output, whereas ImageStack didn't seem to do this. The info on the affine transformation to register two images is given here:\n\nIf we use this, we need to mention: "The Image Registration Toolkit was used under Licence from Ixico Ltd." \nIf we use the affine registration, we should reference:\n"C. Studholme, D.L.G. Hill, D.J. Hawkes, An Overlap Invariant Entropy Measure of 3D Medical Image Alignment, Pattern Recognition, Vol. 32(1), Jan 1999, pp 71-86."\n
Stevens complained that there was a lot of access to their site from\nIt turned out that these were remote catalog requests, many from robots, like this one:\n\n\nBy looking at the error logs in\n/Volumes/web/coast-enviro/logs\n(such as ce_error_log.1288224000)\nI found out that something was looking for robots.txt in\n/Volumes/web/coast-enviro/www_root\nso I added a robots.txt file there with the following lines to block robots:\n{{{\nUser-agent: *\nDisallow: /\n}}}\n\nAlso by checking the access logs in this directory\n/Volumes/web/coast-enviro/logs\nit appears that many requests were indeed from robots, which were issuing remote catalog requests like\n{{{\n214.25.31.250 - - [28/Oct/2010:09:18:49 -0400] "GET /thredds/remoteCatalogService?catalog= HTTP/1.0"
/***\n| Name:|RenameTagsPlugin|\n| Description:|Allows you to easily rename or delete tags across multiple tiddlers|\n| Version:|3.0 ($Rev: 1845 $)|\n| Date:|$Date: 2007-03-16 15:19:22 +1000 (Fri, 16 Mar 2007) $|\n| Source:||\n| Author:|Simon Baird <>|\n| License||\nRename a tag and you will be prompted to rename it in all its tagged tiddlers.\n***/\n//{{{\nconfig.renameTags = {\n\n prompts: {\n rename: "Rename the tag '%0' to '%1' in %2 tidder%3?",\n remove: "Remove the tag '%0' from %1 tidder%2?"\n },\n\n removeTag: function(tag,tiddlers) {\n store.suspendNotifications();\n for (var i=0;i<tiddlers.length;i++) {\n store.setTiddlerTag(tiddlers[i].title,false,tag);\n }\n store.resumeNotifications();\n store.notifyAll();\n },\n\n renameTag: function(oldTag,newTag,tiddlers) {\n store.suspendNotifications();\n for (var i=0;i<tiddlers.length;i++) {\n store.setTiddlerTag(tiddlers[i].title,false,oldTag); // remove old\n store.setTiddlerTag(tiddlers[i].title,true,newTag); // add new\n }\n store.resumeNotifications();\n store.notifyAll();\n },\n\n storeMethods: {\n\n saveTiddler_orig_renameTags: TiddlyWiki.prototype.saveTiddler,\n\n saveTiddler: function(title,newTitle,newBody,modifier,modified,tags,fields) {\n if (title != newTitle) {\n var tagged = this.getTaggedTiddlers(title);\n if (tagged.length > 0) {\n // then we are renaming a tag\n if (confirm(config.renameTags.prompts.rename.format([title,newTitle,tagged.length,tagged.length>1?"s":""])))\n config.renameTags.renameTag(title,newTitle,tagged);\n\n if (!this.tiddlerExists(title) && newBody == "")\n // dont create unwanted tiddler\n return null;\n }\n }\n return this.saveTiddler_orig_renameTags(title,newTitle,newBody,modifier,modified,tags,fields);\n },\n\n removeTiddler_orig_renameTags: TiddlyWiki.prototype.removeTiddler,\n\n removeTiddler: function(title) {\n var tagged = this.getTaggedTiddlers(title);\n if (tagged.length > 0)\n if 
(confirm(config.renameTags.prompts.remove.format([title,tagged.length,tagged.length>1?"s":""])))\n config.renameTags.removeTag(title,tagged);\n return this.removeTiddler_orig_renameTags(title);\n }\n\n },\n\n init: function() {\n merge(TiddlyWiki.prototype,this.storeMethods);\n }\n}\n\nconfig.renameTags.init();\n\n//}}}\n\n
To restart the IPython notebook:\n\nFind the PID:\n{{{\nps aux | grep notebook | grep -v parent\n}}}\n\nkill the PID, then restart the notebook with:\n\n{{{\nsource /home/epifanio/dev/src/wython/bin/activate\ncd /home/epifanio/\nnohup ipython notebook &\n}}}\n\nWe are using the netcdf4 pkg from svn, so to update it:\n\nFrom epifanio's home:\n{{{\nsource /home/epifanio/dev/src/wython/bin/activate\ncd dev/src/\nls \n}}}\n(this lists the libs I've installed from src)\n\n{{{\ncd netcdf4-python\nsvn up\npython setup.py install\n}}}\n
Log in as "usgs":\n{{{\nsudo -i \nsu - usgs\n}}}\nkill all ipython running as usgs:\n{{{\npkill -9 ipython\n}}}\nfind the process to kill:\n{{{\nps aux | grep notebook | grep usgs | grep -v parent\n}}}\nkill -9 the process, and then start it up in the right directory:\n{{{\ncd /usgs/data2/notebook\nnohup ipython notebook --script &\n}}}\n\nOther random stuff:\nWe originally found the ip address by doing this:\n{{{\nsudo ifconfig -a \n}}}\nand looking for the eth0 entry.\n\nThen used it here to test the notebook: \n{{{\nipython notebook --ip="
It seems that the time to compile ROMS is getting longer and longer.\nOn a single processor, the default build.bash takes:\nROMS Rutgers SVN version 119: 9.85 minutes, build.log has 1,123 lines\nROMS Rutgers SVN version 72: 6.68 minutes, build.log has 1,567 lines\n
[note: Roy also told me they do not restart tomcat on a regular schedule. They monitor the server with jconsole and watch the memory and number of threads. When they start approaching 90% of max, they restart tomcat.]\n\nHi Rich:\n\nOn one machine we just use:\n\nJAVA_OPTS='-server -Xms1500m -Xmx1500m'\n\nIt runs rock solid and is used heavily. It however does not have any remote catalogs, in particular ones that at times produce errors. Unidata has never been able to duplicate it, but we find the combination of those two things appears to make TDS start to throw errors, and the garbage collection does not clean those up well. Then issues arise with tomcat being unable to create new threads.\n\nOn another machine which has a lot of external links we have changed settings over time with new versions of TDS; we are presently using\n\nJAVA_OPTS='-server -Xms1000m -Xmx1500m -XX:MaxPermSize=128m -Xmn256m -XX:SurvivorRatio=16'\n\nResults still are not as good as we would like. I used to have other settings for the garbage collection but I can't remember what they are. I am not certain they make a difference - there was a period where we had a lot of instability and then we added the garbage stuff and things were stable for a while, but it does seem to change with each version. Running Java 1.6 and Tomcat 6 seems to matter also. We have noticed that, say, when NCDC becomes unreachable then the errors from that seem to pile up and soon we cannot create a new thread and the entire system breaks.\n\nHere is one of the pages I started with (I searched under "tuning java virtual machine") and followed a lot of the links:\n\n\n\nHTH,\n\n-Roy
It looks like the run time for LTRANS is proportional to the size of the input grid, based on a simple test I did.\n\nThe original SABGOM grid and history file are 440x320. I used Dave Robertson's bash script that uses NCO to cut a subset. See: [[Cutting a ROMS file]]\n\nWe cut both the grid and history file to 84x63, limiting the grid to a region about 40 km from the spill location (since we were just playing with short term simulations of a few days anyway). The actual ncks command that got executed by Dave's script was:\n{{{\n ncks -F -d\n xi_rho,149,211 -d eta_rho,198,281 -d xi_u,149,210 -d eta_u,198,\n281 -d xi_v,149,211 -d eta_v,198,280 -d xi_psi,149,148 -d eta_ps\ni,281,280 ../\sn",\n}}}\n\nThe result was impressive: the run using the whole grid took 24 minutes, while the subset grid run took 40 seconds!\n\ntotal grid cells to subset grid cells: (440*320)/(84*63) = 26.6\ntotal grid runtime to subset grid runtime: (60*24)/40 = 36\n\nI'm not sure why it was even faster than one would expect from scaling with the number of grid cells - perhaps the smaller sizes fit in cache better.\n{{{\n[rsignell@CentOS5 thredds]$ uname -a\nLinux CentOS5 #1 SMP Mon Aug 18 15:15:18 PDT 2008 i686 athlon i386 GNU/Linux\n}}}\ndownloaded and installed java 1.6u18 for 32 bit linux\n{{{\n/usr/sbin/alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_18/bin/java 1\n}}}\ndownloaded and installed apache\n{{{\n 159 wget\n 161 tar xvfz httpd-2.2.14.tar.gz\n 164 ./configure --enable-proxy --enable-proxy-ftp --enable-proxy-http\n 165 make >& make.log &\n 166 tail -f make.log\n 167 make install\n 168 /usr/local/apache2/bin/httpd -l\n\ncd /usr/local/apache2/conf\ntail httpd.conf\n\n<IfModule mod_proxy.c>\nProxyRequests On\nProxyPreserveHost On\n<Location /thredds>\nProxyPass http://localhost:8080/thredds\nProxyPassReverse http://localhost:8080/thredds\n</Location>\n</IfModule>\n\n\nTiming on Blackburn:\n>> url2='';\n>> tic;nc2=mDataset(url2);toc\nElapsed time is 15.708747 seconds.\n>> 
tic;t=nc{'temp'}(1,1,:,:);toc\nElapsed time is 3.454730 seconds.\n>> tic;t=nc{'temp'}(1,1:10,:,:);toc\nElapsed time is 8.854005 seconds.\n>> tic;t=nc{'temp'}(1,:,:,:);toc\nElapsed time is 38.424019 seconds.\n\nTiming on Cloud with Apache ProxyPass forwarding via port 80:\nurl1='';\n>> tic;nc1=mDataset(url1);toc\nElapsed time is 61.369901 seconds.\n>> tic;t1=nc1{'temp'}(1,1,:,:);toc\nElapsed time is 5.063409 seconds.\n>> tic;t1=nc1{'temp'}(1,1:10,:,:);toc\nElapsed time is 24.133561 seconds.\n>> tic;t1=nc1{'temp'}(1,:,:,:);toc\nElapsed time is 70.926297 seconds.\n\n/usr/local/apache2/bin/apachectl start\n}}}\n\nTesting transfer from blackburn to cloud, at 7:00 am EDT Saturday morning:\n{{{\ 100% 1934MB 1.5MB/s 21:25\n}}}\n\nTesting again access of data from cloud using THREDDS via port 80 with ProxyPass to 8080.\n{{{\n>> cloud_timing\nTiming on\nRead metadata\nElapsed time is 10.145905 seconds.\nRead 1 layer\nElapsed time is 0.580025 seconds.\nRead 10 layers\nElapsed time is 5.833321 seconds.\nRead 40 layers\nElapsed time is 11.253278 seconds.\nTiming on ENKI cloud with ProxyPass 80=>8080\nRead metadata\nElapsed time is 76.312388 seconds.\nRead 1 layer\nElapsed time is 6.450826 seconds.\nRead 10 layers\nElapsed time is 55.586982 seconds.\nRead 40 layers\nElapsed time is 82.045666 seconds.\n}}}\nSo about 8 times slower to access data from Woods Hole.\n\nGave to Dave Thompson at JPL to try out and see what timings he gets. Will also get port 8080 opened to see if that helps.\n\ndownloaded and installed tomcat 6.0.20 in /usr/local/tomcat\nstarted as rsignell\n\nTiming from USGS in Woods Hole. 
Interesting that using ProxyPass to access 8080 via port 80 doesn't seem to slow things down at all:\n{{{\nTiming on\nRead metadata\nElapsed time is 2.198298 seconds.\nRead 1 layer\nElapsed time is 0.202833 seconds.\nRead 10 layers\nElapsed time is 1.330647 seconds.\nRead 40 layers\nElapsed time is 2.186299 seconds.\nTiming on ENKI cloud with ProxyPass 80=>8080\nRead metadata\nElapsed time is 16.030789 seconds.\nRead 1 layer\nElapsed time is 0.918176 seconds.\nRead 10 layers\nElapsed time is 9.710402 seconds.\nRead 40 layers\nElapsed time is 14.333925 seconds.\nTiming on ENKI cloud with port 8080\nRead metadata\nElapsed time is 15.899745 seconds.\nRead 1 layer\nElapsed time is 0.898271 seconds.\nRead 10 layers\nElapsed time is 8.399548 seconds.\nRead 40 layers\nElapsed time is 13.691210 seconds.\n}}}
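The remote-vs-local slowdown implied by the timings above can be pulled out with a couple of lines. The numbers are copied (rounded) from the local and ProxyPass runs just listed; the dict layout itself is only for illustration:

```python
# Timings in seconds, read off the log above: local reads vs the same
# reads against the ENKI cloud through the ProxyPass.
local   = {"metadata": 2.198, "1 layer": 0.203, "10 layers": 1.331, "40 layers": 2.186}
proxied = {"metadata": 16.031, "1 layer": 0.918, "10 layers": 9.710, "40 layers": 14.334}

# Slowdown factor for each read pattern (roughly 4x-8x over the WAN).
slowdown = {k: round(proxied[k] / local[k], 1) for k in local}
print(slowdown)
```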
Following the WRF Users Guide:\n\n\nGetting MET Forcing: \nWent to\n\nand used the "ftp4u" service to obtain subsetted GRIB files from NAM\nused bounds [-
{{{\n$ ssh -Y\nWarning: No xauth data; using fake authentication data for X11 forwarding.\nLast login: Thu Mar 8 11:35:55 2012 from\n[root@testbedapps ~]# /usr/java/jdk1.7.0_02/bin/jvisualvm\n}}}
Rutgers \nssh\nssh fedallah
Here is the query:\n\n<,36,144,40&observedproperty=sea_water_salinity&responseformat=text/csv&eventtime=2011-03-28T00:00Z/2011-04-11T00:00Z>\n\nbut it doesn't work from outside for just gliders and ships; note this is network=all.\n\nCurrently we offer to the outside a single glider or ship by id and by specific time range etc. You are getting a sneak peek from the url above of what is coming in the near future, which will include gliders and ships.\n\nBTW in our production services a bounding box query as above returns no obs in that bounding box from any platform type.\n
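For scripting against this service, a query like the one above can be assembled with urllib rather than pasted by hand. The base URL below is a placeholder (the real host is elided in the note), and only parameters visible in the query string above are used:

```python
from urllib.parse import urlencode

# Placeholder endpoint -- the real SOS host is elided in the note above.
BASE_URL = "http://example.org/sos"

# Parameter names copied from the query string above.
params = {
    "observedproperty": "sea_water_salinity",
    "responseformat": "text/csv",
    "eventtime": "2011-03-28T00:00Z/2011-04-11T00:00Z",
}

query_url = BASE_URL + "?" + urlencode(params)
print(query_url)
```

The CSV response could then be fetched with `urllib.request.urlopen(query_url)` and fed to `csv.reader`.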
Guns, Germs, and Steel\nSex, Bombs and Burgers...\n\n...Wire, Glue and Grandpa\n\nHow about elevating/redefining "Community Impact" on RGE. Letters from external community to tell the story about impact. \n\nGet a NRC panel to recommend a strategy. $50K. \n\nKevin:\nGeologic Mapping was worried about being under Core Science Systems, that science in mapping might be lost.\n\n
\nThe tomcats (one for each app) are at\n/var/www/...\n\nThe tomcat for THREDDS is\n/var/www/apache-tomcat-6.0.29/\n\nThe tomcat for ncWMS is\n/var/www/apache-tomcat-6.0.29-ncwms\n\nThe tomcat for RAMADDA is\n/var/www/apache-tomcat-6.0.29-ramadda\n\nThe RAMADDA repository is at\norig: /home/tomcat/.unidata/repository/derby/repository\nnow: /data/ramadda/derby/repository\n\nThe tomcat for ERDDAP is \n/var/www/apache-tomcat-6.0.29-erddap\n\naccessed by users at: \n\n\n\n\n\nSample datasets for CI are at\n/data/ftp/upload/Catalog\n\nUser uploaded datasets are at:\n/data/ftp/upload/Shelf_Hypoxia\n/data/ftp/upload/Inundation\n/data/ftp/upload/Estuarine_Hypoxia\n\n
These runs were performed on Pikmin:~rsignell/swan\nResults transferred to /mudpile/swan and converted into ROMS forcing\n\n|!run | !met | !gen | !WCAP settings | !other |\n|swan01 | COAMPS | GEN2 | $WCAP (default whitecapping) | no refraction |\n|swan02 | COAMPS | GEN2 | $WCAP KOM 2.36E-05 3.02E-
Example of ADCP data with extra variables\n\n\nSimple seacat file\n
This URL returns a KMZ animation for just these times from the WMS:\n{{{\n,2009-12-09T00:00:00.000Z,2009-12-10T00:00:00.000Z,2009-12-11T00:00:00.000Z&TRANSPARENT=true&STYLES=BOXFILL%2Frainbow&CRS=EPSG%3A4326&COLORSCALERANGE=0%2C5&NUMCOLORBANDS=254&LOGSCALE=false&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&EXCEPTIONS=XML&FORMAT=application/
Sasha says there are 3 factors that compete in determining the optimal tiling:\n* vectorization\n* communication\n* fitting in cache\n\nWe want long strips along the "I" direction, since this is the fastest varying dimension (inner loop). But very long thin strips require lots of communication. Also we want to fit in cache. These things compete, but as a rough rule of thumb, strips of about 100x6 (100 in I, 6 in J) are often optimal, and thus the tiling should be designed accordingly. So for a 100x30 grid (I=100, J=30), the tiling should be \nNtileI=1\nNtileJ=5\n
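That rule of thumb can be turned into a tiny helper. This is just a sketch of the heuristic (the 100x6 target strip and the function itself are not from the ROMS source):

```python
def suggest_tiling(i_size, j_size, strip_i=100, strip_j=6):
    """Pick (NtileI, NtileJ) so each tile is roughly a strip_i x strip_j
    strip: long in I (the fast, vectorizable inner loop), short in J."""
    ntile_i = max(1, round(i_size / strip_i))
    ntile_j = max(1, round(j_size / strip_j))
    return ntile_i, ntile_j

# The 100x30 example above: one tile across I, five tiles across J.
print(suggest_tiling(100, 30))
```

In practice the tile counts also have to divide the number of MPI processes, so this only gives a starting point for timing experiments.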
Looking at vorticity with NCVIEW:\n\n{{{\nGet:\n\nand\n\n}}}\nThe former is a "compile once, use forever" operator function\nwhich computes the vorticity of a ROMS solution stored as a sequence of\nnetCDF files. Just untar it, say "make" within that directory, then place the\nexecutable file "vort" into your bin directory. After that you can say\n{{{\n vort wake_his*.nc\n}}}\nand, one minute later,\n{{{\n ncview\n}}}\nThe other file is ncview slightly patched by me to insert color palettes I like into the code,\nand to adjust its defaults, so that I have to do less mouse clicking later. It fully retains all the other\nfunctionality of David W. Pierce's ncview. You may decide to recompile ncview (it is\nas easy as configure -> make -> make install), or just use the palettes,\n{{{\n bright.ncmap\n rainbow.ncmap\n}}}\n[note, in my patched code I also have corresponding C headers colormaps_bright.h and\ncolormaps_rainbow.h along with changes in the main code]\n\nIf you have both of them working, you can get a running movie of vorticity from a ROMS\nsolution within one minute.
/***\n| Name|SaveCloseTiddlerPlugin|\n| Description|Provides two extra toolbar commands, saveCloseTiddler and cancelCloseTiddler|\n| Version|3.0 ($Rev: 2134 $)|\n| Date|$Date: 2007-04-30 16:11:12 +1000 (Mon, 30 Apr 2007) $|\n| Source||\n| Author|Simon Baird <>|\n| License||\nTo use these you must add them to the tool bar in your EditTemplate\n***/\n//{{{\nmerge(config.commands,{\n\n saveCloseTiddler: {\n text: 'done/close',\n tooltip: 'Save changes to this tiddler and close it',\n handler: function(e,src,title) {\n config.commands.saveTiddler.handler(e,src,title);\n config.commands.closeTiddler.handler(e,src,title);\n return false;\n }\n },\n\n cancelCloseTiddler: {\n text: 'cancel/close',\n tooltip: 'Undo changes to this tiddler and close it',\n handler: function(e,src,title) {\n config.commands.cancelTiddler.handler(e,src,title);\n config.commands.closeTiddler.handler(e,src,title);\n return false;\n }\n }\n\n});\n\n//}}}\n\n
From Brian Sibley:\n\nGo to:\n"LAYERS"\n"Manage Seafloor Databases"\n(look now at bottom left)\n"create"\nSet for a resolution of 5.6 meters (or greater if you wish; remember GPS accuracy)\nHit "OK" and answer "yes" to shift bottom calculation\n\n\nThe layer that is currently recording will have "active" beside it.\nIt will create new data as if it was the first time being used. Now you can flip back and forth between datasets to view them, but still record on the new one.\n
Get an "account" from Racheal that makes a "template" with your initials for you on the scanner. This way only your scanned images are available to you by your initials.\n\nOnce you have an account, hit the "Custom 2" button to bring up the scanning functions. The instructions on the top left indicate that you need to select a template. Scroll down until you see your template (your initials), and then touch the screen to select that template. Make any changes (like 2-sided, etc). Then just press the "Start" button.\n\nThen watch this WebEx recording to see how to\nget your stuff off. The video shows just going to my favorites, but the "WebScan" URL is\nhttp://
Here's what I did to install Scientific Python similar to Enthought and Python(x,y) on Ubuntu.\nI added the spyder package also to give a Matlab-like GUI interface.\nI had to add Adrian Benson's personal package archive to get NetCDF4. I was happy to find that his NetCDF4 has OPeNDAP support built in!\n{{{\nsudo add-apt-repository ppa:adrian-m-benson/ppa\nsudo apt-get update\nsudo apt-get install python-netcdf4 python-matplotlib python-numpy python-scipy spyder\n}}}\n\nTo install CGAL python:\n{{{\nsudo apt-get install libcgal*\nsudo apt-get install libboost*\n}}}\nThere was a problem with Lazy_exact_nt.h and other files not being up to date in the CGAL Ubuntu distro, so I updated it with one from Rusty Holleman:\ncp ~rsignell/Downloads/Lazy_exact_nt.h .\n\ntar xvfz cgal-python-0.9.4-beta1-RH.tar.gz\ncd cgal-python/\npython setup.py build\n\nFrom Adrian Benson on building NetCDF4-Python:\n\nMy repo is really just stuff I compile for myself and 'share with the group' as it were.\n\nTo see what distributions it will work with, filter the ppa listing and see if your version is there.\nI have moved my packages to Oneiric, but I uploaded a Natty and a Maverick version of the latest netcdf4\npackage. Not sure when they will finish building (if at all).\n\nIf it all goes okay, your colleague should be able to add the repo as you mentioned below.\npython-netcdf4 will be the only package they can see.\n\nOn a more general note, it is often easier to simply download a deb directly and install it from the command line\n(i.e. sudo dpkg -i xxxx.deb or sudo gdebi xxx.deb). Whether it works or not is another question.\n\nIf they have to build the package from scratch, then they should\n{{{\n1. install the following development packages\nsudo apt-get install python-all-dev libnetcdf-dev python-numpy zlib1g-dev libhdf5-serial-dev\n2. download and extract the latest source (\n3. build it\npython setup.py install --user (in $HOME/.local, i.e. 
personal only)\nor\nsudo python setup.py install (in /usr/local, system-wide but will\nnot interfere with ubuntu stuff)\n}}}
Okay, you could use RAMADDA, but here's a solution using GI-CAT, which can also harvest from many other sources, not just the TDS. \n\nFor providers:\n1. Make sure you are using TDS version 4.2.6 or higher\n2. Follow the instructions on this page for setting up ncISO and\nadding the ncISO service to your THREDDS catalogs:\n\n3. Install GI-CAT from\n(just download the .war file and deploy it like the TDS)\n4. Watch this youtube video as a quick start for how to set up GI-CAT\nto harvest from THREDDS catalogs:\n\nFor consumers:\n1. MATLAB: opensearch.m file to query GI-CAT and return OPeNDAP URLs\n
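A python consumer can do something similar to the MATLAB opensearch.m helper: fetch the Atom feed that an OpenSearch query returns and keep the OPeNDAP links. The feed below is a made-up minimal example (a real GI-CAT response carries more entries and metadata), so the exact tag layout is an assumption:

```python
import xml.etree.ElementTree as ET

# Made-up minimal Atom response; only the general shape mirrors what an
# OpenSearch endpoint returns.  The host and dataset path are fictional.
ATOM = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>vs002 history files</title>
    <link rel="alternate" title="opendap"
          href="http://example.org/thredds/dodsC/vs/vs002/his"/>
  </entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

def opendap_links(feed_xml):
    """Return the href of every link advertised with title='opendap'."""
    root = ET.fromstring(feed_xml)
    return [link.get("href")
            for link in root.findall(".//atom:link", NS)
            if link.get("title") == "opendap"]

urls = opendap_links(ATOM)
print(urls)
```

In real use `ATOM` would come from `urllib.request.urlopen()` on the GI-CAT OpenSearch URL, and the resulting URLs go straight into any OPeNDAP-aware client.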
All the tiles for the NOAA Coastal Relief Model exist at WHSC on custer in Arc .adf format. Our strategy here is to merge all the tiles for each volume into a single GeoTIFF using gdal_merge from the FWTOOLS, and then convert to NetCDF. The final step is to add lon/lat values via the THREDDS Data Server (TDS) catalog so that they can be served as CF-1.0 compliant data sets. \n\nThe method I've used is a bit convoluted, but it works. \n\nOn a PC at WHSC, first mount \s\scusterdtxp\ssharedata as drive x.\n\nThen fire up a cygwin window and execute this "do_make_bat_files" script, which creates a batch file for each volume that will be used to assemble the tiles. \n{{{\n#!/bin/bash\n#\n# DO_MAKE_BAT_FILES\n#\n# Step 1: mount \s\scusterdtxp\ssharedata\scrmgrd as drive X:\n# find /cygdrive/x/crmgrd/volume1 -name w001001.adf volume1.list\n# find /cygdrive/x/crmgrd/volume2 -name w001001.adf volume2.list\nfor num in
The tinyurl for this page is:\n\nPeter Schweitzer pointed me at some magnetic anomaly data for Wyoming at\n\nas a sample of USGS gridded data on the web that might be amenable to delivering via the THREDDS Data Server. So here's a brain dump of what I did to get it going with the TDS.\n\nThe data files are available in ARC ASCII, ERM, GXF and ODDF format. I went with the GXF since it looked like it had the most metadata about the projection. \n\nI used the FWTOOLS tool "gdal_translate" to convert from GXF to NetCDF:\n{{{\ngdal_translate magfinal.gxf\n}}}\n\nBecause the GXF used a non-standard form and because gdal_translate does not produce CF-compliant NetCDF, I then created an NcML file (by hand) that allows the netcdf file to be CF compliant:\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<netcdf xmlns="" \n location="C:/RPS/cf/peter/WY/">\n <attribute name="Conventions" value="CF-1.0" />\n <variable name="Lambert" shape="" type="char">\n <attribute name="grid_mapping_name" value="lambert_conformal_conic" />\n <attribute name="standard_parallel" type="float" value="33.0 45.0" />\n <attribute name="longitude_of_central_meridian" type="float" value="-
Start:1030EDT\n\nThe PRISM climate data is available from the web site as individual files: one ARC ASCII Grid for each variable for each year. My brother Steve Signell, who does GIS work for the State of New York and the Adirondack Park, had already downloaded most of the files and arranged them in directories by decade. So as a test, he sent me two decades of data (using DropSend!):\n{{{\\\n}}}\nwhich I unzipped into c:/rps/cf/prism. The individual file names are like\n{{{\n/cygdrive/c/rps/cf/prism/ppt/
This page can be reached directly at: ''''\n\n1. Get an account on ''''. Send e-mail to Greg Miller at and attach a copy of your '''' file (your SSH public key). If you don't know what this means, Greg can tell you.\n\n2. Make a directory on ''coast-enviro'' for the NetCDF files you want to serve, for example ''/Volumes/models/vs/vs002/his'' for Vineyard Sound Run2 history files. If you don't have a line like ''umask 0002'' in your ''~/.bash_profile'' on ''coast-enviro'', do a ''chmod 755'' on the directory to make it accessible to web folks, or ''chmod 777'' to make it accessible to web folks and writable by others in your group.\n\nExample:\n\nOn coast-enviro:\n{{{\nmkdir /Volumes/models/vs/vs002/his\nchmod 777 /Volumes/models/vs/vs002/his\n}}}\nOn Pikmin:\n{{{\nscp -p /home/rsignell/roms_sed/vs/*his_*.nc\n}}}\nCheck to make sure that the files you just copied are readable by world, and if not, do a ''chmod 644 *.nc''.\n\n3. Modify the THREDDS catalog that points to the data. The catalogs live on coast-enviro at:\n{{{\n/usr/local/tomcat/content/thredds\n}}}\nThere are multiple catalogs, each for a collection of data. For example, the Vineyard Sound model runs are in the vineyard_sound_catalog.xml catalog file. If you are making a new catalog for a project, copy an existing catalog (like ''vineyard_sound_catalog.xml'') to use as a template, and when done modifying, add the name of the new catalog in the ''threddsConfig.xml'' file. The directory where the catalogs live and the catalogs themselves should all be writeable by anyone in the group ''models''.\n\nEdit the catalog file, adding datasets, and collections of datasets if desired. 
For example, the collection of Vineyard Sound Runs looks like:\n{{{\n<dataset name="Vineyard Sound Model Runs" ID="vs">\n\n<dataset name="vs001: Middle Ground, 3 day run" ID="vs/vs001/his" urlPath="vs/vs001/his">\n <documentation type="summary">3 day run of Middle Ground, used for testing </documentation>\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/Volumes/models/vs/vs001/his/" suffix=".nc"/>\n </aggregation>\n </netcdf>\n</dataset>\n\n<dataset name="vs002: Middle Ground, Spring/Neap run" ID="vs/vs002/his" urlPath="vs/vs002/his">\n <documentation type="summary">Full 28 day simulation of Middle Ground</documentation>\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/Volumes/models/vs/vs002/his/" suffix=".nc"/>\n </aggregation>\n </netcdf>\n</dataset>\n\n</dataset>\n}}}\nYou need to change the parts that describe your dataset and then specify the location of the files you placed on coast-enviro. The ID is a unique identifier for THREDDS, and the urlPath will be what shows up at the end of the URL in the web browser (described in Step 5). \n \nThe ''dataset name='' parts determine how the dataset appears if you go directly to the THREDDS server. For example, if you go to: '''' you will get the default catalog.xml, and you will see ''Vineyard Sound Model Runs'' and under that, ''vs002''. The "metadata" that you want to use to describe how the run differs from other runs may be put in the dataset name and in the <documentation> tag.\n \n4. When you are done modifying the catalog (and the threddsConfig.xml file, if you created a new catalog), reload the THREDDS server by using the web-based Tomcat Manager at\nI can give you the admin username and password to do this over the phone.\n\n5. The data should be now served by THREDDS, available at\n\n6. 
If things are not working, look at the end of these logs:\n{{{\\\n}}}\nGood luck,\nRich
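If you add runs often, the boilerplate `<dataset>` entry from the catalog example above can be stamped out from a template. This is just a convenience sketch; the run name, ID, and scan path below are made up, and you would still paste the result into the catalog by hand:

```python
from string import Template

# Template mirroring the <dataset> entries shown above.  The NcML
# namespace is the standard Unidata one (elided in the catalog excerpt).
DATASET = Template("""<dataset name="$name" ID="$id" urlPath="$id">
  <serviceName>ncdods</serviceName>
  <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
    <aggregation dimName="ocean_time" type="joinExisting">
      <scan location="$location" suffix=".nc"/>
    </aggregation>
  </netcdf>
</dataset>""")

# Hypothetical new run -- substitute your own name, ID, and directory.
entry = DATASET.substitute(
    name="vs003: Middle Ground, storm run",
    id="vs/vs003/his",
    location="/Volumes/models/vs/vs003/his/",
)
print(entry)
```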
Note: This is specific for serving ROMS or other NetCDF data through servers in CMG Woods Hole.\n \n1. Get an account on Send e-mail to Greg Miller at and attach a copy of your SSH public key. If you don't know what this means, Greg can tell you. I recommend getting "putty" and "puttygen" for SSH under windows, and "openssh" for SSH under cygwin or Linux.\n\n2. Make a directory on stellwagen under the /Volumes/models directory. If you have a group of files that you want to aggregate, put them in a separate directory. For example, the Adriatic Sea model run sed038 generates 68 averaged files, 68 history files and 68 diagnostic files. I make three directories: \n{{{\n/Volumes/models/adria/roms_sed/sed038/his\n/Volumes/models/adria/roms_sed/sed038/avg\n/Volumes/models/adria/roms_sed/sed038/dia\n}}}\nand put all the history files in "his", averages files in "avg" and diagnostics files in "dia". This is to facilitate the automatic generation of THREDDS catalog entries, as will be illustrated below.\n\n3. Use "scp" or "sftp" to copy your data to stellwagen, putting your model data into the directory or directories you created. \n\nExample:\n\nFirst on Stellwagen:\n{{{\nmkdir /Volumes/models/adria/roms_sed/sed036/his\nchmod 755 /Volumes/models/adria/roms_sed/sed036/his \n}}}\nThen on Pikmin:\n{{{\nscp -p adria03_sed038_his_*.nc\n}}}\n\n4. Modify the THREDDS catalog that points to the data. \n\n* Get a copy of the existing THREDDS server catalog (catalog.xml) from stellwagen, and make a backup copy.\n{{{\nscp .\ncp catalog.xml catalog.xml.backup\n}}}\n* Edit the "catalog.xml" file, adding datasets, and collections of datasets if desired. 
For example, the collection of Adriatic Sea runs look like:\n{{{\n<dataset name="Adriatic Model Runs" ID="ROMS">\n <dataset name="sed038: COAMPS forcing,SWAN" ID="adria/roms_sed/sed038" urlPath="adria/roms_sed/sed038">\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/Volumes/models/adria/roms_sed/sed038/his/" suffix=".nc"/>\n </aggregation>\n </netcdf>\n </dataset>\n</dataset>\n}}}\nYou will want to change the parts that describe your dataset and then specify the location of the files you placed on Stellwagen. The most important part is the urlPath, which will determine the last part of the OpenDAP URL that your collaborator will type in to access the data (described in Step 6). \n \nThe "dataset name=" parts determine how the dataset appears if you go directly to the THREDDS server. For example, if you go to: you will see "Adriatic Model Runs" and then under that you will see "sed038".\n \n5. Send the catalog <catalog.xml> as an attachment to Greg and ask him to reload the THREDDS server. \nOr, If you have an account and appropriate permissions set up on stellwagen, you can use scp to copy it over:\n{{{\nscp -p catalog.xml stellwagen:/usr/local/tomcat/content/thredds\n}}}\nand reload the THREDDS server at\n{{{\n\n}}}\n yourself. You can only do this if you know the password for user "greg". Ask Greg for info on this.\n \n6. Tell your collaborator to use this OpenDAP URL to access the data:\n{{{\n\n}}}\n(replacing the "adria/sed038" part with whatever you specified at the urlPath above)\n\nOr just tell them to browse to the proper OpenDAP URL by going to main catalog for the THREDDS server \n{{{\n\n}}}\n \nIf they want to see what variables are in the file and what dimensions there are, they can put a ".html" on the end, which tells OpenDAP to list the metadata. 
(\n \nThen in Matlab, if they have the opendap-enabled mexnc and the netcdf toolkit, they could access data just as they would a local netcdf file. For example:\n\n{{{ \n >> nc=netcdf('');\n >> t=nc{'temp'}(:,end,30,40);\n >> close(nc);\n}}} \n\nto load a time series of surface temperature values at grid cell j=30, i=40. \n \nOr they can also use the RSLICE GUI tool, by typing\n{{{ \n >> rslice('url','');\n}}} \nand they could interactively explore the data.\n \n\nGood luck,\nRich
Downloaded the 800 MB img file and the ERS header from\n{{{\n\n\n}}}\nConvert to NetCDF using "gdal_translate":\n{{{\ngdal_translate topo_9.1b.img.ers -of NetCDF\n}}}\nGenerate a template CDL called "smith_sandwell.cdl":\n(this can be found on ricsigdtlx:/home/rsignell/bathy/smith_sandwell)\n{{{\nnetcdf smith_sandwell {\ndimensions:\n lon = 21600 ;\n lat = 17280 ;\nvariables:\n float lon(lon) ;\n lon:long_name = "Uniformly spaced longitudes (-179.9917E - 179.9917E.01667E)" ;\n lon:units = "degrees_east" ;\n float lat(lat) ;\n lat:long_name = "Mercator spaced latitudes (80.738N - 80.738S)" ;\n lat:units = "degrees_north" ;\n short topo(lat, lon) ;\n topo:units = "meters" ;\n topo:long_name = "topography" ;\n\n// global attributes:\n :Conventions = "COARDS" ;\n :title = "Smith & Sandwell Topography v9.1: 1/60-degree topography and bathymetry" ;\n :institution = "University of California San Diego" ;\n :history = " 28-Apr-2008: Converted to NetCDF using gdal_translate from FWTOOLS, then rearranged from 0:360 to -180:180 using Matlab (Rich Signell: ";\n}\n}}}\n\nCreate the NetCDF container file:\n{{{\nncgen -o smith_sandwell.cdl\n}}}\n\nRun the matlab script:\n{{{\nnc2=netcdf(''); % old file created by gdal_translate\nnc=netcdf('','w'); % new file to create\n\nnx=21600;\nny=17280;\n\nlon=[0 0];\nlat=[-80.738 80.738];\n[x,y]=ll2merc(lon,lat);\nyi=linspace(y(2),y(1),ny); % even spacing in mercator for lat\n[lon,lat]=merc2ll(yi*0,yi);\ndx=60/3600; % 1 minute spacing in longitude\nlon=linspace(-180+dx/2,180-dx/2,nx);\n\nnc{'lon'}(:)=lon;\nnc{'lat'}(:)=lat;\n\nnpieces=30;\nnchunk=ny/npieces;\nfor j=1:npieces\n j\n jj=(j-1)*nchunk+1:nchunk*j;\n z=nc2{'Band1'}(jj,:);\n z2=z;\n z2(:,1:nx/2)=z(:,nx/2+1:nx);\n z2(:,nx/2+1:nx)=z(:,1:nx/2);\n nc{'topo'}(jj,:)=z2;\nend\nclose(nc)\nclose(nc2);\n}}}\n\n\n\n
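The Mercator-spaced latitude construction in the Matlab above can be mirrored in python. I am assuming the ll2merc/merc2ll helpers implement the standard spherical Mercator forward/inverse formulas; the row count matches the CDL (17280):

```python
import math

ny = 17280  # number of latitude rows in the CDL above

def ll2merc(lat_deg):
    """Latitude (degrees) -> spherical Mercator y."""
    lat = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + lat / 2))

def merc2ll(y):
    """Spherical Mercator y -> latitude (degrees)."""
    return math.degrees(2 * math.atan(math.exp(y)) - math.pi / 2)

# Even spacing in Mercator y from 80.738N down to 80.738S, then invert
# back to latitude (same as the linspace + merc2ll step in the Matlab).
y_south, y_north = ll2merc(-80.738), ll2merc(80.738)
step = (y_south - y_north) / (ny - 1)
lat = [merc2ll(y_north + i * step) for i in range(ny)]
```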
This page can be reached directly at: ''''\n\n1. Get an account on ''''. Send e-mail to Jonathan Murray ( and attach a copy of your 2048 bit RSA public key file, often found at ''~/.ssh/''. If you don't have a 2048 bit public/private keypair, you can generate one with "ssh-keygen". \n\n2. Make a directory on ''blackburn'' for the NetCDF files you want to serve on ''/usgs/data0'', the 10TB partition for USGS use. For example, you might make a directory called ''/usgs/data0/Carolinas'' for data and model results from the Carolinas Project. You can do this from another machine like via ssh, using a command like:\n{{{\nssh mkdir -p /usgs/data0/Carolinas\n}}}\n\n3. Make the directory writable by others in the ''usgs'' group if more than one person may be putting files in the directory. You can do ''chmod g+w'' on a directory-by-directory basis, or if you want this to be the default behavior (like I do), you can add the line:\n{{{\numask u=rwx,g=rwx,o=rx\n}}}\nto your ''~/.bashrc'' file on blackburn. This is equivalent to ''umask 0002''.\n\nExample of making a directory and copying files from marlin to blackburn. On marlin, you could do:\n\n{{{\nssh mkdir -p usgs/data0/vs/vs002/his\nscp -p /home/rsignell/roms_sed/vs/*his_*.nc\n}}}\n\nAs soon as you copy your data to blackburn's /usgs disk, your data files will be accessible via the THREDDS Data Server "usgs_all" catalog that dynamically scans all netcdf and grib files on /usgs. So if you point your browser at \n<> you should be able to see your files available via OpenDAP or HTTP access (download in original grib or NetCDF format).\n\nIf you want to make an aggregation or other type of special dataset from the files you deposited, you need to create or modify the THREDDS catalog that points to the data. The catalogs live on blackburn at:\n{{{\n/usr/local/tomcat2/content/thredds\n}}}\nThere are multiple catalogs, each for a collection of data. 
For example, the Carolinas model runs are in the ''carolinas_catalog.xml'' catalog file. If you are making a new catalog for a project, copy an existing catalog (like ''carolinas_catalog.xml'') to use as a template, and when done modifying, add the name of the new catalog in the ''threddsConfig.xml'' file or in the top level ''catalog'' file. The directory where the catalogs live and the catalogs themselves should all be writeable by anyone in the group ''usgs''.\n\nEdit the catalog file, adding datasets, and collections of datasets if desired. For example, the collection of Vineyard Sound Runs looks like:\n{{{\n<dataset name="Vineyard Sound Model Runs" ID="vs">\n\n<dataset name="vs001: Middle Ground, 3 day run" ID="vs/vs001/his" urlPath="vs/vs001/his">\n <documentation type="summary">3 day run of Middle Ground, used for testing </documentation>\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/usgs/data0/vs/vs001/his/" suffix=".nc"/>\n </aggregation>\n </netcdf>\n</dataset>\n\n<dataset name="vs002: Middle Ground, Spring/Neap run" ID="vs/vs002/his" urlPath="vs/vs002/his">\n <documentation type="summary">Full 28 day simulation of Middle Ground</documentation>\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/usgs/data0/vs/vs002/his/" suffix=".nc"/>\n </aggregation>\n </netcdf>\n</dataset>\n\n</dataset>\n}}}\nYou need to change the parts that describe your dataset and then specify the location of the files you placed on blackburn. The ID is a unique identifier for THREDDS, and the urlPath will be what shows up at the end of the URL in the web browser (described in Step 5). \n \nThe ''dataset name='' parts determine how the dataset appears if you go directly to the THREDDS server. 
For example, if you go to: '''' you will get the default catalog.xml, and you will see ''Vineyard Sound Model Runs'' and under that, ''vs002''. The "metadata" that you want to use to describe how the run differs from other runs may be put in the dataset name and in the <documentation> tag.\n \n*When you are done modifying the catalog (and the threddsConfig.xml file, if you created a new catalog), reload the THREDDS server by using the web-based Tomcat Manager at\nI can give you the admin username and password to do this over the phone.\n\n*The data should now be served by THREDDS, available at\n\n*If things are not working, look at the end of these logs:\n{{{\\\\\n}}}\nGood luck,\nRich
12:31 Start following the white paper install for Linux.\n\nDid not set the environment variables in /etc, which seems dangerous.\nWill set them in tomcat instead.\n\nSet these two values in ~geoportal/.bashrc:\n{{{\nexport PGDATA=/usr/local/pgsql/data\nexport PATH=$PATH:/usr/local/pgsql/bin\n}}}\nDownloaded PostgreSQL 9.1.2 as in the instructions, even though more recent versions are available.\n\nBuilt as user rsignell instead of as user geoportal.\n{{{\ncd $HOME\n\n./configure\n\nrsignell@gam:~$ sudo update-rc.d postgresql defaults\nupdate-rc.d: warning: /etc/init.d/postgresql missing LSB information\nupdate-rc.d: see <>\n Adding system startup for /etc/init.d/postgresql ...\n}}}\n12:53 done installing PostgreSQL\n\n1:38 struggling with Database Scripts\n\nCan't run them directly using the instructions.\n{{{\npostgres@gam:/usr/local/etc/geoportal/Database Scripts/PostgreSQL$ ./\n./ 20: ./ Syntax error: "(" unexpected\n\npostgres@gam:/usr/local/etc/geoportal/Database Scripts/PostgreSQL$ bash ./ localhost 5432 postgres geoportal postgres geoportal\n./ line 39: ./createuser: No such file or directory\n./ line 40: ./psql: No such file or directory\n[sudo] password for postgres:\n}}}\nNeeded to:\n1) run the scripts explicitly with bash (the ''Syntax error: "(" unexpected'' happens because Ubuntu's /bin/sh is dash, which chokes on the bash-only syntax)\n2) remove the local ./ paths on the binaries so they would be found on the PATH\n3) remove the sudo gedit commands (designed just to show the files it creates, I guess)\n{{{\npostgres@gam:/usr/local/etc/geoportal/Database Scripts/PostgreSQL$ bash localhost 5432 postgres geoportal\npsql:schema_pg.sql:18: ERROR: table "gpt_user" does not exist\npsql:schema_pg.sql:20: ERROR: sequence "gpt_user_seq" does not exist\npsql:schema_pg.sql:36: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "gpt_user_pk" for table "gpt_user"\npsql:schema_pg.sql:39: ERROR: index "gpt_user_idx1" does not exist\npsql:schema_pg.sql:46: ERROR: index "gpt_user_idx2" does not exist\npsql:schema_pg.sql:54: ERROR: table "gpt_search" does not exist\npsql:schema_pg.sql:63: NOTICE: 
CREATE TABLE / PRIMARY KEY will create implicit index "gpt_search_pk" for table "gpt_search"\npsql:schema_pg.sql:68: ERROR: index "gpt_search_idx1" does not exist\npsql:schema_pg.sql:76: ERROR: table "gpt_harvesting_jobs_pending" does not exist\npsql:schema_pg.sql:89: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "gpt_harvjobspndg_pk" for table "gpt_harvesting_jobs_pending"\npsql:schema_pg.sql:94: ERROR: index "fki_harvestjobspndg_harvesting" does not exist\npsql:schema_pg.sql:102: ERROR: index "gpt_hjobspndg_idx1" does not exist\npsql:schema_pg.sql:110: ERROR: index "gpt_hjobspndg_idx2" does not exist\npsql:schema_pg.sql:118: ERROR: index "gpt_hjobspndg_idx3" does not exist\npsql:schema_pg.sql:130: ERROR: table "gpt_harvesting_jobs_completed" does not exist\npsql:schema_pg.sql:141: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "gpt_harvestjobscmpltd_pk" for table "gpt_harvesting_jobs_completed"\npsql:schema_pg.sql:146: ERROR: index "fki_gpt_harvjobscmpltd_harvesting" does not exist\npsql:schema_pg.sql:154: ERROR: index "gpt_hjobscmpltd_idx1" does not exist\npsql:schema_pg.sql:165: ERROR: table "gpt_harvesting_history" does not exist\npsql:schema_pg.sql:177: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "gpt_harvhist_pk" for table "gpt_harvesting_history"\npsql:schema_pg.sql:182: ERROR: index "fki_gpt_harvhist_harvesting_fk" does not exist\npsql:schema_pg.sql:223: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "gpt_resource_pk" for table "gpt_resource"\npsql:schema_pg.sql:243: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "gpt_resource_data_pk" for table "gpt_resource_data"\npsql:schema_pg.sql:243: NOTICE: CREATE TABLE / UNIQUE will create implicit index "gpt_resource_data_id_key" for table "gpt_resource_data"\npsql:schema_pg.sql:251: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "gpt_collections_pk" for table "gpt_collection"\npsql:schema_pg.sql:251: NOTICE: CREATE TABLE 
/ UNIQUE will create implicit index "gpt_collection_shortname_key" for table "gpt_collection"\n}}}\nThe ERROR lines above look alarming but appear to be harmless on a first install: they come from the DROP statements in schema_pg.sql, which run before the corresponding tables, sequences and indexes exist.
Log on to the machine as Sandro and look around:\n{{{\nssh sandro@\ncarniel@dell:~$ uname -a\nLinux dell 2.6.28-15-generic #52-Ubuntu SMP Wed Sep 9 10:49:34 UTC 2009 i686 GNU/Linux\n\ncarniel@dell:~$ free -m\n total used free shared buffers cached\nMem: 2011 1634 377 0 115 1253\n\ncarniel@dell:~$ java -version\njava version "1.6.0_16"\nJava(TM) SE Runtime Environment (build 1.6.0_16-b01)\nJava HotSpot(TM) Server VM (build 14.2-b01, mixed mode)\n\ncarniel@dell:~$ ps -ef | grep tomcat\ncarniel 20963 1 0 May10 ? 00:00:52 /usr/lib/jvm/java-6-sun/bin/java\n -Djava.util.logging.config.file=/usr/local/tomcat/conf/ -Xmx1\n500m -Xms512m -server -Djava.awt.headless=true -Djava.util.logging.manager=org.a\npache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/local/tomcat/endorsed\n -classpath /usr/local/tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/local/tomca\nt -Dcatalina.home=/usr/local/tomcat org.\napache.catalina.startup.Bootstrap start\n}}}\n\nSo we are on an Ubuntu machine with 2GB RAM, Tiziano has already installed a recent version of Java, and tomcat is running as user "carniel" with max memory of 1500m. Good!\n\nSo the first thing to do is upgrade the TDS to the latest stable version. 
So we shut down the tomcat server, cd to the webapps directory, save the logo, remove the old thredds war file and subdirectory, download the new version of TDS, restart the server, and then copy the logo back into the unpacked thredds directory:\n{{{\ncd /usr/local/tomcat/webapps\n../bin/ \ncp thredds/logo.jpg .\nrm thredds.war\nrm -rf thredds\nwget\n../bin/\ncp logo.jpg thredds\n}}}\n\nI then made several sample catalogs, using the approach of to create an "all.xml" catalog that serves all gridded data on /home/sandro/carniel, "bevano.xml" which aggregates some of Sandro's ROMS bevano files, and "creuse.xml" which aggregates some of Andrea's Creuse dense water simulations.\n\nFor the bevano simulations, which use ROMS \n\n\n\nAlso installed ncWMS and RAMADDA, but not sure if these are to be used or not.\n{{{\nhttp://\nhttp://\n}}}\n\n\nSetting up RAMADDA\n\nAdded "-Dramadda_home=/home/carniel/ramadda" to JAVA_OPTS in /usr/local/tomcat/thredds/bin/\nSet up users "rsignell" and "carniel" as admin\nAdded powerpoint talk to Ramadda in Top/Users/rsignell:\n<>
Installed TDS 4.2.9 on Jordan's nomad3 machine:\n{{{\n[wd23ja@nomad3 bin]$ pwd\n/home/wd23ja/tomcat/apache-tomcat-6.0.32/bin\n}}}\nThis can be accessed at:\n\n\nWe set up a datasetscan, using wildcards, but the default I usually use didn't work because they don't name the output files with .grib or .grib2 extensions (because they traditionally only had grib files at NCEP).\n\nThe next order of business is to aggregate the grib data. \n\n
ECM12 Powerpoint Presentation:\n\n\nNCTOOLBOX:\n\n\nGOOGLE Group: IOOS model data interoperability:\n\n\nGOOGLE Group: Unstructured Grid Standards and Interoperability:\n\n\n5 steps to access OPeNDAP data in ArcGIS10: \n1. Install NetCDF4-Python, used here to read OPeNDAP data. Thanks to Christoph Gohlke (UCI), this is super easy. Just get the NetCDF4-Python module installer (.exe) for ArcGIS from:\n\nand run it.\n\n2. Install pyshp, used to write shapefiles. This is also super easy. See instructions at:\n\n3. Use SVN to download my python scripts from\n\n(e.g. svn checkout )\n\n4. Cut and paste the python scripts into the ArcGIS10.0 Python command window (Tools=>Python)\n\n5. Did it work? Send me feedback! (
{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="c:/rps/cf/roms" regExp=".*vs_his_[0-9]{4}\$"/> \n </aggregation>\n</netcdf>\n}}}
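That scan element picks files by regular expression rather than suffix (the ''\$'' is just TiddlyWiki escaping of a plain ''$''). A quick sanity check of the equivalent pattern in Python, with invented file names — note that names carrying a trailing .nc extension would not match, since the pattern anchors on four trailing digits:

```python
import re

# equivalent of the NcML scan regExp; "$" anchors at the end of the name
pat = re.compile(r".*vs_his_[0-9]{4}$")

# invented example names
names = ["vs_his_0001", "vs_his_0002", "", "vs_avg_0001"]
matches = [n for n in names if pat.match(n)]
# only the first two names match
```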
I bought a bottle of Zubrowka (bison grass vodka) in the US and after one sip, dumped it down the drain -- it was nothing like the real thing. It turns out the real stuff contains small amounts of coumarin, a blood thinner, so is outlawed by the FDA. The stuff for sale in the US tries to recreate the original taste with artificial flavor and fails miserably. \n\nI found another recipe for homemade Zubrowka on the web, but it seemed more complicated than it needed to be, so I tried my own technique. I think the result is outstanding (naturally). This simple recipe below will make three 750 ml bottles of Zubrowka:\n\nIngredients:\n\n- 15 oz of 190 proof grain alcohol - e.g. Everclear\n- 1 30 inch sweet grass braid - e.g.\n- 3 bottles of 80 proof vodka\n- 1/2 cup simple syrup (boil 1/2 cup water and 1/2 cup sugar for 1 minute)\n\nSet aside 3 pieces of grass that are bottle sized to use in the finished product.\n\nCut up the braid into 4 pieces each 6 inches long and put in a mason jar with 12 oz of grain alcohol. \n\nAfter 2 days, remove the grass -- the alcohol will be a beautiful green color and smell great. \n\nMake a simple syrup by bringing 1/2 cup water and 1/2 cup sugar to a full boil for 1 minute.\n\nTo make the Zubrowka, mix in this ratio: \n1 shot infused grain alcohol \n1 shot water\n4 shots 80 proof vodka\n2 tsp. simple syrup\n\nTo make a 750 ml bottle, which is just about 24 oz, you want:\n\n3 shots infused grain alcohol\n3 shots water\n12 shots (2 1/4 cups) 80 proof vodka\n2 Tbl. simple syrup\n\nMix together and pour through a coffee filter into a clean vodka bottle. \nAdd one blade of grass into the bottle for show.\n\nEnjoy! I brought this to my Polish friend's house, who brought up a bottle of the real thing from Poland and we tried them side by side. They were definitely a bit different, but both delicious. In fact, my friend said he liked the homemade one better!
a TiddlyWiki approach instead of [[Blogger|]]
Rich Signell's Work Log
We want to be able to reproduce Figure 5 in the Comp Geosci paper, the migrating trench test case.\n\nLooking at John's folder for the paper\n\s\sJohwardtxp\sdata\spapers\scomm_sed_Comp_Geosci\nwe see the Figure 5 files: \n"trench.eps" was created at 10:11 PM on Feb 15, 2007.\n"" was created at 10:29 PM on Feb 15, 2007. This matches Figure 5 in the paper.\nLooking at "trench.eps", there is a title "morph 60" that does not appear in the Comp Geosci paper. \n\nWe could not find the m-file that generated "trench.eps" by searching for "morph 60", but we did find a "trench.m" m-file that produces a "trench.eps" file. \n\nWe are not completely certain that the trench.eps was based on morph 60, but using trench.m with "" found in the directory:\n\s\sJohwardtxp\sdata\smodels\sroms\shelp_cases\strench2\nmatches the figure quite well, provided we add 0.02 m to the observed final bed location when making the plot. The version of "trench.m" already added 0.01 m to the final bed, and contained a comment:\n"added .01 because fig 12.4.8 shows init elev at 0.01."\nOr perhaps the observed data was simply moved or smoothed slightly in Adobe Illustrator.\n\nIn any case, it seems unlikely that the morph 10 run was actually used, as documented in the paper, because the "" file has a date of 1:12 AM on Feb 16, about 3 hours after the figure had already been created.\n\nSo if morph60 was used, what were the parameters for the run? Do they match the paper?\nIn the morph60 file, we find:\n :his_file = "" ;\n :grd_file = "ROMS/External/" ;\n :ini_file = "ROMS/External/" ;\nPerusal of this ini_file shows that it is not a roms restart file, saved after 1000 steps, as the paper would indicate, but a file generated by Matlab. The 3D field "U" has a logarithmic structure and the initial sediment concentration is a uniform 0.2 kg/m3.\n\nSo if this is the right output file, what parameters were likely used in the run? 
It seems likely that the actual code used is \n\s\sJohwardtxp\sdata\smodels\sroms\shelp_cases\strench2\\nIn this code distribution, cppdefs.h is indeed set to "trench", and \nsediment.F\n{{{\n#include "cppdefs.h"\n#undef NEUMANN\n#undef LINEAR_CONTINUATION\n#undef REMIX_BED\n#undef SLOPE_NEMETH\n#define SLOPE_LESSER\n#define BSTRESS_UPWIND\n}}}\n\nIn cppdefs.h, the TRENCH options are:\n{{{\n# elif defined TRENCH\n\n/*\n** Trench migration suspended sediment test.\n*/\n\n#undef LOG_PROFILE\n#define UV_ADV\n#define UV_LOGDRAG\n#define TS_U3HADVECTION\n#undef SALINITY\n#define SOLVE3D\n#define SEDIMENT\n#ifdef SEDIMENT\n# define SUSPLOAD\n# define BEDLOAD_MPM\n# define SED_MORPH\n# define SED_DENS\n#endif\n#define SPLINES\n#define NORTHERN_WALL\n#define SOUTHERN_WALL\n#define WEST_FSGRADIENT\n#define WEST_M2CLAMPED\n#define WEST_M3GRADIENT\n#define WEST_TCLAMPED\n#define EAST_FSGRADIENT\n#define EAST_M2CLAMPED\n#define EAST_M3GRADIENT\n#define EAST_TGRADIENT\n#undef ANA_VMIX\n#define GLS_MIXING\n#ifdef GLS_MIXING\n# define KANTHA_CLAYSON\n# define N2S2_HORAVG\n#endif\n#define ANA_BPFLUX\n#define ANA_BSFLUX\n#define ANA_BTFLUX\n#define ANA_SMFLUX\n#define ANA_SPFLUX\n#define ANA_SRFLUX\n#define ANA_SSFLUX\n#define ANA_STFLUX\n#define ANA_TOBC\n#define ANA_M2OBC\n\n# elif defined UPWELLING\n}}}\n\nIn the file and file, we find the parameters for a MORPHFAC=1 run, in that there are
I'm trying to see if there is consistency running a simple channel test case with various sediment transport models.\n\nI've got a 10 km long, 10 m deep channel driven by a 0.4 m water-level drop over 10 km (slope = 0.00004 m/m). The bottom roughness is z0 = 0.005 m for 3D runs, or Manning N = 0.030 for 2D runs, which results in a depth-averaged flow of just under 1 m/s at the center of the channel (0.95 m/s to be exact). I run for 6 hours, which is sufficient for equilibrium conditions to be established.\n\nFor 500 micron sediment:\nIf I run Delft3D in 2D mode with Van Rijn, I get sediment concentrations of 0.0103 kg/m^3, and a flow speed of 0.95 m/s, which yields a suspended load transport of 0.0103 kg/m^3 * 0.95 m/s * 10 m = 0.098 kg/m/s. The bedload is 0.0479 kg/m/s. The total transport is 0.1458 kg/m/s. (Delft3D reports transport rates in m3/s/m, so you must multiply by 2650 kg/m3 to get kg/m/s.)\n\n
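The arithmetic above is easy to slip on, so here is a quick check of the quoted numbers (values copied from the run above; treat it as a sketch, not Delft3D output):

```python
# Delft3D 2D Van Rijn run, 500 micron sand: check the quoted transport numbers
conc = 0.0103      # suspended sediment concentration, kg/m^3
speed = 0.95       # depth-averaged flow speed, m/s
depth = 10.0       # channel depth, m
rho_s = 2650.0     # sediment density, kg/m^3

suspended = conc * speed * depth   # kg/m/s, per unit channel width
bedload = 0.0479                   # kg/m/s, from the run
total = suspended + bedload        # about 0.146 kg/m/s

# Delft3D reports volumetric rates (m^3/s/m); dividing by rho_s converts back
total_volumetric = total / rho_s
```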
Running \n{{{\n$ system-config-soundcard\n}}}\nwas all I needed to get sound going on my linux RHEL4 box
Ingredients:\n*GBP (ginger beer plant culture)\n*1/4 tsp creme of tartar\n*125 g of sugar\n*Juice from 1/2 lemon\n*150 g of peeled, thinly sliced ginger root\nInstructions: \nMake ginger juice by boiling the ginger root in 2 cups water for 30 minutes. Add sugar and cool to room temp. Add lemon juice, GBP and creme of tartar. Top up to 1 liter with cool water. \n
in cygwin:\ncd\nssh -i myhosts.pem
OMG, I can't believe I waited this long. \nHere's how to change the dark blue to cyan:\n{{{\nexport LS_COLORS='di=01;36'\n}}}\nThat's all it takes.
*Put 1 gallon of raw milk on stove over medium heat. Bring to 99 degrees F stirring often with a wooden spoon.\n*Put 1 cup of greek yogurt (any brand containing both L. Bulgaricus and S. Thermophilus) in a 2 cup measuring cup and whisk in 1 cup of the warm milk to make a better consistency for adding to the pot. \n*Add yogurt back into the milk in the pot. Let rest at 95-100 F for 30 min. Add 4 drops of liquid animal rennet diluted in 1/3 cool spring water. Let stand at 100 degrees for 90 min. \n*Cut curd vertically into large pieces, and ladle into cheesecloth lined cheese molds. If you don't line the mold with cheesecloth, the cheese fills the holes and doesn't drain as well. \n*Rest the molds on a splatter screen over a 13x9 pan to drain. Let sit draining in the mold for a couple of hours, prying the edges of the cheese away from the cheesecloth as it sinks. \n*After a couple of hours, wrap and flip the cheese. Flip again after a few more hours. Let sit at room temperature for 2 days, rewrapping and flipping every so often. \n*Soak in brine (3 cups cool (50 F) spring water + 3 T salt) in a 8 inch square pan for 1 hour. \n*Put back into molds, store at room temperature for an additional day. Wring out the water from the cheese cloth, rewrap, and wrap in paper towel. Put in quart size baggies in the fridge. \n*Eat after 3 days, good until about 10 days.
Vacation stuff:\nCancel Cape Cod Times: EMAIL \nor call Circulation Services at
To install Sun Java 1.6 on Ubuntu 11.04 (Natty Narwhal)\n{{{\nsudo apt-get update\nsudo apt-get install sun-java6-jdk\n}}}
Adapted from:\nThree types of information heterogeneity are often present: \n* Syntactic heterogeneity: information resources use different representations and encodings of data (e.g. ASCII files, NetCDF files, HDF files, relational databases).\n* Structural heterogeneity: different information systems store their data in different data models, data structures and schemas (e.g. Unidata Common Data Model, CSML Feature Types).\n* Semantic heterogeneity: the meaning of the data is expressed in different ways (e.g. sea_surface_temperature_remotely_sensed, "temp", "Temperature").\n\nThe use of Web Services can address syntactic heterogeneity. XML and XSD (schemas) can address structural heterogeneity, because an XML file that respects a specific XSD schema has a well-defined structure. Using OWL as a shared ontology, semantic heterogeneity can be resolved.
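A toy sketch of the last point: a shared vocabulary lets heterogeneous local names resolve to one meaning. (The mapping below is invented for illustration; a real system would use an OWL ontology or the CF standard names rather than a hand-built dict.)

```python
# invented alias table standing in for a shared ontology
alias_to_standard = {
    "temp": "sea_water_temperature",
    "Temperature": "sea_water_temperature",
    "sea_surface_temperature_remotely_sensed": "sea_surface_temperature",
}

def standard_name(local_name):
    # fall back to the local name when no shared term is known
    return alias_to_standard.get(local_name, local_name)
```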
Service Tag: JKPRM51\nQuantity Parts # Part Description\n1 67JDG Cable Assembly, Audio, TRANSFORMER METROPLEX DESKTOP...\n1 N2285 Processor, 80546K, 3.0G, 1M, Xeon Nocona, 800\n1 X0392 Printed Wiring Assy, Planar Tumwater, PWS670\n1 5120P Cord, Power, 125V, 6Feet, SJT..., Unshielded\n1 H2703 Card, Voltage Regulator Module Controller, 10.1\n1 N2285 Processor, 80546K, 3.0G, 1M, Xeon Nocona, 800\n1 7N242 Keyboard, 104 Key, UNITED STATES..., Silitek, LC, MIDNIGHT GRAY...\n2 H2084 Dual In-Line Memory Module 512, 400M, 32X72, 8, 240, 2RX8\n2 U3364 Dual In-Line Memory Module 512, 400M, 64X72, 8, 240, 1RX8\n1 Y3668 Mouse, Universal Serial Bus 2BTN, Wheel, Entry, Primax Electronics Ltd\n1 N4077 Card, Graphics, NVIDIA QUADRO FX1300 128MB MRGA10\n1 5R212 Floppy Drive, 1.44M, 3.5" FORM FACTOR..., 3MD NBZ, NEC CORPORATION..., CHASSIS 2001...\n1 H5102 Hard Drive, 250GB, Serial ATA 8MB, WD-XL80-2\n2 C6355 Assembly, Cable, Serial ATA Transformer Sky Dive Mini Tower, 2.0\n1 H5102 Hard Drive, 250GB, Serial ATA 8MB, WD-XL80-2\n1 J2427 DIGITAL VIDEO DISK DRIVE..., 17G, 16X, I, 5.25" FORM FACTOR..., Liteon Chassis 2001, V5\n1 C3164 Digital Video Disk Drive Read Write, 8X, IDE (INTEGRATED DRIVE ELECTRONICS) ..., Half Height, NEC 01\n1 42964 CABLE..., LIGHT EMITTING DIODE..., HARD DRIVE..., AUXILIARY..., 7\n1 C4272 Card, Controller, U320..., SCSI Precision Workstation Poweredge\n1 C5130 Kit, Documentation on Compact Disk, Red Hat Enterprise Linux 3WS, 1YR, WEST...\n \nParts & Upgrades\n\n\nThis Dell Precision WorkStation 670
* THREDDS is currently available only via http (no SSL) at\n> from the following subnets:\n>\n>
2009-10-06: Need a valid time variable to aggregate along the time dimension with TDS4. Dave Robertson and Neil Ganju discovered this the hard way at nearly the same time. From Dave: \n{{{\nFor some reason the variable is ocean_time(time) instead of ocean_time(ocean_time). 3.17 works fine but 4.1 (and maybe 4.0) crap out with:\n\nError {\n code = 500;\n message = "AggregationExisting: no coordinate variable for agg dimension= time";\n};\n\nThe catalog.xml is attached. Any ideas? The earlier runs with ocean_time(ocean_time) are fine on 4.1 and 3.17.\n}}}\nWe solved this exact same problem for Neil by renaming the dimension and adding a units attribute for the variable "ocean_time". Here's the modified catalog entry:\n{{{\n <dataset name="UCLA SCB Runs" ID="scb/output_SCB" urlPath="scb/output_SCB">\n <serviceName>gridServices</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <scan location="/usgs/data0/scb/output_SCB/" suffix=".nc"/>\n <dimension name="ocean_time" orgName="time"/>\n <variable name="ocean_time">\n <attribute name="units" value="seconds since 2009-01-01"/>\n </variable>\n </aggregation>\n </netcdf>\n </dataset>\n}}}\n\n\n2009-10-07: Try to avoid NFS-mounting data that the TDS points to. We discovered on the Stevens server that if the TDS catalogs point to data on an NFS-mounted disk that goes down, then we end up with some very nasty behavior. If you try to click on the dataset in your web browser (to get to the page where it lists the metadata and services available) you get a page that endlessly says "loading" with no error response! Also, if you try to specify the dataURL in Matlab and do a "nj_info" or anything that tries to open the file, there is no response, and you cannot kill with ctrl-c. The only thing that worked for me was to use the process manager to kill Matlab!\n
1. Get Java 7\nGo to and download java version 7\nI didn't change the installation directory, and on my 64 bit machine, this version 7 installed in \n{{{\nC:\sProgram Files\sJava\sjre7\n}}}\n2. Get Tomcat 7\n{{{\nwget\nunzip into c:\sprograms\n}}}\n{{{\ncd c:\sprograms\sapache-tomcat-7.0.29\sbin\n}}}\ncreate setenv.bat (on Linux, this is with the following lines:\n{{{\nset JAVA_HOME=C:\sProgram Files\sJava\sjre7\nset JAVA_OPTS=-XX:MaxPermSize=256M -Xmx2048m -Xms512m -server -Djava.awt.headless=true\n}}}\n3. Get THREDDS Data Server 4.3\n{{{\ncd c:\sprograms\sapache-tomcat-7.0.29\swebapps\nwget\n}}}\n4. Start Tomcat\n{{{\nc:\sprograms\sapache-tomcat-7.0.29\sbin\\n}}}\nThe TDS is now accessible with an example catalog at: http://localhost:8080/thredds\n5. Create your own catalogs. \n\nSee\n\nHere I just use one that dynamically scans a particular directory tree: c:/rps/cf\n\n{{{\n<?xml version="1.0" encoding="UTF-8"?>\n<catalog xmlns=""\n xmlns:xlink=""\n name="THREDDS Catalog for NetCDF and other gridded data files" version="1.0.1">\n <service name="allServices" serviceType="Compound" base="">\n <service name="ncdods" serviceType="OpenDAP" base="/thredds/dodsC/"/>\n <service name="HTTPServer" serviceType="HTTPServer" base="/thredds/fileServer/"/>\n <service name="wcs" serviceType="WCS" base="/thredds/wcs/"/>\n <service name="ncss" serviceType="NetcdfSubset" base="/thredds/ncss/grid/"/>\n <service name="wms" serviceType="WMS" base="/thredds/wms/"/>\n <service name="iso" serviceType="ISO" base="/thredds/iso/"/>\n <service name="ncml" serviceType="NCML" base="/thredds/ncml/"/>\n <service name="uddc" serviceType="UDDC" base="/thredds/uddc/"/>\n </service>\n \n <datasetScan name="Model Data" ID="models" path="models" location="c:/rps/cf">\n <metadata inherited="true">\n <serviceName>allServices</serviceName>\n <publisher>\n <name vocabulary="DIF">USGS/ER/WHCMSC/Dr. Richard P. 
Signell</name>\n <contact url="" email=""/>\n </publisher>\n </metadata>\n <filter>\n <include wildcard="*.ncml"/>\n <include wildcard="*.nc"/>\n <include wildcard="*.grd"/>\n <include wildcard="*.nc.gz"/>\n <include wildcard="*.cdf"/>\n <include wildcard="*.grib"/>\n <include wildcard="*.grb"/>\n <include wildcard="*.grb2"/>\n <include wildcard="*.grib2"/>\n </filter>\n <sort>\n <lexigraphicByName increasing="true"/>\n </sort>\n <addDatasetSize/>\n </datasetScan>\n\n</catalog>\n}}}\n
We have 4 tomcat servers on\n\n|Server # |Service |Location |URL |Manager |\n|Tomcat 1 |THREDDS |/usr/local/tomcat | | |\n|Tomcat 2 |ERDDAP |/usr/local/tomcat2 | | |\n|Tomcat 3 |OOSTETHYS |/usr/local/tomcat3 | | |\n|Tomcat 4 |ncWMS |/usr/local/tomcat4 | | |\n\nTo restart one of these tomcats, do something like this:\n\n{{{\ncd /usr/local/tomcat4/bin\nsudo sh ./\nsudo sh ./\n}}}\n
/***\n| Name|TagglyTaggingPlugin|\n| Description|tagglyTagging macro is a replacement for the builtin tagging macro in your ViewTemplate|\n| Version|3.0 ($Rev: 2101 $)|\n| Date|$Date: 2007-04-20 00:24:20 +1000 (Fri, 20 Apr 2007) $|\n| Source||\n| Author|Simon Baird <>|\n| License||\n!Notes\nSee\n***/\n//{{{\nconfig.taggly = {\n\n // for translations\n lingo: {\n labels: {\n asc: "\su2191", // down arrow\n desc: "\su2193", // up arrow\n title: "title",\n modified: "modified",\n created: "created",\n show: "+",\n hide: "-",\n normal: "normal",\n group: "group",\n commas: "commas",\n sitemap: "sitemap",\n numCols: "cols\su00b1", // plus minus sign\n label: "Tagged as '%0':",\n excerpts: "excerpts",\n noexcerpts: "no excerpts"\n },\n\n tooltips: {\n title: "Click to sort by title",\n modified: "Click to sort by modified date",\n created: "Click to sort by created date",\n show: "Click to show tagging list",\n hide: "Click to hide tagging list",\n normal: "Click to show a normal ungrouped list",\n group: "Click to show list grouped by tag",\n sitemap: "Click to show a sitemap style list",\n commas: "Click to show a comma separated list",\n numCols: "Click to change number of columns"\n }\n },\n\n config: {\n showTaggingCounts: true,\n listOpts: {\n // the first one will be the default\n sortBy: ["title","modified","created"],\n sortOrder: ["asc","desc"],\n hideState: ["show","hide"],\n listMode: ["normal","group","sitemap","commas"],\n numCols: ["1","2","3","4","5","6"],\n excerpts: ["noexcerpts","excerpts"]\n },\n valuePrefix: "taggly.",\n excludeTags: ["excludeLists","excludeTagging"],\n excerptSize: 50,\n excerptMarker: "/%"+"%/"\n },\n\n getTagglyOpt: function(title,opt) {\n var val = store.getValue(title,this.config.valuePrefix+opt);\n return val ? 
val : this.config.listOpts[opt][0];\n },\n\n setTagglyOpt: function(title,opt,value) {\n if (!store.tiddlerExists(title))\n // create it silently\n store.saveTiddler(title,title,config.views.editor.defaultText.format([title]),config.options.txtUserName,new Date(),null);\n // if value is default then remove it to save space\n return store.setValue(title,\n this.config.valuePrefix+opt,\n value == this.config.listOpts[opt][0] ? null : value);\n },\n\n getNextValue: function(title,opt) {\n var current = this.getTagglyOpt(title,opt);\n var pos = this.config.listOpts[opt].indexOf(current);\n // a little usability enhancement. actually it doesn't work right for grouped or sitemap\n var limit = (opt == "numCols" ? store.getTaggedTiddlers(title).length : this.config.listOpts[opt].length);\n var newPos = (pos + 1) % limit;\n return this.config.listOpts[opt][newPos];\n },\n\n toggleTagglyOpt: function(title,opt) {\n var newVal = this.getNextValue(title,opt);\n this.setTagglyOpt(title,opt,newVal);\n }, \n\n createListControl: function(place,title,type) {\n var lingo = config.taggly.lingo;\n var label;\n var tooltip;\n var onclick;\n\n if ((type == "title" || type == "modified" || type == "created")) {\n // "special" controls. a little tricky. derived from sortOrder and sortBy\n label = lingo.labels[type];\n tooltip = lingo.tooltips[type];\n\n if (this.getTagglyOpt(title,"sortBy") == type) {\n label += lingo.labels[this.getTagglyOpt(title,"sortOrder")];\n onclick = function() {\n config.taggly.toggleTagglyOpt(title,"sortOrder");\n return false;\n }\n }\n else {\n onclick = function() {\n config.taggly.setTagglyOpt(title,"sortBy",type);\n config.taggly.setTagglyOpt(title,"sortOrder",config.taggly.config.listOpts.sortOrder[0]);\n return false;\n }\n }\n }\n else {\n // "regular" controls, nice and simple\n label = lingo.labels[type == "numCols" ? type : this.getNextValue(title,type)];\n tooltip = lingo.tooltips[type == "numCols" ? 
type : this.getNextValue(title,type)];\n onclick = function() {\n config.taggly.toggleTagglyOpt(title,type);\n return false;\n }\n }\n\n // hide button because commas don't have columns\n if (!(this.getTagglyOpt(title,"listMode") == "commas" && type == "numCols"))\n createTiddlyButton(place,label,tooltip,onclick,type == "hideState" ? "hidebutton" : "button");\n },\n\n makeColumns: function(orig,numCols) {\n var listSize = orig.length;\n var colSize = listSize/numCols;\n var remainder = listSize % numCols;\n\n var upperColsize = colSize;\n var lowerColsize = colSize;\n\n if (colSize != Math.floor(colSize)) {\n // it's not an exact fit so..\n upperColsize = Math.floor(colSize) + 1;\n lowerColsize = Math.floor(colSize);\n }\n\n var output = [];\n var c = 0;\n for (var j=0;j<numCols;j++) {\n var singleCol = [];\n var thisSize = j < remainder ? upperColsize : lowerColsize;\n for (var i=0;i<thisSize;i++) \n singleCol.push(orig[c++]);\n output.push(singleCol);\n }\n\n return output;\n },\n\n drawTable: function(place,columns,theClass) {\n var newTable = createTiddlyElement(place,"table",null,theClass);\n var newTbody = createTiddlyElement(newTable,"tbody");\n var newTr = createTiddlyElement(newTbody,"tr");\n for (var j=0;j<columns.length;j++) {\n var colOutput = "";\n for (var i=0;i<columns[j].length;i++) \n colOutput += columns[j][i];\n var newTd = createTiddlyElement(newTr,"td",null,"tagglyTagging"); // todo should not need this class\n wikify(colOutput,newTd);\n }\n return newTable;\n },\n\n createTagglyList: function(place,title) {\n switch(this.getTagglyOpt(title,"listMode")) {\n case "group": return this.createTagglyListGrouped(place,title); break;\n case "normal": return this.createTagglyListNormal(place,title,false); break;\n case "commas": return this.createTagglyListNormal(place,title,true); break;\n case "sitemap":return this.createTagglyListSiteMap(place,title); break;\n }\n },\n\n getTaggingCount: function(title) {\n // thanks to Doug Edmunds\n if 
(this.config.showTaggingCounts) {\n var tagCount = store.getTaggedTiddlers(title).length;\n if (tagCount > 0)\n return " ("+tagCount+")";\n }\n return "";\n },\n\n getExcerpt: function(inTiddlerTitle,title) {\n if (this.getTagglyOpt(inTiddlerTitle,"excerpts") == "excerpts") {\n var t = store.getTiddler(title);\n if (t) {\n var text = t.text.replace(/\sn/," ");\n var marker = text.indexOf(this.config.excerptMarker);\n if (marker != -1) {\n return " {{excerpt{<nowiki>" + text.substr(0,marker) + "</nowiki>}}}";\n }\n else if (text.length < this.config.excerptSize) {\n return " {{excerpt{<nowiki>" + t.text + "</nowiki>}}}";\n }\n else {\n return " {{excerpt{<nowiki>" + t.text.substr(0,this.config.excerptSize) + "..." + "</nowiki>}}}";\n }\n }\n }\n return "";\n },\n\n notHidden: function(t,inTiddler) {\n if (typeof t == "string") \n t = store.getTiddler(t);\n return (!t || !t.tags.containsAny(this.config.excludeTags) ||\n (inTiddler && this.config.excludeTags.contains(inTiddler)));\n },\n\n // this is for normal and commas mode\n createTagglyListNormal: function(place,title,useCommas) {\n\n var list = store.getTaggedTiddlers(title,this.getTagglyOpt(title,"sortBy"));\n\n if (this.getTagglyOpt(title,"sortOrder") == "desc")\n list = list.reverse();\n\n var output = [];\n var first = true;\n for (var i=0;i<list.length;i++) {\n if (this.notHidden(list[i],title)) {\n var countString = this.getTaggingCount(list[i].title);\n var excerpt = this.getExcerpt(title,list[i].title);\n if (useCommas)\n output.push((first ? "" : ", ") + "[[" + list[i].title + "]]" + countString + excerpt);\n else\n output.push("*[[" + list[i].title + "]]" + countString + excerpt + "\sn");\n\n first = false;\n }\n }\n\n return this.drawTable(place,\n this.makeColumns(output,useCommas ? 1 : parseInt(this.getTagglyOpt(title,"numCols"))),\n useCommas ? 
"commas" : "normal");\n },\n\n // this is for the "grouped" mode\n createTagglyListGrouped: function(place,title) {\n var sortBy = this.getTagglyOpt(title,"sortBy");\n var sortOrder = this.getTagglyOpt(title,"sortOrder");\n\n var list = store.getTaggedTiddlers(title,sortBy);\n\n if (sortOrder == "desc")\n list = list.reverse();\n\n var leftOvers = []\n for (var i=0;i<list.length;i++)\n leftOvers.push(list[i].title);\n\n var allTagsHolder = {};\n for (var i=0;i<list.length;i++) {\n for (var j=0;j<list[i].tags.length;j++) {\n\n if (list[i].tags[j] != title) { // not this tiddler\n\n if (this.notHidden(list[i].tags[j],title)) {\n\n if (!allTagsHolder[list[i].tags[j]])\n allTagsHolder[list[i].tags[j]] = "";\n\n if (this.notHidden(list[i],title)) {\n allTagsHolder[list[i].tags[j]] += "**[["+list[i].title+"]]"\n + this.getTaggingCount(list[i].title) + this.getExcerpt(title,list[i].title) + "\sn";\n\n leftOvers.setItem(list[i].title,-1); // remove from leftovers. at the end it will contain the leftovers\n\n }\n }\n }\n }\n }\n\n var allTags = [];\n for (var t in allTagsHolder)\n allTags.push(t);\n\n var sortHelper = function(a,b) {\n if (a == b) return 0;\n if (a < b) return -1;\n return 1;\n };\n\n allTags.sort(function(a,b) {\n var tidA = store.getTiddler(a);\n var tidB = store.getTiddler(b);\n if (sortBy == "title") return sortHelper(a,b);\n else if (!tidA && !tidB) return 0;\n else if (!tidA) return -1;\n else if (!tidB) return +1;\n else return sortHelper(tidA[sortBy],tidB[sortBy]);\n });\n\n var leftOverOutput = "";\n for (var i=0;i<leftOvers.length;i++)\n if (this.notHidden(leftOvers[i],title))\n leftOverOutput += "*[["+leftOvers[i]+"]]" + this.getTaggingCount(leftOvers[i]) + this.getExcerpt(title,leftOvers[i]) + "\sn";\n\n var output = [];\n\n if (sortOrder == "desc")\n allTags.reverse();\n else if (leftOverOutput != "")\n // leftovers first...\n output.push(leftOverOutput);\n\n for (var i=0;i<allTags.length;i++)\n if (allTagsHolder[allTags[i]] != "")\n 
output.push("*[["+allTags[i]+"]]" + this.getTaggingCount(allTags[i]) + this.getExcerpt(title,allTags[i]) + "\sn" + allTagsHolder[allTags[i]]);\n\n if (sortOrder == "desc" && leftOverOutput != "")\n // leftovers last...\n output.push(leftOverOutput);\n\n return this.drawTable(place,\n this.makeColumns(output,parseInt(this.getTagglyOpt(title,"numCols"))),\n "grouped");\n\n },\n\n // used to build site map\n treeTraverse: function(title,depth,sortBy,sortOrder) {\n\n var list = store.getTaggedTiddlers(title,sortBy);\n if (sortOrder == "desc")\n list.reverse();\n\n var indent = "";\n for (var j=0;j<depth;j++)\n indent += "*"\n\n var childOutput = "";\n for (var i=0;i<list.length;i++)\n if (list[i].title != title)\n if (this.notHidden(list[i].title,this.config.inTiddler))\n childOutput += this.treeTraverse(list[i].title,depth+1,sortBy,sortOrder);\n\n if (depth == 0)\n return childOutput;\n else\n return indent + "[["+title+"]]" + this.getTaggingCount(title) + this.getExcerpt(this.config.inTiddler,title) + "\sn" + childOutput;\n },\n\n // this if for the site map mode\n createTagglyListSiteMap: function(place,title) {\n this.config.inTiddler = title; // nasty. 
should pass it in to traverse probably\n var output = this.treeTraverse(title,0,this.getTagglyOpt(title,"sortBy"),this.getTagglyOpt(title,"sortOrder"));\n return this.drawTable(place,\n this.makeColumns(output.split(/(?=^\s*\s[)/m),parseInt(this.getTagglyOpt(title,"numCols"))), // regexp magic\n "sitemap"\n );\n },\n\n macros: {\n tagglyTagging: {\n handler: function (place,macroName,params,wikifier,paramString,tiddler) {\n var refreshContainer = createTiddlyElement(place,"div");\n // do some refresh magic to make it keep the list fresh - thanks Saq\n refreshContainer.setAttribute("refresh","macro");\n refreshContainer.setAttribute("macroName",macroName);\n refreshContainer.setAttribute("title",tiddler.title);\n this.refresh(refreshContainer);\n },\n\n refresh: function(place) {\n var title = place.getAttribute("title");\n removeChildren(place);\n if (store.getTaggedTiddlers(title).length > 0) {\n var lingo = config.taggly.lingo;\n config.taggly.createListControl(place,title,"hideState");\n if (config.taggly.getTagglyOpt(title,"hideState") == "show") {\n createTiddlyElement(place,"span",null,"tagglyLabel",lingo.labels.label.format([title]));\n config.taggly.createListControl(place,title,"title");\n config.taggly.createListControl(place,title,"modified");\n config.taggly.createListControl(place,title,"created");\n config.taggly.createListControl(place,title,"listMode");\n config.taggly.createListControl(place,title,"excerpts");\n config.taggly.createListControl(place,title,"numCols");\n config.taggly.createTagglyList(place,title);\n }\n }\n }\n }\n },\n\n // todo fix these up a bit\n styles: [\n"/*{{{*/",\n"/* created by TagglyTaggingPlugin */",\n".tagglyTagging { padding-top:0.5em; }",\n".tagglyTagging li.listTitle { display:none; }",\n".tagglyTagging ul {",\n" margin-top:0px; padding-top:0.5em; padding-left:2em;",\n" margin-bottom:0px; padding-bottom:0px;",\n"}",\n".tagglyTagging { vertical-align: top; margin:0px; padding:0px; }",\n".tagglyTagging table { 
margin:0px; padding:0px; }",\n".tagglyTagging .button { visibility:hidden; margin-left:3px; margin-right:3px; }",\n".tagglyTagging .button, .tagglyTagging .hidebutton {",\n" color:[[ColorPalette::TertiaryLight]]; font-size:90%;",\n" border:0px; padding-left:0.3em;padding-right:0.3em;",\n"}",\n".tagglyTagging .button:hover, .hidebutton:hover, ",\n".tagglyTagging .button:active, .hidebutton:active {",\n" border:0px; background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]];",\n"}",\n".selected .tagglyTagging .button { visibility:visible; }",\n".tagglyTagging .hidebutton { color:[[ColorPalette::Background]]; }",\n".selected .tagglyTagging .hidebutton { color:[[ColorPalette::TertiaryLight]] }",\n".tagglyLabel { color:[[ColorPalette::TertiaryMid]]; font-size:90%; }",\n".tagglyTagging ul {padding-top:0px; padding-bottom:0.5em; margin-left:1em; }",\n".tagglyTagging ul ul {list-style-type:disc; margin-left:-1em;}",\n".tagglyTagging ul ul li {margin-left:0.5em; }",\n".editLabel { font-size:90%; padding-top:0.5em; }",\n".tagglyTagging .commas { padding-left:1.8em; }",\n"/* not technically tagglytagging but will put them here anyway */",\n".tagglyTagged li.listTitle { display:none; }",\n".tagglyTagged li { display: inline; font-size:90%; }",\n".tagglyTagged ul { margin:0px; padding:0px; }",\n".excerpt { color:[[ColorPalette::TertiaryMid]]; }",\n"div.tagglyTagging table,",\n"div.tagglyTagging table tr,",\n"td.tagglyTagging",\n" {border-style:none!important; }",\n"/*}}}*/",\n ""].join("\sn"),\n\n init: function() {\n merge(config.macros,this.macros);\n config.shadowTiddlers["TagglyTaggingStyles"] = this.styles;\n store.addNotification("TagglyTaggingStyles",refreshStyles);\n }\n};\n\nconfig.taggly.init();\n\n//}}}\n\n
\nI changed the update on the UCSD and WHOI data to every 20 minutes:\n{{{\n0,20,40 * * * * /usgs/data0/rsignell/data/ooi/do_ooi_whoi\n0,20,40 * * * * /usgs/data0/rsignell/data/ooi/do_ooi_ucsd\n}}}\n\nTo take the datasets offline, I can just comment out the ast2.xml catalog in the threddsConfig.xml file so that it isn't scanned.\n\nSo cd to the OOI thredds content directory:\n{{{\ncd /usr/local/usgs/tomcat-thredds-ooi/content/thredds\n}}}\nvi threddsConfig.xml and comment out the line\n{{{\n <catalogRoot>ast2.xml</catalogRoot>\n}}}\nreload thredds:\n{{{\nsudo -u usgs touch ../../webapps/thredds/WEB-INF/web.xml\n}}}\n\nAh, change of plan:\nNow we will take this catalog offline instead:\n{{{\n35 * * * * /usgs/data0/rsignell/data/oceansites/do_wget\n}}}\n{{{\ncd /usr/local/usgs/tomcat-thredds/content/thredds\nvi threddsConfig.xml\nsudo -u usgs touch ../../webapps/thredds/WEB-INF/web.xml\n}}}\n
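Commenting out the catalogRoot line can also be scripted rather than edited in vi; a minimal sketch on a hypothetical threddsConfig.xml fragment (not the real file):

```python
import re

# Hypothetical threddsConfig.xml fragment -- the real file lives in
# /usr/local/usgs/tomcat-thredds-ooi/content/thredds.
config = """<threddsConfig>
  <catalogRoot>ast2.xml</catalogRoot>
  <catalogRoot>other.xml</catalogRoot>
</threddsConfig>
"""

# Wrap only the ast2.xml entry in an XML comment so the TDS skips it
# on the next scan; other catalogRoot entries are left untouched.
offline = re.sub(r"(<catalogRoot>ast2\.xml</catalogRoot>)", r"<!-- \1 -->", config)
print(offline)
```

The same substitution could be run over the real file and written back before touching web.xml to reload the TDS.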
I did a recursive global replace on the ROMS directory, changing all values of "nf90_clobber" to "NF90_NETCDF4" so that ROMS would write NetCDF4 files. These occurred in many different routines such as "def_his.F", "def_avg.F", "def_floats.F", etc. A better solution would be to have a variable defined in the input file so that the user could choose what type of files to write.\n\nI first tried to change all variables to be compressed by changing these lines in "def_var.F":\n{{{\n status=nf90_def_var(ncid, TRIM(Vinfo(1)), Vtype, &\n & varid = Vid)\n}}}\nto\n{{{\n status=nf90_def_var(ncid, TRIM(Vinfo(1)), Vtype, &\n & varid = Vid)\n shuffle = 1\n deflate = 1\n deflate_level = 1\n status=nf90_def_var_deflate(ncid, Vid, &\n & shuffle, deflate, deflate_level)\n}}}\n\nThis compiled okay, but at runtime, although the status of nf90_def_var_deflate was okay (status=0), the status of nf90_enddef was NOT okay, returning status=-101.\n\nDid I do something wrong? As it turns out, I did. I didn't look carefully at the code, and I was deflating variables that had a single value! Okay, perhaps that should have crashed NetCDF, but it wasn't too smart, either. So when I only deflated multidimensional data, it worked fine.\n\nI tested on the teignmouth grid, which has a large masked area. The size for 1 time step in the deflated file was 1.7 MB, while the original was 14 MB.\n\nI then was able to read this into Matlab using the exact same syntax as for regular files.\n\n{{{\njavaaddpath('toolsUI-4.0.jar','-end');\nimport ucar.nc2.*\nuri=''\\n}}}\n\nIt all worked as advertised!\n\n-Rich\n
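The 8x size reduction on the masked teignmouth grid makes sense: deflate is extremely effective on the constant fill values covering the masked area. A stdlib-only Python sketch of that effect (the grid size, fill value, and data pattern here are made up for illustration; this is not the ROMS or netCDF code itself):

```python
import struct
import zlib

# Toy 300x300 float32 field where most cells hold a constant fill
# value (a large masked region), with "real" data only in the interior.
nx = ny = 300
fill = 1.0e37  # hypothetical fill value
values = []
for j in range(ny):
    for i in range(nx):
        if nx * 0.3 <= i < nx * 0.7 and ny * 0.3 <= j < ny * 0.7:
            values.append(float(i * j % 97) / 97.0)  # interior "data"
        else:
            values.append(fill)  # masked area

raw = struct.pack("%df" % len(values), *values)
compressed = zlib.compress(raw, 1)  # deflate level 1, as passed to nf90_def_var_deflate

ratio = len(raw) / len(compressed)
print("raw: %d bytes, deflated: %d bytes, ratio: %.1fx" % (len(raw), len(compressed), ratio))
```

The repeated 4-byte fill pattern compresses to almost nothing, which is where most of the 14 MB to 1.7 MB reduction comes from.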
{{{\ncd /oceandata\nmkdir share\ncd share\nwget\ntar xvfz zlib-1.2.6.tar.gz\nwget\ntar xvfz hdf5-1.8.8.tar.gz\ncd hdf5-1.8.8\n./configure --with-zlib=/oceandata --prefix=/oceandata --enable-hl --enable-shared\nmake clean\nmake check install\n\n\nwget\ntar xvfz netcdf-4.2.tar.gz\n\ncd netcdf-4.2\nexport CPPFLAGS=-I/oceandata/include\nexport LDFLAGS=-L/oceandata/lib\n./configure --prefix=/oceandata --enable-netcdf-4 --enable-shared --enable-dap\nmake clean\nmake install\n\nwget\ntar xvfz netcdf-fortran-4.2.tar.gz\ncd netcdf-fortran-4.2\n./configure --prefix=/oceandata\nmake install\nmake check\n\ncd /oceandata/share/roms\nsvn \n\ncd /oceandata/share\n wget\n\n}}}\n\n{{{\nroot@master:~# df\nFilesystem           1K-blocks      Used Available Use% Mounted on\n/dev/xvda1            
\nThe ToolsUI GUI (the NetCDF 4.0 Tools App) is a very useful tool for debugging, quick browsing and mapping of data\nfrom NetCDF, NcML, OpenDAP URLs and more. If you haven't seen it, it's available on the NetCDF Java page:\n<>. \n\nGet the version 4+ toolsUI.jar, which you can use via Java webstart, or as I prefer, from the command line:\n{{{\nwget\njava -Xmx512m -jar toolsUI-4.1.jar\n}}}\n\nFor those of you who have wondered about what all those tabs do in the ToolsUI GUI, I had a web-enabled conversation with John Caron and we captured a WebEx recording of John giving a demo of the features he uses most often. The demo is about 40 minutes, and can be viewed with the WebEx Player on Mac & Windows.\n\nJohn Caron's ToolsUI Demo: <> (46 MB)\n\nWebEx Player (free download):\n\n(note that you don't have to sign up for a free trial of WebEx to just download the player).\n\n
On 04/29/2005 11:14 AM I received this message from David Divins in response to a query about the GMT "surface" routine parameters used for the Coastal Relief Model V1.0:\n\n{{{\nRich,\n\nWe blockmean the data (-I3c), then run a perl script that does most \nprocessing for us. Here is the surface info from that script:\n\n$gRes = "3c"; # 3 second grid resolution\n$gFormat = "12"; # NGDC G98 Format\n\n$Convergence = 0.1;\n$Tension = 0.5;\n$gOptions = "-I$gRes -C$Convergence -T$Tension -N500 -Lu-0.1 -V";\n\n$command = "surface $xyzFile -G$$.grd=$gFormat -R$range $gOptions";\n\n\nDavid \n}}}\n\nI think these settings could be improved in the following ways: \n\n * Change the tension from 0.5 to 0.35 (T=1 is a harmonic surface with no maxima or minima except at control data points, T=0 is the "minimum curvature solution" which can give undesired false maxima and minima). 0.35 is recommended by the GMT folks for "steep topography data" and decreasing the tension from 0.5 to 0.35 will make isolated data points look less like tent poles.\n\n * Include the aspect ratio argument, since we are gridding in geographic (lon,lat) coordinates. Should be set to the cosine of the central latitude (e.g. -A0.75 at 41.5 degrees north)\n\n
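The aspect-ratio argument in the second bullet is just the cosine of the central latitude, so the -A0.75 value quoted for 41.5 degrees north is easy to verify (plain Python, not GMT itself):

```python
import math

central_lat = 41.5  # degrees north; use the mid-latitude of the grid region
aspect = math.cos(math.radians(central_lat))

# Format as a GMT surface argument, e.g. "-A0.75"
flag = "-A%.2f" % aspect
print(flag)  # -> -A0.75
```

This corrects for the fact that a degree of longitude shrinks relative to a degree of latitude away from the equator when gridding in geographic coordinates.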
Jeff List asked if I could give him an ADCIRC tidal current prediction for Cape Hatteras, and I said "sure", then realized that the tri_tide_movie.m script was reading from a VDatum-style NetCDF file, which we only had for the Gulf of Maine. For the rest of the east coast, we had "adcirc_ec2001v2e_fix.mat" in a mat file, but no netcdf file. So here's what I did to make the netcdf file for ec2001v2e.\n\n* In Matlab, opened adcirc_ec2001v2e_fix.mat, and noticed that the number of elements and nodes did not match the number obtained by unzipping the file\n<>\nlinked from\n<>. In the file ec2001.grd, the number of elements is 492179 and the number of nodes is 254565. The "fix" suffix indicates that it had been "fixed" with the Matlab routine "triage", which adds extra nodes and elements on the end of the arrays. So to change back to the original data, I just removed the extra nodes and elements:\n{{{\nload c:\srps\sm_cmg\strunk\sadcirc_tides\sadcirc_ec2001v2e_fix\nind=1:254565;\ndepth=depth(ind);\nelev=elev(ind,:);\nlat=lat(ind);\nlon=lon(ind);\nu=u(ind,:);\nv=v(ind,:);\n\ntri=tri(1:492179,:);\nsave ec2001_v2e.mat\n}}}\n\n\n* Downloaded a local copy of "" from \n\n\nIn cygwin:\n{{{\ncd c:/rps/tide\nncdump -h > foo.cdl\n}}}\nedited the foo.cdl file, replacing the number of nodes and elements, and reducing the number of tidal constituents from 38 to 10. Then I did\n{{{\nncgen -o < foo.cdl\n}}}\nThe tidal constituent names in ec2001v2e.mat were \n[STEADY K1 O1 Q1 M2 S2 N2 K2 M4 M2]\nwhich corresponded to indices\n[1
Testing reading data from using several different protocols. \n\nTest reading the 24th field bottom layer of mud_01 using "cf_tslice":\n{{{\n[m,g]=cf_tslice(uri,'mud_01','10,0,:,:');\n}}}\n\n| Mudpile | 1.37 s|uri1='y:\srsignell\' |\n| Stellwagen HTTP | 6.56 s|uri2='' |\n| Stellwagen opendap | 4.80 s|uri3='' |\n| THREDDS | 5.16 s|uri4='' |\n| local disk | 1.19 s|uri5='c:/downloads/' |\n\nTest reading the 12 3D fields of mud_01 using "nj_varget":\n{{{\n[m,g]=nj_varget(uri3,'mud_01','12:23,:,:,:');toc\n}}}\n\n| Mudpile | 5.87 s|\n| Stellwagen HTTP | 30.75 s|\n| Stellwagen opendap | 6.06 s|\n| THREDDS | 12.35 s|\n| local disk | 1.81 s|\n\n
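The timings above follow one simple pattern: wall-clock a single read call per protocol. A generic sketch of that harness (`fake_reader` is a stand-in, since the actual cf_tslice/nj_varget calls are Matlab and need the live OPeNDAP endpoints):

```python
import time

def time_read(read_fn, *args):
    """Wall-clock one data read, as in the protocol comparison above."""
    t0 = time.perf_counter()
    result = read_fn(*args)
    elapsed = time.perf_counter() - t0
    return result, elapsed

# Stand-in reader so the harness is runnable; a real comparison would
# call an opener for each URI (local disk, HTTP, OPeNDAP, THREDDS).
def fake_reader(n):
    return sum(range(n))

value, secs = time_read(fake_reader, 100000)
print("read took %.3f s" % secs)
```

Running the same read against each URI in turn gives a table like the ones above; first-read times can include connection setup, so repeating each read is a good idea.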
/***\n| Name|ToggleTagPlugin|\n| Description|Makes a checkbox which toggles a tag in a tiddler|\n| Version|3.0 ($Rev: 1845 $)|\n| Date|$Date: 2007-03-16 15:19:22 +1000 (Fri, 16 Mar 2007) $|\n| Source||\n| Author|Simon Baird <>|\n| License||\n!Usage\n{{{<<toggleTag }}}//{{{TagName TiddlerName LabelText}}}//{{{>>}}}\n* TagName - the tag to be toggled, default value "checked"\n* TiddlerName - the tiddler to toggle the tag in, default value the current tiddler\n* LabelText - the text (gets wikified) to put next to the check box, default value is '{{{[[TagName]]}}}' or '{{{[[TagName]] [[TiddlerName]]}}}'\n(If a parameter is '.' then the default will be used)\n\nExamples:\n\n|Code|Description|Example|h\n|{{{<<toggleTag>>}}}|Toggles the default tag (checked) in this tiddler|<<toggleTag>>|\n|{{{<<toggleTag TagName>>}}}|Toggles the TagName tag in this tiddler|<<toggleTag TagName>>|\n|{{{<<toggleTag TagName TiddlerName>>}}}|Toggles the TagName tag in the TiddlerName tiddler|<<toggleTag TagName TiddlerName>>|\n|{{{<<toggleTag TagName TiddlerName 'click me'>>}}}|Same but with custom label|<<toggleTag TagName TiddlerName 'click me'>>|\n|{{{<<toggleTag . . 'click me'>>}}}|dot means use default value|<<toggleTag . . 'click me'>>|\nNotes:\n* If TiddlerName doesn't exist it will be silently created\n* Set label to '-' to specify no label\n* See also\n\n!Known issues\n* Doesn't smoothly handle the case where you toggle a tag in a tiddler that is current open for editing\n\n***/\n//{{{\n\nmerge(config.macros,{\n\n toggleTag: {\n\n doRefreshAll: true,\n createIfRequired: true,\n shortLabel: "[[%0]]",\n longLabel: "[[%0]] [[%1]]",\n\n handler: function(place,macroName,params,wikifier,paramString,tiddler) {\n var tag = (params[0] && params[0] != '.') ? params[0] : "checked";\n var title = (params[1] && params[1] != '.') ? params[1] : tiddler.title;\n var defaultLabel = (title == tiddler.title ? this.shortLabel : this.longLabel);\n var label = (params[2] && params[2] != '.') ? 
params[2] : defaultLabel;\n label = (label == '-' ? '' : label);\n var theTiddler = title == tiddler.title ? tiddler : store.getTiddler(title);\n var cb = createTiddlyCheckbox(place, label.format([tag,title]), theTiddler && theTiddler.isTagged(tag), function(e) {\n if (!store.tiddlerExists(title)) {\n if (config.macros.toggleTag.createIfRequired) {\n var content = store.getTiddlerText(title); // just in case it's a shadow\n store.saveTiddler(title,title,content?content:"",config.options.txtUserName,new Date(),null);\n }\n else \n return false;\n }\n store.setTiddlerTag(title,this.checked,tag);\n return true;\n });\n }\n }\n});\n\n//}}}\n\n
!THREDDS\nURL: ( still works, but is deprecated)\nTomcat manager URL:\nTHREDDS manager URL:\nTomcat home: /usr/local/usgs/tomcat-thredds\nThredds catalog home: /usr/local/usgs/tomcat-thredds/content/thredds\nSVN for Thredds catalogs:\n\n!THREDDS-Dev\nURL: ( still works, but is deprecated)\nTomcat manager URL:\nTHREDDS manager URL:\nTomcat home: /usr/local/usgs-dev/tomcat-thredds\nThredds catalog home: /usr/local/usgs-dev/tomcat-thredds/content/thredds\nSVN for Thredds catalogs:\n\n!geoportal\nURL: \nTomcat manager URL:\nTomcat home: /usr/local/usgs/tomcat-geoportal\n\n!ncWMS-Dev\nURL:\nTomcat manager URL:\nTomcat home: /usr/local/usgs-dev/tomcat-ncwms\n\n!RAMADDA\nURL:\nTomcat manager URL:\nTomcat home: /usr/local/usgs/tomcat-ramadda\nRamadda home: /usgs/data0/ramadda\n\n\n!GI-CAT\nURL:\nTomcat manager URL:\nTomcat home: /usr/local/usgs/tomcat-gicat\n\n!GI-CAT-dev\nURL:\nTomcat manager URL:\nTomcat home: /usr/local/usgs-dev/tomcat-gicat\n\nThe following info should get you in on all four tomcat admin GUIs:\n admin usgsThredds=1 (actually, this is not quite the real password, for obvious reasons)\n
The tinyurl for this tiddler is:\n\n!Tomcat\nI downloaded the "windows service installer" from \n\nwhich ended up downloading this file:\n\n\nDuring the installation, instead of accepting the default path: \n{{{\nC:\sProgram Files\sApache Software Foundation\sTomcat 6.0\n}}}\nI typed in:\n{{{\nc:\sprograms\stomcat6\n}}}\nThis is important since I want to run ERDDAP, which has problems with spaces in path names.\n\nTo control the tomcat server on the PC, I go to "Start=>All Programs=>Administrative Tools=>Services"\nand then select the "Apache Tomcat" service and reload or restart.\n\n!THREDDS Data Server\nI downloaded the THREDDS Data Server (TDS) following the instructions at, which ended up downloading this file:\n\nI put the war file in my tomcat webapps directory (c:\sprograms\stomcat6\swebapps) and stopped and started tomcat.\nPointing my browser at http://localhost:8080/thredds confirmed that the TDS was running.\nThe TDS catalogs and TDS config file (threddsConfig.xml) are in c:\sprograms\stomcat6\scontent\sthredds, so I dropped my existing TDS catalogs from the server in there, and then modified the default catalog.xml to point to my catalogs. \n\n!ncWMS\nI downloaded ncWMS following the instructions at, which ended up downloading this file: \n\nI put the war file in my tomcat webapps directory (c:\sprograms\stomcat6\swebapps), renamed it to ncWMS.war and then stopped and started tomcat.\nThis should install ncWMS, making a subdirectory called ncWMS in the webapps directory. Pointing my browser at http://localhost:8080/ncWMS confirmed that ncWMS was running.\n\n!ERDDAP\nI followed the instructions at I downloaded the war file and put it in c:\sprograms\stomcat6\swebapps and then unzipped the zip file into the tomcat content directory (c:\sprograms\stomcat6\scontent). 
\n\nFollowing the instructions, I downloaded the fonts that ERDDAP likes from and unzipped the contents into my java jre font directory (C:\sProgram Files\sJava\sjre6\slib\sfonts).\n\nI then edited the file c:\sprograms\stomcat6\scontent\serddap\sdatasets.xml and added my datasets.\n\nI then reloaded tomcat.\n\nPointing my browser at http://localhost:8080/erddap confirmed that ERDDAP was running!
TortoiseSVN is a popular Windows SVN client that can be obtained from\n\nMost people will want to download the Windows 32-bit installer.
> I have some geotiff images with a black border. I want to make the border\n> transparent. Is there a way to do this with FWTools?\n\nSimon,\n\nIn part this depends on what software you want it to be transparent in.\nIn some software packages (like MapServer or OpenEV) you can pick a\n"nodata value" such as 0 to be treated as transparent. In some cases\nthe packages can also recognise nodata values as a kind of metadata from\nthe file. The gdal_translate -a_nodata switch can be used to assign this\nnodata value to an output file though it is only preserved in some formats.\n\nFor example:\n{{{\ngdal_translate -a_nodata 0 in.tif out.tif\n}}}\nAnother approach to transparency is to add an alpha channel with\nexplicit transparency. I think you could essentially assign all "zero"\npixels in an input greyscale image to have an alpha of zero in the output\nusing something like:\n\n{{{\ngdalwarp -srcnodata 0 -dstalpha in.tif out.tif\n}}}\nThis whole area though is rife with challenges because:\n\n* many software packages don't automatically support nodata values.\n* some software packages don't recognise alpha bands for transparency\n* some file formats don't support saving nodata values.\n* some formats don't support alpha bands.\n* GDAL's "nodata" data model treats nodata values as independent for\n each band.\n* all black pixels will get treated as transparent, not just border values.
Starting with parameter set exactly as in ROMS sed paper, except that we spin up for 30,000 time steps, which is 25 minutes, which is how long it takes to obtain steady state (for the initial transients to decay).\n\n{{{\nr001 ROMS 3.1, SLOPE_NEMETH, BEDLOAD_COEFF == 0.06, poros=0.3 (8cpu=35min)\nr002 ROMS 3.1, SLOPE_NEMETH, BEDLOAD_COEFF == 0.17, poros=0.3 (bombed!)\nr003 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, poros=0.3 (4cpu=66min)\nr004 ROMS 3.1, SLOPE_NEMETH, BSTRESS_UPWIND, BEDLOAD_COEFF == 0.17, poros=0.4 \nr005 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.0, poros=0.4 \nr006 ROMS 3.1, SLOPE_LESSER, MPM_RIPPLE, BEDLOAD_COEFF == 1.0, poros=0.4 \nr007 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, poros=0.3, d50=160, Wsed=14.3, TauCE=0.1658 \nr008 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BSTRESS_UPWIND, BEDLOAD_COEFF == 1.00, poros=0.3, d50=160, Wsed=14.3, TauCE=0.1658 \nr009 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, poros=0.3, Wsed=13.0, Flux=3.5e-2\nr010 ROMS 3.1, SLOPE_NEMETH, BEDLOAD_COEFF == 0.17, BSTRESS_UPWIND, poros=0.3 \nr011 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.17, BSTRESS_UPWIND, poros=0.3 \nr012 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.08, BSTRESS_UPWIND, poros=0.3 \nr013 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.10, BSTRESS_UPWIND, poros=0.3 \nr014 ROMS 3.1, SLOPE_LESSER, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, BSTRESS_UPWIND, poros=0.3\nr015 ROMS 3.1, SLOPE_LESSER, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, BSTRESS_UPWIND, poros=0.4 \nr016 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.10, BSTRESS_UPWIND, poros=0.4 \nr017 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.10, BSTRESS_UPWIND, poros=0.4, WSED=13.0\nr018 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.10, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.35d-2\nr019 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.10, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.35d-2, MORFAC=90\nr020 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.06, BSTRESS_UPWIND, poros=0.4, 
MUD_ERATE=0.35d-2, MORFAC=90\nr021 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.35d-2, MORFAC=90\nr022 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.40d-2, WSED=13.0 MORFAC=90\nr023 ROMS 3.1, SLOPE_LESSER, BEDLOAD_COEFF == 0.06, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.40d-2, WSED=13.0 MORFAC=90\nr024 ROMS 3.1, SLOPE_LESSER, MPM_RIPPLE, BEDLOAD_COEFF == 0.06, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.35d-2, MORFAC=90\nr025 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 0.06, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.35d-2, MORFAC=90\nr026 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, poros=0.4, MUD_ERATE=0.35d-2, MORFAC=90 (exe copied from r003)\nr027 ROMS 3.1, SLOPE_NEMETH, MPM_RIPPLE, BEDLOAD_COEFF == 1.00, poros=0.4, MUD_ERATE=0.25d-2, MORFAC=90 (exe copied from r003)\nr028 ROMS 3.1, SLOPE_NEMETH, BEDLOAD_COEFF == 0.06, BSTRESS_UPWIND, poros=0.4, MUD_ERATE=0.35d-2, MORFAC=90 (same as 20, but with nemeth)\nr029 ROMS 3.1, SLOPE_NEMETH, BEDLOAD_COEFF == 0.06, poros=0.4, MUD_ERATE=0.35d-2, MORFAC=90 (same as 28, but without upwind)\n}}}\n\nDelft3d cases:\nSpin up for 30 minutes, use morph fac of 90 for 10 minutes more (40 minutes total)\n{{{\nrps03 10 layer algebraic model (log-stretched with 4% layer at bottom)\nrps04 10 layer k-eps model (log-stretched with 4% layer at bottom)\nrps05 20 layer k-eps model (log-stretched with 2% layer at bottom)\nrps06 10 layer algebraic model (log-stretched with 4% layer at bottom), 160 micron sand\n}}}
If you can log into you can run:\n{{{\nssh\nsu eric\ncd /var/www/html/catalog\nrm -rf thredds\ntime ./ &\n}}}\nThat's a perl script which starts the ncISO crawler and then runs:\n{{{\n/var/www/html/catalog/bin/\n}}}\nand outputs the results in /var/www/html/catalog/sura_iso.xml, which is the file that Charlton's viewer loads.\n\nI'm going to download a copy of and annotate it and send you that so you'll know how to change it.
This COAWST test case runs with latest COAWST (2013-04-30):\n{{{\n/peach/data1/rsignell/Projects/COAWST/INLET_TEST\n}}}\nI did:\n{{{\n./coawst.bash -j\nqsub run_inlet_test\n}}}
/***\nContains the stuff you need to use Tiddlyspot\nNote you must also have UploadPlugin installed\n***/\n//{{{\n\n// edit this if you are migrating sites or retrofitting an existing TW\nconfig.tiddlyspotSiteId = 'rsignell';\n\n// make it so you can by default see edit controls via http\nconfig.options.chkHttpReadOnly = false;\n\n// disable autosave in d3\nif (window.location.protocol != "file:")\n config.options.chkGTDLazyAutoSave = false;\n\n// tweak shadow tiddlers to add upload button, password entry box etc\nwith (config.shadowTiddlers) {\n SiteUrl = 'http://'+config.tiddlyspotSiteId+'';\n SideBarOptions = SideBarOptions.replace(/(<<saveChanges>>)/,"$1<<tiddler TspotSidebar>>");\n OptionsPanel = OptionsPanel.replace(/^/,"<<tiddler TspotOptions>>");\n DefaultTiddlers = DefaultTiddlers.replace(/^/,"[[Welcome to Tiddlyspot]] ");\n MainMenu = MainMenu.replace(/^/,"[[Welcome to Tiddlyspot]] ");\n}\n\n// create some shadow tiddler content\nmerge(config.shadowTiddlers,{\n\n'Welcome to Tiddlyspot':[\n "This document is a ~TiddlyWiki from A ~TiddlyWiki is an electronic notebook that is great for managing todo lists, personal information, and all sorts of things.",\n "",\n "@@font-weight:bold;font-size:1.3em;color:#444; //What now?// &nbsp;&nbsp;@@ Before you can save any changes, you need to enter your password in the form below. Then configure privacy and other site settings at your [[control panel|http://" + config.tiddlyspotSiteId + "]] (your control panel username is //" + config.tiddlyspotSiteId + "//).",\n "<<tiddler TspotControls>>",\n "See also GettingStarted.",\n "",\n "@@font-weight:bold;font-size:1.3em;color:#444; //Working online// &nbsp;&nbsp;@@ You can edit this ~TiddlyWiki right now, and save your changes using the \s"save to web\s" button in the column on the right.",\n "",\n "@@font-weight:bold;font-size:1.3em;color:#444; //Working offline// &nbsp;&nbsp;@@ A fully functioning copy of this ~TiddlyWiki can be saved onto your hard drive or USB stick. 
You can make changes and save them locally without being connected to the Internet. When you're ready to sync up again, just click \s"upload\s" and your ~TiddlyWiki will be saved back to",\n "",\n "@@font-weight:bold;font-size:1.3em;color:#444; //Help!// &nbsp;&nbsp;@@ Find out more about ~TiddlyWiki at [[|]]. Also visit [[TiddlyWiki Guides|]] for documentation on learning and using ~TiddlyWiki. New users are especially welcome on the [[TiddlyWiki mailing list|]], which is an excellent place to ask questions and get help. If you have a tiddlyspot related problem email [[tiddlyspot support|]].",\n "",\n "@@font-weight:bold;font-size:1.3em;color:#444; //Enjoy :)// &nbsp;&nbsp;@@ We hope you like using your site. Please email [[|]] with any comments or suggestions."\n].join("\sn"),\n\n'TspotControls':[\n "| tiddlyspot password:|<<option pasUploadPassword>>|",\n "| site management:|<<upload http://" + config.tiddlyspotSiteId + " index.html . . " + config.tiddlyspotSiteId + ">>//(requires tiddlyspot password)//<<br>>[[control panel|http://" + config.tiddlyspotSiteId + "]], [[download (go offline)|http://" + config.tiddlyspotSiteId + "]]|",\n "| links:|[[|]], [[FAQs|]], [[announcements|]], [[blog|]], email [[support|]] & [[feedback|]], [[donate|]]|"\n].join("\sn"),\n\n'TspotSidebar':[\n "<<upload http://" + config.tiddlyspotSiteId + " index.html . . " + config.tiddlyspotSiteId + ">><html><a href='http://" + config.tiddlyspotSiteId + "' class='button'>download</a></html>"\n].join("\sn"),\n\n'TspotOptions':[\n "tiddlyspot password:",\n "<<option pasUploadPassword>>",\n ""\n].join("\sn")\n\n});\n//}}}\n
Go to the NOAA Tsunami Inundation DEM site at \n\nthe right click on the data you want and edit the URL to contain only the data location and pass to wget:\n{{{\ncd\nwget\n}}}\nThen convert to NetCDF using gdal_translate\n{{{\ngdal_translate -of netcdf -a_srs EPSG:4326 biloxi_ms.asc\n}}}\nUnfortunately, the gdal_translate produced netcdf file is not CF or COARDS compliant, so we have to create a little bit of NcML:\n{{{\n <netcdf xmlns="" location\n="/usgs/data0/bathy/ngdc/">\n <variable name="topo" orgName="Band1">\n <attribute name="units" value="m"/>\n <attribute name="long_name" value="Topography"/>\n </variable>\n <dimension name="lon" orgName="x"/>\n <dimension name="lat" orgName="y"/>\n\n\n <variable name="lon" shape="lon" type="double">\n <attribute name="units" value="degrees_east"/>\n <values start="-89.30" increment="9.25926e-05"/>\n </variable>\n\n <variable name="lat" shape="lat" type="double">\n <attribute name="units" value="degrees_north"/>\n <values start="30.60" increment="-9.25926e-05"/>\n </variable>\n\n <attribute name="Conventions" value="COARDS"/>\n </netcdf>\n}}}\n
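In the NcML above, each `<values start increment>` element is shorthand for an explicit, evenly spaced coordinate array. A small sketch of that expansion (n is hypothetical here; the real lengths come from the renamed x and y dimensions in the file):

```python
def expand(start, increment, n):
    """Expand an NcML <values start=... increment=.../> pair to n coordinates."""
    return [start + i * increment for i in range(n)]

n = 5  # hypothetical; the actual size comes from the lon/lat dimensions
lon = expand(-89.30, 9.25926e-05, n)
lat = expand(30.60, -9.25926e-05, n)  # negative increment: rows run north to south
print(lon[0], lon[-1])
print(lat[0], lat[-1])
```

The negative latitude increment matters: GDAL writes rows from the top (north) down, so the latitude coordinate must decrease to stay consistent with the data layout.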
__Ranglum__\n{{{\n2 oz Gosslings Black Seal\n1 oz lime juice\n¾ oz falernum\n½ oz Wray & Nephew Overproof\n½ to 1 bar-spoon sugar syrup\n}}}\nShake all ingredients with ice. Strain into an ice-filled old-fashioned glass and garnish with a lime wedge.\n\n__Royal Bermuda Yacht Club__\n{{{\n2 oz gold Barbados rum\n1 oz lime juice\n¾ oz falernum\n¼ oz Cointreau\n}}}\nShake all ingredients with ice. Strain into a cocktail glass and garnish with a lime wedge.
Work with GI-CAT group/Matt to ensure that their THREDDS crawler calculates geospatial and temporal extents\n\nComplete Wiki entry for accessing Web Services from Matlab\nCreate an example CSW request from Matlab\n\nInstall ERDDAP for IOOS Modeling Testbed at SURA\n\nVisit NDBC to fix NetCDF files, TDS configuration and set up ERDDAP\n\nTest Radial FeatureType for Wave Spectra and HF Radar radial data \n\nTest TrajectoryCollection FeatureType for particle-tracking model output\n
ssh into SMAST as user tomcat\n{{{\n ssh\n}}}\nBecause the sys admin has not yet responded to my requests to put a startup script for tomcat7 in the /etc/init.d directory, I decided to take matters into my own hands and edited the crontab for tomcat to include a reboot entry:\n{{{\n-bash-3.2$ whoami\ntomcat\n-bash-3.2$ crontab -l\n@reboot /http/www/tomcat/apache-tomcat-7.0.22/bin/\n}}}\n\nTo start Chen's TDS, do:\n{{{\n/http/www/tomcat/apache-tomcat-7.0.22/bin/\n}}}\nbut after shutdown, check to make sure that it's not running by doing:\n{{{\nps aux | grep tomcat\n}}}\nand make sure you don't see tomcat-7 running (you may see a tomcat5, but that's someone else's). If you do see a tomcat-7 still running, kill it with "kill -9", and then do:\n{{{\n/http/www/tomcat/apache-tomcat-7.0.22/bin/\n}}}\n\nOld stuff...\nChen:\n/http/www/apache-tomcat-6.0.18/content/thredds\n{{{\ncd /http/www/apache-tomcat-6.0.18\n./bin/\nps aux | grep apache-tomcat\n./bin/\n}}}\n\nnow this is correct for tomcat7:\n\nSo I stopped tomcat6, and started tomcat7 and things seem okay again.\n{{{\nTo be explicit, I did this:\n/http/www/apache-tomcat-6.0.18/bin/\n\n/http/www/tomcat/apache-tomcat-7.0.22/bin/\n}}}\n\n\nAvijit: \n/usr/local/tomcat/content/thredds\n\n\n\n\n
{{{\n\s\sTeramac\str8\sACOE_LIDAR\s2005_2007_NCMP_MA\s\n.\sShapefiles has the list of which box corresponds to which region\n.\s2005_2007_NCMP_MA_1mGrid contains the original DEM geotiffs (with houses, trees, etc)\n.\s2005_2007_NCMP_MA_BareEarth contains the bare earth DEM geotiffs\n}}}\n\nConverted the UTM grids to EPSG:4326:\n{{{\ngdalwarp 2005_2007_NCMP_MA_059_BareEarth.tif -t_srs EPSG:4326 -srcnodata 0 -dstnodata -99 bare_059_geo.tif\n}}}\nThe "dstnodata" puts -99 in the extra region around the borders created by the UTM to lon/lat conversion.\n\nConverted the tif to NetCDF GMT format because Mirone didn't honor the -99 from the tif:\n{{{\ngdal_translate bare_059_geo.tif -of gmt foo.grd\n}}}\nI then read "foo.grd" into Mirone, and used "clip grid" to set any values less than -88 to NaN. I saved this as a GMT file (bare_059_geo.grd), then did a color shaded relief image, setting the color palette to "lidar.cpt" from the c:\srps\slidar\s directory, and setting the color range to be from -2 to 26. I then did the default color shaded relief and saved to Google Earth.\n\nI put the GMT files on coast-enviro/models/bathy for Chen's group to grab.\n
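Only tile 059 is shown above, but the same conversion applies to every BareEarth tile, so it can be scripted. Here is a dry-run sketch that just prints the gdalwarp command per tile (tile 060 is a made-up second example; gdalwarp itself is not invoked):

```shell
# Dry-run batch sketch: print one gdalwarp command per BareEarth tile.
# Tile 059 is the one from the note; 060 is a hypothetical second tile.
for f in 2005_2007_NCMP_MA_059_BareEarth.tif 2005_2007_NCMP_MA_060_BareEarth.tif
do
    n=${f#2005_2007_NCMP_MA_}   # strip the prefix...
    n=${n%_BareEarth.tif}       # ...and the suffix, leaving the tile number
    echo gdalwarp "$f" -t_srs EPSG:4326 -srcnodata 0 -dstnodata -99 "bare_${n}_geo.tif"
done
```

Drop the leading `echo` (and glob the real filenames) to run the conversions for real.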
*check Java version\n*mock install Tomcat (go to c:/downloads/thredds, discuss)\n*install thredds/ncWMS
Katharine Hayhoe, Texas Tech\nShe has ranked all the GCMs based on some basic measures (\nHayhoe et al 2009 MITI global downscaling for New England\nstatistical downscaling\ndynamical downscaling\n1/8 degree => 30km\ncan test statistical downscaling against dynamical downscaling\n\nweakness of dynamical downscaling: can only model what the physics of the model \n\n\n\n\n\n\nSeth McGinnis, NCAR\nNARCCAP Overview\n\nRegional climate modeling project.\n6 different regional models\nSCRIPPS is a participant (and 6 other groups)\nNesting all 6 within 4 different global climate models.\nfor US, Canada, northern Mexico.\n\n\nSame spatial resolution over the same domain (50km)\n\nStep 1. Use 25 years of NCEP Reanalysis boundary conditions (then two 30-year periods)\nStep 2. Nest regional models within global (GFDL and NCAR CAM3, HADCM3, CCSM)\n\nA2 emission scenario (a high one, but not the highest one)\n\nMM5 Iowa State\nRegCM3 UCSC\nCRCM Quebec Ouranos\nHADRM3 Hadley Centre\nRSM Scripps\nWRF PNL\n\nFunded by NOAA-OGP, NSF, DOE, USEPA-ORD\n\n\n\nGroups doing further downscaling (50km => 10km)\n\nMellissa Buchowski working on which models are working well and why.\n\n57 variables @ 3-hourly resolution\nCF-compliant NetCDF\nData served through ESG (CF-compliant NetCDF)\nGIS compatible, plain text for impacts\n\nCORDEX is the next follow-on project
Software links:\n*[[Installing Java, Tomcat and the THREDDS Data Server |]]\n*[[NetCDF Web Map Service (ncWMS) |]]\n*[[Integrated Data Viewer (IDV) |]]\n*[[Matlab NJ Toolbox (njTBX) |]]\n*[[R package for NetCDF/OPeNDAP |]]\n*[[Python package for NetCDF/OPeNDAP |]]\n*[[ArcGIS Environmental Data Connector |]]\n*[[Matlab NetCDF writing software |]]\n**For SVN access to various Matlab high-level packages to write ~NetCDF (instead of zip file access)\n**svn co \n**svn co \n**svn co \n\nHelp links:\n* [[A method for setting up TDS catalogs for the first time]]\n* Unidata E-mail support:,,,\n* [[Google Group on IOOS model data interoperability |]]\n* [[NOAA Data Management and Integration Team GeoIDE Unified Access Framework for Gridded Data Project Page |]]\n* [[Files used in Short Course | ]]\n \nA few THREDDS catalogs of interest:\n*Bathymetry/World Topography:\n*NOAA GeoIDE Clean Catalog:\n*Catalog of IOOS-compliant ocean modeling results by IOOS region: (see the GCOOS region for some of the models being used to simulate the Deepwater Horizon spill)\n*Unidata's Motherlode Server with lots of realtime met data:\n*NOAA PFEG:\n
Upgrading to Sun Java: try this first\n{{{\nsudo add-apt-repository ppa:sun-java-community-team/sun-java6\nsudo apt-get update\nsudo apt-get install sun-java6*\n}}}\nIf that doesn't work, try using the GUI first to change the repositories that are checked by apt-get, then try "apt-get install sun-java6*" again:\n\n\nTo see where a package got installed:\n{{{\ndpkg -L <packagename>\n}}}\n\nBefore Howard's NCO script would work, I needed to do\n{{{\nsudo apt-get install g++\n}}}\n
If you take a look at:\nhttp://localhost:8080/thredds/catalog/testAll/catalog.html\nyou will see several NetCDF files being served.\n\nIf you add another NetCDF file to\n/programs/tomcat6/content/thredds/public/testdata\nand reload your browser, you will see the new NetCDF file being served, as this THREDDS catalog is created dynamically. You can also make a subdirectory, and any files in the subdirectory will be served automatically. Thus you can add new data without reloading thredds from the tomcat manager.\nThe title of this dataset is "Test all files in a directory"\nThe dataset ID is "testDatasetScan"
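For reference, the dynamic listing described above is produced by a `datasetScan` element in the TDS catalog. The stock TDS catalog ships with roughly this entry (the attribute values shown are the old defaults and may differ on your install, so treat this as a sketch rather than your exact config):

```xml
<datasetScan name="Test all files in a directory"
             ID="testDatasetScan"
             path="testAll"
             location="content/testdata/">
  <metadata inherited="true">
    <serviceName>all</serviceName>
  </metadata>
</datasetScan>
```

Any file dropped under `location` (including in new subdirectories) appears under the `path` URL on the next page load, which is why no redeploy is needed.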
Next meeting, April 11-12, 2011\nClimate push from Policy Committee (NASA, NOAA; NSF supports broadening the user base of Unidata Program software)\n\nNew Unidata members:\nJulien Chastang (IDV/RAMADDA)\nDoug Dirks (writer/editor, used to work for IDL)\n\n500 machines at 250+ sites are running LDM-6. NOAA, NASA, the Navy and companies all use LDM software without IDD to push data within their institutions.\n\nUPC IDD Cluster relays data to more than 650 downstream connections. 5.7TB output/day => 525 Mbps (peak rates exceed 1.1Gbps, and UCAR has a 10Gbps pipe, so 10% of available bandwidth). NEXRAD Level 2 and CONDUIT are the big players.\n\nRAMADDA use continues to grow, even after Jeff McWhirter's departure; moving to a more open source model.\n\nUnidata working with GFDL on Gridspec in libCF. Ed Hartnett is point man.\n\nNetCDF-CF has been proposed as an OGC standard; comments closed on Oct 7. (how many comments?)\n\nNew NOAAPORT will have 3 times the bandwidth. \n\nNext year 1/4 degree GFS; 1/2 degree is currently the smallest. \nCurrently queue sizes are not large enough. 
Machines are fast enough, but need more memory.\n\nLDM will have \n\n"Leverage the synergies" Steve\n\n\nNSF panels work very differently: some just rubber-stamp the peer reviews, but some actually fund based on panel recommendations.\n"pack the panels!"\n\nAWIPS II\nJava and Python,\nbasically a new GEMPAK\n\nNew GIS project @ NCEP\nKML, Shapefiles, WMS and WCS\npublication quality graphs?\nwould encourage Web Services and RESTful URL access for workflow.\n\nWould be cool to export NcML from the database so that the same HDF5 files could be seen as NetCDF-4 files and therefore useful to load in other software.\n\nAll the custom local weather office stuff will be done in Python.\n\nSome version of OpenGL is required.\n\n\nCONDUIT\n58GB/day\n\nnew WOC in Boulder\nCONDUIT is now immune to outages in Silver Spring\n\nCONDUIT:\nNDFD now in CONDUIT.\nAnything with a WMO header can now be put in CONDUIT (upon request).\n\nGFS now 0-192 hours\nRUC extension to 18 hours, all cycles\nnew 2.5 km CONUS RTMA (RealTime Mesoscale Analysis)\n\nFY11 Model Implementations (could be added to CONDUIT)\nFYQ1\nNew grids for hurricane wave model\nMajor upgrade to Climate Forecast System (CFS)\n\nFYQ2\nHIRES window upgrade\n\nFYQ3\nNAM/NEMS/NMMB subgrids within NAM\n\nFY12Q1 \nGFS upgrade with 0.25 degree output\n\nCould do now:\nRTOFS\nWW3 output\nHurricane-driven, GFS multi-grid wave model\n\nBrendan:\nUnidata software on iPhone, Droid \nyum install IDV\n\nPower of the CDM in the most common Scientific Analysis and Visualization Environments: Matlab, Python, R and IDL\n\nWork on toolboxes that have the functionality in Python. \n\nGrADS, GEMPAK, Ferret, \n\n
Action item for Signell: get a Policy Committee member on the THREDDS Steering Team\n\nWRF training classes: they asked users what they wanted training on for visualizing WRF, and the answers were NCL, VAPOR and IDV. \n\n"VAPOR is the Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers. VAPOR provides an interactive 3D visualization environment that runs on most UNIX and Windows systems equipped with modern 3D graphics cards."\n\nTom: \nHow are calculations done in IDV (Theta-E?)? Don has said that there are no guarantees on postprocessing -- so we need to check it out. \n\nNETCDF: Ed Hartnett leaving is going to put a damper on NetCDF development\nRAMADDA: Julien: Unidata depends on RAMADDA because of its tight integration with IDV\n\nBecky Cosgrove:\n4km CONUS (0-60 hours)\nNFCens NCEP/FNMOC Combined Wave Ensemble\nSignificant wave height\nGlobal RTOFS: 1/12-degree HYCOM with 3-hourly forcing to 6 days. \nNCEP has 1982-present (9-month reforecasts every 6 hours for every 5th day - looking for a home)\nHRRR - NCO is in discussion with GSD (Global Systems Division) (4km ARW 6 hours from RUC \n\nIDD/LDM: 6000 megabytes/hour at peak (3GB/hour average)\n\nSend message about NCTOOLBOX to usercomm folks (Kevin, \n\nOpenDAP 2.45 TB\nWMS 0.27 TB\nADDE on Motherlode 14.4 TB\nADDE GOES-East 12.0 TB\nADDE GOES-West 9.6TB\nADDE GOES-South 6.0TB\nTotal 48.69TB\n\nGlobal output is the most accessed data from Motherlode, perhaps because people who in the past used IDD/LDM to access the global model can now use Motherlode to clip out just the part they need.\n\nSEES (NSF project for sustainability education, research)
In addition to dropping a new war file, we had to remove the old plugins from\n/usgs/data0/ramadda/plugins\nand drop the new\ in that folder, then restart ramadda.\n\nWe didn't remember the <ramadda_home> directory, but we found this by looking at the running ramadda process on\n{{{\n$ ps aux | grep ramadda\n\nusgs 12572 ... -Djava.awt.headless=true -Dramadda_home=/usgs/data0/ramadda ...\n}}}
\nGo to http://localhost:8080/thredds and note the existing version number (including date).\n\nGo to the Tomcat webapps directory c:\sprograms\stomcat6\swebapps and move thredds.war to thredds.war.bak\nDownload the latest thredds.war file to the webapps directory.\n\nGo to "Administrative Tools=>Services", click on "Apache Tomcat" and then click on "Stop the service" (if it is running). Leave this window open. Delete the c:\sprograms\stomcat6\swebapps\sthredds directory. Click on "Start the service".\n\nGo to <http://localhost:8080/thredds> and make sure the version number has changed.\n
> Regarding NCTOOLBOX, is making the zipfile available via NCTOOLBOX\n> something only you can do?\n\nNope.\n>\n> Do you do that by grabbing Alex's zipfile from github and uploading it?\n\nNope.\n\ncd nctoolbox\nhg pull\nhg update\n\\n\n\nA new date-stamped zip file will be created in the nctoolbox directory.\n>\n> If we are going to do that I noticed that the README file needs updating.\n>\nGo ahead and update the README and push your change.
From the Windows command prompt (run "cmd"), type:\n{{{\ncd c:\sPython27_EPD\sScripts\s\nenpkg epd\nenpkg ipython 0.10.2 # downgrading from ipython-0.12-1.egg to 0.10.2-2.egg\nenpkg spyder # Spyder 2.1\nenpkg PyQt # required by Spyder 2.1\nenpkg rope # used by Spyder for code introspection\n}}}
1. Upgrade to the latest ipython from EPD (which will install any other dependencies, like kernmagic):\n{{{\nenpkg ipython\n}}}\n2. Get and build the latest ipython from github:\n{{{\ngit clone\ncd ipython\npython build\npython -c "import setuptools;execfile('')" bdist_egg\n}}}\n3. Remove the old ipython with "enpkg" and install the new ipython with "egginst". (Note that "egginst" doesn't know anything about dependencies, so it really isn't "managed" in the sense that other EPD packages are, but using "egginst" means you can use {{{enpkg --remove}}} to remove non-EPD packages if they become supported by EPD.)\n{{{\nenpkg --remove ipython \negginst dist/\n}}}\n\nThis didn't work on Windows. It appeared to all work fine, but after installing, there was no ipython when I did {{{enpkg -l}}}, and when I did {{{enpkg --remove ipython}}} it told me none was installed. So then I tried reinstalling the EPD ipython using {{{enpkg ipython}}}, but that also had problems. It turned out I needed to update kernmagic, which provides the extra ipython %magic commands available in the EPD ipython:\n{{{\nhg clone\ncd kernmagic\npython build\npython -c "import setuptools;execfile('')" bdist_egg\negginst dist/kernmagic-0.0.0-py2.7.egg\n}}}
Here's how I upgraded my Matlab from 7.2 to 7.5 (r2006a to r2007b):\n\nMake a new directory:\n{{{\nmkdir /usr/local/matlab75\ncd /usr/local/matlab75\n}}}\nCopy over my old license file:\n{{{\ncp /usr/local/matlab7.2/etc/license.dat /usr/local/matlab75 \n}}}\nThen pop the DVD in and type:\n{{{\n/media/cdrom/install &\n}}}\nI changed the MATLAB environment variable in my .bashrc to point to /usr/local/matlab75\n\nThen copied over my startup.m file:\n{{{\ncd /usr/local/matlab75/toolbox/local\ncp /usr/local/matlab7.2/toolbox/local/startup.m .\n}}}\nthen fired up Matlab and everything just worked!\n\nMy license file looks like:\n{{{\n# MATLAB license passcode file for use with FLEXnet.\n# Get FLEX_LM license from Server\nDAEMON MLM /usr/local/matlab75/etc/lm_matlab\nSERVER 0030482298cc 27000\nUSE_SERVER\n}}}
We got the latest version of repository.war and plugins (ramaddacollab.jar) from the ramadda site:\n\n\nUnfortunately, you can't use wget because of the way it's set up with password, so I downloaded both locally to my laptop, and then uploaded to my share site on coast-enviro. \n{{{\n\n\n}}}\nSo to upgrade the ramadda dev site, we then login to gam and become the user usgs-dev:\n{{{\nsudo -i\nsu usgs-dev\ncd /usgs/data0/ramadda-dev/plugins\nwget\ncd /usr/local/usgs-dev/tomcat-ramadda-dev/webapps\n../bin/\nmv repository-dev.war repository-dev.bak\nwget\nmv repository.war repository-dev.war\nrm -rf repository-dev\n../bin/\n}}}\n\nSame process for the production site, but done as user "usgs":\n{{{\nsudo -i\nsu usgs\ncd /usgs/data0/ramadda/plugins\nwget\ncd /usr/local/usgs-dev/tomcat-ramadda/webapps\n../bin/\nmv repository.war repository.bak\nwget\nrm -rf repository\n../bin/\n}}}\n
Upgraded to the latest version of TDS to fix the "black tile" problem with WMS, caused by a NetCDF-Java issue reading certain subsets of NetCDF4 files.\n{{{\ncd /usr/local/tomcat\nwget\n}}}\nFound that tomcat6 was running, but tomcat7 was still setting environment variables somewhere. Changed all files in /usr/local/tomcat (tomcat 6) to be owned by tomcat:\n{{{\nsudo chown -R tomcat:tomcat /usr/local/tomcat\n}}}\nthen noticed that the params for tomcat were not being set, so set those:\n{{{\n[root@atmos init.d]# more /usr/local/tomcat/bin/\n#!/bin/bash\nexport CATALINA_BASE="/usr/local/tomcat"\nexport CATALINA_HOME="/usr/local/tomcat"\nexport JRE_HOME="/usr/java/default"\nexport CLASSPATH="/usr/local/tomcat/bin/bootstrap.jar"\nexport JAVA_OPTS="-server -Djava.awt.headless=true -Xms512m -Xmx2048m -XX:MaxPermSize=180m"\n}}}\n\nThen, to get rid of the need to supply port 8080 in the URL, set up proxyPass by doing\n"sudo vi /etc/httpd/conf/httpd.conf" and adding this section:\n{{{\n#\n# Proxy Server directives. Uncomment the following lines to\n# enable the proxy server:\n#\n<IfModule mod_proxy.c>\nProxyRequests Off\nProxyPreserveHost On\n\n<Location /thredds>\nProxyPass\nProxyPassReverse\n</Location>\n\n<Proxy>\nAllowOverride None\nOrder allow,deny\nAllow from all\n</Proxy>\n\n</IfModule>\n}}}\nThen did \n{{{\n/etc/init.d/httpd restart\n/etc/init.d/tomcat restart\n}}}\nand it seems like\n\nis working fine!\n\n
The full tutorial for TDS 4.2 is at:\n\n\nInstalling TDS 4.2 requires a bit more than the usual upgrade, which is just:\n\n1. Download. Get the latest thredds.war file from the Thredds Data\nServer (TDS) page\n\nFor example:\n{{{\nwget\n}}}\n2. Undeploy: Go to your Tomcat Manager:\n{{{\nhttp://your_site:8080/manager/html\n}}}\nlook for "thredds" and click "undeploy".\n\n3. Deploy: Using the Tomcat Manager again, go to "WAR file to\ndeploy", locate the thredds.war you downloaded and click "deploy".\n\nIn the case of TDS 4.2, there are some additional options in\nthe main configuration file, and a second configuration file for the\nWMS server options, which I think you will want.\nSo in this case I would recommend renaming your existing configuration directory\n<tomcat>/content/thredds\nto, say\n<tomcat>/content/thredds_old\nand then following steps 1,2,3 above.\n\nThis way you will get template configuration files with all the new\noptions (most of them commented out), and then you can copy your\ncatalog files back into that directory and make your customizations\nthat you had in the old threddsConfig.xml file.\n\nOne cool thing about TDS 4.2 is the upgrade to the WMS service -- the dataset page will pop up with a link to godiva2 if you have WMS enabled for that dataset. To set the WMS defaults for certain standard names or datasets, like color range for specific variables, you will need to modify the wmsConfig.xml file. A sample modified file from UCSD's HFRNET is at\n\n\nFull instructions for upgrading to TDS 4.2 are on the TDS page\n(\n
Following these instructions:\n\n\nGot the Java SE Development Kit 7u11,\njdk-7u11-linux-x64.tar.gz\nfrom:\n\nthen did\n{{{\ntar xvf jdk-7u11-linux-x64.tar.gz\nsudo update-alternatives --config java\n}}}\nI already had a java 7 directory, so:\n{{{\nsudo rm -rf /usr/lib/jvm/jdk1.7.0\nsudo mkdir /usr/lib/jvm/jdk1.7.0\nsudo mv jdk1.7.0_11/* /usr/lib/jvm/jdk1.7.0/\nln -s /usr/lib/jvm/jdk1.7.0/jre/lib/amd64/ ~/.mozilla/plugins/\n}}}
Upgrading to Ubuntu 11.04\n\nuname -a\n\nsudo apt-get install linux-headers-2.6.38-8-generic\nsudo dpkg-reconfigure nvidia-current\n\n\nWent to the NVIDIA driver page, specified GeForce FX 1300, 32-bit Linux, and got this package:\n\nsudo sh NVIDIA-\n\nWhen I ran this, the package needed to build a new kernel module, but the source code to do that was not present. So I then had to do:\n\nsudo apt-get install linux-source-2.6.38\n\n(after first determining that I was running 2.6.38 by doing\nuname -a \nand getting back\nLinux IGSAGIEGWSRSIL00 2.6.38-8-generic)\n\nSo then I tried again:\n
function (responseText) {\n if (responseText.substring(0,1) != '0') {\n displayMessage(config.macros.upload.messages.rssFileNotUploaded.format([rssPath]));\n }\n else {\n var toFileUrl = config.macros.upload.toFileUrl(storeUrl, rssPath, uploadDir, username);\n displayMessage(config.macros.upload.messages.rssFileUploaded.format(\n [toFileUrl]), toFileUrl);\n }\n // for debugging store.php uncomment last line\n //DEBUG alert(responseText);\n });\n }\n return;\n};\n\nconfig.macros.upload.uploadChanges = function(storeUrl, toFilename, uploadDir, backupDir, \n username, password) {\n var original;\n if (document.location.toString().substr(0,4) == "http") {\n original =, toFilename, uploadDir, backupDir, username, password);\n return;\n }\n else {\n // standard way : Local file\n \n original = loadFile(getLocalPath(document.location.toString()));\n if(window.Components) {\n // it's a mozilla browser\n try {\n"UniversalXPConnect");\n var converter = Components.classes[";1"]\n .createInstance(Components.interfaces.nsIScriptableUnicodeConverter);\n converter.charset = "UTF-8";\n original = converter.ConvertToUnicode(original);\n }\n catch(e) {\n }\n }\n }\n //DEBUG alert(original);\n this.uploadChangesFrom(original, storeUrl, toFilename, uploadDir, backupDir, \n username, password);\n};\n\nconfig.macros.upload.uploadChangesFrom = function(original, storeUrl, toFilename, uploadDir, backupDir, \n username, password) {\n var startSaveArea = '<div id="' + 'storeArea">'; // Split up into two so that indexOf() of this source doesn't find it\n var endSaveArea = '</d' + 'iv>';\n // Locate the storeArea div's\n var posOpeningDiv = original.indexOf(startSaveArea);\n var posClosingDiv = original.lastIndexOf(endSaveArea);\n if((posOpeningDiv == -1) || (posClosingDiv == -1))\n {\n alert(config.messages.invalidFileError.format([document.location.toString()]));\n return;\n }\n var revised = original.substr(0,posOpeningDiv + startSaveArea.length) + \n allTiddlersAsHtml() + "\sn\st\st" +\n 
original.substr(posClosingDiv);\n var newSiteTitle;\n if(version.major < 2){\n newSiteTitle = (getElementText("siteTitle") + " - " + getElementText("siteSubtitle")).htmlEncode();\n } else {\n newSiteTitle = (wikifyPlain ("SiteTitle") + " - " + wikifyPlain ("SiteSubtitle")).htmlEncode();\n }\n\n revised = revised.replaceChunk("<title"+">","</title"+">"," " + newSiteTitle + " ");\n revised = revised.replaceChunk("<!--PRE-HEAD-START--"+">","<!--PRE-HEAD-END--"+">","\sn" + store.getTiddlerText("MarkupPreHead","") + "\sn");\n revised = revised.replaceChunk("<!--POST-HEAD-START--"+">","<!--POST-HEAD-END--"+">","\sn" + store.getTiddlerText("MarkupPostHead","") + "\sn");\n revised = revised.replaceChunk("<!--PRE-BODY-START--"+">","<!--PRE-BODY-END--"+">","\sn" + store.getTiddlerText("MarkupPreBody","") + "\sn");\n revised = revised.replaceChunk("<!--POST-BODY-START--"+">","<!--POST-BODY-END--"+">","\sn" + store.getTiddlerText("MarkupPostBody","") + "\sn");\n\n var response = this.uploadContent(revised, storeUrl, toFilename, uploadDir, backupDir, \n username, password, function (responseText) {\n if (responseText.substring(0,1) != '0') {\n alert(responseText);\n displayMessage(config.macros.upload.messages.fileNotUploaded.format([getLocalPath(document.location.toString())]));\n }\n else {\n if (uploadDir !== '') {\n toFilename = uploadDir + "/" + config.macros.upload.basename(toFilename);\n } else {\n toFilename = config.macros.upload.basename(toFilename);\n }\n var toFileUrl = config.macros.upload.toFileUrl(storeUrl, toFilename, uploadDir, username);\n if (responseText.indexOf("destfile:") > 0) {\n var destfile = responseText.substring(responseText.indexOf("destfile:")+9, \n responseText.indexOf("\sn", responseText.indexOf("destfile:")));\n toFileUrl = config.macros.upload.toRootUrl(storeUrl, username) + '/' + destfile;\n }\n else {\n toFileUrl = config.macros.upload.toFileUrl(storeUrl, toFilename, uploadDir, username);\n }\n 
displayMessage(config.macros.upload.messages.mainFileUploaded.format(\n [toFileUrl]), toFileUrl);\n if (backupDir && responseText.indexOf("backupfile:") > 0) {\n var backupFile = responseText.substring(responseText.indexOf("backupfile:")+11, \n responseText.indexOf("\sn", responseText.indexOf("backupfile:")));\n var toBackupUrl = config.macros.upload.toRootUrl(storeUrl, username) + '/' + backupFile;\n displayMessage(config.macros.upload.messages.backupFileStored.format(\n [toBackupUrl]), toBackupUrl);\n }\n var log = new config.macros.upload.UploadLog();\n log.endUpload();\n store.setDirty(false);\n // erase local lock\n if (window.BidiX && BidiX.GroupAuthoring && BidiX.GroupAuthoring.lock) {\n BidiX.GroupAuthoring.lock.eraseLock();\n // change mtime with new mtime after upload\n var mtime = responseText.substr(responseText.indexOf("mtime:")+6);\n BidiX.GroupAuthoring.lock.mtime = mtime;\n }\n \n \n }\n // for debugging store.php uncomment last line\n //DEBUG alert(responseText);\n }\n );\n};\n\nconfig.macros.upload.uploadContent = function(content, storeUrl, toFilename, uploadDir, backupDir, \n username, password, callbackFn) {\n var boundary = "---------------------------"+"AaB03x"; \n var request;\n try {\n request = new XMLHttpRequest();\n } \n catch (e) { \n request = new ActiveXObject("Msxml2.XMLHTTP"); \n }\n if (window.netscape){\n try {\n if (document.location.toString().substr(0,4) != "http") {\n'UniversalBrowserRead');}\n }\n catch (e) {}\n } \n //DEBUG alert("user["+config.options.txtUploadUserName+"] password[" + config.options.pasUploadPassword + "]");\n // compose headers data\n var sheader = "";\n sheader += "--" + boundary + "\sr\snContent-disposition: form-data; name=\s"";\n sheader += config.macros.upload.formName +"\s"\sr\sn\sr\sn";\n sheader += "backupDir="+backupDir\n +";user=" + username \n +";password=" + password\n +";uploaddir=" + uploadDir;\n // add lock attributes to sheader\n if (window.BidiX && BidiX.GroupAuthoring && 
BidiX.GroupAuthoring.lock) {\n var l = BidiX.GroupAuthoring.lock.myLock;\n sheader += ";lockuser=" + l.user\n + ";mtime=" + l.mtime\n + ";locktime=" + l.locktime;\n }\n sheader += ";;\sr\sn"; \n sheader += "\sr\sn" + "--" + boundary + "\sr\sn";\n sheader += "Content-disposition: form-data; name=\s"userfile\s"; filename=\s""+toFilename+"\s"\sr\sn";\n sheader += "Content-Type: " + config.macros.upload.contentType + "\sr\sn";\n sheader += "Content-Length: " + content.length + "\sr\sn\sr\sn";\n // compose trailer data\n var strailer = new String();\n strailer = "\sr\sn--" + boundary + "--\sr\sn";\n //strailer = "--" + boundary + "--\sr\sn";\n var data;\n data = sheader + content + strailer;\n //"POST", storeUrl, true, username, password);\n try {\n"POST", storeUrl, true); \n }\n catch(e) {\n alert(config.macros.upload.messages.crossDomain + "\snError:" +e);\n return;\n }\n request.onreadystatechange = function () {\n if (request.readyState == 4) {\n if (request.status == 200)\n callbackFn(request.responseText);\n else\n alert(config.macros.upload.messages.errorUploadingContent + "\snStatus: "+request.statusText);\n }\n };\n request.setRequestHeader("Content-Length",data.length);\n request.setRequestHeader("Content-Type","multipart/form-data; boundary="+boundary);\n request.send(data); \n};\n\n\ = function(uploadUrl, uploadToFilename, uploadDir, uploadBackupDir, \n username, password) {\n var request;\n try {\n request = new XMLHttpRequest();\n } \n catch (e) { \n request = new ActiveXObject("Msxml2.XMLHTTP"); \n }\n try {\n if (uploadUrl.substr(0,4) == "http") {\n"UniversalBrowserRead");\n }\n else {\n"UniversalXPConnect");\n }\n } catch (e) { }\n //"GET", document.location.toString(), true, username, password);\n try {\n"GET", document.location.toString(), true);\n }\n catch(e) {\n alert(config.macros.upload.messages.crossDomain + "\snError:" +e);\n return;\n }\n \n request.onreadystatechange = function () {\n if (request.readyState == 4) {\n if(request.status == 
200) {\n config.macros.upload.uploadChangesFrom(request.responseText, uploadUrl, \n uploadToFilename, uploadDir, uploadBackupDir, username, password);\n }\n else\n alert(config.macros.upload.messages.errorDownloading.format(\n [document.location.toString()]) + "\snStatus: "+request.statusText);\n }\n };\n request.send(null);\n};\n\n//}}}\n////===\n\n////+++!![Initializations]\n\n//{{{\nconfig.lib.options.init('txtUploadStoreUrl','store.php');\nconfig.lib.options.init('txtUploadFilename','');\nconfig.lib.options.init('txtUploadDir','');\nconfig.lib.options.init('txtUploadBackupDir','');\nconfig.lib.options.init('txtUploadUserName',config.options.txtUserName);\nconfig.lib.options.init('pasUploadPassword','');\nsetStylesheet(\n ".pasOptionInput {width: 11em;}\sn"+\n ".txtOptionInput.txtUploadStoreUrl {width: 25em;}\sn"+\n ".txtOptionInput.txtUploadFilename {width: 25em;}\sn"+\n ".txtOptionInput.txtUploadDir {width: 25em;}\sn"+\n ".txtOptionInput.txtUploadBackupDir {width: 25em;}\sn"+\n "",\n "UploadOptionsStyles");\nif (document.location.toString().substr(0,4) == "http") {\n config.options.chkAutoSave = false; \n saveOptionCookie('chkAutoSave');\n}\nconfig.shadowTiddlers.UploadDoc = "[[Full Documentation| ]]\sn"; \n\n//}}}\n////===\n\n////+++!![Core Hijacking]\n\n//{{{\nconfig.macros.saveChanges.label_orig_UploadPlugin = config.macros.saveChanges.label;\nconfig.macros.saveChanges.label = config.macros.upload.label.saveToDisk;\n\nconfig.macros.saveChanges.handler_orig_UploadPlugin = config.macros.saveChanges.handler;\n\nconfig.macros.saveChanges.handler = function(place)\n{\n if ((!readOnly) && (document.location.toString().substr(0,4) != "http"))\n createTiddlyButton(place,this.label,this.prompt,this.onClick,null,null,this.accessKey);\n};\n\n//}}}\n////===\n\n
Here's how to use GMT from cygwin on Windows without compiling anything!\n\nWent to the Mirone site for prebundled GMT\n\nand downloaded the 64 bit version (for my 64 bit Windows 7 machine)\n\nInstalled in the default location:\nc:\sprograms\sGMT64\nThen fired up Cygwin and set these environment variables:\n{{{ \n export GMT_SHAREDIR=c:\s\sprograms\s\smirone\s\sgmt_userdir\n export PATH=$PATH:/cygdrive/c/programs/gmt64/bin\n}}}\nand then I can use all the GMT binaries from Cygwin, for example:\n{{{\ncut -f2,3,4 DEM_NE_5.0_2005_01_01.txt | xyz2grd -: -I2.5m -R-84/-66.041667/36.041667/48\n}}}\nThe output grid can be nicely visualized in Mirone, btw:\n<html><img src=""/></html>\nThis was produced by the standalone version of Mirone, which is handy because my home machine (on which I produced the above image) does not have Matlab. You can get this version of Mirone here:\n\n
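The Windows-to-Cygwin path mapping used above (c:\programs\gmt64\bin appearing on PATH as /cygdrive/c/programs/gmt64/bin) can be sketched in Python. `to_cygdrive` is a hypothetical helper, not part of GMT or Cygwin:

```python
def to_cygdrive(win_path: str) -> str:
    """Map a Windows path like 'c:\\programs\\gmt64\\bin' to Cygwin's
    /cygdrive form. Hypothetical helper; cygpath does this for real."""
    drive, _, rest = win_path.partition(":")
    rest = rest.replace("\\", "/").lstrip("/")
    return "/cygdrive/{}/{}".format(drive.lower(), rest).rstrip("/")

print(to_cygdrive(r"c:\programs\gmt64\bin"))  # /cygdrive/c/programs/gmt64/bin
```

In a real Cygwin shell, `cygpath -u` performs the same conversion.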
I stumbled across this web services page at CO-OPS\n\nwhich contains information on how to get tidal constituent data and water level data using SOAP w/ WSDL.\n\nI had just earlier stumbled upon this Mathworks "Accessing Web Services Using Matlab SOAP functions" page:\n\nand this "Accessing Web Services That Use WSDL Documents" page:\n\nSo I was curious if I could use these tools to read some harmonic constituent data. Turns out it was pretty easy to follow the directions on this Mathworks page, and the tools just worked!\n\n!!Constituent Data\n\nFirst, I did a right mouse click on the "WSDL" link to copy the link location, and then pasted that into Matlab's createClassFromWsdl function: \n{{{\n>> createClassFromWsdl('')\n\n}}}\nThis created a directory called @HarmonicConstituentsService, with methods "getHConstituentsAndMetadata.m", "getHarmonicConstituents.m", "display.m", and "HarmonicConstituentsService.m"\nI then did \n{{{\n>> help getHarmonicConstituents \n\n>> obj = HarmonicConstituentsService\n>> stationId='8454000'; \n>> unit = 0; % 0 for meters, 1 for feet\n>> time_zone = 0; % 0 for GMT, 1 for local time\n>> data=getHarmonicConstituents(obj,'8454000',unit,time_zone);\n\n>> data\n\ndata = \n\n item: [37x1 struct]\n\n>> data.item\n\nans = \n\n37x1 struct array with fields:\n constNum\n name\n amplitude\n phase\n speed\n}}}\n\n!!Water Level Data\n\nRight clicking on 6-minute data WSDL:\n{{{\n>> createClassFromWsdl('')\nRetrieving document at ''\n\nans =\n\nWaterLevelRawSixMinService\n\n>> help WaterLevelVerifiedSixMinService\nWaterLevelVerifiedSixMinService methods:\n\ngetWLVerifiedSixMinAndMetadata - (obj,stationId,beginDate,endDate,datum,unit,timeZone)\ngetWaterLevelVerifiedSixMin - (obj,stationId,beginDate,endDate,datum,unit,timeZone)\n\n}}}\n\nSo let's try getting the water levels at the Battery for Superstorm Sandy:\n{{{\n>> obj = WaterLevelVerifiedSixMinService\n>> data = 
getWaterLevelVerifiedSixMin(obj,'8518750','20121028','20121031','MLLW','0','0');\n>> for i=1:length(data.item);\n>> z(i)=str2num(data.item(i).WL);\n>> dn(i)=datenum(data.item(i).timeStamp);\n>> end\n\n>> plot(dn,z); datetick\n}}}\n\n\n
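The constituent records returned above (amplitude, phase, speed) are enough for a rough tidal prediction. A minimal Python sketch (the wiki's examples are Matlab): it ignores the datum offset and the slow nodal corrections, and the two constituents below are made up, so it is only an illustration of the arithmetic:

```python
import math

def predict_tide(constituents, hours):
    """Sum harmonic constituents to a tidal height.

    Each constituent is (amplitude, phase_deg, speed_deg_per_hr), matching
    the amplitude/phase/speed fields of getHarmonicConstituents. Datum
    offset and nodal corrections are ignored (illustration only).
    """
    return sum(a * math.cos(math.radians(s * hours - p))
               for a, p, s in constituents)

# Two made-up constituents: an M2-like and an S2-like term.
consts = [(0.5, 0.0, 28.984), (0.1, 45.0, 30.0)]
print(round(predict_tide(consts, 0.0), 3))  # 0.571
```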
Install nctoolbox from You want the latest version of nctoolbox from the Mercurial code repository, so follow the instructions here:\n\nIn Matlab, make sure you have run "setup_nctoolbox.m" to get your Matlab path and javaclasspath set up properly.\n\nGlobal NCOM Region 5 results are available on the OceanNomads site\n\nas individual forecast datasets (one each day)\n\n\n\nor as an aggregation, which is a "best time series" created from previous forecasts up to and including the latest forecast. The URL for this aggregated dataset never changes.\n\n\n\nSo here's how to access the last 3 days of data from the NCOM region 5 model for just the region around the Taiwan Strait [\n\n{{{\n% open as \nurl='';\nnc=ncgeodataset(url);\n\n\n
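The lon/lat subsetting that the (truncated) Matlab snippet was heading toward boils down to finding the index ranges that cover a bounding box. A Python sketch: `bbox_indices` is a hypothetical helper and the coordinate arrays are made up, but the index arithmetic is the same for any regular grid:

```python
import numpy as np

def bbox_indices(lon, lat, box):
    """Return (lon_slice, lat_slice) covering box = (lon_min, lon_max,
    lat_min, lat_max) in 1-D coordinate arrays of a regular grid."""
    lon_min, lon_max, lat_min, lat_max = box
    i = np.where((lon >= lon_min) & (lon <= lon_max))[0]
    j = np.where((lat >= lat_min) & (lat <= lat_max))[0]
    return slice(i[0], i[-1] + 1), slice(j[0], j[-1] + 1)

# Made-up regular coordinates; box roughly around the Taiwan Strait.
lon = np.arange(100.0, 130.0, 0.5)
lat = np.arange(10.0, 30.0, 0.5)
isl, jsl = bbox_indices(lon, lat, (117.0, 123.0, 21.0, 26.0))
```

With real data you would then request only `var[..., jsl, isl]` from the remote dataset, which is what keeps the OPeNDAP transfer small.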
Ncdump will print out the netcdf header info and values in ascii. You might find that you can parse it easily enough. ncdump -x will put out NcML but not the data values.\n\nThe java library will put out NcML with the data values using:\n{{{\n java -classpath toolsUI.jar ucar.nc2.NCDumpW <NetCDF-3 filename> -ncml -vall\n}}}\nUnfortunately, I just discovered that previous versions are not doing this correctly, so you'll have to get the latest development release (4.0.25) from Note you should use NCDumpW not NCDump as it says on the web page.
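If plain ncdump output really is "easy enough" to parse, here is one hedged way to do it in Python: a regex over CDL text of the kind ncdump prints. `CDL_SAMPLE` is invented and real headers are more varied, so treat this as a starting point, not a full CDL parser:

```python
import re

# Made-up CDL fragment in the shape ncdump prints for a header.
CDL_SAMPLE = """\
netcdf test {
  time = UNLIMITED ; // (24 currently)
  lat = 10 ;
  lon = 20 ;
  double time(time) ;
  float temp(time, lat, lon) ;
}
"""

# Match "type name(" at the start of a line: a variable declaration.
var_pattern = re.compile(r"^\s*(?:float|double|int|short|byte|char)\s+(\w+)\(",
                         re.M)
names = var_pattern.findall(CDL_SAMPLE)
print(names)  # ['time', 'temp']
```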
1. open a 32-bit Anaconda command prompt\n2. type \n{{{\nconda create -n esri python=2.7 numpy=1.6 pandas\nactivate esri\n}}}\n3. Create a file called {{{conda.pth}}} in your ArcGIS10.1 site-packages folder. On my machine this is {{{\nC:\sPython27\sArcGIS10.1\sLib\ssite-packages\sconda.pth\n}}}\n4. In this file add one line that points to your Anaconda site-packages for the esri environment you created. On my machine this is:\n{{{\nC:\sprograms\sAnaconda\senvs\sesri\sLib\ssite-packages\n}}}\n
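The conda.pth trick works because Python processes any `*.pth` file it finds in a site-packages directory and appends each line to `sys.path`. A small self-contained demonstration, using a throwaway directory instead of the real ArcGIS folder:

```python
import os
import site
import sys
import tempfile

# Demonstrate .pth processing: a conda.pth file in a site directory adds
# the paths listed inside it to sys.path. Uses a temp dir, not ArcGIS.
with tempfile.TemporaryDirectory() as d:
    extra = os.path.join(d, "extra_site_packages")
    os.mkdir(extra)
    with open(os.path.join(d, "conda.pth"), "w") as f:
        f.write(extra + "\n")
    site.addsitedir(d)   # processes d, including any .pth files in it
    found = extra in sys.path
print(found)  # True
```

ArcGIS's Python does this automatically at startup for its own site-packages folder, which is why dropping conda.pth there is enough.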
Remote image urls:\n\nadded network place of\nmade a folder\nappears externally as "ftpext" instead of "ftpint":\n{{{\n\n}}}\nCurtis notes on enterprises FTP:\n{{{\n\n}}}
To strip a particular variable out, first check what variables are available by doing \n{{{\nwgrib2 file.grib\n}}}\nThen you can select the variable (like sensible heat net flux) like this:\n{{{\n wgrib2 ofs_atl.t00z.F024.3d.grb.grib2 -s | grep ":SHTFL:surface" | wgrib2 -i ofs_atl.t00z.F024.3d.grb.grib2 -grib shtfl.grib2\n}}}\n
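The pipeline works because `wgrib2 -s` emits one colon-delimited inventory line per GRIB record, `grep` keeps only the matching lines, and `wgrib2 -i` reads those record numbers back. The filtering stage looks like this in Python (the inventory lines below are made up):

```python
# Made-up wgrib2 -s inventory lines: "recnum:offset:date:VAR:level:...".
inventory = [
    "1:0:d=2012102800:UGRD:10 m above ground:anl:",
    "2:51234:d=2012102800:SHTFL:surface:anl:",
    "3:98765:d=2012102800:TMP:surface:anl:",
]

# Equivalent of grep ":SHTFL:surface": keep lines containing the field.
wanted = [line for line in inventory if ":SHTFL:surface" in line]
print(wanted)  # ['2:51234:d=2012102800:SHTFL:surface:anl:']
```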
{{{\n gdaltransform -s_srs EPSG:4326 -t_srs "+proj=lcc +lat_1=30n +lat_2=60n +lon_0=80e +a=6371229.0 +es=0.0" eurostrat.ll\n}}}
*Grabbed BSB version of NOAA Raster Chart 13229 from\n*Brought it up in OpenEV and discovered that "white" was index=1. Want to make that transparent. So converting to UTM and making white transparent is two steps:\n{{{\ncd C:\sRPS\sBSB_ROOT\s13229\ngdalwarp -rcs -t_srs "+proj=utm +zone=19 +datum=WGS84" 13229_4.kap noaa_chart.tif\ngdal_translate -a_nodata 1 noaa_chart.tif noaa_chart_trans.tif\n}}}\n\n
On testbedapps, I followed these instructions for getting the virtual frame buffer going:\n\n\nAs root, I had to first install Xvfb, and then set up display 1:\n{{{\nyum install xorg-x11-server-Xvfb\nXvfb :1 -screen 0 1280x1024x24 -auth localhost\n}}}\nThen as user tomcat, I started the notebook, specifying display 1:\n{{{\nDISPLAY=":1" nohup ipython notebook -pylab inline &\n}}}\nProof that it works:\n{{{\n\n}}}
WAF of ISO metadata on geoport is at\n/var/www/waf/data\n
Stace:\nDrupal 6 is cool.\n\nAmber York: \n, working on passing metadata to OBIS. \n\nAlex: Classifiers. OpenCalais can read a newspaper article and figure out that it is about Obama, that he has a dog, and where he went. Right now, lots of command-line Java stuff. Trying to use MOA.\n\nRyan: lightning talk\n
type "cmd", right click to run as admin, then type "net start wlansvc"\n\n\n,44,30,54&WIDTH=256&HEIGHT=256\n
For the 360x160x10 homogeneous Island Wake Problem, the Rutgers version takes significantly longer to build and to run than the UCLA version. This run spends nearly 80% of the time in the 2D Kernel, and the UCLA code runs 80% faster than the Rutgers code, thus supporting Sasha's claim that the 2D Kernel in UCLA is about 2 times faster than the 2D Kernel in Rutgers. Of course, this run has only a constant background mixing, just the 3rd order advection and no sediment. In our ADRIATIC runs with 6 sediment classes, MPDATA took 66% of the run time, GLS Mixing took 8% and 2D Kernel took 2-12% of the run.\n\nSasha had 57601 steps, but just for timing, it seems 240 is sufficient.\n \nThe log-based timing for 240 steps (single Xeon cpu):\n| !Model | !Style | !Advection | !Subdomain | !FFLAGS | !2D Kernel (s) | !Total (s) | !Fraction in 2D (%) | !Relative Time | !Build Time |\n| UCLA (run003) | OceanO | 3rd order | 3x40 | -O3 -ip -pc80 -tpp7 -axN -xN -auto -stack_temps -fpp2 -openmp | unknown | 355.4| unknown | 1.00 (base) | 59.0|\n| UCLA (run003) | OceanS | 3rd order | 3x40 | -O3 -ip -pc80 -tpp7 -axN -xN -auto -stack_temps | unknown | 305.7| unknown | 0.85| 77.0|\n| Rutgers 1.9 | OceanO | 3rd order | 2x32 | ifort 9.1, -fpp2 -openmp -pc80 -tpp7 -axW -xW -w95 -align dcommon -auto -stack_temps -c -O3 -IPF_fma -ip| unknown | 418.0| | 1.17| 106.4|\n| Rutgers 1.9 | OceanO | 3rd order | 1x1 | ifort 9.1, -fpp2 -openmp -pc80 -tpp7 -axW -xW -w95 -align dcommon -auto -stack_temps -c -O3 -IPF_fma -ip| unknown | 685.7| | 1.92| 106.4|\n| Rutgers 1.9 | OceanO | 3rd order | 3x40 | ifort 9.1, -fpp2 -openmp -pc80 -tpp7 -axW -xW -w95 -align dcommon -auto -stack_temps -c -O3 -IPF_fma -ip| unknown | 445.7| | 1.28| 106.4|\n| Rutgers 1.9 | OceanO | 3rd order | 2x16 | ifort 9.1, -fpp2 -openmp -pc80 -tpp7 -axW -xW -w95 -align dcommon -auto -stack_temps -c -O3 -IPF_fma -ip| unknown | 431.3| | 1.21| 106.4|\n| Rutgers 3.0 (run009) | OceanO | 3rd order | 3x40 | ifort v8.0, -O3 -ip -pc80 -tpp7 -axN 
-xN -auto -stack_temps| 530.5| 651.9| 79.8| 1.83| |\n| Rutgers 3.0 (run009) | OceanS | 3rd order | 3x40 | ifort v8.0 -O3 -ip -pc80 -tpp7 -axN -xN -auto -stack_temps| 482.6| 589.9| 80.0| 1.65| 446 |\n| Rutgers 3.0 w/o 2D advection (run010)| OceanS | 3rd order | 3x40 | -O3 -ip -pc80 -tpp7 -axN -xN -auto -stack_temps| 415.7| 536.8| 75.8| 1.51| |\n| Rutgers 3.0 w/o 2D advection (run010)| OceanS | 3rd order | 3x40 | -O2 | 478.1| 617.2| 75.9| 1.73| 64.1|\n| Delft3D 1 layer (3 min time steps) | | 3rd order | 1x1 | | unknown | 126.1| | | |\n| Delft3D 1 layer (4.5s time steps) | | 3rd order | 1x1 | | unknown | 4050.0| | 11.39| |\n| Delft3D 10 layer (4.5s time steps) | | 3rd order | 1x1 | | unknown | 34200.0| | 96.23| |
Thursday, July 21: heavy fog all day\nTuesday, Sep 27: fog at 8:21 am ET, 0.2 mile visibility
This document is a ~TiddlyWiki from A ~TiddlyWiki is an electronic notebook that is great for managing todo lists, personal information, and all sorts of things.\n\n@@font-weight:bold;font-size:1.3em;color:#444; //What now?// &nbsp;&nbsp;@@ Before you can save any changes, you need to enter your password in the form below. Then configure privacy and other site settings at your [[control panel|]] (your control panel username is //rsignell2//).\n<<tiddler tiddlyspotControls>>\n@@font-weight:bold;font-size:1.3em;color:#444; //Working online// &nbsp;&nbsp;@@ You can edit this ~TiddlyWiki right now, and save your changes using the "save to web" button in the column on the right.\n\n@@font-weight:bold;font-size:1.3em;color:#444; //Working offline// &nbsp;&nbsp;@@ A fully functioning copy of this ~TiddlyWiki can be saved onto your hard drive or USB stick. You can make changes and save them locally without being connected to the Internet. When you're ready to sync up again, just click "upload" and your ~TiddlyWiki will be saved back to\n\n@@font-weight:bold;font-size:1.3em;color:#444; //Help!// &nbsp;&nbsp;@@ Find out more about ~TiddlyWiki at [[|]]. Also visit [[TiddlyWiki Guides|]] for documentation on learning and using ~TiddlyWiki. New users are especially welcome on the [[TiddlyWiki mailing list|]], which is an excellent place to ask questions and get help. If you have a tiddlyspot related problem email [[tiddlyspot support|]].\n\n@@font-weight:bold;font-size:1.3em;color:#444; //Enjoy :)// &nbsp;&nbsp;@@ We hope you like using your site. Please email [[|]] with any comments or suggestions.
Trying to find out where to put the robots.txt file so that it shows up at\n\n\nLook at the "httpd.conf" file in /usr/local/apache2/conf and see what the "DocumentRoot" is set to\n{{{\n grep DocumentRoot /usr/local/apache2/conf/httpd.conf\n}}}
*Zachary Taylor - 12th U.S. President\n*William Howard Taft - 27th U.S. President\n*Cokie Roberts \n*Bing Crosby \n*Katherine Hepburn \n*Louisa May Alcott\n*Ralph Waldo Emerson\n*Margaret Bingham\n*Elizabeth Morris\n*Rich Signell\n*Katherine Signell\n*Julia Signell\n*Alex Signell
Grrr....\n\nDownloaded WinMorph 3.01 and spent some time indicating features on Middle Ground that I wanted morphed from our Sep 2006 survey to Nov 2006 survey. But the resulting movie was all black! A quick google turned up the fact that 32 bit TIFF (or any other 32 bit format, for that matter) will produce a black movie if the alpha channel is blank. So the simple thing is to convert the 32 bit TIFF to 24 bit TIFF. But how to do that? I couldn't figure out how to do it with IrfanView, but googling again turned up that I could use ImageMagick (or more correctly the Mogrify tool from ImageMagick) thusly:\n{{{\nmogrify +matte my_tiff_image.tif\n}}}\nA quick gdalinfo confirmed that my 32 bit tiff became a 24 bit tiff by this process, and then WinMorph worked fine.\n
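What `mogrify +matte` is doing is stripping the alpha channel, so each pixel goes from 32-bit RGBA to 24-bit RGB. A toy Python sketch of that operation on raw pixel tuples (no real TIFF involved):

```python
def drop_alpha(rgba_pixels):
    """Discard the alpha component of each (r, g, b, a) pixel tuple,
    i.e. the 32-bit-to-24-bit conversion mogrify +matte performs."""
    return [(r, g, b) for (r, g, b, a) in rgba_pixels]

# A fully transparent white pixel and an opaque dark pixel (made up).
pixels = [(255, 255, 255, 0), (10, 20, 30, 255)]
print(drop_alpha(pixels))  # [(255, 255, 255), (10, 20, 30)]
```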
[img[Rich at Work Picture|]]
Grabbed some small test files:\n{{{\nrsignell@pikmin:~> ncks -O -x -v T,P,PB,QVAPOR,QCLOUD,QRAIN,U,V,W,PH,PHB,TSLB,SMOIS,SH2O\nrsignell@pikmin:~> ncks -O -x -v T,P,PB,QVAPOR,QCLOUD,QRAIN,U,V,W,PH,PHB,TSLB,SMOIS,SH2O\nrsignell@pikmin:~> ncks -O -x -v T,P,PB,QVAPOR,QCLOUD,QRAIN,U,V,W,PH,PHB,TSLB,SMOIS,SH2O\n}}}
We wanted to get only the "tar" files found in a bunch of subdirectories on the gsod ftp site, so we did\n{{{\n wget -r -A.tar\n}}}\nwhich works on gam and on laptop (wget 1.11.4, circa 2008), didn't work on blackburn (wget 1.10.2, circa 2005)\n\nRetrieved 2.9GB of data in 45 minutes.\n{{{\nrsignell@gam:~/gsod/2010$ ls | wc -l\n10457 \n}}}\nThere are 10,457 files in 2010!\nIf we stored all 12 variables in 10,457 files from
To list all packages with kernels:\n{{{\nrpm -qa | grep kernel\n}}}\nTo remove a specific kernel package:\n{{{\nrpm -e kernel-smp-2.6.9-34.EL\n}}}
To create a custom Wakari environment with just pandas, matplotlib and netcdf4 (and their dependencies), type:\n\n* From a terminal, create a new Wakari environment called ioos_env\n{{{\nconda create -n ioos_env dateutil=1.5 pandas matplotlib netcdf4 ipython shapely cython pip pytz\n}}}\n\n\n* If your env is called ioos_env, then do the following.\n{{{\n export PATH=/opt/anaconda/envs/ioos_env/bin:/opt/anaconda/bin:$PATH\n export CONDA_DEFAULT_ENV=/opt/anaconda/envs/ioos_env\n}}}\n\nThe source activate command normally sets these two env vars for you.\n\n* Log out, log back in. Start a new terminal using the ioos_env environment and then do:\n{{{\nconda list\n}}}\n\n* If you want to install packages via pip, you must make sure that you include pip and pytz in your custom env\n{{{\nconda install --prefix=/opt/anaconda/envs/ows pytz\nconda install --prefix=/opt/anaconda/envs/ows pip\n}}}\nThen you can do\n{{{\npip install lxml \n}}}\n\n* to install iris, we need to install scipy, cython\n\n* To remove an environment from wakari:\n\nFrom a shell, you can use the\n{{{\nll /opt/anaconda/envs\n}}}\nto list your environments.\nThen do\n{{{\n$ rm -rf /opt/anaconda/envs/ioos_env\n}}}\nto delete the environment, in this case I'm deleting the ioos_env environment.\n\n* to remove bundles from wakari, just use the terminal to remove the specific bundle subdirectory from the bundles directory\n{{{\n$ ls ~/bundles\n$ rm -rf ~/bundles/ioos_test\n}}}\n\n\n \n \nLibraries have been installed in:\n /user_home/w_rsignell/proj4_static/lib\n \nIf you ever happen to want to link against installed libraries\nin a given directory, LIBDIR, you must either use libtool, and\nspecify the full pathname of the library, or use the `-LLIBDIR'\nflag during linking and do at least one of the following:\n - add LIBDIR to the `LD_LIBRARY_PATH' environment variable\n during execution\n - add LIBDIR to the `LD_RUN_PATH' environment variable\n during linking\n - use the `-Wl,-rpath -Wl,LIBDIR' linker flag\n - have 
your system administrator add LIBDIR to `/etc/'\n
Just a few notes here about NOAA's GEODAS site\n\n\n\nwhere you can download old charts as well as hi-res digital bathymetry and sidescan data.\n\nIf you click on "Get NOS Hydro Data" for example, you reach an\ninteractive map where you can select what type of data you are\ninterested in and then zoom in to see what's available. As an\nexample, I picked the layer for coastal multibeam data, zoomed into\nWoods Hole, saw a survey region that looked like what I wanted, then\nclicked on the "i" icon (for information) and was led to this page\nwhich shows all the products available:\n\n\n\nAs you can see, you can get the report from the survey as a pdf,\nimages, the bathy data as YXZ ASCII files or Fledermaus/Iview3D files\nfor instant 3D visualization. I recommend downloading the Iview3D\nfile first, installing the free Iview3D viewer, and taking a look.\nYou can see all sorts of cool features of the sea floor in The Hole -\nat 1/2 m resolution!\n\nWhat I do is convert these big ASCII files to NetCDF and 32 bit\nGeoTIFF images. GeoTIFF is just a tiff file that has special tags\nthat contain the georeferencing information (lon/lat extent, grid\nspacing, projection, datum, etc) and can be read by most mapping\nsoftware. It's an image format, but if you store as 32 bits, you\ncan then interpret the pixel values as real numbers, thus it functions as a data format as well.\n\nI use the "xyz2grd" function from the GMT (generic mapping tools) to\nconvert the xyz grid to a NetCDF grid. I then use the "Mirone"\nprogram (can work as stand-alone or with Matlab) to crop, clip,\nhill-shade, convert to other projections, output as GeoTIFF or Iview3D and more. You can get both the pre-compiled GMT tools for PC and Mirone at:\n\n\n\n(see for some\nof the things Mirone can do. I've been using it for several years,\nand though the interface and doc could use some work, it's got\nawesome functionality!)\n
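The xyz2grd step amounts to sorting the (x, y, z) triplets into row-major order and reshaping z onto a 2-D grid, assuming the XYZ file is complete and regularly spaced (xyz2grd itself also handles gaps and registration options). A Python sketch with made-up values:

```python
import numpy as np

# Made-up complete, regularly spaced XYZ triplets (2 rows x 3 columns).
xyz = np.array([
    [-70.0, 41.0, 5.0], [-69.5, 41.0, 6.0], [-69.0, 41.0, 7.0],
    [-70.0, 41.5, 2.0], [-69.5, 41.5, 3.0], [-69.0, 41.5, 4.0],
])
x = np.unique(xyz[:, 0])
y = np.unique(xyz[:, 1])
order = np.lexsort((xyz[:, 0], xyz[:, 1]))    # sort by y first, then x
grid = xyz[order, 2].reshape(len(y), len(x))  # z on an (ny, nx) grid
print(grid.shape)  # (2, 3)
```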
To get curvilinear grids going with ncWMS, we first make a lookup table using "LUT_Creator" from Greg Smith. This is a fortran code that reads a grid and mask from a netcdf file using variable names found in the input file "namelist".\n\nThis stuff is on\n\nI modified "namelist" to work with one of Ruoying He's GOMTOX red tide simulations that uses ROMS:\n\nRunning "./LUT_Creator" produced these three files:\n{{{\nLUT_i_361_241.dat LUT_j_361_241.dat LUT_ok_361_241.dat\n}}}\n
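A lookup table of the kind LUT_Creator writes can be sketched as a brute-force nearest-neighbor search: for each cell of the regular target grid, precompute the (j, i) index of the closest curvilinear model point, so the map server never searches at render time. This toy version (made-up 3x3 grid, O(N*M) loop) is only illustrative of the idea, not of Greg Smith's Fortran code:

```python
import numpy as np

def build_lut(model_lon, model_lat, targ_lon, targ_lat):
    """For each (lat, lon) cell of a regular target grid, store the
    (j, i) index of the nearest point of a curvilinear model grid."""
    lut_i = np.empty((len(targ_lat), len(targ_lon)), dtype=int)
    lut_j = np.empty_like(lut_i)
    for jj, la in enumerate(targ_lat):
        for ii, lo in enumerate(targ_lon):
            d2 = (model_lon - lo) ** 2 + (model_lat - la) ** 2
            j, i = np.unravel_index(np.argmin(d2), d2.shape)
            lut_j[jj, ii], lut_i[jj, ii] = j, i
    return lut_j, lut_i

# Made-up 3x3 "curvilinear" grid (here actually rectangular for clarity).
mlon, mlat = np.meshgrid([0.0, 1.0, 2.0], [10.0, 11.0, 12.0])
lut_j, lut_i = build_lut(mlon, mlat, [0.1, 1.9], [10.1, 11.9])
```

For realistic grid sizes you would vectorize this with a KD-tree (e.g. scipy's cKDTree) rather than the double loop.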
Voucher=>View Vouchers\nchoose one that says (CREATED)\nclick on pen icon on left\nselect confirmation\nselect stamp and submit\nselect continue stamping\nselect close funding\nselect close post stamp ...\nshould see VOUCHER SIGNED
{{{\n<catalog xmlns=""\n xmlns:xlink="" name="OPeNDAP Data Server" version="1.0.1">\n <service name="ncdods" serviceType="OpenDAP" base="/thredds/dodsC/"/>\n <dataset name="Gulf of Maine Model Interoperability Experiment" ID="gom_interop">\n <metadata inherited="true">\n <serviceName>ncdods</serviceName>\n <authority></authority>\n <dataType>Grid</dataType>\n <dataFormat>NetCDF</dataFormat>\n <!-- <publisher>\n <name vocabulary="DIF">OM/WHSC/USGS</name>\n <contact url="" email=""/>\n </publisher>-->\n <geospatialCoverage zpositive="up">\n <updown>\n <start> -3511.6</start>\n <size> 3511.6</size>\n <units>meters</units>\n </updown>\n <northsouth>\n <start>38.666</start>\n <size>7.98</size>\n <units>degrees_north</units>\n </northsouth>\n <eastwest>\n <start>-72.9181</start>\n <size>10.9387</size>\n <units>degrees_east</units>\n </eastwest>\n </geospatialCoverage>\n <documentation\n xlink:href=""\n xlink:title="Gulf of Maine Model Interoperablity Experiment"/>\n </metadata>\n <dataset name="WHOI-ROMS" ID="gom_interop/whoi">\n <metadata inherited="true">\n <creator>\n <name vocabulary="DIF">Dr. Ruoying He</name>\n <contact url="" email=""/>\n <name vocabulary="DIF">Dr. Dennis McGillicuddy</name>\n <contact url="" email=""/>\n </creator>\n <documentation type="Rights"> This model data was generated as part of an academic research\n project, and the principal investigators: Ruoying He ( and Dennis\n McGillicuddy ( ask to be informed of intent for scientific use and\n appropriate acknowledgment given in any publications arising therefrom. The data is\n provided free of charge, without warranty of any kind. 
</documentation>\n <documentation type="Summary"> Hydrodynamic simulations to support Alexandrium fundyense\n research in the Gulf of Maine </documentation>\n </metadata>\n <dataset name="2005 Ecohab Simulation (Tidally Averaged Fields)"\n ID="gom_interop/whoi/2005_avg" urlPath="gom_interop/whoi/2005_avg">\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""\n />\n </aggregation>\n </netcdf>\n </dataset>\n <dataset name="2005 Ecohab Simulation (Instantaneous Fields)" ID="gom_interop/whoi/2005_his"\n urlPath="gom_interop/whoi/2005_his">\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation dimName="ocean_time" type="joinExisting">\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""/>\n <netcdf\n location=""\n />\n </aggregation>\n </netcdf>\n </dataset>\n </dataset>\n\n <catalogRef xlink:title="UMAINE-POM"\n xlink:href="" name=""/>\n\n <dataset name="UMASSB-ECOM" ID="gom_interop/umassb">\n <metadata inherited="true">\n <creator>\n <name vocabulary="DIF">Dr. 
Mingshun Jiang</name>\n <contact url=""\n email=""/>\n </creator>\n <documentation type="summary"> MBEFS: Massachusetts Bay Environmental Forecast System, based\n on ECOM-si (semi-implicit Estuarine and Coastal Ocean Model). The model is driven by solar\n radiation, meteorological forcing, tides, river discharge and boundary forcing, which\n include climatological data, meteorological forecast results, and real-time observations\n including measurements from NOAA buoy 44013 and GoMOOS buoy B. Each daily forecast\n contains 1 hour snapshots for 96 hours (24 hour hindcast and 72 hour forecast). </documentation>\n <documentation xlink:href=""\n xlink:title="MBEFS page at U. Mass Boston"/>\n </metadata>\n <dataset name="Latest Mass Bay Forecast" ID="gom_interop/umassb/latest"\n urlPath="gom_interop/umassb/latest">\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <aggregation type="union">\n <netcdf location=""/>\n <netcdf location=""/>\n </aggregation>\n <variable name="elev">\n <attribute name="standard_name" value="sea_surface_height_above_sea_level"/>\n <attribute name="coordinates" type="String" value="lon lat"/>\n <attribute name="_FillValue" type="short" value="-626"/>\n </variable>\n <variable name="heat_flux">\n <attribute name="coordinates" type="String" value="lon lat"/>\n </variable>\n <variable name="CD">\n <attribute name="coordinates" type="String" value="lon lat"/>\n </variable>\n <variable name="depth">\n <attribute name="coordinates" type="String" value="lon lat"/>\n <attribute name="_FillValue" type="short" value="-99999"/>\n </variable>\n <variable name="temp">\n <attribute name="coordinates" type="String" value="lon lat zpos"/>\n <attribute name="_FillValue" type="short" value="-23405"/>\n </variable>\n <variable name="salt">\n <attribute name="coordinates" type="String" value="lon lat zpos"/>\n <attribute name="_FillValue" type="short" value="-32767"/>\n </variable>\n <variable name="conc">\n <attribute name="coordinates" type="String" 
value="lon lat zpos"/>\n </variable>\n <variable name="u">\n <attribute name="coordinates" type="String" value="lon lat zpos"/>\n <attribute name="_FillValue" type="short" value="0"/>\n </variable>\n <variable name="v">\n <attribute name="coordinates" type="String" value="lon lat zpos"/>\n <attribute name="_FillValue" type="short" value="0"/>\n </variable>\n <variable name="time">\n <attribute name="units" type="String" value="hours since 2006-01-01 00:00 UTC"/>\n </variable>\n <variable name="zpos">\n <attribute name="standard_name" type="String" value="ocean_sigma_coordinate"/>\n <attribute name="formula_terms" type="String" value="sigma: zpos eta: elev depth: depth"/>\n <attribute name="positive" type="String" value="up"/>\n <attribute name="axis" type="String" value="Z"/>\n <attribute name="units" type="String" value="1"/>\n </variable>\n <attribute name="Conventions" type="String" value="CF-1.0"/>\n </netcdf>\n </dataset>\n <dataset name="2000 Mass Bay Simulation (Instantaneous Fields)" ID="gom_interop/umassb/2000"\n urlPath="gom_interop/umassb/2000">\n <serviceName>ncdods</serviceName>\n <netcdf xmlns="">\n <dimension name="time" length="694"/>\n <aggregation dimName="time" type="joinExisting">\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n <netcdf location=""/>\n </aggregation>\n <variable name="time">\n <attribute name="units" type="String" value="days since 2000-01-01 00:00 UTC"/>\n </variable>\n <variable name="temp">\n <attribute name="coordinates" type="String" value="lon lat sigma"/>\n </variable>\n <variable name="sigma">\n <attribute name="standard_name" type="String" value="ocean_sigma_coordinate"/>\n <attribute name="formula_terms" type="String"\n value="sigma: sigma eta: elev depth: depth"/>\n <attribute name="positive" 
type="String" value="up"/>\n <attribute name="units" type="String" value="1"/>\n </variable>\n\n <variable name="lon" shape="ypos xpos" type="float">\n <attribute name="units" type="String" value="degrees_east"/>\n <values>-70.56301 -70.55691 -70.55035</values>\n <!-- remaining longitude values omitted; the full listing was truncated in the source -->\n </variable>\n </netcdf>\n </dataset>\n </dataset>\n </dataset>\n</catalog>\n}}}
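The joinExisting aggregations above present many per-time-step files as a single dataset concatenated along ocean_time. A minimal pure-Python sketch of that behavior, with invented file contents (the real server does this transparently and serves the result via OPeNDAP):

```python
# Sketch of what a THREDDS/NcML "joinExisting" aggregation does: each
# file holds a slab of the same variables, and the aggregation presents
# one dataset concatenated along the aggregation dimension.
# The file contents here are invented for illustration.

def join_existing(files, dim="ocean_time", var="temp"):
    """Concatenate per-file records along the aggregation dimension."""
    joined = {dim: [], var: []}
    for f in files:
        joined[dim].extend(f[dim])
        joined[var].extend(f[var])
    return joined

files = [
    {"ocean_time": [0, 1], "temp": [10.1, 10.2]},
    {"ocean_time": [2, 3], "temp": [10.3, 10.4]},
]
agg = join_existing(files)  # one logical dataset, 4 time steps
```

A union aggregation (used for the Mass Bay forecast above) merges variables from different files instead of concatenating along a dimension.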
list all conda environments\n{{{\nconda info -e \n}}}\nremove all packages from an environment and then delete the environment\n{{{\nconda remove -n my_env --all\n}}}\nto try to automagically build a recipe for a package available from pypi:\n{{{\nconda build pyoos --build-recipe\n}}}
cut-n-pasted my ~/.ssh/ into github\nthen in cygwin:\n{{{\ngit config --global "Rich Signell"\ngit config --global <email>\n\ncd RPSstuff\ngit init\ngit remote add origin\ngit add -A\ngit commit -am "initial commit"\ngit push -u origin master\n}}}
{{{\n [a,b,c]\n a || b || c\n\n [[a,b,c]]\n a && b && c\n\n [[a,b],[c],[d],[e]] or [[a,b],c,d,e]\n (a && b) || c || d || e\n}}}\n
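The bracket notation above maps nested lists onto and/or logic: the outer list is an OR over its items and an inner list is an AND over its members. A small hypothetical Python evaluator (not from any particular library) makes the rule concrete:

```python
# Outer list = OR of its items; an item that is itself a list = AND of
# its members.  So [[a,b],c,d,e] evaluates as (a and b) or c or d or e.

def evaluate(expr):
    """Evaluate the nested-list and/or notation over boolean values."""
    def term(t):
        return all(t) if isinstance(t, list) else bool(t)
    return any(term(t) for t in expr)

# [a, b, c]       -> a or b or c
# [[a, b, c]]     -> a and b and c
# [[a, b], c, d]  -> (a and b) or c or d
```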
Want to see what someone has installed in their cygwin?\n\n{{{cygcheck -c}}}\n\nCool!
{{{\n conda create -n blog python=2.7 pip\n\nsource activate blog\n}}}\nGet the "requirements.txt" from this page:\n\n{{{\nmore requirements.txt\n\nJinja2==2.7.1\nMarkdown==2.3.1\nMarkupSafe==0.18\nPygments==1.6\nUnidecode==0.04.14\nblinker==1.3\ndocutils==0.11\nfeedgenerator==1.7\nipython==1.1.0\npelican==3.3\npytz==2013.7\nsix==1.4.1\nwsgiref==0.1.2\n\n pip install -r requirements.txt\n}}}\nAlso install ghp-import\n{{{\npip install ghp-import\n\ncd blog/python4oceanographers\nmake html\n\n\n(blog) ssh-keygen -t rsa -C ""\n\n(blog) ssh-agent | tee ~/.ssh/ (every time you login)\n(blog) source ~/.ssh/ (every time you login)\n(blog) ssh-add ~/.ssh/id_rsa (every time you login)\n(blog) cd ~/blog/python4oceanographers/github\n(blog) git clone\n(blog) ghp-import -p ../output\n}}}\nthis pushes the pages to the gh-pages branch of my blog repository on github
Govtrip => Vouchers => Edit => Review and Sign => Digital Signature\nThen proceed and write either "authorized" or "would not take gov't cc" (for cash payments) and then "submit completed document"
Setting up gridFTP to the SURA IOOS Testbed server (\n\nEdit /etc/grid-security/grid-mapfile to add one line for each user like the following, where the GlobusOnline username is "rsignell" and the local account is "sura_ftp".\n\n /C=US/O=Globus Consortium/OU=Globus Connect User/CN=rsignell sura_ftp\n\nExample: my username is "rsignell" and I have a local endpoint called "laptop2",\nso I would select:\nEndpoint1 (e.g. left window): rsignell#laptop2\nEndpoint2 (e.g. right window): ioos#testbed\nThen just drag files or folders from left to right.\n\nRich\n\n\n
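Each grid-mapfile line maps a certificate distinguished name (DN) to a local account. A hypothetical sketch of parsing such an entry (Globus tooling does this itself; the helper name here is made up):

```python
# Split a grid-mapfile entry into (DN, local_account).  The DN may
# contain spaces, so split on the LAST space; some grid-mapfiles also
# quote the DN, which is handled first.

def parse_grid_map_line(line):
    line = line.strip()
    if line.startswith('"'):                  # quoted DN form
        dn, _, rest = line[1:].partition('"')
        return dn, rest.strip()
    dn, _, local = line.rpartition(" ")       # bare DN form
    return dn.strip(), local.strip()

entry = "/C=US/O=Globus Consortium/OU=Globus Connect User/CN=rsignell sura_ftp"
dn, account = parse_grid_map_line(entry)
```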
Quantity: about 1 pint\n\nIngredients:\n\n1 cup water\n1 1/3 cups pure cane sugar\n1 Tbsp. cinchona bark (available in some herb stores or online)\n2 Tbsp. powdered citric acid (found in the bulk section of most well-stocked grocery stores)\n1 lime, zested and juiced\n1 stalk lemongrass, diced\n\nIn a small saucepan, bring the sugar and water to a boil until the sugar dissolves, then turn the heat down to low. Add the cinchona bark, citric acid, lemongrass, lime zest and lime juice. Stir well and simmer for about 25 minutes. The syrup will be thin and runny. Remove from heat and let cool. Strain out the large chunks through a tea strainer or fine mesh colander into a 1 pint mason jar. Will keep for a month or more in the refrigerator. Use about 1/2 oz of syrup with 3 to 5 oz of sparkling water and 1.5 oz gin for an awesome gin and tonic.
It would be cool to have a tool to compare the bottom temperature data from the groundfish surveys to FVCOM, Global RTOFS and ROMS:\nsend (x,y,z,t), get back data from FVCOM or \nNOAA NEIS goal: search, access all NOAA data\nuse Apache SOLR to index ISO metadata records from metadata catalogs (GeoPortal)\nTerraViz (uses Unity, a popular 3D game engine, leveraging GPUs; will run on web, iPad, Wii and Xbox)\n\nJeff Smith\n\nbuild/work/deploy: iPad => Objective-C, Windows => DirectX, Linux => OpenGL\n\n
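The send-(x,y,z,t), get-back-data idea reduces to a nearest-model-point lookup. A toy sketch with an invented two-point "grid" (a real tool would read FVCOM/ROMS/RTOFS output and interpolate properly):

```python
# Toy core of the proposed comparison tool: return the model value at
# the grid point nearest to a query (x, y, z, t).  Brute force; a real
# model grid would need a spatial index and proper interpolation.

def nearest_value(points, values, query):
    """points: list of (x, y, z, t) tuples; values: parallel list."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    i = min(range(len(points)), key=lambda k: dist2(points[k], query))
    return values[i]

points = [(0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)]  # invented grid
temps = [5.0, 6.0]                                      # invented temps
t_near = nearest_value(points, temps, (0.9, 0.0, 0.0, 0.0))
```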
Trick to make NaN values look white in imagesc, just like in pcolor:
{{{
A = magic(3);
AA = A;
AA(1) = NaN;                 % introduce a NaN value
h = image(AA); colorbar
colormap(jet(9))
imalpha = ones(size(A));
imalpha(isnan(AA)) = 0;      % make NaN cells fully transparent
set(h,'alphadata',imalpha)   % white figure background shows through
}}}
1. Fill out this moped (motorized bicycle) registration form:
2. Send the renewal form with a stamped, self-addressed envelope and check to:

Registry of Motor Vehicles
ATTN: Moped Registration Renewal
P.O. BOX 55889
Boston, MA 02205
The location of the ncWMS config file is set in
{{{
/usr/local/tomcat4/webapps/ncWMS-1.5/WEB-INF/WMS-servlet.xml
}}}
and we've modified the default location for coast-enviro so that the config file is in
{{{
/usr/local/tomcat4/content/ncWMS/config.xml
}}}
When a new WAR file is deployed, WMS-servlet.xml is overwritten, so I've put "backup" copies of these two files in my home directory on coast-enviro. To move them back we can do:
{{{
sudo cp ~/WMS-servlet.xml /usr/local/tomcat4/webapps/ncWMS-1.5/WEB-INF/WMS-servlet.xml
sudo cp ~/config.xml /usr/local/tomcat4/content/ncWMS/config.xml
}}}
First, NCO 4.0.1 introduced a simpler method to turn fixed dimensions into record dimensions (generic in.nc/out.nc filenames shown):

{{{
ncks --mk_rec_dmn time in.nc out.nc
}}}
Details are here:
This saves a few lines and significantly reduces disk access.
The preferred way to attack your problem is to change time into the record dimension using this command once per input file.
To get ncview going, you have to start up xinit with Cygwin using:
{{{
xinit -- -nolock
twm &
}}}
Nemo is an Ubuntu system:
{{{
rsignell@nemo:/raid4/rsignell/Projects/COAWST/Refined_chan$ more /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=8.04
DISTRIB_CODENAME=hardy
DISTRIB_DESCRIPTION="Ubuntu 8.04.1"
}}}

There are 22 nodes on nemo (nemo2-nemo23), each with 8 cpus.

To show the status of all nodes:
{{{
pbsnodes -a
}}}
To see if any nodes are down:
{{{
pbsnodes -a | grep down
}}}

To make sure the right netcdf is used for ROMS/COAWST, I need to make sure that the right nc-config is being used:
{{{
rsignell@nemo:/raid4/rsignell/Projects/COAWST/Refined_chan$ which nc-config
/share/apps/netcdf/bin/nc-config
}}}
This means checking the tail end of .bashrc to make sure that the path is set so that this happens.
Adriatic Sea:
{{{
m_proj('albers equal-area','parallels',[39 49],'clon',15,'lon',[7 23],'lat',[39 49],'ell','clrk66')
clf;m_gshhs_f('patch',[.6 .6 .6]);dasp;shg
}}}

Great Lakes:
{{{
m_proj('albers equal-area','parallels',[41 49],'clon',15,'lon',[-93 -76],'lat',[41 49],'ell','clrk66')
clf;m_gshhs_f('patch',[.6 .6 .6]);dasp;shg
}}}

Italian place names:
{{{
curl -o citiesJSON.txt
# grep -v fcodeName citiesJSON.txt > foo.txt
# grep -v fcl foo.txt > foo2.txt
# grep -v name foo2.txt > foo.txt
# grep -v fcode foo.txt > foo2.txt
# grep -v geonameId foo2.txt > foo.txt
# grep -v wikipedia foo.txt > foo2.txt
sed -e "s/\"/'/g"
}}}
All the port mapping proxy stuff is set up at
{{{
root@gam:/etc/apache2/sites-enabled
}}}
in these files:
{{{
010-usgs_geoport
020-usgs_geoport-dev
}}}
Note: before using another port, check to make sure that it's not already being used by doing:
{{{
sudo netstat -atpn | grep 8090
}}}
(to check if 8090 is being used; if it's not in use, nothing will come back).

Here are the ports being used, copied from gam:~root/gam_ports.txt (a text file Jeff set up to remind himself which ports were already in use):
{{{
          thredds  geoportal  ramadda  erddap  thredds-ooi
http      8081     8082       8083     8084    8091
https     8444     8445       8446     8447    8454
ajp       8009     8011       8012     8013    8047
shutdown  8006     8007       8020     8008    8037
jmxremote 8506     8510
}}}
{{{
          thredds-dev  ncWMS-dev  ramadda-dev  erddap-dev
http      8085         8086       8087         8088
https     8448         8449       8450         8451
ajp       8041         8042       8043         8044
shutdown  8031         8032       8033         8034
}}}
{{{
          gicat  gicat-dev
http      8089   8093
https     8452   8453
ajp       8045   8046
shutdown  8035   8036
jmxremote
}}}
{{{
          trac/svn
http      8092
https     8455

NetWorker (backup software)
          8090

IPython notebook (Massimo)
https     8455

IPython notebook (EPD)
https     8456
}}}
{{{
C:\Python27\;C:\Python27\DLLs;C:\Python27\Scripts;;C:\Python27\Lib\site-packages;C:\Program Files\swan;C:\Program Files\Tecplot\Tec360 2010\Bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\MATLAB\R2010b\bin;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\TortoiseSVN\bin;C:\Program Files (x86)\VideoMach;C:\Program Files\IVS7\bin;C:\Program Files (x86)\TortoiseSVN\bin;c:\programs\apache-ant-1.8.1\bin;C:\Program Files (x86)\QuickTime\QTSystem;C:\ArcGIS\arcexe10x\bin;C:\ArcGIS\Desktop10.0\ArcToolbox\Toolboxes\USGS_EGISTools\scripts\metadata;C:\ArcGIS\Desktop10.0\egis\metadata\tools\bin;C:\ArcGIS\egis\gbin;C:\opendap\loaddap;C:\Program Files (x86)\Common Files\Acronis\SnapAPI\;C:\Program Files\TortoiseHg\
}}}
How many 8-cpu nodes on peach?
{{{
pbsnodes -a | grep 'np = 8' | wc -l
25
}}}
How many are down?
{{{
rsignell@peach:/peach/data0$ pbsnodes -a | grep down | wc -l
1
}}}
Eric Bridger asked me for the outline of the FVCOM grids, and I didn't know how to do this, so here's what I came up with.

First I downloaded Brian Blanton's old OPNML matlab routines, because they have a "plotbnd.m" script, which sounds like the right thing. I put this in c:/rps/m_contrib/trunk/opmnl_matlab5. To use this script you need a "FEM grid structure", which is described in "fem_grid_struct.m". In that routine, if the boundary list isn't defined, it's created with "detbndy.m", so I made the following script to first plot the boundary using plotbnd, then extract the x,y positions and use join_cst.m to turn the collection of two-point line segments into continuous coastline pieces, from longest to shortest.
{{{
% Script plotbnd_fvcom.m
url='';
fout='gom2_bnd.txt';

% Use NJ Toolbox for Matlab
nc=mDataset(url);

% Get Nodal Grid
lon=nc{'lon'}(:);lat=nc{'lat'}(:);tri=nc{'nv'}(:).';
h=nc{'h'}(:);
close(nc);

% load into FEM grid struct required by OPNML tools
% help FEM_GRID_STRUCT
fem.name='gom2';
fem.e=double(tri);
fem.x=double(lon);
fem.y=double(lat);
fem.z=double(h);
% determine boundary segment list
fem.bnd=detbndy(double(tri));
% plot up boundary. Looks good?
h=plotbnd(fem);
% extract boundary positions from graphics object and join all these
% two-point segments
x=get(h,'XData');
y=get(h,'YData');
coast=[x(:) y(:)];
coast=join_cst(coast,.0001);
% save as ascii
saveascii(fout,coast,'%12.6f %12.6f\n');
}}}
Following the instructions here:
{{{
ssh

sudo su
su - wakari
/opt/wakari/wakari-server/bin/supervisord
/opt/wakari/wakari-gateway/bin/supervisord
/opt/wakari/wakari-compute/bin/supervisord

/opt/wakari/wakari-server/bin/supervisorctl start all
/opt/wakari/wakari-gateway/bin/supervisorctl start all
/opt/wakari/wakari-compute/bin/supervisorctl start all
}}}
{{{
cd /home/rsignell/python/sci-wms/src/pywms
gunicorn_django -c -D
}}}
{{{
[rsignell@testbed2 ~]$ sudo -i
[root@testbed2 ~]# su - wms
[wms@testbed2 ~]$ source ~/envs/standard/bin/activate
cd /home/wms/sci-wms
./
}}}
However, via port forwarding and Cygwin this request can be accommodated. I will assume you already have Cygwin installed on the laptop you are using from home.

From PuTTY:

* Load the profile you use for connecting to our internal SSH server.
* On the right pane under Category, go to SSH > Tunnels.
* Under Port Forwarding, go to Source port and enter 10000 (matching the "ssh -p 10000" command below).
* For Destination, enter (IP for geoport).
* Click Add.
* Under Port Forwarding, go to Source port and enter 10001.
* For Destination, enter (IP for peach).
* Click Add.
* Go back to Category > Session and click the Save button.

To launch the tunnel:

Log in to the internal SSH server.
Once you are granted access, from Cygwin enter:

For ssh to geoport: "ssh -p 10000"
For ssh to peach: "ssh -p 10001"

This will tunnel your SSH traffic to the WHOI geoport or peach server.
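For an OpenSSH client (e.g. Cygwin's own ssh), the same tunnels can be described in ~/.ssh/config instead of a PuTTY profile. This is only a sketch: the host alias, the internal server name, and the two IP placeholders are hypothetical and must be filled in from the steps above.

```
# Hypothetical ~/.ssh/config equivalent of the PuTTY tunnel profile above.
# Replace internal.example.org and the <...> placeholders with the real
# internal SSH server and the geoport/peach IPs.
Host whoi-tunnel
    HostName internal.example.org
    LocalForward 10000 <geoport-ip>:22
    LocalForward 10001 <peach-ip>:22
```

With this in place, `ssh whoi-tunnel` opens both forwards in one step, and the local ports behave the same as with the PuTTY setup.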
Open a 32-bit Anaconda command prompt and type:
{{{
activate starcluster
starcluster terminate rps_cluster
}}}
Go to nomads3.
Use directory "tomcat/apache-tomcat-6.0.32", not "tomcat6".
The problem: we find that <tomcat2>/logs/catalina.out is HUGE, and is full of "too many files open" errors.

We can try to figure out which files are associated with tomcat by first finding the <PID>:
{{{
ps -ef | grep tomcat2
}}}
and then doing:
{{{
sudo /usr/sbin/lsof -a -p <PID> | more
}}}
This revealed a bunch of umb_massbay files.

I also found that the root file partition "/" was full.
{{{
du -h --max-depth=1
}}}
was super handy.

I also found out that the default limit on open files on a Linux system is 1024. You can check by doing:
{{{
ulimit -n
}}}
Then you can increase it by following this article, using:
{{{
$ more /etc/security/limits.conf
* - nofile 4096
}}}
After this modification we have not had the "too many files open" problem.
{{{
ssh
cd /usr/local/las.v7.1.2/conf/server
vi las.xml
}}}
Add the dataset xml files in two places.
Restart tomcat.

cbofs.xml
hypoxia.xml
sabgom.xml
GovTrip with the new FBMS system, it will be as follows:

2014: GX14.GY00ET2A100
2013: GX13 GY00ET2A100
2012: GX12.GY00.ET2A100

Copies of travel authorization to:
Janet Jaensch
Susan Russell-Robinson
{{{
proj -I +init="EPSG:32619" -f %12.6f >
}}}
{{{
vs005  august 5, 2007
vs008  sep 25, 2007  30 day run with 1 seds, MORFAC=100 (5 m of 800 micron sediment)
vs009  sep 23, 2007  30 day run with 4 seds (1 m of 200,400,600,800 sediment)
vs010  sep 23, 2007  30 day run with 5 seds (1 m of 50,100,200,400,800 sediment)
vs011  sep 25, 2007  30 day run with 5 seds only on Middle Ground shoal (same as vs010)
}}}
Currently in WWM, Aron has: unstructured grid, implicit scheme, ultimate quickest for advection, and parallelization with Joseph Zhang using domain decomposition.

Work for Hendrik:
1. unstructured grid in WW3 (done)
2. implicit scheme in WW3 (testing)
3. ultimate quickest in WW3 (should be done by end of year)

Other issues: currently WW3 parallelizes by doing each spectral component separately, but there is no domain decomposition. Aron & Joseph want to get some funding before they give that away.

Aron thinks the best testbed activities would bring the air, wave, hydro and sediment people together to work on nearshore waves, currents, met and morphodynamics.