I have 16 netCDF files (without a time dimension) on a global scale that contain the same number of longitude grid points (432) and the same number of latitude grid points (324). Each file contains a slice of the data; for example, file 1 holds the slice from latitude 90 to 80 and file 2 holds the slice from latitude 80 to 70. I want to merge these files, each containing a different slice, to obtain a complete global dataset.
I have tried to merge the netCDF files with cdo mergegrid as described in this answer https://stackoverflow.com/a/51566286/20668027, and that worked as far as I could see. However, the CDO documentation states that mergegrid is only supported for rectilinear grids. The grid I am using is not rectilinear, but since the operation seemed to work on my data, I was wondering whether I can still rely on cdo mergegrid. Does anyone have insights about this?
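For concreteness, the pairwise merging looks roughly like this (a sketch only; the file names are hypothetical, and cdo mergegrid combines two input files at a time, so the 16 slices are folded in one by one):

cp slice_01.nc merged.nc
for f in slice_{02..16}.nc; do
    # merge the next latitude slice into the running result
    cdo mergegrid merged.nc "$f" tmp.nc
    mv tmp.nc merged.nc
done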
Thank you very much in advance!
The NCO equivalent of CDO's ensmean is nces, described here. You could try:
nces in*.nc out.nc
I have several datasets (approx. 10) that come from user input, so the labels (x-axis) will almost never overlap between the datasets.
What I would like to do is connect points from the same dataset (for instance bloodPressure) through a "non-existing" data point when necessary, like in the graph below. I would not want to fake a data point to achieve this.
Any suggestions on how to do this?
I found the answer in the Chart.js documentation. It is possible to skip the labels array and instead give each data point in a dataset its own x and y value, as in the screen dump.
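A minimal sketch of that approach, assuming Chart.js 3+ (the dataset names and values here are made up):

const config = {
  type: 'line',
  data: {
    datasets: [
      {
        label: 'bloodPressure',
        // each point carries its own x, so no shared labels array is needed
        data: [{ x: 1, y: 120 }, { x: 4, y: 118 }, { x: 9, y: 125 }]
      },
      {
        label: 'heartRate',
        data: [{ x: 2, y: 72 }, { x: 6, y: 75 }, { x: 8, y: 71 }]
      }
    ]
  },
  options: {
    // a linear (or time) x axis lets each dataset keep its own x positions
    scales: { x: { type: 'linear' } }
  }
};
new Chart(document.getElementById('myChart'), config);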
Works fine!
I have a bunch of data where the hours taken to process an item range from 3 to 3000. Most of the data is under 1000 hours.
I am creating a boxplot of that data. There is a large number of outliers that I don't need to display, but that I do need to analyse.
I have tried both scale_y_continuous(limits = c(0, 1000)) and ylim(0, 1000), but they appear to change the data used to build the boxplot. To test this theory I set the limit to 20 and still got a complete plot, which can only mean that the method I'm using to limit the axis also limits the range of data being analysed.
I'd like to limit the y axis without limiting the range of data used in the analysis. What function do I use to accomplish that?
Many thanks
It appears that coord_cartesian(ylim = c(nnn, nnn)) is what I needed to use.
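A minimal sketch of the difference, assuming a data frame df with columns group and hours (names made up):

library(ggplot2)

# coord_cartesian() only zooms the view; all observations still feed the boxplot statistics
ggplot(df, aes(x = group, y = hours)) +
  geom_boxplot() +
  coord_cartesian(ylim = c(0, 1000))

# By contrast, scale_y_continuous(limits = c(0, 1000)) or ylim(0, 1000)
# drops rows outside the range before the boxplot is computed.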
I need help regarding time series in Tableau. So far, here is what I can do:
Connect to TabPy
Call / run scripts on TabPy
My current issue is that Tableau doesn't seem to allow more output elements than input elements. Say I want to use the last 100 data points to predict the next 10 points. Getting the data into Python isn't a problem; the problem comes when I want to return a list with 110 elements. I've also tried returning just the 10 predicted elements, and it complains that it expects a 100-element list.
Thanks for reading
I've found a workaround. You can see the post here for more information. Basically, you shift the original values by the prediction amount and then have the prediction return the same number of elements as the shifted original.
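A rough Python-side sketch of that idea (the forecasting step is a naive placeholder; in Tableau the measure would be shifted by the prediction horizon, e.g. with LOOKUP, before being passed in as _arg1):

def shifted_forecast(values, horizon=10):
    # `values` is the shifted series Tableau passes in: the last `horizon`
    # entries are empty placeholders that will hold the predictions.
    history = [v for v in values if v is not None]
    # naive placeholder forecast: repeat the last observed value
    predictions = [history[-1]] * horizon
    # return exactly as many elements as were passed in
    return (history + predictions)[:len(values)]

# Hypothetical calculated field in Tableau:
# SCRIPT_REAL("return shifted_forecast(_arg1)", LOOKUP(SUM([Value]), 10))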
G'day,
I have ocean model output in the form of netCDF files. The netCDF files are approximately 21 GB, and the variables that I want to load are also pretty big (matrices of roughly 120 * 31 * 300 * 400).
I want to load some of these variables from a netCDF file into MATLAB. Usually, I would do this via:
ncload('filename.nc',var1)
which loads the variable var1 into a similarly named MATLAB variable. However, since I only need a single column of var1, I want to load just a subset of it - this should speed up the loading process. For example, say,
size(var1)
>> var1 120x31x260x381
I only want the 31st column; loading the other 30 columns and then discarding them seems like a waste of time. In other words, this is what I want to accomplish: ncload('filename.nc',var1(:,31,:,:)).
I know there are a few different netCDF toolboxes floating around, and I have heard that one can use a stride flag to load only every xth entry... but I'm not sure whether it's possible to do what I want. Does anyone know of a way to do this?
Cheers
If you have a current version of MATLAB, look for NCREAD and the example therein.
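A minimal sketch with ncread, assuming the variable in the file is called var1 and the dimension you want to slice at index 31 is the second one:

% start gives the first index to read along each dimension,
% count gives how many entries to read (Inf = everything along that dimension)
start = [1 31 1 1];
count = [Inf 1 Inf Inf];
slice = ncread('filename.nc', 'var1', start, count);
size(slice)   % 120 x 1 x 260 x 381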
What is the best way to calculate and add a field to a data file that shows the as-the-crow-flies distance (in miles) between two zip codes for each of the 250K+ records in the file? Thanks!
Use this page for the raw distance information. Then use this website for the formula from Adam Bellaire's response to calculate the distance.
There is no reason to use an external service as zip codes really don't change all that often.
Enjoy!
Get a Google Earth API Key and use the API to calculate distances between two zip codes in your language of choice.
UPDATE:
If a web service isn't for you, you can check my old Visual Basic posting on Planet Source Code. It has a database of zip codes, their lat/long positions, and some VB code to calculate the distance between two zip codes. The zip code DB will probably need some updating, and it's in MS Access format (but you can move that data anywhere).
You could use the Yahoo Geocoding Service to first get the latitude and longitude coordinates for each zip code, then simply use the haversine formula to get the distance between any two pairs of latitude/longitude values.
Here is a C# implementation of the haversine formula.
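For reference, a typical C# haversine implementation looks roughly like the following sketch (not the code from the link above; the earth radius is in miles to match the question):

using System;

static class Haversine
{
    const double EarthRadiusMiles = 3958.8;

    // great-circle ("crow-fly") distance between two lat/long points, in miles
    public static double DistanceMiles(double lat1, double lon1, double lat2, double lon2)
    {
        double ToRad(double deg) => deg * Math.PI / 180.0;

        double dLat = ToRad(lat2 - lat1);
        double dLon = ToRad(lon2 - lon1);

        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);

        return EarthRadiusMiles * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }
}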
Enjoy!