4D (or higher) popular VTK or netCDF dataset - visualization

I am looking for a popular 4-dimensional (or higher-dimensional) dataset for my term project in scientific visualization. I searched the Internet and found only 3D datasets. Any ideas?

If you are just looking for something small, the latest versions of ParaView come with a data set named can.ex2 (look in the Examples directory). For somewhat larger data you can download some examples from https://www.unidata.ucar.edu/software/netcdf/examples/files.html. The file tos_O1_2001-2002.nc is technically a 2D time series, but ParaView can load it up as a sphere in 3D. The file rhum.2003.nc is a true 3D time series. Both of these files are loaded as "NetCDF files generic and CF conventions".
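If you want to poke at those files outside ParaView, MATLAB's built-in netCDF reader works too. A minimal sketch for inspecting and slicing rhum.2003.nc, assuming the variable is named rhum and is stored as lon x lat x level x time (the time dimension is what makes it effectively 4D; check ncinfo's output for the actual layout):
% List the variables and dimensions in the file.
info = ncinfo('rhum.2003.nc');
disp({info.Variables.Name});
% Read the full 4D array and look at one 2D slice of it.
rhum = ncread('rhum.2003.nc', 'rhum');  % assumed lon x lat x level x time
slice1 = rhum(:, :, 1, 1);              % first level, first time step
imagesc(slice1'); axis xy; colorbar;
title('rhum, level 1, t = 1');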

Related

Analyze 3D objects by voxels

I plan to use OpenVDB to analyze 3D objects/meshes. The objective is:
To detect object surface regions that meet a certain criterion, like slope
Then to manipulate those regions
The manipulation might, for example, be adding other 3D objects to those regions
OpenVDB has some tools available:
Conversion Tools
Filters
Topological Operations
Level Set Tools
Morphological Operations
Geometric Transforms
Compositing Tools
...
It is a large, confusing set of tools to choose from. Does anybody with OpenVDB experience know:
Is OpenVDB the proper library to achieve my objective?
If so, which OpenVDB tool best suits my needs?
Answer provided by the OpenVDB community:
An important question is what you mean by "3D objects/meshes." OpenVDB is very good at performing those kinds of operations on surfaces by representing them as signed distance fields. But the word "mesh" raises some alarm bells that you may want to maintain topology; in that case another library may be more effective.
It also sounds like you have a problem domain you are trying to explore. For that, I would not go straight to code but instead explore solutions using 3D applications first. My own biased first choice would be Houdini, whose Apprentice version you can get for free. It provides most of the VDB code as separate nodes. So, for example, you can use a File SOP to load a mesh from disk, a VDB From Polygons to convert it to a signed distance field, and then a VDB Analysis to compute the gradient. I think the gradient matches what you are looking for as slope, but it is also possible you are looking for curvature...
To return to mesh land, you can use a VDB Convert. Finally, a ROP Geometry can save it out.
Attached is a file showing a network to compute an approximate Y-slope
as a volume, apply it back to a mesh, and save to disk.
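For a feel of what that gradient step computes, here is a small MATLAB sketch of the same idea on a dense regular grid, using a sphere's signed distance field as a stand-in for the VDB; the grid, the shape, and the Y-up slope definition are all assumptions for illustration:
% Signed distance field of a unit sphere on a 64^3 grid.
[x, y, z] = meshgrid(linspace(-2, 2, 64));
sdf = sqrt(x.^2 + y.^2 + z.^2) - 1;
% The gradient of an SDF approximates the surface normal direction.
[gx, gy, gz] = gradient(sdf);
mag = max(sqrt(gx.^2 + gy.^2 + gz.^2), eps);
% "Slope": angle between the normal and the vertical (y up, matching the
% Y-slope mentioned above); 0 is flat, pi/2 is a vertical wall.
slope = acos(abs(gy) ./ mag);
% Inspect the slope field on a slice through the volume's centre.
imagesc(slope(:, :, 32)); axis image; colorbar;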

Using a MODIS Terra TIF file to get a land-water map

I am able to read the MODIS image into MATLAB, and I want to do some simple calculations. But how do I access the different bands in MATLAB?
I have downloaded the 7-2-1 TIF data (not HDF) from the MODIS website, and I want to create a map that shows flooded and non-flooded regions. I would like to do an ISODATA classification in MATLAB (and also in ArcGIS or ENVI). So far I have read the data into both ArcGIS and MATLAB, but I am not able to use the different bands (7-2-1).
I am not sure how to proceed.
This is an example MODIS image: http://lance-modis.eosdis.nasa.gov/imagery/subsets/?subset=USA6.2013323.terra.721.1km
And I am using geotiffread.
I have
ArcGIS 10 (All tools)
MATLAB
ENVI
Could you please guide me?
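As a starting point, here is a minimal MATLAB sketch of the band-access step, assuming the GeoTIFF stores the composite's three channels in 7-2-1 order (the filename and the threshold are placeholders). MODIS band 7 is shortwave infrared, which water absorbs strongly, so a low band-7 value is a crude water indicator; kmeans is a rough unsupervised stand-in for ISODATA, which MATLAB does not ship with:
% Read the GeoTIFF (Mapping Toolbox) and split the three channels.
[A, R] = geotiffread('modis_721.tif');   % A is rows x cols x 3
band7 = double(A(:, :, 1));              % SWIR (MODIS band 7)
band2 = double(A(:, :, 2));              % NIR  (MODIS band 2)
band1 = double(A(:, :, 3));              % red  (MODIS band 1)
% Crude land-water mask: water is dark in SWIR. Threshold is data-dependent.
water = band7 < 30;
% Unsupervised alternative: cluster the three bands into two classes.
X = [band7(:), band2(:), band1(:)];
labels = kmeans(X, 2);                   % Statistics Toolbox
mask = reshape(labels, size(band7));
imagesc(mask); axis image; colorbar;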

Efficiently visualise large quantities of points with MATLAB

I have a set of around 1 million 3D points that I would like to visualise with MATLAB.
I have tried the following functions:
plot3
scatter3
But they are both very sluggish. Is there a more efficient way to visualise this many points in MATLAB? Maybe a way to mesh the points?
If not can anyone suggest a plug-in or even a different program for visualising 3D points?
You're going to run into efficiency issues no matter what plugin/program you use if you want all million+ points to show up in a plot. My suggestion would be to downsample. Use the plot3 or scatter3 function on every other point, or every nth point, until you get a figure that is not sluggish. As long as the variance in your data isn't astronomical, downsampling a little bit shouldn't affect the overall distribution of points (given that you have a million+ points). And any software that is able to display that much data without being sluggish is most likely downsampling/binning or using some interpolation technique to do so (so you might as well have control over it).
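A minimal sketch of that downsampling, assuming the points sit in an n-by-3 matrix P:
% Keep every nth point; increase n until the figure is responsive.
n = 10;
Q = P(1:n:end, :);
% Or a random subsample, which avoids aliasing on structured data:
% Q = P(randperm(size(P, 1), ceil(size(P, 1) / n)), :);
plot3(Q(:, 1), Q(:, 2), Q(:, 3), '.', 'MarkerSize', 1);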
fscatter3 from the File Exchange does what you want.
Is there a specific reason to actually have it display that many points?
I Googled around a bit and found some people who have had similar issues (someone suggested Avizo as an alternate program but I've never used it):
http://www.mathworks.com/matlabcentral/newsreader/view_thread/308948
http://www.mathworks.com/matlabcentral/newsreader/view_thread/134022
An alternate solution would be to generate a histogram if you're more interested in the density of the data:
http://blogs.mathworks.com/videos/2010/01/22/advanced-making-a-2d-or-3d-histogram-to-visualize-data-density/
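A small sketch of that density view, assuming the points are in an n-by-3 matrix P (hist3 is in the Statistics Toolbox):
% 2D density of the x-y coordinates on a 100x100 grid.
[counts, centers] = hist3(P(:, 1:2), [100 100]);
imagesc(centers{1}, centers{2}, counts');  % transpose so rows map to y
axis xy; colorbar;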
If you know beforehand roughly the coordinates of the feature you are looking for, try passing the cloud through a simple pass-through filter, which essentially crops your point cloud. I.e. if you know that the feature is at an x-coordinate > 5, remove all points with x-coordinate < 5.
For the first coordinate, this filter could be realised as
data = data(data(:,1) > 5, :);  % keep rows whose x-coordinate exceeds 5
provided that your 3D data is stored in an n-by-3 matrix.
This, together with downsampling, could help you out. If you still find the performance lagging, consider using something like the PCD viewer in the Point Cloud Library; check halfway down the page at
http://pointclouds.org/documentation/overview/visualization.php
It is a standalone app you could launch from MATLAB. I find its performance far better than the sluggish MATLAB plotting tools.
For anyone who is interested, I ended up finding a point cloud visualiser called CloudCompare. It is extremely fast and allows selection and segmentation as well as filtering on point clouds.

Creating figures for publication - artefacts in export

I have been trying for hours to get satisfying vectorized output from a 3D MATLAB plot. I illustrated the artefacts of the resulting PDF exports in the following image (created with export_fig -> -r2000). I know that this problem is somehow related to the PDF viewer, but is there no way to get output that is compatible with all viewers?
In addition, I have tried libraries like plot2svg and matlab2tikz, but they seem to have trouble with some of my surface plots, resulting in completely different problems.
If there is no other way to create vectorized output of the figures, do you have any tips for high-quality bitmap figures (especially regarding the font blurring)?
In my experience, exporting MATLAB figures yields less than satisfying results.
Personally, I prefer to export the data to other programs and create the plots there.
Excel does a pretty good job with many types of figures. You may also try gnuplot.
I know this is not exactly the answer you are looking for, but sometimes, instead of fighting MATLAB, it is best to leave these jobs to better-suited software.
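For the handoff itself a one-liner usually suffices. A sketch assuming the plotted data lives in matrices X, Y, Z (writematrix requires R2019a or later; dlmwrite works on older versions):
% Dump the surface as plain whitespace-separated columns that Excel
% or gnuplot can ingest.
writematrix([X(:), Y(:), Z(:)], 'surface.dat', 'Delimiter', ' ');
% In gnuplot, something like:  splot 'surface.dat' using 1:2:3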

Ideas for extracting features of an object using keypoints of an image

I would appreciate it if you could help me create a feature vector for a simple object using keypoints. For now, I use the ETH-80 dataset; the objects have an almost blue background, and the pictures are taken from different views. Like this:
After creating a feature vector, I want to train a neural network with this vector and use that neural network to recognize an input image of an object. I don't want to make it complex; the input images will be as simple as the training images.
I asked similar questions before, and someone suggested using the average value of a 20x20 neighborhood around each keypoint. I tried it, but it does not seem to work with the ETH-80 images because of the different views. That is why I asked another question.
SURF or SIFT. Look for interest point detectors. A MATLAB SIFT implementation is freely available.
Update: Object Recognition from Local Scale-Invariant Features
SIFT and SURF features consist of two parts, the detector and the descriptor. The detector finds the point in some n-dimensional space (4D for SIFT), while the descriptor is used to robustly describe the surroundings of said points. The latter is increasingly used for image categorization and identification in what is commonly known as the "bag of words" or "visual words" approach. In the most simple form, one can collect all data from all descriptors from all images and cluster them, for example using k-means. Every original image then has descriptors that contribute to a number of clusters. The centroids of these clusters, i.e. the visual words, can be used as a new descriptor for the image. The VLfeat website contains a nice demo of this approach, classifying the Caltech-101 dataset:
http://www.vlfeat.org/applications/apps.html#apps.caltech-101
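A condensed MATLAB sketch of that pipeline using VLFeat's vl_sift and vl_kmeans; the vocabulary size K, the images cell array, and the RGB-to-gray conversion are assumptions:
% Pass 1: collect SIFT descriptors from all training images.
K = 200;                                  % assumed vocabulary size
alldesc = [];
for i = 1:numel(images)                   % images: cell array of filenames
    I = single(rgb2gray(imread(images{i})));
    [~, d] = vl_sift(I);                  % d is 128 x num_keypoints
    alldesc = [alldesc, single(d)];       %#ok<AGROW>
end
words = vl_kmeans(alldesc, K);            % 128 x K visual words
% Pass 2: one normalized word histogram per image = the feature vector.
features = zeros(numel(images), K);
for i = 1:numel(images)
    I = single(rgb2gray(imread(images{i})));
    [~, d] = vl_sift(I);
    [~, a] = min(vl_alldist2(words, single(d)), [], 1);  % nearest word
    h = histcounts(a, 0.5:1:K + 0.5);
    features(i, :) = h / max(sum(h), 1);  % rows feed the neural network
end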