I was visualising wind velocity using glyphs and cloud water content at the same time. However, I noticed that the direction in which the clouds move does not match the direction the glyphs are pointing.
Below are the steps I used to create the output:
The data is a netCDF file with the wind variable arrays "ua" (eastward_wind_speed), "va" (northward_wind_speed), and "wa" (wind_vertical_velocity).
I used a Cell Data to Point Data filter to convert them to point data.
Then I combined these three arrays using a ParaView Calculator with the expression iHat*ua + jHat*va + kHat*wa.
Then I applied a Glyph filter to visualise the wind velocity.
The problem is that the clouds are moving to the left (east), which does not match the direction the glyphs are pointing (south).
What would be the possible reason for this error?
TIA
Update:
For anyone who might have the same problem:
I just solved it, and the glyphs make much more sense now.
Switched off the spherical coordinates option.
Added a Transform filter to scale down the vertical component.
Then applied the Contour and Glyph filters as usual.
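For reference, here is a rough pvpython sketch of the corrected pipeline. The file name, the vertical scale factor and the result array name are placeholders; the reader property name assumes the NetCDF CF reader, and the Glyph property names follow recent ParaView Python traces (roughly 5.6 and later), so older releases may spell some of them differently. The contour on cloud water content is omitted.

    # Rough pvpython sketch of the pipeline described above (placeholder values).
    from paraview.simple import *

    # NetCDF (CF) reader with the spherical coordinates option switched off
    # (the 'Spherical Coordinates' checkbox in the GUI).
    reader = OpenDataFile('wind.nc')              # placeholder file name
    reader.SphericalCoordinates = 0

    # Convert the cell arrays to point data so the Calculator and Glyph see them.
    pointData = CellDatatoPointData(Input=reader)

    # Combine the three wind components into one vector array.
    calc = Calculator(Input=pointData)
    calc.Function = 'iHat*ua + jHat*va + kHat*wa'
    calc.ResultArrayName = 'wind'

    # Scale down the vertical axis so it does not dominate the view.
    transform = Transform(Input=calc)
    transform.Transform.Scale = [1.0, 1.0, 0.01]  # placeholder scale factor

    # Glyph the wind vectors.
    glyph = Glyph(Input=transform, GlyphType='Arrow')
    glyph.OrientationArray = ['POINTS', 'wind']
    glyph.ScaleArray = ['POINTS', 'wind']

    Show(glyph)
    Render()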
There are two things to consider.
Some weather agencies use the convention that wind direction is the direction from which the wind is blowing; however, there are agencies that report the direction toward which it is blowing.
You are probably not using the wind at the same height as the clouds.
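If the convention turns out to be the issue, the fix is just a 180-degree flip of the direction angle; a trivial Python sketch:

    def flip_wind_convention(direction_deg):
        """Convert between 'blowing from' and 'blowing to' wind directions (degrees)."""
        return (direction_deg + 180.0) % 360.0

    print(flip_wind_convention(270.0))   # a wind from the west (270) blows toward 90 (east)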
Related
I'm new to Swift, and I'm trying to build a sky map app like the application "Star Chart".
I already have a sky map image from NASA mapped onto an SCNSphere, and I have placed a camera node at the center of the sphere so it looks like a 360-degree view. I also use the accelerometer to check which direction the camera is looking.
I know that a sky map app like "Star Chart" doesn't need the internet to update its data. So now the biggest problem is that I don't know how to orient my sky map according to the user's current time and location.
Any good advice and help? Thanks in advance! I have tried very hard to find related information but have been stuck here for three weeks.
You just need to rotate your map by time + longitude around Earth's rotation axis, and by latitude around the axis at longitude = 90 degrees, with Earth placed at the center of your sphere. For stars the offset does not matter, so you can ignore the Sun-Earth distance and even Earth's radius.
The time rotation must combine the daily and yearly rotations. On top of that, you have to apply precession and nutation if you want higher precision.
Of course the stars are moving too, so if you need really high precision and/or a long time span to cover (hundreds or thousands of years), this approach is not good and you should use a stellar catalog with the proper motions implemented.
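To make that concrete, here is a rough Python sketch of the usual math: convert each star's equatorial coordinates (RA/Dec) to local horizontal coordinates (altitude/azimuth) from the observer's latitude, longitude and time. The sidereal-time formula is the standard low-precision approximation and ignores precession and nutation.

    import math
    from datetime import datetime, timezone

    def local_sidereal_time(utc, lon_deg):
        """Approximate local sidereal time in degrees (low-precision formula)."""
        j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)   # J2000.0 epoch
        d = (utc - j2000).total_seconds() / 86400.0             # days since J2000
        gmst = 280.46061837 + 360.98564736629 * d               # Greenwich mean sidereal time
        return (gmst + lon_deg) % 360.0                         # east longitude is positive

    def radec_to_altaz(ra_deg, dec_deg, lat_deg, lon_deg, utc):
        """Convert equatorial (RA/Dec) to horizontal (altitude/azimuth), all in degrees."""
        ha = math.radians(local_sidereal_time(utc, lon_deg) - ra_deg)   # hour angle
        dec, lat = math.radians(dec_deg), math.radians(lat_deg)

        alt = math.asin(math.sin(dec) * math.sin(lat) +
                        math.cos(dec) * math.cos(lat) * math.cos(ha))
        cos_az = ((math.sin(dec) - math.sin(alt) * math.sin(lat)) /
                  (math.cos(alt) * math.cos(lat)))
        az = math.acos(max(-1.0, min(1.0, cos_az)))
        if math.sin(ha) > 0:                     # azimuth measured from north, eastward
            az = 2.0 * math.pi - az
        return math.degrees(alt), math.degrees(az)

    # Example: Sirius (RA 101.287, Dec -16.716) as seen from London right now.
    print(radec_to_altaz(101.287, -16.716, 51.5, -0.13, datetime.now(timezone.utc)))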
For more info see related:
How to draw sky chart
Plotting a star chart efficiently
If you want to use a catalog and real colors, then you will also need:
Star B-V color index to apparent RGB color
simplified atmospheric scattering GLSL shader
And finally, here are some hints for such applications:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
Hi everyone.
I'm trying to record a person's movement frame by frame using the Microsoft Kinect API. For that I'm saving all the joint positions, and I would also like to get the direction in which each joint is moving. I've seen that the API has something about joint orientation with quaternions, but I don't know how to use it to get the direction. Or should I simply calculate the direction from the coordinates?
Thanks
Thanks to the answer from Carmine Si - MSFT (Microsoft):
"To determine the direction a joint is travelling, you should just calculate the vector based on the point locations from frame to frame. Typically the other values are for mapping your skeleton between different coordinate spaces so you can do things like the Avateering sample."
I'm working on an iPhone robot that would be moving around. One of the challenges is estimating distance to objects, since I don't want the robot to run into things. I saw some very expensive (~$1000) laser rangefinders and would like to emulate one using the iPhone.
I have one or two camera feeds and two laser pointers. The laser pointers are mounted about 6 inches apart, at an angle. The angle of the lasers relative to the cameras is known, and the angle of the cameras to each other is known.
The lasers point ahead of the cameras, creating two dots in the camera feed. Is it possible to estimate the distance to the dots from the distance between the dots in the camera image?
The lasers form a trapezoid from the laser mount to the wall:

       /   wall    \
      /             \
     /  laser mount  \
As the laser mount gets closer to the wall, the points should be moving further away from each other.
Is what I'm talking about feasible? Has anyone done something like that?
Would I need one or two cameras for such a calculation?
If you just don't want to run into things, rather than have an accurate idea of the distance to them, then you could go "dambusters" on it and just detect when the two points become one - this would be at a known distance from the object.
For calculation, it is probably cheaper to have four lasers instead, in two pairs, each pair at a different angle, one pair above the other. Then a comparison between the relative differences of the dots would probably let you work out a reasonably accurate distance. Math overflow for that one, though.
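For the single-pair math, here is a rough pinhole-camera sketch. It assumes the two lasers converge symmetrically toward the camera axis, a flat and roughly perpendicular wall, and a known focal length in pixels; the image separation of the dots is then about f * (b - 2 * d * tan(theta)) / d, which can be inverted for d. All numbers are made-up placeholders.

    import math

    def distance_from_dot_separation(pixel_sep, baseline_m, half_angle_deg, focal_px):
        """Estimate the distance d to a flat wall from the pixel separation of the dots.

        Inverts pixel_sep ~ focal_px * (baseline_m - 2*d*tan(theta)) / d for a pair
        of converging lasers mounted baseline_m apart around the camera.
        """
        t = math.tan(math.radians(half_angle_deg))
        return focal_px * baseline_m / (pixel_sep + 2.0 * focal_px * t)

    # Made-up numbers: 6 in (0.1524 m) baseline, 2-degree convergence per laser,
    # ~1000 px focal length, dots measured 80 px apart -> about 1 m.
    print(distance_from_dot_separation(80.0, 0.1524, 2.0, 1000.0))

The "dambusters" crossover mentioned above is the special case where the dots merge, at d = b / (2 * tan(theta)).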
In theory, yes, something like this can work. Google "light striping" or "structured light depth measurement" for some good discussions of using this sort of idea on a larger scale.
In practice, your measurements are likely to be crude. There are a number of factors to consider: the camera intrinsic parameters (focal length, etc) and extrinsic parameters will affect how the dots appear in the image frame.
With only two sample points (note that structured light methods use lines, etc), the environment will present difficulties for distance measurement. Surfaces that are directly perpendicular to the floor (and direction of travel) can be handled reasonably well. Slopes and off-angle walls may be detectable, but you will find many situations that will give ambiguous or incorrect distance measures.
Hi guys, I am doing some work on iOS that requires OpenGL ES. I now have a bunch of squares, cubes and triangles on the screen, and some of these geometries might overlap. Any ideas/approaches for touch detection?
Regards
To follow up on the answer already given, squares, cubes and triangles are convex shapes so you can perform ray-object intersection quite easily, even directly from the geometry rather than from the mathematical description of the perfect object.
You're going to need to be able to calculate the distance of a point from the plane and the intersection of a ray with the plane. As a simple test you can implement yourself very quickly, for each polygon on the convex shape work out the intersection between the ray and the plane. Then check whether that point is behind all the planes defined by polygons that share an edge with the one you just tested. If so then the hit is on the surface of the object — though you should be careful about coplanar adjoining polygons and rounding errors.
Once you've found a collision you can easily get the length of the ray to the point of collision. The object with the shortest distance is the one that's in front.
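Here is a rough sketch of that test in plain Python. Faces are (outward normal, point on the plane) pairs, and for a convex solid it is equivalent, and simpler, to check the hit point against every other face rather than only the edge-adjacent ones; the cube at the end is purely hypothetical.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def ray_hit_convex(origin, direction, faces, eps=1e-6):
        """Smallest ray parameter t at which the ray hits the convex solid, or None."""
        best_t = None
        for i, (n, p) in enumerate(faces):
            denom = dot(n, direction)
            if abs(denom) < eps:
                continue                              # ray parallel to this face plane
            t = dot(n, [pc - oc for pc, oc in zip(p, origin)]) / denom
            if t < 0:
                continue                              # plane lies behind the ray origin
            hit = [o + t * d for o, d in zip(origin, direction)]
            behind_all = all(dot(m, [h - qc for h, qc in zip(hit, q)]) <= eps
                             for j, (m, q) in enumerate(faces) if j != i)
            if behind_all and (best_t is None or t < best_t):
                best_t = t
        return best_t

    # Hypothetical unit cube centred at the origin, hit by a ray coming down -z.
    cube = [(( 1, 0, 0), ( 0.5, 0, 0)), ((-1, 0, 0), (-0.5, 0, 0)),
            (( 0, 1, 0), ( 0, 0.5, 0)), (( 0, -1, 0), ( 0, -0.5, 0)),
            (( 0, 0, 1), ( 0, 0, 0.5)), (( 0, 0, -1), ( 0, 0, -0.5))]
    print(ray_hit_convex((0.1, 0.1, 5.0), (0, 0, -1), cube))   # ~4.5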
If that's fast enough then great, otherwise you'll probably want to look into partitioning the world or breaking objects down to their silhouettes. Convex objects are really simple — consider all the edges that run between one polygon and the next. If exactly one of those polygons is front-facing then the edge is part of the silhouette. All the silhouette edges together can be projected to a convex 2D shape on the view plane. You can then test touches by performing a 2D point-in-polygon test against that.
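That final 2D test is a standard ray-crossing point-in-polygon check; a small sketch, with a hypothetical projected silhouette:

    def point_in_polygon(px, py, polygon):
        """Ray-crossing test: is (px, py) inside the 2D polygon given as (x, y) pairs?"""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # Count edges that straddle the horizontal line through py
            # and cross it to the right of the test point.
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > px:
                    inside = not inside
        return inside

    # Hypothetical projected silhouette of a square, tested against a touch point.
    print(point_in_polygon(0.2, 0.3, [(0, 0), (1, 0), (1, 1), (0, 1)]))   # True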
A further common alternative that eliminates most of the maths is picking. You'd render the scene to an invisible buffer with each object appearing as a solid blob in a suitably unique colour. To test for touch, you'd just do a glReadPixels and inspect the colour.
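The bookkeeping for colour picking is just a reversible mapping between object IDs and 24-bit colours; a sketch of that part only (the offscreen render and the glReadPixels call are omitted):

    def id_to_rgb(object_id):
        """Encode a small integer ID as a unique 24-bit RGB colour."""
        return ((object_id >> 16) & 0xFF, (object_id >> 8) & 0xFF, object_id & 0xFF)

    def rgb_to_id(r, g, b):
        """Decode the colour read back under the touch point into the object ID."""
        return (r << 16) | (g << 8) | b

    # Render object i flat-shaded with id_to_rgb(i), read the touched pixel, decode:
    print(rgb_to_id(*id_to_rgb(42)))   # -> 42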
For the purposes of glu on the iPhone, you can grab SGI's implementation (as used by MESA). I've used its tessellator in a shipping, production project before.
I had that problem in the past. What I used was an implementation of gluUnProject that you can find on Google (it uses the inverse of the model-view-projection matrix and the viewport size). This allows you to map 2D screen coordinates to a 3D ray into the world. Then you can use this ray to intersect your objects and see which one it hits (or comes really close to hitting).
I do hope there are better ways of doing this, so I look forward to other answers as well!
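A sketch of that unproject step with plain numpy (no GL calls; column-vector convention, and the projection matrix at the end is just a made-up example):

    import numpy as np

    def unproject_ray(screen_x, screen_y, mvp, viewport):
        """Build a world-space ray from a 2D screen point, gluUnProject-style.

        mvp      -- 4x4 projection * modelview matrix (column-vector convention)
        viewport -- (x, y, width, height); screen_y measured from the top.
        """
        vx, vy, vw, vh = viewport
        # Screen -> normalized device coordinates in [-1, 1] (flip y).
        ndc_x = 2.0 * (screen_x - vx) / vw - 1.0
        ndc_y = 1.0 - 2.0 * (screen_y - vy) / vh

        inv = np.linalg.inv(mvp)
        near = inv @ np.array([ndc_x, ndc_y, -1.0, 1.0])   # point on the near plane
        far = inv @ np.array([ndc_x, ndc_y, 1.0, 1.0])     # point on the far plane
        near, far = near[:3] / near[3], far[:3] / far[3]   # perspective divide

        direction = far - near
        return near, direction / np.linalg.norm(direction)

    # Made-up example: identity modelview and a simple perspective projection.
    proj = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, -1.0, -0.2],
                     [0.0, 0.0, -1.0, 0.0]])
    print(unproject_ray(160, 240, proj, (0, 0, 320, 480)))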
Once you get the inverse model-view and cast your ray (vector), you still need to know whether the ray intersects your geometry. One approach would be to grab the depth (z in the view coordinate system) of the object's center and extend (stretch) your vector just that far. Then see whether the vector's "head" ends up within the volume of your object or not (you need the object's center and, e.g., its radius, if it's a sphere).
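A tiny sketch of that check in view-space coordinates, with a hypothetical centre and radius:

    import math

    def ray_reaches_sphere(ray_dir, center, radius):
        """Stretch a unit view-space ray to the depth of the object's centre and
        test whether its head lands inside the object's bounding sphere."""
        t = center[2] / ray_dir[2]        # scale so the ray's z matches the centre's depth
        head = [t * d for d in ray_dir]
        dist = math.sqrt(sum((h - c) ** 2 for h, c in zip(head, center)))
        return dist <= radius

    # Hypothetical object 5 units down -z with a bounding radius of 0.5.
    print(ray_reaches_sphere((0.01, 0.02, -1.0), (0.0, 0.0, -5.0), 0.5))   # True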
Maybe I'm asking this too soon in my research, but I'd better find out whether this is possible sooner rather than later.
Imagine I have the following square printed on a paper on top of a table:
The table is brown, so it does not match any of the colors in the square. Is there a way for me, from a common iPhone camera (non-stereo view), to figure out the distance and angle from which I'm looking at the square on the table?
In the end, what I'm looking for is to be able to draw a 3D square on top of this one using the camera image, but I'm not sure whether I can figure out the distance and position of the object in space using only a 2D image. Any hints are appreciated.
Short answer: http://weblog.bocoup.com/javascript-augmented-reality
Big answer:
First posterize, then vectorize. With the vectors in hand, you may need to do some math tricks to work out, from the vector positions, the perspective and then the camera position. (There is a tiny posterize sketch after the links below.)
Maybe these help:
www.pixastic.com/lib/docs/actions/posterize/
github.com/selead/cl-vectorizer
vectormagic.com/home
autotrace.sourceforge.net
www.scipy.org/PyLab
raphaeljs.com/
technabob.com/blog/2007/12/29/video-games-get-vectorized/
superuser.com/questions/88415/is-there-an-open-source-alternative-to-vector-magic
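A tiny Python sketch of the posterize step mentioned above: it just quantizes each colour channel to a few levels before vectorizing.

    def posterize(channel_value, levels=4):
        """Quantize one 0-255 channel value down to a small number of levels."""
        step = 255.0 / (levels - 1)
        return int(round(channel_value / step) * step)

    # Posterize one RGB pixel to 4 levels per channel.
    print([posterize(v) for v in (37, 128, 220)])   # -> [0, 170, 255]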
Oughta be possible. Scan the image for the red/blue/yellow pattern, then do edge detection to figure out how warped the squares are (they'll appear as parallelograms in anything but a straight-on view). Distance would depend on the camera's zoom setting and scan resolution. But basically you'd count how many pixels are visible in each of the squares, run that against the camera's specs, and you should be able to determine a rough distance.
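As a rough sketch of that last step, a pure pinhole-camera estimate (ignoring the perspective distortion discussed above; the focal length and marker size are made-up numbers):

    def rough_distance(real_width_m, width_in_pixels, focal_length_px):
        """Pinhole estimate: distance = focal length * real width / width in the image.

        Only reasonable for a roughly straight-on view; a tilted square appears
        as a parallelogram and needs a proper homography/pose estimate instead.
        """
        return focal_length_px * real_width_m / width_in_pixels

    # Made-up example: a 5 cm square spanning 100 px with a ~1400 px focal length.
    print(rough_distance(0.05, 100.0, 1400.0))   # ~0.7 m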