OpenGL 2D coordinates to Psychtoolbox (pixel) coordinates - matlab

I'm modifying some MATLAB code. It displays graphics using Psychtoolbox, which can basically create an on-screen window. The code I want to adapt uses both higher-level Psychtoolbox commands and lower-level OpenGL calls, as provided by the MATLAB OpenGL toolbox. I'm familiar with Psychtoolbox, but not at all familiar with OpenGL.
Coordinates in Psychtoolbox are in pixels and start at (0,0) in the top-left corner of the screen, increasing rightwards (x) and downwards (y).
I simply need the conversion between the Matlab implementation of OpenGL coordinates and Psychtoolbox's pixel-based ones. There are a few questions and answers and many resources online about this, but I am still confused.
For example, as far as I understand, OpenGL uses normalized coordinates that range between [-1, 1]. However, in the code I am adapting, something is nicely displayed despite y = -1.5.
So my questions are:
How do I convert between MATLAB's OpenGL coordinates and MATLAB's Psychtoolbox coordinates?
Can OpenGL coordinates in Matlab go beyond the [-1, 1] range?
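For reference, assuming the default projection (visible x and y running from -1 to +1 in normalized device coordinates), the standard viewport mapping to Psychtoolbox-style top-left-origin pixels is roughly the following sketch, with winWidth and winHeight taken from the Psychtoolbox window:

px = (ndcX + 1) / 2 * winWidth;    % NDC x = -1 -> 0 (left), +1 -> winWidth (right)
py = (1 - ndcY) / 2 * winHeight;   % NDC y = +1 -> 0 (top), -1 -> winHeight (bottom)

A custom projection matrix (e.g. one set up with glOrtho) can map any coordinate range onto that [-1, 1] cube, which is how a value like y = -1.5 can still end up on screen.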

Related

Point cloud on an STL surface, integrate per element

I have a cloud of points that lie randomly on a 3D object's surface. The object is a CAD model that can be saved as STL. The point cloud is obtained from ray tracing, each point representing the power of light absorbed when a ray partially reflects off the surface. I would like to visualize the absorbed intensity on the object, using ParaView.
So, input: [x, y, z, p] + STL. (x, y, z) are guaranteed to lie on the object surface, but are likely slightly off the STL due to it being an approximation of the real surface.
Desired output: colored STL image, with each surface element coloured according to the total absorbed power in that element divided by its area.
Optional: Ideally, the data should be smoothed, something like a "sliding average" or Gaussian blur.
Difficulty: The main problem I am facing, independent of using ParaView, is that I don't know the intensity, only the power. I can calculate the intensity myself, e.g. in Matlab, but I get poor Matlab graphics (compared to ParaView) and a very noisy image (because of random fluctuations of intensity between pixels due to the finite number of rays). ParaView seems to be doing magic; I hope to solve this problem with it.
Can I do the above with ParaView, without programming a new filter / with minimum programming?
I have just discovered ParaView, so please excuse a very novice question. Googling for an answer didn't help; hopefully I didn't miss it due to poor wording.
The Resample With Dataset filter should let you resample your point cloud onto the STL.
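If you do end up computing the per-element power density yourself in MATLAB after all, here is a minimal sketch, assuming pts is an N-by-3 matrix of point coordinates, p the N-by-1 absorbed powers, an STL readable with stlread (R2018b+), and the Statistics Toolbox for knnsearch; nearest-centroid binning is only a rough stand-in for exact point-to-facet assignment:

TR = stlread('model.stl');                 % triangulation object
F  = TR.ConnectivityList;                  % M-by-3 facet vertex indices
V  = TR.Points;                            % K-by-3 vertex coordinates
C  = (V(F(:,1),:) + V(F(:,2),:) + V(F(:,3),:)) / 3;   % facet centroids
e1 = V(F(:,2),:) - V(F(:,1),:);            % facet edge vectors
e2 = V(F(:,3),:) - V(F(:,1),:);
A  = 0.5 * vecnorm(cross(e1, e2, 2), 2, 2);            % facet areas
idx = knnsearch(C, pts);                   % nearest facet for each point
P   = accumarray(idx, p, [size(F,1) 1]);   % total power per facet
I   = P ./ A;                              % power density per facet
h = trisurf(F, V(:,1), V(:,2), V(:,3), 'FaceColor', 'flat', 'EdgeColor', 'none');
set(h, 'FaceVertexCData', I);              % one colour value per facet
colorbar

A crude version of the optional smoothing could then be to average each facet's value with those of its neighbouring facets.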

Understanding of openCV undistortion

I'm receiving depth images from a ToF camera via MATLAB. The drivers delivered with the ToF camera, which compute the x, y, z coordinates from the depth image, use OpenCV functions implemented in MATLAB via MEX files.
But later on I can't use those drivers anymore, nor use OpenCV functions, so I need to implement the 2D-to-3D mapping on my own, including the compensation of radial distortion. I already got hold of the camera parameters, and the computation of the x, y, z coordinates of each pixel of the depth image is working. Until now I have been solving the implicit equations of the undistortion via Newton's method (which isn't really fast...). But I want to implement the undistortion of the OpenCV function.
... and there is my problem: I don't really understand it, and I hope you can help me out there. How is it actually working? I tried to search through the forum, but haven't found any useful threads concerning this case.
greetings!
The equations of the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the documentation page related to camera calibration:
x' = X / Z,   y' = Y / Z
r² = x'² + y'²
x'' = x'·(1 + k1·r² + k2·r⁴ + k3·r⁶) / (1 + k4·r² + k5·r⁴ + k6·r⁶) + 2·p1·x'·y' + p2·(r² + 2·x'²)
y'' = y'·(1 + k1·r² + k2·r⁴ + k3·r⁶) / (1 + k4·r² + k5·r⁴ + k6·r⁶) + p1·(r² + 2·y'²) + 2·p2·x'·y'
u = fx·x'' + cx,   v = fy·y'' + cy
(source: opencv.org)
In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. And this is what is used behind the function undistortPoints, with tuned parameters making the optimization fast but a little inaccurate.
However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick. The trick is that, in order to obtain a valid intensity for each pixel of your destination image, you have to transform coordinates in the destination image into coordinates in the source image, and not the opposite as one would intuitively expect. In the case of lens distortion correction, this means that you actually do not have to invert the non-linear model, but just apply it.
Basically, the algorithm behind the function undistort is the following (a rough MATLAB sketch is given after the note below). For each pixel of the destination lens-corrected image:
Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
Apply the lens-distortion model, as displayed above, to obtain the distorted normalized coordinates (x'', y''),
Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.
Note that if you are interested in undistorting the depthmap image, you should use a nearest-neighbor interpolation, otherwise you will almost certainly interpolate depth values at object boundaries, resulting in unwanted artifacts.
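This is not OpenCV's actual implementation, but a minimal MATLAB sketch of the loop described above, assuming a 3x3 calibration matrix K with zero skew, radial coefficients k1, k2, k3, tangential coefficients p1, p2, and a grayscale source image src:

[H, W] = size(src);
[u_dst, v_dst] = meshgrid(0:W-1, 0:H-1);     % destination pixel grid
x = (u_dst - K(1,3)) / K(1,1);               % step 1: pixel -> normalized, via inv(K)
y = (v_dst - K(2,3)) / K(2,2);
r2 = x.^2 + y.^2;                            % step 2: apply the distortion model
radial = 1 + k1*r2 + k2*r2.^2 + k3*r2.^3;
xd = x.*radial + 2*p1*x.*y + p2*(r2 + 2*x.^2);
yd = y.*radial + p1*(r2 + 2*y.^2) + 2*p2*x.*y;
u_src = K(1,1)*xd + K(1,3);                  % step 3: normalized -> source pixels
v_src = K(2,2)*yd + K(2,3);
% step 4: sample the source image; use 'nearest' instead for depth maps
dst = interp2(u_dst, v_dst, src, u_src, v_src, 'linear', 0);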
The above answer is correct, but do note that the UV coordinates here are in screen space and centered around (0,0), as opposed to "real" UV coordinates.
Source: own re-implementation using Python/OpenGL. Code:
import numpy as np

def correct_pt(uv, K, Kinv, ds):
    # Promote pixel coordinates to homogeneous form [u, v, 1]
    uv_3 = np.stack((uv[:, 0], uv[:, 1], np.ones(uv.shape[0])), axis=-1)
    xy_ = uv_3 @ Kinv.T                      # pixel -> normalized coordinates
    r = np.linalg.norm(xy_[:, :2], axis=-1)  # radius from x, y only (not the 1)
    # Radial factor; ds follows OpenCV's [k1, k2, p1, p2, k3] layout
    coeff = 1 + ds[0] * r**2 + ds[1] * r**4 + ds[4] * r**6
    xy__ = xy_.copy()
    xy__[:, :2] *= coeff[:, np.newaxis]      # scale x and y; keep w = 1
    return (xy__ @ K.T)[:, 0:2]              # normalized -> pixel coordinates

Matlab: surface plot not working

I have a surface plot I'm trying to do. x is an 11-element vector, y a 300-element vector, and z a 300×11 matrix.
When I try to plot it like this:
surf(x, y, z)
The surface plot doesn't show up. The axes are there but there is no surface plot.
However, if for some reason I do a surface plot of a subset of the matrix like this:
surf(x, y(1:31), z(1:31,:))
Then it works and the plot shows up.
As soon as I increase the number in the brackets to 32 it stops working. If I change the range to 2:32 then it works, so it's nothing to do with the data, just the size of the matrices.
What's going on here? How do I fix it?
P.S. I'd attach the code, but it's a bit long and complex, and it imports .txt files to load into the x and y vectors.
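For what it's worth, surf(x, y, z) expects z to be length(y)-by-length(x), which the sizes above already satisfy; a synthetic reproduction of those sizes (with made-up data, since the original code isn't attached):

x = linspace(0, 1, 11);    % 11-element vector
y = linspace(0, 1, 300);   % 300-element vector
z = rand(300, 11);         % length(y)-by-length(x) matrix
surf(x, y, z)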
Sometimes, it can help to change Matlab's figure renderer, which is basically the backend that performs the drawing. Options are painters, zbuffer, and OpenGL.
Since it is a figure property, you can apply it to a specific figure, e.g.:
set(gcf(), 'Renderer', 'painters')
or update the default for all new figures (if always needed, you could put it in your user-specific startup.m):
set(0, 'DefaultFigureRenderer', 'painters')
Similarly, to get the current Renderer state, use get instead of set:
get(gcf(), 'Renderer')
Different renderers have different performance characteristics (e.g. the OpenGL renderer can use hardware acceleration, if supported), but also different quirks (in my experience, frame capturing using getframe() works with some renderers but not others when logged in via remote desktop). While I don't know the exact reason for your problem, it may be one of these weird quirks, so try changing the renderer.
From the Renderer property documentation:
Rendering method used for screen and printing.
Selects the method used to render MATLAB graphics. The choices are:
painters — The original rendering method used by MATLAB is faster when the figure contains only simple or small graphics objects.
zbuffer — MATLAB draws graphics objects faster and more accurately because it colors objects on a per-pixel basis and MATLAB renders only those pixels that are visible in the scene (thus eliminating front-to-back sorting errors). Note that this method can consume a lot of system memory if MATLAB is displaying a complex scene.
OpenGL — OpenGL is a renderer that is available on many computer systems. This renderer is generally faster than painters or zbuffer and in some cases enables MATLAB to access graphics hardware that is available on some systems.
Look at the change in the min/max values of the axes along the left side (y-axis) and the top (z-axis). I think the surface is still there, but it's just very, very small.
Try setting the axis afterwards like this:
axis([6E-6 8E-6 9.2E14 10E14 0.96 1.06 -1 1])
Note: the E-6 might be E-8, I can't really tell from the image...
This is based on the syntax: axis([xmin xmax ymin ymax zmin zmax cmin cmax])

Matlab: animation

I want to write a program which shows a visual animation of the orbit of a satellite in 3D space, together with the Earth's rotation.
I can write code which shows a visualisation of the orbit (simply comet3()). It is also possible to rotate the 3D model of the Earth.
But I can't merge these two programs.
I've seen some Youtube videos like "Satellite Orbit Analysis and Simulation (in MATLAB)". How has he done it?
Is there any special stackexchange site for Matlab questions?
You can see a demo of how to draw the Earth in 3D or 2D here:
Earth's Topography
To rotate an object such as a surface, you can use the function ROTATE. For example:
rotate(hsurf, [0 0 1], 20)  % rotates surface with handle hsurf around the z axis by 20 deg
In addition have a look at Orbit Determination Toolbox (ODTBX).
And yeah, the best MATLAB SE site is here at SO. Just add or search for the matlab tag.
UPDATE: Another beautiful Earth plot at FileExchange: http://www.mathworks.com/matlabcentral/fileexchange/25048
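One way to merge the two programs is to keep handles to the graphics objects and update them inside a single loop. A minimal sketch, using a plain sphere and a made-up circular orbit purely for illustration:

[xs, ys, zs] = sphere(50);                   % stand-in globe
hEarth = surf(xs, ys, zs, 'EdgeColor', 'none');
hold on; axis equal vis3d
t = linspace(0, 2*pi, 200);
orb = [2.5*cos(t); 2.5*sin(t); 0.5*sin(t)];  % illustrative orbit, not real dynamics
hSat = plot3(orb(1,1), orb(2,1), orb(3,1), 'ro', 'MarkerFaceColor', 'r');
for k = 1:numel(t)
    rotate(hEarth, [0 0 1], 1.0);            % spin the Earth a little each frame
    set(hSat, 'XData', orb(1,k), 'YData', orb(2,k), 'ZData', orb(3,k));
    drawnow
end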
Consider doing the graphical front-end in Java. MATLAB interfaces flawlessly with Java, and it's much easier to do GUI stuff in Java. If you don't know Java and you have time, start learning, it's worth the effort as a general purpose programming language that's everywhere, and it's an invaluable companion to MATLAB.
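As a small illustration of how direct that bridge is, a Java Swing window can be created straight from the MATLAB prompt (these are plain Java Swing classes, nothing MATLAB-specific):

jf = javax.swing.JFrame('Satellite view');   % construct a Java object directly
jf.setSize(400, 300);
jf.setVisible(true);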

Kinect (Microsoft Sdk) Skeleton (recorded) Data from pixel to 3d real world coordinates

I have a dataset of partial joints (right elbow, shoulder and wrist) taken from a fellow who acquired this data with OpenNI.
The joints' x and y coordinates are in pixels, while z is in mm. I have to convert them to real-world space to match them with data acquired by me (using the Microsoft SDK) for a gesture recognition application. I'm working in MATLAB.
Searching the web and papers, I found that a floor reference is necessary for the conversion, but I don't have one. So how could this conversion be done, possibly in MATLAB, and which reference should I pick? (Maybe the height of the Kinect from the floor?)
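For reference, when the intrinsic parameters of the depth camera are available, the usual pinhole back-projection needs no floor reference at all; a minimal sketch, assuming known focal lengths fx, fy and principal point cx, cy (in pixels):

X = (u - cx) .* z ./ fx;   % u, v: joint pixel coordinates; z: depth in mm
Y = (v - cy) .* z ./ fy;   % X, Y, Z come out in mm
Z = z;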
Here's a not-so-awesome solution:
Plot the 3D points you have from both data sets.
Look for a pose where the arm and the forearm seem to be making a similar pose (as L-shaped as possible).
Use these matched points to compute the transformation (see the sketch below).
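Once a few matched joint positions have been collected this way, the rigid transform can be estimated, for example with procrustes from the Statistics Toolbox. A minimal sketch, assuming A and B are N-by-3 matrices of corresponding 3D points from the two datasets:

% Disallow scaling and reflection to get a rigid (rotation + translation) fit
[d, Z, tform] = procrustes(A, B, 'scaling', false, 'reflection', false);
B_aligned = Z;   % B mapped into A's frame: Z = tform.b * B * tform.T + tform.c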