Polar 2D Interpolation - matlab

Say we are creating a calibration lookup table for a device, shown in the plot below. Theta represents phase and r represents magnitude. The calibration setpoints are shown as blue circles and are taken every N degrees of phase and every N values of magnitude. For every setpoint, we measure the actual device output and obtain the red coordinates, which describe the resulting phase and magnitude. Thus, for every blue setpoint, we observe the device outputting a red point.
The question now is: I want the device output to be the green circle with the orange ring. How do I calculate the setpoint (green circle) that I should set the device to in order to obtain the green/orange point on the output?
The issue I am having is that for every 2D setpoint (mag, phase), the resulting data is also 2D (mag, phase). In addition, magnitude and phase are not independent variables (if I fix phase and change only magnitude, the resulting output phase also changes).
So what basic math/logic should I use to perform the necessary interpolation?

How about treating this as a registration problem? For example, you could use an affine transformation as the model between the measured and calibrated points. For each cell (i.e., the 4 blue points in your figure), compute a least-squares estimate of the affine transformation between the blue and red points. Then, for new points, apply the corresponding transformation to get the green point you want. Here and here are some SO questions that discuss this. In addition, you might consider estimating and applying the transformation directly in magnitude/phase space.
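A minimal sketch in MATLAB (setPts, measPts, desired, and the sample values are all hypothetical; they stand for one cell's blue setpoints, red measurements, and the green/orange target, in Cartesian coordinates):

% Fit measPts ~= [setPts 1] * A, where A is a 3x2 affine model
setPts  = [0 0; 1 0; 0 1; 1 1];                         % hypothetical cell corners
measPts = [0.10 0.05; 1.05 0.10; 0.00 1.10; 1.10 1.15]; % measured outputs
A = [setPts, ones(size(setPts,1),1)] \ measPts;         % least-squares solve

% Invert the model to find the setpoint for a desired output
desired  = [0.6 0.7];                        % the green/orange target
setpoint = (desired - A(3,:)) / A(1:2,:)     % the green point to send to the device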

Related

Image Processing Q: Separate/segment an image

I need some help here on image processing. I'm using MATLAB and trying to segment the following figure based on the two major peaks (in yellow). Yellow means a higher value and blue means a low value (on the z-axis, or image values from 0 to 1 for your convenience). The ideal cut is roughly the line from point (1,75) to (120,105), but I want a systematic way to derive this rather than finding it by observation.
My intuition was to first identify the two peaks (based on this), and then classify each point/pixel in the figure to one of the two peaks (the metric being the shortest Euclidean distance to the edge of either peak).
I end up with the following figure.
As you can see, the cut is pretty much a straight line, with which I'm not quite satisfied. Maybe I can use the orientation of the peak circle and somehow tilt the line, but I'm not sure how to do so. Any clues? Thanks.
This is an image segmentation problem.
You can use a GMM (Gaussian Mixture Model) to model the image.
In your case the number of components will be 2.
After you model the image with this mixture, you can find, for each pixel, the probability that it belongs to the first or the second component.
Check:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/272162
http://www.mathworks.com/help/stats/cluster-data-from-mixture-of-gaussian-distributions.html
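A rough sketch of that in MATLAB (assuming img is your 2D image with values in [0, 1]; the [row, col, intensity] feature choice is just one option, and fitgmdist needs the Statistics and Machine Learning Toolbox):

% Build one feature vector per pixel: position plus intensity
[r, c] = ndgrid(1:size(img,1), 1:size(img,2));
X = [r(:), c(:), double(img(:))];

gm  = fitgmdist(X, 2);                 % fit a 2-component GMM
idx = cluster(gm, X);                  % hard label per pixel (1 or 2)
P   = posterior(gm, X);                % soft membership probabilities

imagesc(reshape(idx, size(img)));      % show the resulting segmentation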

Verify that camera calibration is still valid

How do you determine that the intrinsic and extrinsic parameters you have calculated for a camera at time X are still valid at time Y?
My idea would be:
1. Use a known calibration object (a chessboard) and place it in the camera's field of view at time Y.
2. Calculate the chessboard corner points in the camera's image (at time Y).
3. Define one of the chessboard corner points as the world origin and calculate the world coordinates of all remaining chessboard corners based on that origin.
4. Relate the coordinates from step 3 to the camera coordinate system.
5. Use the parameters calculated at time X to calculate the image points of the points from step 4.
6. Calculate the distances between the points from step 2 and the points from step 5.
Is that a clever way to go about it? I'd eventually like to implement it in MATLAB and later possibly OpenCV. I think I know how to do steps 1-2 and step 6. Maybe someone can give a rough implementation for steps 2-5? In particular, I'm unsure how to relate the "chessboard world coordinate system" to the "camera world coordinate system", which I believe I would have to do.
Thanks!
If you have a single camera you can easily follow the steps from this article:
Evaluating the Accuracy of Single Camera Calibration
For step 2, you can use the detectCheckerboardPoints function from MATLAB.
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
Assuming you are talking about stereo cameras: for stereo pairs, imagePoints(:,:,:,1) are the points from the first set of images and imagePoints(:,:,:,2) are the points from the second set. The output contains M [x, y] coordinates, each representing a detected square corner on the checkerboard. The number of points the function returns depends on boardSize, which indicates the number of squares detected. The function detects the points with sub-pixel accuracy.
As you can see in the following image, the points are estimated relative to the first point, which covers your third step.
[The image is from this page at MathWorks.]
You can consider point 1 the origin of your coordinate system (0,0). The directions of the axes are shown in the image, and you know the distance between the points in world coordinates, so it is just a matter of depth estimation.
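A rough sketch of steps 2-6 (assuming the Computer Vision Toolbox, a stored cameraParams estimated at time X, a hypothetical time-Y image 'checker.png', and the board's squareSize in world units; for simplicity this ignores lens distortion on the detected points):

% Steps 2-3: detect corners and build the board-relative world points
[imagePoints, boardSize] = detectCheckerboardPoints(imread('checker.png'));
worldPoints = generateCheckerboardPoints(boardSize, squareSize); % origin at corner 1

% Step 4: pose of the board in the camera coordinate system
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);

% Steps 5-6: reproject with the time-X parameters and measure the error
reproj = worldToImage(cameraParams, R, t, [worldPoints, zeros(size(worldPoints,1),1)]);
reprojErr = mean(sqrt(sum((imagePoints - reproj).^2, 2)))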
To find a transformation matrix between the points in the world CS and the points in the camera CS, you should collect a set of corresponding points and perform an SVD to estimate the transformation matrix.
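For a rigid transformation, one common SVD-based approach is the Kabsch method; a minimal sketch, assuming worldPts and camPts are corresponding N-by-3 point sets (hypothetical names):

muW = mean(worldPts, 1);  muC = mean(camPts, 1);     % centroids
H = (worldPts - muW)' * (camPts - muC);              % 3x3 cross-covariance
[U, ~, V] = svd(H);
R = V * diag([1, 1, sign(det(V*U'))]) * U';          % rotation with det(R) = +1
t = muC' - R * muW';                                 % translation
% so that camPts' is approximately R * worldPts' + t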
But, I would instead estimate the parameters of the camera again and compare them with the initial parameters from time X. This is easier if you have saved the images that were used when calibrating the camera at time X. By repeating the calibration process using those images you should get very similar results if the camera calibration is still valid.
Edit: Why do you need the set of images used in the calibration process at time X?
You had a set of images to do the calibration the first time, right? To recalibrate the camera you need to use a new set of images, but for checking the previous calibration you can use the previous images. If the parameters of the camera have changed, there will be an error between the re-estimation and the first estimation. This can be used for evaluating the validity of the calibration, not for recalibrating the camera.
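A minimal sketch of that comparison (assuming imagePoints holds corner detections stacked across the calibration images as M-by-2-by-numImages, worldPoints is the board layout, and paramsX is the stored time-X result; estimateCameraParameters is from the Computer Vision Toolbox):

paramsY = estimateCameraParameters(imagePoints, worldPoints);    % re-estimate
deltaK  = abs(paramsY.IntrinsicMatrix - paramsX.IntrinsicMatrix) % compare intrinsics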

Disparity calculation of two similar images in matlab

I have two images (both are exactly the same image) and I am trying to calculate the disparity between them using the sum of squared differences (SSD), and to reconstruct the disparity in 3D space.
Do I need to rectify the image before calculating disparity?
The following are the steps I have done so far for the disparity map computation (I have tried with and without rectification, but both return an all-zero disparity matrix).
For each pixel X in the left image:
- Take the pixels in the same row in the right image.
- Separate that row in the right image into windows.
- For each window:
  - Calculate the SSD between X and each pixel in that window.
  - Select the pixel in the window which gives the minimum SSD with X.
- Take the pixel with the minimum SSD among all windows as the best match to X.
Am I doing it correctly?
How can I visualise the 3D reconstruction of the disparity as a scatter plot in MATLAB?
Rectification guarantees that matches are found in the same row (for horizontally separated cameras). If you have doubts about the rectification of your images, you can compare rows by drawing horizontal lines between the horizontally separated images. If the lines hit the same features, you are fine; see the picture below, where the images are NOT rectified. The fact that they are distorted means lens distortion correction was applied, and rectification was attempted but not actually performed correctly.
Now, let's look at what you mean by "the same images". Did you mean images of the same object taken from different viewpoints? Note that if the images are literally the same (the same viewpoint), the disparity will be zero, as was noted in another answer. The definition of disparity (for horizontally separated cameras) is the shift (within the same row) between matching features. Disparity is related to depth (if the optical axes of the cameras are parallel) as d = f*B/z, where z is the depth, B is the baseline (the separation between the cameras), and f is the focal length. You can rearrange this into d/B = f/z, which says that disparity relates to the camera separation as the focal length relates to the distance; in other words, the ratios of the horizontal and distance measures are equal.
If your images are taken with the cameras shifted horizontally, the disparity (in a simple correlation algorithm) is typically calculated with 5 nested loops (a MATLAB sketch follows the outline):
loop over image1 y
  loop over image1 x
    loop over disparity d
      loop over correlation window y
        loop over correlation window x
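A minimal, unoptimized MATLAB version of that outline (im1 and im2 are assumed rectified grayscale doubles; the window radius r and maximum disparity dMax are hypothetical parameters; the two innermost window loops are vectorized here):

[h, w] = size(im1);
D = zeros(h, w);                               % best disparity per pixel
for y = 1+r : h-r                              % image1 rows
  for x = 1+r+dMax : w-r                       % image1 columns
    best = inf;
    for d = 0 : dMax                           % candidate disparities
      ssd = sum(sum((im1(y-r:y+r, x-r:x+r) ...
                   - im2(y-r:y+r, x-d-r:x-d+r)).^2));
      if ssd < best, best = ssd; D(y,x) = d; end
    end
  end
end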
The disparity, or D_best, gives you the best matching window between image1 and image2 across all possible values of d. Finally, scatter plots are for 3D point clouds, while disparity itself is better visualized as a heat map. If you need to visualize a 3D reconstruction (that is, a 3D point cloud), calculate X, Y, Z as:
Z = f*B/D, X = u*Z/f, Y = v*Z/f, where u and v are related to the column and row of a w-by-h image as
u = col - w/2 and v = h/2 - row; that is, u and v form an image-centered coordinate system.
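A quick sketch of that reconstruction and its scatter plot (assuming a disparity map D in pixels, a focal length f in pixels, and a baseline B; zero-disparity pixels are skipped):

[col, row] = meshgrid(1:w, 1:h);
u = col - w/2;                      % image-centered x
v = h/2 - row;                      % image-centered y
valid = D > 0;                      % avoid dividing by zero disparity
Z = f * B ./ D(valid);
X = u(valid) .* Z / f;
Y = v(valid) .* Z / f;
scatter3(X, Y, Z, 1, Z, '.');       % point cloud, colored by depth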
If your two images are exactly the same, then the disparity would be 0 for every pixel. You either have to use two separate cameras to take the images, or take them with a single camera from two different locations. The best way to do 3D reconstruction is to use a calibrated stereo pair of cameras. Here is an example of how to do that using the Computer Vision System Toolbox for MATLAB.

How to make "well" a ridge-shape from a given 2d line? (gaussian, matlab)

My goal is to make a ridge (mountain)-like shape from a given line. For that purpose, I applied a Gaussian filter to the given line. In the example below, one line is vertical and one has some slope. (Here, background values are 0 and line pixel values are 1.)
Given line:
Ridge shape:
When I applied the Gaussian filter, the peak heights came out different. I guess this results from rasterization: the image matrix itself is a discrete integer space, the Gaussian filter is not exactly circular (it is an s-by-s matrix), and the two lines also suffer from rasterization.
How can I get two nice-looking ridges (mountains) with the same peak height?
Is there a more appropriate way to apply the filter?
Should I make a larger canvas (image matrix) and then reduce the canvas by interpolation? Is that a good way?
Moreover, I would appreciate any suggestion for making ridges with a specific peak height. With a Gaussian filter, all we can decide is the size and sigma of the filter, and the peak height varies based on those parameters.
For information, image matrix size is 250x250 here.
You can give the distance transform a try. Your image is a binary image (having only two types of values, 0 and 1), so you can generate a similar effect with the distance transform.
% Create an image similar to yours
img = false(250,250);
img(sub2ind(size(img), 180:220, linspace(20,100,41))) = true;  % sloped line
img(1:200,150) = true;                                         % vertical line

% Distance transform
distImg = bwdist(img);
distImg(distImg > 5) = 0;  % 5 is set manually to achieve results similar to yours
distImg = 5 - distImg;     % get high values for the pixels inside the tube,
                           % as shown in your figure
distImg(distImg == 5) = 0; % make background pixels zero (note this also zeroes
                           % the line pixels themselves, which is why the peak
                           % ends up slightly below 5)

% Plotting
surf(1:size(img,2), 1:size(img,1), double(distImg));
To get a ridge with a certain peak height, change the threshold of 5 to a different value. If you set it to 10, the peak height equals the next largest value present in the distance transform matrix. In the cases of 5 and 10, I found the peaks to be around 3.5 and 8.
Again, if you want the peaks to be exactly 5 or 10, you can multiply the distance transform matrix by a normalization factor, as follows.
normalizationFactor = (newValue - minValue)/(maxValue - minValue) % self-explanatory
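For instance, since the background minimum is 0 here, rescaling the ridge so its peak is exactly a target height could be as simple as (targetHeight is a hypothetical name):

targetHeight = 5;                                     % desired peak height
distImg = distImg * (targetHeight / max(distImg(:))); % rescale so the max is exact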
The only disadvantage I see is that I don't get a graph as smooth as yours. I tried with a Gaussian filter too, but did not get a smooth graph.
My result:

Artifacts in image after super resolution using delaunay triangulation in MATLAB

I have to do super resolution on two low-resolution images to obtain a high-resolution image.
The 2nd image is taken as the base image and the first image is registered with respect to it. I used the SURF algorithm for image registration. A Delaunay triangulation is constructed over the points using MATLAB's built-in delaunay function. The HR grid is constructed for a prespecified resolution enhancement factor R. The HR algorithm for interpolating the pixel values on the HR grid is summarized next.
HR Algorithm Steps:
1. Construct the Delaunay triangulation over the set of scattered vertices in the irregularly sampled raster formed from the LR frames.
2. Estimate the gradient vector at each vertex of the triangulation by calculating the unit normal vectors of the neighbouring triangles using the cross product. The vertex normal is the sum of each neighbouring triangle's unit normal multiplied by its area, divided by the total area of all neighbouring triangles.
3. Approximate each triangle patch in the triangulation by a continuous and, possibly, continuously differentiable surface, subject to some smoothness constraint. Bivariate polynomials or splines could be the approximants, as explained below.
4. Set the resolution enhancement factor along the horizontal and vertical directions and then calculate the pixel value at each regularly spaced HR grid point to construct the initial HR image (a simplified sketch of this interpolation step follows).
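This is not the gradient-based scheme above, but as a simpler baseline for the same interpolation step, a Delaunay-backed scattered interpolation could look like this (assuming pts is an N-by-2 list of registered LR pixel locations from both frames, vals their gray values, [h w] the LR size, and R the enhancement factor; all hypothetical names):

F = scatteredInterpolant(pts(:,1), pts(:,2), vals, 'natural'); % Delaunay-based
[xq, yq] = meshgrid(linspace(1, w, R*w), linspace(1, h, R*h)); % HR grid
hrImg = F(xq, yq);                                             % initial HR image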
Now I have the results shown below.
For one kind of data set I get a result with a few pixels black and white in a random manner; for the other type I get thin parallel lines all over the image after super resolution. The results are attached.
Can anyone tell me the reason? I have figured it may be demosaicing, but I am not sure because I don't have much understanding of it. Moreover, could it be a bug in my code? It behaves differently for different images. I have increased the size by a factor of two with the super resolution.