Artifacts in image after super resolution using Delaunay triangulation in MATLAB

I have to perform super resolution on two low-resolution images to obtain a high-resolution image.
The second image is taken as the base image and the first image is registered with respect to it; I used the SURF algorithm for the registration. A Delaunay triangulation is constructed over the matched points using MATLAB's built-in delaunay function. The HR grid is constructed for a prespecified resolution enhancement factor R. The algorithm for interpolating the pixel values on the HR grid is summarized next.
HR algorithm steps:
1. Construct the Delaunay triangulation over the set of scattered vertices in the irregularly sampled raster formed from the LR frames.
2. Estimate the gradient vector at each vertex of the triangulation by calculating the unit normal vector of each neighbouring triangle with the cross-product method. The vertex normal is the area-weighted average of these: the sum of each neighbouring triangle's unit normal multiplied by its area, divided by the sum of the areas of all neighbouring triangles (see the sketch after this list).
3. Approximate each triangle patch in the triangulation by a continuous and, possibly, continuously differentiable surface, subject to some smoothness constraint. Bivariate polynomials or splines could be the approximants.
4. Set the resolution enhancement factor along the horizontal and vertical directions and then calculate the pixel value at each regularly spaced HR grid point to construct the initial HR image.
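A minimal MATLAB sketch of step 2, assuming scattered LR samples (x, y, z) where z is the pixel intensity; the variable names and test data here are illustrative, not taken from the original code:
x = rand(200, 1);  y = rand(200, 1);
z = sin(4*x) .* cos(4*y);                 % hypothetical intensity samples
tri = delaunay(x, y);                     % step 1: Delaunay triangulation
P  = [x, y, z];
v1 = P(tri(:,2), :) - P(tri(:,1), :);     % triangle edge vectors
v2 = P(tri(:,3), :) - P(tri(:,1), :);
fn = cross(v1, v2, 2);                    % face normals (norm = 2 * triangle area)
fa = 0.5 * sqrt(sum(fn.^2, 2));           % triangle areas
fn = fn ./ sqrt(sum(fn.^2, 2));           % unit face normals
vn = zeros(numel(x), 3);                  % area-weighted vertex normals
wa = zeros(numel(x), 1);                  % summed area of neighbouring triangles
for k = 1:size(tri, 1)
    for j = 1:3
        vn(tri(k,j), :) = vn(tri(k,j), :) + fa(k) * fn(k, :);
        wa(tri(k,j))    = wa(tri(k,j))    + fa(k);
    end
end
vn = vn ./ wa;                            % divide by the summed neighbour area
vn = vn ./ sqrt(sum(vn.^2, 2));           % renormalize to unit length
The surface gradient at a vertex can then be read off its normal as (dz/dx, dz/dy) ≈ (-n_x/n_z, -n_y/n_z).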
The results are shown below.
For one kind of data set the result has a few black and white pixels scattered at random; for the other type I get thin parallel lines all over the image after super resolution (results attached).
Can anyone tell me the reason? I suspect it may be demosaicing, but I am not sure because I don't have much understanding of it. It could also be a bug in my code, but then why does it behave differently for different images? I have enlarged the image by a factor of two with the super resolution.

Related

Cell colony survival mapping in a particular spatial pattern

I am attempting to spatially map cell survival in a scanned image of a cell flask. Quick background: the cells have received a high dose of irradiation (protons/X-rays) delivered through a grid, so that some regions are shielded from the irradiation whereas others are not. After scanning such cell colonies, the images are fed into a segmentation algorithm (which I developed in MATLAB) that provides the centroid coordinates c_i = (x_i, y_i) of each detected viable colony.
I have done this type of assessment for grid ‘stripes’, where I have counted colonies within a band along a single dimension (x) and tested for different band widths Δx (as shown in the left figure below). However, my issue is for grid ‘holes’ (see right figure below) – how can I perform the same type of assessment for cell colony survival in two dimensions (x and y) given the centroid coordinates? Do I have to “think” radially?
Thank you in advance for any guidance or help to this problem.
You are heading in the right direction. In the left-hand image the variation is along the x-axis and you are using a new axis (the y-axis) for plating efficiency.
Similarly, for the grid of holes you will have to introduce a new axis: the z-axis. Suppose your image I is 500x500 and each grid cell is 50x50. You would then create a 10x10 grid G, where each cell of G is the count of centroids falling in the corresponding 50x50 grid cell of I.
Since visualizing a 3D chart is difficult, people often use images instead, where the z-value becomes the image intensity (the grayscale value of a grayscale image). Make sure to normalize your z-axis values to the [0,1] or [0,255] range when using images as your visualization tool.
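A minimal sketch of this binning, assuming the centroids are stored as an N-by-2 array C = [x, y] and using the 500x500 image with 50x50 cells from above (the random centroids are placeholders):
C = 500 * rand(200, 2);                           % hypothetical centroid coordinates
edges = 0:50:500;                                 % grid-cell boundaries in pixels
G = histcounts2(C(:,2), C(:,1), edges, edges);    % 10x10 counts (rows = y, cols = x)
Gn = G / max(G(:));                               % normalize to [0,1] for display
imagesc(Gn); axis image; colorbar;                % counts shown as image intensity
(histcounts2 needs R2015b or later; in older releases, accumarray over discretized coordinates does the same job.)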

Polar 2D Interpolation

Say we are creating a calibration lookup table for a device, shown in the plot below. The theta represents different phase values, and the r represents different magnitude values. The calibration setpoints are shown in blue circles, and are taken at every N degrees of phase and N values of magnitude. For every setpoint, we measure the actual device output and obtain the red coordinates, which describe the resulting phase and magnitude. Thus for every blue setpoint, we observe the device outputting red points.
The question now is, I want to set the device to a value of the green circle with orange ring. How do I calculate what the setpoint should be (green circle) to set the device to in order to obtain green/orange on the output?
The issue I am having is that for every 2D setpoint (mag, phase), the resultant data is 2D (mag, phase). In addition, magnitude and phase are not independent variables (fixing phase and changing only magnitude, the resulting phase output does change).
So what basic math/logic should I use to perform the necessary interpolation?
How about treating this like a registration problem? For example, you could use an affine transformation as the model between the measured and calibration points. For each cell (i.e., the 4 blue points in your figure), compute a least-squares estimate of the affine transformation between the blue and red points. Then, for a new point, apply the corresponding transformation to get the green point (setpoint) you want. Here and here are some SO questions that discuss this. In addition, you might consider estimating and applying the transformation directly in magnitude/phase space.
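A minimal sketch of the per-cell fit, with made-up points in Cartesian coordinates (convert from magnitude/phase with pol2cart/cart2pol as needed); the numbers are placeholders, not real calibration data:
blue = [1.0 0.0; 1.2 0.0; 1.0 0.2; 1.2 0.2];      % hypothetical setpoints for one cell
red  = blue * [0.95 0.05; -0.04 0.98] + 0.01;     % hypothetical measured outputs
% Least-squares affine map from measured (red) space back to setpoint (blue) space:
% [x_set, y_set] = [x_meas, y_meas, 1] * M, with M a 3x2 matrix.
A = [red, ones(size(red, 1), 1)];
M = A \ blue;
desired  = [1.1, 0.1];                            % the output you want (green/orange)
setpoint = [desired, 1] * M;                      % setpoint expected to produce it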

Summing squared area changes from Voronoi cells given area of triangles in 3D?

I have a list of triangles in 3D that form a surface (i.e. a triangulation). The structure is a deformed triangular lattice. I want to know the change in area of the deformed hexagons of the Voronoi tessellation of the lattice with respect to the rest area of the undeformed lattice cells (i.e. with respect to a regular hexagon). In fact, I really want the sum of the squared change in area of the hexagonal unit cells associated with those triangles.
Background/Math details:
I'm approximating a curved elastic sheet by a triangular lattice. One way to tune the Poisson ratio (elastic constant) of the sheet is to add a 'volumetric' strain energy term to the energy. I'm trying to compute the 'volumetric' strain energy of a deformed, elastic, triangular lattice, defined as U_volumetric = 1/2 T (e_v)^2, where e_v = deltaV/V is determined by the change in area of a Voronoi cell with respect to its reference area, which is a known constant.
Reference: https://www.researchgate.net/publication/265853755_Finite_element_implementation_of_a_non-local_particle_method_for_elasticity_and_fracture_analysis
Want:
Sum[ (DeltaA/ A).^2 ] over all hexagonal cells.
My data is stored in the variables:
xyz = [ x1,y1,z1; x2,y2,z2; etc] %the vertices/particles in 3D
TRI = [ vertex0, vertex1, vertex2; etc] % where vertex0 is the row of xyz for the particle sitting at vertex 0 of the first triangle
NeighborList = [ p1n1, p1n2, p1n3, p1n4, p1n5,p1n6 ; p2n1...]
% where p1n1 is particle 1's first nearest neighbor, given as a row index into xyz. For example, xyz(NeighborList(1,1),:) returns the xyz location of particle 1's first neighbor.
AreaTRI = [ areaTRI1; areaTRI2; etc]
I am writing this in MATLAB.
As of now, I am approximating the amount of area attributed to each vertex as 1/3 of each incident triangle's area, summed over the 6 nearest-neighbor triangles (a short sketch of this is given below). But a Voronoi cell area will NOT be exactly equal to Sum_(i=0,1,...,5) 1/3*areaTRI_i, so this is a poor approximation. See the image in the link above, which I think makes this clearer.
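For reference, a short sketch of that 1/3-area approximation using the variables described above (it assumes xyz, TRI and AreaTRI are already in the workspace):
nV = size(xyz, 1);                                % number of vertices/particles
% each triangle donates a third of its area to each of its three vertices
vertexArea = accumarray(TRI(:), repmat(AreaTRI(:)/3, 3, 1), [nV, 1]);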
You can do this using the DUALMESH submission on the MATLAB File Exchange:
DUALMESH is a toolbox of mesh processing routines that allow the construction of "dual" meshes based on underlying simplicial triangulations. Support is provided for various planar and surface triangulation types, including non-Delaunay and non-manifold types.
Simply use the following commands to generate a vector areas containing the areas of all the dual elements. The ordering corresponds to the nodes in xyz.
[cp,ce,pv,ev] = makedual2(xyz, TRI);
[~,areas(cp(:,1))] = geomdual2(cp,ce,pv,ev);
You might want to have a look at the boundary areas using:
trisurf(TRI, xyz(:,1), xyz(:,2), areas);
The dual cells of boundary nodes are theoretically unbounded and would thus have infinite area. This submission handles that differently, however: instead of an unbounded cell it returns the intersection of the unbounded cell with the original mesh.
Also note that your question is not well defined if the mesh you are working with is not planar, as the dual mesh cells will be planar and won't scale the same way as the triangles. So this solution will probably only work correctly if your mesh is really 2D. (From what I can tell, the paper you mention also covers only the 2D case.)
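As a follow-up sketch, once areas holds the dual-cell areas (ordered like the nodes in xyz), the sum asked for in the question is a one-liner. The rest area A0 and the modulus T are assumptions here; for a triangular lattice with rest spacing a0 the undeformed hexagon has area sqrt(3)/2*a0^2:
a0 = 1;                                           % rest lattice spacing (assumed)
A0 = sqrt(3)/2 * a0^2;                            % rest area of one Voronoi hexagon
T  = 1;                                           % volumetric modulus (placeholder)
keep = true(size(areas));                         % optionally mask out boundary cells here
E  = sum( ((areas(keep) - A0) ./ A0).^2 );        % Sum[ (DeltaA/A).^2 ]
U_volumetric = 0.5 * T * E;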

Uncalibrated multi-view reconstruction depth estimation

I'm trying to make a 3D reconstruction from a set of uncalibrated photographs in MATLAB. I use SIFT to detect feature points and matches between images. I want to make a projective reconstruction first and then update this to a metric one using auto-calibration.
I know how to estimate the 3D points from 2 images by computing the fundamental matrix, the camera matrices and triangulation. Now say I have 3 images, a, b and c. I compute the camera matrices and 3D points for images a and b. Now I want to update the structure by adding image c. I estimate the camera matrix of c by using known 3D points (calculated from a and b) that match 2D points in image c, since:
s*x = P*X
where x is the homogeneous image point, X the corresponding homogeneous 3D point, P the 3x4 camera matrix and s an unknown scale factor (the projective depth).
However, when I reconstruct the 3D points between b and c they do not line up with the existing 3D points from a and b. I assume this is because I don't know the correct depth estimates of the points (denoted by s in the formula above).
With the factorization method of Sturm and Triggs I can estimate the depths and find the structure and motion. However in order to do this, all points have to be visible in all views, which is not the case for my images. How can I estimate the depths for points not visible in all views?
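For reference, a minimal sketch of the linear (DLT) resection step described in the question; using DLT here is an assumption (any resection method applies), synthetic data stands in for real correspondences, and in practice the points should be normalized first and the estimate refined nonlinearly:
X3d   = 10 * rand(8, 3);                          % hypothetical known 3D points
Ptrue = [700 0 320 0; 0 700 240 0; 0 0 1 5];      % hypothetical true camera
xh    = (Ptrue * [X3d, ones(8, 1)]')';            % project to homogeneous 2D
x2d   = xh(:, 1:2) ./ xh(:, 3);                   % matching 2D points in image c
n = size(X3d, 1);
A = zeros(2*n, 12);                               % two linear equations per match
for i = 1:n
    X = [X3d(i, :), 1];
    A(2*i-1, :) = [X, zeros(1, 4), -x2d(i, 1)*X];
    A(2*i,   :) = [zeros(1, 4), X, -x2d(i, 2)*X];
end
[~, ~, V] = svd(A);
P = reshape(V(:, end), 4, 3)';                    % 3x4 camera matrix, up to scale
reproj = (P * [X3d, ones(n, 1)]')';               % sanity check: reproject the points
reproj = reproj(:, 1:2) ./ reproj(:, 3);          % should match x2d closely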
This is not a question about Matlab. It is about an algorithm.
It is not mathematically possible to estimate the position of a 3D point in an image when you don't see an observation of the point in said image.
There are extensions of factorization that work with missing data. However, the field seems to have converged on bundle adjustment as the gold standard.
An excellent tutorial on how to achieve what you want can be found here; it is the culmination of several years of research into a working application, going from projective reconstruction up to the metric upgrade.

Disparity calculation of two similar images in matlab

I have two images (both are exactly the same image) and I am trying to calculate the disparity between them using the sum of squared differences (SSD) and to reconstruct the disparity in 3D space.
Do I need to rectify the image before calculating disparity?
The following are the steps I have taken so far for the disparity map computation (I have tried with and without rectification, but both return an all-zero disparity matrix).
For each pixel X in the left image,
Take the pixels in the same row in the right image.
Separate that row of the right image into windows.
For each window,
Calculate the disparity for each pixel in that window with respect to X.
Select the pixel in the window which gives the minimum SSD with X.
Find the pixel with the minimum disparity among all windows as the best match to X.
Am I doing it correctly?
How can I visualise the 3D reconstruction of the disparity as scatter plot in matlab?
Rectification guarantees that matches are found in the same row (for horizontally separated cameras). If you have doubts about the rectification of your images, you can compare rows by drawing horizontal lines between the horizontally separated images. If the lines hit the same features you are fine; see the picture below, where the images are NOT rectified. The fact that they are distorted means lens distortion correction was applied, as well as an attempted (but not correctly performed) rectification.
Now, let's see what you meant by "the same images". Did you mean images of the same object taken from different viewpoints? Note that if the images are literally the same (taken from the same viewpoint) the disparity will be zero, as noted in another answer. The definition of disparity (for horizontally separated cameras) is the shift (within the same row) between matching features. Disparity is related to depth (if the optical axes of the cameras are parallel) as d = f*B/z, where z is the depth, B is the baseline (the separation between the cameras) and f is the focal length. You can rearrange the formula into d/B = f/z, which basically says that disparity relates to camera separation as focal length relates to distance; in other words, the ratios of the horizontal and distance measures are equal.
If your images are taken with the cameras shifted horizontally, the disparity (in a simple correlation algorithm) is typically calculated with 5 nested loops (a minimal MATLAB sketch follows the pseudocode):
loop over image1 y
loop over image1 x
loop over disparity d
loop over correlation window y
loop over correlation window x
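A minimal sketch of those loops, assuming a rectified grayscale pair of equal size; the synthetic images are placeholders, and the two innermost (window) loops are folded into one vectorized SSD expression:
I2 = conv2(rand(120, 160), ones(5)/25, 'same');   % hypothetical right image
I1 = circshift(I2, [0, 5]);                       % left image with a true disparity of 5 px
hw   = 3;                                         % correlation half-window (7x7 window)
maxD = 10;                                        % largest disparity searched
[h, w] = size(I1);
D = zeros(h, w);
for y = 1+hw : h-hw                               % loop over image1 y
    for x = 1+hw+maxD : w-hw                      % loop over image1 x
        best = inf;
        for d = 0 : maxD                          % loop over disparity d
            ssd = sum(sum( (I1(y-hw:y+hw, x-hw:x+hw) ...
                          - I2(y-hw:y+hw, x-d-hw:x-d+hw)).^2 ));
            if ssd < best, best = ssd; D(y, x) = d; end
        end
    end
end
imagesc(D); colorbar;                             % disparity shown as a heat map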
The disparity, or D_best, is the value of d that gives the best matching window between image1 and image2 across all candidates. Finally, scatter plots are for 3D point clouds, while a disparity map is better visualized as a heat map. If you want to visualize the 3D reconstruction, or simply a 3D point cloud, calculate X, Y, Z as:
Z = f*B/D, X = u*Z/f, Y = v*Z/f, where u and v are related to the column and row of a w-by-h image as
u = col - w/2 and v = h/2 - row; that is, u and v form an image-centered coordinate system.
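A minimal sketch of that back-projection, assuming a disparity map D (for example from the SSD sketch above) and made-up calibration values f (focal length in pixels) and B (baseline):
f = 700;  B = 0.1;                                % hypothetical calibration values
[h, w] = size(D);
[col, row] = meshgrid(1:w, 1:h);
u = col - w/2;                                    % image-centered coordinates
v = h/2 - row;
valid = D > 0;                                    % skip pixels with no disparity
Z = f * B ./ D(valid);
X = u(valid) .* Z / f;
Y = v(valid) .* Z / f;
scatter3(X, Y, Z, 1, Z, '.');                     % 3D point cloud colored by depth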
If your two images are exactly the same, then the disparity would be 0 for every pixel. You either have to use two separate cameras to take the images, or take them with a single camera from two different locations. The best way to do 3D reconstruction is to use a calibrated stereo pair of cameras. Here is an example of how to do that using the Computer Vision System Toolbox for MATLAB.