What is the OpenCV equivalent of this image-sampling code in Matlab? - matlab

I'm trying to perform downsampling so I can create a Gaussian pyramid in OpenCV. Right now I'm trying to translate this MATLAB code, and I'm not sure if my approach is right.
R = R(1:2:r, 1:2:c, :);
So this code retrieves only the odd-numbered pixels of an image. I read the OpenCV documentation and thought that the resize() method would be the one I'm looking for.
https://docs.opencv.org/3.0-beta/modules/imgproc/doc/geometric_transformations.html#resize
Imgproc.resize(src,dest,new Size(), 0.5,0.5, INTER_NEAREST);
But I'm not sure whether this call will keep only the odd-numbered pixels... Do they show the exact same behavior?
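One way to settle this is to compare the two operations on a small test matrix. Below is a minimal MATLAB-side sketch; it uses the Image Processing Toolbox's imresize as a stand-in for cv::resize with INTER_NEAREST, so the sampling conventions are not guaranteed to match, and the definitive test is to run your Imgproc.resize call on the same matrix and compare element by element.
R = uint8(randi(255, 8, 8, 3));          % small random test image
manual  = R(1:2:end, 1:2:end, :);        % odd-numbered rows/columns, as in the MATLAB line
resized = imresize(R, 0.5, 'nearest');   % nearest-neighbour half-size resize (MATLAB stand-in)
isequal(manual, resized)                 % true only if both pick exactly the same pixels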

Related

How can I find the boundary surface in this image

I am new to image processing. I want to find the boundary surface that separates the black and white pixels. Here is the link to the image.
The size of the image is (21, 900, 900).
https://drive.google.com/file/d/1zUWK0Fb_n6f1JZou5mrUJq0x3h2X8mBK/view?usp=sharing
I tried to use the boundarymask command of MATLAB on one plane of the image, but I am getting noise, and it also only works for 2D images. Please suggest how I can find the 3D boundary surface here. Thank you.
This is the output image after applying boundarymask.
Your first step should be to get rid of the noise. Since you have some kind of salt-and-pepper noise, you can do that with a median filter on each 2D slice using medfilt2() in MATLAB. After that you can use an edge detector to find the edge pixels. If you want the whole surface, you need to loop this over the 3rd dimension of your 3D image. The code could look like this:
for ii = 1:16                                        % loop over the slices (adjust the upper bound to your number of slices)
    I = imread('image.tif', ii);                     % read slice ii of the stack
    I_bs = boundarymask(I);                          % mask of pixels on the black/white boundary
    I_filt = medfilt2(I_bs, [7 7]);                  % median filter to suppress the salt-and-pepper noise
    boundarysurface(:,:,ii) = edge(I_filt, 'Canny'); % edge pixels of the cleaned mask, one slice of the surface
end
The edge detector I used here is certainly overkill for this simple case, but it was the easiest thing I could think of in the short term. If performance is relevant, let me know and I will give you another approach.

Find the other end of a curve after a cut in an image

I would like to follow a curve (with matlab or opencv) and to find the other end of it when it is cut by an empty space like this example, which is simplified to illustrate the problem:
Link to image of cut curve
Real images are more like this one: Link to real image to analyse
To follow the curve, I can use a skeleton and look at the neighbourhood. The problem is that I don't know how to find the other end efficiently.
I don't think that closing or opening operations could help because, as shown in the previous image, there are other curves and the two parts of the cut curve are quite far from each other, so those operations could create connections between the different curves instead of joining the two parts.
I was thinking about polynomial fitting, which could be a solution for simple curves, but I am not sure about the precision I could get. If I use a skeleton, I have to find exactly the right pixel or search in a reasonable neighbourhood, which would take some time, and once again, as there are other curves in the images, I have to be sure that I find the right one.
That's why I am searching for an existing function which could estimate the trajectory of the curve precisely and give a useful output to go further and find the second part of the curve.
If that kind of function doesn't exist, I'm open to any other way of analysing the problem if it can help.
Starting with the first image you provided: you can use the common OpenCV function for detecting contours (the black regions in your case, since you have a binary image), cv2.findContours(), which returns the coordinates of the edges of each detected region. You can then plot each detected contour separately in a blank image to get the edge of your desired line.
For your second image you have to be slightly careful while performing the above analysis, as there are many tiny lines. Get back to me if you need further help.
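Since the question allows MATLAB as well as OpenCV, a rough MATLAB-side equivalent of that first contour-extraction step could use bwboundaries (Image Processing Toolbox) in place of cv2.findContours; this is only a sketch, and it assumes a binary image with the curves drawn in black on a white background (the file name is a placeholder).
I = imread('curves.png');                      % placeholder file name
if ndims(I) == 3, I = rgb2gray(I); end
BW = I < 128;                                  % black curves become the foreground
boundaries = bwboundaries(BW, 'noholes');      % one boundary per connected component
figure; imshow(BW); hold on;
for k = 1:numel(boundaries)
    b = boundaries{k};                         % N-by-2 list of [row, col] boundary points
    plot(b(:,2), b(:,1), 'LineWidth', 1.5);    % plot each detected contour separately
end
Each piece of the cut curve then shows up as its own boundary, which gives you candidate endpoints to match up.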

MATLAB: Digitizing a plot with multiple variables and implementing the data

I have 8 plots which I want to implement in my Matlab code. These plots originate from several research papers, hence, I need to digitize them first in order to be able to use them.
An example of a plot is shown below:
This is basically a surface plot with three different variables. I know how to digitize a regular plot with just X and Y coordinates. However, how would one digitize a graph like this? I am quite unsure, hence the question.
Also, if I were able to obtain the data from this plot, how would I utilize it in my code? Maybe with some interpolation and extrapolation between the given data points?
Any tips regarding this topic are welcome.
Thanks in advance
Here is what I would suggest:
Read the image into MATLAB using imread.
Manually find the pixel positions of the bottom-left corner and the upper-right corner of the axes.
Using these pixel positions and the corresponding axis values, it is simple to determine the x and y value of every pixel. I suggest you use meshgrid.
Knowing that the curves are black, remove every non-black pixel from the image, which leaves you only with the curves and the numbers.
Then use the function bwareaopen to remove the small objects (the numbers). Don't forget to invert the image first, so that you remove the black objects instead of the white ones.
Finally, by combining the pixel-to-value mapping from the meshgrid step with the cleaned-up curve image, you can manually extract the data of the graph. It won't be easy, but it is feasible. A rough sketch of these steps is given below.
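Here is a hedged MATLAB sketch of those steps; the file name, corner pixel coordinates, axis limits, black threshold and area threshold are all made-up placeholders you would have to read off your own image.
img = imread('plot.png');                        % placeholder file name
if ndims(img) == 3, img = rgb2gray(img); end

% Pixel positions of the axes corners, found by hand beforehand ([col, row]).
pxBL = [60, 520];   pxTR = [610, 40];            % bottom-left / top-right corners (placeholders)
xlimits = [0, 10];  ylimits = [0, 1];            % axis values at those corners (placeholders)

% Map every pixel to data coordinates.
[C, R] = meshgrid(1:size(img,2), 1:size(img,1));
Xdata = xlimits(1) + (C - pxBL(1)) / (pxTR(1) - pxBL(1)) * diff(xlimits);
Ydata = ylimits(1) + (R - pxBL(2)) / (pxTR(2) - pxBL(2)) * diff(ylimits);

% Keep only (near-)black pixels, then drop small objects such as the axis numbers.
curves = img < 50;                               % threshold for "black" (placeholder)
curves = bwareaopen(curves, 200);                % remove blobs smaller than 200 pixels (placeholder)

% Read off the data values of the remaining curve pixels.
xq = Xdata(curves);
yq = Ydata(curves);
plot(xq, yq, '.');                               % quick sanity check of the extracted points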
You will need the data for all three variables in order to create a plot in MATLAB, which you can get either from the original research or by estimating and interpolating values from the plot. Once you have the data, there are two functions you can use to make surface plots, surface and surf; surf is pretty much the same as surface but includes shading.
For interpolation and extrapolation, it sounds like you might want to check out 2D interpolation with interp2. The interp2 function can do extrapolation as well.
You should read the documentation for these functions and then post back with specific problems if you have any.
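As a minimal illustration (assuming you have resampled the digitized data onto a regular grid of the two independent variables; the grid and surface below are placeholders), the plotting and interpolation side could look like this:
x = 0:1:10;                       % first independent variable (placeholder grid)
y = 0:0.1:1;                      % second independent variable (placeholder grid)
[X, Y] = meshgrid(x, y);
Z = Y .* sin(X);                  % stand-in for the digitized surface values

surf(X, Y, Z);                    % visual check of the reconstructed surface

zq = interp2(X, Y, Z, 3.7, 0.45);              % interpolate at a query point inside the grid
zq_out = interp2(X, Y, Z, 12, 0.45, 'spline'); % 'spline' (and 'makima') also extrapolate outside it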

Corner Detection in 2D Vector Data

I am trying to detect corners (x/y coordinates) in 2D scatter vectors of data.
The data comes from a laser rangefinder and our current platform uses MATLAB (standalone programs/libraries are an option, but the Nav/Control code is in MATLAB, so there must be an interface to it).
Corner detection is part of a SLAM algorithm and the corners will serve as the landmarks.
I am also looking to achieve something close to 100 Hz in terms of speed if possible (I know it's MATLAB, but my data set is pretty small).
Sample Data:
[Blue is the raw data, red is what I need to detect. (This view is effectively top down.)]
[Actual vector data from above shots]
Thus far I've tried many different approaches, some more successful than others.
I've never formally studied machine vision of any kind.
My first approach was a homebrew least-squares line fitter that would split lines in half recursively until they met some r^2 value and then try to merge ones with similar slopes/intercepts. It would then calculate the intersections of these lines. It wasn't very good, but it did work around 70% of the time with decent accuracy, though it had some bad issues with missing certain features completely.
My current approach uses the clusterdata function to segment my data based on Mahalanobis distance, and then does basically the same thing (least-squares line fitting / merging). It works OK, but I'm assuming there are better methods.
[Source Code to Current Method] [cnrs, dat, ~, ~] = CornerDetect(data, 4, 1) using the above data will produce the locations I am getting.
I do not need to write this from scratch; it just seems like most of the higher-level methods are meant for 2D images or 3D point clouds, not 2D scatter data. I've read a lot about Hough transforms and all sorts of data-clustering methods (k-means etc.). I also tried a few canned line detectors without much success. I tried to play around with the Line Segment Detector, but it needs a greyscale image as input, and I figured it would be prohibitively slow to convert my vector data into a full 2D image just to feed it into something like LSD.
Any help is greatly appreciated!
I'd approach it as a problem of finding extrema of curvature that are stable at multiple scales - and the split-and-merge method you have tried with lines hints at that.
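As an illustration of that idea (not a drop-in solution), here is a sketch that smooths an ordered 2D point sequence at a single scale and takes local extrema of the signed curvature as corner candidates; the smoothing scale and peak threshold are guesses to tune, and it assumes the points are already ordered along the scan. To follow the multi-scale suggestion fully you would repeat it for several values of sigma and keep only the corners that persist.
% x, y: column vectors of ordered scan points.
sigma = 3;                                       % smoothing scale in samples (tune)
t = (-3*sigma : 3*sigma)';
g = exp(-t.^2 / (2*sigma^2));  g = g / sum(g);   % Gaussian kernel, no toolbox needed

xs = conv(x, g, 'same');  ys = conv(y, g, 'same');      % smoothed coordinates
dx  = gradient(xs);   dy  = gradient(ys);
ddx = gradient(dx);   ddy = gradient(dy);
kappa = (dx.*ddy - dy.*ddx) ./ (dx.^2 + dy.^2).^1.5;    % signed curvature

k = abs(kappa);
isCorner = k > 0.1 & k == movmax(k, 2*sigma + 1);       % local maxima above a threshold (tune)
corners = [x(isCorner), y(isCorner)];
plot(x, y, '.', corners(:,1), corners(:,2), 'ro');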
You could use the Harris corner detector for detecting corners.

Backprojection from projection matrix using MATLAB

I have a 256x256 projection matrix; each row is a projection taken at equally spaced angles. I need to generate the original image with backprojection using MATLAB, and I am not really familiar with MATLAB. Can you suggest any code samples or algorithms? I've found some similar code, but I couldn't generate the original image using it.
This should be relatively simple with the iradon command, if you have the Image Processing Toolbox. If you don't, it will be a bit tougher because you need to roll your own version of that. Apparently you can't use this, but for what it's worth I get an image if I use:
I = iradon(Pteta', linspace(0, 179, size(Pteta, 1)));
So, how can you do this yourself? I'll try to help you along the way without giving you the answer - this is homework after all!
First, think about your 0-degree projection. Imagine the axis you're projecting on has units 1,256. Now imagine back projection of these coordinates across your image, it would look something like this:
Similarly, think about what a 90-degree projection would look like:
Cool, we can get these matrices by using [X, Y] = meshgrid(1:256);, but what about off-axis projections? Just think of the distance along some angled line, like converting polar/Cartesian coordinates:
theta = 45;  % projection angle in degrees
t = X*cosd(theta) + Y*sind(theta);
And it works!
There is a problem, though! Notice the values go up over 350 now? Also it's sort of off-center. The coordinates now exceed the length of our projections because the diagonal of a square is longer than the side. I'll leave it to you to figure out how to resolve this, but figure the final image will be smaller than the initial projections, and you may need to use different units (-127 to 128 instead of 1 to 256).
Now you can just index your projections for those angles to backproject the actual values across the image. Here we have a second problem, though, because the values are not integers! We could just round them, this is called nearest-neighbor interpolation, but it doesn't give the best results.
proj = Pteta(angle,:);              % the projection (sinogram row) for this angle index
% add projection filtering here
t = X*cosd(theta) + Y*sind(theta);  % theta is the angle in degrees corresponding to that row
% do some rounding/interpolating to make t all integers
imagesc(proj(t));                   % spread the projection values back across the image
For our off-center version, that gives us this image, or something similar:
Now you just need to do this for every angle, and add them all up.
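Putting the pieces together, a hedged sketch of that accumulation loop could look like the following; the sinogram layout (one projection per row, angles equally spaced over 0 to 179 degrees), the centring offset, and the clamping of out-of-range indices are assumptions you may need to adjust.
angles = linspace(0, 179, size(Pteta, 1));   % one angle per sinogram row (assumed layout)
[X, Y] = meshgrid(-127:128);                 % centred pixel coordinates
img = zeros(256);

for k = 1:numel(angles)
    proj = Pteta(k, :);                      % add projection filtering here for filtered backprojection
    t = X*cosd(angles(k)) + Y*sind(angles(k));
    idx = round(t) + 129;                    % nearest-neighbour indices, shifted to start at 1 (assumed offset)
    idx = min(max(idx, 1), numel(proj));     % clamp indices that fall off the detector
    img = img + proj(idx);                   % accumulate the backprojection
end

imagesc(img); axis image; colormap gray;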