I am trying to implement this paper
Patch-Based Image Warping for Content-Aware Retargeting
I am halfway through its implementation in MATLAB, and I have come to the warping using a quad mesh. Section III.C suggests formulating the image as a quad mesh with vertices, edges, and quad faces. I have searched but have not found a concrete answer on how to do this in MATLAB. Can you please tell me how to represent an image as a quad mesh in MATLAB? Thanks in advance.
Update:
Pics from the paper stating the requirements better. Can you also please look at the paper and tell me whether I am looking for the right thing?
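For what it's worth, here is my current guess at the representation: a minimal sketch that builds a regular quad mesh over the image. The grid spacing and file name are arbitrary choices of mine, not from the paper.

    % Minimal sketch: regular quad mesh over an image (grid spacing is my guess).
    img = imread('input.png');               % hypothetical input image
    [h, w, ~] = size(img);
    step = 20;                               % quad size in pixels (arbitrary)
    [X, Y] = meshgrid(1:step:w, 1:step:h);   % vertex positions on a grid
    V = [X(:), Y(:)];                        % vertices as an N-by-2 list
    [ny, nx] = size(X);
    [ii, jj] = ndgrid(1:ny-1, 1:nx-1);       % top-left grid cell of each quad
    tl = sub2ind([ny nx], ii(:), jj(:));     % linear index of each top-left vertex
    F = [tl, tl+ny, tl+ny+1, tl+1];          % faces: [TL TR BR BL] vertex indices
    patch('Faces', F, 'Vertices', V, 'FaceColor', 'none');  % draw the mesh
    axis ij equal tight                      % image-style axes (y pointing down)

Is this the kind of structure the paper means?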
I have a binary image, as shown below.
As can be seen, the image contains an edge that looks like an arc of an ellipse; I have manually marked it in red in the illustration below. These red pixels are what the code should find.
My goal is to fit an ellipse to the pixels coloured red in the picture above. The fitted ellipse is shown below.
Could someone kindly tell me how I can get the pixels marked red in the second image using MATLAB? I will then use them for ellipse fitting.
The problem you are describing is extremely non-trivial. This article describes some of the existing methods. It is nice because it is a survey that will point you to other articles.
As you may have guessed, not having both ends of the ellipse to work with makes things considerably more complex. If that were not the case, you could use the Hough transform. There is already a script available on the MathWorks site to do this.
All that being said, I recommend Googling "ellipse detection". It may not help directly with the MATLAB implementation, but will at least give you an idea of the magnitude of the problem you are trying to solve.
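That said, once you do have a set of candidate edge pixels, the fitting step itself is straightforward. Below is a minimal sketch of a plain algebraic conic fit via SVD; this is my own illustration (not the MathWorks script mentioned above), the file name is a placeholder, and bwperim is only a stand-in for the real arc-extraction problem. For a fit that is guaranteed to return an ellipse, see Fitzgibbon's direct least-squares method.

    % Sketch: least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
    bw = imread('binary_image.png') > 0;   % hypothetical input image
    edgeMask = bwperim(bw);                % stand-in for the red arc pixels;
                                           % isolating the true arc is the hard part
    [y, x] = find(edgeMask);               % pixel coordinates of the candidates
    A = [x.^2, x.*y, y.^2, x, y, ones(size(x))];
    [~, ~, V] = svd(A, 0);                 % smallest singular vector minimises ||A*p||
    p = V(:, end);                         % conic coefficients [a b c d e f]', up to scale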
I have one 2D CT image and I want to convert it to a 3D image using a Markov Random Field. There are several papers in the literature in which this technique is applied to three orthogonal 2D images. However, I can't find a simple and clear resource that explains the conversion process using an MRF in clear steps. Here are some papers I found:
http://www.immijournal.com/content/pdf/s40192-014-0019-3.pdf
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1971113/
https://www.eecs.berkeley.edu/Research/Projects/CS/vision/papers/efros-iccv99.pdf
What I have understood is that the image is converted into a graph of connected pixels, and the properties of a pixel depend on the properties of its adjacent ones. But it was not really clear how the process takes place. Also, the cost-minimisation process was confusing to me: what quantity are we trying to minimise, and how does that lead to constructing a 3D image from the three orthogonal 2D ones?
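To make my confusion concrete, this is how I currently understand a generic pairwise MRF cost in MATLAB. The data term, the smoothness term, the weight lambda, and the file name are all my own guesses, not taken from the papers above.

    % My reading of a generic pairwise MRF energy on a 2D grid of labels L:
    % E(L) = sum_i D_i(L_i) + lambda * sum over neighbour pairs V(L_i, L_j).
    I = im2double(imread('ct_slice.png'));  % hypothetical observed slice
    L = I;                                  % initial labels = observation (my guess)
    lambda = 0.5;                           % smoothness weight (my guess)
    dataTerm = (L - I).^2;                  % unary cost: agreement with the data
    smoothV  = abs(diff(L, 1, 1));          % vertical neighbour differences
    smoothH  = abs(diff(L, 1, 2));          % horizontal neighbour differences
    E = sum(dataTerm(:)) + lambda*(sum(smoothV(:)) + sum(smoothH(:)));

Is this the energy being minimised, and if so, what do the terms become in the 2D-to-3D setting?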
Can anyone please explain, step by step, how the conversion algorithm works using an MRF?
Thank You
EDIT: Triangles, rectangles/squares, and other shapes with sharp edges can be detected, but I cannot work out how to detect the spiral.
Is it possible to detect different shapes based on the general equation of the shape? For example, if I give the general equation of a circle, rectangle, triangle, spiral, or any other shape, can that shape be detected in an image?
More precisely: given the general equation of a triangle, the code should detect the triangle and mark it.
Here's a sample input image.
I know this would be very easy using some morphological analysis and edge detection, but I have to use curve fitting, and I don't know how to start. Can anyone please provide an algorithm or a snippet?
The Image Processing Toolbox gives you line detection via the hough() function and circle detection via imfindcircles().
Alternatively, you can turn the problem around: first detect objects of interest by some means, e.g. by colour, and then try to identify their shape. The regionprops() function can compute many different shape characteristics for you, as in the sketch below.
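A minimal sketch of those built-ins, where the file name, radius range, and the assumption of an RGB input are all placeholders of mine:

    % Sketch: built-in detectors from the Image Processing Toolbox.
    I = imread('shapes.png');                      % hypothetical input image
    bw = imbinarize(rgb2gray(I));                  % assuming an RGB input
    [centers, radii] = imfindcircles(bw, [10 60]); % radius range is a guess
    stats = regionprops(bw, 'Area', 'Perimeter');  % per-object shape statistics
    circularity = 4*pi*[stats.Area] ./ [stats.Perimeter].^2;  % ~1 for circles, low for spirals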
And if all else fails, you can write your own Generalized Hough Transform.
I am developing a project to detect vehicle headlights in night scenes. First I am working on a demo in MATLAB. My detection method is edge detection using the Difference of Gaussians (DoG): I convolve the image with Gaussian blurs at two different sigmas, then subtract the two filtered images to find the edges. My result is shown below:
Now my problem is to find a method in MATLAB to circle round edges such as car headlights, and even street lights, while ignoring other edges. If you have any suggestions, please tell me.
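For reference, this is the DoG step I am using; the file name and both sigma values are just my current choices:

    % Difference of Gaussians: blur at two sigmas, subtract to keep band-pass edges.
    I  = im2double(rgb2gray(imread('night_scene.jpg')));  % hypothetical file
    g1 = imgaussfilt(I, 1);    % narrower Gaussian (sigma chosen by trial)
    g2 = imgaussfilt(I, 2);    % wider Gaussian
    dog = g1 - g2;             % edges and small bright spots stand out
    imshow(mat2gray(dog));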
I think you may be able to get a better segmentation using a slightly different approach.
There is already strong contrast between the lights and the background, so you can take advantage of this to segment out the bright spots using a simple threshold, then you can apply some blob detection to filter out any small blobs (e.g. streetlights). Then you can proceed from there with contour detection, Hough circles, etc. until you find the objects of interest.
As an example, I took your source image and did the following:
Convert to 8-bit greyscale
Apply Gaussian blur
Threshold
This is a section of the source image:
And this is the thresholded overlay:
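In MATLAB, a minimal sketch of those three steps might look like the following; the file name, sigma, threshold level, and blob-size cutoff are all guesses you would need to tune:

    % Greyscale -> Gaussian blur -> threshold, as in the steps above.
    I       = im2double(rgb2gray(imread('headlights.jpg')));  % hypothetical file
    blurred = imgaussfilt(I, 2);        % sigma is a guess
    bw      = blurred > 0.85;           % keep only the brightest spots (guess)
    bw      = bwareaopen(bw, 50);       % drop blobs under 50 px (e.g. distant lights)
    imshow(imoverlay(im2uint8(I), bw, 'red'));  % thresholded overlay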
Perhaps this type of approach is worth exploring further. Please comment to let me know what you think.
I'm very new to 3D image processing. In my project I am trying to find the perspective angle of a circle.
I have a plate with a set of white circles; using those circles, I want to find the 3D rotation angles of the plate.
For that, I have finished the camera calibration step and obtained the camera parameters and error estimates. Next, I captured an image and applied Sobel edge detection.
After that, I am somewhat confused about the ellipse-fitting step. I have seen a lot of ellipse-fitting algorithms; which one is the best and fastest?
And once the ellipse fit is done, I don't know how to proceed: how do I calculate the rotation and translation matrices from that ellipse?
Can you tell me which algorithm is most suitable and easiest? I need some MATLAB code to understand the concept.
Thanks in advance.
Sorry for my English.
First, find the ellipse/circle centres (e.g. as Eddy_Em described in other comments).
You can then refer to Zhang's classic paper
https://research.microsoft.com/en-us/um/people/zhang/calib/
which allows you to estimate the camera pose from a single image if some camera parameters, e.g. the centre of projection, are known. Note that the method fails for frontal recordings; the stronger the perspective effect, the more accurate your estimate will be. The algorithm is fairly simple: you'll need an SVD and some cross products.
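For the centre-finding step, here is a minimal MATLAB sketch; the file name and the size filter are placeholders of mine:

    % Sketch: locate the projected circle centres (step 1 above).
    I = rgb2gray(imread('plate.jpg'));        % hypothetical image of the plate
    bw = imbinarize(I);                       % white circles on a darker plate
    stats = regionprops(bw, 'Centroid', 'Area');
    keep = [stats.Area] > 100;                % size filter (placeholder value)
    centres = cat(1, stats(keep).Centroid);   % N-by-2 [x y] centre estimates
    % These centres, together with the known circle layout on the plate,
    % feed the pose estimation described in Zhang's paper.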