Extract ROI of soccer field - matlab

I need to calibrate a camera according to the soccer field's white lines.
To do so I used Canny for edge detection and HoughLinesP to get the white line segments. The camera location is not fixed and the picture may also contain the crowd, which can be very noisy for HoughLinesP, so I thought of first extracting the ROI of the field from the image.
I've converted the image to HSV and used inRange on the green color.
Now, what is the best way to get the ROI?
Link to example for a noisy image - Source and after InRange

With this approach you can use a clustering method like DBSCAN to get rid of the noise and select the biggest region. Then you can use OpenCV to describe that region with a contour, with lines, or with a convex hull.
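For instance, in MATLAB a minimal sketch of the "largest green region plus convex hull" idea might be the following (it assumes mask is the binary result of your green inRange step; the closing radius is a guess to tune):
mask = imclose(mask, strel('disk', 15));   % bridge small gaps between grass pixels
mask = bwareafilt(mask, 1);                % keep only the largest connected region
roi = bwconvhull(mask);                    % convex hull of that region = field ROI
% edges = edge(rgb2gray(img), 'canny') & roi;   % e.g. restrict Canny/Hough to the field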
By the way, you may want to take a look at recent line detectors
We use field, line and circle information together with some problem-specific assumptions, plus lens undistortion and model fitting, for auto-calibration of static cameras; however, there will always be a noisy case that requires minimal user input.

Related

How can I find the boundary surface in this image

I am new to image processing. I want to find the surface between black and white pixels that separates them. Here is the link to the image.
The size of the image is (21, 900, 900).
https://drive.google.com/file/d/1zUWK0Fb_n6f1JZou5mrUJq0x3h2X8mBK/view?usp=sharing
I tried to use MATLAB's boundarymask command on one plane of the image, but I am getting noise, and it also only works for 2D images. Please suggest how to find the 3D boundary surface here. Thank you.
This is the output image after applying boundarymask.
Your first step should be to get rid of your noise. Since you have some kind of salt-and-pepper noise, you can do that with a median filter on each 2D image using medfilt2() in MATLAB. After that you can use an edge detector to find your edge pixels. To get the surface, you need to loop this over the 3rd dimension of your 3D image. The code will look like this:
boundarysurface = false(900, 900, 21);              % preallocate (one 900x900 slice per page)
for ii = 1:21                                       % loop over every slice of the 21x900x900 stack
    I = imread('image.tif', ii);                    % read slice ii from the TIFF stack
    I_bs = boundarymask(I);                         % rough boundary mask of this slice
    I_filt = medfilt2(I_bs, [7 7]);                 % median filter removes salt-and-pepper noise
    boundarysurface(:,:,ii) = edge(I_filt, 'Canny');% edge pixels of this slice form the surface
end
The edge detector I used here is certainly overkill for this simple case, but it was the easiest thing I could think of on short notice. If performance is relevant, let me know and I will suggest another approach.

Extract co-ordinates from detected contour

I want to detect the contour of a ring/disc that may be rotated in 3D. I used "Detect circles with various radii in grayscale image via Hough Transform" by Tao Peng. The results were very close to my requirement, except for two points:
Using Tao Peng's code I could get a neat blue line indicating the detected contour. I want to access these co-ordinates (sub-pixel) for further processing, but I could not figure out where the co-ordinates of the detected contour are stored. If you have any idea, please let me know.
Is there any such code to detect ellipses and not only circles? A ring rotated in 3D is not necessarily a circle, in which case Tao Peng's code fails, but it is the best I have come across to date. (I do not want to binarize my image, as I would lose a lot of information.) Do let me know if there is anything.
Apart from this code, if there is any other that does a similar job (even if it is like Tao Peng's code, for circles, but also gives me the co-ordinate values), please tell me.
I would prefer MATLAB, but C would also do.
This is an example image whose contour I want to detect, with high accuracy (the outline of the silver disc).
Regards.
Edit:
Here is an example of my output using Tao Peng's code.

Headlights detection using Difference of Gaussian (DoG)

I am developing a project to detect vehicles' headlights in night scenes. First I am working on a demo in MATLAB. My detection method is edge detection using Difference of Gaussians (DoG): I convolve the image with Gaussian blurs at two different sigmas, then subtract the two filtered images to find the edges. My result is shown below:
Now my problem is to find a method in MATLAB to circle the round edges such as car headlights and even street lights, and ignore the other edges. If you have any suggestions, please tell me.
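(For reference, a minimal MATLAB sketch of the DoG step described above could look like this; the file name and the two sigma values are placeholders, not the actual ones used.)
I = im2double(rgb2gray(imread('night_scene.jpg')));   % hypothetical input image
g1 = imgaussfilt(I, 1);    % blur with the smaller sigma
g2 = imgaussfilt(I, 3);    % blur with the larger sigma
dog = g1 - g2;             % difference of Gaussians highlights edges and blobs
imshow(dog, [])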
I think you may be able to get a better segmentation using a slightly different approach.
There is already strong contrast between the lights and the background, so you can take advantage of this to segment out the bright spots with a simple threshold, then apply some blob detection to filter out any small blobs (e.g. streetlights). From there you can proceed with contour detection, Hough circles, etc. until you find the objects of interest.
As an example, I took your source image and did the following:
Convert to 8-bit greyscale
Apply Gaussian blur
Threshold
This is a section of the source image:
And this is the thresholded overlay:
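A minimal MATLAB sketch of these steps might look like this (the blur sigma, threshold and minimum blob size are placeholders to tune):
img = rgb2gray(imread('night_scene.jpg'));    % hypothetical input image, converted to 8-bit greyscale
img = imgaussfilt(img, 2);                    % Gaussian blur to suppress pixel noise
bw = imbinarize(img, 0.85);                   % simple threshold keeps only the brightest spots
bw = bwareaopen(bw, 50);                      % drop small blobs such as distant streetlights
stats = regionprops(bw, 'Centroid', 'EquivDiameter');   % candidate headlight blobs for further checks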
Perhaps this type of approach is worth exploring further. Please comment to let me know what you think.

How to detect curves in a binary image?

I have a binary image and I want to detect/trace the curves in it. I don't know anything about them (coordinates, angle, etc.). Can anyone guide me on how to start? Suppose I have this image:
I want to separate the curves from the other lines. I am only interested in curved lines and their parameters. I want to store information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
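In MATLAB, a rough sketch of both ideas could be the following (bw is the binary image with the curves as foreground, and the radius range is a placeholder):
cc = bwconncomp(bw);                                    % one component per discrete collection of pixels
stats = regionprops(cc, 'BoundingBox', 'Eccentricity'); % per-component features for linearity tests
[centres, radii] = imfindcircles(bw, [10 60]);          % circular Hough transform for circular curves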
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image: when you encounter a black pixel, apply a standard flood-fill algorithm to find all the pixels in that shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to the image to connect them.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking the endpoints of the shape and calculating how far the other points are from the line defined by those endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach would be to measure the ratio of the length of the object to the distance between its endpoints. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
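A minimal MATLAB sketch of this length-to-endpoint-distance test (bw is the binary image with the shapes as foreground, i.e. inverted if your curves are black on white; the 1.2 threshold is a guess to tune):
skel = bwskel(bw);                                 % thin every shape to a one-pixel-wide curve
cc = bwconncomp(skel);
isCurve = false(1, cc.NumObjects);
for k = 1:cc.NumObjects
    comp = false(size(skel));
    comp(cc.PixelIdxList{k}) = true;               % mask of this component only
    ep = find(bwmorph(comp, 'endpoints'));         % endpoints of the skeletonised piece
    if numel(ep) < 2, continue; end                % closed shapes have no endpoints; handle separately
    [r, c] = ind2sub(size(skel), ep([1 end]));     % pick two endpoints
    endDist = hypot(diff(r), diff(c));             % straight-line distance between them
    arcLen = nnz(comp);                            % arc length ~ number of skeleton pixels
    isCurve(k) = arcLen / max(endDist, 1) > 1.2;   % noticeably longer than straight => curve
end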
If your shapes include angles (corners), which you wish to separate from curves, you might inspect the directional gradient along your curves: segment the shape, pick a set of equidistant points on it and, for each point, calculate the angle to the previous point and to the next point. If the difference between these angles is too large, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can handle with a skeleton transformation. For MATLAB implementations of skeletonization and finding curve endpoints, see the Image Processing Toolbox documentation.
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, then make them white. This gets the points of one object and removes it from the image. Keep repeating this until there are no remaining black pixels.
If you want to separate the curves from the straight lines, try line fitting and then compute the coefficient of correlation. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
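As a rough MATLAB illustration of the correlation idea (objMask is a hypothetical binary mask of a single object, and the 0.99 threshold is a guess):
[r, c] = find(objMask);          % pixel coordinates of one connected object
R = corrcoef(c, r);              % correlation coefficient of the coordinates
isLine = abs(R(1,2)) > 0.99;     % |R| close to 1 -> the points lie near a straight line
% note: perfectly vertical or horizontal segments give a degenerate R and need a separate check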
There is also another possible solution using chain codes.
Understanding Freeman chain codes for OCR
The chain code assigns each pixel a value between 0 and 7 (or 1 and 8) saying at which position in its 8-connected neighbourhood its connected predecessor lies. So, as mentioned in Hackworth's suggestion, you perform connected-component labelling and then calculate the chain code for each component curve. Looking at the distribution and the gradient of the chain codes, you can easily distinguish between lines and curves. The problem with this method is oscillating curves, where the gradient is less useful and you have to depend on the clustering of the chain codes.
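A small MATLAB sketch of computing the Freeman chain code along one shape's boundary (startPoint is a hypothetical [row col] pixel on that boundary):
B = bwtraceboundary(bw, startPoint, 'N');    % ordered boundary pixels as [row col]
d = diff(B);                                 % step between consecutive boundary pixels
dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];   % 8-connected steps for codes 0..7
[~, code] = ismember(d, dirs, 'rows');       % look up each step in the direction table
code = code - 1;                             % Freeman chain code, values 0..7
% a straight line gives a nearly constant code; a smooth curve gives a slowly varying one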
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).

Remove paper texture pattern from a photograph

I've scanned an old photo that has a paper texture pattern and I would like to remove the texture as much as possible without lowering the image quality. Is there a way to do this, perhaps using the Image Processing Toolbox in MATLAB?
I've tried applying an FFT transform (using a Photoshop plugin), but I couldn't find any clear white spots to paint over. Perhaps the pattern is not regular enough for this method?
You can see the sample below. If you need the full image I can upload it somewhere.
Unfortunately, you're pretty much stuck in the spatial domain, as the pattern isn't really repetitive enough for Fourier analysis to be of use.
As @Jonas and @michid have pointed out, filtering will help with a problem like this. With filtering, you face a trade-off between the amount of detail you want to keep and the amount of noise (or unwanted image components) you want to remove. For example, the median filter used by @Jonas removes the paper texture completely (even the round scratch near the bottom edge of the image), but it also removes all texture within the eyes, hair, face and background (although we don't really care much about the background; it's the foreground that matters). You'll also see a slight decrease in image contrast, which is usually undesirable. This gives the image an artificial look.
Here's how I would handle this problem:
Detect the paper texture pattern:
Apply a Gaussian blur to the image (use a large kernel to make sure that all the paper texture information is destroyed)
Calculate the image difference between the blurred and original images
EDIT 2 Apply Gaussian blur to the difference image (use a small 3x3 kernel)
Threshold the above pattern using an empirically-determined threshold. This yields a binary image that can be used as a mask.
Use median filtering (as mentioned by @Jonas) to replace only the parts of the image that correspond to the paper pattern.
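A rough MATLAB sketch of the whole procedure might look like this (all kernel sizes and the threshold are placeholders to tune):
img = im2double(rgb2gray(imread('http://i.stack.imgur.com/JzJMS.jpg')));
blurred = imgaussfilt(img, 8);          % large blur destroys the paper texture
pattern = img - blurred;                % difference image ~ texture plus fine detail
pattern = imgaussfilt(pattern, 0.8);    % small (roughly 3x3) blur to smooth the pattern
mask = abs(pattern) > 0.04;             % empirically chosen threshold -> binary mask
filtered = medfilt2(img, [15 15]);      % median-filtered version of the whole image
result = img;
result(mask) = filtered(mask);          % replace only the pixels flagged as paper texture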
Paper texture pattern (before thresholding):
You want as little actual image information as possible to be present in the above image. You'll see that you can very faintly make out the edge of the face (this isn't good, but it's the best I had time for). You also want this paper texture image to be as even as possible (so that thresholding gives equal results across the image); the right-hand side of the image above is slightly darker, meaning that thresholding it well will be difficult.
Final image:
The result isn't perfect, but it has completely removed the highly-visible paper texture pattern while preserving more high-frequency content than the simpler filtering approaches.
EDIT
The filled-in areas are typically plain-coloured and thus stand out a bit if you look at the image very closely. You could also try adding some low-strength zero-mean Gaussian noise to the filled-in areas to make them look more realistic. You'd have to pick the noise variance to match the background; determining it empirically may be good enough.
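Continuing that sketch, the noise step could be as simple as the following (result and mask come from the sketch above; the 0.01 variance is a guess to match empirically):
noise = sqrt(0.01) * randn(size(result));    % zero-mean Gaussian noise with variance 0.01
result(mask) = result(mask) + noise(mask);   % add it only inside the filled-in regions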
Here's the processed image with the noise added:
Note that the parts where the paper pattern was removed are more difficult to see because the added Gaussian noise is masking them. I used the same Gaussian distribution for the entire image but if you want to be more sophisticated you can use different distributions for the face, background, etc.
A median filter can help you a bit:
img = imread('http://i.stack.imgur.com/JzJMS.jpg');
%# convert rgb to grayscale
img = rgb2gray(img);
%# apply median filter
fimg = medfilt2(img,[15 15]);
%# show
imshow(fimg,[])
Note that you may want to pad the image first to avoid edge effects.
EDIT: A smaller filter kernel than [15 15] will preserve image texture better, but will leave more visible traces of the filtering.
Well, I have tried a different approach using anisotropic diffusion with the second diffusion coefficient, which operates on wider areas.
Here is the output I got:
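If you have a recent Image Processing Toolbox, imdiffusefilt can do this kind of filtering; a minimal sketch (the iteration count is a guess, and 'quadratic' is the Perona-Malik coefficient that favours wide regions):
img = im2double(rgb2gray(imread('http://i.stack.imgur.com/JzJMS.jpg')));
out = imdiffusefilt(img, 'ConductionMethod', 'quadratic', 'NumberOfIterations', 20);
imshow(out, [])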
From what I can see in the picture, the noise has a relatively high frequency compared to the image itself, so applying a low-pass filter should work. Have a look at the power spectrum, abs(fft(...)), to determine the cutoff frequency.
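For example, to inspect the power spectrum in MATLAB:
img = im2double(rgb2gray(imread('http://i.stack.imgur.com/JzJMS.jpg')));
F = fftshift(fft2(img));            % centred 2-D Fourier transform
imshow(log(1 + abs(F)), [])         % bright spots away from the centre mark the texture frequencies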