My image is something like below:
I want to be able to draw two lines for the 2 layers: (1) a red line on top of the 1st layer, and (2) a blue line in the middle of the 2nd layer.
I am using OpenCV, but any language/advice is welcome.
You can do the following:
Small closing in order to reconnect the small separated components/patterns.
Small opening in order to remove the small isolated components/patterns.
Skeletonize (or medial axis)
Pruning in order to remove the small branches.
You will then get a skeleton for each pattern, close to the lines you want to draw. It will be a little bit irregular, so you can interpolate/smooth it.
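A minimal sketch of this pipeline in Python, assuming OpenCV plus scikit-image for the skeletonization; the filename, kernel size, and Otsu threshold are my assumptions, and pruning of small branches is left out:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

# Load and binarize (filename and thresholding choice are placeholders)
img = cv2.imread("layers.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
# Small closing: reconnect the small separated components/patterns
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
# Small opening: remove the small isolated components/patterns
opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)

# Skeletonize (scikit-image expects a boolean image)
skeleton = skeletonize(opened > 0).astype(np.uint8) * 255
```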
[EDIT] If you need the red line on top of the edge, a solution is to:
Extract the pattern contour
Keep only the topmost pixels.
Algorithmically, it can be achieved as follows: for each X coordinate along the top border, go down the image vertically until you meet the first non-null pixel. If your image is N columns wide, you will end up with N pixels in your solution (one per column).
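A sketch of that column-wise scan, assuming a binary image where the pattern is non-zero:

```python
import numpy as np

def top_profile(binary):
    """For each column, return the row index of the first non-zero pixel
    found from the top, or -1 if the column is empty."""
    h, w = binary.shape
    profile = np.full(w, -1, dtype=int)
    for x in range(w):
        rows = np.flatnonzero(binary[:, x])
        if rows.size:
            profile[x] = rows[0]
    return profile
```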
If you want to regularize/smooth the result, you have two solutions:
Transform the contour as a parametric function and smooth it.
Do an interpolation (e.g. a spline; a sketch follows)
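For the spline option, continuing from the top_profile sketch above, SciPy's UnivariateSpline can smooth the extracted profile; the smoothing factor s is an arbitrary assumption to tune:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

profile = top_profile(binary)             # from the sketch above
xs = np.flatnonzero(profile >= 0)         # columns that actually hit the pattern
spline = UnivariateSpline(xs, profile[xs], s=len(xs))  # larger s = smoother curve
smooth_y = spline(xs)                     # regularized Y for each kept column
```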
I have a problem where we have a grid of points and I'd like to fit a "deformed grid" which would best fit the set of points.
The MatLab data can be found at:
https://drive.google.com/file/d/14fKKEC5BKGDOjzWupWFSmythqUrRXae4/view?usp=sharing
You will see that cenX and cenY are the x and y coordinates of these centroids.
Like in this image. Note that there are points missing and there are a few extra points. Moreover, you can see some lines are not one single line from left to right; however, we could safely assume that fitting a roughly horizontal line (±5 degrees) would properly link the points into a somewhat deformed grid.
The vertical lines are trivial because that is how we generated these dots. We can find the number of lines required by taking the mode of the count of points in each column of the grid.
I'd like to be able to ensure that a point is only part of one line, as this is a grid.
I have an image, and I'm letting the user draw a line on it to pick a region. Now, I would like to take that line (red line in the attached image) and extend it to get to the ends of the frame from both sides (white line in image).
I tried using interp1, but when I try to get the coordinates on the frame itself I get NaNs, since those points are not between the two points the user picked (interp1 returns NaN outside the sample range unless you ask it to extrapolate).
Any suggestions on how to pick those points? Or alternatively, a better way to split the image?
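One way to sidestep the NaNs entirely is to skip interpolation and extend the picked segment analytically: compute the line through the two points and clip it to the image rectangle. A rough sketch, with made-up point coordinates and frame size:

```python
import numpy as np

def extend_to_borders(p1, p2, width, height):
    """Extend the segment p1-p2 until it hits the image borders.
    Returns the intersection points with the frame rectangle."""
    (x1, y1), (x2, y2) = p1, p2
    hits = []
    if x1 != x2:                          # non-vertical line: y = m*x + b
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        for x in (0, width - 1):          # left and right borders
            y = m * x + b
            if 0 <= y <= height - 1:
                hits.append((x, y))
        if m != 0:
            for y in (0, height - 1):     # top and bottom borders
                x = (y - b) / m
                if 0 <= x <= width - 1:
                    hits.append((x, y))
    else:                                 # vertical line
        hits = [(x1, 0), (x1, height - 1)]
    return hits[:2]

# Example: a segment picked inside a 640x480 frame
print(extend_to_borders((100, 100), (200, 150), 640, 480))
```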
I'm trying to find all the straight lines in an image of a border. For example, stamps have four edges, and I have already found those edges with the edge function in MATLAB. But there is a problem: they are not really straight lines, so I need to use line fitting to get all four borders. The polyfit function can only fit one line at a time. Is there any solution that can fit all the lines at once?
For example, here I upload some pictures; the image with red lines is what I want. Please be aware that I need four separate lines.
Judging from the image, you are not trying to smooth lines or fill in gaps. Instead, it looks more like you need to fit your image in the smallest possible box.
Here is an algorithm that you can try (a rough code sketch follows the steps):
Start from all 4 corners.
'Walk' one of the corners inwards and determine whether all points are still inside the quadrilateral formed by the four corners.
If so, keep the new corner position; if not, revert the move. Then go back to step 2.
Keep repeating steps 2 and 3 until the corners no longer move.
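A rough Python/OpenCV sketch of this corner-walking idea. The step size, the move-toward-centroid direction, and the iteration cap are my assumptions, and only the convex hull points are tested to keep the inside check cheap:

```python
import cv2
import numpy as np

def shrink_box(mask, step=1.0, max_iters=2000):
    """mask: binary image, non-zero = object pixels. Returns four corner points
    of a quadrilateral that still contains every object pixel."""
    pts = np.column_stack(np.nonzero(mask)[::-1]).astype(np.float32)  # (x, y)
    hull = cv2.convexHull(pts).reshape(-1, 2)   # testing hull points is enough
    h, w = mask.shape
    corners = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], np.float32)
    centroid = pts.mean(axis=0)

    def all_inside(quad):
        poly = quad.reshape(-1, 1, 2)
        return all(cv2.pointPolygonTest(poly, (float(x), float(y)), False) >= 0
                   for x, y in hull)

    for _ in range(max_iters):
        moved = False
        for i in range(4):                      # step 2: walk one corner inwards
            trial = corners.copy()
            d = centroid - trial[i]
            trial[i] += step * d / np.linalg.norm(d)
            if all_inside(trial):               # step 3: keep it only if all points fit
                corners, moved = trial, True
        if not moved:                           # step 4: stop when nothing changes
            break
    return corners
```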
Are you trying to get rid of the perforations? In that case I would suggest using thresholding to segment out dark areas of the image, and then using regionprops to get their bounding boxes. Then you can figure out the largest rectangle that excludes them.
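In case an OpenCV analogue is easier to try than MATLAB's regionprops, connectedComponentsWithStats gives similar bounding-box information; the filename and threshold value are placeholders:

```python
import cv2

img = cv2.imread("stamp.png", cv2.IMREAD_GRAYSCALE)           # hypothetical filename
_, dark = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)  # segment dark areas

# Bounding boxes of the dark blobs (perforations); label 0 is the background
n, labels, stats, centroids = cv2.connectedComponentsWithStats(dark)
boxes = [tuple(stats[i, :4]) for i in range(1, n)]            # (x, y, width, height)
```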
Consider that I have a colored image like this in which the outline is not complete (there are gaps between the lines). I want to be able to fill the area between the lines with one color or another. This is actually a binary image which I got after applying the Canny edge detector to the corresponding grayscale image.
I tried first dilating the image and then eroding it, but the result is not good enough. I want to be able to preserve the thickness of the root.
Any help would be greatly appreciated
Original Image
Image after edge detection and some manual removal of pixels
Using the information in the edge image, I thought I would try to extract pixels of a certain color from the original image. For every white pixel in the edited image, I searched along the same horizontal line in the original image. I used different thresholds for R, G, and B, and I ended up with this:
I'm not sure what your original image looks like. It would be helpful to see.
You have gaps between the lines because a line in your original image has two edges, one on each side, and the Canny algorithm is detecting them both. The Canny edge detection algorithm has at its heart the application of two Sobel kernels to calculate the gradient, one for detecting horizontal edges and one for detecting vertical edges:
-1 0 +1
-2 0 +2
-1 0 +1
and
+1 +2 +1
0 0 0
-1 -2 -1
These kernels produce a peak on each side of the line, one positive and one negative. You can exclude one side of the line by excluding the corresponding peak. After taking the gradient in each direction, truncate any values below zero (set them to zero) to remove the second peak. Then continue with the Canny edge detection as usual. This will result in the detection of only a single edge for each line instead of the two that you are seeing now.
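A sketch of this one-sided gradient idea in Python. Recent OpenCV versions expose a Canny overload that accepts precomputed 16-bit gradients, which is what is used here; the filename, blur parameters, and thresholds are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("roots_gray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
blur = cv2.GaussianBlur(img, (5, 5), 1.4)

# Sobel gradients, 16-bit signed as the Canny dx/dy overload expects
gx = cv2.Sobel(blur, cv2.CV_16S, 1, 0, ksize=3)
gy = cv2.Sobel(blur, cv2.CV_16S, 0, 1, ksize=3)

# Keep only one side of each line: truncate negative responses to zero
gx = np.maximum(gx, 0)
gy = np.maximum(gy, 0)

# Continue with Canny as usual, feeding in the modified gradients
edges = cv2.Canny(gx, gy, 50, 150)
```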
I'll add a third approach now that I have seen the image. It looks like most of the information is in the green channel.
Green channel image
This image gives you a decent result if you simply apply a threshold.
Thresholded image with a somewhat arbitrary threshold
You can then either clean this image up by itself or use your edge image. To clean it up with the edge image you produced, remove any white pixels that are more than a certain distance from a detected edge (create a Euclidean distance map from your edge image and use it to set to black any white pixel farther than that distance from an edge).
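A sketch of the green-channel threshold plus the distance-map cleanup. Otsu stands in for the "somewhat arbitrary threshold", and the filenames and the 10-pixel cutoff are assumptions:

```python
import cv2

img = cv2.imread("roots_color.png")                    # hypothetical filename
green = img[:, :, 1]                                   # OpenCV loads BGR; index 1 = green
# Depending on whether the roots come out bright or dark you may need THRESH_BINARY_INV
_, mask = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Cleanup with the edge image: distance of every pixel to the nearest edge pixel
edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)  # your Canny result
dist = cv2.distanceTransform(cv2.bitwise_not(edges), cv2.DIST_L2, 3)
mask[dist > 10] = 0                                    # drop white pixels far from any edge
```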
If you are still collecting images you may want to try to position the camera in a way to avoid the bottom of the jar (or whatever this is).
You could attempt to use a line-scanning methodology. Start at the side and scan horizontally. When you hit an edge, assume you are entering a root and start setting the pixels to white. When you hit another edge, assume you are leaving the root and stop filling. There will be some fringe cases, and you may want to add additional checks, such as limiting the allowed thickness of a root.
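A minimal sketch of that scan-line idea, assuming a binary edge image and a hypothetical maximum root thickness as the extra check:

```python
import numpy as np

def scanline_fill(edges, max_thickness=40):
    """Fill between pairs of edge crossings on each row.
    edges: binary edge image (non-zero = edge). max_thickness is a hypothetical
    sanity limit on root width, in pixels."""
    filled = np.zeros_like(edges)
    for y in range(edges.shape[0]):
        xs = np.flatnonzero(edges[y])              # edge columns on this row
        # pair up consecutive crossings: enter a root, then leave it
        for x0, x1 in zip(xs[0::2], xs[1::2]):
            if x1 - x0 <= max_thickness:
                filled[y, x0:x1 + 1] = 255
    return filled
```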
You could also do a flood fill style algorithm where you take a seed point in a root and travel up the root filling it in.
Not sure how well these would work; it depends on the image, and I have not tested them.
What coding techniques would allow me to differentiate between circles, rectangles, and triangles in black and white image bitmaps?
You could train an Artificial Neural Network to classify the shapes :P
If noise is low enough to extract curves, approximations can be used: for each shape, select the parameters giving the least error (the method of least squares may help here) and then compare these errors...
If the image is noisy, I would consider the Hough transform; it may be used to detect shapes with a small number of parameters, like circles (harder for rectangles and triangles).
Just an idea off the top of my head: scan the (pixel) image line by line, pixel by pixel. When you encounter the first white pixel (assuming a black background), keep its position as a starting point and look at the eight surrounding pixels for the next white pixel. If you find an adjacent second pixel, you can establish a direction vector between those two pixels.
Now repeat this until the direction of your vector changes (or the change is above a certain threshold). Keep the last point before the change as the endpoint of your first line and repeat the process for the next line.
Then calculate the angle between the two lines and store it. Now trace the third line. Calculate the angle between the 2nd and 3rd line as well.
If both angles are right angles, you probably found a rectangle; otherwise you probably found a triangle. If you can't find any straight lines, you could conclude that you found a circle.
I know the algorithm is a bit sketchy, but I think (with some refinement) it could work if your image's quality is not too bad (not too much noise, no gaps in the lines, etc.).
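If you end up using a library, OpenCV offers a much less sketchy version of the same vertex-counting idea: approximate each contour with a polygon and count its vertices. The filename is a placeholder and the 0.02 epsilon factor is a common rule of thumb, not a fixed rule:

```python
import cv2

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)     # hypothetical filename
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
# OpenCV 4.x return signature; OpenCV 3.x returns three values
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 3:
        label = "triangle"
    elif len(approx) == 4:
        label = "rectangle"
    else:
        label = "circle"                                  # many vertices: treat as a circle
    print(label, len(approx))
```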
You are looking for the Hough transform. For an implementation, try the AForge.NET framework; it includes circle and line Hough transformations.
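If you are not tied to .NET, the corresponding calls in OpenCV are HoughCircles and HoughLinesP; the filename and all parameter values below are placeholders to tune for your images:

```python
import cv2
import numpy as np

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)     # hypothetical filename
blur = cv2.medianBlur(img, 5)

# Circle Hough transform: returns (x, y, radius) triples or None
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=40, minRadius=5, maxRadius=100)

# Probabilistic line Hough transform on the edge map: returns line segments
edges = cv2.Canny(blur, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=20, maxLineGap=5)
```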