Need to apply erosion only to the thicker lines in my image using Python
Here is my input image
I have an image containing white lines, both thick and thin. My goal is to erode only the thicker lines using Python. I used normal erosion with OpenCV, but with this method the thin lines are removed from the image as well. Erosion should be applied only to the thick lines.
import cv2
import numpy as np
img = cv2.imread('123.png', 0)
kernel = np.ones((6, 6), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)
cv2.imshow("Result", erosion)
cv2.waitKey(0)
cv2.destroyAllWindows()
How can I achieve this? Any answers will be highly useful. Thanks in advance.
Here is one way to do that in Python/OpenCV.
Read the input as grayscale
Threshold it
Apply morphology close to remove the thin lines and save as a mask image
Apply morphology dilate to the threshold image to thin the lines
Use np.where() to combine the threshold and dilated images using the mask
Save the results
Input:
import cv2
import numpy as np
# read the input as grayscale
img = cv2.imread('black_lines.jpg', cv2.IMREAD_GRAYSCALE)
# threshold
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# apply morphology close to remove the thin lines and save as mask
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
mask = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# apply morphology dilate to the threshold image to thin the lines
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, kernel)
# combine the thresh and dilate using mask
result = np.where(mask==0, dilate, thresh)
# save results
cv2.imwrite('black_lines_mask.jpg', mask)
cv2.imwrite('black_lines_dilate.jpg', dilate)
cv2.imwrite('black_lines_thinned.jpg', result)
# show results
cv2.imshow('mask', mask)
cv2.imshow('dilate', dilate)
cv2.imshow('thinned', result)
cv2.waitKey(0)
Mask Image:
Dilate Image:
Combined Image with thin lines:
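Note that the closing kernel size is what decides which lines count as "thin": black lines narrower than the kernel vanish from the mask and are therefore left untouched, while wider lines survive in the mask and get thinned. If the thick/thin split is wrong for your image, the kernel size is the knob to turn. A minimal sketch for experimenting (the kernel sizes here are just guesses to try, not part of the answer above):

import cv2

img = cv2.imread('black_lines.jpg', cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# try several closing kernel sizes; black lines narrower than the kernel
# disappear from the mask and are treated as "thin"
for k in (3, 5, 7, 9):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (k, k))
    mask = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    cv2.imshow(f'mask {k}x{k}', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()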
I have an image of a moving robot, and I need to extract the white rings in order to find the robot's midpoint. But thresholding is not giving the correct result:
What method should I try to extract only the white rings?
% code to get the second image
img = imread('data\Image13.jpg');
hsv = rgb2hsv(img);
bin = hsv(:,:,3) > 0.8;
Something like this?
import cv2
import numpy as np
# get bounding rectangles of contours
img = cv2.imread('img.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# filter contours by area and width
contours = [c for c in contours if (50 < cv2.contourArea(c) < 500) and cv2.boundingRect(c)[2] > 20]
# draw contours on empty mask
out = np.zeros(thresh.shape, dtype=np.uint8)
cv2.drawContours(out, contours, -1, 255, -1)
cv2.imwrite('out.png', out)
Output:
I am using pytesseract and OpenCV to recognize the text on license plates; however, a lot of the time when I run the code below, no text is output for the images I use.
import cv2
import imutils
import numpy as np
import pytesseract as tess
tess.pytesseract.tesseract_cmd =r'C:\Users\raul__000\AppData\Local\Tesseract-OCR\tesseract.exe'
# read image file
img = cv2.imread("Plate_images/plate14.jpg")
cv2.imshow("Image", img)
cv2.waitKey(0)
# RGB to Gray scale conversion
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("1 - Grayscale Conversion", gray)
cv2.waitKey(0)
# Noise removal with iterative bilateral filter(removes noise while preserving edges)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
cv2.imshow("2 - Bilateral Filter", gray)
cv2.waitKey(0)
# thresholding the grayscale image
gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.imshow("3 - Thresh Filter", gray)
cv2.waitKey(0)
# Dilation adds pixels to the boundaries of objects in an image
kernel = np.ones((5,5),np.uint8)
gray = cv2.dilate(gray, kernel, iterations = 1)
cv2.imshow("4 - dilation Filter", gray)
cv2.waitKey(0)
# use tesseract to convert image to string
text = tess.image_to_string(gray, lang="eng", config='--psm 6')
print(text)
This is the image used in the code above, and nothing is output.
Your 4th step (the dilation) is removing all the text from the image.
You should be able to see that when using cv2.imshow("4 - dilation Filter", gray).
If you remove that fourth step and run tesseract, you should see output.
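For reference, a minimal sketch of the same pipeline with the dilation dropped (the tesseract and image paths are taken from the question):

import cv2
import pytesseract as tess

tess.pytesseract.tesseract_cmd = r'C:\Users\raul__000\AppData\Local\Tesseract-OCR\tesseract.exe'

img = cv2.imread("Plate_images/plate14.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)  # denoise while preserving edges
gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# no dilation here: dilating grows the white background into the dark
# characters and erases them
text = tess.image_to_string(gray, lang="eng", config='--psm 6')
print(text)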
On my website I receive an image containing the user's fingerprint and signature, and I want to extract these two pieces of information.
For example:
Original Image
import os
import cv2
import numpy as np

def imshow(label, image):
    cv2.imshow(label, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# read image
rgb_img = cv2.imread('path')
rgb_img = cv2.resize(rgb_img, (900, 600))
gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
Gray Image
#canny edge detection
canny = cv2.Canny(gray_img, 50, 120)
Canny edge image
# Morphology Closing
kernel = np.ones((7, 23), np.uint8)
closing = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
Morphology Closing
# Find contours
contours, hierarchy = cv2.findContours(closing.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
# Sort contours by area, then drop the largest one (the frame contour)
n = len(contours) - 1
contours = sorted(contours, key=cv2.contourArea, reverse=False)[:n]
copy = rgb_img.copy()
# Iterate through the contours and draw the convex hull of each
for c in contours:
    if cv2.contourArea(c) < 750:
        continue
    hull = cv2.convexHull(c)
    cv2.drawContours(copy, [hull], 0, (0, 255, 0), 2)
imshow('Convex Hull', copy)
Image divided into parts
Now my goals are:
Know which part is the signature and which is the fingerprint
Resolve the contours overlapping if exist
P.S.: I'm not sure if the previous steps are final, so please tell me if you have better steps.
These are some hard examples I may want to deal with:
You can use morphology to select the fingerprint and the signature.
For example:
import cv2
import numpy as np

img = cv2.imread('fhZCs.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
img = cv2.bitwise_not(img)  # negate the image

# color definition
blue_upper = np.array([130, 255, 255])
blue_lower = np.array([115, 0, 0])

# blue color mask (sort of thresholding, actually segmentation)
mask = cv2.inRange(hsv, blue_lower, blue_upper)

# opening removes everything thinner than the 20x20 kernel, keeping the fingerprint
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (20, 20))
finger = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# whatever is in the mask but not in the dilated fingerprint is the signature
mask2 = cv2.morphologyEx(finger, cv2.MORPH_DILATE, kernel)
signature = cv2.compare(mask2, mask, cv2.CMP_LT)
signature = cv2.morphologyEx(signature, cv2.MORPH_DILATE, kernel)
signature = cv2.bitwise_and(img, img, mask=signature)
signature = cv2.bitwise_not(signature)

finger = cv2.bitwise_and(img, img, mask=finger)
finger = cv2.bitwise_not(finger)

cv2.imwrite('finger.png', finger)
cv2.imwrite('signature.png', signature)
I have a segmented image.
When I apply the bwperim function to it, I get the output below.
I want a thin perimeter line, just one pixel thick; this is essential for further processing work. What is the best approach? Please suggest.
BoundingBox
%%% ComputeBoundingBox
%%%
function [stats, statsAlreadyComputed] = ...
        ComputeBoundingBox(imageSize,stats,statsAlreadyComputed)
% [minC minR width height]; minC and minR end in .5.
if ~statsAlreadyComputed.BoundingBox
    statsAlreadyComputed.BoundingBox = 1;
    [stats, statsAlreadyComputed] = ...
        ComputePixelList(imageSize,stats,statsAlreadyComputed);
    num_dims = numel(imageSize);
    for k = 1:length(stats)
        list = stats(k).PixelList;
        if (isempty(list))
            stats(k).BoundingBox = [0.5*ones(1,num_dims) zeros(1,num_dims)];
        else
            min_corner = min(list,[],1) - 0.5;
            max_corner = max(list,[],1) + 0.5;
            stats(k).BoundingBox = [min_corner (max_corner - min_corner)];
        end
    end
end
That is happening because your image had quantization error when you were saving the image. Did you save your image using a lossy compression algorithm, like JPEG? If you want to preserve the intensities so that they don't change when you save the image, use a lossless compression algorithm, like PNG.
To remove these "noisy" effects, threshold your image first so that any quantization errors are snapped to pure black or white, then try using bwperim again. In other words, do something like this:
im = im2bw(imread('http://i.stack.imgur.com/dagEc.png'));
im_noborder = imclearborder(im);
out = bwperim(im_noborder);
imshow(out);
The first line of code reads in your image directly from StackOverflow and we use im2bw to threshold your image. This image was originally grayscale, and so we want to convert this into black and white only. This will also remove any quantization artifacts as it thresholds anything higher than 128. The next line of code removes the white border with imclearborder that surrounds your shape because the image you uploaded has a white border surrounding it for some reason. Once we remove this border, we then apply bwperim and we show the image.
This is the image I get:
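For readers working in Python, roughly the same idea can be sketched in OpenCV ('dagEc.png' is an assumed local copy of the question's image):

import cv2
import numpy as np

im = cv2.imread('dagEc.png', cv2.IMREAD_GRAYSCALE)
bw = cv2.threshold(im, 128, 255, cv2.THRESH_BINARY)[1]  # snap quantization noise to 0/255

# imclearborder equivalent: flood-fill any white region touching the border with black
h, w = bw.shape
ff_mask = np.zeros((h + 2, w + 2), np.uint8)
for x in range(w):
    for y in (0, h - 1):
        if bw[y, x] == 255:
            cv2.floodFill(bw, ff_mask, (x, y), 0)
for y in range(h):
    for x in (0, w - 1):
        if bw[y, x] == 255:
            cv2.floodFill(bw, ff_mask, (x, y), 0)

# bwperim equivalent: the pixels removed by a 3x3 erosion form the one-pixel perimeter
perim = cv2.subtract(bw, cv2.erode(bw, np.ones((3, 3), np.uint8)))
cv2.imwrite('perim.png', perim)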
I have a retinal fundus image which has a white border along the corners. I am trying to remove the borders on all four sides of the image. This is a pre-processing step and my image looks like this:
fundus http://snag.gy/XLGkC.jpg
It is an RGB image; I took the green channel and created a mask using logical indexing. I searched for pixels which were all black in the image, and eroded the mask to remove the white edge pixels. However, I am not sure how to retrieve the final image without the white pixel border using the mask that I have. This is my code, and any help would be appreciated:
maskIdx = rgb(:,:,2) == 0; % rgb is the original image
se = strel('disk',3); % erode 3 pixels using a disk structuring element
im2 = imerode(maskIdx, se);
newrgb = rgb(im2); % gives a vector - not the same size as the original image
Solved it myself. This is what I did with some help.
I first computed the mask for all three color channels combined. This is because the masks for the individual channels are not the same, and residual pixels would be left in the final image if I used the mask from only one of the channels:
mask = (rgb(:,:,1) == 0) & (rgb(:,:,2) == 0) & (rgb(:,:,3) == 0);
Next, I used a disk structuring element with a radius of 9 pixels to dilate my mask:
se = strel('disk', 9);
maskIdx = imdilate(mask,se);
EDIT: An arbitrary structuring element can also be used; I used se = strel(ones(9,9)).
Then I multiplied the original image by the new dilated mask:
newImg(:,:,1) = rgb(:,:,1) .* uint8(maskIdx); % cast the logical mask to uint8 to match the image type
newImg(:,:,2) = rgb(:,:,2) .* uint8(maskIdx);
newImg(:,:,3) = rgb(:,:,3) .* uint8(maskIdx);
Finally, I subtracted the computed color-mask from the original image to get my desired border-removed image:
finalImg = rgb - newImg;
Result:
image http://snag.gy/g2X1v.jpg
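For reference, roughly the same border-removal pipeline in OpenCV/Python (a sketch; the filename and the 19x19 ellipse standing in for the radius-9 disk are assumptions):

import cv2
import numpy as np

rgb = cv2.imread('fundus.jpg')  # assumed filename for the fundus image

# mask of pixels that are black in all three channels (the background)
mask = np.all(rgb == 0, axis=2).astype(np.uint8)

# dilate the mask so it also covers the white edge pixels
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (19, 19))
mask = cv2.dilate(mask, se)

# keep only the masked border region, then subtract it from the original
border = rgb * mask[:, :, None]
final = cv2.subtract(rgb, border)
cv2.imwrite('fundus_noborder.jpg', final)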