Please suggest how to connect the dotted pixels in an image like the one below:
Original Image
I want to apply OCR to this image. I have tried some morphological operations such as thickening and bridging, but I am not obtaining the expected output (NH5343320).
The original image is also uploaded. On applying horizontal edge detection to the original image, I got the dotted image above. Are there any other methods available for applying OCR to these kinds of images?
I would crop out and fill in a template for each of the available letters. Presumably, that would be the letters [A-Z] and the digits [0-9], like this:
0.png
3.png
Now I would do a sub-image search for each of them in your original image. I am doing this at the command line with ImageMagick, but you could use Matlab, OpenCV, or CImg, or the Python, Perl, PHP, C, or C++ bindings of ImageMagick.
So, I look for the 3 first:
compare -metric rmse -dissimilarity-threshold 1 -subimage-search plate.png 3.png result.png
25607.9 (0.390752) # 498,46
So, the 3 is found at coordinates 498,46. There will be two output files: result-0.png, which looks like this:
and result-1.png, in which the brightest areas show where the match is best:
Likewise with the 0:
compare -metric rmse -dissimilarity-threshold 1 -subimage-search plate.png 0.png result.png
31452.6 (0.479936) # 664,44
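If you preferred Matlab, a rough equivalent of this sub-image search is normalized cross-correlation. A minimal sketch, assuming the same plate.png and 3.png as RGB files and the Image Processing Toolbox:
% Find the best match of the '3' template inside the plate image (sketch)
plate    = rgb2gray(imread('plate.png'));
template = rgb2gray(imread('3.png'));
c = normxcorr2(template, plate);           % normalized cross-correlation
[ypeak, xpeak] = find(c == max(c(:)), 1);  % peak of the correlation surface
% top-left corner of the match, comparable to compare's reported coordinates
yoff = ypeak - size(template, 1);
xoff = xpeak - size(template, 2);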
I am searching for a faster way to blur an image than using GaussianBlur.
The solution I am looking for can be a command-line solution, but I would prefer code in Perl notation.
Currently, we use the Perl ImageMagick API to blur images:
# $image is our Perl object holding an ImageMagick image
# $level is a natural number between 1 and 10
$image->GaussianBlur('x' . $level);
This works fine, but as $level gets higher, the amount of time it consumes seems to grow exponentially.
Question: How can I improve the time used for the blurring operation?
Is there another, faster approach to blur images?
I found that the suggested method of resizing the image to imitate blur makes the output look very pixelated for very large values of sigma, like 25 or more. So I finally arrived at the idea of downscale-blur-enlarge, which produces a very nice result (almost indistinguishable from a simple blur with a large sigma):
# plain slow blur
convert -blur 0x25 sample.jpg blurred_slow.jpg
# much faster
convert -scale 10% -blur 0x2.5 -resize 1000% sample.jpg blurred_fast.jpg
On my i5 2.7 GHz it shows up to a 10x speed-up.
The documentation speaks of the difference between Blur and GaussianBlur.
There has been some confusion as to which operator, "-blur" or "-gaussian-blur", is better for blurring images. First of all, "-blur" is faster, but it does this using a two-stage technique: first in one axis, then in the other. The "-gaussian-blur" operator on the other hand is more mathematically correct, as it blurs in all directions simultaneously. The speed cost between the two can be enormous, by a factor of 10 or more, depending on the amount of blurring involved.
[...]
In summary, the two operators are slightly different, but only minimally. As "-blur" is much faster, use it. I do in just about all the examples involving blurring.
That would simply be:
$image->Blur( 'x' . $level );
But the Perl ImageMagick documentation has the same text for both Blur and GaussianBlur. I can't try it now; you would have to benchmark it yourself.
Blur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
GaussianBlur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
An alternative that the documentation also lists is resizing the image to be very tiny and then enlarging it again.
Using large sigma values for image blurring is very slow. But one technique can be used to speed up this process. This however is only a rough method and could use some mathematical rigor to improve results. Essentially, the reason large blurs are slow is that you need a large window or 'kernel' to merge lots of pixels together, for each and every pixel in the image. However, resize (making the image smaller) does the same thing but generates fewer pixels in the process. The technique is basically: shrink the image, then enlarge it again to generate the heavily blurred result. The Gaussian filter is especially useful for this, as you can directly specify a Gaussian sigma define.
The example command line code is this:
convert rose: -blur 0x5 rose_blur_5.png
convert rose: -filter Gaussian -resize 50% \
-define filter:sigma=2.5 -resize 200% rose_resize_5.png
Not sure if I can still help the OP with this, but I recently tried the same for a blurred screen-lock picture.
I found that omitting the -blur part saves even more calculation time and still delivers great results for a 4K picture:
convert in.png -scale 2.5% -resize 4000% out.png
# real: 0.174s user: 0.144s size: 1.2MiB
convert in.png -scale 10% -blur 0x2.5 -resize 1000% out.png
# real: 0.136s user: 2.117s size: 1.2MiB
convert in.png -blur 0x25 out.png
# real: 2.425s user: 21.408s size: 1KiB
However, you couldn't go lower than 2.5% with 3840x2160; below that, rounding during scaling means the output no longer matches the original dimensions. I guess this minimum value differs for pictures of other sizes.
It should be noted that the resulting file sizes differ noticeably!
Using ImageMagick's convert to barrel-distort a photo to correct a strongly visible pincushion distortion, I provide positive a, b or c values (from a database for my lens + focal length). This results in an image that is corrected, has the original width and height, but includes a non-rectangular, bent/distorted border, as the image is corrected towards its center. Simplified example:
convert rose: -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0' out.png
How can I automatically crop the black, bent border to the largest possible rectangle in the original aspect ratio within the rose?
The ImageMagick website says that a parameter "d" is automatically calculated that could do this (resulting in a linear distortion, effectively zooming into the image and pushing the bent border right outside the image bounds), but the ImageMagick-calculated value seems to aim for something different (v6.6.9 on Ubuntu 12.04). If I guess and manually specify a "d", I can get the intended result:
convert rose: -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0 +0.6' out.png
The given formula a+b+c+d=1 does not seem to yield a proper d for my cropping case. Also, d seems to depend on the aspect ratio of the image and not only on a/b/c. How do I make ImageMagick crop the image, or how do I calculate a proper d?
Update
I found Fred's ImageMagick script innercrop (http://www.fmwconcepts.com/imagemagick/innercrop/index.php) that does part of what I need, but it has drawbacks and is no solution for me. It assumes arbitrary outer areas, so it takes long to find the cropping rectangle. It does not work within Unix pipes, and it does not keep the original aspect ratio.
Update 2
Contemplating the problem makes me think that calculating a "d" is not the solution, as changing d introduces more or less bending and seems to do more than just zoom. The d=1-(a+b+c) that is calculated by ImageMagick results in the bent image touching the upper/lower bounds (for landscape images) or the left/right bounds (for portrait images). So I think the proper solution would be to calculate where one of the new 4 corners will be given a/b/c/d, and then crop to those new corners, as sketched below.
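A rough sketch of that corner calculation in Matlab notation, assuming ImageMagick's documented radius normalization (r = 1 at half the smaller image dimension) and the sampling formula Rsrc = r*(a*r^3 + b*r^2 + c*r + d):
% Where does the original corner land after -distort Barrel? (sketch)
a = 0.0; b = 0.1; c = 0.0;
d = 1 - (a + b + c);                        % ImageMagick's default d
w = 700; h = 700;                           % example image size
rcorner = hypot(w/2, h/2) / (min(w, h)/2);  % normalized radius of the source corner
% destination radius r at which the source corner is sampled:
f = @(r) r .* (a*r.^3 + b*r.^2 + c*r + d) - rcorner;
rdest = fzero(f, rcorner);
% the corner moves inward by this factor; crop to the rectangle through
% the four points at rdest/rcorner of the half-diagonals
shrink = rdest / rcorner;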
The way I understand the docs, you do not use commas to separate the parameters for the barrel-distort operator.
Here is an example image, alongside the output of the two commands you gave:
convert o.png -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0' out0.png
convert o.png -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0 +0.6' out1.png
I created the example image in order to better visualize what you possibly want to achieve.
However, I do not see the point you stated about the automatically calculated parameter 'd', and I do not see the effect you stated about using 'd=+0.6'...
I'm not sure I understand your desired result correctly, so I'm assuming you want the area marked by the yellow rectangle cropped.
The image on the left is out0.png as created by the first command above.
In order to guess the required coordinates, we have to determine the image dimensions first:
identify out0.png
out0.png PNG 700x700 700x700+0+0 8-bit sRGB 36KB 0.000u 0:00.000
The image in the center is marked up with the white rectangle. The rectangle is there so you can look at it and tell me if that is the region you want cropped. The image on the right is the cropped image (without scaling it back to the original size).
Is this what you want? If yes, I can possibly update the answer in order to automatically determine the required coordinates of the cropping. (For now I've done it based on guessing.)
Update
I think you may have misunderstood the purpose of the barrel-distortion operation. It is meant for correcting a slight barrel distortion, as produced by camera lenses. The 3 parameters a, b and c to be used for any specific combination of camera, lens and current zoom could possibly be stated in your photo's EXIF data. The formula where a+b+c+d = 1 is meant to be used when the new, distortion-corrected image should have the same dimensions as the original (distorted) image.
So to imitate the barrel-correction, we should probably use the second image from the last row above as our input:
convert out3.png -virtual-pixel gray -distort barrel '0 -0.2 0' corrected.png
Result:
In the image shown, I need to find the center points of the white blobs, or else segment each white blob from the background (to get an image which contains only that blob).
What is an efficient way to do this?
Seems this is exactly what you are looking for: Image Segmentation Tutorial ("BlobsDemo").
It contains a demo illustrating simple blob detection, measurement, and filtering. First it finds all the objects, then it filters the results to pick out objects of certain sizes. The basic concepts of thresholding, labeling, and regionprops are demonstrated with examples.
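A minimal sketch of that thresholding/labeling/regionprops route (the file name blobs.png is a placeholder; requires the Image Processing Toolbox):
% Detect white blobs, then measure their centers and areas (sketch)
I = imread('blobs.png');
if size(I, 3) == 3, I = rgb2gray(I); end
bw = im2bw(I, graythresh(I));          % threshold: white blobs become true
bw = bwareaopen(bw, 20);               % drop specks smaller than 20 pixels
[L, n] = bwlabel(bw);                  % label the n connected blobs
s = regionprops(L, 'Centroid', 'Area');
centers = cat(1, s.Centroid);          % one [x y] row per blob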
You need to use the watershed algorithm for segmentation.
http://www.mathworks.com/help/images/ref/watershed.html
After segmenting, use the regionprops function.
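A rough sketch of that route, assuming bw is a binary mask of the white blobs (e.g. from thresholding as above):
% Separate touching blobs via a distance-transform watershed (sketch)
D = -bwdist(~bw);                 % negative distance to the background
D(~bw) = -Inf;                    % background becomes its own catchment basin
Lw = watershed(D);                % one label per separated blob
s = regionprops(Lw, 'Centroid');  % centers of the separated regions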
For a school project I've built a scanner and connected it to Matlab. The scanner scans images (16-by-16 pixels) of handwritten digits from 0 to 9. I'm using a principal component analysis to classify the scans. Due to the low accuracy of the scanner, I need to preprocess the scans before I can actually send them through the recognition machine.
One of these preprocessing steps is to thicken the lines. So far, I've used a pretty simple averaging filter for this: H = ones(3, 3) ./ 9. This has the problem that the circular gap of the digits 8 and 9 is likely to be "closed". I enclose a picture of all my preprocessing steps, where the problem is visible: the image with the caption "thresholded" still shows the gap, but it disappears after the thickening step.
My question is: Do you know a better filter for this thickening step, one which would not erase the gap? Or do you have an idea for a filter which could be applied after the thickening to produce the desired result? Any other suggestions or hints are also greatly appreciated.
I = imread('numberreco.png');
subplot(1,2,1), imshow(I)
I = rgb2gray(I);
BW = ~im2bw(I, graythresh(I));   % digits as foreground
BW2 = bwmorph(BW, 'thin');       % thin the strokes
I1 = double(I) .* BW2;           % keep grey levels only on the thinned strokes
subplot(1,2,2), imshow(uint8(I1))
The gap is kept, and you can start from here...
Not a very general answer, but if you have the Image Processing Toolbox, and your system doesn't depend on having multiple grey levels, then converting to binary images and using the 'thicken' operation from bwmorph() should do exactly what you want.
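A minimal sketch of that route (assuming dark digits on a light background, as in the posted image; bwmorph's 'thicken' preserves the Euler number, so the loops of 8 and 9 stay open):
% Binarize, then thicken without closing the holes (sketch)
I = rgb2gray(imread('numberreco.png'));
bw = ~im2bw(I, graythresh(I));       % digits as foreground
thick = bwmorph(bw, 'thicken', 2);   % grow the strokes by about 2 pixels
imshow(thick)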
Thinking a bit harder, you could also use a suitably thickened binary image as a mask to restore holes - either just elementwise multiply it with the blurred greyscale image or, for more flexibility:
invert it to form a background/holes mask
remove the background with imclearborder() to leave just the holes
optionally dilate the mask
use it as a logical index to clear the 'hole' areas of the blurred/brightened greyscale image.
Even without the morphological steps you can use a mask to artificially reintroduce the original holes later, e.g.:
bgmask = (thresholdedimage == 0); % assuming 0 == background
holes = imclearborder(bgmask);
... % other processing steps
brightenedimage(holes) = 0; % punch holes in updated image
I have a 2D color-map plot created with imagesc and want to export it as a .eps file using
print -depsc.
The problem is that the "original" image data is from a rather small matrix (131 x 131). When I view the image in the matlab figure window, I can see all the individual pixels if I zoom a bit closer.
When I export to eps, however, there seems to be some interpolation or anti-aliasing going on, in that neighboring pixels get blurred/blended into each other. I don't get the problem if I export a high-resolution tiff, but that format is not an option (the publisher demands eps).
How can I obtain an eps that preserves the pixely structure of my image without applying interpolation or anti-aliasing?
The blurring actually depends on the rendering software your viewer application or printer uses. To get good results all the time, make each pixel in your image an 8x8 block of pixels of the same color. The blurring then only affects the pixels at the edge of each block. 8x8 blocks are best as they compress without nasty artifacts using DCT compression (sometimes used in eps files).
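For instance, a sketch in Matlab, with data standing in for the 131x131 matrix:
% Replicate every data pixel into an 8x8 block before exporting (sketch)
big = kron(data, ones(8));   % each value becomes an 8x8 block of that value
imagesc(big); axis image
print('-depsc', 'figure_blocks.eps')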
Old question, but highly ranked in Google, so here is my answer:
Open the .eps file with a text editor, search for "Interpolate", and change the following "true" to "false". Repeat that step for all Interpolate statements.
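If you have many files, the same edit can be scripted, e.g. from Matlab (a sketch; figure.eps is a placeholder name):
% Rewrite every "Interpolate true" as "Interpolate false" in the EPS
txt = fileread('figure.eps');
txt = strrep(txt, 'Interpolate true', 'Interpolate false');
fid = fopen('figure.eps', 'w');
fwrite(fid, txt);
fclose(fid);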
It might also depend on the viewer you're using, but probably just because some viewers ignore the Interpolate statements...
I had the same problem using plot2svg in Matlab and then exporting from Inkscape to eps.