(CAPTCHA image: http://internationalpropertiesregistry.com/Server/showFile.php?file=%2FUpload%2F02468.gif358455ebc982cb93b98a258fc4d6ee60.gif)
Is there a simple solution in MATLAB?
Easy answer: NOPE
For a very simple approach: read the image as grayscale, threshold it, clean it up, and run it through an OCR program.
%# read the image
img = imread('http://internationalpropertiesregistry.com/Server/showFile.php?file=%2FUpload%2F02468.gif358455ebc982cb93b98a258fc4d6ee60.gif');
%# threshold
bw = img < 150;
%# clean up
bw = bwareaopen(bw,3,4);
%# look at it - the number 8 is not so pretty, the rest looks reasonable
figure,imshow(bw)
Then, figure out whether there is an OCR program that can help, such as this one
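If the Computer Vision Toolbox happens to be available, one hedged option (not necessarily the program linked above) is MATLAB's built-in ocr function, run directly on the cleaned-up binary image:
% restrict recognition to digits; depending on the polarity of bw you may need ~bw instead
results = ocr(bw, 'CharacterSet', '0123456789');
recognizedText = results.Text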
For an even simpler approach:
%# read the image
[img,map] = imread('http://internationalpropertiesregistry.com/Server/showFile.php?file=%2FUpload%2F02468.gif358455ebc982cb93b98a258fc4d6ee60.gif');
%# display
figure,imshow(img,map)
%# and type out the numbers. It's going to be SO much less work than writing complicated code
For a simple one like that, you can probably just run a median filter and OCR.
A median filter will, for every pixel in the image, look at the area around it (usually a 3x3 or 5x5 pixel neighbourhood), determine the median pixel value in that area, and set the pixel to that median value. In a block of the same colour nothing happens: the whole area has the same colour as the pixel under consideration, so the median is the same as the current value (or at least almost the same, allowing for slight colour variations). Noise pixels, on the other hand (a single pixel surrounded by a differently coloured area), will simply disappear, since the median value of the area will be the colour of the pixels around the noise pixel.
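As a minimal illustration of that behaviour (assuming the Image Processing Toolbox is available), medfilt2 performs exactly this kind of neighbourhood median; here it is on a test image with added salt-and-pepper noise:
img = imread('cameraman.tif');              % built-in test image
noisy = imnoise(img, 'salt & pepper', 0.02);
clean = medfilt2(noisy, [3 3]);             % each pixel replaced by the median of its 3x3 neighbourhood
figure, imshowpair(noisy, clean, 'montage')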
As for the OCR (optical character recognition), I'd just use an existing program or library. It would certainly be possible to write an OCR algorithm in MATLAB, but it would be a much bigger exercise than a simple algorithm you can write in an hour; you'd first need to read up on OCR techniques and algorithms. Roughly:
1. Clean up the image.
2. Separate each character into a distinct image.
3. Compare each character against a set of reference characters (a minimal matching sketch follows below).
4. The best matches are the most likely original characters.
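A minimal sketch of steps 3-4, assuming charImg is one segmented character and refImgs is a cell array of same-size reference character images (both names are hypothetical):
scores = zeros(1, numel(refImgs));
for k = 1:numel(refImgs)
    scores(k) = corr2(double(charImg), double(refImgs{k}));  % 2-D correlation coefficient
end
[~, best] = max(scores);   % index of the best-matching reference character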
You may consult this post. I succeeded in cracking a simpler CAPTCHA with the illustrated approach.
Related
I have a task where I should count the number of suit symbols (diamonds, clubs, ...) in an image of a set of playing cards. I have created a template sub-image from my original image (for the diamond, for example) using imcrop in MATLAB, and I have converted both the template and the target image to grayscale.
I'm trying to find the matches of the sub-image in the target image and count the corresponding diamonds in the target image.
Does anyone have a suggestion?
I tried normxcorr2 and got a plot where I can see the area with the highest peak, but I don't have any idea how to compute a count from this.
Any suggestions for algorithms?
Thank you.
Have a look at method A) in Detect repetitive pixel patterns in an image and remove them using matlab (Disclaimer: I'm the author). Delete the rect line and replace the variable template with your (BW) template. Skip the last 3 commands and instead just count how many peaks there are:
idx = bwmorph(idx,'shrink',inf);
numberOfObjects = sum(idx(:))
You obviously will have to adjust some values greatly to get a good result - pattern detection isn't trivial.
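If you prefer to stay with normxcorr2, here is a rough, hedged sketch of turning the correlation map into a count; templateGray and targetGray stand for your grayscale template and target images, and the 0.8 threshold is a guess you would have to tune:
c = normxcorr2(templateGray, targetGray);   % normalized cross-correlation map
peaks = c > 0.8;                            % keep only strong matches (empirical threshold)
peaks = bwmorph(peaks, 'shrink', Inf);      % reduce each blob of high correlation to one pixel
numberOfDiamonds = nnz(peaks)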
I was wondering if anyone knows which kind of filter is applied by SPCImage from the Becker & Hickl system.
I am taking some FLIM data with my system and I want to create the lifetime images. To do so, I want to bin my images in the same way SPCImage does, so I can increase my signal-to-noise ratio. The binning goes 1x1, 3x3, 5x5, etc. I have created a function for 3x3 binning, but each time it gets more complicated...
I want to do it in MATLAB, and maybe there is already a function that can help me with this.
Many thanks for your help.
This question is old, but for anyone else wondering: you want to sum the pixels in a (2M+1) x (2M+1) neighborhood for each plane (M an integer), so I am pretty sure you can treat the problem as a convolution.
% This is your original 3D SDT image.
% I assume that you have ordered the image with the spatial dimensions along the
% first and second dimensions and the time channels along the third dimension.
img = ... % <- your 3D image goes here
% This describes your filter. M=1 means take a one-pixel rectangle around your
% center pixel and add those values to the center, etc. (i.e. M=1 equals a
% total of 3x3 pixels accumulated).
M = 2;
% This is the (2D) filter for your convolution.
filtr = ones(2*M+1, 2*M+1);
% The resulting binned image (3D).
img_binned = convn(img, filtr, 'same');
You should definitely check the result against your calculation, but it should do the trick.
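As a quick sanity check under the same assumptions, the binned value at an interior pixel should equal the explicit sum over its (2M+1) x (2M+1) spatial neighborhood:
r = 10; c = 10;   % any pixel at least M away from the image border
manual = squeeze(sum(sum(img(r-M:r+M, c-M:c+M, :), 1), 2));
binned = squeeze(img_binned(r, c, :));
max(abs(manual - binned))   % should be (numerically) zero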
I think you need to test/investigate image filter functions to apply to this kind of image (fluorescence-lifetime imaging microscopy).
A median filter, as shown here, is good for smoothing things. Alternatively, a weighted moving average filter applied to the image erases the bright spots so that only the broad features are maintained.
So you should review digital image processing in MATLAB.
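A minimal sketch of both suggestions, assuming I is a 2D intensity (or lifetime) image; the kernel sizes are just starting points:
I_med = medfilt2(I, [3 3]);             % median filter: removes isolated bright/dark pixels
h = fspecial('gaussian', [5 5], 1);     % weighted moving average: Gaussian-weighted local mean
I_avg = imfilter(I, h, 'replicate');    % suppresses bright spots, keeps the broad features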
Here I have attached two consecutive frames captured by a CMOS camera with an IR filter. The object (a checkerboard) was stationary at the time of capture, but the difference between the two images is nearly 31000 pixels, which could affect my result. What kind of noise is this? How can I remove it? Please suggest any algorithms or functions that could remove this noise.
Thank you. Sorry for my poor English.
Image 1: http://i45.tinypic.com/2wptqxl.jpg
Image 2: http://i45.tinypic.com/v8knjn.jpg
That noise appears to result from the camera sensor (Bayer-to-RGB conversion); the checkerboard pattern is still visible in it.
Lossy JPEG compression also contributes a lot to the process. You should first get access to the raw images.
From those particular images I'd first try edge detection filters (Sobel horizontal and vertical) to make a mask that selects between some median/local-histogram-equalization treatment for the flat areas and some checkerboard-reducing filter for the edges. The point is that probably no single filter can do well on both the JPEG ringing artifacts and the jagged edges. Then the real question is: what other kinds of images need to be processed? (A rough sketch of the masking idea follows below.)
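A very rough sketch of that masking idea, assuming one frame has been loaded as a grayscale double image I; the Gaussian blur merely stands in for whatever checkerboard-reducing filter you end up choosing, and the threshold is a guess:
gy = imfilter(I, fspecial('sobel'),  'replicate');   % horizontal edges
gx = imfilter(I, fspecial('sobel')', 'replicate');   % vertical edges
edgeMask = sqrt(gx.^2 + gy.^2) > 0.1;                % empirical threshold
flat  = medfilt2(I, [3 3]);                          % mild cleanup for the flat regions
g = fspecial('gaussian', [7 7], 1.5);
edges = imfilter(I, g, 'replicate');                 % heavier smoothing for the jagged edges
out = flat;
out(edgeMask) = edges(edgeMask);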
From the comments: if corner points are to be made exact, then the solution is more likely to search for features (corner points with subpixel resolution), build a mapping from one image's set of corners to the other's, and search for the best affine transformation matrix that converts one set into the other. With this matrix one can then resample the other image.
Fortunately, one can estimate motion vectors with subpixel resolution without brute-force searching all possible subpixel locations: when calculating a matched filter, one gets local maxima as candidates for exact matches. But this is not all there is. One can compute a more precise approximation of the peak location by studying the matched-filter outputs at the nearby pixels. For an exact match the output should be symmetric; otherwise the 'energies' of the matched filter are biased towards the second-best location. (A 2nd-degree polynomial fit plus finding its maximum can work, as sketched below.)
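As a rough sketch of that refinement, assume c is a cross-correlation map (e.g. from normxcorr2) and (r0, c0) is the integer location of a local maximum away from the border; fitting a parabola through the three samples in each direction gives subpixel offsets in (-0.5, 0.5):
dr = 0.5 * (c(r0-1,c0) - c(r0+1,c0)) / (c(r0-1,c0) - 2*c(r0,c0) + c(r0+1,c0));
dc = 0.5 * (c(r0,c0-1) - c(r0,c0+1)) / (c(r0,c0-1) - 2*c(r0,c0) + c(r0,c0+1));
subpixelPeak = [r0 + dr, c0 + dc]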
Looking closely at these images, I must agree with @Aki Suihkonen.
In my view, the main noise comes from the jpeg compression, that causes sharp edges to "ring". I'd try a "de-speckle" type of filter on the images, and see if this makes a difference. Some info that can help you implement this can be found in this link.
In a more quick-and-dirty fashion, you can apply one of the many standard tools. For example, given that the images are a and b:
(i) just smooth the image with a Gaussian filter; this can reduce noise differences between the images by an order of magnitude. For example:
h=fspecial('gaussian',15,2);
a=conv2(a,h,'same');
b=conv2(b,h,'same');
(ii) Reduce Noise By Adaptive Filtering
a = wiener2(a,[5 5]);
b = wiener2(b,[5 5]);
(iii) Adjust Intensity Values Using Histogram Equalization
a = histeq(a);
b = histeq(b);
(iv) Adjust Intensity Values to a Specified Range
a = imadjust(a,[0 0.2],[0.5 1]);
b = imadjust(b,[0 0.2],[0.5 1]);
If your images are supposed to be black and white but you have captured them in grayscale, there could be a difference due to noise.
You can convert the images to black and white by defining a threshold: any pixel with a value below the threshold is assigned 0, and anything at or above it is assigned 1, or whatever your grayscale maximum is (maybe 255).
Assume your image is I, its grayscale levels run from 0 to 255, and you choose a threshold of 100:
I(I < 100) = 0;     % pixels below the threshold become black
I(I >= 100) = 255;  % pixels at or above the threshold become white
Now you have a black and white image. Do the same thing for the other image, and you should get a very small difference if the camera and the subject have not moved.
Does anyone know why the pseudomedian filter is faster than the median filter?
I used medfilt2.m for median filtering and implemented my own pseudomedian filter, which is:
b = strel('square',3);
psmedIm = (0.5*imclose(noisedIm,b)) + (0.5*imopen(noisedIm,b));
where b is a square flat structuring element and noisedIm is an image noised by a salt and pepper noise.
Also I don't understand why the image generated using the pseudomedian filter isn't denoised.
Thank you!
In terms of your speed query, I'd propose that your pseudomedian filter is faster because it doesn't involve sorting. The true median filter requires that you sort elements and find the central value, which takes a fair bit of time.
The reason why your salt and pepper noise isn't removed is that you're always maintaining their effects because you're always using both the min and max values inside the structuring element when you use imclose and imopen. Because you're just weighting each by half, if there's a white pixel, the 0.5 factor contribution from the max function will bump the pixel value up, and vice versa for black pixels.
EDIT: Here's a quick demo I did that helps your pseudomedian behave a little more nicely with salt and pepper noise. The big difference is that it tries to use the 'best parts' of the opened and closed images rather than making them fight it out. I think it works quite well for eliminating the salt and pepper noise you used as an example.
% read a test image and add salt & pepper noise
img = imread('cameraman.tif');
img = imnoise(img, 'salt & pepper', 0.01);
subplot(2,2,1); imshow(img);
% closing removes the black (pepper) pixels, opening removes the white (salt) pixels
b = strel('square', 3);
closed = double(imclose(img, b));
opened = double(imopen(img, b));
subplot(2,2,2); imshow(closed,[]);
subplot(2,2,3); imshow(opened,[]);
% add the corrections from both operations: for a salt or pepper pixel one of the
% differences is (near) zero and the other removes the noise
img = double(img);
img = img + (closed - img) + (opened - img);
subplot(2,2,4); imshow(img,[]);
EDIT: Here's the result of running the code:
EDIT 2: Here's the underlying theory (it's not overly mathematical and based entirely on intuition!)
Salt and pepper noise exists as pure white and pure black pixels scattered randomly. The idea is that the 'closed' and 'opened' images each eliminate one of the two halves: closing removes the black pepper noise and opening removes the white salt noise. So for a noisy pixel, one of the two operations has effectively 'median-ed' it to the correct value; we just don't know which one. The 'incorrect' operation leaves that pixel at exactly the same (white or black) value as the original image, so subtracting the original from it gives zero and doesn't affect anything. The 'correct' operation differs from the original by exactly the amount required to return the pixel to its supposedly correct value, so adding its difference adjusts the image by the right amount. Taking the noisy original and adding both differences to it (equivalently, computing closed + opened - img) therefore gives an image with much of the noise reduced.
I've scanned an old photo with paper texture pattern and I would like to remove the texture as much as possible without lowering the image quality. Is there a way, probably using Image Processing toolbox in MATLAB?
I've tried applying an FFT transform (using a Photoshop plugin), but I couldn't find any clear white spots to paint over. Probably the pattern is not regular enough for this method?
You can see the sample below. If you need the full image I can upload it somewhere.
Unfortunately, you're pretty much stuck in the spatial domain, as the pattern isn't really repetitive enough for Fourier analysis to be of use.
As @Jonas and @michid have pointed out, filtering will help you with a problem like this. With filtering, you face a trade-off between the amount of detail you want to keep and the amount of noise (or unwanted image components) you want to remove. For example, the median filter used by @Jonas removes the paper texture completely (even the round scratch near the bottom edge of the image) but it also removes all texture within the eyes, hair, face and background (although we don't really care about the background so much, it's the foreground that matters). You'll also see a slight decrease in image contrast, which is usually undesirable. This gives the image an artificial look.
Here's how I would handle this problem:
1. Detect the paper texture pattern:
- Apply Gaussian blur to the image (use a large kernel to make sure that all the paper texture information is destroyed).
- Calculate the image difference between the blurred and original images.
- (EDIT 2) Apply Gaussian blur to the difference image (use a small 3x3 kernel).
2. Threshold the above pattern using an empirically determined threshold. This yields a binary image that can be used as a mask.
3. Use median filtering (as mentioned by @Jonas) to replace only the parts of the image that correspond to the paper pattern. A rough MATLAB sketch of these steps follows below.
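A rough end-to-end sketch of these steps, using the same scan that the median-filter answer below loads; the kernel sizes and the threshold are placeholders that would need tuning:
img = im2double(rgb2gray(imread('http://i.stack.imgur.com/JzJMS.jpg')));
% 1) isolate the paper texture: big blur, difference, small 3x3 blur
hBig    = fspecial('gaussian', [31 31], 10);
hSmall  = fspecial('gaussian', [3 3], 0.5);
blurred = imfilter(img, hBig, 'replicate');
pattern = imfilter(img - blurred, hSmall, 'replicate');
% 2) threshold the pattern into a mask (empirical threshold)
mask = abs(pattern) > 0.03;
% 3) replace only the masked pixels by their median-filtered values
med = medfilt2(img, [7 7], 'symmetric');
out = img;
out(mask) = med(mask);
imshow(out)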
Paper texture pattern (before thresholding):
You want as little actual image information as possible to be present in the above image. You'll see that you can very faintly make out the edge of the face (this isn't good, but it's the best I have time for). You also want this paper texture image to be as even as possible (so that thresholding gives equal results across the image). Again, the right-hand side of the image above is slightly darker, meaning that thresholding it well will be difficult.
Final image:
The result isn't perfect, but it has completely removed the highly-visible paper texture pattern while preserving more high-frequency content than the simpler filtering approaches.
EDIT
The filled-in areas are typically plain-colored and thus stand out a bit if you look at the image very closely. You could also try adding some low-strength zero-mean Gaussian noise to the filled-in areas to make them look more realistic. You'd have to pick the noise variance to match the background. Determining it empirically may be good enough.
Here's the processed image with the noise added:
Note that the parts where the paper pattern was removed are more difficult to see because the added Gaussian noise is masking them. I used the same Gaussian distribution for the entire image but if you want to be more sophisticated you can use different distributions for the face, background, etc.
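A tiny hedged sketch of that noise step, reusing out and mask from the sketch above, with a guessed noise level:
sigma = 0.02;                          % noise standard deviation - pick it to match the local background
noise = sigma * randn(size(out));
out(mask) = out(mask) + noise(mask);   % add zero-mean Gaussian noise only where we filled in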
A median filter can help you a bit:
img = imread('http://i.stack.imgur.com/JzJMS.jpg');
%# convert rgb to grayscale
img = rgb2gray(img);
%# apply median filter
fimg = medfilt2(img,[15 15]);
%# show
imshow(fimg,[])
Note that you may want to pad the image first to avoid edge effects.
EDIT: A smaller filter kernel than [15 15] will preserve image texture better, but will leave more visible traces of the filtering.
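Two hedged ways to handle those edge effects: either let medfilt2 pad symmetrically instead of with zeros, or pad explicitly with padarray and crop afterwards.
fimgSym = medfilt2(img, [15 15], 'symmetric');   % option 1: built-in symmetric padding
p = 7;                                           % option 2: pad manually, filter, crop back
padded  = padarray(img, [p p], 'replicate');
fimgPad = medfilt2(padded, [15 15]);
fimgPad = fimgPad(p+1:end-p, p+1:end-p);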
Well, I have tried a different approach using anisotropic diffusion with the 2nd diffusion coefficient, which operates on wider areas.
Here is the output I got:
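If you have a reasonably recent Image Processing Toolbox, imdiffusefilt is one hedged way to try the same idea (this is not necessarily how the output above was produced); its 'quadratic' conduction method is the Perona-Malik coefficient that favours wide regions, and the iteration count is a guess:
img = rgb2gray(imread('http://i.stack.imgur.com/JzJMS.jpg'));
out = imdiffusefilt(img, 'ConductionMethod', 'quadratic', 'NumberOfIterations', 15);
imshowpair(img, out, 'montage')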
From what I can see from the picture, the noise has a relatively high frequency compared to the image itself, so applying a low-pass filter should work. Have a look at the power spectrum abs(fft(...)) to determine the cutoff frequency.
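A minimal sketch of that suggestion: inspect the log spectrum to judge where the noise sits, then apply a simple Gaussian low-pass whose width is just a placeholder:
I = im2double(rgb2gray(imread('http://i.stack.imgur.com/JzJMS.jpg')));
P = abs(fftshift(fft2(I)));                 % magnitude spectrum, DC in the centre
figure, imshow(log(1 + P), [])              % pick a cutoff from this display
g = fspecial('gaussian', [15 15], 2);       % crude low-pass via Gaussian smoothing
lowpassed = imfilter(I, g, 'replicate');
figure, imshowpair(I, lowpassed, 'montage')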