I have an image that shows some random filled circles (e.g. see here). I want to change these circles into irregular shapes. In other words, I want to define a distribution by which I can expand the circles. Clearly, the resulting objects will no longer be circles, because they are expanded according to a variable distribution; see this new deformed circle.
I was wondering if there is any method that can do this. In my first attempt I tried to use image dilation in Matlab, but I have no idea how the dilation "distribution" should be specified.
IM2 = imdilate(IM,SE)
If you want to do it using dilation, a solution could be:
1. Let's say Im is your original image
2. Result = Clone(Im)
3. ImClone = Clone(Im)
4. Randomly delete pixels in ImClone; the number of pixels to delete may be a percentage, or whatever you prefer
5. ImDilate = Dilate(ImClone), with a structuring element of size N
6. Result = Maximum(Result, ImDilate)
If you want deformations of different sizes, iterate steps 3 to 6 with different structuring element sizes.
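A minimal MATLAB sketch of these steps, assuming Im is a binary (logical) image of filled circles; the file name, the deletion percentage and the structuring-element sizes are arbitrary choices you would tune:

Im = imread('circles.png') > 0;            % hypothetical binary input
Result = Im;
for n = [3 5 7]                            % different structuring element sizes
    ImClone = Im;
    ImClone(rand(size(Im)) < 0.7) = 0;     % randomly delete ~70% of the pixels (arbitrary percentage)
    ImDilate = imdilate(ImClone, strel('disk', n));
    Result = Result | ImDilate;            % 'Maximum' of binary images is a logical OR
end
imshow(Result)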
But what you want is more of an elastic deformation. You should take a look at free-form deformation (FFD).
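I won't implement a full FFD here, but as a rough illustration of an elastic deformation in MATLAB you can warp the image with a smoothed random displacement field (sigma and amplitude below are arbitrary, and the input file name is hypothetical):

Im = double(imread('circles.png'));                   % hypothetical greyscale input
[rows, cols] = size(Im);
[X, Y] = meshgrid(1:cols, 1:rows);
sigma = 8; amplitude = 10;                            % smoothness / strength of the warp
G  = fspecial('gaussian', 4*sigma+1, sigma);
dx = amplitude * imfilter(randn(rows, cols), G, 'replicate');
dy = amplitude * imfilter(randn(rows, cols), G, 'replicate');
Deformed = interp2(X, Y, Im, X + dx, Y + dy, 'linear', 0);
imshow(Deformed, [])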
I have RGB museum JPG images. Most of them have image footnotes on one or more sides, and I'd like to remove them. I used to do that manually with paint software. Now I have applied the following Matlab code to remove the image footnotes automatically. I get a good result for some images, but for others it does not remove any border. Can anyone help me update this code so that it works for all images?
rgbIm = im2double(imread('A3.JPG'));
hsv=rgb2hsv(rgbIm);
m = hsv(:,:,2);
foreground = m > 0.06; % value of background
foreground = bwareaopen(foreground, 1000); % or whatever.
labeledImage = bwlabel(foreground);
measurements = regionprops(labeledImage, 'BoundingBox');
ww = measurements.BoundingBox;
croppedImage = imcrop(rgbIm, ww);
In order to remove the boundaries you could use "imclearborder", which checks for labelled components at the boundaries and clears them. Caution! If the ROI touches the boundary, it may be removed as well. To avoid that, you can use "imerode" with a suitable "strel" (a line or a disc) before clearing the borders. How well the method generalizes to all images depends entirely on the "threshold" that separates the foreground and background.
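A minimal sketch of that idea, continuing from the foreground mask in your code (the strel size here is a guess you would have to tune per image):

se = strel('disk', 3);                  % size is an assumption; tune per image
fgEroded = imerode(foreground, se);     % shrink the ROI away from the border
noBorder = imclearborder(fgEroded);     % drop components touching the border
noBorder = imdilate(noBorder, se);      % roughly undo the erosion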
A more generic method would be to try to extract the properties of the footnotes. For instance, if they are just text, you can remove them fairly easily using edge detection and a morphological opening with a line structuring element along the columns (a basic property used for text detection).
Hope it helps.
I could give you a clear idea or method if you upload the image.
For a school project I've built a scanner and connected it to matlab. The scanner scans images (16-by-16 pixels) of handwritten digits from 0 to 9. I'm using a principal component analysis in order to classify the scans. Due to the low accuracy of the scanner, I need to preprocess the scans first, before I can actually send them through the recognition machine.
One of these preprocessing steps is to thicken the lines. So far, I've used a pretty simple averaging filter for this: H = ones(3, 3) ./ 9. This has the problem that the circular gap of the digits 8 and 9 is likely to be "closed". I enclose a picture of all my preprocessing steps, where the problem is visible: the image with the caption "thresholded" still shows the gap, but it has disappeared after the thickening step.
My question is: do you know a better filter for this "thickening" step, one which would not erase the gap? Or do you have an idea for a filter which could be applied after the thickening to produce the desired result? Any other suggestions or hints are also greatly appreciated.
I = imread('numberreco.png');
subplot(1,2,1), imshow(I)
I = rgb2gray(I);
BW = ~im2bw(I, graythresh(I));   % binarize and invert so the digit is the foreground
BW2 = bwmorph(BW, 'thin');       % thin the strokes (one iteration)
I1 = double(I).*BW2;             % keep the original grey values on the thinned strokes
subplot(1,2,2), imshow(uint8(I1))
The gap is kept, and you can start from here...
Not a very general answer, but if you have the Image Processing Toolbox, and your system doesn't depend on having multiple grey levels, then converting to binary images and using the 'thicken' operation from bwmorph() should do exactly what you want.
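For example, assuming BW is your thresholded (binary) digit image, something like:

BW2 = bwmorph(BW, 'thicken', 2);   % grow strokes outward for 2 iterations
                                   % without connecting previously unconnected regions
imshow(BW2)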
Thinking a bit harder, you could also use a suitably thickened binary image as a mask to restore holes - either just elementwise multiply it with the blurred greyscale image or, for more flexibility:
invert it to form a background/holes mask
remove the background with imclearborder() to leave just the holes
optionally dilate the mask
use as a logical index to clear the 'hole' areas of the blurred/brightened greyscale image.
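A rough sketch of those steps, assuming thickenedBW is the thickened binary image and blurred is the blurred/brightened greyscale image (both names are illustrative):

bgmask = ~thickenedBW;                 % 1) invert: background plus holes
holes  = imclearborder(bgmask);        % 2) remove the border-connected background
holes  = imdilate(holes, ones(3));     % 3) optionally dilate the hole mask
blurred(holes) = 0;                    % 4) clear the 'hole' areas in the greyscale image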
Even without the morphological steps you can use a mask to artificially reintroduce the original holes later, e.g.:
bgmask = (thresholdedimage == 0); % assuming 0 == background
holes = imclearborder(bgmask);
... % other processing steps
brightenedimage(holes) = 0; % punch holes in updated image
I've got a question regarding the following scenario.
While post-processing an image, I obtained a contour which is unfortunately doubly connected, as you can see at the bottom line. To make it obvious: what I want is just the outer line.
Therefore I zoomed in and marked the line I want in the large image.
What I want from this selection is only the outer part, which I've marked in green in the next picture. Sorry for my bad drawing skills. ;)
I am using MATLAB with the Image Processing Toolbox. I also tried bwmorph with the 'hbreak' option, but it threw an error.
How do I solve that problem?
If you were successful could you please tell me a bit more about it?
Thank you in advance!
Sincerely
It seems your input image is a bit different from the one you posted, since I couldn't directly collect the branch points (there were too many of them). So, to start handling your problem, I considered a thinning followed by branch point detection. I also dilate the branch points and remove them from the thinned image; this guarantees that there is in fact no connection (4- or 8-connected) between the different segments in the initial image.
f = im2bw(imread('http://i.imgur.com/yeFyF.png'), 0);
g = bwmorph(f, 'thin', Inf);
h = g & ~bwmorph(bwmorph(g, 'branchpoints'), 'dilate');
Since h holds disconnected segments, the following operation collects the end points of all the segments:
u = bwmorph(h, 'endpoints');
Now, to actually solve your problem I did some quick analysis of what you want to discard. Consider two distinct segments, a and b, in h. We say a and b overlap if the end points of one are contained in the other. By contained I simply mean that the starting x point of one is smaller than or equal to the other's, and its ending x point is greater than or equal to the other's. In your case, the "mountain" overlaps with the segment that you wish to remove. To determine which of them to remove, consider their area. But, since these are segments, area is a meaningless term. To handle that, I connected the end points of each segment and used as its area simply the interior points. As you can clearly notice, the area of the overlapped segment at the bottom is very small, so we say it is basically a line, discard it, and keep the "mountain" segment. For this step the image u is of fundamental importance, since with it you have a clear indication of where to start and stop tracking a contour. If you used the image h as is, you would have trouble determining where to start and stop collecting the points of a contour (i.e., the raster order would give you an incorrect overlapping indication).
To reconstruct the segment as a single one (currently you have three of them), consider the points you discarded from g when forming h, and use those that don't belong to the now-removed bottom segment.
I'd also use bwmorph
%# find the branch point
branchImg = bwmorph(img,'branchpoints');
%# grow the pixel to 3x3
branchImg = imdilate(branchImg,ones(3));
%# hide the branch point
noBranchImg = img & ~branchImg;
%# label the three lines
lblImg = bwlabel(noBranchImg);
%# in the original image, mask label #3
%# note that it may not always be #3 that you want to mask
finalImg = img;
finalImg(lblImg==3) = 0;
%# show the result
imshow(finalImg)
This question is related to my previous post Image Processing Algorithm in Matlab on Stack Overflow, for which I already got the results I wanted.
But now I am facing another problem: I am getting some artefacts in the processed images. In my original images (a stack of 600 images) I can't see any artefacts; please see this original image from a fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from.
Also, if they come from the camera's sensor, why can't I see them in my original images? Any idea?
Edit:
I have added the following code suggested by @Jonas. It reduces the artefacts, but does not completely remove them.
% averaging of images
im = D{1}(:,:);
for i = 2:100
    im = imadd(im, D{i}(:,:));
end
im = im/100;
imshow(im,[]);
for i = 1:100
    SD{i}(:,:) = imsubtract(D{i}(:,:), im(:,:));
end
@belisarius has asked for more images, so I am going to upload 4 images of my finger with the speckle pattern and 4 images of the black background (size 1280x1024):
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since these have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, the signal from one column can end up more amplified than that from another.
Depending on the camera, this variability in column intensity can be stable. In that case, you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract the average from each image before running the analysis. This should make the lines disappear. If the lines do not disappear (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = rawImage - meanImg; %(maybe you need to re-cast the meanImg first)
Here is an answer that, in my opinion, will remove the lines more gently than the above-mentioned methods:
im = imread('image.png'); % Original image
imFiltered = im; % The filtered image will end up here
imChanged = false(size(im));% To document the filter performance
% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)),0:255);
colMean = mean(histograms,2);
colStd = std(histograms,0,2);
% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255
    % Find the columns where the number of 'grayLevel' pixels is larger than
    % mean_n_graylevel + 3*std_n_graylevel - that is, columns that contain
    % statistically 'many' pixels with the current 'grayLevel'.
    lineColumns = find(histograms(grayLevel+1,:) > colMean(grayLevel+1)+3*colStd(grayLevel+1));
    % Now remove all 'grayLevel' pixels in lineColumns in the original image
    if(~isempty(lineColumns))
        for col = lineColumns
            imFiltered(:,col) = im(:,col).*uint8(~(im(:,col)==grayLevel));
            imChanged(:,col) = im(:,col)==grayLevel;
        end
    end
end
imshow(imChanged)
figure,imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);
img2 = imdilate(imerode(img,SE),SE);
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
Without really digging into your image processing, I can think of two reasons for this to happen:
The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
This is a side-effect because your processing reduced the dynamic range of the picture, just like quantization. So in fact, these artifacts may have already been in the picture itself prior to the processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with this problem prior to the user ever seeing or getting their hands on the image. Couple questions:
1) Are you dealing with RAW images, or with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is the goal simply to get rid of the lines regardless of the resulting quality in the rest of the image, or is the point to preserve the absolute best image quality? Are you going to perform other processing afterwards?
I agree that those lines are most likely in ALL of your images. There are two reasons for such lines ever showing up in an image. One is a bright scene where the column op-amps get saturated, causing whole columns of your image to take on the brightest value the camera can output. The other is bad op-amps or ADCs (analog-to-digital converters) themselves (most likely not an ADC, as there is normally essentially one ADC for the whole sensor, which would make the whole image bad, not your case). The saturation case is actually much more difficult to deal with (and I don't think this is your problem). Note: too much saturation on a sensor can cause bad pixels and columns to arise in your sensor (which is why they say never to point your camera at the sun).

The bad column problem can be dealt with. Above in another answer, someone had you averaging images. While this may be good for finding out where the bad columns (or bad single pixels, or the noise matrix of your sensor) are (and for that you would have to average while pointing the camera at black, white, essentially solid colors), it isn't the correct answer for getting rid of them. By the way, what I am explaining with the black and white and averaging, and finding bad pixels, etc., is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns are bad, even single pixels.
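As a hedged illustration, in MATLAB you might flag bad columns from a stack of averaged dark frames like this (the cell array D and the 3*std threshold are assumptions borrowed from the other answers):

darkMean = mean(double(cat(3, D{:})), 3);   % average the dark frames, as in the answer above
colMeans = mean(darkMean, 1);               % mean intensity of each column
badCols  = find(abs(colMeans - median(colMeans)) > 3*std(colMeans));  % outlier columns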
If you have this data, one way you could erase the bad columns is:
for each bad column
    for each pixel (x, y) on the bad column
        pixel(x, y) = Average(pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                              pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1))
What this essentially does is replace the bad pixel with a new pixel which is the average of the 6 remaining good pixels around it. The above is an over-simplified version of an algorithm. There are certainly cases where a single bad pixel could be right next to the bad column and shouldn't be used for averaging, or where two or three bad columns sit right next to each other. One could imagine that you would calculate the values for a bad column, then consider that column good in order to move on to the next bad column, and so on.
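A simplified MATLAB version of that idea, which just averages the immediate left and right columns (badCols is assumed to come from your calibration data, and edge handling is kept very basic):

img = double(rawImage);                      % rawImage is assumed to be a 2-D image here
for c = badCols
    left  = img(:, max(c-1, 1));
    right = img(:, min(c+1, size(img, 2)));
    img(:, c) = (left + right) / 2;          % replace the bad column by its neighbours' average
end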
Now, the reason I asked about RAW versus B/W or RGB: if you were processing a RAW image, depending on the build of the sensor itself, it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op-amp. If you could detect this, then you wouldn't necessarily have to throw out the other good sub-pixels' data. Secondly, if you are using an RGB sensor to take a grayscale photo, and you shot it in RAW, then you may be able to calculate your own grayscale pixels. Many sensors, when giving back a grayscale image from an RGB sensor, will simply pass back the green pixel as the overall pixel. This is because it really serves as the luminance of the image, which is why most image sensors implement two green sub-pixels for every red or blue sub-pixel. If this is what they are doing (not ALL sensors do this), then you may have better luck getting rid of just the bad channel's column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
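In MATLAB, applied to a whole RGB image, that would be something like the following (essentially the same weights rgb2gray uses; the file name is hypothetical):

rgb  = double(imread('raw_frame.png'));     % hypothetical RGB frame
gray = 0.299*rgb(:,:,1) + 0.587*rgb(:,:,2) + 0.114*rgb(:,:,3);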
Apologies for the long-winded answer, but I hope this is still informative to someone :-)
Since you cannot see the lines in the original image, they are either there with an intensity difference that is low compared with the original range of the image, or they were added by your processing algorithm.
The shape of the disturbance hints at the first option... (Unless you have an algorithm that processes each row separately.)
It seems like your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtract it from the photo of the finger (prior to any other processing). (Make sure the background is uniform before taking both images.)
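A minimal sketch of that, with hypothetical file names:

bg     = double(imread('background_only.png'));  % same exposure, no finger
finger = double(imread('finger.png'));
corrected = finger - bg;                         % subtract before any other processing
imshow(corrected, [])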