DCT filter image in Matlab

I used the function below to filter an image. Basically, it sets all DCT coefficients to 0 except for the top-left 8x8 elements, which means it filters out all the high-frequency content and keeps only the low-frequency part.
function I_out = em_DCT_filter(I_in, N)
I_trim = double(I_in) - 128;                        % shift to a signed range, JPEG-style
MYDCT = dctmtx(N);                                  % N-by-N DCT transform matrix
dct = @(block_struct) MYDCT*block_struct.data*MYDCT';
B = blockproc(I_trim, [N N], dct);                  % block-wise forward DCT
mask = zeros(N, N);
mask(1:N/4, 1:N/4) = 1;                             % keep only the low-frequency corner
AnselmMask = @(block_struct) block_struct.data.*mask;
BMask = blockproc(B, [N N], AnselmMask);            % zero out the high frequencies
InverseDct = @(block_struct) MYDCT'*block_struct.data*MYDCT;
BReversedl = blockproc(BMask, [N N], InverseDct);   % block-wise inverse DCT
I_out = uint8(BReversedl + 128);
After processing, the image looks like this:
I need the function to remove the details in the image (e.g. the patterns on the sweater, the shadow on the pants), which seems to be working fine. However, the function also makes the image very fuzzy. How can I remove the details while keeping the region structure clear? For example, the sweater/pants regions should become more uniformly coloured than before.

You basically applied a "local low-pass filter".
No wonder the result looks "fuzzy": you removed the high-frequency data we usually interpret as details and "sharpness".
What you really should do is remove the high-frequency details yet keep large edges intact.
A good way to do this is to use something like Anisotropic Diffusion.
By tuning the parameters you'll be able to achieve the look you're after.
In general, methods like this are called image abstraction.
Here's a great open-source implementation of advanced Anisotropic Diffusion:
https://github.com/RoyiAvital/Fast-Anisotropic-Curvature-Preserving-Smoothing
Work with it; if you can contribute, that would be amazing.
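If you have Image Processing Toolbox R2018a or later, a minimal sketch using the built-in Perona-Malik anisotropic diffusion might look like this (the file name and parameter values are illustrative, not tuned):
% Edge-preserving smoothing with anisotropic (Perona-Malik) diffusion.
% Requires Image Processing Toolbox R2018a+; 'person.png' is a placeholder.
I = imread('person.png');
if ndims(I) == 3, I = rgb2gray(I); end
J = imdiffusefilt(I, ...
    'NumberOfIterations', 25, ...   % more iterations -> flatter regions
    'GradientThreshold', 10);       % gradients below this are smoothed away
imshowpair(I, J, 'montage');        % details removed, large edges kept
Raising 'NumberOfIterations' gives a stronger abstraction; raising 'GradientThreshold' starts smoothing across weaker edges as well.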

Related

Location based segmentation of objects in an image (in Matlab)

I've been working on an image segmentation problem and can't seem to get a good idea for my most recent problem.
This is what I have at the moment:
Click here for image. (This is only a generic example.)
Is there a robust algorithm that can automatically discard the right square as not belonging to the group of the other four squares (which I know should always be stacked more or less on top of each other)?
It can sometimes be the case that one of the stacked boxes is not found, so there's a gap, or that the bogus box is on the left side.
Your input is greatly appreciated.
If you have a way of producing BW images like your example:
s = regionprops(BW, 'centroid');   % centroid of each connected region
centroids = cat(1, s.Centroid);    % stack into an n-by-2 [x y] matrix
xpos = centroids(:,1); should then give the x-positions of the boxes.
From here you have multiple ways to go, depending on whether you always have just one separated box and one set of grouped boxes or not. For the "one bogus box far away, rest closely grouped" case (I'm away from Matlab, so this is unchecked) you could even do something as simple as:
d = abs(xpos - median(xpos));        % distance from the typical x-position
bogusbox = centroids(d==max(d),:);   % centroid furthest from the rest
imshow(BW);
hold on;
plot(bogusbox(1), bogusbox(2), 'r*');
Making something that's robust for your actual use case, which I assume doesn't consist of neat boxes, is another matter; as suggested in the comments, you need some idea of how closely grouped your good boxes are and how far away the bogus box(es) will be.
For example, you could use other regionprops measurements such as 'BoundingBox' or 'Extrema', define some measure of how much the boxes overlap in x relative to each other, and then group using that (this could be made to work even if you have multiple stacks in an image); see the sketch below.
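As a rough sketch of that bounding-box idea (unchecked; BW is your binary image, and the threshold of half the narrowest box width is an assumption you would tune):
% Group regions by how much their bounding boxes overlap in x.
s  = regionprops(BW, 'BoundingBox');
bb = cat(1, s.BoundingBox);          % each row is [x y width height]
x1 = bb(:,1);                        % left edges
x2 = bb(:,1) + bb(:,3);              % right edges
n  = size(bb, 1);
xOverlap = zeros(n);
for i = 1:n
    for j = 1:n
        % length of the x-interval shared by boxes i and j
        xOverlap(i,j) = max(0, min(x2(i), x2(j)) - max(x1(i), x1(j)));
    end
end
% Boxes sharing at least half the narrowest box width count as grouped;
% a region that overlaps nothing but itself is a candidate bogus box.
grouped = xOverlap > 0.5 * min(bb(:,3));
isBogus = sum(grouped, 2) == 1;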

artifacts in processed images

This question is related to my previous post, Image Processing Algorithm in Matlab, on Stack Overflow, where I already got the results I wanted.
But now I am facing another problem: I am getting some artefacts in the processed images. In my original images (a stack of 600 images) I can't see any artefacts; please see this original image of a fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from.
Also, if they come from the camera's sensor, why can't I see them in my original images? Any idea?
Edit:
I have added the following code suggested by @Jonas. It reduces the artefacts, but does not completely remove them.
% averaging of images
im = D{1}(:,:);
for i = 2:100
    im = imadd(im, D{i}(:,:));
end
im = im/100;
imshow(im, []);
for i = 1:100
    SD{i}(:,:) = imsubtract(D{i}(:,:), im(:,:));
end
@belisarius has asked for more images, so I am going to upload 4 images of my finger with a speckle pattern and 4 images of the black background (size 1280x1024):
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since these have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, the signal from one column can come out more amplified than from another.
Depending on the camera, this variability in column intensity can be stable. In that case, you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract the average from each image before running the analysis. This should make the lines disappear. If the lines do not disappear (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = rawImage - meanImg; % (you may need to cast rawImage to double first to avoid clipping)
Here is an answer that, in my opinion, will remove the lines more gently than the methods mentioned above:
im = imread('image.png');       % Original image
imFiltered = im;                % The filtered image will end up here
imChanged = false(size(im));    % To document the filter performance
% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)), 0:255);
colMean = mean(histograms, 2);
colStd = std(histograms, 0, 2);
% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255
    % Find the columns where the number of 'grayLevel' pixels is larger than
    % mean_n_graylevel + 3*std_n_graylevel, i.e. columns that contain
    % statistically 'many' pixels with the current 'grayLevel'.
    lineColumns = find(histograms(grayLevel+1,:) > colMean(grayLevel+1)+3*colStd(grayLevel+1));
    % Now remove all 'grayLevel' pixels in lineColumns in the original image
    if ~isempty(lineColumns)
        for col = lineColumns
            imFiltered(:,col) = im(:,col).*uint8(~(im(:,col)==grayLevel));
            % accumulate with |, so earlier gray levels are not overwritten
            imChanged(:,col) = imChanged(:,col) | (im(:,col)==grayLevel);
        end
    end
end
imshow(imChanged)
figure, imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);                 % short horizontal line element
img2 = imdilate(imerode(img,SE),SE);    % erosion then dilation = opening
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
Without really digging into your image processing, I can think of two reasons for this to happen:
1. The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
2. This is a side effect of your processing reducing the dynamic range of the picture, much like quantization. So in fact, these artifacts may have already been in the picture itself prior to the processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with this problem prior to the user ever seeing or getting their hands on the image. Couple questions:
1) Are you dealing with RAW images, or with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is it simply to get rid of the lines regardless of the resulting quality in the rest of the image, or is it to preserve the absolute best image quality? Are you going to perform other processing afterwards?
I agree that those lines are most likely in ALL of your images. There are two reasons for such lines to show up in an image. One would be a bright scene where the column op-amps get saturated, causing whole columns of your image to take the brightest value the camera can output. Another reason could be bad op-amps or ADCs (Analog to Digital Converters) themselves (most likely not an ADC, as normally there is essentially one ADC for the whole sensor, which would make the whole image bad, and that is not your case). The saturation case is actually much more difficult to deal with (and I don't think it is your problem). Note: too much saturation on a sensor can cause bad pixels and columns to arise in your sensor (which is why they say never to point your camera at the sun).
The bad column problem can be dealt with. In another answer above, someone had you averaging images. While this may be good for finding out where the bad columns (or bad single pixels, or the noise matrix of your sensor) are (you would have to average while pointing the camera at black, white, and essentially solid colors), it isn't the correct way to get rid of them. By the way, what I am describing with the black and white images, averaging, and finding bad pixels is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns are bad, even single pixels.
If you have this data, one way you could erase the columns is:
for each bad column
    for each pixel (x, y) on the bad column
        pixel(x, y) = Average(pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                              pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1))
What this essentially does is replace each bad pixel with the average of the six good pixels around it. The above is an over-simplified version of the algorithm. There are certainly cases where a single bad pixel could be right next to the bad column and shouldn't be used for averaging, or where two or three bad columns sit right next to each other. One can imagine calculating the values for a bad column, then considering that column good in order to move on to the next bad column, and so on.
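As a hedged MATLAB sketch of the column-repair idea above (the file name and the badCols vector are hypothetical stand-ins for your calibration result):
% Replace each bad column with the average of the six surrounding good
% pixels, i.e. a vertical 3-tap average of the left and right neighbours.
% Assumes badCols holds no border columns and no adjacent bad columns.
img = double(imread('image.png'));   % hypothetical input
badCols = [17 243 880];              % hypothetical calibration result
k = [1; 1; 1] / 3;                   % vertical 3-pixel averaging kernel
for c = badCols
    left  = conv2(img(:, c-1), k, 'same');   % smoothed left neighbour
    right = conv2(img(:, c+1), k, 'same');   % smoothed right neighbour
    img(:, c) = (left + right) / 2;          % mean of the six neighbours
end
img = uint8(img);
(The zero padding of conv2 slightly darkens the top and bottom rows; good enough for a sketch, but a real implementation would handle the borders.)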
Now, the reason I asked about RAW versus B/W or RGB: if you were processing a RAW image, depending on the build of the sensor itself, it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op-amp. If you could detect this, then you wouldn't necessarily have to throw out the other good sub-pixels' data. Secondly, if you are using an RGB sensor to take a grayscale photo and you shot it in RAW, then you may be able to calculate your own grayscale pixels. Many sensors, when giving back a grayscale image from an RGB sensor, will simply pass back the green pixel as the overall pixel. This is because it essentially serves as the luminance of the image, which is why most image sensors implement two green sub-pixels for every red or blue sub-pixel. If this is what your sensor does (not ALL sensors do this), then you may have better luck getting rid of just the bad channel's column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
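A hedged MATLAB equivalent of that formula ('photo.png' is a placeholder; these are the standard BT.601 weights, essentially what rgb2gray uses):
% Manual BT.601 luma conversion from an RGB image.
img  = im2double(imread('photo.png'));
gray = 0.299*img(:,:,1) + 0.587*img(:,:,2) + 0.114*img(:,:,3);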
Apologies for the long-winded answer, but I hope it is still informative to someone :-)
Since you cannot see the lines in the original image, they are either present with a low intensity difference compared with the image's original range, or they were added by your processing algorithm.
The shape of the disturbance hints at the first option... (unless you have an algorithm that processes each row separately).
It seems your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtracting it from the photo of the finger (prior to other processing). (Make sure the background is uniform before taking both images.)
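A minimal sketch of that subtraction (file names are hypothetical; casting to double avoids uint8 clipping of negative differences):
% Subtract a uniform-background shot from the finger shot, then rescale.
bg     = double(imread('background.png'));   % background-only frame
finger = double(imread('finger.png'));       % same exposure and settings
flat   = finger - bg;                        % fixed column pattern cancels out
imshow(flat, []);                            % [] rescales to the data range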

Perl - Ratio of homogeneous areas of an image

I would like to check whether an image has a lot of homogeneous areas. Therefore I would like to compute some kind of value for an image that expresses the amount/size of its homogeneous areas (e.g. that value could have a range from 0 to 5).
Instead of a value there could be some kind of classification as well.
[many homogeneous areas -> value/class 5 ; few homogeneous areas -> value/class 0]
I would like to do that in perl. Is there a package/function or something like that?
What you want seems to be an area of image processing research which I am not familiar with. However, GraphicsMagick's mogrify utility has a -segment option:
Use -segment to segment an image by analyzing the histograms of the color components and identifying units that are homogeneous with the fuzzy c-means technique. The scale-space filter analyzes the histograms of the three color components of the image and identifies a set of classes. The extents of each class is used to coarsely segment the image with thresholding. The color associated with each class is determined by the mean color of all pixels within the extents of a particular class. Finally, any unclassified pixels are assigned to the closest class with the fuzzy c-means technique.
I don't know if this is any use to you. You might have to hit the library on this one and read some research. You do have access to this through PerlMagick as well; however, it does not look like it gives access to the internals, it just produces an image based on the parameters.
In my tests (without really understanding what the parameters do), photos turned entirely black, whereas PNG images with large areas of similar colors were reduced to a sort of an average color. Whether you can use that fact to develop a measure is an open question I am not going to investigate ;-)
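For reference, a command-line invocation for such experiments might look like this (the 1x1.5 cluster/smoothing thresholds are only illustrative, not tuned values):
gm mogrify -segment 1x1.5 image.png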

How do I detect an instance of an object in an image?

I have an image containing several specific objects. I would like to detect the positions of those objects in this image. To do that I have some model images containing the objects I would like to detect. These images are well cropped around the object instance I want to detect.
Here is an example:
In this big image,
I would like to detect the object represented in this model image:
Since you originally posted this as a 'gimme-da-codez' question, showing absolutely no effort, I'm not going to give you the code. I will describe the approach in general terms, with hints along the way and it's up to you to figure out the exact code to do it.
Firstly, if you have a template, a larger image, and you want to find instances of that template in the image, always think of cross-correlation. The theory is the same whether you're processing 1D signals (called a matched filter in signal processing) or 2D images.
Cross-correlating an image with a known template gives you a peak wherever the template is an exact match. Look up the function normxcorr2 and understand the example in the documentation.
Once you find the peak, you'll have to account for the offset from the actual location in the original image. The offset is related to the fact that cross-correlating an N-point signal with an M-point signal results in an (N + M - 1)-point output. This should be clear once you read up on cross-correlation, but you should also look at the example in the doc I mentioned above to get an idea.
Once you do these two, then the rest is trivial and just involves cosmetic dressing up of your result. Here's my result after detecting the object following the above.
Here's a few code hints to get you going. Fill in the rest wherever I have ...
%# read & convert the image
imgCol = imread('http://i.stack.imgur.com/tbnV9.jpg');
imgGray = rgb2gray(imgCol);
obj = rgb2gray(imread('http://i.stack.imgur.com/GkYii.jpg'));
%# cross-correlate and find the offset
corr = normxcorr2(...);
[~,indx] = max(abs(corr(:))); %# Modify for multiple instances (generalize)
[yPeak, xPeak] = ind2sub(...);
corrOffset = [yPeak - ..., xPeak - ...];
%# create a mask
mask = zeros(size(...));
mask(...) = 1;
mask = imdilate(mask,ones(size(...)));
%# plot the above result
h1 = imshow(imgGray);
set(h1,'AlphaData',0.4)
hold on
h2 = imshow(imgCol);
set(h2,'AlphaData',mask)
Here is the answer that I was about to post when the question was closed. I guess it's similar to yoda's answer.
You can try to use normalized cross-correlation:
im=rgb2gray(imread('di-5Y01.jpg'));
imObj=rgb2gray(imread('di-FNMJ.jpg'));
score = normxcorr2(imObj,im);
imagesc(score)
The result is: (As you can see, the whitest point corresponds to the position of your object.)
The MathWorks has a classic demo of image registration using the same technique as in @yoda's answer:
Registering an Image Using Normalized Cross-Correlation

MATLAB - Restore central sub-section of an image

So, I have a 512x512 distorted image, but what I'm trying to do is restore only a 400x400 centrally-positioned subsection of the image while it is still distorted outside of it. How do I go about implementing something like that?
I was thinking of having a for loop within a for loop, like
for row = 57:457
    for col = 57:457
        % some filter in here
    end
end
But I'm not quite sure what to do next...
As a general rule, you can do a lot of things in MATLAB without loops using vectorization instead. As discussed in the comments below your question, there are filtering functions included with MATLAB such as medfilt2, wiener2 or imfilter which all work on two-dimensional images directly without the need for any loops.
To restore only the center part of your image, you apply the filter to the full image, store the result in a temporary variable, and then copy the part that you want into your distorted image:
tmpimage = medfilt2(distortedimage);
finalimage = distortedimage;
finalimage(57:456,57:456)=tmpimage(57:456,57:456);
Of course if you don't care about edge effects during the reconstruction, you can just call the reconstruction for the part that interests you and avoid the tmpimage:
finalimage = distortedimage;
finalimage(57:456,57:456)=medfilt2(distortedimage(57:456,57:456));
Note how the sizes in an assignment need to match: you can't assign finalimage(57:456,57:456)=medfilt2(distortedimage), since the right-hand side produces a 512-by-512 matrix which doesn't fit into the 400-by-400 center of finalimage.