I have a 360x360 image and I want to remove the lines in it.
A portion of it has periodic noisy lines. I am working in MATLAB.
I tried a median filter, but it is not working. How can I denoise this image and remove the lines?
I tried this:
% img holds the 360x360 image ("image" is avoided as a variable name since it shadows a built-in)
[rows, columns, numberOfColorChannels] = size(img);
subplot(2, 2, 1);
imshow(img, []);
% column-wise mean gives the horizontal intensity profile
horizontalProfile = mean(img);
subplot(2, 2, [2, 4]);
plot(horizontalProfile, 'b-');
grid on;
% sliding-window envelopes of the profile
bottomEnvelope = movmin(horizontalProfile, 10);
upperEnvelope = movmax(horizontalProfile, 10);
deltaGL = mean(upperEnvelope - bottomEnvelope); % average amplitude of the line pattern
hold on;
plot(bottomEnvelope, 'r-', 'LineWidth', 2);
plot(upperEnvelope, 'r-', 'LineWidth', 2);
% Compute midline
midline = (bottomEnvelope + upperEnvelope) / 2;
plot(midline, 'm-', 'LineWidth', 2);
% dim the columns whose mean lies above the midline
columnsToDim = horizontalProfile > midline;
img(:, columnsToDim) = img(:, columnsToDim) - deltaGL;
subplot(2, 2, 3);
imshow(img, []);
But that did not work well.
I've uploaded the image data to Google Drive.
This is a perfect use case for the Fast Fourier Transform (FFT).
The FFT converts an image from the spatial domain to the frequency domain. Periodic noise (the vertical lines in your case) shows up as concentrated peaks in the frequency domain, so you can suppress it by zeroing the corresponding frequency components and transforming back. There are plenty of sources where you can read up on this, so I leave that part to you.
Here is my approach.*
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('coin.png',0)
# get the frequency domain
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
magnitude_spectrum = 20*np.log(np.abs(fshift))
# suppress the vertical lines in the spatial domain by zeroing
# the corresponding horizontal band in the frequency domain (sparing the region around DC)
rows, cols = img.shape
crow,ccol = rows//2 , cols//2
fshift[crow-5:crow+6, 0:ccol-10] = 0
fshift[crow-5:crow+6, ccol+11:] = 0
magnitude_spectrum_no_vertical = 20*np.log(np.abs(fshift) + 1)  # +1 avoids log(0) in the zeroed band
# get the spatial domain back
f_ishift = np.fft.ifftshift(fshift)
img_back = np.fft.ifft2(f_ishift)
img_back = np.real(img_back)
This is the output image:
Feel free to play around with different approaches: applying a Gaussian filter before the FFT to improve the outcome, masking the background, and so on.
*: Sorry I have no MATLAB. It should be easy to port my Python script to MATLAB, though.
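That said, here is a rough MATLAB sketch of the same notch filtering (untested; the file name coin.png and the band sizes are simply carried over from the Python script above):
% rough MATLAB sketch of the FFT notch filtering above (untested)
img = im2double(imread('coin.png')); % file name assumed, as in the Python script
% frequency domain, with DC shifted to the center
F = fftshift(fft2(img));
[rows, cols] = size(img);
crow = floor(rows/2) + 1; % center row (1-based)
ccol = floor(cols/2) + 1; % center column
% zero a horizontal band of frequencies, sparing the region around DC
F(crow-5:crow+5, 1:ccol-11) = 0;
F(crow-5:crow+5, ccol+11:end) = 0;
% back to the spatial domain
img_back = real(ifft2(ifftshift(F)));
figure, imshow(img_back, []);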
I separated the high-frequency (HF) and low-frequency (LF) components of an image. After this step, I applied some denoising technique to the HF and LF components. Afterwards I want to combine them back together. How can I do this?
I used the code below for the decomposition:
%// Load an image
Orig = double(rgb2gray(imread('lena.jpg')));
O=ROFdenoise(Orig, 12);
O=uint8(O);
figure, imshow(O)
%// Transform
Orig_T = dct2(Orig);
%// Split between high- and low-frequency in the spectrum (*)
cutoff = round(0.5 * 226);
High_T = fliplr(tril(fliplr(Orig_T), cutoff));
Low_T = Orig_T - High_T;
%// Transform back
High = idct2(High_T);
Low = idct2(Low_T);
I've commented out ROFdenoise because I don't know what it does. If you've split your image in the frequency domain, you want to combine it back together in the frequency domain too. Also, I've added some plotting to make it easier to see what's happening.
%// Load an image
Orig = double(rgb2gray(imread('Lenna.png')));
%O=ROFdenoise(Orig, 12);
O=Orig; % No denoising before DCT
O=uint8(O);
figure(1), subplot(2,2,1), imshow(O), title('Before')
%// Discrete Cosine Transform
T = dct2(Orig);
%// Split between high- and low-frequency in the spectrum (*)
cutoff = round(0.5 * 226);
highT = fliplr(tril(fliplr(T), cutoff));
lowT = T - highT;
%// "Denoise": here, simply zero out the high-frequency part
highT = 0*highT;
subplot(2,2,2), imshow(highT), title('High T')
subplot(2,2,4), imshow(lowT), title('Low T')
%// Combine back
denoiseT = highT + lowT;
%// Transform back
denoiseO = uint8(idct2(denoiseT));
subplot(2,2,3), imshow(denoiseO), title('After')
Also, here is Lenna.
I'm trying to reconstruct the shape of a sail using the 3D sparse reconstruction method. I took two pictures with two cameras, and I managed to calibrate both cameras. The checkerboard is visible in the pictures, and the code I wrote detects it properly.
Now, since my pictures are black and white and the quality of the cameras is quite low, I cannot use the detectFeatures method properly. Problems arise when I try to use matchFeatures. To overcome this problem I decided to use the cpselect command instead, so that I can manually click on the features. The matching between points from the two views now seems correct. But when I carry on with the code and try to reconstruct the 3D plot, I get points all over the place and the result looks deformed. The plot clearly does not represent the sail, and I don't know why.
The code follows.
Thank you in advance.
% % Load precomputed camera parameters
load IP_CalibrationCarlos.mat %Calibration feature
%
I1 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan1.1.jpg');
I2 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan2.1.jpg');
%
[I1, newOrigin1] = undistortImage(I1, cameraParameters, 'OutputView', 'full');
[I2, newOrigin2] = undistortImage(I2, cameraParameters, 'OutputView', 'full');
%
I1 = imcrop(I1, [80 10 1040 1300]); %Necessary so images have same size
I2 = imcrop(I2, [0 10 1067 1300]);
%
squareSize = 82; % checkerboard square size in millimeters
%
[imagePoints, boardSize, pairsUsed] = detectCheckerboardPoints(rgb2gray(I1), rgb2gray(I2));
[refPoints1, boardSize] = detectCheckerboardPoints(rgb2gray(I1));
[refPoints2, boardSize] = detectCheckerboardPoints(rgb2gray(I2));
%
% % Translate detected points back into the original image coordinates
refPoints1 = bsxfun(@plus, refPoints1, newOrigin1);
refPoints2 = bsxfun(@plus, refPoints2, newOrigin2);
%
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
%
[R1, t1] = extrinsics(refPoints1, worldPoints, cameraParameters); % R = rotation, t = translation
[R2, t2] = extrinsics(refPoints2, worldPoints, cameraParameters);
%
% % Calculate camera matrices using the |cameraMatrix| function.
cameraMatrix1 = cameraMatrix(cameraParameters, R1, t1);
cameraMatrix2 = cameraMatrix(cameraParameters, R2, t2);
%
cpselect(I1, I2); % Save them as 'matchedPoints1'and 'matchedPoints2'
%
indexPairs = matchFeatures(matchedPoints1, matchedPoints2);
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
title('Matched Features');
%
[points3D] = triangulate(matchedPoints1, matchedPoints2, ...
cameraMatrix1, cameraMatrix2);
%
x = -points3D(:,1);
y = -points3D(:,2);
z = -points3D(:,3);
figure
scatter3(x,y,z, 25);
xlabel('X');
ylabel('Y');
zlabel('Z');
The first problem is that you are cropping the images: once you do that, all your coordinates are off. You do not need to crop here, because the images do not need to be the same size.
The second question is how precisely you selected the matching points. From the picture you have posted, it seems that your matches can be off by a few pixels, which can result in a large reconstruction error. Can you try finding the centroids of the spots on the sail using regionprops?
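For instance, here is a minimal sketch of that idea (the threshold choice and the minimum spot area are made-up values to tune for your images):
% sketch: sub-pixel spot centroids via regionprops (threshold values are guesses)
G = rgb2gray(I1);
bw = im2bw(G, graythresh(G)); % or a manual threshold, e.g. G > 200
bw = bwareaopen(bw, 20); % remove specks smaller than 20 pixels
stats = regionprops(bw, 'Centroid');
spots1 = vertcat(stats.Centroid); % N-by-2 [x y] centroids for image 1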
Also, are the two cameras stationary relative to each other? If so, then you may be better off calibrating the stereo pair, and doing the dense reconstruction as in this example. In that case, the two images would have to be of the same size.
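In that stereo case, the pipeline would look roughly like this (a sketch; stereoParams is assumed to come from calibrating the pair beforehand, and the function names are from the Computer Vision Toolbox):
% sketch of a dense stereo reconstruction (assumes a calibrated stereo pair)
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
points3D = reconstructScene(disparityMap, stereoParams); % 3-D points in world units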
I have a PNG image of the digit '6'. I want to determine the position of the stem with respect to the blob using morphological operations. I have detected the blob of the '6' using the code below, but I don't know how to detect the stem of the digit. I tried the Hough transform and edge detection algorithms, but they didn't help.
Here is my code for detecting the blob:
img = imread('six.png');
img = rgb2gray(img);
figure, imshow(img);
i1 = im2bw(img);
st = strel('square', 20);
i1 = imdilate(i1, st); % dilate the binary image
figure, imshow(i1);
i2 = imfill(i1, 'holes'); % fill the hole of the '6'
figure, imshow(i2);
i1 = imsubtract(i2, i1); % keep only the filled hole
B = bwboundaries(i1);
figure, imshow(i1)
i2 = i2 - i1;
figure, imshow(i2);
text(10, 10, strcat('\color{green}Objects Found:', num2str(length(B))))
hold on
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 0.2)
end
if length(B) == 1
    h = msgbox('the number is 6');
else
    h = msgbox('unknown number');
end
Here's the original six image and my current output
If you want to stick to morphological operations, you can simply find the boundary pixels that are closest to the hole you have already detected and remove them.
I start with the same morphological operations that you do, and add the extra step of removing boundary pixels within a distance threshold of the detected hole.
img=imread('six.png');
img=im2bw(img);
figure,imshow(img);
filled_img=imfill(img,'holes');
figure; imshow(filled_img);
filled_boundary= bwmorph(filled_img,'remove');
figure
imshow(filled_boundary)
hole = ~img & filled_img;
figure; imshow(hole);
hole_boundary = bwmorph(hole, 'remove');
figure; imshow(hole_boundary);
% Remove boundary points that are close to the hole
[hole_x, hole_y] = find(hole_boundary);
[fill_x, fill_y] = find(filled_boundary);
D = pdist2([hole_x, hole_y], [fill_x, fill_y]); % pairwise distances (Statistics Toolbox)
distance = min(D, [], 1); % distance of each boundary pixel to the nearest hole pixel
distance_threshold = 10;
top_edges = filled_boundary;
close_idx = distance < distance_threshold;
top_edges(sub2ind(size(top_edges), fill_x(close_idx), fill_y(close_idx))) = 0;
figure; imshow(top_edges);
This is what my output image looks like
I was trying to do a histogram comparison between two RGB images, which include heads of the same person as well as non-heads, to see the correlation between them. The reason I am doing this is that, after scanning with HOG to check whether a window is a head or not, I now want to track the same head through consecutive frames and also remove some clear false positives.
I have tried both RGB and HSV histogram comparison, using the Euclidean distance to measure the difference between the histograms. The following is the code I wrote:
%RGB histogram comparison
%clear all;
img1 = imread('testImages/samehead_1.png');
img2 = imread('testImages/samehead_2.png');
img1 = rgb2hsv(img1);
img2 = rgb2hsv(img2);
%% calculate number of bins = root(pixels);
[rows, cols, ~] = size(img1); % third output prevents cols from absorbing the channel count
no_of_pixels = rows * cols;
%no_of_bins = floor(sqrt(no_of_pixels));
no_of_bins = 256;
%% obtain histogram for each channel (after rgb2hsv these are H, S, V, not R, G, B)
% -----1st Image---------
rHist1 = imhist(img1(:,:,1), no_of_bins);
gHist1 = imhist(img1(:,:,2), no_of_bins);
bHist1 = imhist(img1(:,:,3), no_of_bins);
hFig = figure;
hold on;
h(1) = stem(1:256, rHist1);
h(2) = stem((1:256) + 1/3, gHist1); % parentheses needed: 1:256 + 1/3 is just 1:256
h(3) = stem((1:256) + 2/3, bHist1);
set(h, 'marker', 'none')
set(h(1), 'color', [1 0 0])
set(h(2), 'color', [0 1 0])
set(h(3), 'color', [0 0 1])
hold off;
% -----2nd Image---------
rHist2 = imhist(img2(:,:,1), no_of_bins);
gHist2 = imhist(img2(:,:,2), no_of_bins);
bHist2 = imhist(img2(:,:,3), no_of_bins);
hFig = figure;
hold on;
h(1) = stem(1:256, rHist2);
h(2) = stem((1:256) + 1/3, gHist2);
h(3) = stem((1:256) + 2/3, bHist2);
set(h, 'marker', 'none')
set(h(1), 'color', [1 0 0])
set(h(2), 'color', [0 1 0])
set(h(3), 'color', [0 0 1])
%% collect the three channel histograms as the columns of one matrix
% -----1st Image---------
M1(:,1) = rHist1;
M1(:,2) = gHist1;
M1(:,3) = bHist1;
% -----2nd Image---------
M2(:,1) = rHist2;
M2(:,2) = gHist2;
M2(:,3) = bHist2;
%% normalise Histogram
% -----1st Image---------
M1 = M1./no_of_pixels;
% -----2nd Image---------
M2 = M2./no_of_pixels;
%% Calculate Euclidean distance between the two histograms
E_distance = sqrt(sum((M2-M1).^2));
E_distance is an array of 3 distances, one each for the red, green, and blue histogram differences.
The problem is:
When I compare the histogram of a non-head (e.g. a bag) with that of a head, there is a clear difference in the error, so this is acceptable and helps me remove false positives.
However, when I try to check whether two images are heads of the same person, this technique does not help at all: the head of another person can give a smaller Euclidean distance than another image of the same person.
Can someone explain whether I am doing this correctly, or give me any guidance on what I should do?
PS: I got the idea of the LAB histogram comparison from this paper (Affinity Measures section): People Looking at each other
Color histogram similarity may be used as a good cue for tracking by detection, but don't count on it to disambiguate all possible matches between people-people and people-non-people.
Looking at your code, there is one thing you can do to improve the comparison: currently, you are working with per-channel histograms. These are not discriminative enough, because you do not know when the R, G, and B components co-occur (e.g. you know how many times the red channel falls in the range 64-96 and how many times the blue falls in 32-64, but not when both happen simultaneously). To fix this, you should work with a 3D histogram that counts the co-occurrence of colors. For a discretization of 8 bins per channel, such a histogram has 8^3 = 512 bins.
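A minimal sketch of such a joint histogram (illustrative only; img stands for either of your head patches as a uint8 RGB image):
% sketch: joint 3D RGB histogram with 8 bins per channel (8^3 = 512 bins)
nbins = 8;
binIdx = floor(double(img) / (256/nbins)) + 1; % map 0..255 to bins 1..8, per channel
r = binIdx(:,:,1); g = binIdx(:,:,2); b = binIdx(:,:,3);
linIdx = sub2ind([nbins nbins nbins], r(:), g(:), b(:));
H = accumarray(linIdx, 1, [nbins^3, 1]); % 512-bin joint histogram
H = H / sum(H); % normalize so the bins sum to 1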
Other suggestions for improvement:
Weighted assignment to neighboring bins according to interpolation weights. This eliminates the discontinuities introduced by bin quantization
Hierarchical splitting of detection window into cells (1 cell, 4 cells, 16 cells, etc), each with its own histogram, where the histograms of different levels and cells are concatenated. This allows catching local color details, like the color of a shirt, or even more local, a shirt pocket/sleeve.
Working with the Earth Mover's Distance (EMD) instead of the Euclidean metric for comparing histograms. This reduces color quantization effects (differences in histograms are weighted by color-space distance instead of equal weights), and allows for some error in the localization of cells within the detection window.
Use other cues for tracking. You'll be surprised how much the similarity between the HoG descriptors of your detections helps disambiguate matches!
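For example, comparing two detections by their HOG descriptors can be as simple as this (a sketch; patch1 and patch2 stand for two cropped detection windows, the 64x64 size is an arbitrary choice, and extractHOGFeatures is from the Computer Vision Toolbox):
% sketch: similarity of two detections via their HOG descriptors
p1 = imresize(patch1, [64 64]); % bring both windows to a common size
p2 = imresize(patch2, [64 64]);
f1 = extractHOGFeatures(p1);
f2 = extractHOGFeatures(p2);
hog_distance = norm(f1 - f2); % smaller means more similar detections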
Assume y is a vector of random numbers following the distribution f(x) = sqrt(4-x^2)/(2*pi). At the moment I use the command hist(y,30). How can I plot the distribution function f(x) = sqrt(4-x^2)/(2*pi) on top of the same histogram?
Instead of normalizing numerically, you could also do it by finding a theoretical scaling factor as follows.
nbins = 30;
nsamples = numel(y);
binsize = (max(y) - min(y)) / nbins % width of one histogram bin
hist(y, nbins)
hold on
x1 = linspace(min(y), max(y), 100);
scalefactor = nsamples * binsize % number of samples times bin width
y1 = scalefactor * sqrt(4 - x1.^2) / (2*pi); % evaluate f(x) elementwise on x1
plot(x1, y1)
Update: How it works.
For any dataset that is large enough to give a good approximation to the pdf (call it f(x)), the integral of f(x) over its domain will be approximately unity. However, the area under any histogram is precisely the total number of samples times the bin width.
So a very simple scale factor to bring the pdf into line with the histogram is Ns*Wb, the total number of samples times the width of the bins.
Let's take an example of another distribution function, the standard normal. To do exactly what you say you want, you do this:
nRand = 10000;
y = randn(1,nRand);
[myHist, bins] = hist(y,30);
pdf = normpdf(bins);
figure, bar(bins, myHist,1); hold on; plot(bins,pdf,'rx-'); hold off;
This is probably NOT what you actually want, though. Why? You'll notice that your density function looks like a thin line at the bottom of your histogram plot. This is because a histogram shows counts of numbers in bins, while a density function is normalized to integrate to one. If you have hundreds of items in a bin, there is no way the density function will match it in scale, so you have a scaling or normalization problem. Either you normalize the histogram, or you plot a scaled distribution function. I prefer to scale the distribution function, so that my counts stay meaningful when I look at the histogram:
normalizedpdf = pdf/sum(pdf)*sum(myHist);
figure, bar(bins, myHist,1); hold on; plot(bins,normalizedpdf,'rx-'); hold off;
Your case is the same, except you'll use the function f(x) you specified instead of the normpdf command.
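For the distribution in the question, that would look roughly like this (a sketch, assuming y already holds samples from f(x) = sqrt(4-x^2)/(2*pi), whose support is [-2, 2]):
% sketch: scale f(x) = sqrt(4 - x^2)/(2*pi) to match the histogram counts
[myHist, bins] = hist(y, 30);
fx = sqrt(4 - bins.^2) / (2*pi); % the pdf evaluated at the bin centers
scaledfx = fx / sum(fx) * sum(myHist); % same normalization as above
figure, bar(bins, myHist, 1); hold on; plot(bins, scaledfx, 'rx-'); hold off;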
Let me add another example to the mix:
%# some normally distributed random data
data = randn(1e3,1);
%# histogram
numbins = 30;
hist(data, numbins);
h(1) = get(gca,'Children');
set(h(1), 'FaceColor',[.8 .8 1])
%# figure out how to scale the pdf (with area = 1), to the area of the histogram
[bincounts,binpos] = hist(data, numbins);
binwidth = binpos(2) - binpos(1);
histarea = binwidth*sum(bincounts);
%# fit a gaussian
[muhat,sigmahat] = normfit(data);
x = linspace(binpos(1),binpos(end),100);
y = normpdf(x, muhat, sigmahat);
h(2) = line(x, y*histarea, 'Color','b', 'LineWidth',2);
%# kernel estimator
[f,x,u] = ksdensity( data );
h(3) = line(x, f*histarea, 'Color','r', 'LineWidth',2);
legend(h, {'freq hist','fitted Gaussian','kernel estimator'})