Sampling an image - matlab

I have a 2D image G(m,n).
G is constructed by first acquiring k-space values and then inverse Fourier transforming.
The k-space consists of m*n complex values.
What is meant by acquiring only 1/q of this amount (from m*n)? (q is a positive number)
In such a scheme I will keep only 1/q-th of the original k-space values.
The other elements of the original k-space will be set to zero.
Thank you.

Discarding a Fraction of Least Significant Frequency Components
One method is to use the fft2() function to convert the image to the frequency domain and delete the least significant frequency components based on their magnitudes. To find the least significant values, the sort() function is used and the corresponding indices are returned. We can then set the indices corresponding to the smallest-magnitude components to zero using matrix indexing. You've pretty much described what has to be done above, but to provide more context:
• A fraction 1/q of the frequency components must remain.
• The remaining fraction, 1 - (1/q), must be set to zero (deleted).
%Grabbing a built-in test image%
Image = imread("peppers.png");
%Converting to grayscale if colour%
if size(Image,3) == 3
Image = rgb2gray(Image);
end
%Converting to frequency domain%
Frequency_Domain = fft2(Image);
%Sorting the frequency-domain values from greatest to least magnitude%
[Sorted_Coefficients,Sort_Indices] = sort(reshape(abs(Frequency_Domain),[numel(Frequency_Domain) 1]),'descend');
%Evaluating the number of coefficients to delete%
Number_Of_Coefficients = length(Sort_Indices);
q = 40;
Preserved_Fraction = 1/q;
Number_Of_Coefficients_To_Keep = round(Preserved_Fraction*Number_Of_Coefficients);
%Finding which values to delete based on the indices of the sorted array%
Delete_Indices = Sort_Indices(Number_Of_Coefficients_To_Keep+1:end);
Frequency_Domain(Delete_Indices) = 0;
%Evaluating how many frequency components were deleted%
Number_Of_Deleted_Frequency_Components = numel(Delete_Indices);
fprintf("Deleted %d frequency coefficients\n",Number_Of_Deleted_Frequency_Components);
%Converting the image back to the spatial domain%
G = uint8(real(ifft2(Frequency_Domain))); %real() discards the tiny imaginary residue the zeroing can leave behind
subplot(1,2,1); imshow(Image);
title("Original Image");
subplot(1,2,2); imshow(G);
title("Frequency Sampled Image");
disp(1 - (numel(Delete_Indices)/numel(Frequency_Domain))); %fraction of coefficients kept, roughly 1/q
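If the goal is instead to emulate acquiring only 1/q of the k-space samples regardless of their magnitude, which is closer to the wording of the question, a random undersampling mask can be applied. This is only a minimal sketch of that idea; the use of randperm() to pick which locations are "acquired" is an assumption, not part of the answer above.
%Minimal sketch: keep a random 1/q of the k-space samples and zero the rest
q = 40;
K_Space = fft2(double(Image)); %full k-space of the grayscale image
Acquired_Indices = randperm(numel(K_Space),round(numel(K_Space)/q)); %randomly chosen "acquired" locations
Undersampled_K_Space = zeros(size(K_Space));
Undersampled_K_Space(Acquired_Indices) = K_Space(Acquired_Indices);
G_Undersampled = uint8(abs(ifft2(Undersampled_K_Space))); %abs() because random masking breaks conjugate symmetry
figure; imshow(G_Undersampled); title("Randomly Undersampled k-space");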

Related

Matlab - Plotting Specific Pixels (image treatment)

I'm currently struggling with an image treatment / data plotting issue and was hoping to get some feedback from people with more experience than myself on this matter.
I'll try to break down the problem to make it more understandable:
I have an original image (figureB, which is the blue channel of the original image) of size NxM. From this image I select a specific area to study (NewfigureB), of size 120x170;
I then divide this area into what I called macropixels which are 10x10 arrays of data points (pixels);
I then apply a mask to the selected area to select only the points meeting certain luminescence conditions;
So far so good. My problem comes when I try to plot a histogram of each of these macropixels when applying the luminescence mask. The final objective is to then find the peaks in these histograms.
So far this is what I've come up with. Any help would be greatly appreciated.
Many thanks
%Make the number of pixels in the matrix divisible
Macropixel = 10; %determine the size of the macropixel
[rows,columns] = size(figureB); %determine dimensions of the matrix used in the calculations
MacropixRows = floor(rows/Macropixel); %determine how many macropixels are in a row of the original matrix
MacropixColumn = floor(columns/Macropixel); %determine how many macropixels are in a column of the original matrix
%define new dim for the matrix
rows = MacropixRows * Macropixel;
columns = MacropixColumn * Macropixel;
NewfigureB = figureB(1:rows,1:columns); %divisible by the size of the macropixels created
%select area
NewfigureB = NewfigureB(1230:1349,2100:2269);
%create luminescence mask
Lmin=50;
hmax=80;
mask=false(size(NewfigureB));
mask(NewfigureB <Lmin)=true;
mask=mask & (NewfigureB<hmax);
%Apply mask
NewfigureB=NewfigureB(mask);
for jj = 1:Macropixel:120
for ii =1:Macropixel:170
histogram( NewfigureB(jj:jj+Macropixel-1, ii:ii+Macropixel-1))
end
end
The code you have posted has quite a few issues.
I tried to correct it the best I could.
I modified some parameters to fit the sample image I used.
I couldn't find your sample image, so I used the following one instead.
Here is the corrected code (please read the comments):
I = imread('Nikon-D810-Image-Sample-7.jpg');
figureB = I(:,:,3);
%Make the number of pixels in the matrix divisible
Macropixel = 10; %determine the size of the macropixel
[rows,columns] = size(figureB); %determine dimensions of the matrix used in the calculations
MacropixRows = floor(rows/Macropixel); %determine how many macropixels are in a row of the original matrix
MacropixColumn = floor(columns/Macropixel); %determine how many macropixels are in a column of the original matrix
%define new dim for the matrix
rows = MacropixRows * Macropixel;
columns = MacropixColumn * Macropixel;
NewfigureB = figureB(1:rows,1:columns); %divisible by the size of the macropixels created
%select area
NewfigureB = NewfigureB(1230:1349,2100:2269);
%create luminescence mask
Lmin=90;%50; %Change to 90 for testing
hmax=200;%80; %Change to 200 for testing
mask=false(size(NewfigureB));
mask(NewfigureB > Lmin)=true; %I think it should be > Lmin. %mask(NewfigureB <Lmin)=true;
mask=mask & (NewfigureB<hmax);
%This is not the right way to apply a mask, because the result is a vector (not a matrix).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Apply mask
%NewfigureB=NewfigureB(mask);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Assuming there are no zeros in the image, you can set masked elements to zero:
NewfigureB(~mask) = 0;
for jj = 1:Macropixel:120
for ii =1:Macropixel:170
%Copy NewfigureB(jj:jj+Macropixel-1, ii:ii+Macropixel-1) into temporary matrix MB:
MB = NewfigureB(jj:jj+Macropixel-1, ii:ii+Macropixel-1);
%Remove the zeros from MB (zeros are masked elements).
%The result is a vector (not a matrix).
MB = MB(MB ~= 0);
%histogram( NewfigureB(jj:jj+Macropixel-1, ii:ii+Macropixel-1))
figure; %Open new figure for each histogram. (I don't know if it's a good idea).
histogram(MB); %Plot the histogram of vector MB.
end
end
It probably doesn't exactly match your intentions.
I hope it gives you a lead...
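Since the stated end goal is to find the peaks in these histograms, one possible follow-up is sketched below. The bin width of 5 grey levels and the use of histcounts() together with findpeaks() (Signal Processing Toolbox) are my assumptions, not part of the original code:
%Hedged sketch: locate the histogram peaks of each macropixel (assumes NewfigureB and mask from above)
for jj = 1:Macropixel:120
for ii = 1:Macropixel:170
MB = NewfigureB(jj:jj+Macropixel-1, ii:ii+Macropixel-1);
MB = MB(MB ~= 0); %keep only the unmasked pixels
if isempty(MB), continue; end %skip fully masked macropixels
[counts,edges] = histcounts(double(MB),'BinWidth',5); %bin width of 5 grey levels is an assumption
if numel(counts) < 3, continue; end %findpeaks needs at least 3 bins
centers = (edges(1:end-1) + edges(2:end))/2;
[~,locs] = findpeaks(counts); %requires the Signal Processing Toolbox
if ~isempty(locs)
fprintf('Macropixel (%d,%d): peaks near grey levels %s\n',jj,ii,mat2str(centers(locs)));
end
end
end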

Two-dimensional matched filter

I want to implement two dimensional matched filter for blood vessel extraction according to the paper "Detection of Blood Vessels in Retinal Images Using Two-Dimensional Matched Filters" by Chaudhuri et al., IEEE Trans. on Medical Imaging, 1989 (there's a PDF on the author's web site).
A brief description is that a blood vessel's cross-section has a Gaussian profile, and therefore I want to use a Gaussian matched filter to increase the SNR. Such a kernel may be mathematically expressed as:
K(x,y) = -exp(-x^2/(2*sigma^2)) for |x| <= 3*sigma, |y| <= L/2
Here L is the length of the vessel segment with fixed orientation. Experimentally, sigma = 1.5 and L = 7.
My MATLAB code for this part is:
s = 1.5; %sigma
t = -3*s:3*s;
theta=0:15:165; %different rotations
%one dimensional kernel
x = 1/sqrt(6*s)*exp(-t.^2/(2*s.^2));
L=7;
%two dimensional gaussian kernel
x2 = repmat(x,L,1);
Consider the response of this filter for a pixel belonging to the background retina. Assuming the background to have constant intensity with zero mean additive Gaussian white noise, the expected value of the filter output should ideally be zero. The convolution kernel is, therefore, modified by subtracting the mean value of s(t) from the function itself. The mean value of the kernel is determined as: m = Sum(K(x,y))/(number of points).
Thus, the convolutional mask used in this algorithm is given by: K(x, y) = K(x,y) - m.
My MATLAB code:
m = sum(x2(:))/(size(x2,1)*size(x2,2));
x2 = x2-m;
A vessel may be oriented at any angle 0 < theta < 180, and the matched filter response is maximal when the filter is aligned at theta ± 90 degrees (it is the cross-section profile that is Gaussian, not the vessel itself).
Thus we need to rotate the matched filter 12 times in 15-degree increments.
My MATLAB code is attached here but I don't get a desirable result. Any help is appreciated.
%apply rotated matched filter on image
r = {};
for k = 1:12
x3=imrotate(x2,theta(k),'crop');%figure;imagesc(x3);colormap gray;
r{k}=conv2(img,x3);
end
w=[];h = zeros(584,565);
for i = 1:565
for j = 1:584
for k = 1:12 %only 12 rotated filters were computed
w= [w ,r{k}(j,i)];
end
h(j,i)=max(abs(w));
w = [];
end
end
%show result
figure('Name','after matched filter');imagesc(h);colormap gray
For rotation I used imrotate, which seems more sensible to me, but in the paper it is different: suppose p = [x,y] is a discrete point in the kernel. To compute the coefficients of the rotated kernel we have [u,v] = p*Rotation_Matrix.
Rotation_Matrix = [cos(theta), sin(theta); -sin(theta), cos(theta)]
And the kernel is:
K(x,y) = -exp(-u^2/(2*s^2))
But the new kernel doesn't have a Gaussian shape anymore, whereas using imrotate preserves the Gaussian shape. So what is the benefit of using the rotation matrix?
Input image is:
Output:
Matched filtering helps increase SNR but background noise is amplified too.
Am I right to use imrotate to rotate the kernel? My main problem is with the rotation matrix: why use it, and what is the right code to implement it?
The reason to build the filter from its analytic expression for each rotation, rather than using imrotate, is that the filter extent is not circular, and therefore rotating brings in "new" pixel values and pushes some other pixels out of the kernel. Furthermore, rotating a kernel constructed as here (smooth transition along one direction, step edge along the other dimension) requires different interpolation methods along each dimension, which imrotate cannot do. The resulting rotated kernel will always be wrong.
Both these issues can be easily seen when displaying the kernel you make together with two rotated versions:
This display brings an additional issue to the front: the kernel is not centered on a pixel, causing it to shift the output by half a pixel.
Note also that, when subtracting the mean, it is important that this mean be computed only over the original domain of the filter, and that any zeros used to pad this domain to a rectangular shape remain zero (these should not become negative).
The rotated kernels can be constructed as follows:
m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
t = pi/6; % angle in radian
u = cos(t)*x - sin(t)*y; % rotated coordinate system
v = sin(t)*x + cos(t)*y; % rotated coordinate system
N = (abs(u) <= 3*s) & (abs(v) <= L/2); % domain
k = exp(-u.^2/(2*s.^2)); % kernel
k = k - mean(k(N));
k(~N) = 0; % set kernel outside of domain to 0
This is the result for the three rotations used in the example above (the grey around the edges of the kernel corresponds to the value 0, the black pixels have a negative value):
Another issue is that you use conv2 with the default 'full' output shape; you should be using 'same' here, so that the output of the filter matches the size of the input image.
Note that, instead of computing all filter responses, and computing the max afterwards, it is much easier to compute the max as you compute each filter response. All of the above leads to the following code:
img = im2double(rgb2gray(img));
s = 1.5; %sigma
L = 7;
theta = 0:15:165; %different rotations
out = zeros(size(img));
m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
for t = theta
t = t / 180 * pi; % angle in radian
u = cos(t)*x - sin(t)*y; % rotated coordinate system
v = sin(t)*x + cos(t)*y; % rotated coordinate system
N = (abs(u) <= 3*s) & (abs(v) <= L/2); % domain
k = exp(-u.^2/(2*s.^2)); % kernel
k = k - mean(k(N));
k(~N) = 0; % set kernel outside of domain to 0
res = conv2(img,k,'same');
out = max(out,res);
end
out = out/max(out(:)); % force output to be in [0,1] interval that MATLAB likes
imwrite(out,'so_result.png')
I get the following output:

How to find delay between two sets of data in Matlab?

I have two sets of data taken from experiments, and they look very similar, except that there is a horizontal offset between them, which I believe is due to some bug in the instrument settings. Suppose they have the form y1 = f(x1) and y2 = f(x2) = f(x1+c); what's the best way to determine c so that I can account for the offset and superimpose the two data sets into one?
Edit:
let's say my data sets (index 1 and 2) have the form:
x1 = 0:0.2:10;
y1 = sin(x1);
x2 = 0:0.3:10;
y2 = sin(x2+0.5);
Of course, the real data will have some noise, but say the best-fit functions have the above forms. How do I find the offset c = 0.5? I have looked into cross-correlation, but I'm not sure it can handle two data sets with different numbers of points (and different step sizes). Also, what if the offset value actually falls between two data points? Cross-correlation only returns the index of the data in the array, not something in between, if I understand correctly.
This Matlab script calculates the random offset from -pi/2 to +pi/2 between two sine waves:
clear;
C= pi*(rand-0.5); % should be between -pi/2 and +pi/2
N=200; % should be large enough for acceptable sampling rate
N1=20; % fraction for Ts1
N2=30; % fraction for Ts2
Ts1=abs(C*N1/N); % fraction of C for accuracy
Ts2=abs(C*N2/N); % fraction of C for accuracy
Ts=min(Ts1,Ts2); % select highest sampling rate (smaller period)
fs=1/Ts;
P=4; % number of periods should be large enough for well defined correlation plot
x1 = 0:Ts:P*2*pi;
y1 = sin(x1);
x2 = 0:Ts:P*2*pi;
y2 = sin(x2+C);
subplot(3,1,1)
plot(x1,y1);
subplot(3,1,2);
plot(x2,y2);
[cor,lag]=xcorr(y1,y2);
subplot(3,1,3);
plot(lag,cor);
[~,I] = max(abs(cor));
lagdiff = lag(I);
timediff=lagdiff/fs;
In the particular case below, C = timediff = 0.5615:
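To address the concern about an offset that falls between two samples, the integer-lag peak can be refined by fitting a parabola through the correlation values around it. This is a hedged sketch meant to be appended to the script above; for data sets with different step sizes, the two series would first have to be resampled onto a common grid (e.g. with interp1):
%Hedged sketch: sub-sample refinement of the correlation peak by parabolic interpolation
a = abs(cor);
if I > 1 && I < length(a)
delta = 0.5*(a(I-1) - a(I+1))/(a(I-1) - 2*a(I) + a(I+1)); %fractional offset of the true peak, in samples
timediff_refined = (lagdiff + delta)/fs;
end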
Write a function which takes the time shift as an input and calculates the RMS difference between the overlapping portions of the two data sets, then find the minimum of this function using optimization (fminbnd).
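A minimal sketch of that suggestion, using the example data from the question; the interpolation of the second data set onto the grid of the first and the search bounds of ±2 are my assumptions:
%Hedged sketch: find the shift c that minimizes the RMS difference on the overlap
x1 = 0:0.2:10; y1 = sin(x1);
x2 = 0:0.3:10; y2 = sin(x2+0.5);
%Shift the second data set by c, interpolate it onto x1's grid, and compare where the sets overlap
rmsdiff = @(c) sqrt(mean((y1 - interp1(x2+c,y2,x1)).^2,'omitnan')); %points outside the overlap give NaN and are ignored
c_best = fminbnd(rmsdiff,-2,2); %search bounds of +/-2 are an assumption
fprintf('Estimated shift c = %.4f\n',c_best); %should come out close to 0.5
Because the shift is a continuous variable here, this also handles different step sizes and offsets that fall between samples.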

"Out of Memory" Matlab

Well, I made 4 standalone executables from 4 different Matlab functions to build a face recognition system. I am calling those 4 executables using different batch scripts and performing tasks on images. The total number of images I have is above 300k. 3 of these 4 executables are working well, but I am facing an "out of memory" problem when I try to call the standalone executable of the Fisherface function. It simply calculates unique features of each image using Fisher's linear discriminant analysis. The analysis is applied to a huge face matrix which consists of pixel values of over 150,000 images of size 60*60. Hence the size of the matrix is 150,000*3600.
What I understand is that it's happening due to a shortage of contiguous memory in RAM. So as a way out, I chose to divide my large image set into a number of subsets, each of which contains 3000 images. Now, when an input face is provided, it searches for the best matches of that input in each of those subsets and finally sorts out the final list of the 3 best matches with the lowest (Euclidean) distances. This resolved the out of memory error, but the recognition rate became much lower, because when the discriminant analysis is done on the original face matrix (which I have tested on smaller datasets containing 4000-5000 images), it gives a good recognition rate.
I am seeking a way out of this problem. I want to perform all the operations on the large matrix. Is there a way to implement the function more efficiently, for example, allocating memory dynamically in Matlab? I hope I have been fairly specific in order to explain my problem. Below, I have provided the code segment of that particular executable.
function FisherfaceCorenew(matname)
load(matname);
Class_number = size(T,2) ;
Class_population = 1;
P = Class_population * Class_number; % Total number of training images
%%%%%%%%%%%%%%%%%%%%%%%% calculating the mean image
m_database = single(mean(T,2));
%%%%%%%%%%%%%%%%%%%%%%%% Calculating the deviation of each image from mean image
A = T - repmat(m_database,1,P);
L = single(A')*single(A);
[V D] = eig(L); % Diagonal elements of D are the eigenvalues for both L=A'*A and C=A*A'.
%%%%%%%%%%%%%%%%%%%%%%%% Sorting and eliminating small eigenvalues
L_eig_vec = [];
for i = 1 : P
L_eig_vec = [L_eig_vec V(:,i)];
end
%%%%%%%%%%%%%%%%%%%%%%%% Calculating the eigenvectors of covariance matrix 'C'
V_PCA = single(A) * single(L_eig_vec);
%%%%%%%%%%%%%%%%%%%%%%%% Projecting centered image vectors onto eigenspace
ProjectedImages_PCA = [];
for i = 1 : P
temp = single(V_PCA')*single(A(:,i));
ProjectedImages_PCA = [ProjectedImages_PCA temp];
end
%%%%%%%%%%%%%%%%%%%%%%%% Calculating the mean of each class in eigenspace
m_PCA = mean(ProjectedImages_PCA,2); % Total mean in eigenspace
m = zeros(P,Class_number);
Sw = zeros(P,P); %new
Sb = zeros(P,P); %new
for i = 1 : Class_number
m(:,i) = mean( ( ProjectedImages_PCA(:,((i-1)*Class_population+1):i*Class_population) ), 2 )';
S = zeros(P,P); %new
for j = ( (i-1)*Class_population+1 ) : ( i*Class_population )
S = S + (ProjectedImages_PCA(:,j)-m(:,i))*(ProjectedImages_PCA(:,j)-m(:,i))';
end
Sw = Sw + S; % Within Scatter Matrix
Sb = Sb + (m(:,i)-m_PCA) * (m(:,i)-m_PCA)'; % Between Scatter Matrix
end
%%%%%%%%%%%%%%%%%%%%%%%% Calculating Fisher discriminant basis's
% We want to maximise the Between Scatter Matrix, while minimising the
% Within Scatter Matrix. Thus, a cost function J is defined, so that this condition is satisfied.
[J_eig_vec, J_eig_val] = eig(Sb,Sw);
J_eig_vec = fliplr(J_eig_vec);
%%%%%%%%%%%%%%%%%%%%%%%% Eliminating zero eigens and sorting in descend order
for i = 1 : Class_number-1
V_Fisher(:,i) = J_eig_vec(:,i);
end
%%%%%%%%%%%%%%%%%%%%%%%% Projecting images onto Fisher linear space
for i = 1 : Class_number*Class_population
ProjectedImages_Fisher(:,i) = V_Fisher' * ProjectedImages_PCA(:,i);
end
save fisherdata.mat m_database V_PCA V_Fisher ProjectedImages_Fisher;
end
It's not easy to help you, because we can't see the sizes of your matrices.
At least you could use the Matlab clear command once you no longer need a variable (e.g. A).
Maybe you could apply single() once, when you allocate the variable A, instead of casting in every expression.
A = single(T - repmat(m_database,1,P));
And then
L = A'*A;
You could also run the Matlab profiler with memory statistics enabled to see where the memory is actually spent.
Another option could be to use sparse matrices or reduce to even smaller datatypes like uint8, if appropriate for some data.
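A hedged sketch of the two suggestions combined, casting once and releasing large intermediates as early as possible; the use of implicit expansion instead of repmat requires R2016b or newer and is my addition, not part of the original code:
%Hedged sketch: cast once, then free T as soon as it is no longer needed
load(matname); %loads the face matrix T (one 3600-pixel image per column)
T = single(T); %cast once; halves the memory compared to double
m_database = mean(T,2);
A = T - m_database; %implicit expansion avoids repmat's extra full-size copy (R2016b+)
clear T %T is not used again; release it before forming L
L = A'*A; %A is already single, so no per-expression casts are needed
%To see where the memory is spent, enable the profiler's memory statistics, e.g. profile('-memory','on')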

Find the height and length of waves in noisy data

My goal is to find the maximum values of wave heights and wave lengths.
dwcL01 through dwcL10 are <3001x2 double> arrays containing output from a numerical wave model.
Part of my script:
%% Plotting results from SWASH
% Examination of phase velocity on deep water with different number of layers
% Wave height 3 meters, wave period 8 sec at a depth of 30 meters
clear all; close all; clc;
T=8;
L0=1.56*T^2;
%% Loading result tables.
load dwcL01.tbl; load dwcL02.tbl; load dwcL03.tbl; load dwcL04.tbl;
load dwcL05.tbl; load dwcL06.tbl; load dwcL07.tbl; load dwcL08.tbl;
load dwcL09.tbl; load dwcL10.tbl;
M(:,:,1) = dwcL01; M(:,:,2) = dwcL02; M(:,:,3) = dwcL03; M(:,:,4) = dwcL04;
M(:,:,5) = dwcL05; M(:,:,6) = dwcL06; M(:,:,7) = dwcL07; M(:,:,8) = dwcL08;
M(:,:,9) = dwcL09; M(:,:,10) = dwcL10;
%% Finding position of wave crest using diff and sign.
for i=1:10
Tp(:,1,i) = diff(sign(diff([M(1,2,i);M(:,2,i)]))) < 0;
Wc(:,:,i) = M(Tp,:,i);
L(:,i) = diff(Wc(:,1,i));
end
This works fine for finding the maximum values if the data is "smooth". The following image shows a section of my data. I get all the peaks, when I only need the one around x = 40. How do I filter the data so that I only get the "real" wave crests? The solution needs to be general, so that it still works if I change the domain size, wave height or wave period.
If you're basically trying to fit this curve of data to a sine wave, have you considered performing Fourier analysis (FFT in Matlab), then checking the magnitude of that fundamental frequency? The frequency will tell you the wave spacing, and the magnitude the height, and when used over multiple periods will find an average.
See the Matlab help page for an example of the usage
but the basic gist is:
y = [...] %vector of wave data points
N=length(y); %Make sure this is an even number
Y = fft(y); %Convert into frequency domain
figure;
plot(y(1:N)); %Plot original wave data
figure;
plot(abs(Y(1:N/2))./N); %Plot only the lower half of frequencies to hide aliasing
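To turn the spectrum into the numbers asked for (wave length and wave height), one possible follow-up is sketched below; it assumes the first column of M holds the spatial coordinate with uniform spacing and the second column the surface elevation, as in the question's script:
%Hedged sketch: dominant wave length and wave height from the spectrum
x = M(:,1,1); y = M(:,2,1); %first case; column 1 = position, column 2 = elevation (assumption)
dx = x(2) - x(1); %assumes uniform spacing
N = 2*floor(length(y)/2); %use an even number of samples
Y = fft(y(1:N) - mean(y(1:N)));
amp = abs(Y(2:N/2))/N*2; %single-sided amplitudes, DC excluded
[A,kpk] = max(amp);
wavelength = N*dx/kpk; %spacing between successive crests
waveheight = 2*A; %crest-to-trough height of the dominant component
fprintf('Dominant wave length %.2f m, wave height %.2f m\n',wavelength,waveheight);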
I have one more solution that might work for you. It involves computing the 2nd-order derivative using a 5-point central difference instead of the 2-point finite differences. When using diff twice you are performing two first-order derivatives consecutively (finite 2-point differences) which are very susceptible to noise/oscillations. The advantage of using a higher-order approximation is that the neighboring points help filter out the small oscillations, and this may work for your case.
Let f(:) = squeeze(M(:,2,i)) be the array of data points, and let h be the uniform spacing between the points:
%Better approximation of the 2nd derivative using neighboring points:
for j=3:length(f)-2
Tp(j,i) = (-f(j-2) + 16*f(j-1) - 30*f(j) + 16*f(j+1) - f(j+2))/(12*h^2);
end
Note that, since this 2nd-order derivative requires the 2 neighboring points to the left and right of each sample, the range of the loop must start at the 3rd index and end 2 short of the array length.