I have been experimenting with transforms, e.g. the DCT, on image data in MATLAB.
For example, using the DCT on a 512x512 px Lena image:
x = double(imread('lenna.bmp'));
R = dct2(x);
Then I want to threshold the transform coefficients by keeping the 100000 largest coefficients of R and setting the remaining ones to zero.
How can I do that?
Use prctile to find the value that is exceeded or equalled by exactly 100000 entries of R. Then use that value as a threshold, that is, set all lower values to zero:
threshold = prctile(R(:),(1-1e5/numel(R))*100); %// compute threshold
R(R<threshold) = 0; %// set values below the threshold to zero
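For context, a minimal end-to-end sketch of how this fits together; the reconstruction step via idct2 and the comparison plot are my assumptions, not part of the original question or answer:

x = double(imread('lenna.bmp'));                    % 512x512 test image from the question
R = dct2(x);                                        % 2-D DCT
threshold = prctile(R(:),(1-1e5/numel(R))*100);     %// compute threshold
R(R < threshold) = 0;                               %// set values below the threshold to zero
x_rec = idct2(R);                                   % reconstruct from the kept coefficients (assumed next step)
imshowpair(uint8(x), uint8(x_rec), 'montage');      % compare original and reconstruction

If "largest" is meant in terms of absolute value, apply the same two thresholding lines to abs(R(:)) and abs(R) instead.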
I have a 2D image G(m,n).
G is constructed by first acquiring k-space values and then inverse Fourier transforming.
The k-space consists of m*n complex values.
What is meant by acquiring only 1/q of this amount (from m*n)? (q is a positive number)
In this scheme I will keep only 1/q-th of the original k-space values.
The other elements of the original k-space will be set to zero.
Thank you.
Discarding a Fraction of Least Significant Frequency Components
One method is to use the fft2() function to convert the image to the frequency domain and delete the least significant frequency components based on their magnitudes. To find the least significant values, the sort() function is used and the corresponding indices are returned. We can then set the entries at the indices corresponding to the smallest-magnitude components to zero using matrix indexing. You've pretty much described what has to be done above, but to provide more context:
• A fraction 1/q of the frequency components must remain.
• A fraction 1 - (1/q) of the frequency components must be set to zero/deleted.
%Grabbing a built-in test image%
Image = imread("peppers.png");
%Converting to grayscale if colour%
if size(Image,3) == 3
Image = rgb2gray(Image);
end
%Converting to frequency domain%
Frequency_Domain = fft2(Image);
%Sorting the frequency-domain values from greatest to least magnitude%
[Sorted_Coefficients,Sort_Indices] = sort(reshape(abs(Frequency_Domain),[numel(Frequency_Domain) 1]),'descend');
%Evaluating the number of coefficients to delete%
Number_Of_Coefficients = length(Sort_Indices);
q = 40;
Preserved_Fraction = 1/q;
Number_Of_Coefficients_To_Keep = round(Preserved_Fraction*Number_Of_Coefficients);
%Finding which values to delete based on the indices of the sorted array%
Delete_Indices = Sort_Indices(Number_Of_Coefficients_To_Keep+1:end);
Frequency_Domain(Delete_Indices) = 0;
%Evaluating how many frequency components were deleted%
Number_Of_Deleted_Frequency_Components = numel(Delete_Indices);
fprintf("Deleted %d frequency coefficients\n",Number_Of_Deleted_Frequency_Components);
%Converting the image back to the spatial domain (the real part is taken to discard small imaginary residues left after zeroing coefficients)%
G = uint8(real(ifft2(Frequency_Domain)));
subplot(1,2,1); imshow(Image);
title("Original Image");
subplot(1,2,2); imshow(G);
title("Frequency Sampled Image");
disp(1 - (numel(Delete_Indices)/numel(Frequency_Domain)));
I want to reconstruct an image from a multi-level DWT transform from only 5% of the largest coefficients while setting the rest to zero.
I'm not sure which coefficients I need to choose the largest 5% from?
A, H, V, or D?
Here is what I've done so far:
% Read image
x = imread('app.bmp');
% Define wavelet name
wavelet = 'haar';
% Define wavelet decomposition level
level = 4;
% Define colormap
map = gray;
% Compute multilevel 2D wavelet decomposition
[C, S] = wavedec2(x,level,wavelet);
for i = 1:level
% Approximation coefficients
A = appcoef2(C,S,'haar',i);
% Detailed coefficients
[H,V,D] = detcoef2('all',C,S,i);
end
Any help would be appreciated!
In order to take the highest x% of the values of a matrix/vector, this is what I usually do:
% define the threshold (5% here)
thr = 0.05;
% sort the variable in descending order
As = sort(A(:),'descend');
% obtain the elements belonging to the top x%
At = As(1:ceil(numel(As) * thr));
Now the At variable contains the top x% values of the matrix/vector A. Since you want to keep the original shape of A and set all its elements below the threshold to 0:
% take the minimum value of the result
Am = min(At);
% take the original matrix and set all the elements below the minimum top x% to zero:
A(A < Am) = 0;
and here you obtain the final form of your A matrix/vector.
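For the wavelet question above, here is a sketch of the same idea applied directly to the full coefficient vector C returned by wavedec2, followed by reconstruction with waverec2; thresholding all of C by magnitude, rather than A/H/V/D separately, is an assumption on my part:

thr = 0.05;                              % keep the top 5% of coefficients
[C, S] = wavedec2(x, level, wavelet);    % decomposition from the question
Cs = sort(abs(C), 'descend');            % sort coefficient magnitudes
Am = Cs(ceil(numel(Cs) * thr));          % smallest magnitude that is still kept
C(abs(C) < Am) = 0;                      % zero everything below it
x_rec = waverec2(C, S, wavelet);         % reconstruct the image from the kept 5%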
I'd like to calculate the Shannon entropy of a vector (psi) over the time period. According to this reference,
I can calculate the entropy for every single element of psi using a loop that computes the entropy at every point. What I want to understand is how to set up the probability of psi(tk) lying in a certain bin, and how to choose the total number of bins.
I tried using MATLAB's histogram commands, which generate suitable bins ([N,edges] = histcounts(psi)), but I don't know how to proceed from there. How do I get the probability of each element being in the x-th bin?
Here is my current code:
% get the histogram bin counts and edges
[N,edges] = histcounts(psi);
%// Compute the probability of each bin
pdf = N / length(psi);
%// Set any entries that are 0 to 1 so that the log calculation equals 0
pdf(pdf == 0) = 1;
e = [];
%// Calculate the entropy contribution of each bin
for i = 1:length(N)
    e(i) = -pdf(i) * log2(pdf(i));
end
any ideas?
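For what it's worth, one way to get the per-bin probabilities directly from histcounts and collapse them into a single entropy value; this is a sketch, assuming the entropy of the binned distribution is what you are after:

[p, edges] = histcounts(psi, 'Normalization', 'probability');  % p(k) = Pr(psi falls in bin k)
p = p(p > 0);                      % drop empty bins so log2 stays finite
H = -sum(p .* log2(p));            % Shannon entropy of the binned distribution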
I tried to generate 1000 random values from a normal distribution using the normrnd function.
A = normrnd(4,1,[1000 1]);
I would like the minimum value to be 2. However, that function only lets me specify the mean and SD. How can I set the minimum value to 2?
You can't. Gaussian or normally distributed numbers follow a bell curve, with the tails tailing off to infinity. What you can do is "censor" them by eliminating every number beyond a cut-off.
Since you chose mean = 4 and sigma = 1, about 95% of the elements of A will fall within the range [2,6]. The fraction of elements with values smaller than 2 is about 2.5%. If you consider this figure small enough, you can shift those elements up so that their minimum becomes 2. For example:
A = normrnd(4,1,[1000 1]);
A(A < 2) = A(A<2) + 2 - min(A(A<2))
Of course, the result is technically no longer a Gaussian distribution. However, if you have full control of the mean and sigma, you can get a "more Gaussian-like" distribution by adding an offset to the whole of A:
A = A + 2 - min(A)
Note: This assumes you can have an arbitrarily set standard deviation, which may not be the case
As others have said, you cannot specify a lower bound for a true Gaussian. However, you can generate a Gaussian whose parameters put an estimated 1-p fraction of values above your cutoff, and then discard the p fraction that fall below it.
For example, in the following code I generate a Gaussian where 95% of the data points fall above 2, then remove all points below 2, knowing that about 5% of the data will be removed.
This works because, as p gets closer to zero, the chance that an uncensored sample following your Gaussian curve lies entirely above your cutoff approaches 100% (really it is governed by the p/n ratio, but with n fixed this holds).
n = 1000; % number of samples
cutoff = 2; % Cutoff point for min-value
mu = 4; % Mean
p = .05; % Percentile you would like to cutoff
z = -sqrt(2) * erfcinv(p*2); % compute z score
sigma = (cutoff - mu)/z; % compute standard deviation
A = normrnd(mu,sigma,[n 1]);
I would recommend removing values below the cutoff rather than re-attributing them to the lower bound of your distribution, but that is up to you.
A(A<cutoff) = []; % removes all values of A less than cutoff
If you want to be symmetrical (which you should, to prevent sample skew), the following should work:
A(A>(2*mu-cutoff)) = [];
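If you need to end up with exactly n samples above the cutoff, one option (my addition, not part of the answer above) is to redraw the censored values instead of deleting them:

A = normrnd(mu, sigma, [n 1]);                       % fresh draw, before any censoring
below = A < cutoff;
while any(below)
    A(below) = normrnd(mu, sigma, [nnz(below) 1]);   % redraw only the values below the cutoff
    below = A < cutoff;
end

Note that this produces a truncated normal, just like the deletion approach, but it preserves the sample size n.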
I have a vector of data which contains integers in the range [-20, 20].
Below is a plot of the values:
This is a sample of 96 elements from the vector. The majority of the elements lie in the interval [-2, 2], as can be seen from the above plot.
I want to eliminate the noise from the data: eliminate the low-amplitude peaks and keep the high-amplitude peaks, namely peaks like the one at index 74.
Basically, I just want to increase the contrast between the high-amplitude and low-amplitude peaks and, if possible, eliminate the low-amplitude peaks.
Could you please suggest me a way of doing this?
I have tried the mapstd function, but the problem is that it also normalizes the high-amplitude peak.
I was thinking of using the Wavelet Toolbox, but I don't know exactly how to reconstruct the data from the wavelet decomposition coefficients.
Can you recommend me a way of doing this?
One approach to detect outliers is to use the three standard deviation rule. An example:
%# some random data resembling yours
x = randn(100,1);
x(75) = -14;
subplot(211), plot(x)
%# tone down the noisy points
mu = mean(x); sd = std(x); Z = 3;
idx = ( abs(x-mu) > Z*sd ); %# outliers
x(idx) = Z*sd .* sign(x(idx)); %# cap values at 3*STD(X)
subplot(212), plot(x)
EDIT:
It seems I misunderstood the goal here. If you want to do the opposite, maybe something like this instead:
%# some random data resembling yours
x = randn(100,1);
x(75) = -14; x(25) = 20;
subplot(211), plot(x)
%# zero out everything but the high peaks
mu = mean(x); sd = std(x); Z = 3;
x( abs(x-mu) < Z*sd ) = 0;
subplot(212), plot(x)
If it's for demonstrative purposes only, and you're not actually going to be using these scaled values for anything, I sometimes like to increase contrast in the following way:
% your data is in variable 'a'
plot(a.*abs(a)/max(abs(a)))
edit: since we're posting images, here's mine (before/after):
You might try a split window filter. If x is your current sample, the filter would look something like:
k = [L L L L L L 0 0 0 x 0 0 0 R R R R R R]
For each sample x, you average a band of surrounding samples on the left (L) and a band of surrounding samples on the right (R). If your samples are positive and negative (as yours are) you should take the absolute value first. You then divide the sample x by the average value of these surrounding samples.
y[n] = x[n] / mean(abs(x([L R])))
Each time you do this the peaks are accentuated and the noise is flattened. You can do more than one pass to increase the effect. It is somewhat sensitive to the selection of the widths of these bands, but can work. For example:
Two passes:
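A rough MATLAB sketch of the split-window idea; the band width, the guard gap and the example data are my own choices, not from the answer above:

x = randn(96,1)/2; x(74) = 18;            % hypothetical data resembling the question
L = 6;                                    % samples in each averaging band
g = 3;                                    % guard gap around the current sample
N = numel(x);
y = x;
for n = 1:N
    left  = max(1, n-g-L) : (n-g-1);      % band to the left of the guard gap
    right = (n+g+1) : min(N, n+g+L);      % band to the right of the guard gap
    band  = abs(x([left right]));         % absolute values of the surrounding samples
    if ~isempty(band)
        y(n) = x(n) / mean(band);         % divide by the local background level
    end
end
plot([x y]);                              % compare input and one filter pass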
What you actually need is some kind of compression to scale your data, that is: values between -2 and 2 are scaled by a certain factor and everything else is scaled by another factor. A crude way to accomplish this is to set all small values to zero, i.e.
x = randn(1,100)/2; x(50) = 20; x(25) = -15; % just generating some data
threshold = 2;
smallValues = (abs(x) <= threshold);
y = x;
y(smallValues) = 0;
figure;
plot(x,'DisplayName','x'); hold on;
plot(y,'r','DisplayName','y');
legend show;
Please do note that this is a very nonlinear operation (e.g. when you have wanted peaks valued at 2.1 and 1.9, they will behave very differently: one will be removed, the other will be kept). So for display purposes this might be all you need; for further processing it depends on what you are trying to do.
To eliminate the low-amplitude peaks, you effectively treat all low-amplitude signal as noise and ignore it.
If you have any a priori knowledge, just use it.
if your signal is a, then
a(abs(a)<X) = 0
where X is the max expected size of your noise.
If you want to get fancy and find this "on the fly", then use kmeans with 3 clusters. It's in the Statistics Toolbox, here:
http://www.mathworks.com/help/toolbox/stats/kmeans.html
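A rough sketch of the k-means idea; the choice to treat the lowest-centroid cluster as noise is my interpretation, not spelled out in the answer:

a = randn(96,1)/2; a(74) = 18;            % hypothetical data resembling the question
[idx, c] = kmeans(abs(a), 3);             % cluster the magnitudes into 3 groups (Statistics Toolbox)
[~, noiseCluster] = min(c);               % cluster with the lowest mean amplitude
a(idx == noiseCluster) = 0;               % treat that cluster as noise and zero it out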
Alternatively, you can use Otsu's method on the absolute values of the data, and then restore the original signs afterwards.
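A sketch of that Otsu variant, assuming the Image Processing Toolbox's graythresh is available; the normalisation into [0,1] is needed because graythresh expects intensity-like data:

a = randn(96,1)/2; a(74) = 18; a(25) = -15;   % hypothetical data resembling the question
m = abs(a) / max(abs(a));                     % normalise the magnitudes into [0,1]
t = graythresh(m);                            % Otsu threshold on the magnitudes
a = a .* (m > t);                             % keep only samples above the threshold, signs preserved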
Note that these and every other technique I've seen in this thread assume you are doing post-processing. If you are doing this processing in real time, things will have to change.