I use both MATLAB and OpenCV to produce a grayscale histogram, divided into 10 bins.
In OpenCV, each bin has equal range (i.e. [0,25], [26,51], [52,77], ...).
However, in Matlab, the bin sizes are not equal (I guess it's related to some theory about different sensitivity to intensity changes between lower and higher values).
These differing results are causing me a lot of trouble.
Is there an option to use calcHist with equal bin sizes? (Of course except for the option of implementing it myself...)
Answering my own question with a self-implemented function:
function h = fixedSizeBinnedHist(grayImg, numBins)
binSize = 256 / numBins;                      % width of each bin
binnedImg = floor(double(grayImg) / binSize); % map each pixel to its (0-based) bin index
maxVal = max(binnedImg(:));                   % highest occupied bin
numLeadingZeros = min(binnedImg(:));          % number of empty bins below the occupied range
numTrailingZeros = numBins - maxVal - 1;      % number of empty bins above the occupied range
% First, computing histogram for the existing range
h = hist(double(binnedImg(:)), maxVal - numLeadingZeros + 1);
leading = zeros(1, numLeadingZeros);
trailing = zeros(1, numTrailingZeros);
% Finally attaching needed zeros in both sides, so the histogram is in the requested size
h = [leading h trailing];
end
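For example (using a standard test image just for illustration):
img = imread('cameraman.tif');     % any 8-bit grayscale image
h = fixedSizeBinnedHist(img, 10);  % 10 equal-width bins covering [0, 255]
bar(h)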
MATLAB has the function randn to draw from a normal distribution, e.g.
x = 0.5 + 0.1*randn()
draws a pseudorandom number from a normal distribution of mean 0.5 and standard deviation 0.1.
Given this, is the following MATLAB code equivalent to sampling from a normal distribution truncated at 0 and 1?
x = 0;  % any out-of-range value, so the loop runs at least once
while x <= 0 || x > 1
    x = 0.5 + 0.1*randn();
end
Using MATLAB's Probability Distribution Objects makes sampling from truncated distributions very easy.
You can use makedist() to define the distribution object and truncate() to truncate it; the resulting object can then be passed to random() to generate random variates from it.
% MATLAB R2017a
pd = makedist('Normal',0.5,0.1) % Normal(mu,sigma)
pdt = truncate(pd,0,1) % truncated to interval (0,1)
sample = random(pdt,numRows,numCols) % Sample from distribution `pdt`
Once the object is created (here it is pdt, the truncated version of pd), you can use it in a variety of function calls.
To generate samples, random(pdt,m,n) produces an m x n array of samples from pdt.
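The same object also works with the other distribution functions, e.g. (a quick illustration):
mean(pdt)       % mean of the truncated distribution (0.5 here, by symmetry)
std(pdt)        % at most 0.1, since truncation can only reduce the spread
cdf(pdt, 0.75)  % probability that a sample falls below 0.75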
Further, if you want to avoid toolboxes, this answer from @Luis Mendo is correct (proof below).
% cr: vector of samples drawn with @Luis Mendo's rejection approach (below)
figure, hold on
h = histogram(cr,'Normalization','pdf','DisplayName','@Luis Mendo samples');
X = 0:.01:1;
p = plot(X,pdf(pdt,X),'b-','DisplayName','Theoretical (w/ truncation)');
legend('show')
You need the following steps:
1. Draw a value u from the uniform distribution on (0,1).
2. Assuming the normal distribution with CDF F is truncated at a and b, compute
u_bar = F(a)*u + F(b)*(1-u)
3. Apply the inverse of F:
epsilon = F^{-1}(u_bar)
epsilon is then a random draw from the truncated normal distribution.
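For concreteness, here is a minimal sketch of those three steps in base MATLAB (no toolbox needed); mu, sigma, a and b are just example values:
mu = 0.5; sigma = 0.1;                                % parameters of the underlying normal
a = 0; b = 1;                                         % truncation interval
F    = @(x) 0.5 * erfc(-(x - mu) ./ (sigma*sqrt(2))); % normal CDF
Finv = @(p) mu - sigma*sqrt(2) .* erfcinv(2*p);       % inverse normal CDF
u = rand(1, 1e4);                                     % step 1: uniform draws
u_bar = F(a).*u + F(b).*(1 - u);                      % step 2 (1-u is also uniform)
epsilon = Finv(u_bar);                                % step 3: invert the CDF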
Why don't you vectorize? It will probably be faster:
N = 1e5; % desired number of samples
m = .5; % desired mean of underlying Gaussian
s = .1; % desired std of underlying Gaussian
lower = 0; % lower value for truncation
upper = 1; % upper value for truncation
remaining = 1:N;
while remaining
    result(remaining) = m + s*randn(1,numel(remaining)); % (pre)allocates the first time
    remaining = find(result<=lower | result>upper);
end
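As a quick check of the result (just illustrative):
assert(all(result > lower & result <= upper));  % every sample is inside the bounds
histogram(result, 50)                           % bell shape clipped to (lower, upper]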
I tried to generate 1000 random values from a normal distribution with the normrnd function.
A = normrnd(4,1,[1000 1]);
I would like the minimum value to be 2. However, that function only lets me specify the mean and standard deviation. How can I set the minimum value to 2?
You can't. Gaussian or normally distributed numbers follow a bell curve, with tails that extend to infinity. What you can do is "censor" them by eliminating every number beyond a cut-off.
Since you chose mean = 4 and sigma = 1, about 95% of the elements of A will fall within the range [2,6]. The fraction of elements with values smaller than 2 is about 2.5%. If you consider that small, you can wrap these elements up to a minimum value. For example:
A = normrnd(4,1,[1000 1]);
A(A < 2) = A(A<2) + 2 - min(A(A<2))
Of course, the result is technically not a Gaussian distribution. However, if you have total control over the mean and sigma, you can get a more Gaussian-like distribution by adding an offset to A:
A = A + 2 - min(A)
Note: this assumes you can set the standard deviation arbitrarily, which may not be the case.
As others have said, you cannot specify a lower bound for a true Gaussian. However, you can choose the parameters so that an estimated 1-p fraction of the values falls above your cutoff, and then discard the p fraction that falls below it.
For example, in the following code I generate a Gaussian where 95% of the data points fall above 2, and then remove all points below 2, knowing that about 5% of the data will be discarded.
This works because, as p gets closer to zero, the chance that an uncensored sample follows your Gaussian curve while lying entirely above your cutoff approaches 100% (strictly it is governed by the p/n ratio, but for fixed n this holds).
n = 1000; % number of samples
cutoff = 2; % Cutoff point for min-value
mu = 4; % Mean
p = .05; % Percentile you would like to cutoff
z = -sqrt(2) * erfcinv(p*2); % compute z score
sigma = (cutoff - mu)/z; % compute standard deviation
A = normrnd(mu,sigma,[n 1]);
I would recommend removing values below the cutoff rather than re-attributing them to the lower bound of your distribution, but that is up to you.
A(A<cutoff) = []; % removes all values of A less than cutoff
If you want to be symmetrical (which you should to prevent sample skew) the following should work.
A(A>(2*mu-cutoff)) = [];
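A quick check of how much was trimmed (illustrative):
fprintf('Kept %d of %d samples, expected about %d\n', numel(A), n, round(n*(1 - 2*p)));
histogram(A)   % symmetric bell between cutoff and 2*mu - cutoff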
I'm trying to compare the MATLAB FFT of a cosine with two different amounts of zero padding. I thought the padding wouldn't change the frequency content, but when I superimpose the two curves the frequencies do not line up. I suppose there is something wrong with the way I do the two FFTs?
Fe = 8000;
F = 1680;
w = 2*pi*F;
N = 50;
P = 50;
T = 1/Fe;
t = (0:T:P*T);
x = real(exp(i*w*t))
x_reduced = x(1:P)
X = fft(x_reduced,N)
N = 1000;
Y = fft(x_reduced,N)
plot(abs(Y))
hold on
plot(abs(X),'*')
plot((0:999)/1000*Fe,abs(Y))
hold on
plot((0:49)/50*Fe,abs(X),'*')
You may need to align the frequencies of both cases.
When you pad the FFT you change the resolution of each bin (you are effectively interpolating between bins), so while the underlying frequencies are of course still the same, the mapping from frequency to bin index changes. If you scale the two FFT plots so that their horizontal axes line up (i.e. bin 0 aligned on both, and bin 50 of the short FFT aligned with bin 1000 of the padded one), the plots will match.
I want to pick values between, say, 50 and 150 using an exponential random number generator (a flat hazard function). How do I implement bounds on the built-in exponential random number function in MATLAB?
A quick way is to generate a sequence longer than you need and throw out the values outside your desired range.
dist = exprnd(100,1,1000);
%# mean of 100 ---^ ^---^--- 1x1000 random numbers
dist(dist<50 | dist>150) = []; %# will be shorter than 1000
If you don't have enough values after pruning, you can repeat and append onto the vector, or however else you want to do it.
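For example, a simple repeat-and-append loop could look like this (just a sketch):
nWanted = 1000;
dist = [];
while numel(dist) < nWanted
    batch = exprnd(100, 1, nWanted);        % draw another batch
    batch(batch < 50 | batch > 150) = [];   % keep only the in-range values
    dist = [dist, batch];                   %#ok<AGROW> append the survivors
end
dist = dist(1:nWanted);                     % trim to exactly nWanted values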
exprnd uses rand (see >> open exprnd.m), so you can instead bound the output of rand by reversing the process and sampling uniformly within the desired range [r1, r2].
sizeOut = [1, 1000]; % sample size
mu = 100; % parameter of exponential
r1 = 50; % lower bound
r2 = 150; % upper bound
r = exprndBounded(mu, sizeOut, r1, r2); % bounded output
function r = exprndBounded(mu, sizeOut, r1, r2)
% Inverse-transform sampling of the exponential, restricted to [r1, r2]
minE = exp(-r1/mu);                                % survival-function value at the lower bound
maxE = exp(-r2/mu);                                % survival-function value at the upper bound
randBounded = minE + (maxE-minE).*rand(sizeOut);   % uniform draws between the two values
r = -mu .* log(randBounded);                       % map back through the inverse
end
The drawn densities (using a non-parametric kernel estimator) look like the following for 20K samples
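Such a density plot can be produced, for instance, with ksdensity (Statistics Toolbox):
r20k = exprndBounded(mu, [1, 2e4], r1, r2);  % 20K samples
[f, xi] = ksdensity(r20k);                   % non-parametric kernel density estimate
plot(xi, f), xlabel('value'), ylabel('estimated density')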
I'm trying to implement the following Minimum Error Thresholding (By J. Kittler and J. Illingworth) method in MATLAB.
You may have a look at the PDF:
Scribd - Minimum Error Thresholding.
DocDroid - Minimum Error Thresholding.
My code is:
function [ Level ] = MET( IMG )
%Minimum Error Thresholding By Kittler
% Finds the minimum of a cost function J over all possible thresholds. The
% function output is the optimal threshold.
for t = 0:255 % Assuming 8 bit image
I1 = IMG;
I1 = I1(I1 <= t);
q1 = sum(hist(I1, 256));
I2 = IMG;
I2 = I2(I2 > t);
q2 = sum(hist(I2, 256));
% J is proportional to the Overlapping Area of the 2 assumed Gaussians
J(t + 1) = 1 + 2 * (q1 * log(std(I1, 1)) + q2 * log(std(I2, 1)))...
-2 * (q1 * log(q1) + q2 * log(q2));
end
[~, Level] = min(J);
%Level = (IMG <= Level);
end
I've tried it on the following image:
Original size image.
The target is to extract a binary image of the letters (Hebrew Letters).
I applied the code on sub blocks of the image (40 x 40).
Yet I got results which are inferior to the K-Means clustering method.
Did I miss something?
Anyone has a better idea?
Thresholding is a rather tricky business. In the many years I've been thresholding images I have not found one single technique that always performs well, and I have come to distrust the claims of universally excellent performance in CS journals.
The minimum error thresholding method only works on nicely bimodal histograms (but it works well on those). In your image, the signal and background may not be separated clearly enough for this thresholding method to work.
If you want to make sure that the code itself works, you could create a test program with synthetic two-class data and check both whether you get a good initial segmentation and at what level of 'bimodality' the code breaks down.
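A minimal sketch of such a test (my own illustration), using the MET function from the question:
separation = 80;                                 % distance between the class means
img = [ 80 + 15*randn(200, 100), ...             % "background" half
        80 + separation + 15*randn(200, 100) ];  % "foreground" half
img = min(max(round(img), 0), 255);              % clip to the 8-bit range
level = MET(img);                                % or any thresholding function under test
figure, imshow(img > level)                      % the right half should come out white
Shrinking separation (or widening the class spread) shows where the method starts to fail.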
I think your code is not fully correct. You use the absolute histogram of the image instead of the relative histogram used in the paper. In addition, your code is rather inefficient, as it computes two histograms per candidate threshold. I implemented the algorithm myself; maybe someone can make use of it:
function [ optimalThreshold, J ] = kittlerMinimimErrorThresholding( img )
%KITTLERMINIMIMERRORTHRESHOLDING Compute an optimal image threshold.
% Computes the Minimum Error Threshold as described in
%
% 'J. Kittler and J. Illingworth, "Minimum Error Thresholding," Pattern
% Recognition 19, 41-47 (1986)'.
%
% The image 'img' is expected to have integer values from 0 to 255.
% 'optimalThreshold' holds the found threshold. 'J' holds the values of
% the criterion function.
%Initialize the criterion function
J = Inf * ones(255, 1);
%Compute the relative histogram
histogram = double(histc(img(:), 0:255)) / size(img(:), 1);
%Walk through every possible threshold. However, T is interpreted
%differently than in the paper. It is interpreted as the lower boundary of
%the second class of pixels rather than the upper boundary of the first
%class. That is, an intensity of value T is treated as being in the same
%class as higher intensities rather than lower intensities.
for T = 1:255
%Split the histogram at the threshold T.
histogram1 = histogram(1:T);
histogram2 = histogram((T+1):end);
%Compute the number of pixels in the two classes.
P1 = sum(histogram1);
P2 = sum(histogram2);
%Only continue if both classes contain at least one pixel.
if (P1 > 0) && (P2 > 0)
%Compute the standard deviations of the classes.
mean1 = sum(histogram1 .* (1:T)') / P1;
mean2 = sum(histogram2 .* (1:(256-T))') / P2;
sigma1 = sqrt(sum(histogram1 .* (((1:T)' - mean1) .^2) ) / P1);
sigma2 = sqrt(sum(histogram2 .* (((1:(256-T))' - mean2) .^2) ) / P2);
%Only compute the criterion function if both classes contain at
%least two intensity values.
if (sigma1 > 0) && (sigma2 > 0)
%Compute the criterion function.
J(T) = 1 + 2 * (P1 * log(sigma1) + P2 * log(sigma2)) ...
- 2 * (P1 * log(P1) + P2 * log(P2));
end
end
end
%Find the minimum of J.
[~, optimalThreshold] = min(J);
optimalThreshold = optimalThreshold - 0.5;
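A short usage example (illustrative; 'coins.png' is just a sample 8-bit grayscale image that ships with the Image Processing Toolbox):
img = double(imread('coins.png'));               % 8-bit grayscale image as doubles in [0, 255]
[t, J] = kittlerMinimimErrorThresholding(img);   % t is the optimal threshold
bw = img > t;                                    % foreground mask
figure
subplot(1, 2, 1), imshow(uint8(img)), title('input')
subplot(1, 2, 2), imshow(bw), title(sprintf('threshold = %.1f', t))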