Histogram Equalization method without use of histeq - matlab

I am new to Matlab and am trying to implement code that performs the same function as histeq without actually using that function. With my code, the image colour changes drastically when it should not change much. The average intensity in the image (ranging between 0 and 255) is 105.3196. The image is of an open-source pollen particle.
Any help would be much appreciated, the sooner the better. Please keep any help simple, as my Matlab understanding is limited. Thanks.
clc;
clear all;
close all;

pollenJpg = imread('pollen.jpg', 'jpg');
greyscalePollen = rgb2gray(pollenJpg);
histEqPollen = histeq(greyscalePollen);
averagePollen = mean2(greyscalePollen)
sizeGreyScalePollen = size(greyscalePollen);
rowsGreyScalePollen = sizeGreyScalePollen(1,1);
columnsGreyScalePollen = sizeGreyScalePollen(1,2);

for i = (1:rowsGreyScalePollen)
    for j = (1:columnsGreyScalePollen)
        if (greyscalePollen(i,j) > averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) + (0.1 * averagePollen);
            if (greyscalePollen(i,j) > 255)
                greyscalePollen(i,j) = 255;
            end
        elseif (greyscalePollen(i,j) < averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) - (0.1 * averagePollen);
            if (greyscalePollen(i,j) < 0)
                greyscalePollen(i,j) = 0;
            end
        end
    end
end

figure;
imshow(pollenJpg);
title('Original Image');
figure;
imshow(greyscalePollen);
title('Attempted Histogram Equalization of Image');
figure;
imshow(histEqPollen);
title('True Histogram Equalization of Image');

To implement the equalisation algorithm described on the Wikipedia page, follow these steps:
Decide on a binSize to group greyscale values. (This is tweakable: the larger the bin, the further the result deviates from the ideal case, but too small a bin can cause problems on real images.)
Then, calculate the probability of a pixel being a shade of grey:
pixelCount = imageWidth * imageHeight
histogram = all zero
for each pixel in image at coordinates i, j
    histogram[floor(pixel / binSize) + 1] += 1 / pixelCount  // 1-based arrays, not 0-based
    // Note a technicality here: you may need to write special code
    // to handle pixels with value 255, because they will fall into
    // their own bin. Or instead use rounding with an offset.
The histogram in this calculation is scaled (divided by the pixel count) so that the values make sense as probabilities. You can of course factor the division out of the for loop.
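In Matlab the same scaled histogram can be computed without the loop. A minimal sketch, assuming a uint8 greyscale image img and ten bins of width binSize = 255/10 (the names here are illustrative):
binSize = 255 / 10;
idx = floor(double(img) / binSize) + 1;   % 1-based bin index for every pixel
idx(idx > 10) = 10;                       % fold the value-255 edge case into the last bin
histogram = accumarray(idx(:), 1, [10 1]) / numel(img);  % scaled so the bins sum to 1
% (note: the variable name histogram shadows a built-in function; fine for a sketch)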
Now you need to calculate the cumulative sum of this:
histogramSum = all zero  // length of histogramSum must be one bigger than histogram
for i = 1 .. length(histogram)
    histogramSum[i + 1] = histogramSum[i] + histogram[i]
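In Matlab this whole loop is just a cumulative sum; a one-line sketch following the same convention (leading zero, one element longer than histogram):
histogramSum = [0; cumsum(histogram(:))];   % histogramSum(1) = 0, histogramSum(end) = 1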
Now you have to invert this function, and this is the tricky part. It is best not to calculate an explicit inverse, but to calculate it on the spot and apply it to the image. The basic idea is to search for the pixel value in histogramSum (find the closest index below), and then do a linear interpolation between that index and the next one.
for each pixel in image at coordinates i, j
    hIndex = findIndex(pixel, histogramSum)  // you have to write findIndex; it should be simple
    equalisationFactor = (pixel - histogramSum[hIndex]) / (histogramSum[hIndex + 1] - histogramSum[hIndex]) * binSize
    // This above is the linear interpolation step.
    // Notice the technicality that you need to handle:
    // histogramSum[hIndex + 1] may be out of bounds.
    equalisedImage[i, j] = pixel * equalisationFactor
Edit: without drilling into the maths I can't be 100% sure, but I think that division-by-zero errors are possible. These can occur if one bin is empty, so that consecutive sums are equal. So you need special code to handle this case too. The best you can do is take the value of the factor as halfway between hIndex and hIndex + n, where n is the highest value for which histogramSum[hIndex + n] == histogramSum[hIndex].
And that should be it, once you have dealt with all the technicalities.
The above algorithm is slow (especially in the findIndex step). You may be able to optimize it with a special lookup data structure. But only do that when it's working, and only if necessary.
One more thing about your Matlab code: the rows and columns are inverted. Because of the symmetry in the algorithm, the result is the same, but it can cause puzzling bugs in other algorithms, and be very confusing if you examine pixel values during debugging. In the pseudocode above I used them the same as you, though.
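For reference, in the special case of 256 bins (one bin per grey level) no interpolation is needed, and the whole procedure collapses to a few vectorised Matlab lines. A minimal sketch, assuming a uint8 greyscale image img:
counts = accumarray(double(img(:)) + 1, 1, [256 1]);  % histogram, one bin per grey level
cdf = cumsum(counts) / numel(img);                    % cumulative distribution, in [0,1]
lut = uint8(round(255 * cdf));                        % map each grey level through the CDF
equalised = lut(double(img) + 1);                     % apply as a lookup table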

Relatively few (5) lines of code can do this. I used a low contrast file called 'pollen.jpg' that I found at http://commons.wikimedia.org/wiki/File%3ALepismium_lorentzianum_pollen.jpg
I read the image in using your code, ran all of the above, and then did the following:
% find the indices of the pixels sorted by intensity:
[gv, gi] = sort(greyscalePollen(:));
% create a table of "approximately equal" intensity values:
N = numel(gv);
newVals = repmat(0:255, [ceil(N/256) 1]);
% perform the lookup:
% the pixels in sorted order get new values from the "equal bins" table:
newImg = zeros(size(greyscalePollen));
newImg(gi) = newVals(1:N);
% if the size of the image doesn't divide evenly into 256, the last bin
% will have slightly fewer pixels in it than the others
When I run this algorithm and then create a composite of the four images (original, your attempt, my attempt, and histeq), the result is quite convincing. The images are not exactly identical - I believe that is because the Matlab histeq routine ignores all pixels with value 0. Since this version is fully vectorized it is also pretty fast (although not nearly as fast as histeq, which is about 15 times faster on my image).
EDIT: a bit of explanation might be in order. The repmat command I use to create the newVals matrix creates a matrix that looks like this:
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
...
0 1 2 3 4 ... 255
Since Matlab stores matrices in column-major order (the first index varies fastest), if you read this matrix with a single index (as I do in the line newVals(1:N)), you access first all the zeros, then all the ones, etc.:
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 ...
So - when I know the indices of the pixels in the order of their intensity (as returned by the second argument of the sort command, which I called gi), then I can easily assign the value 0 to the first N/256 pixels, the value 1 to the next N/256 etc, with the command I used:
newImg(gi) = newVals(1:N);
I hope this makes the code a little easier to understand.

Related

How to create an adjacency/joint probability matrix in matlab

From a binary matrix, I want to calculate a kind of adjacency/joint probability matrix (I'm not quite sure how to label it, so feel free to rename it).
For example, I start with this matrix:
A = [1 1 0 1 1;
     1 0 0 1 1;
     0 0 0 1 0]
I want to produce this output:
Output = [ 1  4/5 1/5;
          4/5  1  1/5;
          1/5 1/5  1 ]
Basically, for each pair of rows, I want to calculate the proportion of positions where they agree (1 and 1, or 0 and 0). A row always agrees with itself, hence the 1s along the diagonal. No matter how many more columns (j) are added, the result is still 3x3; an extra row (i) would make it 4x4.
I like to think of the rows (i) of A as persons and the columns (j) as questions, so the final output is a 3x3 (persons by persons) matrix.
I am having some trouble with this in Matlab. If you could please point me in the right direction, that would be fabulous.
So, you can do this in two parts.
bothOnes = A*A';
gives you a matrix showing how many 1s each pair of rows share, and
bothZeros = (1-A)*(1-A)';
gives you a matrix showing how many 0s each pair of rows share.
If you just add them up, you get how many elements they share of either type:
bothSame = A*A' + (1-A)*(1-A)';
Then just divide by the row length to get the desired fractional representation:
output = (A*A' + (1-A)*(1-A)') / size(A, 2);
That should get you there.
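A quick check on a small example (values chosen by me, not from the question):
A = [1 0 1; 1 1 1];
output = (A*A' + (1-A)*(1-A)') / size(A, 2)
% returns [1 2/3; 2/3 1] - the two rows agree in 2 of 3 positions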
Note that this only works if A contains only 1's and 0's, but it can be adapted for other cases.
Here are some alternatives, assuming A can only contain 0 and 1:
If you have the Statistics Toolbox:
result = 1-squareform(pdist(A, 'hamming'));
Manual approach with implicit expansion:
result = mean(permute(A, [1 3 2])==permute(A, [3 1 2]), 3);
Using bitwise operations. This is a more esoteric approach, and is only valid if A has at most 53 columns, due to floating-point limitations:
t = bin2dec(char(A+'0'));               % convert each row from binary to decimal
u = bitxor(t, t.');                     % pairwise bitwise xor, via implicit expansion
v = mean(dec2bin(u, size(A,2))-'0', 2); % fraction of differing bits per pair of rows
result = 1 - reshape(v, size(A,1), []); % reshape to obtain the result

How do you apply a custom median filter with threshold?

I am trying to create a custom median filter for my image processing where for a 3x3 neighbourhood, the central pixel (being changed) is excluded. My kernel is therefore
1 1 1
1 0 1
1 1 1
But I want to change the central pixel to the median of the surrounding pixels only if its value deviates from them by more than some threshold value. E.g. if the pixel is more than 10 times the median of the surrounding pixels, then the central pixel value is changed to the median.
I've looked at using ordfilt2 and I can create a median filter with it. But I am not sure how I can implement the threshold condition. I am essentially trying to remove any outliers within my image which meet the threshold condition within my kernel.
Thanks for any help.
You don't have a single function for doing that, but ordfilt2 is a good start.
N = uint8([1 1 1 ; 1 0 1 ; 1 1 1]); % neighborhood, faster with integer class
J = (ordfilt2(I,4,N) + ordfilt2(I,5,N))/2; % median of even set
M = I>J+10; % put here your threshold method
Out = I;
Out(M) = J(M);
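If you want the multiplicative test from the question (central pixel more than 10 times the neighbourhood median) rather than an additive one, a hedged variant, casting to double first so the (a+b)/2 averaging cannot saturate in uint8:
Id = double(I);
J = (ordfilt2(Id,4,N) + ordfilt2(Id,5,N))/2;  % median of the 8 neighbours
M = Id > 10*J;                                % ratio threshold from the question
Out = Id;
Out(M) = J(M);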
Rem: question already asked here, but without any good answer IMO.
I suggest the following approach:
% define the input
A = repmat(1:5,5,1);
% step 1: median filtering, ignoring the central pixel
fun = @(x) median([x(1:ceil(length(x(:))/2-1)), x(ceil(length(x(:))/2+1):end)]);
filteredA = nlfilter(A,[3 3],fun);
% step 2: change each pixel only if it is 10 times bigger or smaller than the median
result = A;
changeMask = (A./filteredA)>10 | (A./filteredA)<0.1;
result(changeMask) = filteredA(changeMask);

Calculating the Local Ternary Pattern of an image?

I am calculating the Local Ternary Pattern of an image. My code is given below. Am I going in the right direction or not?
function [ I3 ] = LTP(I2)
m = size(I2,1);
n = size(I2,2);
for i = 2:m-1
    for j = 2:n-1
        J0 = I2(i,j);
        I3(i-1,j-1) = I2(i-1,j-1) > J0;
    end
end
I2 is the image LTP is applied to.
This isn't quite correct. Here's an example of LTP given a 3 x 3 image patch and a threshold t:
[Figure: worked LTP example on a 3 x 3 window with centre intensity 34 and threshold t = 5 (source: hindawi.com)]
A pixel in the window is assigned 0 when its intensity lies between c - t and c + t, where c is the intensity of the centre pixel. Because the intensity in the centre of this window is 34 and t = 5, the range is [29,39]. Any values above 39 get assigned 1 and any values below 29 get assigned -1.
Once you determine the ternary codes, you split them up into upper and lower patterns. For the upper pattern, any value of -1 becomes 0. For the lower pattern, any value of -1 becomes 1, and any value that was 1 in the original window becomes 0. The final pattern is read starting from the east location with respect to the centre (row 2, column 3), then going around counter-clockwise. Therefore, you should modify your function so that you output both the lower and the upper pattern of your image.
Let's write a corrected version of your code. Bear in mind that I will not give an optimised version; let's get a basic algorithm working, and it will be up to you to optimise it. As such, change your code to something like this, bearing in mind everything discussed above. BTW, your function is not defined properly: you can't put spaces in your function and variable names, because each word between the spaces is interpreted as a separate variable or function, and that's not what you want. Assuming your neighbourhood size is 3 x 3 and your image is grayscale, try something like this:
function [ltp_upper, ltp_lower] = LTP(im, t)

    %// Get the dimensions
    rows = size(im,1);
    cols = size(im,2);

    %// Reordering vector - essentially for getting binary strings
    reorder_vector = [8 7 4 1 2 3 6 9];

    %// For the upper and lower LTP patterns
    ltp_upper = zeros(size(im));
    ltp_lower = zeros(size(im));

    %// For each pixel in our image, ignoring the borders...
    for row = 2 : rows - 1
        for col = 2 : cols - 1
            cen = double(im(row,col)); %// Get centre - cast to double for the range arithmetic

            %// Get neighbourhood - cast to double for better precision
            pixels = double(im(row-1:row+1, col-1:col+1));

            %// Get ranges and determine LTP
            out_LTP = zeros(3, 3);
            low = cen - t;
            high = cen + t;
            out_LTP(pixels < low) = -1;
            out_LTP(pixels > high) = 1;
            out_LTP(pixels >= low & pixels <= high) = 0;

            %// Get upper and lower patterns
            upper = out_LTP;
            upper(upper == -1) = 0;
            upper = upper(reorder_vector);

            lower = out_LTP;
            lower(lower == 1) = 0;
            lower(lower == -1) = 1;
            lower = lower(reorder_vector);

            %// Convert to binary character strings, then use bin2dec
            %// to get the decimal representations
            upper_bitstring = char(48 + upper);
            ltp_upper(row,col) = bin2dec(upper_bitstring);

            lower_bitstring = char(48 + lower);
            ltp_lower(row,col) = bin2dec(lower_bitstring);
        end
    end
end
Let's go through this code slowly. First, I get the dimensions of the image so that I can iterate over each pixel; bear in mind that I'm assuming the image is grayscale. Once I do this, I allocate space to store the upper and lower LTP patterns per pixel, as we need to output these to the user. I have decided to ignore the border pixels: wherever the pixel neighbourhood would go out of bounds, that location is skipped.
Now, for each valid pixel within the borders of the image, we extract our pixel neighbourhood. I convert it to double precision to allow for negative differences, as well as for better precision. I then calculate the low and high ranges and create an LTP pattern following the guidelines we talked about above.
Once I calculate the LTP pattern, I create two versions of it, upper and lower, where any values of -1 get mapped to 0 for the upper pattern and to 1 for the lower pattern. Also, for the lower pattern, any values that were 1 in the original window get mapped to 0. After this, I extract the bits in the order I laid out: starting from the east, going counter-clockwise. That's the purpose of reorder_vector, as it lets us extract exactly those locations. These locations now become a 1D vector.
This 1D vector is important, as we now need to convert it into a character string so that we can use bin2dec to convert the value into a decimal number. These numbers for the upper and lower LTPs are what are finally used for the output, and we place them in the corresponding positions of both output variables.
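As a quick usage example (hypothetical; it assumes a greyscale test image such as the cameraman.tif that ships with the Image Processing Toolbox):
im = imread('cameraman.tif');   % any greyscale image will do
[up, lo] = LTP(im, 5);          % threshold t = 5, as in the example above
figure; imshow(up, []); title('Upper LTP');
figure; imshow(lo, []); title('Lower LTP');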
This code is untested, so it'll be up to you to debug this if it doesn't work to your specifications.
Good luck!

Matlab im2col function

This is a bit more of a general question, but no matter how many times I read the description of MATLAB's im2col function, I cannot fully understand it. I need it for computational efficiency, because MATLAB is awful with nested for loops. Here's what I'm attempting to do, but using nested for loops:
%{
[TRIMMED]=TM_FILTER(IMAGE, FILTER_SIZE, PERCENT)
Takes a 2-D array and returns the array, filtered with a
square trimmed mean filter with length/width equal to FILTER_SIZE and percent equal to PERCENT.
%}
function [trimmed]=tm_filter(image, filter_size, percent)
if rem(filter_size, 2)==0                       % make sure filter has a center pixel
    error('filter size must be odd numbered');  % error out if the number is even
end
if percent > 100 || percent < 0
    error('Percentage must be in [0, 100]');
end
[rows, columns]=size(image);  % figure out pixels needed
n=(filter_size-1)/2;          % n is the pixel distance from the center pixel to the boundaries
padded=padarray(image, [n,n], 128);  % pad the boundaries so the center pixel always has a full neighborhood
for i=1+n:rows                % rows from first non-padded entry to last non-padded entry
    for j=1+n:columns         % columns from first non-padded entry to last non-padded entry
        subimage=padded(i-n:i+n, j-n:j+n);  % neighborhood the same size as the filter
        average=trimmean(trimmean(subimage, percent), percent);  % trimmed mean of the neighborhood, as a trimmed mean of a vector of trimmed means
        trimmed(i-n, j-n)=average;  % store the averaged pixel in the new array
    end
end
trimmed=uint8(trimmed);       % convert image to gray levels from 0-255
Here is the code you want: note the entire nested loop was replaced with a single statement.
%{
[TRIMMED]=TM_FILTER(IMAGE, FILTER_SIZE, PERCENT)
Takes a 2-D array and returns the array, filtered with a
square trimmed mean filter with length/width equal to FILTER_SIZE and percent equal to PERCENT.
%}
function [trimmed]=tm_filter(image, filter_size, percent)
if rem(filter_size, 2)==0                       % make sure filter has a center pixel
    error('filter size must be odd numbered');  % error out if the number is even
end
if percent > 100 || percent < 0
    error('Percentage must be in [0, 100]');
end
trimmed = uint8(reshape(trimmean(im2col(image, [filter_size filter_size]), percent), size(image)-filter_size+1));
Explanation:
the im2col function turns each filter_size-by-filter_size region into a column. Your trimmean call can then operate on all of the regions (columns) in a single operation, which is much more efficient than extracting each block in turn. Also note this requires only a single application of trimmean; in your original you first apply it to the columns, then again to the rows, which causes a more severe trim than I think you intended (excluding 50% the first time, then 50% again, feels like excluding 75%; not exactly true, but you get my point). Also, you will find that changing the order of operations (rows then columns vs. columns then rows) changes the result, because the filter is nonlinear.
For example
im = reshape(1:9, [3 3]);
disp(im2col(im, [2 2]))
results in
1 2 4 5
2 3 5 6
4 5 7 8
5 6 8 9
since you took each of the 4 possible blocks of 2x2 from this matrix:
1 4 7
2 5 8
3 6 9
and turned them into columns.
Note - with this technique (applied to the unpadded image) you do lose some pixels on the edge; your method added padding so that every pixel (even those on the edge) has a complete neighbourhood, and as such the filter returns an image that is the same size as the original (but it's not clear what the effect of padding/filtering will be near the margin, and especially in the corner, where almost 75% of the pixels in the neighbourhood are fixed at 128 and likely to dominate the behaviour).
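If you do want to keep the original image size, one hedged option (a sketch, assuming the Image Processing Toolbox for padarray and im2col) is to pad first and then apply the vectorised filter; replicate padding avoids the 128 bias noted above:
n = (filter_size-1)/2;
padded = padarray(image, [n n], 'replicate');   % replicate instead of a constant 128
cols = im2col(padded, [filter_size filter_size], 'sliding');
trimmed = uint8(reshape(trimmean(cols, percent), size(image)));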
Why im2col? Why not nlfilter?
>> trimmed = nlfilter( image, [filter_size filter_size],...
       @(x) trimmean( trimmean(x, percent), percent ) );
Are you sure you process the entire image? i and j only go up to rows and columns respectively, but when you update trimmed you access i-n and j-n. What about the last n rows and columns?
Why do you apply trimmean twice for each block? Isn't it more appropriate to process the block at once, as in trimmean( x(:), percent )?
I believe the results of trimmean( trimmean(x, percent), percent ) will be different from those of trimmean( x(:), percent ). Have you given it a thought?
A small remark: it is best not to use i and j as variable names in Matlab, since they shadow the built-in imaginary unit.

Extremely large weighted average

I am using 64-bit Matlab with 32 GB of RAM (just so you know).
I have a file (vector) of 1.3 million numbers (integers). I want to make another vector of the same length, where each point is a weighted average of the entire first vector, weighted by the inverse distance from that position (actually it's distance^-0.1, not ^-1, but for example purposes). I can't use Matlab's 'filter' function, because it can only average things before the current point, right? To explain more clearly, here's an example with 3 elements:
data = [ 2 6 9 ]
weights = [ 1 1/2 1/3; 1/2 1 1/2; 1/3 1/2 1 ]
results=data*weights= [ 8 11.5 12.666 ]
i.e.
8 = 2*1 + 6*1/2 + 9*1/3
11.5 = 2*1/2 + 6*1 + 9*1/2
12.666 = 2*1/3 + 6*1/2 + 9*1
So each point in the new vector is the weighted average of the entire first vector, weighting by 1/(distance from that position+1).
I could just remake the weight vector for each point and calculate the results vector element by element, but that requires 1.3 million iterations of a for loop, each containing 1.3 million multiplications. I would rather use straight matrix multiplication, multiplying a 1 x 1.3M vector by a 1.3M x 1.3M matrix, which works in theory, but I can't load a matrix that large.
I am then trying to make the matrix using a shell script and index it in matlab so only the relevant column of the matrix is called at a time, but that is also taking a very long time.
I don't have to do this in Matlab, so any advice about working with such large numbers and computing averages would be appreciated. Since I am using a weight of ^-0.1, not ^-1, it does not drop off that fast: the millionth point is still weighted at 0.25, compared to the original point's weighting of 1, so I can't just cut the computation off as the distance gets big, either.
Hope this was clear enough?
Here is the code for the answer below (so it can be formatted?):
data = load('/Users/mmanary/Documents/test/insertion.txt');
data=data.';
total=length(data);
x=1:total;
datapad=[zeros(1,total) data];
weights = ([(total+1):-1:2 1:total]).^(-.4);
weights = weights/sum(weights);
Fdata = fft(datapad);
Fweights = fft(weights);
Fresults = Fdata .* Fweights;
results = ifft(Fresults);
results = results(1:total);
plot(x,results)
The only sensible way to do this is with FFT convolution, which is what underpins the filter function and similar. It is very easy to do manually:
% Simulate some data
n = 10^6;
x = randi(10,1,n);
xpad = [zeros(1,n) x];
% Setup smoothing kernel
k = 1 ./ [(n+1):-1:2 1:n];
% FFT convolution
Fx = fft(xpad);
Fk = fft(k);
Fxk = Fx .* Fk;
xk = ifft(Fxk);
xk = xk(1:n);
Takes less than half a second for n=10^6!
This is probably not the best way to do it, but with lots of memory you could definitely parallelise the process.
You can construct sparse matrices consisting of the entries of your weight matrix which have value i^(-1) (where i = 1 .. 1.3 million), multiply each of them with your original vector, and sum all the results together.
So for your example the product would essentially be:
a = rand(3,1);
b1 = [1 0 0;
      0 1 0;
      0 0 1];
b2 = [0 1 0;
      1 0 1;
      0 1 0] / 2;
b3 = [0 0 1;
      0 0 0;
      1 0 0] / 3;
c = sparse(b1) * a + sparse(b2) * a + sparse(b3) * a;
Of course, you wouldn't construct the sparse matrices this way. If you wanted fewer iterations of the inner loop, you could put more than one of the i's in each matrix.
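For instance, one hedged way to build such banded weight matrices directly in sparse form is spdiags; a sketch for the 3-element example above:
n = 3;
offsets = -(n-1):(n-1);                          % diagonal offsets, i.e. signed distances
vals = 1 ./ (abs(offsets) + 1);                  % weight for each distance
W = spdiags(repmat(vals, n, 1), offsets, n, n);  % sparse banded weight matrix
c = W * a;                                       % same result as b1*a + b2*a + b3*a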
Look into the parfor loop in MATLAB: http://www.mathworks.com/help/toolbox/distcomp/parfor.html
I can't use matlab's 'filter' function, because it can only average
things before the current point, right?
That is not correct. You can always add or remove samples (i.e. zeros) from your data or from the filtered data. Since filtering with filter (you can also use conv, by the way) is a linear operation, this won't change the result: it's like adding zeros, filtering, and then removing samples, and linearity lets you swap the order of those steps.
Anyway, in your example, you can take the averaging kernel to be:
weights = 1 ./ [3 2 1 2 3]; % this kernel introduces a delay of 2 samples
and then simply:
result = filter(weights, 1, [data, zeros(1,3)]); % or conv(data, weights)
% remove the delay introduced by the kernel
result = result(3:end-1);
You considered only 2 options: multiplying a 1.3M x 1.3M matrix with a vector once, or multiplying two 1.3M vectors 1.3M times.
But you can divide your weight matrix into as many sub-matrices as you wish and do a multiplication of an n x 1.3M matrix with the vector 1.3M/n times.
I assume the fastest will be when there is the smallest number of iterations and n is such that it creates the largest sub-matrix that fits in your memory, without making your computer start swapping pages to your hard drive.
With your memory size you should start with n = 5000.
You can also make it faster by using parfor (with n divided by the number of processors).
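A hedged sketch of that blocking idea (assumes data is a 1 x N row vector and implicit expansion, i.e. R2016b or later; each block needs roughly n*N*8 bytes, so shrink n if memory is tight):
N = numel(data);
n = 5000;                           % block height; tune to available memory
results = zeros(1, N);
for s = 1:n:N
    e = min(s+n-1, N);
    d = abs((s:e).' - (1:N));       % distance matrix for this block of rows
    W = (d + 1) .^ (-0.1);          % corresponding block of the weight matrix
    results(s:e) = (W * data.').';  % block of rows times the data vector
end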
The brute force way will probably work for you, with one minor optimisation in the mix.
The ^-0.1 operations that create the weights will take a lot longer than the + and * operations that compute the weighted means, but you can re-use the weights across all of the million weighted-mean operations. The algorithm becomes:
Create a weightings vector with all the weights any computation could need:
weights = (abs(-n:n) + 1).^-0.1
For each element in the vector:
Index the relevant portion of the weights vector to treat the current element as the 'centre'.
Perform the weighted mean with that weights portion and the entire vector. This can be done with a fast vector dot-multiply followed by a scalar division.
The main loop does n^2 additions and n^2 multiplications. With n equal to 1.3 million, that's about 3.4 trillion operations. A single core of a modern 3 GHz CPU can do, say, 6 billion additions/multiplications a second, so that comes out to around 10 minutes. Add time for indexing the weights vector and overheads, and I still estimate you could come in under half an hour.
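A minimal sketch of that loop, assuming data is a 1 x n row vector and the (distance + 1)^-0.1 weighting from the question:
n = numel(data);
w = (abs(-(n-1):(n-1)) + 1) .^ (-0.1);     % every weight any element will need, computed once
results = zeros(1, n);
for i = 1:n
    wi = w(n+1-i : 2*n-i);                 % slice so that distance 0 lands on element i
    results(i) = (data * wi.') / sum(wi);  % vector dot-multiply, then scalar division
end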