Quantizing an image in MATLAB

So I'm trying to figure out why my code doesn't seem to display a properly uniformly quantized image with 4 levels.
Q1 = uint8(zeros(ROWS, COLS, CHANNELS));
for band = 1 : CHANNELS,
    for x = 1 : ROWS,
        for y = 1 : COLS,
            Q1(ROWS,COLS,CHANNELS) = uint8(double(I1(ROWS,COLS,CHANNELS) / 2^4)*2^4);
        end
    end
end
No5 = figure;
imshow(Q1);
title('Part D: K = 4');

It is because you are not quantizing. You divide a double by 16, then multiply by 16 again, and then convert to uint8. The right way to quantize is to divide by 16, throw away the decimals, then multiply by 16:
Q1 = uint8(floor(I1 / 16) * 16);
In the code snippet above, I assume I1 is a double. Convert it to double if it's not: I1 = double(I1).
Note that you don't need the loops; MATLAB will apply the operation to each element in the matrix.
Note also that if I1 is an integer type, you can do something like this:
Q1 = (uint8(I1) / 16) * 16;
but this is actually equivalent to replacing the floor by round in the first example. This means you get an uneven distribution of values: 0-7 are mapped to 0, 8-23 are mapped to 16, etc. and 248-255 are all mapped to 255 (not a multiple of 16!). That is, 8 numbers are mapped to 0, and 8 are mapped to 255, instead of mapping 16 numbers to each possible multiple of 16 as the floor code does.
The 16 in the code above means that there will be 256/16=16 different grey levels in the output. If you want a different number, say n, use 256/n instead of 16.
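For the K = 4 case the original question is after, the same pattern applies; a minimal sketch, assuming I1 is a uint8 image:
K = 4;                                  % number of grey levels wanted
step = 256 / K;                         % 64 grey values collapse onto each level
Q1 = uint8(floor(double(I1) / step) * step);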

It's because you are using ROWS, COLS, CHANNELS as your indices; they should be x, y, band. Also, the final multiplication by 2^4 has to come after the uint8 cast, otherwise no rounding ever takes place.
In practice you should avoid for loops in MATLAB, since matrix operations are much faster. Replace your code with
Q1 = uint8(double(I1/2^4))*2^4;
No5 = figure;
imshow(Q1);
title('Part D: K = 4');

Related

MATLAB does not show the correct calculation

I'm trying to do a simple algorithm on an image, but I figured out there is a problem. Here is a piece of my code:
I = imread('C:/test.bmp' ,'bmp');
z = I(1, 1, 1);
c = I(1, 1, 2);
b = I(1, 1, 3);
v = z+c+b
This piece of code should print the sum of the R, G and B values of the first pixel. When I print each of R, G and B individually, they are 123, 43 and 140, but the value of v (the sum) is always 255! I tried it with different pics, but I get the same result!
I have no idea why this is happening; happy to get help.
Depending on the colour depth and file format, imread outputs different data types. In this case it is uint8, with a maximum value of 255. The sum of uint8 variables is also a uint8, so it saturates: 123 + 43 + 140 = 306 gets clamped to 255.
The easiest way to deal with this is to use im2double when loading the image. Every colour channel is then scaled to a double value in the [0,1] interval.
I = im2double(imread('C:/test.bmp' ,'bmp'));
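If you'd rather keep the image as uint8, a minimal alternative (using the variable names from the question) is to widen only the values you sum:
v = double(z) + double(c) + double(b)   % 123 + 43 + 140 = 306, no saturation
% or sum the three channels of the first pixel in one go:
v = sum(double(I(1,1,:)))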

Random numbers with constant sum in MATLAB

[I'm splitting a population number into different matrices and want to test my code using random numbers for now.]
Quick question guys and thanks for your help in advance -
If I use:
100*rand(9,1)
What is the best way to make these 9 numbers add to 100?
I'd like 9 random numbers between 0 and 100 that add up to 100.
Is there an inbuilt command that does this because I can't seem to find it.
I see this mistake so often: the suggestion that, to generate random numbers with a given sum, one just takes a uniform random set and scales it. But is the result truly uniformly random if you do it that way?
Try this simple test in two dimensions. Generate a huge random sample, then scale them to sum to 1. I'll use bsxfun to do the scaling.
xy = rand(10000000,2);
xy = bsxfun(@times,xy,1./sum(xy,2));
hist(xy(:,1),100)
If they were truly uniformly random, then the x coordinate would be uniform, as would the y coordinate. Any value would be equally likely to happen. In effect, for two points to sum to 1 they must lie along the line that connects the two points (0,1), (1,0) in the (x,y) plane. For the points to be uniform, any point along that line must be equally likely.
Clearly uniformity fails when I use the scaling solution. Any point on that line is NOT equally likely. We can see the same thing happening in 3-dimensions. See that in the 3-d figure here, the points in the center of the triangular region are more densely packed. This is a reflection of non-uniformity.
xyz = rand(10000,3);
xyz = bsxfun(@times,xyz,1./sum(xyz,2));
plot3(xyz(:,1),xyz(:,2),xyz(:,3),'.')
view(70,35)
box on
grid on
Again, the simple scaling solution fails. It simply does NOT produce truly uniform results over the domain of interest.
Can we do better? Well, yes. A simple solution in 2-d is to generate a single random number that designates the distance along the line connecting the points (0,1) and (1,0).
t = rand(10000000,1);
xy = t*[0 1] + (1-t)*[1 0];
hist(xy(:,1),100)
It can be shown that ANY point along the line defined by the equation x+y = 1, in the unit square, is now equally likely to have been chosen. This is reflected by the nice, flat histogram.
Does the sort trick suggested by David Schwartz work in n-dimensions? Clearly it does so in 2-d, and the figure below suggests that it does so in 3-dimensions. Without deep thought on the matter, I believe that it will work for this basic case in question, in n-dimensions.
n = 10000;
uv = [zeros(n,1),sort(rand(n,2),2),ones(n,1)];
xyz = diff(uv,[],2);
plot3(xyz(:,1),xyz(:,2),xyz(:,3),'.')
box on
grid on
view(70,35)
One can also download the function randfixedsum from the File Exchange, Roger Stafford's contribution. This is a more general solution that generates truly uniform random sets in the unit hypercube with any given fixed sum. Thus, to generate random sets of points that lie in the unit 3-cube, subject to the constraint that they sum to 1.25...
xyz = randfixedsum(3,10000,1.25,0,1)';
plot3(xyz(:,1),xyz(:,2),xyz(:,3),'.')
view(70,35)
box on
grid on
One simple way is to pick 8 random numbers between 0 and 100. Add 0 and 100 to the list to give 10 numbers. Sort them. Then output the difference between each successive pair of numbers. For example, here's 8 random numbers between 0 and 100:
96, 38, 95, 5, 13, 57, 13, 20
So add 0 and 100 and sort.
0, 5, 13, 13, 20, 38, 57, 95, 96, 100
Now subtract:
5-0 = 5
13-5 = 8
13-13 = 0
20-13 = 7
38-20 = 18
57-38 = 19
95-57 = 38
96-95 = 1
100-96 = 4
And there you have it, nine numbers that sum to 100: 0, 1, 4, 5, 7, 8, 18, 19, 38. That I got a zero and a one was just a strange bit of luck.
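In MATLAB the whole recipe collapses to a one-liner; a sketch for 9 numbers summing to 100:
x = diff([0; sort(100*rand(8,1)); 100]);   % 9 nonnegative numbers
sum(x)                                     % 100, up to floating-point rounding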
It is not too late to give the right answer.
Let's talk about sampling X1, ..., XN in the range [0,1] such that Sum(X1, ..., XN) is equal to 1. Then you can rescale the result to 100.
This is called the Dirichlet distribution, and below is the code to sample from it. The simplest case is when all parameters are equal to 1; then all the marginal distributions of X1, ..., XN are U(0,1). In the general case, with parameters different from 1, the marginal distributions may have peaks.
----------------- taken from here ---------------------
The Dirichlet is a vector of unit-scale gamma random variables, normalized by their sum. So, with no error checking, this will get you that:
a = [1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0];   % 9 numbers to sample
n = 10000;
r = drchrnd(a,n)

function r = drchrnd(a,n)
    p = length(a);
    r = gamrnd(repmat(a,n,1),1,n,p);
    r = r ./ repmat(sum(r,2),1,p);
end
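For the question as asked, with nine equal parameters and a target sum of 100, a single draw would then look like this (a sketch reusing the drchrnd above; note that gamrnd requires the Statistics Toolbox):
r = 100 * drchrnd(ones(1,9), 1);   % 9 numbers in [0,100] that sum to 100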
Take a list of N - 1 numbers, create a list of N + 1 numbers by inserting 0 and 100, sort the list, and diff them down to a total of N numbers.

Histogram Equalization method without use of histeq

I am new to MATLAB and am trying to implement code that performs the same function as histeq, without actually using that function. In my code, the image I get changes drastically when it should not change that much. The average intensity in the image (values ranging between 0 and 255) is 105.3196. The image is of an open-source pollen particle.
Any help would be much appreciated, the sooner the better! Please keep any help simple, as my MATLAB understanding is limited. Thanks.
clc;
clear all;
close all;
pollenJpg = imread ('pollen.jpg', 'jpg');
greyscalePollen = rgb2gray (pollenJpg);
histEqPollen = histeq(greyscalePollen);
averagePollen = mean2 (greyscalePollen)
sizeGreyScalePollen = size(greyscalePollen);
rowsGreyScalePollen = sizeGreyScalePollen(1,1);
columnsGreyScalePollen = sizeGreyScalePollen(1,2);
for i = (1:rowsGreyScalePollen)
    for j = (1:columnsGreyScalePollen)
        if (greyscalePollen(i,j) > averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) + (0.1 * averagePollen);
            if (greyscalePollen(i,j) > 255)
                greyscalePollen(i,j) = 255;
            end
        elseif (greyscalePollen(i,j) < averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) - (0.1 * averagePollen);
            if (greyscalePollen(i,j) > 0)
                greyscalePollen(i,j) = 0;
            end
        end
    end
end
figure;
imshow (pollenJpg);
title ('Original Image');
figure;
imshow (greyscalePollen);
title ('Attempted Histogram Equalization of Image');
figure;
imshow (histEqPollen);
title ('True Histogram Equalization of Image');
To implement the equalisation algorithm described on the Wikipedia page, follow these steps:
Decide on a binSize to group greyscale values. (This is tweakable: the larger the bin, the further the result can drift from the ideal case, but I think choosing it too small can cause problems on real images.)
Then, calculate the probability of a pixel being a shade of grey:
pixelCount = imageWidth * imageHeight
histogram = all zero
for each pixel in image at coordinates i, j
    histogram[floor(pixel / binSize) + 1] += 1 / pixelCount // 1-based arrays, not 0-based
    // Note a technicality here: you may need to write special code
    // to handle pixels of value 255, because they can fall into a bin
    // of their own. Or instead use rounding with an offset.
The histogram in this calculation is scaled (divided by the pixel count) so that the values make sense as probabilities. You can of course factor the division out of the for loop.
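For reference, here is one way the binned histogram step might look in MATLAB. This is a sketch: img and binSize are assumptions, and h is used instead of histogram to avoid shadowing the built-in function of that name.
binSize = 16;                               % assumed bin width in grey levels
idx = floor(double(img(:)) / binSize) + 1;  % 1-based bin index; 255 lands in the last bin
h = accumarray(idx, 1, [ceil(256/binSize) 1]) / numel(img);  % scaled to probabilities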
Now you need to calculate the cumulative sum of this:
histogramSum = all zero // length of histogramSum must be one bigger than histogram
for i = 1 .. length(histogram)
    histogramSum[i + 1] = histogramSum[i] + histogram[i]
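In MATLAB the running sum collapses to a single cumsum call; continuing the sketch above, where h is the scaled histogram:
hSum = [0; cumsum(h)];   % one element longer than h; hSum(end) is 1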
Now you have to invert this function and this is the tricky part. The best is to not calculate an explicit inverse, but calculate it on the spot, and apply it on the image. The basic idea is to search for the pixel value in the histogramSum (find the closest index below), and then do a linear interpolation between the index and the next index.
for each pixel in image at coordinates i, j
    hIndex = findIndex(pixel, histogramSum) // you have to write findIndex; it should be simple
    equalisationFactor = (pixel - histogramSum[hIndex]) / (histogramSum[hIndex + 1] - histogramSum[hIndex]) * binSize
    // The line above is the linear interpolation step.
    // Notice the technicality that you need to handle:
    // histogramSum[hIndex + 1] may be out of bounds.
    equalisedImage[i, j] = pixel * equalisationFactor
Edit: without drilling into the maths, I can't be 100% sure, but I think division-by-zero errors are possible. These can occur if one bin is empty, so that consecutive sums are equal. You then need special code to handle this case too. The best you can do is take the value of the factor as halfway between hIndex and hIndex + n, where n is the largest value for which histogramSum[hIndex + n] == histogramSum[hIndex].
And that should be it, once you have dealt with all the technicalities.
The above algorithm is slow (especially in the findIndex step). You may be able to optimize it with a special lookup data structure, but only do that when it's working, and only if necessary.
One more thing about your Matlab code: the rows and columns are inverted. Because of the symmetry in the algorithm, the result is the same, but it can cause puzzling bugs in other algorithms, and be very confusing if you examine pixel values during debugging. In the pseudocode above I used them the same as you, though.
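For comparison, the most common textbook variant skips the binning and interpolation entirely and maps each grey level through the scaled cumulative histogram. A minimal vectorized sketch, assuming greyscalePollen is a uint8 image as in the question:
counts = accumarray(double(greyscalePollen(:)) + 1, 1, [256 1]);  % per-level histogram
cdf = cumsum(counts) / numel(greyscalePollen);                    % cumulative distribution
lut = uint8(round(255 * cdf));                                    % grey level -> equalised level
equalisedPollen = lut(double(greyscalePollen) + 1);               % apply as a lookup table
This is roughly what histeq itself computes, up to how it handles ties and bin counts.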
Relatively few (5) lines of code can do this. I used a low contrast file called 'pollen.jpg' that I found at http://commons.wikimedia.org/wiki/File%3ALepismium_lorentzianum_pollen.jpg
I read it in using your code, run all the above, then do the following:
% find out the index of pixels sorted by intensity:
[gv gi] = sort(greyscalePollen(:));
% create a table of "approximately equal" intensity values:
N = numel(gv);
newVals = repmat(0:255, [ceil(N/256) 1]);
% perform lookup:
% the pixels in sorted order need new values from "equal bins" table:
newImg = zeros(size(greyscalePollen));
newImg(gi) = newVals(1:N);
% if the size of the image doesn't divide into 256, the last bin will have
% slightly fewer pixels in it than the others
When I run this algorithm, and then create a composite of the four images (original, your attempt, my attempt, and histeq), you get the following:
I think it's convincing. The images are not exactly identical - I believe that is because the MATLAB histeq routine ignores all pixels with value 0. Since it is fully vectorized, it is also pretty fast (although slower than histeq, by about a factor of 15 on my image).
EDIT: a bit of explanation might be in order. The repmat command I use to create the newVals matrix creates a matrix that looks like this:
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
...
0 1 2 3 4 ... 255
Since MATLAB stores matrices in column-major ("first index varies fastest") order, if you read this matrix with a single index (as I do in the line newVals(1:N)), you access first all the zeros, then all the ones, etc.:
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 ...
So - when I know the indices of the pixels in order of their intensity (as returned by the second output of the sort command, which I called gi), I can easily assign the value 0 to the first N/256 pixels, the value 1 to the next N/256, etc., with the command I used:
newImg(gi) = newVals(1:N);
I hope this makes the code a little easier to understand.
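If you want to convince yourself the result really is flat, the Image Processing Toolbox (already required for histeq above) makes the check a two-liner:
figure;
imhist(uint8(newImg));   % should be nearly flat: about N/256 pixels per grey level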

MATLAB converting a vector of values to uint32

I have a vector containing the values 0, 1, 2 and 3. What I want to do is take the lower two bits from each set of 16 elements drawn from this vector and append them all together to get one uint32. Anyone know an easy way to do this?
Follow-up: What if the number of elements in the vector isn't an integer multiple of 16?
Here's a vectorized version:
v = floor(rand(64,1)*4);   % example data: 64 values in 0..3
nWord = size(v,1)/16;      % number of uint32 words to build
% Interleave bit 2 and bit 1 of each value, reshape to 32 bits per word,
% and weight by descending powers of two (the first element ends up in the high bits):
uint32(sum(reshape([bitget(v,2) bitget(v,1)]',[32 nWord]).*repmat(2.^(31:(-1):0)',[1 nWord])))
To refine what was suggested by Jacob in his answer and mtrw in his comment, here's the most succinct version I can come up with (given a 1-by-N variable vec containing the values 0 through 3):
value = uint32(vec(1:16)*4.^(0:15)');
This treats the first element in the array as the least-significant bit in the result. To treat the first element as the most-significant bit, use the following:
value = uint32(vec(16:-1:1)*4.^(0:15)');
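A quick worked check of the LSB convention, with hypothetical values:
vec = [1 2 3 zeros(1,13)];              % 16 values in 0..3
value = uint32(vec(1:16)*4.^(0:15)')    % 1 + 2*4 + 3*16 = 57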
EDIT: This addresses the new revision of the question...
If the number of elements in your vector isn't a multiple of 16, then the last series of numbers you extract from it will have fewer than 16 values. You will likely want to pad the higher bits of the series with zeroes to make it a 16-element vector. Depending on whether the first element in the series is the least-significant bit (LSB) or the most-significant bit (MSB), you will pad the series differently:
v = [2 3 1 1 3 1 2 2]; % A sample 8-element vector
v = [v zeros(1,8)]; % If v(1) is the LSB, set the higher bits to zero
% or...
v = [zeros(1,8) v]; % If v(1) is the MSB, again set the higher bits to zero
If you want to process the entire vector all at once, here is how you would do it (with any necessary zero-padding included) for the case when vec(1) is the LSB:
nValues = numel(vec);
nPad = mod(-nValues,16);        % zeroes needed to reach a multiple of 16
vec = [vec(:); zeros(nPad,1)];  % pad with zeroes
vec = reshape(vec,16,[])';      % reshape to an N-by-16 matrix
values = uint32(vec*4.^(0:15)');
and when vec(1) is the MSB:
nValues = numel(vec);
nRem = rem(nValues,16);
nPad = mod(16-nRem,16);                  % zeroes needed to complete the last group
vec = [vec(1:(nValues-nRem)), zeros(1,nPad), ...
       vec((nValues-nRem+1):nValues)];   % pad the high bits with zeroes
vec = reshape(vec,16,[])';               % reshape to an N-by-16 matrix
values = uint32(fliplr(vec)*4.^(0:15)');
I think you should have a look at bitget and bitshift. It should be possible to do something like this (pseudo-MATLAB code, as I haven't worked with MATLAB for a long time):
result = uint32(0);
for i = 1:16
    % shift the accumulated word left by two bits, then append the next 2-bit value
    result = bitor(bitshift(result,2), uint32(vector(i)));
end
Note that this puts the two bits of vector(1) in the highest positions of the result, so you might want to iterate i from 16 down to 1 instead if you want the opposite order.
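As a quick sanity check with hypothetical data, the loop agrees with the vectorized MSB-first one-liner from earlier:
vector = [3 1 0 2 2 0 1 3 0 0 1 1 2 2 3 3];            % 16 two-bit values
result = uint32(0);
for i = 1:16
    result = bitor(bitshift(result,2), uint32(vector(i)));
end
isequal(result, uint32(vector(16:-1:1)*4.^(0:15)'))    % returns true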