Let's say I have a 10 x 10 matrix. What I do then is iterate through the whole matrix with a 3 x 3 matrix (skipping the edges to make it easier), and from this 3 x 3 matrix I get the mean/average value of that 3 x 3 neighborhood. What I then want to do is replace the original matrix values with those new mean/average values.
Can someone please explain to me how I can do that? Some code examples would be appreciated.
What you're trying to do is called convolution. In MATLAB, you can do the following (read up more on convolution and how it's done in MATLAB).
conv2(myMatrix, ones(3)/9, 'same');
A little deciphering is in order. myMatrix is the matrix you're working on (the 10x10 one you mentioned). The ones(3)/9 command creates a so-called mask or filter kernel, which is
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
When you take this mask and move it around, multiplying the mask's entries value-by-value with the 3x3 entries of the image and then adding the results (a dot product, essentially), you get the average of the 9 values that ended up underneath the mask. So once you've placed this mask over every 3x3 segment of your matrix (image, I imagine) and replaced the value in the middle by the average, you get the result of that command. You're welcome to experiment with it further. The 'same' flag simply means that the matrix you're getting back is the same size as your original one. This is important because, as you've realized yourself, there are multiple ways of dealing with the edges.
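As a quick illustration (the small test matrix here is made up for the example):
myMatrix = magic(4); % any small test matrix
out = conv2(myMatrix, ones(3)/9, 'same');
% Each interior entry of out is the mean of the corresponding 3x3 block:
mean(mean(myMatrix(1:3,1:3))) % equals out(2,2)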
To do this, you need to keep the original intact until you have computed all the means, which means that if you implement it with a loop, you have to store the averages in a different matrix. To get the borders too, the easiest way is to copy the whole original matrix into the new matrix, although strictly only the borders need to be copied.
This average3x3 function copies the input Matrix to AveragedMatrix and then goes through all the elements that are not on any border, calculates the mean of each 3x3 neighborhood, and stores it in the corresponding element of AveragedMatrix.
function [AveragedMatrix] = average3x3(Matrix)
AveragedMatrix = Matrix; % Copy input so that the border values are preserved
if ((size(Matrix, 1) < 3) || (size(Matrix, 2) < 3))
    fprintf('Matrix is too small, minimum matrix size is 3x3.\n');
    return
end
for RowIndex = 2:(size(Matrix, 1)-1)
    Rows = RowIndex-1:RowIndex+1;
    for ColIndex = 2:(size(Matrix, 2)-1)
        Columns = ColIndex-1:ColIndex+1;
        % Mean of the 3x3 neighborhood, read from the untouched input
        AveragedMatrix(RowIndex,ColIndex) = mean(mean(Matrix(Rows,Columns)));
    end
end
return
To use this function, you can try:
A = randi(10,10);
AveragedA = average3x3(A);
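As a sanity check, the interior of the loop-based result should agree with the conv2 approach from the answer above:
fromConv = conv2(A, ones(3)/9, 'same');
max(max(abs(AveragedA(2:9,2:9) - fromConv(2:9,2:9)))) % should be zero (up to floating point)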
The 6-faces method is a very cheap and fast way to calibrate an accelerometer like my MPU6050; there is a great description of the method here.
I ran 6 tests to calibrate the accelerometer based on the g vector.
After that I built up a matrix in which each row stores the mean of each axis expressed in m/s^2; thanks to this question I automatically calculated the mean of each column in each file.
The tests were performed in random order: I tested all six positions, but I didn't follow any particular path.
So I manually sorted the final matrix, based on the ordering of the Y matrix, my reference matrix.
The Y elements are fixed.
The manually sorted matrix is the following.
Here is how I manually sorted the matrix:
meanmatrix=[ax ay az];
mean1=meanmatrix(1,:);
mean2=meanmatrix(2,:);
mean3=meanmatrix(3,:);
mean4=meanmatrix(4,:);
mean5=meanmatrix(5,:);
mean6=meanmatrix(6,:);
meanmatrix = [mean1; mean3; mean2; mean4; mean6; mean5];
Given the constraint of the Y matrix, how can I sort my matrix without knowing a priori which test is stored in which row?
Assuming that the bias on the accelerometer is not huge, you can look at each row of your matrix and see which of the rows in your Y matrix it matches.
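For reference, Y here is the matrix of ideal readings. For the 6-faces method it might look like the sketch below; this exact layout is an assumption, so use whatever ordering your reference actually has:
g = 9.81; % gravitational acceleration in m/s^2
% Hypothetical reference: each test holds one axis at +/- g and the others at zero
Y = [ g  0  0;
     -g  0  0;
      0  g  0;
      0 -g  0;
      0  0  g;
      0  0 -g];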
sorted_meanmatrix = zeros(size(meanmatrix));
for rows = 1:size(Y,1)
    % Compute the squared distance to every row and pick the nearest one
    [~,index] = min(sum((meanmatrix - Y(rows,:)).^2, 2));
    sorted_meanmatrix(rows,:) = meanmatrix(index,:);
end
I have two black and white images and I need to calculate the mutual information.
Image 1 = X
Image 2 = Y
I know that the mutual information can be defined as:
MI = entropy(X) + entropy(Y) - JointEntropy(X,Y)
MATLAB already has built-in functions to calculate the entropy but not to calculate the joint entropy. I guess the true question is: How do I calculate the joint entropy of two images?
Here is an example of the images I'd like to find the joint entropy of:
X =
0 0 0 0 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Y =
0 0 0 0 0 0
0 0 0.38 0.82 0.38 0.04
0 0 0.32 0.82 0.68 0.17
0 0 0.04 0.14 0.11 0
0 0 0 0 0 0
To calculate the joint entropy, you need to calculate the joint histogram between two images. The joint histogram is essentially the same as a normal 1D histogram, but the first dimension logs intensities for the first image and the second dimension logs intensities for the second image. This is very similar to what is commonly referred to as a co-occurrence matrix. Location (i,j) in the joint histogram tells you how many pixel locations we have encountered where the first image has intensity i and the second image has intensity j.
What is important is that this logs how many times we have seen each pair of intensities at corresponding locations. For example, a joint histogram count of (7,3) = 2 means that while scanning both images, there were a total of 2 locations where the first image had an intensity of 7 and the second image, at the same corresponding location, had an intensity of 3.
Constructing a joint histogram is very simple to do.
First, create a 256 x 256 matrix (assuming your images are of unsigned 8-bit integer type) and initialize it to all zeroes. Also, make sure that both of your images are the same size (width and height).
Once you do that, take a look at the first pixel of each image, which we will denote as the top left corner. Specifically, take a look at the intensities for the first and second image at this location. The intensity of the first image will serve as the row while the intensity of the second image will serve as the column.
Find this location in the matrix and increment this spot in the matrix by 1.
Repeat this for the rest of the locations in your image.
After you're done, divide all entries by the total number of elements in either image (remember they should be the same size). This will give us the joint probability distribution between both images.
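Written out directly as a loop, following the steps above (a straightforward sketch, assuming im1 and im2 are equally-sized uint8 images):
jointHistogram = zeros(256, 256);
for k = 1:numel(im1)
    r = double(im1(k)) + 1; %// row: intensity in the first image (+1 for 1-based indexing)
    c = double(im2(k)) + 1; %// column: intensity in the second image
    jointHistogram(r,c) = jointHistogram(r,c) + 1;
end
jointProb = jointHistogram / numel(im1); %// joint probability distribution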
One would be inclined to do this with for loops, but as it is commonly known, for loops are notoriously slow and should be avoided if at all possible. However, you can easily do this in MATLAB in the following way without loops. Let's assume that im1 and im2 are the first and second images you want to compare to. What we can do is convert im1 and im2 into vectors. We can then use accumarray to help us compute the joint histogram. accumarray is one of the most powerful functions in MATLAB. You can think of it as a miniature MapReduce paradigm. Simply put, each data input has a key and an associated value. The goal of accumarray is to bin all of the values that belong to the same key and do some operation on all of these values. In our case, the "key" would be the intensity values, and the values themselves are the value of 1 for every intensity value. We would then want to add up all of the values of 1 that map to the same bin, which is exactly how we'd compute a histogram. The default behaviour for accumarray is to add all of these values. Specifically, the output of accumarray would be an array where each position computes the sum of all values that mapped to that key. For example, the first position would be the summation of all values that mapped to the key of 1, the second position would be the summation of all values that mapped to the key of 2 and so on.
However, for the joint histogram, you want to figure out which values map to the same intensity pair of (i,j), and so the keys here would be a pair of 2D coordinates. As such, any intensities that have an intensity of i in the first image and j in the second image in the same spatial location shared between the two images go to the same key. Therefore in the 2D case, the output of accumarray would be a 2D matrix where each element (i,j) contains the summation of all values that mapped to key (i,j), similar to the 1D case that was mentioned previously which is exactly what we are after.
In other words:
indrow = double(im1(:)) + 1;
indcol = double(im2(:)) + 1; %// Should be the same size as indrow
jointHistogram = accumarray([indrow indcol], 1);
jointProb = jointHistogram / numel(indrow);
With accumarray, the first input contains the keys and the second input the values. In general, the second input is an array with the same number of rows as the first input, but if every key has the same value you can simply supply a scalar constant, which is what I've done here with 1. Also, take special note of the first two lines. There will inevitably be an intensity of 0 in your image, but because MATLAB starts indexing at 1, we need to offset both arrays by 1.
Now that we have the joint histogram, it's really simple to calculate the joint entropy. It is similar to the entropy in 1D, except now we are just summing over the entire joint probability matrix. Bear in mind that it will be very likely that your joint histogram will have many 0 entries. We need to make sure that we skip those or the log2 operation will be undefined. Let's get rid of any zero entries now:
indNoZero = jointHistogram ~= 0;
jointProb1DNoZero = jointProb(indNoZero);
Take notice that I searched the joint histogram instead of the joint probability matrix. This is because the joint histogram consists of whole numbers while the joint probability matrix will lie between 0 and 1. Because of the division, I want to avoid comparing any entries in this matrix with 0 due to numerical roundoff and instability. The above will also convert our joint probability matrix into a stacked 1D vector, which is fine.
As such, the joint entropy can be calculated as:
jointEntropy = -sum(jointProb1DNoZero.*log2(jointProb1DNoZero));
If my understanding of calculating entropy for an image in MATLAB is correct, it should calculate the histogram / probability distribution over 256 bins, so you can certainly use that function here with the joint entropy that was just calculated.
What if we have floating-point data instead?
So far, we have assumed that the images that you have dealt with have intensities that are integer-valued. What if we have floating point data? accumarray assumes that you are trying to index into the output array using integers, but we can still certainly accomplish what we want with this small bump in the road. What you would do is simply assign each floating point value in both images to have a unique ID. You would thus use accumarray with these IDs instead. To facilitate this ID assigning, use unique - specifically the third output from the function. You would take each of the images, put them into unique and make these the indices to be input into accumarray. In other words, do this instead:
[~,~,indrow] = unique(im1(:)); %// Change here
[~,~,indcol] = unique(im2(:)); %// Change here
%// Same code
jointHistogram = accumarray([indrow indcol], 1);
jointProb = jointHistogram / numel(indrow);
indNoZero = jointHistogram ~= 0;
jointProb1DNoZero = jointProb(indNoZero);
jointEntropy = -sum(jointProb1DNoZero.*log2(jointProb1DNoZero));
Note that with indrow and indcol, we are directly assigning the third output of unique to these variables and then using the same joint entropy code that we computed earlier. We also don't have to offset the variables by 1 as we did previously because unique will assign IDs starting at 1.
Aside
You can actually calculate the histograms or probability distributions for each image individually using the joint probability matrix. If you wanted to calculate the histograms / probability distributions for the first image, you would simply accumulate all of the columns for each row. To do it for the second image, you would simply accumulate all of the rows for each column. As such, you can do:
histogramImage1 = sum(jointHistogram, 2); %// Sum across the columns: marginal histogram of the first image (rows)
histogramImage2 = sum(jointHistogram, 1); %// Sum across the rows: marginal histogram of the second image (columns)
After, you can calculate the entropy of both of these by yourself. To double check, make sure you turn both of these into PDFs, then compute the entropy using the standard equation (like above).
How do I finally compute Mutual Information?
To finally compute Mutual Information, you're going to need the entropy of the two images. You could use MATLAB's built-in entropy function, but it assumes that there are 256 unique levels. You probably want to apply this for the case of there being N distinct levels instead of 256, so use what we did above: build the joint histogram, compute the histograms for each image with the aside code above, and then compute the entropy for each image. You simply repeat the entropy calculation that was used jointly, but apply it to each image individually:
%// Find non-zero elements for first image's histogram
indNoZero = histogramImage1 ~= 0;
%// Extract them out and get the probabilities
prob1NoZero = histogramImage1(indNoZero);
prob1NoZero = prob1NoZero / sum(prob1NoZero);
%// Compute the entropy
entropy1 = -sum(prob1NoZero.*log2(prob1NoZero));
%// Repeat for the second image
indNoZero = histogramImage2 ~= 0;
prob2NoZero = histogramImage2(indNoZero);
prob2NoZero = prob2NoZero / sum(prob2NoZero);
entropy2 = -sum(prob2NoZero.*log2(prob2NoZero));
%// Now compute mutual information
mutualInformation = entropy1 + entropy2 - jointEntropy;
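To tie everything together, here is the whole pipeline applied to the example X and Y from the question. Since Y is floating point, this uses the unique-based variant; it is a sketch, and with this few distinct values the numbers are only illustrative:
X = [0 0 0 0 0 0; 0 0 1 1 0 0; 0 0 1 1 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0];
Y = [0 0 0    0    0    0; ...
     0 0 0.38 0.82 0.38 0.04; ...
     0 0 0.32 0.82 0.68 0.17; ...
     0 0 0.04 0.14 0.11 0; ...
     0 0 0    0    0    0];
%// Joint histogram and joint entropy via unique IDs
[~,~,indrow] = unique(X(:));
[~,~,indcol] = unique(Y(:));
jointHistogram = accumarray([indrow indcol], 1);
jointProb = jointHistogram / numel(indrow);
p = jointProb(jointHistogram ~= 0);
jointEntropy = -sum(p.*log2(p));
%// Marginal histograms, per-image entropies, and mutual information
histogramImage1 = sum(jointHistogram, 2);
histogramImage2 = sum(jointHistogram, 1);
p1 = histogramImage1(histogramImage1 ~= 0) / sum(histogramImage1);
p2 = histogramImage2(histogramImage2 ~= 0) / sum(histogramImage2);
mutualInformation = -sum(p1.*log2(p1)) - sum(p2.*log2(p2)) - jointEntropy;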
Hope this helps!
I have a task: compute the mean value of sub-images extracted from an input image I. Let me explain. I have an image I (e.g. 9x9) and a window (e.g. of size 3x3). The window runs from the top-left to the bottom-right of the image, extracting many sub-images from the input image. I want to compute the mean value of each of these sub-images. Could you suggest some MATLAB code to compute it?
This is my solution. But it does not work.
First, I defined the window as a Gaussian.
Second, the Gaussian kernel runs from top-left to bottom-right via the convolution function. (Note that a Gaussian kernel must be used.)
Third, compute the mean value of each sub-window.
%% Given Image I,Defined a Gaussian Kernel
sigma=3;
K=fspecial('gaussian',round(2*sigma)*2+1,sigma);
KI=conv2(I,K,'same');
%% mean value
mean(KI)
The problem here is that the mean values of all the sub-images should form a matrix with a size similar to image I, because each pixel in the image yields a sub-image. But my code returns only a single value. What is the problem?
If it is your desire to compute the average value in each sub-image once you filter your image with a Gaussian kernel, simply convolve your image with a mean or average filter. This will collect sub-images within your original image and for each output location, you will compute the average value.
Going with your initial assumption that the mask size is 3 x 3, simply use conv2 in conjunction with a 3 x 3 mask that has all 1/9 coefficients. In other words:
%// Your code
%% Given Image I,Defined a Gaussian Kernel
sigma=3;
K=fspecial('gaussian',round(2*sigma)*2+1,sigma);
KI=conv2(I,K,'same');
%// New code
mask = (1/9)*ones(3,3);
out = conv2(KI, mask, 'same');
Each location in out will give you what the average value was for each 3 x 3 sub-image in your Gaussian filtered result.
You can also create the averaging mask by using fspecial with the flag average and specifying the size / width of your mask. Given that you are already using it in your code, you already know of its existence. As such, you can also do:
mask = fspecial('average', 3);
The above code assumes the width and height of the mask are the same, and so it'll create a 3 x 3 mask of all 1/9 coefficients.
Aside
conv2 is designed for general 2D signals. If you are looking to filter an image, I recommend you use imfilter instead. You should have access to it, since fspecial is part of the Image Processing Toolbox, and so is imfilter. imfilter is known to be much more efficient than conv2, and also makes use of Intel Integrated Performance Primitives (Intel IPP) if available (basically if you are running MATLAB on a computer that has an Intel processor that supports IPP). Therefore, you should really perform your filtering this way:
%// Your code
%% Given Image I,Defined a Gaussian Kernel
sigma=3;
K=fspecial('gaussian',round(2*sigma)*2+1,sigma);
KI=imfilter(I,K,'replicate'); %// CHANGE
%// New code
mask = fspecial('average', 3);
out = imfilter(KI, mask, 'replicate'); %// CHANGE
The replicate flag is for handling the boundary conditions. When your mask goes out of bounds of the original image, replicate simply replicates the border of each side of your image so that the mask can fit comfortably within the image when performing your filtering.
Edit
Given your comment, you want to extract the subimages that are seen in KI. You can use the very powerful im2col function that's part of the Image Processing Toolbox. You call it like so:
B = im2col(A,[m n]);
A will be your input image, and B will be a matrix of size mn x L, where L is the total number of possible sub-images that exist in your image and m and n are the height and width of each sub-image respectively. im2col works by taking each sub-image that exists in your image and warping it so that it fits into a single column in B. Therefore, each column in B contains a single sub-image warped into a column. You can then use each column in B for your GMM modelling.
However, im2col only returns valid sub-images that don't go out of bounds. If you want to handle the edge and corner cases, you'll need to pad the image first. Use padarray to facilitate this padding. Therefore, to do what you're asking, we simply do:
Apad = padarray(KI, [1 1], 'replicate');
B = im2col(Apad, [3 3]);
The first line of code will pad the image so that you have a 1 pixel border that surrounds the image. This will allow you to extract 3 x 3 sub-images at the border locations. I use the replicate flag so that you can simply duplicate the border pixels. Next, we use im2col so that you get 3 x 3 sub-images that are then stored in B. As such, B will become a 9 x L matrix where each column gives you a 3 x 3 sub-image.
Be mindful that im2col warps these columns in column-major format. That means that for each sub-image that you have, it takes each column in the sub-image and stacks them on top of each other giving you a 9 x 1 column. You will have L total sub-images, and these are concatenated horizontally to produce a 9 x L matrix. Also, keep in mind that the sub-images are read top-to-bottom, then left-to-right as this is the nature of MATLAB operating in column-major order.
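Tying this back to the mean-value task, a short sketch using the KI from above: since each 3 x 3 sub-image is one column of B, the per-sub-image means are simply the column means.
Apad = padarray(KI, [1 1], 'replicate');
B = im2col(Apad, [3 3]); %// 9 x L, one sub-image per column
subMeans = mean(B, 1); %// 1 x L vector of sub-image means
meanImage = reshape(subMeans, size(KI)); %// back to the shape of the original image
This should match the imfilter averaging result from earlier, up to floating-point error.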
I have a big matrix (1,000 rows and 50,000 columns). I know some columns are correlated (the rank is only 100) and I suspect some columns are even proportional. How can I find such proportional columns? (one way would be looping corr(M(:,j),M(:,k))), but is there anything more efficient?
If I am understanding your problem correctly, you wish to determine those columns in your matrix that are linearly dependent, which means that one column is proportional or a scalar multiple of another. There's a very basic algorithm based on QR Decomposition. For QR decomposition, you can take any matrix and decompose it into a product of two matrices: Q and R. In other words:
A = Q*R
Q is an orthogonal matrix whose columns are unit vectors, such that multiplying Q by its transpose gives you the identity matrix (Q^{T}*Q = I). R is a right-triangular or upper-triangular matrix. One very useful result from Golub and Van Loan's 1996 book, Matrix Computations, is that a matrix is full rank if all of the diagonal elements of R are non-zero. Because of finite floating-point precision on computers, we have to pick a tolerance and check which values on the diagonal of R are greater than it. If a diagonal value exceeds the tolerance, the corresponding column is an independent column. We can simply take the absolute value of all of the diagonals, then check whether they're greater than some tolerance.
We can slightly modify this so that we would search for values that are less than the tolerance which would mean that the column is not independent. The way you would call up the QR factorization is in this way:
[Q,R] = qr(A, 0);
Q and R are what I just talked about, and you specify the matrix A as input. The second parameter 0 requests an economy-size version of Q and R: for a tall matrix (more rows than columns), say 8 x 5, Q is returned as 8 x 5 and R as 5 x 5, instead of the full 8 x 8 Q and 8 x 5 R. It also makes the third output below a permutation vector rather than a permutation matrix.
Now, what we actually need is this style of invocation:
[Q,R,E] = qr(A, 0);
In this case, E would be a permutation vector, such that:
A(:,E) = Q*R;
The reason why this is useful is because it orders the columns of Q and R in such a way that the first column of the re-ordered version is the most probable column to be independent, followed by the remaining columns in decreasing order of "strength". As such, E tells you how likely each column is to be linearly independent, and those "strengths" appear in decreasing order along the diagonal of R corresponding to this re-ordering. In fact, the strength is measured relative to the first diagonal element. What you should do is check which diagonals of R in the re-arranged version are greater than this first coefficient scaled by the tolerance, and use these to determine which of the corresponding columns are linearly independent.
However, I'm going to flip this around and determine the point in the R diagonals where the last possible independent columns are located. Anything after this point would be considered linearly dependent. This is essentially the same as checking to see if any diagonals are less than the threshold, but we are using the re-ordering of the matrix to our advantage.
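To see the re-ordering in action, here is a quick illustrative check: magic(4) is a rank-deficient 4 x 4 matrix (its rank is 3), and the sorted diagonal of R makes the rank drop visible.
[Q, R, E] = qr(magic(4), 0);
abs(diag(R)) %// three sizeable values followed by one that is essentially zero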
In any case, putting what I have mentioned in code, this is what you should do, assuming your matrix is stored in A:
%// Step #1 - Define tolerance
tol = 1e-10;
%// Step #2 - Do QR Factorization
[Q, R, E] = qr(A,0);
diag_R = abs(diag(R)); %// Extract diagonals of R
%// Step #3 -
%// Find the LAST column in the re-arranged result that
%// satisfies the linearly independent property
r = find(diag_R >= tol*diag_R(1), 1, 'last');
%// Step #4
%// Anything after r means that the columns are
%// linearly dependent, so let's output those columns to the
%// user
idx = sort(E(r+1:end));
Note that E is a permutation vector, and I'm assuming you want the output sorted, which is why we sort the indices that come after the point where the columns stop being linearly independent. Let's test out this theory. Suppose I have this matrix:
A =
1 1 2 0
2 2 4 9
3 3 6 7
4 4 8 3
You can see that the first two columns are the same, and the third column is a multiple of the first or second. You would just have to multiply either one by 2 to get the result. If we run through the above code, this is what I get:
idx =
1 2
If you also take a look at E, this is what I get:
E =
4 3 2 1
This means that column 4 was the "best" linearly independent column, followed by column 3. Because we returned [1,2] as the linearly dependent columns, this means that columns 1 and 2 that both have [1,2,3,4] as their columns are a scalar multiple of some other column. In this case, this would be column 3 as columns 1 and 2 are half of column 3.
Hope this helps!
Alternative Method
If you don't want to do any QR factorization, then I can suggest reducing your matrix to its reduced row echelon form and determining the basis vectors that make up the column space of your matrix A. Essentially, the column space gives you the minimum set of columns that can generate all possible linear combinations of output vectors if you were to apply this matrix using matrix-vector multiplication. You can determine which columns form the column space by using the rref command. Requesting a second output from rref gives you a vector of indices telling you which columns are linearly independent, i.e. which columns form a basis of the column space for that matrix. As such:
[B,RB] = rref(A);
RB gives you the indices of the columns that form the column space, and B is the reduced row echelon form of the matrix A. Because you want to find those columns that are linearly dependent, you want to return the set of indices that are not among these locations. As such, define a linearly increasing vector from 1 to as many columns as you have, then use RB to remove those entries from this vector; the result is the linearly dependent columns you are seeking. In other words:
[B,RB] = rref(A);
idx = 1 : size(A,2);
idx(RB) = [];
By using the above code, this is what we get:
idx =
2 3
Bear in mind that we identified columns 2 and 3 as linearly dependent, which makes sense as both are multiples of column 1. The identification of which columns are linearly dependent is different from the QR factorization method, as QR orders the columns based on how likely each particular column is to be linearly independent. Because columns 1 to 3 are related to each other, it shouldn't matter which of them you return; any one of them forms a basis for the other two.
I haven't tested the efficiency of rref in comparison to the QR method. I suspect rref does Gauss-Jordan row elimination, whose complexity compares unfavourably with QR factorization, as that algorithm is highly optimized and efficient. Because your matrix is rather large, I would stick to the QR method, but try rref anyway and see what you get!
If you normalize each column by dividing by its maximum, proportionality becomes equality. This makes the problem easier.
Now, to test for equality you can use a single (outer) loop over columns; the inner loop is easily vectorized with bsxfun. For greater speed, compare each column only with the columns to its right.
Also to save some time, the result matrix is preallocated to an approximate size (you should set that). If the approximate size is wrong, the only penalty will be a little slower speed, but the code works.
As usual, tests for equality between floating-point values should include a tolerance.
The result is given as a 2-column matrix (S), where each row contains the indices of two columns that are proportional.
A = [1 5 2 6 3 1
2 5 4 7 6 1
3 5 6 8 9 1]; %// example data matrix
tol = 1e-6; %// relative tolerance
A = bsxfun(@rdivide, A, max(A,[],1)); %// normalize A
C = size(A,2);
S = NaN(round(C^1.5),2); %// preallocate result to *approximate* size
used = 0; %// number of rows of S already used
for c = 1:C
    ind = c + find(all(abs(bsxfun(@rdivide, A(:,c), A(:,c+1:end))-1) < tol));
    u = numel(ind); %// number of columns proportional to column c
    S(used+1:used+u,1) = c; %// fill in result
    S(used+1:used+u,2) = ind; %// fill in result
    used = used + u; %// update number of results
end
S = S(1:used,:); %// remove unused rows of S
In this example, the result is
S =
1 3
1 5
2 6
3 5
meaning column 1 is proportional to column 3; column 1 is proportional to column 5 etc.
If the determinant of a 2 x 2 matrix is zero, then its columns are proportional.
There are 50,000 columns, or 2 times 25,000 columns.
It is easiest to compute the determinant of a 2 by 2 matrix.
Hence, to find the proportional columns, the long way is to:
define the big matrix in a spreadsheet;
apply the determinant formula to a square, beginning from the first square on the left;
copy it across every row and column to arrive at the answer in the next sheet;
find the columns with zero determinants.
This is quite basic, not very time consuming, and should be result oriented.
Manual, or an Excel spreadsheet (efficient).
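For what it's worth, a MATLAB translation of this determinant idea for a single pair of columns might look like the following sketch (the column indices j and k are placeholders); note that testing all pairs of a 50,000-column matrix this way would be far slower than the approaches above:
% Columns j and k of M are proportional exactly when every 2 x 2
% determinant det([M(a,j) M(a,k); M(b,j) M(b,k)]) vanishes.
tol = 1e-10;
x = M(:,j);
y = M(:,k);
D = x * y.' - y * x.'; % D(a,b) is the 2 x 2 determinant for rows a and b
isProportional = max(abs(D(:))) < tol;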
A process of mine produces 256 binary (logical) matrices, one for each level of a grayscale source image.
Here is the code :
so = imread('bio_sd.bmp');
co = rgb2gray(so);
for l = 1:256
    bw = (co == l); % Binary image from level l of original image
    be = ordfilt2(bw, 1, ones(3, 3)); % Convolution filter
    bl(int16(l)) = {bwlabel(be, 8)}; % Component labelling
end
I obtain a cell array of 256 binary images. Such a binary image contains 1s where the source-image pixel at that location has the same level as the index of the binary image.
I.e. the binary image bl{12} contains 1s where the source image has pixels with level 12.
I'd like to create a new image by combining the 256 binary matrices back into a grayscale image.
But I'm very new to MATLAB and I wonder if someone can help me code it :)
PS: I'm using MATLAB R2010a student edition.
This whole answer applies only to the original form of the question.
Let's assume you can get all your binary matrices together into a big n-by-m-by-256 matrix binaryimage(x,y,greyvalue). Then you can calculate your final image as
newimage = sum(bsxfun(@times, binaryimage, reshape(0:255,1,1,[])), 3)
The magic here is done by bsxfun, which multiplies the 3D (n x m x 256) binaryimage with the 1 x 1 x 256 vector containing the grey values 0...255. This produces a 3D image where, for fixed x and y, the vector (y,x,:) contains many zeros and (for the one grey value G where the binary image contained a 1) the value G. So now you only need to sum over this third dimension to get an n x m image.
Update
To test that this works correctly, lets go the other way first:
fullimage=floor(rand(100,200)*256);
figure;imshow(fullimage,[0 255]);
is a random greyscale image. You can calculate the 256 binary matrices like this:
binaryimage = false([size(fullimage) 256]);
for i = 1:size(fullimage,1)
    for j = 1:size(fullimage,2)
        binaryimage(i,j,fullimage(i,j)+1) = true;
    end
end
We can now apply the solution I gave above
newimage = sum(bsxfun(@times, binaryimage, reshape(0:255,1,1,[])), 3);
and verify that it returns the original image:
all(newimage(:)==fullimage(:))
which gives 1 (true) :-).
Update 2
You now mention that your binary images are in a cell array, I assume binimg{1:256}, with each cell containing an n x m binary array. If you can, it probably makes sense to change the code that produces this data so that it creates the 3D binary array I use above: cells are mostly useful when different cells contain data of different types, shapes or sizes.
If there are good reasons to stick with a cell array, you can convert it to a 3D array using
binaryimage = reshape(cell2mat(reshape(binimg,1,256)),n,m,256);
with n and m as used above. The inner reshape is not necessary if you already have size(binimg)==[1 256]. So to sum it up, you need to use your cell array binimg to calculate the 3D matrix binaryimage, which you can then use to calculate the newimage that you are interested in using the code at the very beginning of my answer.
Hope this helps...
What your code does...
I thought it may be best to first go through what the code you posted is actually doing, since there are a couple of inconsistencies. I'll go through each line in your loop:
bw = (co == l);
This simply creates a binary matrix bw with ones where your grayscale image co has a pixel intensity equal to the loop value l. I notice that you loop from 1 to 256, and this strikes me as odd. Typically, images loaded into MATLAB will be an unsigned 8-bit integer type, meaning that the grayscale values will span the range 0 to 255. In such a case, the last binary matrix bw that you compute when l = 256 will always contain all zeroes. Also, you don't do any processing for pixels with a grayscale level of 0. From your subsequent processing, I'm guessing you purposefully want to ignore grayscale values of 0, in which case you probably only need to loop from 1 to 255.
be = ordfilt2(bw, 1, ones(3, 3));
What you are essentially doing here with ORDFILT2 is performing a binary erosion operation. Any values of 1 in bw that have a 0 as one of their 8 neighbors will be set to 0, causing islands of ones to erode (i.e. shrink in size). Small islands of ones will disappear, leaving only the larger clusters of contiguous pixels with the same grayscale level.
bl(int16(l)) = {bwlabel(be, 8)};
Here's where you may be having some misunderstandings. Firstly, the matrices in bl are not logical matrices. In your example, the function BWLABEL will find clusters of 8-connected ones. The first cluster found will have its elements labeled as 1 in the output image, the second cluster found will have its elements labeled as 2, etc. The matrices will therefore contain positive integer values, with 0 representing the background.
Secondly, are you going to use these labeled clusters for anything? There may be further processing you do for which you need to identify separate clusters at a given grayscale intensity level, but with regard to creating a grayscale image from the elements in bl, the specific label value is unnecessary. You only need to identify zero versus non-zero values, so if you aren't using bl for anything else I would suggest that you just save the individual values of be in a cell array and use them to recreate a grayscale image.
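For example, if the labels aren't needed elsewhere, the loop could be trimmed to something like this sketch:
for l = 1:255 % level 0 is skipped and level 256 never occurs for uint8 images
    bw = (co == l); % binary image for grayscale level l
    bl{l} = ordfilt2(bw, 1, ones(3, 3)); % keep the eroded binary image itself
end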
Now, onto the answer...
A very simple solution is to concatenate your cell array of images into a 3-D matrix using the function CAT, then use the function MAX to find the indices where the non-zero values occur along the third dimension (which corresponds to the grayscale value from the original image). For a given pixel, if there is no non-zero value found along the third dimension (i.e. it is all zeroes) then we can assume the pixel value should be 0. However, the index for that pixel returned by MAX will default to 1, so you have to use the maximum value as a logical index to set the pixel to 0:
[maxValue,grayImage] = max(cat(3,bl{:}),[],3);
grayImage(~maxValue) = 0;
Note that for the purposes of displaying or saving the image you may want to change the type of the resulting image grayImage to an unsigned 8-bit integer type, like so:
grayImage = uint8(grayImage);
The simplest solution would be to iterate through each of your logical matrices in turn, multiply it by its corresponding weight, and accumulate into an output matrix which will represent your final image.
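A sketch of that idea, assuming bl is the cell array from the question (where bl{l} corresponds to gray level l and may contain labels rather than just 0s and 1s):
grayImage = zeros(size(bl{1})); % accumulator, same size as each matrix
for l = 1:numel(bl)
    grayImage = grayImage + l * (bl{l} ~= 0); % weight each level's non-zero mask by its gray value
end
grayImage = uint8(grayImage); % convert for display or saving
Since erosion only removes pixels, each pixel is non-zero in at most one of the matrices, so the sum simply reconstructs each pixel's gray level (eroded pixels end up as 0).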