Extremely large weighted average - MATLAB

I am using 64-bit MATLAB with 32 GB of RAM (just so you know).
I have a file (vector) of 1.3 million numbers (integers). I want to make another vector of the same length, where each point is a weighted average of the entire first vector, weighted by the inverse distance from that position (actually it's position^-0.1, not ^-1, but for example purposes). I can't use MATLAB's 'filter' function, because it can only average things before the current point, right? To explain more clearly, here's an example with 3 elements:
data = [ 2 6 9 ]
weights = [ 1 1/2 1/3; 1/2 1 1/2; 1/3 1/2 1 ]
results = data*weights = [ 8 11.5 12.666 ]
i.e.
8 = 2*1 + 6*1/2 + 9*1/3
11.5 = 2*1/2 + 6*1 + 9*1/2
12.666 = 2*1/3 + 6*1/2 + 9*1
So each point in the new vector is the weighted average of the entire first vector, weighting by 1/(distance from that position+1).
I could just remake the weight vector for each point and then calculate the results vector element by element, but this requires 1.3 million iterations of a for loop, each of which contains 1.3 million multiplications. I would rather use straight matrix multiplication, multiplying a 1x1.3M vector by a 1.3Mx1.3M matrix, which works in theory, but I can't load a matrix that large.
I then tried making the matrix with a shell script and indexing it in MATLAB so that only the relevant column of the matrix is loaded at a time, but that also takes a very long time.
I don't have to do this in MATLAB, so any advice about handling such large numbers and getting averages would be appreciated. Since I am using a weight of ^-0.1, and not ^-1, it does not drop off that fast: the millionth point is still weighted at 0.25 compared to the original point's weight of 1, so I can't just cut the weighting off as the distance gets big either.
I hope this was clear enough.
Here is the code for the answer below (posted here so it can be formatted):
data = load('/Users/mmanary/Documents/test/insertion.txt');
data = data.';
total = length(data);
x = 1:total;
% Zero-pad the data so the circular convolution acts like a linear one
datapad = [zeros(1,total) data];
% Symmetric weight kernel, (distance+1)^(-0.4), normalised to sum to 1
weights = ([(total+1):-1:2 1:total]).^(-.4);
weights = weights/sum(weights);
% Convolve via FFT and keep the first 'total' samples
Fdata = fft(datapad);
Fweights = fft(weights);
Fresults = Fdata .* Fweights;
results = ifft(Fresults);
results = results(1:total);
plot(x,results)

The only sensible way to do this is with FFT convolution, which underpins the filter function and similar tools. It is very easy to do manually:
% Simulate some data
n = 10^6;
x = randi(10,1,n);
xpad = [zeros(1,n) x];
% Setup smoothing kernel
k = 1 ./ [(n+1):-1:2 1:n];
% FFT convolution
Fx = fft(xpad);
Fk = fft(k);
Fxk = Fx .* Fk;
xk = ifft(Fxk);
xk = xk(1:n);
Takes less than half a second for n=10^6!
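As a quick check, the same construction reproduces the 3-element example from the question (a throwaway verification using the 1/(distance+1) weights):
data = [2 6 9];
n = numel(data);
datapad = [zeros(1,n) data];
k = 1 ./ [(n+1):-1:2 1:n];            % 1/(distance+1) kernel
r = ifft(fft(datapad) .* fft(k));
r(1:n)                                % [8 11.5 12.6667], matching data*weights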

This is probably not the best way to do it, but with lots of memory you could definitely parallelize the process.
You can construct sparse matrices consisting of the entries of your weight matrix that have value 1/i (where i = 1 .. 1.3 million), multiply each of them with your original vector, and sum all the results together.
So for your example the product would be essentially:
a = rand(3,1);
b1 = [1 0 0;
      0 1 0;
      0 0 1];
b2 = [0 1 0;
      1 0 1;
      0 1 0] / 2;
b3 = [0 0 1;
      0 0 0;
      1 0 0] / 3;
c = sparse(b1)*a + sparse(b2)*a + sparse(b3)*a;
Of course, you wouldn't construct the sparse matrices this way. If you wanted fewer iterations of the inner loop, you could put more than one of the i's in each matrix.
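For example, the shifted-diagonal matrices can be built directly in sparse form with spdiags; a minimal serial sketch of the summation, assuming data is the input vector and the question's 1/(distance+1) weights:
n = numel(data);
result = data(:);                 % i = 1 term: weight 1 on the main diagonal
for i = 2:n
    % 1/i on the +(i-1) and -(i-1) off-diagonals, built directly as sparse
    Bi = spdiags(repmat(1/i, n, 2), [-(i-1), i-1], n, n);
    result = result + Bi * data(:);
end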
Look into the parfor loop in MATLAB: http://www.mathworks.com/help/toolbox/distcomp/parfor.html

"I can't use matlab's 'filter' function, because it can only average things before the current point, right?"
That is not correct. You can always add or remove samples (i.e., zeros) from your data or from the filtered data. Since filtering with filter (you can also use conv, by the way) is a linear operation, this does not change the result: padding with zeros and later trimming does nothing by itself, and linearity lets you reorder the steps as add samples -> filter -> remove samples.
Anyway, in your example, you can take the averaging kernel to be:
weights = 1 ./ [3 2 1 2 3]; % this kernel introduces a delay of 2 samples
and then simply:
result = filter(weights, 1, [data, zeros(1,3)]); % or conv(data, weights)
% removing the delay introduced by the kernel
result = result(3:end-1);
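For the full-size problem in the first question, the same idea might look like this (a sketch assuming data is a row vector; the exponent follows the question, and the normalisation step from the question's code is omitted):
n = numel(data);
w = (abs(-(n-1):(n-1)) + 1) .^ (-0.1);   % symmetric kernel, centre = distance 0
result = conv(data, w, 'same');          % 'same' keeps the n central samples,
                                         % which removes the kernel delay automatically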

You considered only 2 options:
multiplying the full 1.3Mx1.3M matrix with a vector once, or multiplying two 1.3M-element vectors 1.3M times.
But you can divide your weight matrix into as many sub-matrices as you wish, and multiply an nx1.3M matrix with the vector 1.3M/n times.
I assume the fastest option will be the one with the smallest number of iterations, i.e. the n that creates the largest sub-matrix that fits in your memory without making your computer start swapping pages to the hard drive.
With your memory size you should start with n = 5000.
You can also make it faster by using parfor (with n divided by the number of processors).
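A rough sketch of that blocked multiplication (serial version; data and the (distance+1)^-0.1 weights come from the question, n = 5000 from the suggestion above):
N = numel(data);                  % 1.3M
n = 5000;                         % rows of the weight matrix per block; tune to RAM
results = zeros(N, 1);
for s = 1:n:N
    e = min(s + n - 1, N);
    % Build only an n-by-N slice of the weight matrix (implicit expansion, R2016b+)
    W = (abs((s:e)' - (1:N)) + 1) .^ (-0.1);
    results(s:e) = W * data(:);
end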

The brute force way will probably work for you, with one minor optimisation in the mix.
The ^-0.1 operations to create the weights will take a lot longer than the + and * operations to compute the weighted-means, but you re-use the weights across all the million weighted-mean operations. The algorithm becomes:
Create a weightings vector with all the weights any computation would need:
weights = (abs(-n:n) + 1).^-0.1 % +1 as in the question; avoids Inf at distance 0 and complex values for negative bases
For each element in the vector:
Index the relevant portion of the weights vector to consider the current element as the 'centre'.
Perform the weighted-mean with the weights portion and the entire vector. This can be done with a fast vector dot-multiply followed by a scalar division.
The main loop does n^2 multiplications and additions. With n equal to 1.3 million that's 3.4 trillion operations. A single core of a modern 3 GHz CPU can do, say, 6 billion additions/multiplications a second, so that comes out to around 10 minutes. Add time for indexing the weights vector and overheads, and I still estimate you could come in under half an hour.
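A sketch of those steps (names are illustrative; data is assumed to be a 1xn row vector, with the question's (distance+1)^-0.1 weights):
n = numel(data);
w = (abs(-(n-1):(n-1)) + 1) .^ (-0.1);      % step 1: all weights any element needs
results = zeros(1, n);
for i = 1:n
    wi = w(n-i+1 : 2*n-i);                  % window with element i at the centre
    results(i) = (data * wi.') / sum(wi);   % dot-multiply, then scalar division
end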


Draw non-full matrix of random numbers

I am doing a Monte-Carlo simulation, where each repetition requires the sum or product of a random number of random variables. My problem is how to do this efficiently as the entire simulation should be as vectorized as possible.
For example, say we want to take the sum of 5, 10 and 3 random numbers, represented by the vector len = [5;10;3]. Then what I am currently doing is drawing a full matrix of random numbers:
A = randn(length(len),max(len));
Creating a mask of the non-needed numbers:
lenlen = repmat(len,1,max(len));
idx = repmat(1:max(len),length(len),1);
mask = idx>lenlen;
and then I can "pad" the matrix. Since I am interested in the sum, the padding has to be zero (for the case with the product the padding would have to be one):
A(mask)=0;
To obtain:
A =
    1.7708   -1.4609   -1.5637   -0.0340    0.9796         0         0         0         0         0
    1.8034   -1.5467    0.3938    0.8777    0.6813    1.0594   -0.3469    1.7472   -0.4697   -0.3635
    1.5937   -0.1170    1.5629         0         0         0         0         0         0         0
after which I can sum them together:
B = sum(A,2);
However, I find it rather wasteful that I have to draw too many random numbers and then throw them away. In the real case I need on the order of hundreds of thousands of repetitions, and the vector len can vary a lot, i.e. it can easily happen that I have to draw two or three times as many random numbers as are actually needed.
You can generate exactly the number of random numbers required, create a grouping variable with repelem, and compute the sum of each group using accumarray:
len = [5; 10; 3];
B = accumarray(repelem(1:numel(len), len).', randn(sum(len),1));
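Broken into steps for clarity, a small expository version of that one-liner:
len = [5; 10; 3];
r = randn(sum(len), 1);              % draw exactly as many numbers as needed
g = repelem(1:numel(len), len).';    % group labels: 1 1 1 1 1 2 2 ... 3 3 3
B = accumarray(g, r);                % sum r within each group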
You could just use arrayfun or a loop. You say "efficient" and "vectorized" in the same breath, but they are not necessarily the same thing - since the new(ish) JIT compiler, loops are pretty fast in MATLAB. arrayfun is basically a loop in disguise, but means you could create B like so:
len = [5;10;3];
B = arrayfun( @(x) sum( randn(x,1) ), len );
For each element in len, this creates a vector of length len(i) and takes the sum. The output is an array with one value for each value in len.
This will certainly be a lot more memory-friendly for large and widely varying values within len. It may therefore be quicker; your mileage may vary, but it cuts out a lot of the operations you're doing.
You mention wanting to take the product sometimes, in which case use prod in place of sum.
Edit: rough and ready benchmark to compare arrayfun and a loop...
len = randi([1e3, 1e7], 100, 1);
tic;
B = arrayfun( @(x) sum( randn(x,1) ), len );
toc % ~8.77 seconds
tic;
out = zeros(size(len));
for ii = 1:numel(len)
    out(ii) = sum(randn(len(ii),1));
end
toc % ~8.80 seconds
The "advantage" of the loop over arrayfun is you can pre-generate all of the random numbers in one go, then index. This isn't necesarryily quicker because you're addressing much bigger chunks of memory, and the call to randn is the main bottleneck anyway!
tic;
out = zeros(size(len));
rnd = randn(sum(len),1);
idx = [0; cumsum(len)]; % note: cumsum is very quick (~0.001 sec here), so negligible
for ii = 1:numel(len)
    out(ii) = sum(rnd(idx(ii)+1:idx(ii+1)),1);
end
toc % ~10.2 sec! Slower because of the massive call to randn and the indexing into a large array.
As stated at the top, arrayfun and looping are basically the same under the hood, so no reason to expect a big time difference.
The sum of multiple random numbers drawn from a specific distribution is also a random number with a (different) specific distribution. Therefore you can just cut out the middleman and draw directly from the latter distribution.
In your case you are summing 5, 10 and 3 numbers drawn from a N(0,1) distribution. As explained here, the resulting distributions therefore are N(0,5), N(0,10) and N(0,3). This page explains how you can draw from non-standard normal distributions in MATLAB. As such, we can in this case generate those numbers with randn(3,1).*sqrt([5; 10; 3]).
In case you would want 1000 triples, you could then use
randn(3,1000).*sqrt([5; 10; 3])
or, before MATLAB R2016b,
bsxfun(@times, randn(3,1000), sqrt([5; 10; 3]))
which is of course very fast.
Different distributions have different summation rules, but as long as you are not summing up numbers drawn from different distributions, the rules are usually quite simple and quickly found with Google.
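A quick empirical check of the summation rule above (sizes are illustrative; uses implicit expansion, so bsxfun would be needed before R2016b):
len = [5; 10; 3];
B = randn(numel(len), 1e5) .* sqrt(len);   % 1e5 draws per row, row i ~ N(0, len(i))
var(B, 0, 2)                               % sample variances, close to [5; 10; 3]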
You can do this using a combination of cumsum and diff. The plan is:
Create all the random numbers in a single call to randn up front
Then, use cumsum to produce a vector of cumulative summations
Use cumsum on the list of number-of-samples-per-result to work out where to read out the results
We also need diff to correct for the prior summations.
Note that this method might lose accuracy if you weren't using randn for the random samples, as cumsum would then build up arithmetic rounding errors.
% We want 100 sums of random numbers
numSamples = 100;
% Here's where we define how many random samples contribute to each sum
numRandsPerSample = randi(5, 1, numSamples);
% Let's make all the random numbers in one call
allRands = randn(1, sum(numRandsPerSample));
% Use CUMSUM to build up a cumulative sum of the whole of allRands. We also
% need a leading 0 for the first sum.
allRandsCS = [0, cumsum(allRands)];
% Use CUMSUM again to pick out the places we need to pick from
% allRandsCS
endIdxs = 1 + [0, cumsum(numRandsPerSample)];
% Use DIFF to subtract the prior sums from the result.
result = diff(allRandsCS(endIdxs))
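To convince yourself the indexing is right, you can compare against a plain loop (a throwaway check, not part of the method):
check = zeros(1, numSamples);
for ii = 1:numSamples
    check(ii) = sum(allRands(endIdxs(ii) : endIdxs(ii+1) - 1));
end
max(abs(result - check))   % should be on the order of rounding noise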

Use of bsxfun with singleton expansion with matrices of three dimensions

I'm using bsxfun to vectorize an operation with singleton expansion between matrices of sizes:
MS: (nms, nls)
KS: (nks, nls)
The operation is the sum of the absolute differences between each value MS(m,l), with m in 1:nms and l in 1:nls, and every value KS(k,l), with k in 1:nks.
I achieve this through the code:
[~, nls] = size(MS);
MS = reshape(MS',1,nls,[]);
R = sum(abs(bsxfun(@minus,MS,KS)));
R is of size (nls, nms).
I want to generalize this operation to a list of samples, so the new sizes will be:
MS: (nxs, nls, nms)
KS: (nxs, nls, nks)
This can be achieved easily with a for loop that executes the first piece of code for each pair of 2-dimensional matrices, but I suspect that performance may be much better if the previous code is generalized by adding a new dimension.
R would then be of size (nxs, nls, nms).
I have tried to reshape MS to 4 dimensions with no success. Could this be done with reshaping and bsxfun?
You might need this:
% generate small dummy data
nxs = 2;
nls = 3;
nms = 4;
nks = 5;
MS = rand(nxs, nls, nms);
KS = rand(nxs, nls, nks);
R = sum(abs(bsxfun(@minus,MS,permute(KS,[1,2,4,3]))),4)
This will produce a matrix of size [2,3,4], i.e. [nxs,nls,nms]. Each element [k1,k2,k3] will correspond to
R(k1,k2,k3) == sum_k abs(MS(k1,k2,k3) - KS(k1,k2,k))
For instance, in my random run
R(2,1,3)
ans =
1.255765020150647
>> sum(abs(MS(2,1,3)-KS(2,1,:)))
ans =
1.255765020150647
The trick is to introduce singleton dimensions with permute: permute(KS,[1,2,4,3]) is of size [nxs,nls,1,nks], while MS of size [nxs,nls,nms] is implicitly also of size [nxs,nls,nms,1]: every array in MATLAB is assumed to possess a countably infinite number of trailing singleton dimensions. From here it's easy to see how you can bsxfun together arrays of size [nxs,nls,nms,1] and [nxs,nls,1,nks], respectively, to obtain one with size [nxs,nls,nms,nks]. Summing along dimension 4 seals the deal.
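As an aside, on R2016b or newer, implicit expansion makes bsxfun unnecessary; an equivalent one-liner would be:
R = sum(abs(MS - permute(KS, [1,2,4,3])), 4);   % implicit expansion, R2016b+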
I noted in a comment that it might be faster to permute the summing index into first place. It turns out that this by itself makes the code run slower. However, by reshaping the arrays to have decreasing dimension sizes, the overall performance increases (due to more favourable memory access). Compare this:
% generate larger dummy data
nxs = 20;
nls = 30;
nms = 40;
nks = 500;
MS = rand(nxs, nls, nms);
KS = rand(nxs, nls, nks);
MS2 = permute(MS,[4 3 2 1]);
KS2 = permute(KS,[3 4 2 1]);
R3 = permute(squeeze(sum(abs(bsxfun(#minus,MS2,KS2)),1)),[3 2 1]);
What I did was put the summing nks dimension in first place, and order the rest of the dimensions by decreasing size. This could be done automatically; I just didn't want to overcomplicate the example. In your use case you'll probably know the magnitudes of the dimensions anyway.
Runtimes with the above two codes: 0.07028 s for the original, 0.051162 s for the reordered one (best out of 5). Larger examples don't fit into memory for me now, unfortunately.

Find the number of zero elements in a matrix in MATLAB [duplicate]

I have an NxM matrix, for example named A. After some processing I want to count the zero elements.
How can I do this in one line of code? I tried A==0, which returns a 2D matrix.
There is a function to find the number of nonzero matrix elements: nnz. You can use this function on a logical matrix, where it will return the number of true elements.
In this case, we apply nnz to the matrix A==0: the elements of that logical matrix are true if the original element was 0, and false for any other value.
A = [1, 3, 1;
     0, 0, 2;
     0, 2, 1];
nnz(A==0) %// returns 3, i.e. the number of zeros in A (the number of true values in A==0)
The credits for the benchmarking belong to Divarkar.
Benchmarking
Using the following parameters and inputs, one can benchmark the solutions presented here with timeit.
Input sizes
Small sized datasize - 1:10:100
Medium sized datasize - 50:50:1000
Large sized datasize - 500:500:4000
Varying % of zeros
~10% of zeros case - A = round(rand(N)*5);
~50% of zeros case - A = rand(N);A(A<=0.5)=0;
~90% of zeros case - A = rand(N);A(A<=0.9)=0;
[Timing plots omitted: 1) small datasizes, 2) medium datasizes, 3) large datasizes.]
Observations
If you look closely into the NNZ and SUM performance plots for medium and large datasizes, you will notice that their performances get closer to each other for the 10% and 90% zeros cases. For the 50% zeros case, the performance gap between the SUM and NNZ methods is comparatively wider.
As a general observation across all datasizes and all three zero-fraction cases,
the SUM method seems to be the undisputed winner. Interestingly, the general-case solution sum(A(:)==0) seems to perform better than sum(~A(:)).
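For reference, such a comparison can be run with timeit along these lines (the size and zero fraction are illustrative):
N = 2000;
A = rand(N);  A(A <= 0.5) = 0;            % ~50% zeros case from above
t_nnz = timeit(@() nnz(A == 0));
t_sum = timeit(@() sum(A(:) == 0));
t_not = timeit(@() sum(~A(:)));
fprintf('nnz: %.3g s, sum==0: %.3g s, sum(~): %.3g s\n', t_nnz, t_sum, t_not)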
Some basic MATLAB to know: the (:) operator flattens any matrix into a column vector, and ~ is the NOT operator, flipping zeros to ones and nonzero values to zeros. Then we just use sum:
sum(~A(:))
This should also be about 10 times faster than the length(find(...)) scheme, in case efficiency is important.
Edit: in the case of NaN values you can resort to the solution:
sum(A(:)==0)
I'll add something to the mix as well. You can use histc and compute the histogram of the entire matrix. You specify the second parameter to be the bin locations at which the numbers should be collected. If we just want to count the number of zeroes, we can simply specify 0 as the second parameter. However, if you give histc a matrix, it will operate along the columns, but we want to operate on the entire matrix. As such, simply transform the matrix into a column vector A(:) and use histc on that. In other words, do this:
histc(A(:), 0)
This should be equivalent to counting the number of zeroes in the entire matrix A.
Well, I don't know if I'm answering the question well, but you could code it as follows:
% Random matrix
M = [1 0 4 8 0 6;
     0 0 7 4 8 0;
     8 7 4 0 6 0];
n = size(M,1); % number of rows of M
p = size(M,2); % number of columns of M
nbrOfZeros = 0; % counter
for i = 1:n
    for j = 1:p
        if M(i,j) == 0
            nbrOfZeros = nbrOfZeros + 1;
        end
    end
end
nbrOfZeros

Histogram Equalization method without use of histeq

I am new to MATLAB and am trying to implement code to perform the same function as histeq without actually using that function. In my code the image colour changes drastically when it should not change that much. The average intensity in the image (ranging between 0 and 255) is 105.3196. The image is of an open-source pollen particle.
Any help would be much appreciated. The sooner the better! Please keep any explanation simple, as my MATLAB understanding is limited. Thanks.
clc;
clear all;
close all;
pollenJpg = imread('pollen.jpg', 'jpg');
greyscalePollen = rgb2gray(pollenJpg);
histEqPollen = histeq(greyscalePollen);
averagePollen = mean2(greyscalePollen)
sizeGreyScalePollen = size(greyscalePollen);
rowsGreyScalePollen = sizeGreyScalePollen(1,1);
columnsGreyScalePollen = sizeGreyScalePollen(1,2);
for i = (1:rowsGreyScalePollen)
    for j = (1:columnsGreyScalePollen)
        if (greyscalePollen(i,j) > averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) + (0.1 * averagePollen);
            if (greyscalePollen(i,j) > 255)
                greyscalePollen(i,j) = 255;
            end
        elseif (greyscalePollen(i,j) < averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) - (0.1 * averagePollen);
            if (greyscalePollen(i,j) > 0)
                greyscalePollen(i,j) = 0;
            end
        end
    end
end
figure;
imshow(pollenJpg);
title('Original Image');
figure;
imshow(greyscalePollen);
title('Attempted Histogram Equalization of Image');
figure;
imshow(histEqPollen);
title('True Histogram Equalization of Image');
To implement the equalisation algorithm described on the Wikipedia page, follow these steps:
Decide on a binSize to group greyscale values. (This is tunable: the larger the bin, the further the result from the ideal case, but choosing it too small can cause problems on real images.)
Then, calculate the probability of a pixel being a shade of grey:
pixelCount = imageWidth * imageHeight
histogram = all zero
for each pixel in image at coordinates i, j
    histogram[floor(pixel / binSize) + 1] += 1 / pixelCount // 1-based arrays, not 0-based
    // Note a technicality here: you may need to
    // write special code to handle pixels of 255,
    // because they will fall in their own bin. Or instead use rounding with an offset.
The histogram in this calculation is scaled (divided by the pixel count) so that the values make sense as probabilities. You can of course factor the division out of the for loop.
Now you need to calculate the accumulative sum of this:
histogramSum = all zero // length of histogramSum must be one bigger than histogram
for i = 1 .. length(histogram)
    histogramSum[i + 1] = histogramSum[i] + histogram[i]
Now you have to invert this function and this is the tricky part. The best is to not calculate an explicit inverse, but calculate it on the spot, and apply it on the image. The basic idea is to search for the pixel value in the histogramSum (find the closest index below), and then do a linear interpolation between the index and the next index.
foreach pixel in image at coordinates i, j
    hIndex = findIndex(pixel, histogramSum) // You have to write findIndex, it should be simple
    equalisationFactor = (pixel - histogramSum[hIndex]) / (histogramSum[hIndex + 1] - histogramSum[hIndex]) * binSize
    // The line above is the linear interpolation step.
    // Notice the technicality that you need to handle:
    // histogramSum[hIndex + 1] may be out of bounds
    equalisedImage[i, j] = pixel * equalisationFactor
Edit: without drilling into the maths, I can't be 100% sure, but I think that division by 0 errors are possible. These can occur if one bin is empty, so consecutive sums are equal. So you need special code to handle this case too. The best you can do is take the value for the factor as halfway between hIndex, hIndex + n, where n is the highest value for which histogramSum[hIndex + n] == histogramSum[hIndex].
And that should be it, once you have dealt with all the technicalities.
The above algorithm is slow (especially in the findIndex step). You may be able to optimize it with a special lookup data structure, but only do that once it's working, and only if necessary.
One more thing about your Matlab code: the rows and columns are inverted. Because of the symmetry in the algorithm, the result is the same, but it can cause puzzling bugs in other algorithms, and be very confusing if you examine pixel values during debugging. In the pseudocode above I used them the same as you, though.
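For comparison, the textbook mapping (one bin per grey level, no interpolation between bins) is quite compact in MATLAB; a minimal sketch, assuming an 8-bit greyscale image img:
% img: uint8 greyscale image (assumed)
counts = histcounts(img(:), 0:256);   % histogram over intensities 0..255
cdf = cumsum(counts) / numel(img);    % cumulative distribution function
lut = uint8(round(255 * cdf));        % intensity -> equalised intensity lookup
eqImg = lut(double(img) + 1);         % apply the lookup (MATLAB is 1-based)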
Relatively few (5) lines of code can do this. I used a low contrast file called 'pollen.jpg' that I found at http://commons.wikimedia.org/wiki/File%3ALepismium_lorentzianum_pollen.jpg
I read it in using your code, run all the above, then do the following:
% find out the index of pixels sorted by intensity:
[gv gi] = sort(greyscalePollen(:));
% create a table of "approximately equal" intensity values:
N = numel(gv);
newVals = repmat(0:255, [ceil(N/256) 1]);
% perform lookup:
% the pixels in sorted order need new values from "equal bins" table:
newImg = zeros(size(greyscalePollen));
newImg(gi) = newVals(1:N);
% if the size of the image doesn't divide into 256, the last bin will have
% slightly fewer pixels in it than the others
When I run this algorithm and then create a composite of the four images (original, your attempt, my attempt, and histeq), you get the following: [composite image omitted]
I think it's convincing. The images are not exactly identical - I believe that is because the MATLAB histeq routine ignores all pixels with value 0. Since it is fully vectorized it is also pretty fast (although not nearly as fast as histeq: slower by about a factor of 15 on my image).
EDIT: a bit of explanation might be in order. The repmat command I use to create the newVals matrix creates a matrix that looks like this:
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
...
0 1 2 3 4 ... 255
Since MATLAB stores matrices in column-major ("first index first") order, if you read this matrix with a single index (as I do in the line newVals(1:N)), you access first all the zeros, then all the ones, etc.:
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 ...
So - when I know the indices of the pixels in the order of their intensity (as returned by the second argument of the sort command, which I called gi), then I can easily assign the value 0 to the first N/256 pixels, the value 1 to the next N/256 etc, with the command I used:
newImg(gi) = newVals(1:N);
I hope this makes the code a little easier to understand.

Update only one matrix element for iterative computation

I have a 3x3 matrix, A. I also compute a value, g, as the maximum eigenvalue of A. I am trying to step the element A(3,3) through all values from zero to one in 0.10 increments and update g for each value. I'd like all of the other matrix elements to remain the same.
I thought a for loop would be the way to do this, but I do not know how to update only one element in a matrix without storing each update as part of one increasingly large matrix. If I call the element at A(3,3) p (thereby creating a new matrix Atry), I am able (below) to get all of the values from 0 to 1 that I want. What I do not know is how to update Atry with the different values of p so as to compute the corresponding values of g. As the code stands, it gives the same value of g for every iteration, as expected, since Atry is never updated.
Any suggestions on how to do this or suggestions for jargon or phrases for me to web search would be appreciated.
A = [1 1 1; 2 2 2; 3 3 0];
g = max(eig(A));
% This below is what I attempted to achieve my solution
clear all
p(1) = 0;
Atry = [1 1 1; 2 2 2; 3 3 p];
g(1) = max(eig(Atry));
for i = 1:100
    p(i+1) = p(i) + 0.01;
    % this makes one giant matrix, not many
    %Atry(:,i+1) = Atry(:,i);
    g(i+1) = max(eig(Atry));
end
This will also accomplish what you want to do:
A = @(x) [1 1 1; 2 2 2; 3 3 x];
p = 0:0.01:1;
g = arrayfun(@(x) eigs(A(x),1), p);
Breakdown:
Define A as an anonymous function. This means that the command A(x) will return your matrix A with the (3,3) element equal to x.
Define all steps you want to take in vector p
Then "loop" through all elements in p by using arrayfun instead of an actual loop.
The function looped over by arrayfun is not max(eig(A)) but eigs(A,1), i.e., the single largest eigenvalue. The result will be the same, but the algorithm used by eigs is better suited to your type of problem: instead of computing all eigenvalues and then using only the maximum one, you compute only the maximum one. Needless to say, this is much faster.
First, you say 0.1 increments in the text of your question, but your code suggests you are actually interested in 0.01 increments? I'm going to operate under the assumption you mean 0.01 increments.
Now, with that out of the way, let me state what I believe you are after given my interpretation of your question. You want to iterate over the matrix A, where for each iteration you increase A(3, 3) by 0.01. Given that you want all values from 0 to 1, this implies 101 iterations. For each iteration, you want to calculate the maximum eigenvalue of A, and store all these eigenvalues in some vector (which I will call gVec). If this is correct, then I believe you just want the following:
% Specify the "Current" A
CurA = [1 1 1; 2 2 2; 3 3 0];
% Pre-allocate the values we want to iterate over for element (3, 3)
A33Vec = (0:0.01:1)';
% Pre-allocate a vector to store the maximum eigenvalues
gVec = NaN * ones(length(A33Vec), 1);
% Loop over A33Vec
for i = 1:1:length(A33Vec)
    % Obtain the version of A that we want for the current i
    CurA(3, 3) = A33Vec(i);
    % Obtain the maximum eigenvalue of the current A, and store it in gVec
    gVec(i, 1) = max(eig(CurA));
end
EDIT: Go with Rody's solution (+1) - it is much better!