Optimizing a function that compares 1.7 million entries against itself - matlab

I have to calculate the distance between every pair of UK postcodes, then sum the population of all postcodes within 1 mile of each one. The postcode and population list is stored in a text file. I'm most familiar with matlab, but I also have Stata & PSPP available. The program is currently estimated to take about 2 weeks. Is there anything I can do to speed this up? Here is my code. Matlab generated the script that imports the text data. The distance function is from the Mapping Toolbox and computes the great-circle distance.
Any help is greatly appreciated.
function pcdistance(postcode, pop, lat, lon)
%Finds total population for UK postcode within 1 mile radius
fid = fopen('PPC.txt','a');
n = length(postcode);
%Calculates distance of 1 postcode at a time, against all others
%All data that doesn't meet rules is deleted
for i = 1:n
    dist = [];
    dist(:,1) = pop;
    for j = 1:n
        dist(j,2) = distance(lat(i),lon(i),lat(j),lon(j),3963.17);
        good = dist(1:j,2) <= 1;
    end
    dist = dist(good,:);
    tot = sum(dist(:,1));
    fprintf(1,'%s,%d;',postcode{i},tot)
end
%Find sum of population within 1 mile
fclose(fid);
end
Here's a small sample of the inputs, from the txt file. The columns are "postcode, pop, lat, long" respectively.
"BD7 1DB",749,53.79,-1.76
"M15 6AA",748,53.46,-2.24
"WR2 6AJ",748,52.19,-2.24
"M15 6PF",745,53.46,-2.23
"IP7 7RA",741,52.12,0.96
"CF62 4WA",740,51.41,-3.41
"M1 2AR",738,53.47,-2.22
"NG1 4BR",737,52.95,-1.14
"ST16 3AW",735,52.81,-2.11
"AB25 1LE",733,57.15,-2.10
"WF2 9AG",730,53.68,-1.50
"DT11 8RH",730,50.86,-2.12
"CW1 5NP",729,53.09,-2.41
"TR12 7RH",724,50.08,-5.25
"ST5 5DY",723,53.00,-2.27
"HA1 3HP",723,51.57,-0.33
"DL10 7NP",722,54.37,-1.62
"M1 7HR",719,53.47,-2.23
"B18 4AS",719,52.49,-1.93
"OX13 6JB",716,51.68,-1.30
Here's the corrected code.
function pcdistance4(postcode, pop, lat, lon)
%Finds total population for UK postcode within 1 mile radius
fid = fopen('PPC.txt','A');
n = length(postcode);
% Pre-allocation
dist = zeros(n,2);
tot = zeros(n,1);
tic
for i = 1:n
    dist(:,1) = pop;
    dist(:,2) = distance(lat(i),lon(i),lat(:),lon(:),3963.17);
    good = dist(:,2) <= 1 & dist(:,2) ~= 0;
    tot(i) = sum(dist(good, 1));
    tot(i) = tot(i) + pop(i);
end
toc
tic
for j = 900001:n   % note: starts at 900001, apparently resuming a partial run; use 1:n to write every postcode
    fprintf(fid,'%s,%d;\n',postcode{j},tot(j));
end
toc
fclose(fid);
end

Just a few general tips to get you started:
For your personal education: run your code with the profiler to see where the majority of the computational time is spent. This will be the first clue on where to begin your optimization. (link to doc)
You shouldn't be writing to the hard drive on every step of the loop, since I/O operations are very costly. Instead, save a bunch of strings to memory and write these "chunks" out every once in a while (a short sketch of this buffering idea appears after these tips). link1 link2
You can try using parfor instead of for (link to doc), or possibly even CUDA if it is available to you. (link to doc)
Consider using the Geodetic Toolbox. It could be easier to convert the lat/lon coordinates to UTM (i.e. Cartesian) and then use some standard functions to find distances.
Also:
My suggestion: don't use i and j as indices for your loops - this is often considered bad practice (due to possible confusion with imaginary numbers).
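As promised above, here is a minimal sketch of the buffered-writing idea (the chunk size and buffer variable are illustrative choices of mine, not part of the original code); it assumes tot(i) has already been computed for each postcode:
chunkSize = 10000;                 % how many formatted lines to hold in memory
buffer = cell(chunkSize,1);
fid = fopen('PPC.txt','a');
for i = 1:n
    % ... compute tot(i) as before ...
    slot = mod(i-1,chunkSize) + 1;
    buffer{slot} = sprintf('%s,%d;\n', postcode{i}, tot(i));
    if slot == chunkSize || i == n
        fprintf(fid, '%s', [buffer{1:slot}]);   % flush the whole chunk at once
    end
end
fclose(fid);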

You should consider memory usage as well. Pre-allocate the variable dist outside the for-loops and overwrite it element by element.
For example (see comments in the modified function):
function pcdistance(postcode, pop, lat, lon)
fid = fopen('PPC.txt','a');
n = length(postcode);
% Pre-allocation
dist = zeros(n,2);
for i = 1:n
    % Avoid "deleting" the variable; you can overwrite it, as the number of
    % elements is always the same
    % dist = [];
    dist(:,1) = pop;
    for j = 1:n
        % Unfortunately I do not have the mentioned toolbox, but there is a
        % high chance that you can avoid the for-loop. Probably something
        % like:
        % dist(:, 2) = distance(lat(i), lon(i), lat, lon, ...)
        % Try to vectorize it.
        dist(j,2) = distance(lat(i),lon(i),lat(j),lon(j),3963.17);
        % There is no need for this operation; it is highly redundant and
        % computationally expensive:
        % - in the first loop you check 1 element
        % - in the second loop you check two elements (1 redundant)
        % - in the jth loop you check j elements (j-1 redundant)
        % The total number of redundant operations is 1+2+3+...+(n-1).
        % good = dist(1:j,2) <= 1;
    end
    % Better to do this once, outside the inner loop
    good = dist(:, 2) <= 1;
    % This is also memory expensive:
    % dist = dist(good,:);
    % Better to do the indexing directly
    tot = sum(dist(good, 1));
end
% Write outside the loop, as recommended by Dev-iL
%Find sum of population within 1 mile
fclose(fid);
end

I cannot believe that distance() can only compare two points at a time, when there should be no problem vectorizing a few sine and cosine functions. So here is a trimmed, vectorized version which I wrote for my own purposes some time ago, maybe because I did not have that toolbox or did not know about it. Frankly, it does not give exactly the same result as distance(), which I just tested, so if you need the exact result of distance() you had better not use this vectorized version.
function dist = distance_on_earth(lat0, lon0, lats, lons, radius)
degree2radians = pi/180;
% phi = 90 - latitude
phi0 = (90-lat0)*degree2radians;
phis = (90-lats)*degree2radians;
% theta = longitude
theta0 = lon0*degree2radians;
thetas = lons*degree2radians;
% spherical distance (element-wise, so lats and lons can be vectors):
cosine = sin(phi0).*sin(phis).*cos(theta0-thetas) + cos(phi0).*cos(phis);
arc = acos(cosine);
dist = arc*radius;
end
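A minimal sketch of how it could replace distance() in the vectorized loop (my own illustration, assuming lat, lon and pop are column vectors and using the 3963.17-mile Earth radius from the question):
for i = 1:n
    d = distance_on_earth(lat(i), lon(i), lat, lon, 3963.17); % miles to every postcode
    tot(i) = sum(pop(d <= 1));                                % includes postcode i itself (d = 0)
end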
Also in addition to what Dev-iL suggested, you can at least take the following line out of the inner loop:
good = dist(1:j,2)<= 1;
Good luck!
Nras

The UK is small enough that you can still get reasonable results without worrying about the curvature of the earth. You can simply estimate the distance by using the differences in latitude and longitude.
This example is a bit oversimplified, but it suggests that once the data is read in, you could do the actual calculations in under an hour.
x = rand(1.7e6,1); % Fake x data
y = x;             % Fake y data
tic
for t = 1:1.7e3 % One-thousandth of the work to be done
    (x-0.5).^2 + (y-0.2).^2 > 0.01; % Simple squared-distance calculation from the point (0.5,0.2), compared to a threshold
end
toc % Runs for about 2 seconds
Using the real distance may take a bit longer, but it should still not take more than 1 or 2 hours to complete.
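A rough sketch of what that looks like with the real postcode data (my own illustration, not the answerer's code): treat one degree of latitude as roughly 69 miles, scale longitude by the cosine of the latitude, and compare squared distances so no square root is needed.
milesPerDegLat = 69;                           % approximate miles per degree of latitude
tot = zeros(n,1);
for i = 1:n
    dy = (lat - lat(i)) * milesPerDegLat;
    dx = (lon - lon(i)) * milesPerDegLat * cosd(lat(i));
    tot(i) = sum(pop(dx.^2 + dy.^2 <= 1));     % 1 mile radius (1^2 = 1)
end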

Related

Verify Law of Large Numbers in MATLAB

The problem:
If a large number of fair N-sided dice are rolled, the average of the simulated rolls is likely to be close to the mean of 1,2,...N i.e. the expected value of one die. For example, the expected value of a 6-sided die is 3.5.
Given N, simulate 1e8 N-sided dice rolls by creating a vector of 1e8 uniformly distributed random integers. Return the difference between the mean of this vector and the mean of integers from 1 to N.
My code:
function dice_diff = loln(N)
% the mean of integer from 1 to N
A = 1:N
meanN = sum(A)/N;
% I do not have any idea what I am doing here!
V = randi(1e8);
meanvector = V/1e8;
dice_diff = meanvector - meanN;
end
First of all, whenever you ask a question, make sure it is as clear as possible, so that it is easier for other users to read.
If you check how randi works, you can see this:
R = randi(IMAX,N) returns an N-by-N matrix containing pseudorandom
integer values drawn from the discrete uniform distribution on 1:IMAX.
randi(IMAX,M,N) or randi(IMAX,[M,N]) returns an M-by-N matrix.
randi(IMAX,M,N,P,...) or randi(IMAX,[M,N,P,...]) returns an
M-by-N-by-P-by-... array. randi(IMAX) returns a scalar.
randi(IMAX,SIZE(A)) returns an array the same size as A.
So, if you want to use randi in your problem, you have to use it like this:
V=randi(N, 1e8,1);
and you need some more changes:
function dice_diff = loln(N)
%the mean of integer from 1 to N
A = 1:N;
meanN = mean(A);
V = randi(N, 1e8,1);
meanvector = mean(V);
dice_diff = meanvector - meanN;
end
For future problems, try using the command
help randi
And matlab will explain how the function randi (or other function) works.
Make sure to check if the code above gives the desired result
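A quick sanity check of the corrected function (an illustrative call of my own): for a fair 6-sided die the returned difference should be very close to zero.
d = loln(6);
fprintf('Difference from the expected value 3.5: %g\n', d);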
As pointed out, take a closer look at the use of randi(). From the general case
X = randi([LowerInt,UpperInt],NumRows,NumColumns); % UpperInt > LowerInt
you can adapt to dice rolling by
Rolls = randi([1 NumSides],NumRolls,NumSamplePaths);
as an example. Exchanging NumRolls and NumSamplePaths will yield Rolls.', or transpose(Rolls).
According to the Law of Large Numbers, the running sample average after each roll should converge to the true mean, ExpVal (short for expected value), as the number of rolls (trials) increases. The image below shows this for two sample paths.
To get the sample mean for each number of dice rolls, I used arrayfun() with
CumulativeAvg1 = arrayfun(@(jj)mean(Rolls(1:jj,1)),[1:NumRolls]);
which is equivalent to using the cumulative sum, cumsum(), to get the same result.
CumulativeAvg1 = (cumsum(Rolls(:,1))./(1:NumRolls).'); % equivalent
% MATLAB R2019a
% Create Dice
NumSides = 6; % positive nonzero integer
NumRolls = 200;
NumSamplePaths = 2;
% Roll Dice
Rolls = randi([1 NumSides],NumRolls,NumSamplePaths);
% Output Statistics
ExpVal = mean(1:NumSides);
CumulativeAvg1 = arrayfun(@(jj)mean(Rolls(1:jj,1)),[1:NumRolls]);
CumulativeAvgError1 = CumulativeAvg1 - ExpVal;
CumulativeAvg2 = arrayfun(@(jj)mean(Rolls(1:jj,2)),[1:NumRolls]);
CumulativeAvgError2 = CumulativeAvg2 - ExpVal;
% Plot
figure
subplot(2,1,1), hold on, box on
plot(1:NumRolls,CumulativeAvg1,'b--','LineWidth',1.5,'DisplayName','Sample Path 1')
plot(1:NumRolls,CumulativeAvg2,'r--','LineWidth',1.5,'DisplayName','Sample Path 2')
yline(ExpVal,'k-')
title('Average')
xlabel('Number of Trials')
ylim([1 NumSides])
subplot(2,1,2), hold on, box on
plot(1:NumRolls,CumulativeAvgError1,'b--','LineWidth',1.5,'DisplayName','Sample Path 1')
plot(1:NumRolls,CumulativeAvgError2,'r--','LineWidth',1.5,'DisplayName','Sample Path 2')
yline(0,'k-')
title('Error')
xlabel('Number of Trials')

Fastest approach to copying/indexing variable parts of 3D matrix

I have large sets of 3D data consisting of 1D signals acquired in 2D space.
The first step in processing this data is thresholding all signals to find the arrival of a high-amplitude pulse. This pulse is present in all signals and arrives at different times.
After thresholding, the 3D data set should be reordered so that every signal starts at the arrival of the pulse and whatever came before is thrown away (the end of the signals is of no importance; for now I append zeros to the end of all signals so the data remains the same size).
Now, I have implemented this in the following manner:
First, I calculate the sample number of the first sample exceeding the threshold in each signal:
M = randn(1000,500,500); % example matrix of realistic size
threshold = 0.25*max(M(:,1,1)); % 25% of the maximum in the first signal as threshold
[~,index] = max(M>threshold); % indices of first sample exceeding threshold in all signals
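The reason this works is that max returns the index of the first maximal element, and for a logical array the maximum is 1 (true), so the second output is the index of the first sample over the threshold. A tiny illustration (my own example):
v = [0.1 0.3 0.05 0.9 0.6];
[~, firstIdx] = max(v > 0.25)   % firstIdx is 2, the first element above 0.25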
Next, I want all signals to be shifted so that they all start with the pulse. For now, I have implemented it this way:
outM = zeros(size(M)); % preallocation for speed
for i = 1:size(M,2)
for j = 1:size(M,3)
outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
end
end
This works fine, and I know for-loops are not that slow anymore, but this easily takes a few seconds for the datasets on my machine. A single iteration of the for-loop takes about 0.05-0.1 sec, which seems slow to me for just copying a vector containing 500-2000 double values.
Therefore, I have looked into the best way to tackle this, but for now I haven't found anything better.
I have tried several things: 3D masks, linear indexing, and parallel loops (parfor).
For the 3D masks, I checked to see if any improvements are possible. Therefore I first construct a logical mask, and then compare the speed of the logical-mask indexing/copying to the doubly nested for loop.
%% set up for logical mask copying
AA = logical(ones(500,1)); % only copy the first 500 values after the threshold value
Mask = logical(zeros(size(M)));
Jepla = zeros(500,size(M,2),size(M,3));
for i = 1:size(M,2)
for j = 1:size(M,3)
Mask(index(1,i,j):index(1,i,j)+499,i,j) = AA;
end
end
%% speed comparison
tic
Jepla = M(Mask);
toc
tic
for i = 1:size(M,2)
for j = 1:size(M,3)
outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
end
end
toc
The for-loop is faster every time, even though it copies more data.
Next, linear indexing.
%% setup for linear index copying
%put all indices in 1 long column
LongIndex = reshape(index,numel(index),1);
% convert to linear indices and store in new variable
linearIndices = sub2ind(size(M),LongIndex,repmat(1:size(M,2),1,size(M,3))',repelem(1:size(M,3),size(M,2))');
% extend linear indices with those of all values to copy
k = zeros(numel(M),1);
count = 1;
for i = 1:numel(LongIndex)
values = linearIndices(i):size(M,1)*i;
k(count:count+length(values)-1) = values;
count = count + length(values);
end
k = k(1:count-1);
% get linear indices of locations in new matrix
l = zeros(length(k),1);
count = 1;
for i = 1:numel(LongIndex)
values = repelem(LongIndex(i)-1,size(M,1)-LongIndex(i)+1);
l(count:count+length(values)-1) = values;
count = count + length(values);
end
l = k-l;
% create new matrix
outM = zeros(size(M));
%% speed comparison
tic
outM(l) = M(k);
toc
tic
for i = 1:size(M,2)
for j = 1:size(M,3)
outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
end
end
toc
Again, the alternative approach, linear indexing, is (a lot) slower.
After this failed, I learned about parallelisation, and thought this would surely speed up my code.
By reading some of the documentation around parfor and trying it out a bit, I changed my code to the following:
gcp;
outM = zeros(size(M));
inM = mat2cell(M,size(M,1),ones(size(M,2),1),size(M,3));
tic
parfor i = 1:500
for j = 1:500
outM(:,i,j) = [inM{i}(index(1,i,j):end,1,j);zeros(index(1,i,j)-1,1)];
end
end
toc
I changed it so that "outM" and "inM" would both be sliced variables, as I read this is best. Still, this is very slow, a lot slower than the original for loop.
So now the question: should I give up on trying to improve the speed of this operation, or is there another way to do it? I have searched a lot and so far do not see how to speed it up.
Sorry for the long question, but I wanted to show what I tried.
Thank you in advance!
Not sure if this is an option in your situation, but it looks like cell arrays are actually faster here:
outM2 = cell(size(M,2),size(M,3));
tic;
for i = 1:size(M,2)
for j = 1:size(M,3)
outM2{i,j} = M(index(1,i,j):end,i,j);
end
end
toc
And a second idea, which also came out faster: batch all signals which have to be shifted by the same amount:
tic;
for i = unique(index).'
outM(1:size(M,1)+1-i,index==i) = M(i:end,index==i);
end
toc
Whether this approach is actually faster depends entirely on your data.
And yes, integer-valued and logical indexing can be mixed.
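A tiny illustration of that mixed indexing (my own example, separate from the answer's code):
A = magic(4);
cols = logical([1 0 1 0]);   % logical selector over the columns
B = A(2:end, cols);          % integer range for rows, logical index for columns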

How do I create a dynamic average? (Don't know how else to call it)

I have a 10000x1 vector. I need to find the percentage of information contained up to each entry. The method of doing so is to make another vector in which each entry contains the sum of the entries so far, divided by the total.
Example:
% info is a 10000x1
% info_n contains percentages of information.
info_n = zeros([10000 1]);
info_n(1)= info(1) /sum(info);
info_n(2)=(info(1)+info(2)) /sum(info);
info_n(3)=(info(1)+info(2)+info(3))/sum(info);
I need to do this but all the way to info_n(10000).
So far, I tried this:
for i = 1:10000
info_n(i)=(info(i))/sum(info);
end
but I can't think of a way to add previous information. Thank you for the help!
You can use cumsum ("cumulative sum") for this task:
info_n = cumsum(info)/sum(info);
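For example, on a small vector (my own numbers):
info = [2; 3; 5];
info_n = cumsum(info)/sum(info)   % returns [0.2; 0.5; 1.0]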
EDIT: @LuisMendo's comment sparked my curiosity. It seems that you actually gain something by using his method when the length N of the vector is below about 2^15, but the advantage drops after that. Both cumsum and sum need only O(N) time, so the difference comes down to the overhead of the extra pass over the data rather than to asymptotic complexity.
On the left you see the times, as well as the ratio of the times of the two methods, plotted against the logarithm of the size. On the right you see the actual absolute time differences, which vary a lot depending on what else is currently running:
Testing script:
N = 2.^(1:28);
y1 = zeros(size(N));
y2 = zeros(size(N));
vec = 1:numel(N);
for k=vec;
disp(k)
info = rand(N(k),1);   % random data so the normalization is well-defined
% flawr's suggestion
tic
info_n = cumsum(info);
info_n = info_n / info_n(end);
y1(k) = toc;
% LuisMendo's suggestion
tic
info_m = cumsum(info)/sum(info);
y2(k) = toc;
end
subplot(1,2,1)
semilogy(vec,y1,vec,y2,vec,y1./y2,vec,vec.^0);
legend('flawr','LuisMendo','flawr/LuisMendo','Location','SouthEast');
subplot(1,2,2)
plot(vec,y1-y2)

Matlab -- random walk with boundaries, vectorized

Suppose I have a vector J of jump sizes and an initial starting point X_0. I also have boundaries 0 and B (assume 0 < X_0 < B). I want to do a random walk where X_i = [min(X_{i-1} + J_i, B)]^+ (the positive part), i.e. X_i = max(min(X_{i-1} + J_i, B), 0). Basically, if the walk goes past a boundary, it is set equal to that boundary. Does anyone know a vectorized way to do this? The way I am currently doing it consists of taking cumsums, finding the places where the result violates a condition, then starting from there and repeating the cumsum calculation, and so on, until I stop violating the boundaries. It works when the boundaries are rarely hit, but if they are hit all the time, it basically becomes a for loop.
In the code below, I am doing this across many samples. To 'fix' the ones that go out of the boundary, I have to loop through the samples to check (I don't think there is a vectorized 'find').
% X_init is a row vector describing initial resource values to use for
% each sample
% J is matrix where each col is a sequence of Jumps (columns = sample #)
% In this code the jumps are subtracted, but same thing
X_intvl = repmat(X_init,NumJumps,1) - cumsum(J);
X = [X_init; X_intvl];
for sample = 1:NumSamples
k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
while(~isempty(k))
change = X_intvl(k-1,sample) - X_intvl(k,sample);
X_intvl(k:end,sample) = X_intvl(k:end,sample)+change;
k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
end
end
Interesting question (+1).
I faced a similar problem a while back, although slightly more complex as my lower and upper bound depended on t. I never did work out a fully-vectorized solution. In the end, the fastest solution I found was a single loop which incorporates the constraints at each step. Adapting the code to your situation yields the following:
%# Set the parameters
LB = 0; %# Lower bound
UB = 5; %# Upper bound
T = 100; %# Number of observations
N = 3; %# Number of samples
X0 = (1/2) * (LB + UB); %# Arbitrary start point halfway between LB and UB
%# Generate the jumps
Jump = randn(N, T-1);
%# Build the constrained random walk
X = X0 * ones(N, T);
for t = 2:T
X(:, t) = max(min(X(:, t-1) + Jump(:, t-1), UB), LB);
end
X = X';
I would be interested in hearing if this method proves faster than what you are currently doing. I suspect it will for cases where the constraint is binding in more than one or two places. I can't test it myself, as the code you provided is not a "working" example, i.e. I can't just copy and paste it into Matlab and run it, since it depends on several variables for which example (or simulated) values are not provided. I tried adapting it myself, but couldn't get it to work properly.
UPDATE: I just switched the code around so that observations are indexed on columns and samples are indexed on rows, and then I transpose X in the last step. This makes the routine more efficient, since Matlab stores numeric arrays in column-major order - hence it is faster when performing operations down the columns of an array (as opposed to across the rows). Note, you will only notice the speed-up for large N.
FINAL THOUGHT: These days, the JIT accelerator is very good at making single loops in Matlab efficient (double loops are still pretty slow). Therefore personally I'm of the opinion that every time you try and obtain a fully-vectorized solution in Matlab, ie no loops, you should weigh up whether the effort involved in finding a clever solution is worth the slight gains in efficiency to be made over an easier-to-obtain method that utilizes a single loop. And it is important to remember that fully-vectorized solutions are sometimes slower than solutions involving single loops when T and N are small!
I'd like to propose another vectorized solution.
So, first we should set the parameters and generate random jumps. I used a similar set of parameters to Colin T Bowers:
% Set the parameters
LB = 0; % Lower bound
UB = 20; % Upper bound
T = 1000; % Number of observations
N = 3; % Number of samples
X0 = (1/2) * (UB + LB); % Arbitrary start point halfway between LB and UB
% Generate the jumps
Jump = randn(N, T-1);
But I changed the generation code:
% Generate initial data without bounds
X = cumsum(Jump, 2);
% Apply bounds
Amplitude = UB - LB;
nsteps = ceil( max(abs(X(:))) / Amplitude - 0.5 );
for ii = 1:nsteps
ind = abs(X) > (1/2) * Amplitude;
X(ind) = Amplitude * sign(X(ind)) - X(ind);
end
% Shifting X
X = X0 + X;
So, instead of a step-by-step for loop over the T observations, I'm using the cumsum function with some smart post-processing (the small reflection loop runs only nsteps times).
N.B. This solution runs significantly slower than Colin T Bowers's for tight bounds (Amplitude < 5), but for loose bounds (Amplitude > 20) it is much faster.
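One quick check worth running on either method (my own addition, not part of the answers): after building X, every sample should lie inside the bounds.
assert(all(X(:) >= LB & X(:) <= UB), 'some values fall outside [LB, UB]')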

tensile tests in matlab

The problem says:
Three tensile tests were carried out on an aluminum bar. In each test the strain was measured at the same values of stress. The results were
Stress (MPa):      34.5   69     103.5  138
Strain, test 1:    0.46   0.95   1.48   1.93
Strain, test 2:    0.34   1.02   1.51   2.09
Strain, test 3:    0.73   1.10   1.62   2.12
where the units of strain are mm/m. Use linear regression to estimate the modulus of elasticity of the bar (modulus of elasticity = stress/strain).
I used this program for this problem:
function coeff = polynFit(xData,yData,m)
% Returns the coefficients of the polynomial
% a(1)*x^(m-1) + a(2)*x^(m-2) + ... + a(m)
% that fits the data points in the least squares sense.
% USAGE: coeff = polynFit(xData,yData,m)
% xData = x-coordinates of data points.
% yData = y-coordinates of data points.
A = zeros(m); b = zeros(m,1); s = zeros(2*m-1,1);
for i = 1:length(xData)
    temp = yData(i);
    for j = 1:m
        b(j) = b(j) + temp;
        temp = temp*xData(i);
    end
    temp = 1;
    for j = 1:2*m-1
        s(j) = s(j) + temp;
        temp = temp*xData(i);
    end
end
for i = 1:m
    for j = 1:m
        A(i,j) = s(i+j-1);
    end
end
% Rearrange coefficients so that coefficient
% of x^(m-1) is first
coeff = flipdim(gaussPiv(A,b),1);
The problem is solved without a program as follows
MY ATTEMPT
T=[34.5,69,103.5,138];
D1=[.46,.95,1.48,1.93];
D2=[.34,1.02,1.51,2.09];
D3=[.73,1.1,1.62,2.12];
Mod1=T./D1;
Mod2=T./D2;
Mod3=T./D3;
xData=T;
yData1=Mod1;
yData2=Mod2;
yData3=Mod3;
coeff1 = polynFit(xData,yData1,2);
coeff2 = polynFit(xData,yData2,2);
coeff3 = polynFit(xData,yData3,2);
x1=(0:.5:190);
y1=coeff1(2)+coeff1(1)*x1;
subplot(1,3,1);
plot(x1,y1,xData,yData1,'o');
y2=coeff2(2)+coeff2(1)*x1;
subplot(1,3,2);
plot(x1,y2,xData,yData2,'o');
y3=coeff3(2)+coeff3(1)*x1;
subplot(1,3,3);
plot(x1,y3,xData,yData3,'o');
What do I have to do to get this result?
As a general advice:
avoid for loops wherever possible.
avoid using i and j as variable names, as they are Matlab built-in names for the imaginary unit (I really hope that disappears in a future release...)
Due to Matlab being an interpreted language, for-loops can be very slow compared to their compiled alternatives. Matlab is named MATrix LABoratory, meaning it is highly optimized for matrix/array operations. Usually, when there is an operation that cannot be done without a loop, Matlab has a built-in function for it that runs way, way faster than a for-loop in Matlab ever will. For example: computing the mean of elements in an array: mean(x). The sum of all elements in an array: sum(x). The standard deviation of elements in an array: std(x). And so on. Matlab's power comes from these built-in functions.
So, your problem. You have a linear regression problem. The easiest way in Matlab to solve this problem is this:
%# your data
stress = [ %# in Pa
34.5 69 103.5 138] * 1e6;
strain = [ %# in m/m
0.46 0.95 1.48 1.93
0.34 1.02 1.51 2.09
0.73 1.10 1.62 2.12]' * 1e-3;
%# make linear array for the data
yy = strain(:);
xx = repmat(stress(:), size(strain,2),1);
%# re-formulate the problem into linear system Ax = b
A = [xx ones(size(xx))];
b = yy;
%# solve the linear system
x = A\b;
%# modulus of elasticity is coefficient
%# NOTE: the y-offset is relatively small and can be ignored
E = 1/x(1)
What you did in the function polynFit is done by A\b, but the \-operator can do it far faster, far more robustly and far more flexibly than what you tried to do yourself. I'm not saying you shouldn't try to make these things yourself (please keep on doing that, you learn a lot from it!), I'm saying that for the "real" results, always use the \-operator (and check your own results against it as well).
The backslash operator (type help \ on the command prompt) is extremely useful in many situations, and I advise you learn it and learn it well.
I leave you with this: here's how I would write your polynFit function:
function coeff = polynFit(X,Y,m)
if numel(X) ~= numel(Y)
    error('polynFit:size_mismatch',...
        'number of elements in matrices X and Y must be equal.');
end
%# bad condition number, rank errors, etc. taken care of by \
coeff = bsxfun(@power, X(:), m-1:-1:0) \ Y(:);
end
I leave it up to you to figure out how this works.
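As a small hint (my own worked example, with made-up numbers): for a straight-line fit, m = 2, the bsxfun call builds a two-column matrix of x and ones, and the backslash then solves the resulting least-squares problem, just like polyfit(X,Y,1) would.
X = [1 2 3];  Y = [2.1 3.9 6.2];
V = bsxfun(@power, X(:), 1:-1:0)   % columns are X.^1 and X.^0
coeff = V \ Y(:)                   % least-squares slope and intercept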