How to fit an exponential curve to damped harmonic oscillation data in MATLAB?

I'm trying to fit an exponential curve to data sets containing damped harmonic oscillations. The data is a bit complicated in the sense that the sinusoidal oscillations contain many frequencies as seen below:
I need to find the rate of decay in the data. The method I am using can be found here. It takes the log of the y values above the steady-state value and then fits them with:
lsqlin(A,y1(:),-A,-y1(:),[],[],[],[],[],optimset('algorithm','active-set','display','off'))
However, this results in the following data fits:
I tried using a linear regression fit, which obviously didn't work because it took the average. I also tried RANSAC, thinking that there is more data near the peaks. It worked a bit better than the linear regression, but the method is flawed, as there are times when more points exist in the wrong regions.
Does anyone know of a good method to just fit the peaks for this data?
Currently, I'm thinking of dividing the 500 data points into regions of 10 points each and finding the largest value in each region. At the end, I should have 50 points that I can fit using any of the exponential fitting methods mentioned above. What do you think of this method?

Thought I'd give everyone an update on potential solutions that may work. As mentioned earlier, the data is complicated by the varying sinusoidal frequencies, so certain methods may not work because of this. The methods listed below can be good depending on the data and the frequencies involved.
First off, I assume that the data has the form:
y = average + b*e^(-c*x)
In my case, the average is 290, so we have:
y = 290 + b*e^(-c*x)
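Taking the log of the values above the average linearizes the model: log(y - 290) = log(b) - c*x, so a straight-line fit to (x, log(y - 290)) returns -c as the slope and log(b) as the intercept. (Note that the snippets below assign the raw slope to c, so with this convention the fitted c comes out negative for decaying data, and the decay rate is its magnitude.)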
With that being said, let's dive into the different methods that I tried:
findpeaks() Method
This is the method that Alexander Büse suggested. It's a pretty good method for most data, but for my data, since there are multiple sinusoidal frequencies, it gets the wrong peaks. The red x's show the detected peaks.
% Find Peaks Method
[max_num,max_ind] = findpeaks(y(ind));
plot(max_ind,max_num,'x','Color','r'); hold on;
x1 = max_ind;
y1 = log(max_num-290);
coeffs = polyfit(x1,y1,1)
b = exp(coeffs(2));
c = coeffs(1);
RANSAC
RANSAC is good if you have most of your data at the peaks. You can see that in mine, because of the multiple frequencies, more points exist near the top. However, the problem is that not all of my data sets are like this. Hence, it only occasionally worked.
% RANSAC Method
ind = (y > avg);
x1 = x(ind);
y1 = log(y(ind) - avg);
iterNum = 300;
thDist = 0.5;
thInlrRatio = .1;
[t,r] = ransac([x1;y1'],iterNum,thDist,thInlrRatio);
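% (t,r) describe the fitted line in normal form x*sin(t) + y*cos(t) = r
% (assuming that third-party ransac's convention), hence the slope is
% -tan(t) and the intercept is r/cos(t):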
k1 = -tan(t);
b1 = r/cos(t);
% plot(x1,k1*x1+b1,'r'); hold on;
b = exp(b1);
c = k1;
Lsqlin Method
This method is the one used in the post linked above. It uses lsqlin to constrain the fit. However, it seems to ignore the data in the middle. Depending on your data set, this could work really well, as it did for the person in the original post.
% Lsqlin Method
avg = 290;
ind = (y > avg);
x1 = x(ind);
y1 = log(y(ind) - avg);
A = [ones(numel(x1),1),x1(:)]*1.00;
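% Passing C = -A and d = -y1 to lsqlin imposes -A*coeffs <= -y1, i.e.
% A*coeffs >= y1: the fitted line must lie on or above every log-data point,
% so it traces the upper envelope of the data rather than its middle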
coeffs = lsqlin(A,y1(:),-A,-y1(:),[],[],[],[],[],optimset('algorithm','active-set','display','off'));
b = exp(coeffs(1)); % A's first column is the ones column, so coeffs(1) is the intercept
c = coeffs(2);      % and coeffs(2) is the slope
Find Peaks in Period
This is the method I mentioned in my post, where I take the peak in each region. This method works pretty well, and from it I realized that my data may not actually have a perfect exponential fit. We see that it is unable to fit the large peaks at the beginning. I was able to make this a bit better by only using the first 150 data points and ignoring the steady-state data points. Here I found the peak every 25 data points.
% Incremental Method 2 Unknowns
x1 = [];
y1 = [];
max_num=[];
max_ind=[];
incr = 25;
for i=1:floor(size(y,1)/incr)
    [max_num(end+1),max_ind(end+1)] = max(y(1+incr*(i-1):incr*i));
    max_ind(end) = max_ind(end) + incr*(i-1);
    if max_num(end) > avg
        x1(end+1) = max_ind(end);
        y1(end+1) = log(max_num(end)-290);
    end
end
plot(max_ind,max_num,'x','Color','r'); hold on;
coeffs = polyfit(x1,y1,1)
b = exp(coeffs(2));
c = coeffs(1);
Using all 500 data points:
Using the first 150 data points:
Find Peaks in Period With b Constrained
Since I want it to start at the first peak, I constrained the b value. I know the system is y = 290 + b*e^(-c*x), and I constrain it such that b = y(1) - 290. By doing so, I just need to solve for c, where c = (log(y - 290) - log(b))/x. I can then take the average or median of c. This method is quite good as well; it doesn't fit the values near the end quite as well, but that isn't a big deal since the change there is minimal.
% Incremental Method 1 Unknown (b is constrained y(1)-290 = b)
b = y(1) - 290;
c = [];
max_num=[];
max_ind=[];
incr = 25;
for i=1:floor(size(y,1)/incr)
    [max_num(end+1),max_ind(end+1)] = max(y(1+incr*(i-1):incr*i));
    max_ind(end) = max_ind(end) + incr*(i-1);
    if max_num(end) > avg
        c(end+1) = (log(max_num(end)-290)-log(b))/max_ind(end);
    end
end
c = mean(c); % Or median(c) works just as well
Here I take the peak for every 25 data points and then take the mean of c
Here I take the peak for every 25 data points and then take the median of c
Here I take the peak for every 10 data points and then take the mean of c

If the main goal is to extract the damping parameter from the fit, maybe you want to consider fitting a damped sine curve directly to your data. Something like this (created with the curve fitting tool):
[xData, yData] = prepareCurveData( x, y );
ft = fittype( 'a + sin(b*x - c).*exp(d*x)', 'independent', 'x', 'dependent', 'y' );
opts = fitoptions( 'Method', 'NonlinearLeastSquares' );
opts.Display = 'Off';
opts.StartPoint = [1 0.285116122712545 0.805911873245316 0.63235924622541];
[fitresult, gof] = fit( xData, yData, ft, opts );
plot( fitresult, xData, yData );
Especially since some of your example data really don't have many data points in the interesting region (above the noise).
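If the fit converges, the damping parameter you're after is the d coefficient (negative for a decaying signal), which you can read off the cfit object directly, for example:
fprintf('decay rate = %.4f\n', -fitresult.d); % magnitude of the exponent coefficient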
If however, you really need to fit directly to maxima of the experimental data, you could use the findpeaks function to select only the maxima and then fit to them. You may want to play a bit with the MinPeakProminence parameter to adjust it to your needs.
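A minimal sketch of that approach (assuming x, y, and the steady-state value avg from the question; the prominence threshold of 5 is a placeholder to tune):
% Keep only sufficiently prominent peaks above the steady state
[pk, loc] = findpeaks(y, 'MinPeakProminence', 5);
keep = pk > avg;
coeffs = polyfit(x(loc(keep)), log(pk(keep) - avg), 1); % linearized exponential fit
b = exp(coeffs(2));
c = coeffs(1);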

Where did I go wrong when I tried to approximate this data using a polynomial?

I am starting to learn numerical analysis using MATLAB in my course. So far we have covered polynomial interpolation (spline, polyfit, constrained spline, etc.). I was doing this practice question and I cannot get the correct answer. I have uploaded the code I used and the question. Where did I go wrong? Thanks in advance!
close all; clear all; clc;
format long e
x = linspace(0,1,8);
xplot = linspace(0,1);
f = @(x) atan(x.*(x+1));
y_val = f(xplot);
c = polyfit(x,f(x),7);
p = polyval(c,0.7);
err = abs(f(0.7)-p)/(f(0.7))
The question I encountered is seen in the picture
After some playing around, it seems to be a matter of computing the absolute error instead of the relative absolute error.
The code below yields the desired answer. And yes, it is pretty unclear from the question which error is intended.
% Definitions
format long e
x = linspace(0,1,8)';
xplot= linspace(0,1);
f = @(x) atan(x.*(x+1));
y_val = f(xplot);
% Degree of polynomial
n = 7;
% Points to evaluate function
point1 = 0.5;
point2 = 0.7;
% Fit
c= polyfit(x,f(x),n);
% Evaluate
approxPoint1 = polyval(c, point1);
approxPoint2 = polyval(c, point2);
% Absolute errors
errPoint1 = abs( f(point1) - approxPoint1)
errPoint2 = abs( f(point2) - approxPoint2)
What you did wrong was :
mixing absolute and relative values when calculating the errors fed into the resulting variable err.
incorrectly placing the abs() parentheses when calculating err: your abs() only covers the numerator, not the denominator. To obtain |f(0.7)| you also need abs() around the denominator, i.e. abs(f(0.7)).
For point x=0.7, instead of
err = abs(f(0.7)-p)/(f(0.7))
it could simply be
err = abs(f(0.7)-p);
you only calculate err for one assessment point, 0.7. In order to choose among the possible candidates of what seems to be a MATLAB Associate test multiple-choice answer, one needs err at both 0.5 and 0.7 and then has to match the pair, in the correct order, among all the offered answers.
Although it's common practice to approach polynomial approximation of N points with a polynomial of degree N-1, it's often possible to get below a satisfactory error with a polynomial of degree lower than N-1.
Lower-degree polynomials mean fewer calculations and less time spent approximating points. If one obtains a fair enough approximation with, for instance, a degree-4 polynomial, why waste time calculating an approximation with a higher degree, all the way up to N-1?
Since the question does not say what degree the approximating polynomial should have, you have to find it by sweeping all polynomial degrees from 1 up to a reasonable order.
The last error I found is that you used linspace without specifying the number of points; MATLAB then defaults to 100 points and hopes that's going to be ok.
Well, in your question 100 points is far too few, as I am going to show with the following lines which, as mentioned in the previous point, sweep all plausible approximating polynomial degrees, NOT just the default 100 points you tacitly chose.
np=8; % numel(x) amount supplied points
N=1e3; % amount points x grid we build to measure f(x)
x= linspace(0,1,np);
xplot = linspace(0,1,N);
f = @(x) atan(x.*(x+1)); % function to approximate
y_val = f(xplot); % f(x)
xm=[.5 .7]; % points where to assess error
% finding poly coeffs
err1=zeros(2,np+2); % 1st row are errors on x=.5, 2nd row errors on x=.7
figure(1);
ax1=gca;
hp1=plot(xplot,y_val);
grid on;
hp1.LineWidth=3;
hp1.Color='r';
hold on
for k=1:1:np+2
    c = polyfit(x,f(x),k);
    p_01 = polyval(c,xm(1));
    err1(1,k) = abs(f(xm(1))-p_01);
    % err1(1,k) = abs((f(0.5)-p_01)/(f(0.5)))
    p_02 = polyval(c,xm(2));
    err1(2,k) = abs(f(xm(2))-p_02);
    % err1(2,k) = abs((f(0.7)-p_02)/(f(0.7)))
    plot(x,polyval(c,x),'LineWidth',1.5); %'Color','b');
end
err1
The only pair of errors matching in the correct order are indeed those of the degree-7 polynomial, but the smallest total error corresponds to the approximating polynomial of degree 6.
What happens when calling linspace without a large enough number of points? Let's have a look:
np=8; % numel(x) amount supplied points
% N=1e3; % amount points x grid we build to measure f(x)
x= linspace(0,1,np);
xplot = linspace(0,1); %,N);
f = @(x) atan(x.*(x+1)); % function to approximate
y_val = f(xplot); % f(x)
xm=[.5 .7]; % points where to assess error
% finding poly coeffs
err1=zeros(2,np+2); % 1st row are errors on x=.5, 2nd row errors on x=.7
figure(1);
ax1=gca;
hp1=plot(xplot,y_val);
grid on;
hp1.LineWidth=3;
hp1.Color='r';
hold on
for k=1:1:np+2
    c = polyfit(x,f(x),k);
    p_01 = polyval(c,xm(1));
    err1(1,k) = abs(f(xm(1))-p_01);
    % err1(1,k) = abs(f(0.5)-p_01)/abs(f(0.5))
    p_02 = polyval(c,xm(2));
    err1(2,k) = abs(f(xm(2))-p_02);
    % err1(2,k) = abs(f(0.7)-p_02)/abs(f(0.7))
    plot(x,polyval(c,x),'LineWidth',1.5); %'Color','b');
end
err1
With only 100 points, all errors come out far too large; not a single error is anywhere near 1e-5 or 1e-6.
This is why one couldn't tell which pair to go for: all obtained values were at least 5 orders of magnitude away from the landing zone.
I was about to include a plot with a legend, but the visualization of this particular approach is, in this case and in my opinion, misleading at best: in both plots, for 100 and for 1000 points, at first glance the errors look as if they should be similar regardless of the number of grid points used.
But as shown above, 1e2 points cannot approximate the function; it's like searching in pitch darkness while pointing the torch 180° away from where we should be aiming: not a chance to spot it.
Yet 1e3 grid points produce a pair of errors matching one of the possible answers: option D.
I hope it helps, thanks for reading my answer.

Matlab fit poisson function to histogram

I am trying to fit a Poisson function to a histogram in MATLAB: the example calls for using hist() (which is deprecated), so I want to use histogram() instead, especially as you cannot seem to normalize a hist(). I then want to overlay a Poisson function using poisspdf() or any other standard function (preferably no toolboxes!). The histogram is probability-scaled, which is where the issue with the Poisson function comes from, AFAIK.
clear
clc
lambda = 5;
range = 1000;
rangeVec = 1:range;
randomData = poissrnd(lambda, 1, range);
histoFigure = histogram(randomData, 'Normalization', 'probability');
hold on
poissonFunction = poisspdf(randomData, lambda);
poissonFunction2 = poisspdf(histoFigure, lambda);
plot(poissonFunction)
plot(poissonFunction2)
I have tried multiple different methods of creating and plotting the Poisson function, and none of them seems to work: the values within this function are not consistent with the histogram values, as they differ by several decimal places.
This is what the image should look like
however currently I can only get the bar graphs to show up correctly.
You're not specifying the x-data of your curve, so the sample index is used, and since you have 1000 samples you get the ugly plot. The x-data that you should use is randomData. Using
plot(randomData, poissonFunction)
will lead to lines between different samples, because the samples follow each other randomly. To take each sample only once, you can use unique. It is important that the x and y values stay connected to each other, so it's best to put randomData and poissonFunction into one matrix and then call unique:
d = [randomData;poissonFunction].'; % make 1000x2 matrix to find unique among rows
d = unique(d,'rows');
You can use d to plot the data.
Full code:
clear
clc
lambda = 5;
range = 1000;
rangeVec = 1:range;
randomData = poissrnd(lambda, 1, range);
histoFigure = histogram(randomData, 'Normalization', 'probability');
hold on
poissonFunction = poisspdf(randomData, lambda);
d = [randomData; poissonFunction].';
d = unique(d, 'rows');
plot(d(:,1), d(:,2))
With this as the result:
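A simpler variant (a sketch, reusing lambda and randomData from above): evaluate the PMF once on the integer support instead of at every random sample, which makes the unique step unnecessary:
support = 0:max(randomData); % all integer outcomes present in the data
plot(support, poisspdf(support, lambda), 'r-o')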

How to make a vector that follows a certain trend?

I have a set of data with over 4000 points. I want to exclude grooves from them, ideally from the point from which they start. The data look for example like this:
The problem with this is the noise I get at the top of the plateaus. I have an idea in which I would take an average of the most common values within some boundaries (again, ideally something like the red line here):
and then I would construct a temporary matrix, which would fill up one by one with Y values if they are less than this average. If Y(i) rose above the average, the matrix would find its minimum and compare it with the global minimum. If the temporary matrix's minimum weren't something like 80% of the global minimum, it would be discarded as noise.
I've tried using mean(Y), interpolating, and fitting a polynomial (the green line): none of those methods would cut it to the point I would be satisfied.
I need this to be extremely robust and it doesn't need to be quick. The top and bottom values can vary a lot, as well as the shape of the plateaus. The groove width is more or less the same.
Do you have any ideas? Again, the point is to extract the values that would make the groove.
How about a median filter?
Let's define some noisy data similar to yours, and plot it in blue:
x = .2*sin((0:9999)/1000); %// signal
x(1000:1099) = x(1000:1099) + sin((0:99)/50*pi); %// noise: spike
x(5000:5199) = x(5000:5199) - sin((0:199)/100*pi); %// noise: wider spike
x = x + .05*sin((0:9999)/10); %// noise: high-freq ripple
plot(x)
Now apply the median filter (using medfilt2 from the Image Processing Toolbox) and plot it in red. The parameter k controls the filter memory. It should be chosen large compared to the noise variations and small compared to the signal variations:
k = 500; %// filter memory. Choose as needed
y = medfilt2(x,[1 k]);
hold on
plot(y, 'r', 'linewidth', 2)
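On R2016a or newer, base MATLAB's movmedian is a toolbox-free alternative (a sketch under that assumption, reusing x and k from above):
y = movmedian(x, k); % sliding-window median over k samples, no toolbox needed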
In case you don't have the Image Processing Toolbox and can't use medfilt2, here's a more manual method: skip the extreme values and do a curve fit with sin1 as the curve type. Note that this will only work if the signal is in fact a sine wave!
x = linspace(0,3*pi,1000);
y1 = sin(x) + rand()*sin(100*x).*(mod(round(10*x),5)<3);
y2 = 20*(mod(round(5*x),5) == 0).*sin(20*x);
y = y1 + y2; %// A messy sine-wave
yy = y; %// Store the messy sine-wave
[~, idx] = sort(y);
y(idx(1:round(0.15*end))) = y(idx(round(0.15*end))); %// Flatten out the smallest values
y(idx(round(0.85*end):end)) = y(idx(round(0.85*end)));%// Flatten out the largest values
[foo, goodness, output] = fit(x.', y.', 'sin1'); %// Do a curve fit
plot(foo,x,y) %// Plot it
hold on
plot(x,yy,'black')
Might not be perfect, but it's a step in the right direction.

How can we produce kappa and delta in the following model using Matlab?

I have the following stochastic model describing the evolution of a process (Y) in space and time. Ds and Dt are the domains in space (2D, with x and y axes) and time (1D, with t axis). This model is usually known as a mixed-effects model or a components-of-variation model.
I am currently generating Y as follows:
%# Time parameters
T=1:1:20; % input
nT=numel(T);
%# Grid and model parameters
nRow=100;
nCol=100;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:1:nCol,1:1:nRow,T);
xPower=0.1;
tPower=1;
noisePower=1;
detConstant=1;
deterministic_mu = detConstant.*(((Grid.Nt).^tPower)./((Grid.Nx).^xPower));
beta_s = randn(nRow,nCol); % mean-zero random effect representing location specific variability common to all times
gammaTemp = randn(nT,1);
for t = 1:nT
    gamma_t(:,:,t) = repmat(gammaTemp(t),nRow,nCol); % mean-zero random effect representing time specific variability common to all locations
end
var = 0.1; % noise has variance = 0.1
for t=1:nT
    kappa_st(:,:,t) = sqrt(var)*randn(nRow,nCol);
end
for t=1:nT
    Y(:,:,t) = deterministic_mu(:,:,t) + beta_s + gamma_t(:,:,t) + kappa_st(:,:,t);
end
My questions are:
How do I produce delta in the expression for Y, and what is the difference between kappa and delta?
Can you explain, with some illustration in Matlab, whether I am producing Y correctly?
Please let me know if you need some more information/explanation. Thanks.
First, I rewrote your code to make it a bit more efficient. I see you generate linearly-spaced grids for x, y and t and carry out the computation for all points in this grid. This approach has severe limitations on the maximum attainable grid resolution, since the 3D grid (and all variables defined on it) can consume an awfully large amount of memory if the resolution goes up. If the model you're implementing grows in complexity and size (it often does), I'd suggest you throw this all into a function accepting matrix/vector inputs for s and t, which will be a bit more flexible in this regard -- processing "blocks" of data that would otherwise not fit in memory will be a lot easier that way.
Then, I generated the delta_st term with rand instead of randn, since the noise should be "white". Now, I'm very unsure about that last one, and I didn't have time to read through the paper you linked to -- can you tell me on what pages I can find the sections relevant to delta_st?
Now, the code:
%# Time parameters
T = 1:1:20; % input
nT = numel(T);
%# Grid and model parameters
nRow = 100;
nCol = 100;
% noise has variance = 0.1
var = 0.1;
xPower = 0.1;
tPower = 1;
noisePower = 1;
detConstant = 1;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:nCol,1:nRow,T);
% deterministic mean
deterministic_mu = detConstant .* Grid.Nt.^tPower ./ Grid.Nx.^xPower;
% mean-zero random effect representing location specific
% variability common to all times
beta_s = repmat(randn(nRow,nCol), [1 1 nT]);
% mean-zero random effect representing time specific
% variability common to all locations
gamma_t = bsxfun(@times, ones(nRow,nCol,nT), randn(1, 1, nT));
% mean zero random effect capturing the spatio-temporal
% interaction not found in the larger-scale deterministic mu
kappa_st = sqrt(var)*randn(nRow,nCol,nT);
% mean zero random effect representing the micro-scale
% spatio-temporal variability that is modelled by white
% noise (i.i.d. at different time steps) in Ds·Dt
delta_st = noisePower * (rand(nRow,nCol,nT)-0.5);
% Final result:
Y = deterministic_mu + beta_s + gamma_t + kappa_st + delta_st;
Your implementation samples beta, gamma and kappa as if they are white (e.g. their values at each (x,y,t) are independent). The descriptions of the terms suggest that this is not meant to be the case. It looks like delta is supposed to capture the white noise, while the other terms capture the correlations over their respective domains. e.g. there is a non-zero correlation between gamma(t_1) and gamma(t_1+1).
If you wish to model gamma as a stationary Gaussian Markov process with variance var_g and correlation cor_g between gamma(t) and gamma(t+1), you can use something like
gamma_t = nan( nT, 1 );
gamma_t(1) = sqrt(var_g)*randn();
K_g = cor_g/var_g;
K_w = sqrt( (1-K_g^2)*var_g );
for t = 2:nT
    gamma_t(t) = K_g*gamma_t(t-1) + K_w*randn();
end
gamma_t = reshape( gamma_t, [ 1 1 nT ] );
The formulas I've used for the gains K_g and K_w in the above code (and the initialization of gamma_t(1)) produce the desired stationary variance and one-step covariance: for the recursion gamma(t) = K_g*gamma(t-1) + K_w*w(t) with w ~ N(0,1), the stationary variance is K_w^2/(1 - K_g^2) = var_g, and the one-step covariance is K_g*var_g = cor_g.
Note that the implementation above assumes that later you will sum the terms using bsxfun to do the "repmat" for you:
Y = bsxfun( @plus, deterministic_mu + kappa_st + delta_st, beta_s );
Y = bsxfun( @plus, Y, gamma_t );
Note that I haven't tested the above code, so you should confirm by sampling that it does actually produce a zero-mean process with the specified variance and covariance between adjacent samples. To sample beta, the same procedure can be extended into two dimensions, but the principles are essentially the same. I suspect kappa should be similarly modeled as a Markov Gaussian process, but in all three dimensions and with a lower variance, to represent higher-order effects not captured in mu, beta and gamma.
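For instance, a quick check along those lines might look like this (a sketch; var_g = 1 and cor_g = 0.5 are placeholder values):
% Sample one long chain and compare empirical moments against the targets
var_g = 1; cor_g = 0.5; nT = 1e5;
K_g = cor_g/var_g;
K_w = sqrt( (1-K_g^2)*var_g );
g = nan(nT,1);
g(1) = sqrt(var_g)*randn();
for t = 2:nT
    g(t) = K_g*g(t-1) + K_w*randn();
end
m = mean(g);
v = mean((g - m).^2);                          % empirical variance
c1 = mean((g(1:end-1) - m).*(g(2:end) - m));   % empirical lag-1 covariance
fprintf('mean %.3f, variance %.3f (target %.3f), lag-1 cov %.3f (target %.3f)\n', ...
    m, v, var_g, c1, cor_g);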
Delta is supposed to be zero-mean, stationary white noise. Assuming it to be Gaussian with variance noisePower, one would sample it using
delta_st = sqrt(noisePower)*randn( [ nRow nCol nT ] );

Find out the orientation, length and radius of a capped rectangular object

I have an image as shown in fig. 1. I am trying to fit this binary image with a capped rectangle (fig. 2) to figure out:
the orientation (the angle between the long axis and the horizontal axis)
the length (l) and radius (R) of the object. What is the best way to do it?
Thanks for the help.
My very naive idea is to use a least-squares fit to find this information; however, I found that there is no equation for a capped rectangle. In MATLAB there is a function called rectangle that can draw the capped rectangle perfectly, but it seems to be for plotting purposes only.
I solved this 2 different ways and have notes on each approach below. Each method varies in complexity, so you will need to decide the best trade-off for your application.
First Approach: Least-Squares-Optimization:
Here I used unconstrained optimization through Matlab's fminunc() function. Take a look at Matlab's help to see the options you can set prior to optimization. I made some fairly simple choices just to get this approach working for you.
In summary, I set up a model of your capped rectangle as a function of the parameters L, W, and theta. You can include R if you wish, but personally I don't think you need it; by examining continuity with the semicircular caps at each end, I think it may be sufficient to let R = W, by inspection of your model geometry. This also reduces the number of optimization parameters by one.
I made a model of your capped rectangle using boolean layers; see the cappedRectangle() function below. As a result, I needed a function to calculate finite-difference gradients of the model with respect to L, W, and theta. If you don't provide these gradients to fminunc(), it will attempt to estimate them, but I found that Matlab's estimates didn't work well for this application, so I provided my own as part of the error function that gets called by fminunc() (see below).
I didn't initially have your data so I simply right-clicked on your image above and downloaded: 'aRIhm.png'
To read your data I did this (creates the variable cdata):
image = importdata('aRIhm.png');
vars = fieldnames(image);
for i = 1:length(vars)
    assignin('base', vars{i}, image.(vars{i}));
end
Then I converted to double type and "cleaned-up" the data by normalizing. Note: this pre-processing was important to get the optimization to work properly, and may have been needed since I didn't have your raw data (as mentioned I downloaded your image from the webpage for this question):
data = im2double(cdata);
data = data / max(data(:));
figure(1); imshow(data); % looks the same as your image above
Now get the image sizes:
nY = size(data,1);
nX = size(data,2);
Note #1: you might consider adding the center of the capped rectangle, (xc,yc), as optimization parameters. These extra degrees of freedom will make a difference in the overall fitting results (see the comment on final error-function values below). I didn't set that up here, but you can follow the approach I used for L, W, and theta to add that functionality with the finite-difference gradients. You will also need to set up the capped rectangle model as a function of (xc,yc).
EDIT: Out of curiosity I added the optimization over the capped rectangle center, see the results at the bottom.
Note #2: for "continuity" at the ends of the capped rectangle, let R = W. If you like, you can later include R as an explicit optimization parameter, following the examples for L, W, and theta. You might even want to have, say, separate R1 and R2 at each endpoint as variables.
Below are arbitrary starting values that I used to simply illustrate an example optimization. I don't know how much information you have in your application but in general, you should try to provide the best initial estimates that you can.
L = 25;
W = L;
theta = 90;
params0 = [L W theta];
Note that you will get different results based on your initial estimates.
Next display the starting estimate (the cappedRectangle() function is defined later):
capRect0 = reshape(cappedRectangle(params0,nX,nY),nX,nY);
figure(2); imshow(capRect0);
Define an anonymous function for the error metric (errorFunc() is listed below):
f = @(x)errorFunc(x,data);
% Define several optimization parameters for fminunc():
options = optimoptions(@fminunc,'GradObj','on','TolX',1e-3, 'Display','iter');
% Call the optimizer:
tic
[x,fval,exitflag,output] = fminunc(f,params0,options);
time = toc;
disp(['convergence time (sec) = ',num2str(time)]);
% Results:
disp(['L0 = ',num2str(L),'; ', 'L estimate = ', num2str(x(1))]);
disp(['W0 = ',num2str(W),'; ', 'W estimate = ', num2str(x(2))]);
disp(['theta0 = ',num2str(theta),'; ', 'theta estimate = ', num2str(x(3))]);
capRectEstimate = reshape(cappedRectangle(x,nX,nY),nX,nY);
figure(3); imshow(capRectEstimate);
Below is the output from fminunc (for more details on each column see Matlab's help):
Iteration f(x) step optimality CG-iterations
0 0.911579 0.00465
1 0.860624 10 0.00457 1
2 0.767783 20 0.00408 1
3 0.614608 40 0.00185 1
.... and so on ...
15 0.532118 0.00488281 0.000962 0
16 0.532118 0.0012207 0.000962 0
17 0.532118 0.000305176 0.000962 0
You can see that the final error metric values have not decreased that much relative to the starting value; this indicates to me that the model function probably doesn't have enough degrees of freedom to really "fit" the data that well, so consider adding extra optimization parameters, e.g., the image center, as discussed earlier.
EDIT: Added optimization over the capped rectangle center, see results at the bottom.
Now print the results (using a 2011 Macbook Pro):
Convergence time (sec) = 16.1053
L0 = 25; L estimate = 58.5773
W0 = 25; W estimate = 104.0663
theta0 = 90; theta estimate = 36.9024
And display the results:
EDIT: The exaggerated "thickness" of the fitting results above is because the model is trying to fit the data while keeping its center fixed, resulting in larger values for W. See the updated results at the bottom.
You can see by comparing the data to the final estimate that even a relatively simple model starts to resemble the data fairly well.
You can go further and calculate error bars for the estimates by setting up your own Monte-Carlo simulations to check accuracy as a function of noise and other degrading factors (with known inputs that you can generate to produce simulated data).
Below is the model function I used for the capped rectangle (note: the way I did the image rotation is numerically kind of sketchy and not very robust for finite differences, but it's quick and dirty and gets you going):
function result = cappedRectangle(params, nX, nY)
    [x,y] = meshgrid(-(nX-1)/2:(nX-1)/2,-(nY-1)/2:(nY-1)/2);
    L = params(1);
    W = params(2);
    theta = params(3); % units are degrees
    R = W;
    % Define r1 and r2 for the displaced rounded edges:
    x1 = x - L;
    x2 = x + L;
    r1 = sqrt(x1.^2+y.^2);
    r2 = sqrt(x2.^2+y.^2);
    % Capped rectangle prior to rotation (theta = 0):
    temp = double( (abs(x) <= L) & (abs(y) <= W) | (r1 <= R) | (r2 <= R) );
    cappedRectangleRotated = im2double(imrotate(mat2gray(temp), theta, 'bilinear', 'crop'));
    result = cappedRectangleRotated(:);
    return
And then you will also need the error function called by fminunc:
function [error, df_dx] = errorFunc(params,data)
    nY = size(data,1);
    nX = size(data,2);
    % Anonymous function for the model:
    model = @(params)cappedRectangle(params,nX,nY);
    % Least-squares error (analogous to chi^2 in the literature):
    f = @(x)sum( (data(:) - model(x) ).^2 ) / sum(data(:).^2);
    % Scalar error:
    error = f(params);
    [df_dx] = finiteDiffGrad(f,params);
    return
As well as the function to calculate the finite difference gradients:
function [df_dx] = finiteDiffGrad(fun,x)
    N = length(x);
    x = reshape(x,N,1);
    % Pick a small delta; dx should be experimented with:
    dx = norm(x(:))/10;
    % Define an array of dx values:
    h_array = dx*eye(N);
    df_dx = zeros(size(x));
    f = @(x) feval(fun,x);
    % Finite difference approximation (centered difference; error is O(h^2)):
    for j = 1:N
        hj = h_array(j,:)';
        df_dx(j) = ( f(x+hj) - f(x-hj) )/(2*dx);
    end
    return
Second Approach: use regionprops()
As others have pointed out, you can also use Matlab's regionprops(). Overall I think this could work the best, with some tuning and checking to ensure that it's doing what you expect. So the approach would be to call it like this (it certainly is a lot simpler than the first approach!):
data = im2double(cdata);
data = round(data / max(data(:)));
s = regionprops(data, 'Orientation', 'MajorAxisLength', ...
'MinorAxisLength', 'Eccentricity', 'Centroid');
And then the struct result s:
>> s
s =
Centroid: [345.5309 389.6189]
MajorAxisLength: 365.1276
MinorAxisLength: 174.0136
Eccentricity: 0.8791
Orientation: 30.9354
This gives enough information to feed into a model of a capped rectangle. At first glance this seems like the way to go, but it seems like you have your mind set on another approach (maybe the first approach above).
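For instance, a hypothetical mapping onto the capped-rectangle parameters of the first approach (my assumption: treat the major axis as the full length 2*L + 2*R and the minor axis as the full width 2*W, with R = W as before; the equivalent-ellipse axes only approximate the capsule's true extent):
theta = s.Orientation;           % degrees
R = s.MinorAxisLength/2;         % radius from the half-width
W = R;
L = s.MajorAxisLength/2 - R;     % half-length of the straight section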
Anyway, below is an image of the results (in red) overlaid on top of your data which you can see looks quite good:
EDIT: I couldn't help myself; I suspected that by including the image center as an optimization parameter, much better results could be obtained, so I went ahead and did it just to check. Sure enough, with the same starting estimates used earlier in the least-squares estimation, here are the results:
Iteration f(x) step optimality CG-iterations
0 0.911579 0.00465
1 0.859323 10 0.00471 2
2 0.742788 20 0.00502 2
3 0.530433 40 0.00541 2
... and so on ...
28 0.0858947 0.0195312 0.000279 0
29 0.0858947 0.0390625 0.000279 1
30 0.0858947 0.00976562 0.000279 0
31 0.0858947 0.00244141 0.000279 0
32 0.0858947 0.000610352 0.000279 0
By comparison with the earlier values we can see that the new least-square error values are quite a bit smaller when including the image center, confirming what we suspected earlier (so no big surprise).
The updated estimates for the capped rectangle parameters are thus:
Convergence time (sec) = 96.0418
L0 = 25; L estimate = 89.0784
W0 = 25; W estimate = 80.4379
theta0 = 90; theta estimate = 31.614
And relative to the image array center we get:
xc = -22.9107
yc = 35.9257
The optimization takes longer but the results are improved as seen by visual inspection:
If performance is an issue you may want to consider writing your own optimizer or first try tuning Matlab's optimization parameters, perhaps using different algorithm options as well; see the optimization options above.
Here is the code for the updated model:
function result = cappedRectangle(params, nX, nY)
    [X,Y] = meshgrid(-(nX-1)/2:(nX-1)/2,-(nY-1)/2:(nY-1)/2);
    % Extract params to make code more readable:
    L = params(1);
    W = params(2);
    theta = params(3); % units are degrees
    xc = params(4); % new param: image center in x
    yc = params(5); % new param: image center in y
    % Shift coordinates to the image center:
    x = X-xc;
    y = Y-yc;
    % Define R = W as a constraint:
    R = W;
    % Define r1 and r2 for the rounded edges:
    x1 = x - L;
    x2 = x + L;
    r1 = sqrt(x1.^2+y.^2);
    r2 = sqrt(x2.^2+y.^2);
    temp = double( (abs(x) <= L) & (abs(y) <= W) | (r1 <= R) | (r2 <= R) );
    cappedRectangleRotated = im2double(imrotate(mat2gray(temp), theta, 'bilinear', 'crop'));
    result = cappedRectangleRotated(:);
and then prior to calling fminunc() I adjusted the parameter list:
L = 25;
W = L;
theta = 90;
% set image center to zero as initial guess:
xc = 0;
yc = 0;
params0 = [L W theta xc yc];
Enjoy.
First I have to say that I do not have the answer to all of your questions, but I can help you with the orientation.
I suggest using principal component analysis (PCA) on the binary image. A good tutorial on PCA is given by Jon Shlens. Figure 2 of his tutorial shows an example of what it can be used for, and Section 5 explains how to compute the principal components. With singular value decomposition it is much easier, as shown in Section 6.1.
To use PCA you have to get measurements for which you want to compute the principal components. In your case, each white pixel is a measurement, represented by its pixel location (x, y)'. You will have N two-dimensional vectors as your measurements; thus, your 2xN measurement matrix X is formed by concatenating these vectors.
When you have built this matrix, proceed as given in Section 6.1. The singular values represent the "strength" of the different components. Thus, the largest singular value corresponds to the long axis of your ellipse, and the second-largest singular value (there should be only two) corresponds to the other (perpendicular) axis.
Remember, if the ellipse is a circle your singular values should be equal, but with your discrete image representation you will not get a perfect circle.
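A minimal sketch of that procedure (assuming BW is your binary image as a logical matrix; note that the sign of the angle depends on the image's y-down coordinate convention):
[r, c] = find(BW);                  % coordinates of the white pixels
X = [c.'; r.'];                     % 2xN measurement matrix (x = column, y = row)
Xc = bsxfun(@minus, X, mean(X,2));  % subtract the mean of each dimension
[U, S, ~] = svd(Xc, 'econ');        % columns of U are the principal directions
theta = atan2d(U(2,1), U(1,1));     % orientation of the long axis in degrees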