I am trying to fit histogram data that seem to follow a Poisson distribution. I declare the function as follows and try to fit it using the least-squares method.
xdata; ydata; % Arrays in which I have stored the data.
% ydata tells us how many times each value in xdata is repeated in the set.
fun = @(x,xdata) (exp(-x(1))*(x(1).^xdata)) ./ factorial(xdata); % Function I
% want to use in the fit. It is a Poisson distribution.
x0 = 60; % Approximate value of the parameter lambda to help the fit
p = lsqcurvefit(fun, x0, xdata, ydata); % Fit in the least-squares sense
However, I encounter the following error:
Error using snls (line 48)
Objective function is returning undefined values at initial point.
lsqcurvefit cannot continue.
I have seen online that this sometimes has to do with a division by zero, for example, which can be solved by adding a small amount to the denominator so that the indeterminate form never occurs. However, that is not my case. What is the problem then?
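A quick way to see what is happening (a minimal diagnostic sketch, assuming xdata holds the observed counts) is to evaluate the handle at the initial point and inspect it for non-finite values; one common culprit is that factorial(n) returns Inf for n > 170 in double precision, and once lambda.^xdata overflows as well, the ratio becomes Inf/Inf = NaN:
v = fun(x0, xdata);        % evaluate the model at the initial point
disp(xdata(~isfinite(v)))  % counts where the handle returns NaN or Inf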
I implemented both methods (Maximum Likelihood and PDF Curve Fitting).
You can see the code in my Stack Overflow Q45118312 GitHub Repository.
Results:
As you can see, Maximum Likelihood is simpler and better (MSE-wise), so there is no reason to use the PDF Curve Fitting method.
Part of the code which does the heavy lifting is:
%% Simulation Parameters
numTests = 50;
numSamples = 1000;
paramLambdaBound = 10;
epsVal = 1e-6;
hPoissonPmf = @(paramLambda, vParamK) ((paramLambda .^ vParamK) * exp(-paramLambda)) ./ factorial(vParamK);
% Find the largest count with non-negligible probability mass at the bound
for ii = 1:ceil(1000 * paramLambdaBound)
if(hPoissonPmf(paramLambdaBound, ii) <= epsVal)
break;
end
end
vValGrid = [0:ii];
vValGrid = vValGrid(:);
vParamLambda = zeros([numTests, 1]);
vParamLambdaMl = zeros([numTests, 1]); %<! Maximum Likelihood
vParamLambdaCf = zeros([numTests, 1]); %<! Curve Fitting
%% Generate Data and Samples
for ii = 1:numTests
paramLambda = paramLambdaBound * rand([1, 1]);
vDataSamples = poissrnd(paramLambda, [numSamples, 1]);
vDataHist = histcounts(vDataSamples, [vValGrid - 0.5; vValGrid(end) + 0.5]) / numSamples;
vDataHist = vDataHist(:);
vParamLambda(ii) = paramLambda;
vParamLambdaMl(ii) = mean(vDataSamples); %<! Maximum Likelihood
vParamLambdaCf(ii) = lsqcurvefit(hPoissonPmf, 2, vValGrid, vDataHist, 0, inf); %<! Curve Fitting
end
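For reference, the comparison metric is something like the following minimal sketch (assumed; the full evaluation code lives in the linked repository):
mseMl = mean((vParamLambdaMl - vParamLambda) .^ 2); % Maximum Likelihood MSE
mseCf = mean((vParamLambdaCf - vParamLambda) .^ 2); % Curve Fitting MSE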
This is the wrong way to do it.
You have data you believe follows a Poisson distribution.
Since the Poisson distribution is parameterized by a single parameter (lambda), what you need to do is apply parameter estimation.
The classic way to do so is Maximum Likelihood Estimation (MLE).
For this case, the Poisson distribution, you need to follow the MLE of the Poisson distribution.
Namely, just calculate the sample mean of the data, as poissfit() does.
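As a minimal sketch (the data here are synthetic, for illustration only):
vSamples  = poissrnd(7, [1000, 1]); % synthetic counts with true lambda = 7
lambdaHat = mean(vSamples);         % the Poisson MLE; equivalent to poissfit(vSamples)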
I'm attempting to use a scale-space implementation to fit n Gaussian curves to peaks in a noisy time-series digital signal (measuring voltage).
To test it, I created the following sample sum of three Gaussians with noise (0.2*rand; sorry, no picture, I'm new here):
amp = [2; 0.9; 1.3];
mu = [19; 23; 28];
sigma = [4.8; 1.3; 2.5];
x = linspace(1,50,1000);
y = zeros(3, numel(x));
for n = 1:3, y(n,:) = amp(n)*exp(-(x-mu(n)).^2./(2*sigma(n)^2)); end
noisysignal = y(1,:) + y(2,:) + y(3,:) + 0.2*rand(1,numel(x));
I found this article http://www.engineering.wright.edu/~agoshtas/GMIP94.pdf posted in user355856's answer to the thread "Peak decomposition".
I believe my code generates the correct result for plotting the zero crossings as a function of the Gaussian filter resolution sigma, but I have a few issues. The first is that it seems yet another fitting routine would be needed to identify the approximate location of the arch intercepts for approximating the initial peak sigma and mu values. The second is that the edges of the scale-space plot have substantial arches that definitely do not correspond to any peak, and I'm not sure how to screen these out effectively. Lastly, I used a spacing of 50 when calculating the second-derivative central finite difference, since much more destroyed features and much less resulted in a forest of zero crossings. Would there be a better way to filter that, to control random zero crossings in the Gaussian peak tails?
function [crossing] = scalespace(x, y, sigmalimit)
figure; hold on; ylim([0 sigmalimit]);
for sigma = 1:sigmalimit
yconv = convkernel(sigma, y); %convolve with kernel
xconv = linspace(x(1), x(end), length(yconv));
yconvpp = d2centralfinite(xconv, yconv, 50); % 50 was empirically chosen
num = 0;
for i = 1 : length(yconvpp)-1
if sign(yconvpp(i)) ~= sign(yconvpp(i+1))
crossing(sigma, num+1) = xconv(i);
num = num+1;
end
end
plot(crossing(sigma, crossing(sigma, :) ~= 0),...
sigma*ones(1, numel(crossing(sigma, crossing(sigma, :) ~= 0))), '.');
end
function [yconv] = convkernel(sigma, y)
t = sigma^2;
C = 3; % for kernel truncation
M = C*round(sqrt(t))+1;
window = (-M) : (+M);
G = zeros(1, length(window));
G(:) = (1/(2*pi*t))*exp(-(window.^2)./(2*t));
yconv = conv(G, y);
This is my first post and I apologize in advance for any issues in style. I'm fairly new to programming, so any advice regarding the programming style or information provided in this question would be much appreciated. I also read through Amro's answer about MATLAB's GMM function, if anyone feels that would be a more efficient approach to modeling multiple Gaussians in a digital signal.
Thank you!
I have a data-set which is loaded into matlab. I need to do exponential fitting for the plotted curve without using the curve fitting tool cftool.
I want to do this manually through executing a code/function that will output the values of a and b corresponding to the equation:
y = a*exp(b*x)
Then, using those values, I will do error optimization and create the best fit for the data I have.
Any help please?
Thanks in advance.
Try this...
f = fit(x,y,'exp1');
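Note that fit() is part of the Curve Fitting Toolbox (the command-line interface, not the cftool GUI) and expects column vectors. A minimal sketch with synthetic data (the names and values here are assumptions, for illustration):
x = (0:0.1:5)';                          % assumed sample data (column vector)
y = 2.5*exp(0.8*x) + 0.1*randn(size(x)); % synthetic y = a*exp(b*x) plus noise
f = fit(x, y, 'exp1');                   % 'exp1' fits y = a*exp(b*x)
a = f.a;                                 % fitted coefficients
b = f.b;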
I think the typical objective in this type of assignment is to recognize that, by taking the log of both sides, various polynomial-fit approaches can be used.
ln(y) = ln(a) + ln(exp(b*x))
ln(y) = ln(a) + b*x
There can be difficulties with this approach when errors such as noise are involved, due to the behavior of ln as its argument approaches zero.
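Assuming all y > 0, a minimal sketch of this log-linear approach using polyfit:
% Log-linear least squares: ln(y) = ln(a) + b*x is linear in (ln(a), b).
% Assumes all y > 0; near-zero y values will dominate the fit, per the caveat above.
p = polyfit(x, log(y), 1); % p(1) = b, p(2) = ln(a)
b = p(1);
a = exp(p(2));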
In this exercise I have a set of data that present an exponential curve, and I want to fit them exponentially and get the values of a and b. I used the following code, and it worked with the data I had.
"trail.m" file:
%defining the data used
load trialforfitting.txt;
xdata = trialforfitting(:,1);
ydata = trialforfitting(:,2);
%calling the "fitcurvedemo" function
[estimates, model] = fitcurvedemo(xdata,ydata)
[sse, FittedCurve] = model(estimates); %sse must be computed before displaying it
disp(sse);
plot(xdata, ydata, 'o'); %Data curve
hold on
plot(xdata, FittedCurve, 'r') %Fitted curve
xlabel('Voltage (V)')
ylabel('Current (A)')
title('Exponential Fitting to IV curves');
legend('data', 'Fitting')
hold off
"fitcurvedemo.m" file:
function [estimates, model] = fitcurvedemo(xdata, ydata)
%Call fminsearch with a random starting point.
start_point = rand(1, 2);
model = @expfun;
estimates = fminsearch(model, start_point);
%"expfun" accepts curve parameters as inputs and outputs the sum of
%squares error (sse); expfun is a function handle, and fminsearch
%only needs the sse output.
%"estimates" returns the values of A and lambda.
%"model" computes the exponential function.
function [sse, FittedCurve] = expfun(params)
A = params(1);
lambda = params(2);
%exponential function model to fit
FittedCurve = A .* exp(lambda * xdata);
ErrorVector = FittedCurve - ydata;
%output of the expfun function [sum of squares of error]
sse = sum(ErrorVector .^ 2);
end
end
I have a new set of data for which this code doesn't work and doesn't give an appropriate exponential fit for the plotted data curve.
I have a question about the sgolay function in Matlab R2013a. My database has 165 spectra with 2884 variables and I would like to take the first and second derivatives of them. How might I define the inputs K and F to sgolay?
Below is an example:
sgolay is used to smooth a noisy sinusoid and compare the resulting first and second derivatives to the first and second derivatives computed using diff. Notice how using diff amplifies the noise and generates useless results.
K = 4; % Order of polynomial fit
F = 21; % Window length
[b,g] = sgolay(K,F); % Calculate S-G coefficients
dx = .2;
xLim = 200;
x = 0:dx:xLim-1;
y = 5*sin(0.4*pi*x)+randn(size(x)); % Sinusoid with noise
HalfWin = ((F+1)/2) -1;
for n = (F+1)/2:length(x)-(F+1)/2 % length(x) is 996 here
% Zero-th derivative (smoothing only)
SG0(n) = dot(g(:,1), y(n - HalfWin: n + HalfWin));
% 1st differential
SG1(n) = dot(g(:,2), y(n - HalfWin: n + HalfWin));
% 2nd differential
SG2(n) = 2*dot(g(:,3), y(n - HalfWin: n + HalfWin));
end
SG1 = SG1/dx; % Turn differential into derivative
SG2 = SG2/(dx*dx); % and into 2nd derivative
% Scale the "diff" results
DiffD1 = (diff(y(1:length(SG0)+1)))/ dx;
DiffD2 = (diff(diff(y(1:length(SG0)+2)))) / (dx*dx);
subplot(3,1,1);
plot([y(1:length(SG0))', SG0'])
legend('Noisy Sinusoid','S-G Smoothed sinusoid')
subplot(3, 1, 2);
plot([DiffD1',SG1'])
legend('Diff-generated 1st-derivative', 'S-G Smoothed 1st-derivative')
subplot(3, 1, 3);
plot([DiffD2',SG2'])
legend('Diff-generated 2nd-derivative', 'S-G Smoothed 2nd-derivative')
Taking derivatives is an inherently noisy process. Thus, if you already have some noise in your data, it will indeed be magnified as you take higher-order derivatives. Savitzky-Golay is a very useful way of combining smoothing and differentiation into one operation. It's a general method and it computes derivatives to an arbitrary order. There are trade-offs, though, and other special methods exist for data with a certain structure.
In terms of your application, I don't have any concrete answers. Much depends on the nature of the data (sampling rate, noise ratio, etc.). If you use too much smoothing, you'll smear your data or produce aliasing. Same thing if you over-fit the data by using high order polynomial coefficients, K. In your demo code you should also plot the analytical derivatives of the sin function. Then play with different amounts of input noise and smoothing filters. Such a tool with known exact answers may be helpful if you can approximate aspects of your real data. In practice, I try to use as little smoothing as possible in order to produce derivatives that aren't too noisy. Often this means a third-order polynomial (K = 3) and a window size, F, as small as possible.
So yes, many suggest that you use your eyes to tune these parameters. However, there has also been some very recent research on choosing the coefficients automatically: On the Selection of Optimum Savitzky-Golay Filters (2013). There are also alternatives to Savitzky-Golay, e.g., this paper based on regularization, but you may need to implement them yourself in Matlab.
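As a concrete starting point for your 165 spectra with 2884 variables, here is a minimal sketch (an assumption on my part: one spectrum per row and unit sample spacing; divide by your actual sampling step, as in the demo above, to get true derivatives):
K = 3;  F = 21;                  % polynomial order and (odd) window length
[~, g] = sgolay(K, F);           % column k+1 of g holds the k-th derivative filter
halfWin = (F-1)/2;
spectra = randn(165, 2884);      % placeholder for your data matrix
D1 = zeros(size(spectra));       % first derivatives
D2 = zeros(size(spectra));       % second derivatives
for r = 1:size(spectra, 1)
    s = spectra(r, :);
    for n = halfWin+1 : numel(s)-halfWin
        D1(r, n) =   dot(g(:, 2), s(n-halfWin : n+halfWin));
        D2(r, n) = 2*dot(g(:, 3), s(n-halfWin : n+halfWin));
    end
end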
By the way, a while back I wrote a little replacement for sgolay. Like you, I only needed the second output, the differentiation filters, G, so that's all it calculates. This function is also faster (by about 2–4 times):
function G = sgolayfilt(k,f)
%SGOLAYFILT Savitzky-Golay differentiation filters
s = vander(0.5*(1-f):0.5*(f-1)); % Vandermonde matrix of window positions
S = s(:,f:-1:f-k);               % polynomial basis up to order k
[~,R] = qr(S,0);                 % economy-size QR for numerical stability
G = S/R/R';                      % equivalent to S*inv(S'*S): least-squares filter matrix
A full version of this function with input validation is available on my GitHub.
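For completeness, a hedged usage sketch (assuming y, n, and dx as in the demo above; the scaling mirrors how g(:,2) and g(:,3) are used there):
G  = sgolayfilt(3, 21);                   % order 3, 21-point window
d1 = dot(G(:,2), y(n-10:n+10)) / dx;      % 1st derivative at sample n
d2 = 2*dot(G(:,3), y(n-10:n+10)) / dx^2;  % 2nd derivative at sample n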
So I have had a few posts over the last few days about using MATLAB to perform a convolution (see here). But I am having issues, and I just want to try to use the convolution property of Fourier transforms. I have the code below:
width = 83.66;
x = linspace(-400,400,1000);
a2 = 1.205e+004 ;
al = 1.778e+005 ;
b1 = 94.88 ;
c1 = 224.3 ;
d = 4.077 ;
measured = al*exp(-((abs((x-b1)./c1).^d)))+a2;
%slit
rect = @(x) 0.5*(sign(x+0.5) - sign(x-0.5));
rt = rect(x/width);
subplot(5,1,1);plot(x,measured);title('imported data-super gaussian')
subplot(5,1,2);plot(x,(real(fftshift(fft(rt)))));title('transformed slit')
subplot(5,1,3);plot(x,rt);title('slit')
u = (fftshift(fft(measured)));
l = u./(real(fftshift(fft(rt))));
response = (fftshift(ifft(l)));
subplot(5,1,4);plot(x,real(response));title('response')
%Data Check
check = conv(rt,response,'full');
z = linspace(min(x),max(x),length(check));
subplot(5,1,5);plot(z,real(check));title('check')
My goal is to take my case, which is $measured = rt \ast signal$, and find $signal$. Once I find my signal, I convolve it with the rectangle and should get back measured, but I do not.
I have very little matlab experience, and pretty much 0 signal processing experience (working with DFTs). So any advice on how to do this would be greatly appreciated!
After considering the problem statement and woodchips' advice, I think we can get closer to a solution.
Input: u(t)
Output: y(t)
If we assume the system is causal and linear we would need to shift the rect function to occur before the response, like so:
rt = rect(((x+270+(83.66/2))/83.66));
figure; plot( x, measured, x, max(measured)*rt )
Next, consider the response to the input. It looks to me to be first order. If we assume as such, we will have a system transfer function in the frequency domain of the form:
H(s) = (b1*s + b0)/(s + a0)
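Equivalently (a small derivation, assuming zero initial conditions), multiplying out Y(s)*(s + a0) = U(s)*(b1*s + b0) and inverse-transforming gives the time-domain ODE
dy/dt + a0*y(t) = b1*du/dt + b0*u(t)
which is what the ODE function in the code below integrates.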
You had been trying to use convolution and FFTs to find the impulse response, i.e., the "transfer function" in the time domain. However, the FFT of the rect, being a sinc, has periodic zero crossings. These zero points make using the FFT to identify the system extremely difficult. Since
Y(s)/U(s) = H(s)
and U(s) = A*sinc(a*s) has zeros, the division blows up to infinity at those points, which doesn't make sense for a real system.
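To see this directly (a small illustration sketch, using rt and x from the question's code):
% The slit's spectrum is a sinc with periodic zero crossings; dividing by
% it blows up wherever it crosses zero.
U = fftshift(fft(rt));
figure; plot(x, abs(U));
title('|FFT of slit| -- note the periodic zeros')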
Instead, let's attempt to fit coefficients to the frequency-domain linear transfer function postulated above. Since there are no overshoots, etc., first order is a reasonable place to start.
EDIT
I realized my first answer here had an unstable system description, sorry! The solution to the ODE is very stiff due to the rect function, so we need to crank down the maximum time step and use a stiff solver. However, this is still a tough problem to solve this way; a more analytical approach may be more tractable.
We use fminsearch to find the continuous time transfer function coefficients like:
function x = findTf(c0,u,y,t)
% minimize the error for the estimated
% parameters of the transfer function
% use a scaled version without an offset for the response; the
% scalars can be added back later without breaking the solution.
yo = (y - min(y))/max(y);
x = fminsearch(@(c) simSystem(c,u,yo,t),c0); % fit against the scaled response
end
% calculate the derivatives of the transfer function
% inputs and outputs using the estimated coefficient
% vector c
function out = simSystem(c,u,y,t)
% estimate the derivative of the input
du = diff([0; u])./diff([0; t]);
% estimate the second derivative of the input
d2u = diff([0; du])./diff([0; t]);
% find the output of the system, corresponds to measured
opt = odeset('MaxStep',mean(diff(t))/100);
[~,yp] = ode15s(@(tt,yy) odeFun(tt,yy,c,du,d2u,t),t,[y(1) u(1) 0],opt);
% find the error between the actual measured output and the output
% from the system with the estimated coefficients
out = sum((yp(:,1) - y).^2);
end
function dy = odeFun(t,y,c,du,d2u,tx)
% y(1) is the output; y(2) integrates du (i.e., it tracks u) and y(3)
% integrates d2u (i.e., it tracks du), so the first row implements
% dy/dt = b1*du + b0*u - a0*y, matching H(s) = (b1*s + b0)/(s + a0)
% with c = [b1 b0 a0].
dy = [c(1)*y(3)+c(2)*y(2)-c(3)*y(1);
      interp1(tx,du,t);
      interp1(tx,d2u,t)];
end
Something like that anyway should get you going.
x = findTf([1 1 1]',rt',measured',x');
I have the following stochastic model describing the evolution of a process (Y) in space and time. Ds and Dt are the domains in space (2D, with x and y axes) and time (1D, with t axis). This model is usually known as a mixed-effects model or a components-of-variation model.
I am currently generating Y as follows:
%# Time parameters
T=1:1:20; % input
nT=numel(T);
%# Grid and model parameters
nRow=100;
nCol=100;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:1:nCol,1:1:nRow,T);
xPower=0.1;
tPower=1;
noisePower=1;
detConstant=1;
deterministic_mu = detConstant.*(((Grid.Nt).^tPower)./((Grid.Nx).^xPower));
beta_s = randn(nRow,nCol); % mean-zero random effect representing location specific variability common to all times
gammaTemp = randn(nT,1);
for t = 1:nT
gamma_t(:,:,t) = repmat(gammaTemp(t),nRow,nCol); % mean-zero random effect representing time specific variability common to all locations
end
var=0.1;% noise has variance = 0.1
for t=1:nT
kappa_st(:,:,t) = sqrt(var)*randn(nRow,nCol);
end
for t=1:nT
Y(:,:,t) = deterministic_mu(:,:,t) + beta_s + gamma_t(:,:,t) + kappa_st(:,:,t);
end
My questions are:
How do I produce delta in the expression for Y, and what is the difference between kappa and delta?
Could you help explain, through some illustration using Matlab, whether I am producing Y correctly?
Please let me know if you need some more information/explanation. Thanks.
First, I rewrote your code to make it a bit more efficient. I see you generate linearly-spaced grids for x,y and t and carry out the computation for all points in this grid. This approach has severe limitations on the maximum attainable grid resolution, since the 3D grid (and all variables defined with it) can consume an awfully large amount of memory if the resolution goes up. If the model you're implementing will grow in complexity and size (it often does), I'd suggest you throw this all into a function accepting matrix/vector inputs for s and t, which will be a bit more flexible in this regard -- processing "blocks" of data that will otherwise not fit in memory will be a lot easier that way.
Then, I generated the delta_st term with rand instead of randn, since the noise should be "white". Now, I'm very unsure about that last one, and I didn't have time to read through the paper you linked to -- can you tell me on what pages I can find the relevant sections for delta_st?
Now, the code:
%# Time parameters
T = 1:1:20; % input
nT = numel(T);
%# Grid and model parameters
nRow = 100;
nCol = 100;
% noise has variance = 0.1
var = 0.1;
xPower = 0.1;
tPower = 1;
noisePower = 1;
detConstant = 1;
[Grid.Nx,Grid.Ny,Grid.Nt] = meshgrid(1:nCol,1:nRow,T);
% deterministic mean
deterministic_mu = detConstant .* Grid.Nt.^tPower ./ Grid.Nx.^xPower;
% mean-zero random effect representing location specific
% variability common to all times
beta_s = repmat(randn(nRow,nCol), [1 1 nT]);
% mean-zero random effect representing time specific
% variability common to all locations
gamma_t = bsxfun(@times, ones(nRow,nCol,nT), randn(1, 1, nT));
% mean zero random effect capturing the spatio-temporal
% interaction not found in the larger-scale deterministic mu
kappa_st = sqrt(var)*randn(nRow,nCol,nT);
% mean zero random effect representing the micro-scale
% spatio-temporal variability that is modelled by white
% noise (i.i.d. at different time steps) in Ds·Dt
delta_st = noisePower * (rand(nRow,nCol,nT)-0.5);
% Final result:
Y = deterministic_mu + beta_s + gamma_t + kappa_st + delta_st;
Your implementation samples beta, gamma and kappa as if they are white (e.g. their values at each (x,y,t) are independent). The descriptions of the terms suggest that this is not meant to be the case. It looks like delta is supposed to capture the white noise, while the other terms capture the correlations over their respective domains. e.g. there is a non-zero correlation between gamma(t_1) and gamma(t_1+1).
If you wish to model gamma as a stationary Gaussian Markov process with variance var_g and correlation cor_g between gamma(t) and gamma(t+1), you can use something like
gamma_t = nan( nT, 1 );
gamma_t(1) = sqrt(var_g)*randn();
K_g = cor_g/var_g;
K_w = sqrt( (1-K_g^2)*var_g );
for t = 2:nT,
gamma_t(t) = K_g*gamma_t(t-1) + K_w*randn();
end
gamma_t = reshape( gamma_t, [ 1 1 nT ] );
The formulas I've used for the gains K_g and K_w in the above code (and the initialization of gamma_t(1)) produce the desired stationary variance \sigma^2_0 = var_g and one-step covariance \sigma^2_1 = cor_g.
Note that the implementation above assumes that later you will sum the terms using bsxfun to do the "repmat" for you:
Y = bsxfun( #plus, deterministic_mu + kappa_st + delta_st, beta_s );
Y = bsxfun( #plus, Y, gamma_t );
Note that I haven't tested the above code, so you should confirm by sampling that it actually produces a zero-mean process of the specified variance and covariance between adjacent samples. To sample beta, the same procedure can be extended into two dimensions (see the sketch below); the principles are essentially the same. I suspect kappa should be similarly modeled as a Markov Gaussian process, but in all three dimensions and with a lower variance, to represent higher-order effects not captured in mu, beta and gamma.
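For example, here is a hedged sketch of one simple separable construction (an assumption on my part, not the only way to build a correlated 2-D field): filter white noise with the same AR(1) gains first down the rows and then across the columns, which preserves the stationary variance var_g.
% Separable 2-D AR(1) field for beta_s, reusing K_g and K_w from the
% gamma example above (assumes var_b = var_g for illustration).
W = randn(nRow, nCol);                 % white noise
B = zeros(nRow, nCol);
B(1,:) = sqrt(var_g)*W(1,:);
for r = 2:nRow                         % AR(1) down the rows
    B(r,:) = K_g*B(r-1,:) + K_w*W(r,:);
end
beta_s = B;
for c = 2:nCol                         % AR(1) across the columns,
    beta_s(:,c) = K_g*beta_s(:,c-1) + sqrt(1 - K_g^2)*B(:,c); % variance-preserving mix
end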
Delta is supposed to be zero-mean stationary white noise. Assuming it to be Gaussian with variance noisePower, one would sample it using:
delta_st = sqrt(noisePower)*randn( [ nRows nCols nT ] );