I am fitting an exponential decay function with lsqcurvefit in MATLAB. To do this I first normalize my data, because the variables differ by several orders of magnitude. However, I'm not sure how to denormalize my fitted parameters.
My fitting model is s = O + A * exp(-t/T), where t and s are known, t is on the order of 10^-3 and s is on the order of 10^5. So I subtract the mean from each and divide by the standard deviation. My goal is to find the A, O and T that, at the given times t, best reproduce s. However, I don't know how to denormalize the resulting A, O and T.
Does somebody know how to do this? I only found this question on SO about normalization, but it does not really address the same problem.
When you normalize, you must record the means and standard deviations for each of your features. Then you can easily use those values to denormalize.
e.g.
A = [1 4 7 2 9]';
B = [100 475 989 177 399]';
So you could just normalize right away:
An = (A - mean(A)) / std(A)
but then you can't get back to the original A. So first save the means and stds.
Am = mean(A); Bm = mean(B);
As = std(A); Bs = std(B);
An = (A - Am)/As;
Bn = (B - Bm)/Bs;
Now do whatever processing you want, and then denormalize:
Ad = An*As + Am;
Bd = Bn*Bs + Bm;
I'm sure you can see that this is going to be an issue if you have a lot of features (you would have to type out code for each feature, what a mission!), so let's assume your data is arranged as a matrix, data, where each sample is a row and each column is a feature. Now you can do it like this:
data = [A, B];
means = mean(data);
stds = std(data);
datanorm = bsxfun(@rdivide, bsxfun(@minus, data, means), stds);
% Do processing on datanorm
datadenorm = bsxfun(@plus, bsxfun(@times, datanorm, stds), means);
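On newer MATLAB releases (R2016b and later), implicit expansion makes the bsxfun calls unnecessary; an equivalent sketch using the same means and stds:

datanorm   = (data - means) ./ stds;    % implicit expansion replaces bsxfun
datadenorm = datanorm .* stds + means;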
EDIT:
After you have fit your model parameters (A, O and T) using normalized t and f, your model will expect normalized inputs and produce normalized outputs. So to use it, you should first normalize t and then denormalize f.
So you find a new f by running the model on a normalized new t, i.e. f(tn), where tn = (t - tm)/ts, tm is the mean of your training (or fitting) t set, and ts its std. Then, to get f at the correct magnitude, you must denormalize only f, so the full solution is
f(tn)*fs + fm
So once again, all you need to do is save the mean and std you used to normalize.
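To make this concrete for the exponential model in the question, here is a minimal sketch (assuming the Optimization Toolbox's lsqcurvefit; the initial guess and tnew are only illustrative):

tm = mean(t); ts = std(t); tn = (t - tm)/ts;   % normalized times
sm = mean(s); ss = std(s); sn = (s - sm)/ss;   % normalized signal
model = @(p, tq) p(1) + p(2)*exp(-tq/p(3));    % p = [O A T] in normalized space
p = lsqcurvefit(model, [0 1 1], tn, sn);       % fit in normalized space
tnew = 2e-3;                                   % hypothetical new time
snew = model(p, (tnew - tm)/ts) * ss + sm;     % normalize in, denormalize out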
Related
So after beating my head against the wall for a few hours, I looked online for a solution to my problem, and it worked great. I just want to know what caused the issue with the way I was originally going about it.
Here are some more details. The input is a 20x20px image from the MNIST dataset, and there are 5000 samples, so X, or A1, is 5000x400. There are 25 nodes in the single hidden layer. The output is a one-hot vector of the digits 0-9. y (not Y, which is the one-hot encoding of y) is a 5000x1 vector with values 1-10.
Here was my original code for the cost function:
Y = zeros(m, num_labels);
for i = 1:m
Y(i, y(i)) = 1;
endfor
H = sigmoid(Theta2*[ones(1,m);sigmoid(Theta1*[ones(m, 1) X]'))
J = (1/m) * sum(sum((-Y*log(H]))' - (1-Y)*log(1-H]))')))
But then I found this:
A1 = [ones(m, 1) X];
Z2 = A1 * Theta1';
A2 = [ones(size(Z2, 1), 1) sigmoid(Z2)];
Z3 = A2*Theta2';
A3 = sigmoid(Z3);
H = A3;
J = (1/m)*sum(sum((-Y).*log(H) - (1-Y).*log(1-H), 2));
I see that this may be slightly cleaner, but what functionally causes my original code to get 304.88 and the other to get ~0.25? Is it the element-wise multiplication?
FYI, this is the same problem as this question if you need the formal equation written out.
Thanks for any help I can get! I really want to understand where I'm going wrong.
Transferred from the comments:
With a quick look: in J = (1/m) * sum(sum((-Y*log(H]))' - (1-Y)*log(1-H]))'))) there is definitely something going on with the parentheses, but probably from how you pasted it here, not in the original code, as this would throw an error when you run it. If I understand correctly and Y, H are matrices, then in your first version Y*log(H) is matrix multiplication, while in the second version Y.*log(H) is an entry-wise multiplication (not matrix multiplication, just c(i,j) = a(i,j)*b(i,j)).
Update 1:
Regarding your question in the comment:
From the first screenshot, you represent each value y_k^(i) in the entry Y(i,k) of the Y matrix, and each value h(x^(i))_k as H(i,k). So basically, for each (i,k) you want to compute Y(i,k)*log(H(i,k)) + (1-Y(i,k))*log(1-H(i,k)). You can do it for all the values together and store the result in a matrix C. Then C = Y.*log(H) + (1-Y).*log(1-H) and each C(i,k) has the above-mentioned value. This is a .* operation because you want to do it for each element (i,k) of the matrices (in contrast to multiplying the matrices, which is totally different). Afterwards, to get the sum of all the values inside the 2D matrix C, you use the Octave function sum twice: sum(sum(C)) to sum both column-wise and row-wise (or, as @Irreducible suggested, just sum(C(:))).
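Putting that together, a minimal sketch of the vectorized cost (assuming Y and H are m-by-num_labels matrices, as above):

C = -Y .* log(H) - (1 - Y) .* log(1 - H);   % entry-wise cross-entropy terms
J = (1/m) * sum(C(:));                      % sum over all entries of C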
Note there may be other errors as well.
The final goal I am trying to achieve is the generation of a ten-minute time series: to get there I have to perform an FFT operation, and that is the point I have been stumbling on.
In general, the target time series is the sum of two terms: a steady component U(t) and a fluctuating component u'(t). That is
u(t) = U(t) + u'(t);
So generally, my code follows this procedure:
1) Given data
time = 600;             % [s]
Nfft = 4096;
L    = 340.2;           % [m]
U    = 10;              % [m/s]
df   = 1/time;          % = 0.00167 Hz
fn   = Nfft/(2*time);   % = 3.4133 Hz
This means that my frequency array should be laid out as follows:
f = (-fn+df):df:fn;
But, instead of using the whole f array, I am only making use of the positive half:
fpos = df:df:fn;   % i.e. 0.00167 : 0.00167 : 3.4133 Hz
2) Spectrum Definition
I define a certain spectrum shape, applying the following relationship
Su = (6*L*U)./((1 + 6.*fpos.*(L/U)).^(5/3));
3) Random phase generation
I, then, have to generate a set of complex samples with a determined distribution: in my case, the random phase will approach a standard Gaussian distribution (mu = 0, sigma = 1).
In MATLAB I call
nn = complex(normrnd(0,1,1,Nfft/2), normrnd(0,1,1,Nfft/2));   % 1-by-Nfft/2 row to match fpos (normrnd(0,1,Nfft/2) would return a square matrix)
4) Apply random phase
To apply the random phase, I just do this
Hu = Su .* nn;   % element-wise product, applying a random phase to each frequency bin
At this point my pains begin!
So far, I have only generated Nfft/2 = 2048 complex samples, accounting for the fpos content. Therefore, the content accounting for the negative half of f is still missing. To overcome this issue, I was thinking of merging the real and imaginary parts of Hu, in order to get a signal Huu with Nfft = 4096 samples and all real values.
But with this merging process the 0-th frequency order would not be represented, since the imaginary part of Hu is only defined over fpos.
So, how can I account for the 0-th order while keeping a procedure like the one I have proposed so far?
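For reference, one common construction (a sketch only, not necessarily the merging idea above) is to build the full Nfft-point spectrum with conjugate symmetry, set the DC (0-th) and Nyquist bins explicitly, and take a real inverse FFT:

X = zeros(1, Nfft);
X(2:Nfft/2)     = Hu(1:Nfft/2-1);          % positive frequencies df ... fn-df
X(Nfft/2+1)     = real(Hu(Nfft/2));        % Nyquist bin must be real
X(Nfft/2+2:end) = conj(X(Nfft/2:-1:2));    % negative half = conjugate mirror
X(1)            = 0;                       % DC bin: zero mean for the fluctuation
u_fluct = real(ifft(X));                   % real-valued fluctuating component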
I have intensity points (marked pink in the plot above), which are stored in a variable and given as:
intensity_info =[ 35.9349
46.4465
46.4790
45.7496
44.7496
43.4790
42.5430
41.4351
40.1829
37.4114
33.2724
29.5447
26.8373
24.8171
24.2724
24.2487
23.5228
23.5228
24.2048
23.7057
22.5228
22.0000
21.5210
20.7294
20.5430
20.2504
20.2943
21.0219
22.0000
23.1096
25.2961
29.3364
33.4351
37.4991
40.8904
43.2706
44.9798
47.4553
48.9324
48.6855
48.5210
47.9781
47.2285
45.5342
34.2310 ];
I also have the locations of points A, B and C, which are calculated by:
[maxtab, mintab] = peakdet(intensity_info, 1); % maxtab has A and B information and
% mintab has C information
The peakdet.m MATLAB code can be found here: (http://www.billauer.co.il/peakdet.html). I want to calculate point D, where there is a slight increase in the intensity value: if we come down from point A the intensity decreases, but at point D there is a slight increase. As seen from the graph below, point C can also lie to the left of point D; in this case, if we come down from point B the intensity decreases and at D there is a slight increase. The intensity values for the graph below are given as:
intensity_info =[29.3424
39.4847
43.7934
47.4333
49.9123
51.4772
52.1189
51.6601
48.8904
45.0000
40.9561
36.5868
32.5904
31.0439
29.9982
27.9579
26.6965
26.7312
28.5631
29.3912
29.7496
29.7715
29.7294
30.2706
30.1847
29.7715
29.2943
29.5667
31.0877
33.5228
36.7496
39.7496
42.5009
45.7934
49.1847
52.2048
53.9123
54.7276
54.9781
55.0000
54.9781
54.7276
53.9342
51.4246
38.2512];
Points A, B and C are calculated in the same manner as above.
How can I calculate point D in these cases?
I'm not MATLAB-literate, but if the 'tabs' are subtables, then perhaps you can manipulate them to create other subtables... something like (I repeat, illiterate):
left_of_graph = part of graph from A to C
right_of_graph = part of graph from C to B
left_delta = some fraction of the difference between A's y-value and C's y-value
right_delta = some fraction of the difference between C's y-value and B's y-value
[left_maxtab,left_mintab] = peakdet(left_of_graph,left_delta)
[right_maxtab,right_mintab] = peakdet(right_of_graph,right_delta)
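In actual MATLAB that might look something like the following sketch (it assumes peakdet returns [index value] rows, with A and B in maxtab and C in mintab, and the fraction 0.25 is an arbitrary choice):

iA = maxtab(1,1); iB = maxtab(2,1); iC = mintab(1,1);  % peak/valley indices
left_of_graph  = intensity_info(iA:iC);
right_of_graph = intensity_info(iC:iB);
left_delta  = 0.25 * (intensity_info(iA) - intensity_info(iC));
right_delta = 0.25 * (intensity_info(iB) - intensity_info(iC));
[left_maxtab,  left_mintab]  = peakdet(left_of_graph,  left_delta);
[right_maxtab, right_mintab] = peakdet(right_of_graph, right_delta);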
I do have some experience with peak analysis, so I will say this will help, not answer, the problem. You can find all the peaks you want, but that doesn't mean the data is worth squinting at. Noise and imperfect resolution are real. Happy hunting!
PS You might also scan for all points higher than both neighbors in the whole band. That is guaranteed to miss none of the 'true' maxima, but will give you more 'false' maxima than you can count (although your data looks pretty smooth!).
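That neighbor scan is only a couple of lines in MATLAB; a minimal sketch, assuming intensity_info is a column vector as above:

v = intensity_info;
isPeak  = [false; v(2:end-1) > v(1:end-2) & v(2:end-1) > v(3:end); false];
peakIdx = find(isPeak);   % every point strictly higher than both neighbors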
The solution you're looking for is an iterative numerical method; in your particular case bisection is the best fit, because others, like uniform sequential search, do not take an interval as an input.
This is the implementation for bisection:
function [a, b, L] = Bisection(f, a, b, e, d)
% [a,b] is the interval containing your local maximum; e is the error
% tolerance for the result and d is the step (dx).
L = b - a;
while (L > e)
    xa = (a + b)/2 - d/2;
    xb = (a + b)/2 + d/2;
    ya = subs(f, xa);   % evaluate f at xa (f is assumed symbolic)
    yb = subs(f, xb);
    if (ya < yb)
        a = xa;         % the maximum lies to the right of xa
    else
        b = xb;         % the maximum lies to the left of xb
    end
    L = b - a;
end
end
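A hypothetical usage example (assuming the Symbolic Math Toolbox, since subs is used above; the function and interval are made up):

syms x
f = -(x - 2)^2 + 5;                          % unimodal, maximum at x = 2
[a, b, L] = Bisection(f, 0, 4, 1e-3, 1e-4);
xmax = (a + b)/2;                            % approximately 2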
The method above is pretty efficient and straightforward to use, although there are others that perform even better, such as the Fibonacci and Golden Section methods.
Cheers.
I have found an alternative solution. extrema.m helped in finding point D in the above two graphs. extrema.m can be downloaded from (http://www.mathworks.com/matlabcentral/fileexchange/12275-extrema-m-extrema2-m) and used in the following way to find point D:
[ymax,imax,ymin,imin] = extrema(intensity_info);
figure; plot(x,intensity_info, x(imax),ymax,'g.', x(imin),ymin,'r.');   % x is the abscissa vector for intensity_info
I’m currently a Physics student and for several weeks have been compiling data related to ‘Quantum Entanglement’. I’ve now got to a point where I have to plot my data (which should resemble a cos² graph - and does) to a sort of “best fit” cos² graph. The lab script says the following:
A more precise determination of the visibility V (this is basically how 'clean' the data is) follows from the best fit to the measured data using the function:
f(b) = (A/2) * [1 - V*sin((b - b_center)/P)]
Granted, this probably doesn't mean much out of context, but essentially A is the amplitude, b is an angle, b_center is the center angle and P is the periodicity. Hence this is also a “wave”, like the experimental data I have found.
From this I understand, as previously mentioned, that I am making a “best fit” curve. However, I have been told that this isn't possible with Excel and that the best approach is Matlab.
I know intermediate JavaScript but do not know Matlab and was hoping for some direction.
Is there a tutorial I can read for this? Is it possible for someone to go through it with me? I really have no idea what it entails, so any feed back would be greatly appreciated.
Thanks a lot!
Initial steps
I guess we should begin by getting a representation in Matlab of the function that you're trying to model. A direct translation of your formula looks like this:
function y = targetfunction(A,V,P,bc,b)
y = (A/2) * (1 - V * sin((b-bc) / P));
end
Getting hold of the data
My next step is going to be to generate some data to work with (you'll use your own data, naturally). So here's a function that generates some noisy data. Notice that I've supplied some values for the parameters.
function [y b] = generateData(npoints,noise)
A = 2;
V = 1;
P = 0.7;
bc = 0;
b = 2 * pi * rand(npoints,1);
y = targetfunction(A,V,P,bc,b) + noise * randn(npoints,1);
end
The function rand generates random points on the interval [0,1], and I multiplied those by 2*pi to get points randomly on the interval [0, 2*pi]. I then applied the target function at those points, and added a bit of noise (the function randn generates normally distributed random variables).
Fitting parameters
The most complicated function is the one that fits a model to your data. For this I use the function fminunc, which does unconstrained minimization. The routine looks like this:
function [A V P bc] = bestfit(y,b)
x0(1) = 1;    % A
x0(2) = 1;    % V
x0(3) = 0.5;  % P
x0(4) = 0;    % bc
f = @(x) norm(y - targetfunction(x(1),x(2),x(3),x(4),b));
x = fminunc(f,x0);
A = x(1);
V = x(2);
P = x(3);
bc = x(4);
end
Let's go through it line by line. To minimize a function in Matlab, it needs to take a single vector as a parameter, so we have to pack our four parameters into a vector, which I do in the first four lines. I used values that are close to, but not the same as, the ones that I used to generate the data.
Then I define the function f that I want to minimize. It takes a single argument x, which it unpacks and feeds to targetfunction, along with the points b in our dataset. Hopefully the results are close to y. We measure how far they are from y by subtracting them from y and applying the function norm, which squares every component, adds them up and takes the square root (i.e. it computes the Euclidean length of the residual vector; minimizing it is equivalent to minimizing the root-mean-square error).
Then I call fminunc with our function to be minimized, and the initial guess for the parameters. This uses an internal routine to find the closest match for each of the parameters, and returns them in the vector x.
Finally, I unpack the parameters from the vector x.
Putting it all together
We now have all the components we need, so we just want one final function to tie them together. Here it is:
function master
% Generate some data (you should read in your own data here)
[f b] = generateData(1000,1);
% Find the best-fitting parameters
[A V P bc] = bestfit(f,b);
% Print them to the screen
fprintf('A = %f\n',A)
fprintf('V = %f\n',V)
fprintf('P = %f\n',P)
fprintf('bc = %f\n',bc)
% Plot the data and the fitted function
plot(b,f,'.');
hold on
plot(sort(b),targetfunction(A,V,P,bc,sort(b)),'r','LineWidth',2)
end
If I run this function, I see this being printed to the screen:
>> master
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
A = 1.991727
V = 0.979819
P = 0.695265
bc = 0.067431
And the following plot appears (the data points with the fitted curve overlaid):
That fit looks good enough to me. Let me know if you have any questions about anything I've done here.
I am a bit surprised, as you mention f(a) and your function does not contain an a. But in general, suppose you want to plot f(x) = cos(x)^2.
First determine for which values of x you want to make a plot, for example
xmin = 0;
stepsize = 1/100;
xmax = 6.5;
x = xmin:stepsize:xmax;
y = cos(x).^2;
plot(x,y)
However, note that this approach works just as well in Excel; you just have to do some work to get your x values and the function into the right cells.
I have a series of numbers. I calculated the autoregressive (AR) coefficients for them using the Yule-Walker method.
But now how do I extend the series?
The full working is as follows:
a) the series I use:
143.85 141.95 141.45 142.30 140.60 140.00 138.40 137.10 138.90 139.85 138.75 139.85 141.30 139.45 140.15 140.80 142.50 143.00 142.35 143.00 142.55 140.50 141.25 140.55 141.45 142.05
b) this data is loaded in to data using:
data = load('c:\\input.txt', '-ascii');
c) the calculation of the coefficients:
ar_coeffs = aryule(data,9);
this gives:
ar_coeffs =
1.0000 -0.9687 -0.0033 -0.0103 0.0137 -0.0129 0.0086 0.0029 -0.0149 0.0310
d) Now using this, how do I calculate the next number in the series?
[any other method of doing this (except using aryule()) is also fine... this is what I did, if you have a better idea, please let me know!]
For a real-valued sequence x of length N and a positive order p:
coeff = aryule(x, p)
returns the AR coefficients of order p for the data x (note that coeff(1) is a normalizing factor). In other words, it models each value as a linear combination of the past p values. So to predict the next value, we use the last p values as:
x(N+1) = -sum_{k=1}^{p} coeff(k+1) * x(N+1-k)
or in actual MATLAB code:
p = 9;
data = [...]; % the seq you gave
coeffs = aryule(data, p);
nextValue = -coeffs(2:end) * data(end:-1:end-p+1)';
EDIT: If you have access to the System Identification Toolbox, you can use any of a number of functions to estimate AR/ARMAX models (ar/arx/armax), or even find the order of the AR model using selstruc:
m = ar(data, p, 'yw'); % yw for Yule-Walker method
pred = predict(m, data, 1);
coeffs = m.a;
nextValue = pred(end);
subplot(121), plot(data)
subplot(122), plot( cell2mat(pred) )
Your data has a non-zero mean. Doesn't the Yule-Walker model assume the data is the output of a linear filter excited by a zero-mean white noise process?
If you remove the mean, this example using ARYULE and LPC might be what you're looking for. The procedure boils down to:
a = lpc(data,9); % uses Yule-Walker modeling
pred = filter(-a(2:end),1,data);
disp(pred(end)); % the predicted value at time N+1
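Following up on the zero-mean remark above, a minimal sketch that removes the mean before fitting and restores it in the prediction (order 9 mirrors the code above; it assumes data is a row vector, as loaded earlier):

mu = mean(data);
xc = data - mu;                                  % zero-mean series
a  = lpc(xc, 9);                                 % Yule-Walker AR fit on the centered data
nextValue = -a(2:end) * xc(end:-1:end-8)' + mu;  % one-step prediction, mean restored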