This code runs with the Econometrics Toolbox:
model = arima('Constant',0.5,'AR',{0.9999},'Variance',.4);
rng('default')
Y = simulate(model,50);
figure
plot(Y)
xlim([0,50])
title('Simulated AR(1) Process')
rng('default')
Y = simulate(model,50,'NumPaths',1000);
Y1=Y(:,1);
for ii = 1:50
Mdl = arima(1,0,0);
EstMdl = estimate(Mdl, [Y(:,ii)]);
end
How can I store the p-values from EstMdl for each iteration (i.e., a vector of p-values, one set per iteration)?
Use summarize (requires ≥ R2018a) to get the results of estimate.
Showing the results for one iteration here:
>> ii=1;
>> Mdl = arima(1,0,0);
>> EstMdl = estimate(Mdl, [Y(:,ii)]);
ARIMA(1,0,0) Model (Gaussian Distribution):
Value StandardError TStatistic PValue
_______ _____________ __________ __________
Constant 622.14 427.99 1.4536 0.14605
AR{1} 0.87561 0.085586 10.231 1.4432e-24
Variance 0.37122 0.079507 4.669 3.0263e-06
>> Results = summarize(EstMdl);
>> PValues = Results.Table.PValue
PValues =
0.1460
0.0000
0.0000
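To collect the p-values from every fit, preallocate and read them out of the summarize table inside the loop. A minimal sketch, assuming you want the three p-values (Constant, AR{1}, Variance) for each of the 50 fitted columns of Y:
% Store the p-values from each iteration (sketch).
numSeries = 50;                      % number of columns of Y being fitted
PValues = zeros(numSeries, 3);       % one row per fit: Constant, AR{1}, Variance
for ii = 1:numSeries
    Mdl = arima(1,0,0);
    EstMdl = estimate(Mdl, Y(:,ii), 'Display', 'off');   % suppress per-fit output
    Results = summarize(EstMdl);
    PValues(ii,:) = Results.Table.PValue.';              % p-values for this fit
end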
I'm trying to evaluate a G11 symfun in the code below, but it fails and keeps showing the variable t, which I set to specific values in the code. I even tried the subs command, but that failed too. I define the necessary symbols and variables, but t is not evaluated in just the G11 symfun; in other, similar symfuns such as G12 or K12 the t variable is evaluated.
Here is my code:
% Defining the variables as symbols.
clear all, close all
clc
syms x xi q_1 q_2 q_3 q_4 q_01 q_02 q_03 q_04 EI Mr M m L t Omega S1 S2 S3 ...
     S4 S5 S6 S7 S8 w L_dot U U_dot g L_ddot M11 M22
% Defining the general coordinates and the trial function.
L_ddot=0;
g=9.8;
Mr=M/(M+m);
PHI(x,t)=[sqrt(2).*sin(pi.*(xi)) sqrt(2).*sin(2*pi.*(xi)) ...
    sqrt(2).*sin(3.*pi.*(xi)) sqrt(2).*sin(4.*pi.*(xi))];
q_1(t)= S1*exp(sqrt(-1)*w*t);
q_01(t)= S5*exp(sqrt(-1)*w*t);
q_2(t)= S2*exp(sqrt(-1)*w*t);
q_02(t)= S6*exp(sqrt(-1)*w*t);
q_3(t)= S3*exp(sqrt(-1)*w*t);
q_03(t)= S7*exp(sqrt(-1)*w*t);
q_4(t)= S4*exp(sqrt(-1)*w*t);
q_04(t)= S8*exp(sqrt(-1)*w*t);
Q_v(t) =[q_1;q_2;q_3;q_4];
Q_w(t) =[q_01;q_02;q_03;q_04];
V(x,t) = PHI*Q_v(t);
W(x,t) = PHI*Q_w(t);
% Defining the coefficients of the ODE.
U(x,t)=U;
L(x,t)=1+0.1*t;
L_dot(x,t)=diff(L,'t',1);
L_ddot=0;
M11=int(PHI'*PHI,'xi',[0 1]);
M22=M11;
G11=(2*(L_dot/L))*int((2-xi)*PHI'*diff(PHI,'xi',1),'xi',[0 1]) + ...
    2*Mr*(U/L)*int(PHI'*diff(PHI,'xi',1),'xi',[0 1]);
G12=-2*Omega*int(PHI'*PHI,'xi',[0 1]);
G21=-G12;
G22=G11;
K11=(EI/((M+m)*L^4))*int(PHI'*diff(PHI,'xi',4),'xi',[0 1]) + ...
    (L_dot/L)^2*int((2-xi)^2*PHI'*diff(PHI,'xi',2),'xi',[0 1]) + ...
    ((L_ddot*L-2*L_dot^2)/L^2)*int((1-xi)*PHI'*diff(PHI,'xi',1),'xi',[0 1]) + ...
    2*Mr*(L_dot*U/L^2)*int((2-xi)*PHI'*diff(PHI,'xi',2),'xi',[0 1]) + ...
    Mr*(U/L)^2*int(PHI'*diff(PHI,'xi',2),'xi',[0 1]) - ...
    (g-((L_ddot-2*L_dot^2)/L))*int((1-xi)*PHI'*diff(PHI,'xi',2),'xi',[0 1]) + ...
    (g/L)*int(PHI'*diff(PHI,'xi',1),'xi',[0 1]) - ...
    Omega^2*int(PHI'*PHI,'xi',[0 1]);
K12=-2*Omega*(L_dot/L)*int((2-xi)*PHI'*diff(PHI,'xi',1),'xi',[0 1]) - ...
    2*Mr*((U*Omega)/L)*int(PHI'*diff(PHI,'xi',1),'xi',[0 1]);
K21=-K12;
K22=K11;
% Evaluating the coefficient matrices for the time history 1 to 80 seconds
% in steps of 0.1.
t=0:0.1:80;
m=8;
M=2;
Mr=0.2;
x=1;
U=2; % the flow velocity
Omega=90;
EI=8.9782;
FUNM11 = matlabFunction(M11);
Mmatrix11 = feval(FUNM11);
FUNG11 = matlabFunction(G11);
Gmatrix11 = feval(FUNG11,t,U,Mr,L_dot,L);
FUNG12 = matlabFunction(G12);
Gmatrix12 = feval(FUNG12, t,x,Omega);
FUNK11 = matlabFunction(K11);
Kmatrix11 = feval(FUNK11, t,M,x,Omega,m,U,EI);
FUNK12 = matlabFunction(K12);
Kmatrix12 = feval(FUNK12, t,M,x,U,m,Omega);
Mmatrix22=Mmatrix11;
Gmatrix21=-Gmatrix12;
Gmatrix22=Gmatrix11;
Kmatrix21=-Kmatrix12;
Kmatrix22=Kmatrix11;
% Assembling the coefficient matrices
Q=[Q_v;Q_w];
Mmatrix=[Mmatrix11, zeros(size(Mmatrix11)); ...
         zeros(size(Mmatrix11)), Mmatrix22];
Gmatrix=[Gmatrix11 Gmatrix12; Gmatrix21 Gmatrix22];
Kmatrix=[Kmatrix11 Kmatrix12; Kmatrix21 Kmatrix22];
After assembling my coefficient matrices I try to solve this algebraic equation for w:
eqn=det(-w^2.*Mmatrix+i*w.*Gmatrix+Kmatrix)==0;
where w is the frequency of the system.
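For reference, once Mmatrix, Gmatrix and Kmatrix are plain numeric matrices of equal size, det(-w^2*M + i*w*G + K) = 0 is a quadratic eigenvalue problem and can be solved with polyeig rather than a symbolic determinant. A minimal sketch under that assumption:
% polyeig takes the coefficient matrices in ascending powers of the
% eigenvalue: (K + w*(1i*G) + w^2*(-M)) x = 0
w = polyeig(Kmatrix, 1i*Gmatrix, -Mmatrix);   % vector of frequencies w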
I have the following code:
for k = 1:256
for t = 1:10000
% R matrix
buffer = corrcoef(matrixA(:,k),matrixB(:,t));
correlation_matrix(k,t) = buffer (2,1);
end
end
I calculate the Pearson correlation of the columns of two matrices. This works fine for me and the results are correct. However, the process is very slow. Does anyone have an idea how to accelerate the calculations here?
You can remove the loop entirely by using corr from the Statistics Toolbox:
>> matrixA = randn(100, 256);
>> matrixB = randn(100, 10000);
>> size(corr(matrixA, matrixB))
ans =
256 10000
Just concatenate the matrices, calculate all the correlations in one operation, and then extract the relevant ones.
>> matrixA = rand(100,256);
>> matrixB = rand(100,10000);
>> matrixC = [matrixA,matrixB];
>> c = corrcoef(matrixC);
>> correlation_matrix = c(1:256, 257:10256)
Should be quite a lot faster.
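If neither toolbox route is available, the same 256-by-10000 matrix can be built with basic operations only: centre each column, scale it to unit norm, and take one matrix product. A minimal sketch, assuming matrixA and matrixB have the same number of rows (observations):
% Vectorized Pearson correlation of all column pairs (no toolbox needed).
% Implicit expansion requires R2016b+; use bsxfun on older releases.
A = matrixA - mean(matrixA, 1);   % centre each column
B = matrixB - mean(matrixB, 1);
A = A ./ sqrt(sum(A.^2, 1));      % scale columns to unit norm
B = B ./ sqrt(sum(B.^2, 1));
correlation_matrix = A' * B;      % 256-by-10000 matrix of correlations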
I have recently studied the concepts of CCA and wanted to implement it in MATLAB. There is already a built-in MATLAB command, canoncorr, but I wanted to write my own code. I have studied the topic extensively and found three approaches:
1: Hardoon: This approach uses Lagrange multipliers to reduce the problem to a generalised eigenvalue problem. The code can be found here: cca_hardoon.
For sanity's sake I am also giving the code here; the data has to be centered beforehand.
function [Wx, Wy, r] = cca(X,Y)
% CCA calculate canonical correlations
%
% [Wx Wy r] = cca(X,Y) where Wx and Wy contains the canonical correlation
% vectors as columns and r is a vector with corresponding canonical
% correlations.
%
% Update 31/01/05 added bug handling.
if (nargin ~= 2)
disp('Incorrect number of inputs');
help cca;
Wx = 0; Wy = 0; r = 0;
return;
end
% calculating the covariance matrices
z = [X; Y];
C = cov(z.');
sx = size(X,1);
sy = size(Y,1);
Cxx = C(1:sx, 1:sx) + 10^(-8)*eye(sx);
Cxy = C(1:sx, sx+1:sx+sy);
Cyx = Cxy';
Cyy = C(sx+1:sx+sy,sx+1:sx+sy) + 10^(-8)*eye(sy);
%calculating the Wx cca matrix
Rx = chol(Cxx);
invRx = inv(Rx);
Z = invRx'*Cxy*(Cyy\Cyx)*invRx;
Z = 0.5*(Z' + Z); % making sure that Z is a symmetric matrix
[Wx,r] = eig(Z); % basis in h (X)
r = sqrt(real(r)); % as the original r we get is lamda^2
Wx = invRx * Wx; % actual Wx values
% calculating Wy
Wy = (Cyy\Cyx) * Wx;
% by dividing it by lamda
Wy = Wy./repmat(diag(r)',sy,1);
2: MATLAB's canoncorr. Please note the centering of the data is done within the code itself.
3: CCA by SVD only: This approach does not require the QR decomposition and uses only the SVD. I have referred to this article: cca by svd. Please refer to the excerpts below, which are taken from the referenced article.
I have tried to code this program myself but unsuccessfully.
function [A,B,r,U,V] = cca_by_svd(x,y)
% computing the means
N = size(x,1); mu_x = mean(x,1); mu_y = mean(y,1);
% subtracting the means
x = x - repmat(mu_x,N,1); y = y - repmat(mu_y,N,1);
x = x.'; y = y.';
% computing the covariance matrices
Cxx = (1/N)*x*(x.'); Cyy = (1/N)*y*(y.'); Cxy = (1/N)*x*(y.');
%dimension
m = min(rank(x),rank(y));
%m = min(size(x,1),size(y,1));
% computing the square root inverse of the matrix
[V,D]=eig(Cxx); d = diag(D);
% Making all the eigen values positive
d = (d+abs(d))/2; d2 = 1./sqrt(d); d2(d==0)=0; Cxx_iv=V*diag(d2)*inv(V);
% computing the square root inverse of the matrix
[V,D]=eig(Cyy); d = diag(D);
% Making all the eigen values positive
d = (d+abs(d))/2; d2 = 1./sqrt(d); d2(d==0)=0; Cyy_iv=V*diag(d2)*inv(V);
Omega = Cxx_iv*Cxy*Cyy_iv;
[C,Sigma,D] = svd(Omega);
A = Cxx_iv*C; A = A(:,1:m);
B = Cyy_iv*D.'; B = B(:,1:m);
A = real(A); B = real(B);
U = A.'*x; V = B.'*y;
r = Sigma(1:m,1:m);
I am running this code snippet:
clc;clear all;close all;
load carbig;
X = [Displacement Horsepower Weight Acceleration MPG];
nans = sum(isnan(X),2) > 0;
x = X(~nans,1:3);
y = X(~nans,4:5);
[A1, B1, r1, U1, V1] = canoncorr(x, y);
[A2, B2, r2, U2, V2] = cca_by_svd(x, y);
[A3, B3, r3] = cca(x.',y.',1);
The projection vectors come out as follows:
>> A1
A1 =
0.0025 0.0048
0.0202 0.0409
-0.0000 -0.0027
>> A2
A2 =
0.0025 0.0048
0.0202 0.0410
-0.0000 -0.0027
>> A3
A3 =
-0.0302 -0.0050 -0.0022
0.0385 -0.0420 -0.0176
0.0020 0.0027 -0.0001
>> B1
B1 =
-0.1666 -0.3637
-0.0916 0.1078
>> B2
B2 =
-0.1668 -0.3642
-0.0917 0.1079
>> B3
B3 =
0.0000 + 0.0000i 0.3460 + 0.0000i 0.1336 + 0.0000i
0.0000 + 0.0000i -0.0967 + 0.0000i 0.0989 + 0.0000i
Question: Can someone please tell me where I am going wrong? The three approaches I have referred to all solve the same problem, and ideally their solutions should converge. I admit my code cca_by_svd may be wrong, but Hardoon's code and MATLAB's output should be the same. Please point out where I am going wrong. Edit: I have rechecked and corrected my code. For this dataset, methods 2 and 3 now converge.
There are a few things that cca(X,Y) doesn't do that canoncorr does:
One is normalizing the data. If you add X = normc(X')' (and likewise for Y) to your cca(X,Y) function, the output r will match that of canoncorr. If you look into canoncorr's code, you'll see that it starts with a QR decomposition of X and Y.
Another difference is that eig sorts the eigenvalues in ascending order, so cca(X,Y) should flip the output of eig(Z).
NOTE: Despite correcting these differences, I wasn't able to fully recover Wx and Wy to match the outputs of canoncorr. Ideally, Wx'*Wx should look exactly alike between cca and canoncorr.
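As a rough sketch of the ordering point, inside cca(X,Y) the eigenpairs of Z can be reordered to descending eigenvalue before forming Wx, so the leading canonical directions come first, as canoncorr returns them (names as in the cca function above):
% Replace the plain eig call in cca(X,Y) with a sorted version (sketch).
[Wx, D] = eig(Z);
[dSorted, idx] = sort(diag(D), 'descend');  % eigenvalues, largest first
Wx = Wx(:, idx);                            % reorder eigenvectors to match
r  = sqrt(real(diag(dSorted)));             % canonical correlations, as before
Wx = invRx * Wx;                            % back to the original scaling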
Hi, I have this function to calculate the coefficient list for the Newton polynomial:
function p = polynom(x,y,c)
m = length(x);
p = c(m)*ones(size(y));
for k = m-1:-1:1
p = p.*(y-x(k)) + c(k);
end
I already have another program that finds the divided differences c correctly. For x=[3 1 5 6], y=[1 -3 2 4] I get c=[1.0000 2.0000 -0.3750 0.1750] which is correct.
But when I use the above function it gives as a result:
p =
-3.0000 -53.6000 -0.1000 1.3500
But the correct answer should be:
p =
0.1750 -1.9500 7.5250 -8.7500
What is wrong with my function?
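For reference, the expected values are the expanded monomial coefficients (highest power first) of the Newton form built from x and c, whereas the posted function evaluates that Newton form at the points y. A minimal sketch of the expansion, using conv to multiply out the basis factors and the x and c from the question:
% Expand c(1) + c(2)(t-x(1)) + c(3)(t-x(1))(t-x(2)) + ... into
% standard polynomial coefficients, highest power first.
x = [3 1 5 6]; c = [1 2 -0.375 0.175];
m = length(c);
p = zeros(1, m);                  % expanded coefficients
basis = 1;                        % running product (t-x(1))...(t-x(k-1))
for k = 1:m
    p(end-length(basis)+1:end) = p(end-length(basis)+1:end) + c(k)*basis;
    basis = conv(basis, [1 -x(k)]);   % multiply basis by (t - x(k))
end
p   % 0.1750  -1.9500  7.5250  -8.7500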
This is the code I've used. I used it a long time ago, so I'll post the whole code. I hope this is what you need.
table.m
%test table
X = [1 1.05 1.10 1.15 1.2 1.25];
Y = [0.68269 0.70628 0.72867 0.74986 0.76986 0.78870];
position.m
%M-file position.m with function position(x)
%that for argument x returns
%value 1 if x<X(2),
%value 2 if x>X(n-1)
%else 0
function position=position(x)
table;
n=length(X);
if x<X(2)
position=1;
else
if x>X(n-1)
position=2;
else position=0;
end
end
Newton.m
function Newton=Newton(x)
table;
%position(x);
n=length(X);
A=zeros(n,n+1);
% table of final differences - upper triangle
for i=1:n
A(i,1)=X(i);
A(i,2)=Y(i);
end
for i=3:n+1
for j=1:(n-i+2)
A(j,i)=A(j+1,i-1)-A(j,i-1);
end
end
%showing matrix of final differences, A
A
h=X(2)-X(1);
q1=(x-X(1))/h;
q2=(x-X(n))/h;
s1=q1;
s2=q2;
%making I Newton polynomial
if (position(x)==1)
p1=A(1,2);
f=1;
for i=1:n-1
f=f*i;
p1=p1+(A(1,i+2)*s1)/f;
s1=s1*(q1-i);
end
Newton=p1;
else
%making II Newton polynomial
if (position(x)==2)
p2=A(n,2);
f1=1;
for i=1:n-1
f1=f1*i;
p2=p2+(A(n-i,i+2)*s2)/f1;
s2=s2*(q2+i);
end
Newton=p2;
%else, printing error
else Newton='There are more suitable methods than Newton(I,II)';
end
end
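A hypothetical call, assuming table.m, position.m and Newton.m are on the path:
% x = 1.02 < X(2) = 1.05, so the first (forward) Newton polynomial is used;
% a value such as 1.23 > X(n-1) = 1.2 would use the second (backward) form.
y1 = Newton(1.02);
y2 = Newton(1.23);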
I was wondering how I could achieve this in MATLAB:
T(s) = 1/(s+1) -> T(jw) = 1/(jw+1)
Setting s = jw doesn't help.
For better understanding:
R(s) = some transfer function
L(s) = some transfer function
T(s) = R(s) * L(s)
value_at_jw = T(jw)
You basically want to evaluate your transfer function at a certain frequency.
The result would be a complex number.
You can't just substitute s with a frequency; you would need to build a polynomial or an anonymous function from your numerator and denominator. That is one way, and an interesting one. Another very simple way is to use the outputs of the bode function:
Imagine a transfer function G and a frequency value jw you want to insert for 's':
G = tf([2 1 ], [1 1 1])
jw = 1i*2000 % or easier without the complex "i"
G =
2 s + 1
-----------
s^2 + s + 1
Now you want to know magnitude and phase for the frequency s = jw
[mag,phase] = bode( G, imag(jw) ) % or just w
The rest is math: you now have the magnitude and phase (the angle, which bode returns in degrees) of your complex result. A complex number of the form z = a + bi can then be created as follows:
z = mag*( cosd(phase) + 1i*sind(phase) )   % cosd/sind because phase is in degrees
returns:
z = 2.5000e-07 - 1.0000e-03i
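If the Control System Toolbox is available, the same value can also be obtained in one step with evalfr, which evaluates a dynamic system model at a single complex frequency. A minimal sketch with the same G and jw as above:
% Evaluate G(s) directly at s = jw; the result is the complex response.
G  = tf([2 1], [1 1 1]);
jw = 1i*2000;
z  = evalfr(G, jw)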
If you have MATLAB's Control System Toolbox installed ($$$), you can do a sort of symbolic computation by defining transfer functions, either by giving the polynomial coefficients with tf or as a system factored into poles and zeros using zpk:
>> R = tf([1], [1, 1])
Transfer function:
1
-----
s + 1
>> L = zpk([1,2],[3,4,5], 6)
Zero/pole/gain:
6 (s-1) (s-2)
-----------------
(s-3) (s-4) (s-5)
You can convert between these two formats, and they can be used for simple math:
>> T = R * L
Zero/pole/gain:
6 (s-1) (s-2)
-----------------------
(s+1) (s-3) (s-4) (s-5)
A complex frequency response, finally, can be obtained using freqresp:
>> f = logspace(-2, 2, 200);
>> frequency_response = squeeze(freqresp(T, f, 'Hz'));
>> subplot(211)
>> loglog(f, abs(frequency_response))
>> subplot(212)
>> semilogx(f, angle(frequency_response))
Here are two ways.
wmax = 20;
dw = 0.1;
w = -wmax:dw:wmax;
T = 1./(1 + j*w);
subplot(2,1,1)
hold on
grid on
p = plot(w, abs(T))
title('Magnitude')
subplot(2,1,2)
hold on
grid on
p = plot(w, angle(T))
title('Phase')
% or
H = freqs(1, [1 1], w);
figure
subplot(2,1,1)
hold on
grid on
p = plot(w, abs(H))
title('Magnitude')
subplot(2,1,2)
hold on
grid on
p = plot(w, angle(H))
title('Phase')
The result is, of course, identical.
Hope that helps.