I need to generate two chaotic sequences based on Chen's hyperchaotic system. They have to be generated from the following four formulas:
X=a(y-x);
Y=-xz+dx+cy-q;
Z=xy-bz;
Q=x+k;
where a,b,c,d,x,y,z,q are all initialised as follows.
I need only X and Y
where
X=[x1,x2,...x4n]
Y=[y1,y2,...y4n]
a=36 ;
b=3 ;
c=28 ;
d=16 ;
k=0.2 ;
x=0.3 ;
y=-0.4 ;
z=1.2 ;
q=1 ;
n=256 ;
I tried the following code, but I'm not able to get it to work properly.
clc
clear all
close all
w=imread('C:\Users\Desktop\a.png');
[m n]=size(w)
a=36;
b=3;
c=28;
d=16;
k=0.2;
x(1)=0.3;
y(1)=-0.4;
z(1)=1.2;
q(1)=1;
for i=1:1:4(n)
x(i+1)=(a*(y(i)-x(i)));
y(i+1)=-(x(i)*z(i))+(d*x(i))+(c*y(i))-q(i);
z(i+1)=(x(i)*y(i))-(b*z(i));
q(i+1)=x(i)+k;
end
disp(x);
disp(y);
Please help. Thanks in advance.
Your code isn't even close to doing what you want it to. Fortunately, I'm vaguely interested in the problem and I have a bunch of spare time, so I thought I'd try and implement it step by step to show you what to do. I've left a few gaps for you to fill in.
It sounds like you want to integrate the hyperchaotic Chen system, which has various definitions online, but you seem to be focusing on the four equations given above.
So let's write a MATLAB function that defines that system:
function vdot = chen(t, v, a, b, c, d, k)
% Here you unpack the input vector v -
x = v(1); y = v(2); z = v(3); q = v(4);
% Here you need to implement your equations as xdot, ydot etc.
% xdot = ...
% ydot = ...
% I'll leave that for you to do yourself.
% Then you pack them up into an output vector -
vdot = [xdot; ydot; zdot; qdot];
end
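For readers following along later, here is one way those gaps might be filled in, taking the derivative expressions directly from the equations in the question (a sketch, not the only possible reading of them):
function vdot = chen(t, v, a, b, c, d, k)
% Unpack the state vector
x = v(1); y = v(2); z = v(3); q = v(4);
% Derivatives, taken from the equations in the question
xdot = a*(y - x);   % if the first equation is meant as a*y - x rather than a*(y - x), adjust accordingly
ydot = -x*z + d*x + c*y - q;
zdot = x*y - b*z;
qdot = x + k;
% Pack them into a column vector for the ODE solver
vdot = [xdot; ydot; zdot; qdot];
end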
Save that in a file called chen.m. Now you need to define the values of the parameters a, b, c, d and k, as well as your initial condition.
% You need to define the values of a, b, c, d, k here.
% a = ...
% b = ...
% You also need to define the vector v0, which is a 4x1 vector of your
% initial conditions
% v0 = ...
%
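Using the values given in the question, that setup might look like this (a sketch; the ordering [x; y; z; q] matches how chen.m unpacks v):
a = 36; b = 3; c = 28; d = 16; k = 0.2;
v0 = [0.3; -0.4; 1.2; 1];   % [x0; y0; z0; q0] from the question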
This next line creates a function that can be used by Matlab's integration routines. The first parameter t is the current time (which you don't actually use) and the second parameter is a 4x1 vector containing x, y, z, q.
>> fun = @(t,v) chen(t,v,a,b,c,d,k)
Now you can use ode45 (which does numerical integration using an adaptive Runge-Kutta (4,5) scheme) to integrate it and plot some paths. The first argument to ode45 is the function you want to be integrated, the second argument is the timespan to be integrated over (I chose to integrate from 0 to 100, maybe you want to do something different) and the third argument is your initial condition (which hopefully you already defined).
>> [t, v] = ode45(fun, [0 100], v0);
The outputs are t, a vector of times, and v, which will be a matrix whose columns are the different components (x, y, z, q) and whose rows are the values of the components at each point in time. So you can pull out a column for each of the x and y components, and plot them
>> x = v(:,1);
>> y = v(:,2);
>> plot(x,y)
This gives a reasonably chaotic-looking plot.
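Since the question asks for exactly 4n samples of X and Y (with n = 256), one option is to pass ode45 a vector of output times rather than just a start and end time. This is a sketch; the time span 0 to 100 is simply the one used above:
n = 256;
tspan = linspace(0, 100, 4*n);   % 4n equally spaced output times
[t, v] = ode45(fun, tspan, v0);
X = v(:,1).';                    % X = [x1, x2, ..., x4n]
Y = v(:,2).';                    % Y = [y1, y2, ..., y4n]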
@Abirami Anbalagan and @Chris Taylor, I have also studied hyperchaotic systems to some extent. In my experience, for the system to be chaotic, the values should be something like
a = 35; b = 3; c = 12; d = 7;
v(n) = [-422 -274 0 -2.4]'
where v(n) is a 4x1 matrix.
I have data like this:
y = [0.001
0.0042222222
0.0074444444
0.0106666667
0.0138888889
0.0171111111
0.0203333333
0.0235555556
0.0267777778
0.03]
and
x = [3.52E-06
9.72E-05
0.0002822918
0.0004929136
0.0006759156
0.0008199029
0.0009092797
0.0009458332
0.0009749509
0.0009892005]
and I want y to be a function of x with y = a(0.01 − b*n^(−cx)).
What is the best and easiest computational approach to find the best combination of the coefficients a, b and c that fit to the data?
Can I use Octave?
Your function
y = a(0.01 − b*n^(−cx))
is in quite a specific form with 4 unknowns. In order to estimate your parameters from your list of observations, I would recommend that you simplify it to
y = β1 + β2·β3^x
This becomes our objective function and we can use ordinary least squares to solve for a good set of betas.
In default MATLAB you could use fminsearch to find these parameters (let's call them our parameter vector β), and then you can use simple algebra to get back to your a, b, c and n (assuming you know either b or n upfront). In Octave I'm sure you can find an equivalent function; I would start by looking here: http://octave.sourceforge.net/optim/index.html.
We're going to call fminsearch, but we need to somehow pass in your observations (i.e. x and y) and we will do that using anonymous functions, so like example 2 from the docs:
beta = fminsearch(@(beta) objfun(x, y, beta), beta0) %// beta0 are your initial guesses for beta, e.g. [0,0,0] or [1,1,1]. You need to pick these to be somewhat close to the correct values.
And we define our objective function like this:
function sse = objfun(x, y, beta)
f = beta(1) + beta(2).*(beta(3).^x);
sse = sum((y-f).^2); %// this is the sum of squared errors, often called SSE, and it is what we are trying to minimise!
end
So putting it all together:
y= [0.001; 0.0042222222; 0.0074444444; 0.0106666667; 0.0138888889; 0.0171111111; 0.0203333333; 0.0235555556; 0.0267777778; 0.03];
x= [3.52E-06; 9.72E-05; 0.0002822918; 0.0004929136; 0.0006759156; 0.0008199029; 0.0009092797; 0.0009458332; 0.0009749509; 0.0009892005];
beta0 = [0,0,0];
beta = fminsearch(@(beta) objfun(x, y, beta), beta0)
Now it's your job to solve for a, b and c in terms of beta(1), beta(2) and beta(3) which you can do on paper.
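For illustration, here is one way that paper algebra can go, assuming n is known up front: expanding y = a(0.01 − b*n^(−cx)) gives 0.01*a − a*b*(n^(−c))^x, and matching that against β1 + β2·β3^x yields
a = 100*beta(1);            % from 0.01*a = beta(1)
b = -beta(2)/a;             % from -a*b = beta(2)
c = -log(beta(3))/log(n);   % from n^(-c) = beta(3), with n assumed known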
I have the equation F(f) = a*f^3 + b*f + c. I have known vectors of the data p and of the independent variable f. I need to find the values of a, b, c.
What I tried:
function [ val ] = myfunc(par_fit,f,p)
% This gives me a,b,c
% p= af^3 +bf +c
val = norm(p - (par_fit(1)*(f.^3))+ (par_fit(2)*f) + (par_fit(3)));
end
my_par = fminsearch(@(par_fit) myfunc(par_fit,f,p),rand(1,3));
This gives me my_par = [1.9808 -2.2170 -24.8039], or a=1.9808, b=-2.2170, and c=-24.8039, but I require that b should be larger than 5, and c should be larger than zero.
I think your problem might be because your objective function is incorrect:
val = norm(p - (par_fit(1)*(f.^3))+ (par_fit(2)*f) + (par_fit(3)));
should probably be:
val = norm(p-(par_fit(1)*f.^3+par_fit(2)*f+par_fit(3)));
But you can constrain the values of variables when you do the minimisation by using fmincon rather than fminsearch. By setting the lb (lower bound) input to [-Inf 5 0], the first coefficient is allowed to be any real number, the second must be at least 5, and the third must be greater than or equal to zero. For example (I've also shown how to solve the problem, without the constraints, using a matrix method):
% Sample data
f=(0:.1:1).';
p=2*f.^3+3*f+1+randn(size(f));
% Create Vandermonde matrix
M=[f.^3 f f.^0];
C=M\p; % Solve the matrix problem in a least-squares sense (more data points than coefficients)
my_par=fmincon(@(c) norm(p-(c(1)*f.^3+c(2)*f+c(3))),rand(1,3),[],[],[],[],[-Inf 5 0],[])
C.'
plot(f,p,'o',f,M*C,f,my_par(1)*f.^3+my_par(2)*f+my_par(3))
I have a state space system with matrices A,B,C and D.
I can either create a state-space system from it, sys1 = ss(A,B,C,D), or compute the transfer function matrix, sys2 = C*inv(z*I - A)*B + D.
However, when I draw the Bode plots of both systems, they are different, while they should be the same.
What is going wrong here? Does anyone have a clue? I know, by the way, that the Bode plot generated by sys1 is correct.
The system can be downloaded here: https://dl.dropboxusercontent.com/u/20782274/system.mat
clear all;
close all;
clc;
Ts = 0.01;
z = tf('z',Ts);
% Discrete system
A = [0 1 0; 0 0 1; 0.41 -1.21 1.8];
B = [0; 0; 0.01];
C = [7 -73 170];
D = 1;
% Set as state space
sys1 = ss(A,B,C,D,Ts);
% Compute transfer function
sys2 = C*inv(z*eye(3) - A)*B + D;
% Compute the actual transfer function
[num,den] = ss2tf(A,B,C,D);
sys3 = tf(num,den,Ts);
% Show bode
bode(sys1,'b',sys2,'r--',sys3,'g--');
Edit: I made a small mistake, the transfer function matrix is sys2 = C*inv(z*I - A)*B + D, instead of sys2 = C*inv(z*I - A)*B - D, which I wrote down before. The problem still holds.
Edit 2: I have noticed that when I compute the denominator, it is correct.
syms z;
collect(det(z*eye(3) - A),z)
Your assumption that sys2 = C*inv(z*I- A)*B + D is incorrect. The correct equivalent to your state-space system (A,B,C,D) is sys2 = C*inv(s*I- A)*B + D. If you want to express it in terms of z, you'll need to invert the relationship z = exp(s*T). sys1 is the correct representation of your state-space system. What I would suggest for sys2 is to do as follows:
sys1 = ss(mjlsCE.A,mjlsCE.B,mjlsCE.C,mjlsCE.D,Ts);
sys1_c = d2c(sys1);
s = tf('s');
sys2_c = sys1_c.C*inv(s*eye(length(sys1_c.A)) - sys1_c.A)*sys1_c.B + sys1_c.D;
sys2_d = c2d(sys2_c,Ts);
That should give you the correct result.
Due to inaccuracy of the inverse function, extra unobservable poles and zeros are added to the system. For this reason you need to compute the minimal realization of your transfer function matrix.
Meaning
% Compute transfer function
sys2 = minreal(C*inv(z*eye(3) - A)*B + D);
What you are noticing is actually a numerical instability regarding pole-zero pair cancellations.
If you run the following code:
A = [0, 1, 0; 0, 0, 1; 0.41, -1.21, 1.8] ;
B = [0; 0; 0.01] ;
C = [7, -73, 170] ;
D = 1 ;
sys_ss = ss(A, B, C, D) ;
sys_tf_simp = tf(sys_ss) ;
s = tf('s') ;
sys_tf_full = tf(C*inv(s*eye(3) - A)*B + D) ;
zero(sys_tf_simp)
zero(sys_tf_full)
pole(sys_tf_simp)
pole(sys_tf_full)
you will see that the transfer function formulated by matrices directly has a lot more poles and zeros than the one formulated by MATLAB's tf function. You will also notice that every single pair of these "extra" poles and zeros are equal, meaning that they cancel with each other if you were to simplify the rational expression. MATLAB's tf presents the simplified form, with equal pole-zero pairs cancelled out. This is algebraically equivalent to the unsimplified form, but not numerically.
When you call bode on the unsimplified transfer function, MATLAB begins its numerical plotting routine with the pole-zero pairs not cancelled algebraically. If the computer were perfect, the result would be the same as in the simplified case. However, numerical error when evaluating the numerator and denominators effectively leaves some of the pole-zero pairs "uncancelled", and as many of these poles are in the far right side of the s plane, they drastically influence the output behavior.
Check out this link for info on this same problem but from the perspective of design: http://ctms.engin.umich.edu/CTMS/index.php?aux=Extras_PZ
In your original code, you can think of the output drawn in green as what the naive designer wanted to see when he cancelled all his unstable poles with zeros, but the output drawn in red is what he actually got because in practice, finite-precision and real-world tolerances prevent the poles and zeros from cancelling perfectly.
Why is there an unobservable/uncontrollable pole? I think this issue arises only because the inverse of a transfer function matrix is inaccurate in MATLAB.
Note:
A is 3x3 and the minimal realization also has order 3.
What you did is invert a transfer function matrix, not a symbolic or numeric matrix.
% Discrete system
Ts = 0.01;
A = [0 1 0; 0 0 1; 0.41 -1.21 1.8];
B = [0; 0; 0.01];
C = [7 -73 170];
D = 1;
z = tf('z', Ts);     % z is a discrete tf
A1 = z*eye(3) - A;   % a tf matrix with a direct feedthrough matrix A
% invert it, multiply with C and B from left and right, and add D
G = D + C*inv(A1)*B
G is now a scalar (SISO) transfer function.
Without "minreal", G has order 9 (funny, I don't know how Matlab computes it, perhaps the "Adj(.)/det(.)" method). Matlab cannot cancel the common factors in the numerator and the denominator, because z is of class 'tf' rather than a symbolic variable.
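As a quick check of that claim, a minimal sketch (assuming the G built in the snippet above):
Gm = minreal(G);   % cancel matching pole-zero pairs (within a tolerance)
order(G)           % reported above as 9
order(Gm)          % the minimal realization should have order 3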
Do you agree, or do I have a misunderstanding?
We have an equation similar to the Fredholm integral equation of second kind.
To solve this equation we have been given an iterative solution that is guaranteed to converge for our specific equation. Now our only problem is implementing this iterative procedure in MATLAB.
For now, the problematic part of our code looks like this:
function delta = delta(x,a,P,H,E,c,c0,w)
delt = @(x)delta_a(x,a,P,H,E,c0,w);
for i=1:500
delt = @(x)delt(x) - 1/E.*integral(@(xi)((c(1)-c(2)*delt(xi))*ms(xi,x,a,P,H,w)),0,a-0.001);
end
delta=delt;
end
delta_a is a function of x and represents the initial value of the iteration. ms is a function of x and xi.
As you might see, we want delt to depend on both x (before the integral) and xi (inside the integral) in the iteration. Unfortunately this way of writing the code (with the function handle) does not give us a numerical value, as we wish. Nor can we write delt as two different functions, one of x and one of xi, since xi is not defined (until integral defines it). So, how can we make sure that delt depends on xi inside the integral, and still get a numerical value out of the iteration?
Do any of you have any suggestions to how we might solve this?
Using numerical integration
Explanation of the input parameters: x is a vector of numerical values, all the rest are constants. A problem with my code is that the input parameter x is not being used (I guess this means that x is being treated as a symbol).
It looks like you can do a nesting of anonymous functions in MATLAB:
>> f = @(x) 2*x
f =
@(x)2*x
>> ff = @(x) f(f(x))
ff =
@(x)f(f(x))
>> ff(2)
ans =
8
>> f = ff;
>> f(2)
ans =
8
Also, it is possible to rebind the handles to the functions.
Thus, you can set up your iteration like
delta_old = @(x) delta_a(x);
for i=1:500
delta_new = @(x) delta_old(x) - integral(@(xi) delta_old(xi), 0, a);   % fill in your kernel and integration limits
delta_old = delta_new;
end
plus the inclusion of your parameters...
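For concreteness, a sketch of that iteration using the names from the question (delta_a, ms, c, c0, E, and so on, all assumed to be defined as in the original code); the key point is that each new handle captures the previous iterate in a separate variable rather than referring to itself. Be aware that evaluating hundreds of levels of nested handles is very expensive, so in practice you may want far fewer iterations or a discretised approach:
delt = @(x) delta_a(x, a, P, H, E, c0, w);
for i = 1:500
    prev = delt;   % freeze the current iterate before redefining delt
    delt = @(x) prev(x) - 1/E .* integral(@(xi) (c(1) - c(2).*prev(xi)) .* ms(xi, x, a, P, H, w), ...
                                           0, a - 0.001, 'ArrayValued', true);
end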
You may want to consider to solve a discretized version of your problem.
Let K be the matrix which discretizes your Fredholm kernel k(t,s), e.g.
K(i,j) = int_a^b k(x_i, s) l_j(s) ds
where l_j(s) is, for instance, the j-th Lagrange interpolant associated with the interpolation nodes (x_i) = x_1, x_2, ..., x_n.
Then, solving your Picard iterations is as simple as doing
phi_n+1 = f + K*phi_n
i.e.
for i = 1:N
phi = f + K*phi
end
where phi_n and f are the nodal values of phi and f on the (x_i).
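For illustration, here is a minimal sketch of that idea using simple trapezoidal quadrature weights instead of Lagrange interpolants, with a made-up kernel k and right-hand side f (these are placeholders, not your actual problem data; the kernel evaluation uses implicit expansion, R2016b+):
N  = 200;                            % number of nodes
xs = linspace(0, 1, N).';            % quadrature nodes on [a, b] = [0, 1]
w  = (xs(2) - xs(1)) * ones(N, 1);   % trapezoidal weights
w([1 end]) = w([1 end]) / 2;
k  = @(x, s) exp(-(x - s).^2);       % placeholder kernel k(x, s)
f  = sin(pi * xs);                   % placeholder right-hand side at the nodes
K  = k(xs, xs.') .* w.';             % K(i,j) ~ w_j * k(x_i, x_j)
phi = f;                             % initial guess
for it = 1:100                       % Picard iteration: phi <- f + K*phi
    phi = f + K * phi;
end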
How do I solve a system of ordinary differential equations (an initial value problem) with parameters that depend on time, i.e. on the independent variable?
Say the equations I have are:
Dy(1)/dt=a(t)*y(1)+b(t)*y(2);
Dy(2)/dt=-a(t)*y(3)+b(t)*y(1);
Dy(3)/dt=a(t)*y(2);
where a(t) is a vector and b(t) = c*a(t); the values of a and b change with time, not monotonically, at each time step.
I tried to solve this using this post, but when I applied the same principle I got the error message:
"Error using griddedInterpolant The point coordinates are not
sequenced in strict monotonic order."
Can someone please help me out?
Please read until the end to see whether the first part or second part of the answer is relevant to you:
Part 1:
First create an .m file with a function that describes your calculation and functions that will give a and b. For example: create a file called fun_name.m that will contain the following code:
function Dy = fun_name(t,y)
Dy=[ a(t)*y(1)+b(t)*y(2); ...
-a(t)*y(3)+b(t)*y(1); ...
a(t)*y(2)] ;
end
function fa=a(t);
fa=cos(t); % or place whatever you want to place for a(t)..
end
function fb=b(t);
fb=sin(t); % or place whatever you want to place for b(t)..
end
Then use a second file with the following code:
t_values=linspace(0,10,101); % the time vector you want to use, or use tspan type vector, [0 10]
initial_cond=[1 ; 0 ; 0];
[tv,Yv]=ode45('fun_name',t_values,initial_cond);
plot(tv,Yv(:,1),'+',tv,Yv(:,2),'x',tv,Yv(:,3),'o');
legend('y1','y2','y3');
Of course, for the fun_name.m case I wrote, you need not use subfunctions for a(t) and b(t); you can just use the explicit functional form in Dy if that is possible (like cos(t), etc.).
Part 2: If a(t), b(t) are just vectors of numbers you happen to have that cannot be expressed as functions of t (as in Part 1), then you'll also need a time vector at which each of them is sampled. This can of course be the same time vector you use for the ODE, but it need not be, as long as interpolation will work. I'll treat the general case, where they have different time spans or resolutions. Then you can do something like the following; create the fun_name.m file:
function Dy = fun_name(t, y, at, a, bt, b)
a = interp1(at, a, t); % Interpolate the data set (at, a) at time t
b = interp1(bt, b, t); % Interpolate the data set (bt, b) at time t
Dy=[ a*y(1)+b*y(2); ...
-a*y(3)+b*y(1); ...
a*y(2)] ;
end
In order to use it, see the following script:
% generate bogus `a` and `b` vectors with different time vectors `at` and `bt`
at= linspace(-1, 11, 74); % Generate t for a in a generic case where their time span and sampling can be different
bt= linspace(-3, 33, 122); % Generate t for b
a=rand(numel(at),1);
b=rand(numel(bt),1);
% or use those you have, but you also need to pass their time info...
t_values=linspace(0,10,101); % the time vector you want to use
initial_cond=[1 ; 0 ; 0];
[tv,Yv]= ode45(@(t,y) fun_name(t, y, at, a, bt, b), t_values, initial_cond);
plot(tv,Yv(:,1),'+',tv,Yv(:,2),'x',tv,Yv(:,3),'o');
legend('y1','y2','y3');