MATLAB Simulink model of a nonlinear model

I'm trying to create a MATLAB Simulink model of the following equation:
I am very new to Simulink and need some help getting started.

OK, this is very easy to do.
Rearrange the equation so that the result is the highest derivative, in your case d^3y/dt^3.
There you have it, nothing more to do.
How to carry on from here, you may ask?
You have x, and you can differentiate it or apply any operation you want to it. The only doubt that may come up is: where on earth should I get y from?
Easy! You have the equation: integrate the result once and use that value for the 4*(d^2y/dt^2)^2 term, integrate it again and use it for the last term, and integrate it once more and use it to multiply x. That's the advantage of Simulink: you can close a loop and use the "result" in the equation that calculates the "result" (this is not 100% true, since each integration uses the value from one step earlier, but it works).
This is the power of Simulink. Still, I strongly recommend that you read a bit about it, so you understand why to use Simulink, but I think playing with it is necessary to learn, so: go!

In general when setting up equations in Simulink you should set up a number of integrator blocks to get all your states. When that is done you can sum the different factors together.
Unfortunately I cannot post the model I made for the equation because of my low reputation points (new here). The chain of integrators looks like this:

dddy          ddy           dy            y
------> 1/s ------> 1/s ------> 1/s ------>
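To make the integrator-chain idea concrete, here is a rough plain-MATLAB sketch of the same approach. The original equation isn't reproduced above, so the ODE below is a made-up third-order equation of a similar shape (an x input, a 4*(d^2y/dt^2)^2 term and a y term); treat it as an illustration of the pattern, not the actual model.

% Hypothetical equation for illustration: y''' = x(t) - 4*(y'')^2 - y' - y
% One state per 1/s block: s(1) = y, s(2) = dy/dt, s(3) = d2y/dt2
x = @(t) sin(t);                                    % example input signal
odefun = @(t, s) [ s(2);                            % d/dt of y      is dy/dt
                   s(3);                            % d/dt of dy/dt  is d2y/dt2
                   x(t) - 4*s(3)^2 - s(2) - s(1) ]; % d/dt of d2y/dt2 is the equation itself
[t, s] = ode45(odefun, [0 10], [0 0 0]);            % integrate from zero initial conditions
plot(t, s(:,1)), xlabel('t'), ylabel('y')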

Related

How to compute derivative of a neural network model at a particular point using MATLAB?

When using neural networks to solve differential equations in MATLAB, I can find the derivative of the model using the command 'dlgradient', i.e.
gradientsU = dlgradient(sum(U,'all'),{dlX},'EnableHigherDerivatives',true)
where U is the model found using fullyconnect and dlX is the input.
Now, how can I calculate the derivative of the model U at a particular point?
To be specific, I want to add the derivative of the model at a particular point, say U_0 = U'(5), to the loss function. So how can I compute that?
I have followed the documentation given in MATLAB, given by,
"To evaluate Rosenbrock's function and its gradient at the point [–1,2], create a dlarray of the point and then call dlfeval on the function handle #rosenbrock.
x0 = dlarray([-1,2]);
[fval,gradval] = dlfeval(@rosenbrock,x0);
But I have already called dlfeval to evaluate the model, so I'm not able to call dlfeval again. And when I try to compute it directly using the command dlfeval(@U,U_0), the output is always zero.
It will be really helpful if some insight could be provided. Thanks in advance.
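A common pattern (sketched below, since dlgradient only works inside a function evaluated by dlfeval) is to wrap the forward pass and the dlgradient call in one function and pass the query point in through dlfeval. The toy network, parameter names and sizes below are made-up placeholders, not the asker's actual model built with fullyconnect.

% Toy stand-in for the model U (replace with your own network)
parameters.W1 = dlarray(randn(10,1));
parameters.b1 = dlarray(zeros(10,1));
parameters.W2 = dlarray(randn(1,10));
parameters.b2 = dlarray(0);

x0 = 5;                                         % the particular point, as in U'(5)
[U0, dU0] = dlfeval(@derivAt, parameters, x0)   % dlgradient must run under dlfeval

function [U0, dU0] = derivAt(parameters, x0)
    dlX = dlarray(x0, 'CB');                    % query point as a traced dlarray
    H   = tanh(fullyconnect(dlX, parameters.W1, parameters.b1));
    U0  = fullyconnect(H, parameters.W2, parameters.b2);
    % Derivative of the model output with respect to its input, at x0;
    % EnableHigherDerivatives lets it be reused inside a loss function.
    dU0 = dlgradient(sum(U0,'all'), dlX, 'EnableHigherDerivatives', true);
end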

about backpropagation and sigmoid function

I have been reading this ebook about ANNs: https://www4.rgu.ac.uk/files/chapter3%20-%20bp.pdf
and have a doubt about the effect of the sigmoid function when calculating ErrorB. The text says that if I have a threshold neuron I can use:
Target-Output
but because I have a sigmoid function involved I should add:
Output(1-Output)
and end up with:
ErrorB=OutputB(1-OutputB)(TargetB-OutputB)
I mean, why should I add the O(1-O) part? I have tried different values, but I really don't get the intuition for why it should be that way.
Any help?
Thanks
As Kelu stated, that part of the equation is based on the derivative of your transfer function (in this case the sigmoid). To understand why you need derivatives, you need to understand how the delta rule works (*):
Your overall goal is to minimize the error in the network's output using gradient descent. Gradient descent itself tries to find a minimum in the error function (E) by taking steps proportional to the negative of the gradient. A gradient is simply the derivative, and the reason you're working with derivatives mathematically is that gradients point in the direction of the greatest rate of increase of the (error) function. Conclusion: since you want to minimize the error, you go the opposite way of the gradient.
This is the intuitive reason for using gradients. If you want the mathematical derivation, you should check this basic wiki article (additional comment as it's not mentioned anywhere: the g'(x) in the article is the first derivative of g(x))
Other transfer functions can be used, e.g. linear (in this case there is no g'(x) term as the derivative is simply a constant) or hyperbolic tangent in which case the derivative is something different again.
(*) The equation is derived from the following equation, where you start by minimizing the error of the output:
It is like that because Output(1-Output) is the derivative of the sigmoid function (simplified). In general, this part is based on derivatives; you can try different functions (other than the sigmoid), but then you have to use their derivatives too for the learning to work properly.
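As a quick numerical illustration (the numbers below are made up, and the variable names just mirror the formulas above): the Output(1-Output) factor is exactly the derivative of the sigmoid evaluated at the unit's output.

sigmoid = @(x) 1 ./ (1 + exp(-x));    % d/dx sigmoid(x) = sigmoid(x)*(1 - sigmoid(x))

input   = [0.35; 0.9];                % example inputs (made up)
w       = [0.3; 0.9];                 % example weights (made up)
TargetB = 0.5;
eta     = 1.0;                        % learning rate

net     = w' * input;                                        % weighted sum into the output unit
OutputB = sigmoid(net);
ErrorB  = OutputB * (1 - OutputB) * (TargetB - OutputB);     % delta rule with the sigmoid derivative
w       = w + eta * ErrorB * input;                          % gradient-descent weight update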
If you want you can take a look at my implementation (it's far from perfect, but maybe you will get some idea from it ;)), it's a simple project I made on my university - https://github.com/kelostrada/neuron-network

signal generation model in Simulink from matlab

How do I generate the following signal in Simulink:
t=(0:1000)/1000*10*pi;
I want to build the model of the following matlab code:
t=(0:1000)/1000*10*pi;
x = (t).*sin(t);
y = (t).*cos(t);
z = t;
This is fairly basic stuff. Have you gone through any Simulink tutorial, introduction videos/webinars or even the getting started guide of the documentation?
Here are a few suggestions to help you answer your question:
Set the stop time of your model to 1000s and use a fixed-step solver with a step time of 1s.
Use a Clock block with a decimation of 1. That's your 0:1000 vector.
Feed the output of your Clock block to a Gain block, with the gain set to 10*pi/1000 (that is, pi/100). That's your t vector.
Feed your t signal to two Trigonometric Function blocks, one set to sin and one set to cos. That will generate two signals, sin(t) and cos(t).
Now multiply your t signal with your sin(t) signal using a Product block, to generate your x signal (t*sin(t)).
Do the same thing with t and cos(t) to generate your y signal. z is already done since it's equal to t.
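If you prefer to script it, here is a rough sketch that builds the x branch of that block diagram programmatically. The library paths and parameter names are written from memory and may need adjusting for your Simulink release; the y branch is analogous with cos, and z is just the Gain output.

mdl = 'spiral_signal';
new_system(mdl); open_system(mdl);

add_block('simulink/Sources/Clock', [mdl '/Clock']);
add_block('simulink/Math Operations/Gain', [mdl '/Gain'], 'Gain', '10*pi/1000');
add_block('simulink/Math Operations/Trigonometric Function', [mdl '/Sin'], 'Operator', 'sin');
add_block('simulink/Math Operations/Product', [mdl '/Product']);
add_block('simulink/Sinks/To Workspace', [mdl '/x_out'], 'VariableName', 'x');

add_line(mdl, 'Clock/1', 'Gain/1');       % clock -> gain gives t
add_line(mdl, 'Gain/1', 'Sin/1');
add_line(mdl, 'Gain/1', 'Product/1');
add_line(mdl, 'Sin/1', 'Product/2');      % t * sin(t)
add_line(mdl, 'Product/1', 'x_out/1');

set_param(mdl, 'StopTime', '1000', 'SolverType', 'Fixed-step', 'FixedStep', '1');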
EDIT following comments
The answer to your comment is really basic Simulink stuff. You should learn how to use Simulink before trying to do advanced stuff like VR in Simulink. It's a bit like trying to run before you can walk.
Here are a few resources that may be useful:
Simulink Videos and Examples
Simulink Webinars
Simulink tutorial
Getting Started with Simulink in the Simulink Documentation
I don't know much about VRML, but be aware that the coordinate system in VRML is different from that in MATLAB/Simulink (see http://www.mathworks.co.uk/help/sl3d/vrml.html). You should also have a look at Virtual World Connection to a Model in the Simulink 3D Animation documentation.

How to smooth rectangular signal with high order rate-limiter in Simulink?

Imagine I have a rectangular reference value for the position/displacement x and I need to smooth it.
The math for translatoric movements is quite simple:
speed: v = x'
acceleration: a = v' = x''
jerk: j = a' = v'' = x'''
I need to limit all these values. So I thought about using rate limiters in Simulink:
This approach works perfectly for ramp signals, as you can see in the following output:
BUT my reference signals for x are not ramps, they are rectangles/steps. Hence the rate limiters are not working, because the derivatives they have to limit are already infinite and Simulink throws an error. How can I resolve this problem? Is there actually a more elegant way to implement the high-order rate limiters? I guess this approach could be unstable in some cases.
continue reading: related question
Even though it seems absurd, the following approach works: integrating and then immediately differentiating does the trick:
leading to:
More elegant, faster and simpler solutions for the whole smoothing problem are highly appreciated!
It's generally not a good idea to differentiate signals in Simulink because of numerical issues. I would advise starting with the higher-order derivatives (e.g. acceleration) and integrating, which is much more robust numerically. This is what the doc says about the Derivative block:
The Derivative block output might be very sensitive to the dynamics of the entire model. The accuracy of the output signal depends on the size of the time steps taken in the simulation. Smaller steps allow a smoother and more accurate output curve from this block. However, unlike with blocks that have continuous states, the solver does not take smaller steps when the input to this block changes rapidly. Depending on the dynamics of the driving signal and model, the output signal of this block might contain unexpected fluctuations. These fluctuations are primarily due to the driving signal output and solver step size.

Because of these sensitivities, structure your models to use integrators (such as Integrator blocks) instead of Derivative blocks. Integrator blocks have states that allow solvers to adjust step size and improve accuracy of the simulation. See Circuit Model for an example of choosing the best-form mathematical model to avoid using Derivative blocks in your models.
See also Best-Form Mathematical Models for more details.
I was trying to do something similar. I was looking for a "Smooth Ramp". Here is what I found:
A simpler approach is to combine the ramp with a second-order lag. Then the signal approaches an s-shape, and your derivatives will exist and be smooth as well. The only thing to remember is that the 2nd-order lag must be critically damped.
Y(s) = H(s)*X(s), where H(s) = K*wo^2/(s^2 + 2*zeta*wo*s + wo^2). Here you set zeta = 1.0. Then the s-shape is retained for any K and wo value. Note that X(s) has already been passed through a ramp. In MATLAB or any other tool, a linear ramp and a 2nd-order lag are standard blocks.
Good luck!
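A quick offline sketch of that idea (Control System Toolbox assumed; wo, K and the ramp duration below are made-up example values): pre-shape the rectangular reference into a ramp, then pass it through the critically damped second-order lag.

wo = 10; zeta = 1; K = 1;                 % zeta = 1.0 gives critical damping
H  = tf(K*wo^2, [1 2*zeta*wo wo^2]);      % H(s) = K*wo^2/(s^2 + 2*zeta*wo*s + wo^2)

t    = 0:1e-3:3;
ramp = min(t, 1);                         % step reference pre-shaped into a ramp
x    = lsim(H, ramp, t);                  % smoothed, s-shaped displacement

plot(t, ramp, t, x), legend('ramp input', 'smoothed x'), xlabel('t')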
I think the 'Transfer Fcn' block is what you're looking for.
If you leave the equation in the default form 1/(s+1) you have a low-pass filter which can be tuned to what you need by changing the numerator and denominator coefficients.

Generate bifurcation diagram for 2D system

Drawing a bifurcation diagram for a 1D system is clear, but suppose I have a 2D system of the following form:
dx/dt=f(x,y,r),
dy/dt=g(x,y,r)
and I want to generate a bifurcation diagram in MATLAB for x versus r.
What is the main idea for doing that, or are there any hints that could help me?
You first have to do some math:
Setting each of the functions to zero gives you two functions y(x) (called the nullclines), which you can plot in a phase diagram. Where the two lines intersect are the fixed-points (equilibria) of your system.
Now, you have to take the Jacobian of your system and plug each of those fixed points in, which will give you the linear stability analysis of the system.
The location of the fixed points and the stability of each point can now be computed as you vary r (the bifurcation parameter).
For the programming:
- Use Newton's method (fsolve in MATLAB) to find where the equations are zero.
- eig will help you find the eigenvalues of the system (from the Jacobian at each fixed point).
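A rough sketch of how those two steps can be combined into a sweep over r. The right-hand sides f and g below are made-up placeholders (not your system), and the Jacobian is taken numerically by forward differences; adapt both to your actual equations.

f = @(x, y, r) r*x - x.^3 - x.*y;         % placeholder dx/dt
g = @(x, y, r) -y + x.^2;                 % placeholder dy/dt

rVals  = linspace(-1, 2, 200);
xStar  = zeros(size(rVals));
stable = false(size(rVals));
guess  = [0.1; 0.1];
opts   = optimoptions('fsolve', 'Display', 'off');

for k = 1:numel(rVals)
    r = rVals(k);
    F = @(s) [f(s(1), s(2), r); g(s(1), s(2), r)];
    s0 = fsolve(F, guess, opts);          % fixed point for this r
    guess = s0;                           % continuation: reuse it as the next initial guess
    xStar(k) = s0(1);

    h = 1e-6;                             % forward-difference Jacobian at the fixed point
    J = [(F(s0 + [h; 0]) - F(s0))/h, (F(s0 + [0; h]) - F(s0))/h];
    stable(k) = all(real(eig(J)) < 0);    % stable if all eigenvalues have negative real part
end

plot(rVals(stable), xStar(stable), 'b.', rVals(~stable), xStar(~stable), 'r.')
xlabel('r'), ylabel('x^*')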
However, it depends on your system.
If you're supposed to be looking for limit cycles or chaos or something, you'll have to use one of the ODE solvers and then the analysis becomes more tricky. I suppose you could develop a Poincaré-Bendixson algorithm, but that would be involved and the details would depend on your system.
I don't think MATLAB has anything built in that would give you a bifurcation diagram. There is this third-party solution:
http://www.mathworks.com/matlabcentral/fileexchange/8382