Solution of the harmonic oscillator using odeint does not match MATLAB solution - matlab

I am solving the harmonic oscillator using odeint and MATLAB. The equation I am trying to solve is x'' = -x - 0.15*x'. The odeint code can be found here
The exact solution is (per matlab):
S = (exp(-(3*z)/40)*(1591*cos((1591^(1/2)*z)/40) + 3*1591^(1/2)*sin((1591^(1/2)*z)/40)))/1591;
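For reference, this is the solution of $x'' + 0.15\,x' + x = 0$ with $x(0) = 1$, $x'(0) = 0$; the characteristic equation gives

$$r^2 + \tfrac{3}{20}r + 1 = 0 \quad\Rightarrow\quad r = -\tfrac{3}{40} \pm \tfrac{\sqrt{1591}}{40}\,i,$$

so

$$x(t) = e^{-3t/40}\left(\cos\tfrac{\sqrt{1591}\,t}{40} + \tfrac{3}{\sqrt{1591}}\sin\tfrac{\sqrt{1591}\,t}{40}\right),$$

which is the MATLAB expression above after dividing through by 1591.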
When I run odeint, using this integration method:
{
    runge_kutta4< state_type > stepper;
    integrate_const( stepper ,
        []( const state_type &x , state_type &dxdt , double t ) {
            dxdt[0] = x[1];
            dxdt[1] = -x[0] - 0.15*x[1];
        } ,
        x , 0.0 , 10.0 , 0.01 );
}

std::for_each( make_const_step_time_iterator_begin( stepper , harmonic_oscillator , x , 0.0 , 10.0 , 0.01 ) ,
               make_const_step_time_iterator_end( stepper , harmonic_oscillator , x ) ,
               []( std::pair< const state_type & , const double & > x ) {
                   std::cout << x.second << "\t" << x.first[0] << "\t" << x.first[1] << "\n";
               } );
I get the following picture for the 10 seconds: [odeint plot]
But the graph in MATLAB is: [MATLAB plot]
I could add the data set, but as you can see from the picture, even the initial value in the odeint output is wrong. It should be 1, but according to the code it is:
| 0 | -0.421912 | 0.246405 |
I have attached the first 0.1 seconds from both solutions to show the big discrepancy between the solutions:
Matlab:
position
1.0000
0.9950
0.9803
0.9560
0.9226
0.8806
0.8304
0.7728
0.7084
0.6378
0.5621
Odeint:
position
-0.421912
-0.419429
-0.416908
-0.414348
-0.411751
-0.409117
-0.406446
-0.403738
-0.400994
-0.398215
-0.395399
I would expect both applications to produce similar responses, but I cannot explain why they are so different. The analytical solution is an exact match with MATLAB's ode45 solver, which suggests to me that odeint is doing something weird.
Can someone please help me understand the odeint internals so I can use it for my project? Currently odeint would be an option, but I am not sure I am properly understanding its solution.

I found my error. In the code that I copied from GitHub, I did not realise that x was passed by reference, so every time I ran an integrator, the value of x changed. This value was fed into the next integrator, so rather than starting from fresh values, I gave it the wrong ones. I had a hidden integrator that I forgot to remove, and this was causing the values to change.
I think writing the question helped me to understand the code I had copied. It was worth the time.
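For the record, the exact solution above evaluates to about -0.4219 at t = 10, which is exactly the stray "initial" value in the odeint output: the second integrator simply picked up where the first one stopped. A minimal Python sketch of the same pitfall (a hand-rolled fixed-step RK4 standing in for Boost.odeint, which likewise advances the state array in place; the reset line is the fix):

import numpy as np

def rhs(x):
    # x'' = -x - 0.15*x', written as a first-order system
    return np.array([x[1], -x[0] - 0.15 * x[1]])

def integrate_const(x, t0, t1, dt):
    # classic fixed-step RK4; mutates x in place, like Boost.odeint does
    t = t0
    while t < t1 - 1e-12:
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dt * k1)
        k3 = rhs(x + 0.5 * dt * k2)
        k4 = rhs(x + dt * k3)
        x += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt

x = np.array([1.0, 0.0])              # x(0) = 1, x'(0) = 0
integrate_const(x, 0.0, 10.0, 0.01)
print(x[0])                           # about -0.4219: x now holds the t = 10 state

# A second integration fed this same x would start from -0.4219, not from 1.
x = np.array([1.0, 0.0])              # the fix: reset the state first
integrate_const(x, 0.0, 10.0, 0.01)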

Related

(App Designer) Error using matlab.ui.control.EditField/set.Value (line 96) 'Value' must be a character vector

So I am creating an app to find a value based on several inputs, but I hit an error because one of the outputs won't show.
This is the app layout: [screenshot 1]
The problem is that the total cost section won't show the value when I click the calculate button. The 'Q Optimal' field works just fine: [screenshot 2]
The formula associated with the button on the right side looks like this:
dm=app.MinimumDemandEditField.Value;
dM=app.MaximumDemandEditField.Value;
tm=app.MinimumLeadTimeEditField.Value;
tM=app.MaximumLeadTimeEditField.Value;
r1=app.ReorderLevelEditField.Value;
Et = 0.5*(tm+tM);
vart = 1/12*(tM-tm)^2;
Ed = 0.5*(dm+dM);
vard = 1/12*(dM-dm)^2;
ED = 1/4*(dm+dM)*(tm+tM);
varD = 1/144*(3*(dm+dM)^2*(tM-tm)^2+3*(dM-dm)^2*(tm+tM)^2+(tM-tm)^2*(dM-dm)^2);
gt = 1/(tM-tm);
fd = 1/(dM-dm);
fD = 1/((dM-dm)*(tM-tm));
f1=app.FixedCostEditField.Value;
c1=app.VariableCostEditField.Value;
h=app.HoldingCostEditField.Value;
s=50*c1;
app.ShortageCostEditField.Value = s;
A1=c1+(h/Ed)*(r1-ED);
A2=fD*(r1*(tM-tm)*log(r1/(tM*dm))-(r1^2/dM)+(r1*tM)-(r1*tm)*log((dM*tM)/r1)-Et);
syms x;
f=(x-r1)*fD;
EB= int(f,r1,dM*tM);
A3=Ed*f1+h*Ed*(fD*((r1^2*tm/2)-(dm*r1/2)*(tM^2-tm^2)+(dm^2/6)*(tM^3-tm^3)-((r1^3/6*dM)-(dM*r1*tm^2/2)+(dM^2*tm^3/6)))+(fD/18)*(tM^3-tm^3)*(dM^3-dm^3)-r1*ED+Ed*s*EB);
Q=(1/h)*((Ed*(A1+h*A2-c1)+(h*(ED-r1))));
Eoh=fD*((((r1^3*tM)/2)-(((dm*r1)/2)*(tM^2-tm^2))+(((dm^2)/6)*(tM^3-tm^3))-((r1^3)/(6*dM))-((dM*r1*tm^2)/2)+((dM^2*tm^3)/6))+((Q^2)/2*Ed)-(Q*ED/Ed)+((fD/(18*Ed))*((tM^3-tm^3)*(dM^3-dm^3)))+(Q*r1/Ed)-(r1*ED/Ed));
TC= f1+c1*Q+h*Eoh+s*EB;
app.QOptimalEditField.Value = Q
app.TotalCostEditField.Value = TC
Running this gives the error: [screenshot 3]
I suspect the problem is with my integration process. Have I missed something or is there a better way to do this?
Thank you in advance
Regards,
Kevin Renard
I solved the problem once I noticed that the last error notification states that the input value must be a double scalar, while the result of the integration is not a double scalar. So I revised the integration code into:
syms x;
f=(x-r1)*fD;
EB= double(int(f,r1,dM*tM));
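For comparison, the same pattern in Python/SymPy (a sketch with made-up values for r1, fD, dM and tM): the definite integral comes back as a symbolic object and has to be cast with float() before it can be used where a plain double scalar is expected, exactly like MATLAB's double(...):

import sympy as sp

x = sp.symbols('x')
r1, fD, dM, tM = 100.0, 0.002, 60.0, 4.0   # hypothetical inputs
f = (x - r1) * fD
EB = sp.integrate(f, (x, r1, dM * tM))     # symbolic result
EB = float(EB)                             # counterpart of MATLAB's double(...)
print(EB)                                  # 19.6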

3-layered neural network doesn't learn properly

So, I'm trying to implement a neural network with 3 layers in Python; however, I am not the brightest person, so anything with more than 2 layers is kind of difficult for me. The problem with this one is that it gets stuck at 0.5 and does not learn, and I have no actual clue where it went wrong. Thank you to anyone with the patience to explain the error to me. (I hope the code makes sense.)
import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def reduce(x):
    return x*(1-x)

l0=[np.array([1,1,0,0]),
    np.array([1,0,1,0]),
    np.array([1,1,1,0]),
    np.array([0,1,0,1]),
    np.array([0,0,1,0]),
    ]
output=[0,1,1,0,1]
syn0=np.random.random((4,4))
syn1=np.random.random((4,1))
for justanumber in range(1000):
    for i in range(len(l0)):
        l1=sigmoid(np.dot(l0[i],syn0))
        l2=sigmoid(np.dot(l1,syn1))
        l2_err=output[i]-l2
        l2_delta=reduce(l2_err)
        l1_err=syn1*l2_delta
        l1_delta=reduce(l1_err)
        syn1=syn1.T
        syn1+=l0[i].T*l2_delta
        syn1=syn1.T
        syn0=syn0.T
        syn0+=l0[i].T*l1_delta
        syn0=syn0.T
print l2
PS. I know that it might be a piece of trash as a script, but that is why I asked for assistance.
Your computations are not fully correct. For example, reduce is called on l1_err and l2_err, where it should be called on l1 and l2.
You are performing stochastic gradient descent. With so few parameters it oscillates hugely; in this case, use full-batch gradient descent instead.
The bias units are not present, although technically you can still learn without bias.
I tried to rewrite your code with minimal changes. I have commented out your original lines to show the changes.
#!/usr/bin/python3
import matplotlib.pyplot as plt
import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def reduce(x):
    return x*(1-x)

l0=np.array ([np.array([1,1,0,0]),
              np.array([1,0,1,0]),
              np.array([1,1,1,0]),
              np.array([0,1,0,1]),
              np.array([0,0,1,0]),
             ]);
output=np.array ([[0],[1],[1],[0],[1]]);
syn0=np.random.random((4,4))
syn1=np.random.random((4,1))
final_err = list ();
gamma = 0.05
maxiter = 100000
for justanumber in range(maxiter):
    syn0_del = np.zeros_like (syn0);
    syn1_del = np.zeros_like (syn1);
    l2_err_sum = 0;
    for i in range(len(l0)):
        this_data = l0[i,np.newaxis];
        l1=sigmoid(np.matmul(this_data,syn0))[:]
        l2=sigmoid(np.matmul(l1,syn1))[:]
        l2_err=(output[i,:]-l2[:])
        #l2_delta=reduce(l2_err)
        l2_delta=np.dot (reduce(l2), l2_err)
        l1_err=np.dot (syn1, l2_delta)
        #l1_delta=reduce(l1_err)
        l1_delta=np.dot(reduce(l1), l1_err)
        # Accumulate gradient for this point for layer 1
        syn1_del += np.matmul(l2_delta, l1).T;
        #syn1=syn1.T
        #syn1+=l1.T*l2_delta
        #syn1=syn1.T
        # Accumulate gradient for this point for layer 0
        syn0_del += np.matmul(l1_delta, this_data).T;
        #syn0=syn0.T
        #syn0-=l0[i,:].T*l1_delta
        #syn0=syn0.T
        # The error for this datapoint: mean sum of squares
        l2_err_sum += np.mean (l2_err ** 2);
    l2_err_sum /= l0.shape[0]; # Mean sum of squares
    syn0 += gamma * syn0_del;
    syn1 += gamma * syn1_del;
    print ("iter: ", justanumber, "error: ", l2_err_sum);
    final_err.append (l2_err_sum);

# Predicting
l1=sigmoid(np.matmul(l0,syn0))[:]  # 5 x 4 * 4 x 4 = 5 x 4
l2=sigmoid(np.matmul(l1,syn1))[:]  # 5 x 4 * 4 x 1 = 5 x 1
print ("Predicted: \n", l2)
print ("Actual: \n", output)
plt.plot (np.array (final_err));
plt.show ();
The output I get is:
Predicted:
[[0.05214011]
[0.97596354]
[0.97499515]
[0.03771324]
[0.97624119]]
Actual:
[[0]
[1]
[1]
[0]
[1]]
Therefore the network was able to predict all the toy training examples. (Note that on real data you would not want to fit the training set this closely, as that leads to overfitting.) Note that you may get a slightly different result, as the weight initialisations are different. Also, as a rule of thumb, try to initialise the weights in [-0.01, +0.01] when you are not working on a specific problem where you know a better initialisation.
Here is the convergence plot: [convergence plot]
Note that you do not need to iterate over each example; instead you can do the matrix multiplication all at once, which is much faster, as sketched below. Also, the above code does not have bias units. Make sure you have bias units when you re-implement the code.
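A minimal full-batch sketch of that idea, reusing sigmoid, reduce, l0, output, syn0, syn1, gamma and maxiter from the code above (note it uses the standard elementwise backpropagation deltas rather than the dot products above, and still has no bias units):

for _ in range(maxiter):
    l1 = sigmoid(l0 @ syn0)                       # 5 x 4 hidden activations
    l2 = sigmoid(l1 @ syn1)                       # 5 x 1 outputs
    l2_err = output - l2                          # 5 x 1
    l2_delta = l2_err * reduce(l2)                # elementwise, 5 x 1
    l1_delta = (l2_delta @ syn1.T) * reduce(l1)   # 5 x 4
    syn1 += gamma * (l1.T @ l2_delta)             # 4 x 1 update
    syn0 += gamma * (l0.T @ l1_delta)             # 4 x 4 update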
I would recommend you go through Raul Rojas' Neural Networks - A Systematic Introduction, Chapters 4, 6 and 7. Chapter 7 will tell you how to implement deeper networks in a simple way.

Find the largest x for which x^b+a = a

Stability (Numerical analysis)
Trying to apply the answer I saw in this question, a + x == a worked just fine with x = eps(a)/2. Now suppose we have x^b + a == a, where b is a small integer, say 3, and a = 2000. Then a + (eps(a))^3 or a + (eps(a)/2)^3 will always return just a. Can someone help with determining x? Any approach, even one not based on eps, will do just fine.
p.s. 1938 + (eps(1938)/0.00000000469)^3 is the last expression that returns ans = 1.9380e+003, while 1938 + (eps(1938)/0.0000000047)^3 returns a = 1938. Does that have anything to do with it?
x = (eps(a)/2).^(1/(b-eps(a)/2))
If b = 3:
(eps(1938)/2).^(1/(3-eps(1938)/2)) > eps(1938)/0.0000000047
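A quick numeric check of this threshold (a sketch in Python/NumPy, where np.spacing(a) plays the role of MATLAB's eps(a)): x**b is absorbed by a + x**b as long as it stays below roughly half an ulp of a, so x near (eps(a)/2)^(1/b) is the break-even point.

import numpy as np

a, b = 2000.0, 3
ulp = np.spacing(a)              # MATLAB's eps(a)
x = (ulp / 2.0) ** (1.0 / b)     # the formula above, dropping the negligible eps(a)/2 in the exponent
print(a + (0.9 * x) ** b == a)   # True: contribution below half an ulp is absorbed
print(a + (2.0 * x) ** b == a)   # False: 8x the threshold survives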

Fortran behavior of a tiny program

I am new to Fortran 90 (as of 30 minutes ago...) and I have this program:
program example1
  implicit none
  real (kind=8) :: x, y, z
  x = 3.d0
  y = 2.d-1
  z = x + y
  print *, "y = ", y
  print *, "x = ", x
  print *, "z = ", z
end program example1
but when I run it with:
gfortran example1.f90
./a.out
the output is:
y = 0.20000000000000001
x = 3.0000000000000000
z = 3.2000000000000002
Why is it not 3.2000000000000000? What am I doing wrong?
Why does y have a 1 in the last digit, and why does z have a 2 in the last digit?
Sorry if it is a dumb question, but I just don't understand what I am doing wrong...
Thanks!!
There's absolutely nothing wrong with your program. The issue has to do with the inability of real to represent most decimal fractions exactly: any number whose fractional part is not a finite sum of negative powers of 2 must be represented approximately. That's why there is a small error around the 16th significant digit. For more information about the representation of reals, take a look at the article on Wikipedia; this is another great article on the same subject.
If you replace 0.2 with 0.25, the problem will go away, because 0.25 is 2^-2 and is therefore exactly representable.
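One quick way to see this (a sketch in Python, which uses the same IEEE 754 double precision as real(kind=8)):

from decimal import Decimal

print(Decimal(0.2))                # 0.200000000000000011102230246251565... (the stored value)
print(format(3.0 + 0.2, '.17g'))   # 3.2000000000000002
print(Decimal(0.25))               # 0.25 (exact, since 0.25 = 2**-2)
print(format(3.0 + 0.25, '.17g'))  # 3.25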

MATLAB result is inappropriate

I'm new to MATLAB, and I cannot get the answer in the format that I want.
I have a basic function call, but every execution of the program gives the result in the following format:
357341279027200000/23794118819840001
It's supposed to be in decimal; for example, for the same execution: 15.0181.
I could not figure out why this is happening. Can you help me? Thank you!!
Type format long on the command prompt or in your script.
If that doesn't work because the value is too large, try using vpa.
Note that it's just visual; internally the computed value is precise.
>> d = 357341279027200000/23794118819840001
d =
   15.0181
>> d * 23794118819840001 == 357341279027200000
ans =
     1
>> 15.0181 * 23794118819840001 == 357341279027200000
ans =
     0
Are you sure that you are not using format rat (rational)? That would be the reason why you are getting fractional values. If you want decimals, try format long or format long g (long g provides the optimal length and accuracy as a decimal, up to 10 places).
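For comparison, a small Python sketch of the same point: whether the number is held as an exact ratio or shown as the double it rounds to changes only the display, not the underlying value.

from fractions import Fraction

r = Fraction(357341279027200000, 23794118819840001)  # exact rational value
print(float(r))                                      # roughly 15.01805
print(round(float(r), 4))                            # 15.0181, the short display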