Solve a system of differential equations in Python - scipy

I'm trying to solve a system of differential equations in python.
I have a system composed of two equations in two variables, A and B.
The initial conditions are A0=1e17 and B0=0; the two variables change simultaneously.
I wrote the following code using ODEINT:
import numpy as np
from scipy.integrate import odeint
def dmdt(m, t):
    A, B = m
    dAdt = A - B
    dBdt = (A - B) * A
    return [dAdt, dBdt]
# Create time domain
t = np.linspace(0, 100, 1)
# Initial condition
A0=1e17
B0=0
m0=[A0, B0]
solution = odeint(dmdt, m0, t)
The output I obtain is different from the expected one, but I don't understand the error.
Can someone help me?
Thanks

From A*A'-B'=0 one concludes
B = 0.5*(A^2 - A0^2)
Inserted into the first equation that gives
A' = A - 0.5*A^2 + 0.5*A0^2
= 0.5*(A0^2+1 - (A-1)^2)
This means that the A dynamics has two fixed points, at about A0+1 and -A0+1; A is growing inside that interval, and the upper fixed point is stable. However, in standard floating point numbers there is no difference between 1e17 and 1e17+1. If you want to see the difference, you have to encode it separately.
Also note that the standard error tolerances atol and rtol in the range somewhere between 1e-6 and 1e-9 are totally incompatible with the scales of the problem as originally stated, also highlighting the need to rescale and shift the problem into a more appreciable range of values.
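For instance, a quick check in Python (np.spacing gives the distance to the next representable double):
import numpy as np
print(1e17 + 1 == 1e17)   # True: the +1 is below the resolution of float64
print(np.spacing(1e17))   # 16.0: neighboring doubles near 1e17 are 16 apart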
Setting A = A0+u with |u| in an expected scale of 1..10 then gives
B = 0.5*u*(2*A0+u)
u' = A0+u - 0.5*u*(2*A0+u) = (1-u)*A0 + u - 0.5*u^2
This now suggests that the time scale be reduced by A0, set t=s/A0. Also, B = A0*v. Insert the direct parametrizations into the original system to get
du/ds = dA/dt / A0 = (A0+u-A0*v)/A0 = 1 + u/A0 - v
dv/ds = dB/dt / A0^2 = (A0+u-A0*v)*(A0+u)/A0^2 = (1+u/A0-v)*(1+u/A0)
u(0)=v(0)=0
Now in floating point and the expected range for u, we get 1+u/A0 == 1, so effectively u'(s)=v'(s)=1-v which gives
u(s) = v(s) = 1 - exp(-s)
A(t) = A0 + 1-exp(-A0*t) + very small corrections
B(t) = A0*(1-exp(-A0*t)) + very small corrections
The system in s,u,v should be well-computable by any solver in the default tolerances.
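As a sketch, the rescaled system can be integrated like this in Python (using scipy.integrate.solve_ivp; the names u, v, s follow the substitution above, and the 1 + u/A0 factor is kept literally even though it evaluates to 1 in float64):
import numpy as np
from scipy.integrate import solve_ivp

A0 = 1e17

def duv_ds(s, m):
    u, v = m
    w = 1 + u/A0 - v            # common factor (A - B)/A0
    return [w, w * (1 + u/A0)]

s = np.linspace(0, 10, 101)     # s = A0*t, so this covers t up to 1e-16
sol = solve_ivp(duv_ds, (0, 10), [0.0, 0.0], t_eval=s)
u, v = sol.y
A = A0 + u                      # undo the shift ...
B = A0 * v                      # ... and the scaling
print(u[-1], v[-1])             # both approach 1 - exp(-10) = 0.99995...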


Definite integration using int command

Firstly, I'm quite new to Matlab.
I am currently trying to do a definite integral with respect to y of a particular function. The function that I want to integrate is the integrand f defined in the code below
(note that the big parenthesis multiplies the first factor - I can't get the LaTeX to not make it look like a power).
I have tried plugging the above integral into Desmos and it worked as intended. My plan is to vary the values of x and y using a for loop in Matlab.
However, after trying to use the int function to calculate the definite integral with the code as follows:
h = 5;
a = 2;
syms y
x = 3.8;
p = 2.*x.^2+2.*a.*y;
q = x.^2+y.^2;
r = x.^2+a.^2;
f = (-1./sqrt(1-(p.^2./(4.*q.*r)))).*(2.*sqrt(q).*sqrt(r).*2.*a-p.*2.*y.*sqrt(r)./sqrt(q))./(4.*q.*r);
theta = int(f,y,a+0.01,h) %the integral is undefined at y=2, hence the +0.01
the result is not quite as expected
theta =
int(-((8*461^(1/2)*(y^2 + 361/25)^(1/2))/5 - (461^(1/2)*y*(8*y + 1444/25))/(5*(y^2 + 361/25)^(1/2)))/((1 - (4*y + 722/25)^2/((1844*y^2)/25 + 665684/625))^(1/2)*((1844*y^2)/25 + 665684/625)), y, 21/10, 5)
After browsing through various posts, the common mistake seems to be an integrand that is undefined somewhere on the interval, but the +0.01 should have fixed that. Any guidance on what went wrong is much appreciated.
The Definite Integrals example in the docs shows exactly this type of output when a closed form cannot be computed. You can approximate it numerically using vpa, i.e.
F = int(f,y,a,h);
theta = vpa(F);
Or you can do a numerical computation directly
theta = vpaintegral(f,y,a,h);
From the docs:
The vpaintegral function is faster and provides control over integration tolerances.
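If you just want the number and are happy to leave Matlab for a moment, a cross-check of the same definite integral takes a few lines in Python (a sketch using scipy.integrate.quad, with the same constants h, a, x as above):
import numpy as np
from scipy.integrate import quad

h, a, x = 5.0, 2.0, 3.8

def f(y):
    p = 2*x**2 + 2*a*y
    q = x**2 + y**2
    r = x**2 + a**2
    return (-1/np.sqrt(1 - p**2/(4*q*r))) \
        * (2*np.sqrt(q)*np.sqrt(r)*2*a - p*2*y*np.sqrt(r)/np.sqrt(q)) / (4*q*r)

theta, err = quad(f, a + 0.01, h)  # lower limit shifted off the singular point
print(theta, err)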

Can you please tell me how to solve four transcendental equations of four unknowns?

I have tried solving 4 equations of 4 unknowns in MATLAB and Mathematica.
I used vpasolve for finding the unknowns in MATLAB. Here is the MATLAB code.
Y1 = 0.02;
l1 = 0.0172;
syms Y2 Y3 l2 l3;
lambda = [0.0713 0.0688 0.0665];
b1 = 0.1170;
b3 = 0.1252;
t2_1 = (2*pi/lambda(1))*l2;
t3_1 = (2*pi/lambda(1))*l3;
t2_3 = (2*pi/lambda(3))*l2;
t3_3 = (2*pi/lambda(3))*l3;
t1_1 = (2*pi/lambda(1))*l1;
t1_3 = (2*pi/lambda(3))*l1;
eq1 = 2*Y1*tan(t1_1)+Y2*tan(t2_1)+Y3*tan(t3_1)==0;
eq2 = 2*Y1*tan(t1_3)+Y2*tan(t2_3)+Y3*tan(t3_3)==0;
eq3 = b1== (t1_1*Y1)+(t2_1*(Y2/2)*((sec(t2_1)^2)/(sec(t1_1)^2)))+(t3_1*(Y3/2)*((sec(t3_1)^2)/(sec(t1_1)^2)));
eq4 = b3== (t1_3*Y1)+(t2_3*(Y2/2)*((sec(t2_3)^2)/(sec(t1_3)^2)))+(t3_3*(Y3/2)*((sec(t3_3)^2)/(sec(t1_3)^2)));
E=[eq1 eq2 eq3 eq4];
S=vpasolve(E,[Y2,Y3,l2,l3]);
For the same equations, I wrote the below code in Mathematica.
eqns = {2*Y1*Tan[t11]+Y2*Tan[t21]+Y3*Tan[t31]==0,
2*Y1*Tan[t13]+Y2*Tan[t23]+Y3*Tan[t33]==0,
t11*Y1 + t21*(Y2/2)*(Sec[t21]^2/Sec[t11]^2) + t31*(Y3/2)*(Sec[t31]^2/Sec[t11]^2)-b1==0,
t13*Y1 + t23*(Y2/2)*(Sec[t23]^2/Sec[t13]^2) + t33*(Y3/2)*(Sec[t33]^2/Sec[t13]^2)-b3==0};
NMinimize[Norm[Map[First, eqns]], {Y2,Y3,l2,l3}]
But both are giving me different solutions, and those are not the required solutions. I suppose I should use some other function for solving the equations. Can anyone help me find out how to solve these equations? Your help is highly appreciated. Thank you.
EDIT
If I use the below code for finding the roots, I'm able to find the solutions, but I just want positive roots. I tried the code which you mentioned for getting positive roots, but it's not working for this and I don't know why. Can you please check this once?
Y1=0.0125;l1=0.010563;lambda={0.0426,0.0401,0.0403,0.0423,0.0413}; b1 = 0.0804;
b3 = 0.0258;
t2_1 = (2*Pi/lambda[[1]])*l2;
t3_1 = (2*Pi/lambda[[1]])*l3;
t2_3 = (2*Pi/lambda[[5]])*l2;
t3_3 = (2*Pi/lambda[[5]])*l3;
t1_1 = (2*Pi/lambda[[1]])*l1;
t1_3 = (2*Pi/lambda[[5]])*l1;
eqns = {2*Y1*Tan[t11]+Y2*Tan[t21]+Y3*Tan[t31]==0,
2*Y1*Tan[t13]+Y2*Tan[t23]+Y3*Tan[t33]==0,
t11*Y1 + t21*(Y2/2)*(Sec[t21]^2/Sec[t11]^2) + t31*(Y3/2)*(Sec[t31]^2/Sec[t11]^2)-b1==0,
t13*Y1 + t23*(Y2/2)*(Sec[t23]^2/Sec[t13]^2) + t33*(Y3/2)*(Sec[t33]^2/Sec[t13]^2)-b3==0};
Try
Y1=0.02;l1=0.0172;lambda={0.0713,0.0688,0.0665};b1=0.1170;b3=0.1252;
t21=(2*Pi/lambda[[1]])*l2;t31=(2*Pi/lambda[[1]])*l3;t23=(2*Pi/lambda[[3]])*l2;
t33=(2*Pi/lambda[[3]])*l3;t11=(2*Pi/lambda[[1]])*l1;t13=(2*Pi/lambda[[3]])*l1;
eqns={2*Y1*Tan[t11]+Y2*Tan[t21]+Y3*Tan[t31]==0,
2*Y1*Tan[t13]+Y2*Tan[t23]+Y3*Tan[t33]==0,
t11*Y1+t21*(Y2/2)*(Sec[t21]^2/Sec[t11]^2)+t31*(Y3/2)*(Sec[t31]^2/Sec[t11]^2)-b1==0,
t13*Y1+t23*(Y2/2)*(Sec[t23]^2/Sec[t13]^2)+t33*(Y3/2)*(Sec[t33]^2/Sec[t13]^2)-b3==0};
tbl=Table[
{y2i,y3i,l2i,l3i}=RandomReal[{0,.2},4];
Quiet[root=Check[FindRoot[eqns,{{Y2,y2i,0,.2},{Y3,y3i,0,.2},{l2,l2i,0,.2},{l3,l3i,0,.2}}],False]];
If[root===False,Nothing,root]
,{256}];
roots=Sort[Map[{Y2,Y3,l2,l3}/.#&,tbl],Norm[#1]<Norm[#2]&]
If[roots=={},"It found no roots in that range using those coefficients",
Map[First,eqns]/.{Y2->roots[[1,1]],Y3->roots[[1,2]],l2->roots[[1,3]],l3->roots[[1,4]]}]
That will look for roots starting at 256 different random locations greater than, but near, 0. The way I have written FindRoot this time, it will stop if the search goes outside the range 0 to 0.2; you can change that range if needed.
Next, your functions have many local minima that trap FindRoot, so the code discards every start where FindRoot fails instead of converging to a genuine root inside that range.
I have also hidden the warning messages with Quiet, which is usually not a good idea.
Then it will Sort the roots to show those nearest 0 first.
If you would rather see the results nearest some other given value first, I can modify the Sort accordingly; I just need to know which value you want the roots nearest to.
And finally it substitutes the Y2, Y3, l2, l3 of the smallest root it found back into the four equations to demonstrate that the residuals are very, very near zero, i.e. that this is a real root rather than a local minimum.
After trying that a few times I found one root at {0.0203704,0.0225972,0.0163842,0.0181147} and that looks very close to your required values if I swap Y2 and Y3 and swap l2 and l3. Is that perhaps the root that you are looking for?
If you need more then please tell me exactly what is needed.
Please check ALL this VERY carefully to make certain I have made no mistakes.
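For comparison, the same multi-start idea can be sketched in Python with scipy.optimize.root (same coefficients as above; starts that fail or land outside the box 0..0.2 are discarded, and the surviving roots are sorted by norm, mirroring the Mathematica code):
import numpy as np
from scipy.optimize import root

Y1, l1 = 0.02, 0.0172
lam = [0.0713, 0.0688, 0.0665]
b1, b3 = 0.1170, 0.1252
k1, k3 = 2*np.pi/lam[0], 2*np.pi/lam[2]
t11, t13 = k1*l1, k3*l1

def sec2(t):
    return 1/np.cos(t)**2

def eqns(x):
    Y2, Y3, l2, l3 = x
    t21, t31, t23, t33 = k1*l2, k1*l3, k3*l2, k3*l3
    return [2*Y1*np.tan(t11) + Y2*np.tan(t21) + Y3*np.tan(t31),
            2*Y1*np.tan(t13) + Y2*np.tan(t23) + Y3*np.tan(t33),
            t11*Y1 + t21*(Y2/2)*sec2(t21)/sec2(t11) + t31*(Y3/2)*sec2(t31)/sec2(t11) - b1,
            t13*Y1 + t23*(Y2/2)*sec2(t23)/sec2(t13) + t33*(Y3/2)*sec2(t33)/sec2(t13) - b3]

rng = np.random.default_rng(0)
roots = []
for _ in range(256):
    sol = root(eqns, rng.uniform(0, 0.2, 4))   # random start near 0
    if sol.success and np.all((sol.x > 0) & (sol.x < 0.2)):
        roots.append(sol.x)
roots.sort(key=np.linalg.norm)                 # smallest-norm root first
print(roots[0] if roots else "no roots found in that range")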

BFGS Fails to Converge

The model I'm working on is a multinomial logit choice model. It's a very specific dataset so other existing MNLogit libraries don't fit with my data.
So basically, it's a very complex function which takes 11 parameters and returns a negative log-likelihood value. Then I need to find the parameter values that minimize it using scipy.optimize.minimize.
Here are the problems that I encounter with different methods:
'Nelder-Mead': it works well, and always gives me the correct answer. However, it's EXTREMELY slow. For another function with a more complicated setup, it takes 15 hours to get to the optimal point. At the same time, the same function takes only 1 hour in Matlab using fminunc (which uses BFGS by default).
'BFGS': This is the method used by Matlab. It works well for any simple function. However, for the function that I have, it always fails to converge and returns 'Desired error not necessarily achieved due to precision loss.' I've spent lots of time playing around with the options but it still fails to work.
'Powell': It quickly converges successfully but returns a wrong answer. The code is printed below (x0 is the correct answer; Nelder-Mead works for whatever initial value), and you can get the data here: https://www.dropbox.com/s/aap2dhor5jyxy94/data.csv
Thanks!
import pandas as pd
import numpy as np
from scipy.optimize import minimize

# https://www.dropbox.com/s/aap2dhor5jyxy94/data.csv
df = pd.read_csv('data.csv', index_col=0)
dfhh = df.hh
B = df.loc[:, 'b0':'b4'].values      # NT*5
P = df.loc[:, 'p1':'p4'].values      # NT*4
F = df.loc[:, 'f1':'f4'].values      # NT*4
SDV = df.loc[:, 'lagb1':'lagb4'].values

def Li(x):
    b1 = x[0]   # coeff on prices
    b2 = x[1]   # coeff on features
    a = x[2:7]  # remaining 4 values as alpha
    E = np.exp(a + b1*P + b2*F)     # (1*4) + (NT*4) + (NT*4): builds the NT*4 matrix of exp()
    E = np.insert(E, 0, 1, axis=1)  # NT*5
    denom = E.sum(1)
    return -np.log((B * E).sum(1) / denom).sum()

x0 = np.array([-32.31028223, 0.23965953, 0.84739154, 0.25418215, -3.38757007, -0.38036966])
np.random.seed(0)
x0 = x0 + np.random.rand(6)

minL = minimize(Li, x0, method='Nelder-Mead', options={'xtol': 1e-8, 'disp': True})
# minL = minimize(Li, x0, method='BFGS')
# minL = minimize(Li, x0, method='Powell', options={'xtol': 1e-12, 'ftol': 1e-12})
print(minL)
Update: 03/07/14 Simpler Version of the Code
Now Powell works well with very small tolerances; however, Powell is slower than Nelder-Mead in this case. BFGS still fails to converge.
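One thing worth trying (my own guess, not from the original thread): the 'precision loss' failure of BFGS is often caused by cancellation and overflow inside the objective, and Li exponentiates large logits before taking a log. Here is a sketch of a numerically stabler version using scipy.special.logsumexp, assuming B is a one-hot indicator of the chosen alternative, which is what the (B * E) trick suggests:
from scipy.special import logsumexp

def Li_stable(x):
    b1, b2, a = x[0], x[1], x[2:7]
    U = a + b1*P + b2*F               # NT*4 utilities
    U = np.insert(U, 0, 0.0, axis=1)  # outside option: exp(0) = 1
    # same value as -log((B*E).sum(1)/denom).sum(), without forming exp()
    return -((B * U).sum(1) - logsumexp(U, axis=1)).sum()

# minL = minimize(Li_stable, x0, method='BFGS')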

How can I plot data to a “best fit” cos² graph in Matlab?

I'm currently a Physics student and for several weeks have been compiling data related to 'Quantum Entanglement'. I've now got to the point where I have to fit my data (which should resemble a cos² graph - and does) to a sort of "best fit" cos² curve. The lab script says the following:
A more precise determination of the visibility V (this is basically how 'clean' the data is) follows from the best fit to the measured data using the function:
f(b) = (A/2) * [1 - V*sin((b - b_center)/P)]
Granted this probably doesn’t mean much out of context, but essentially A is the amplitude, b is an angle and P is the periodicity. Hence this is also a “wave” like the experimental data I have found.
From this I understand, as previously mentioned, I am making a “best fit” curve. However, I have been told that this isn’t possible with Excel and that the best approach is Matlab.
I know intermediate JavaScript but do not know Matlab and was hoping for some direction.
Is there a tutorial I can read for this? Is it possible for someone to go through it with me? I really have no idea what it entails, so any feed back would be greatly appreciated.
Thanks a lot!
Initial steps
I guess we should begin by getting a representation in Matlab of the function that you're trying to model. A direct translation of your formula looks like this:
function y = targetfunction(A,V,P,bc,b)
y = (A/2) * (1 - V * sin((b-bc) / P));
end
Getting hold of the data
My next step is going to be to generate some data to work with (you'll use your own data, naturally). So here's a function that generates some noisy data. Notice that I've supplied some values for the parameters.
function [y b] = generateData(npoints,noise)
A = 2;
V = 1;
P = 0.7;
bc = 0;
b = 2 * pi * rand(npoints,1);
y = targetfunction(A,V,P,bc,b) + noise * randn(npoints,1);
end
The function rand generates random points on the interval [0,1], and I multiplied those by 2*pi to get points randomly on the interval [0, 2*pi]. I then applied the target function at those points, and added a bit of noise (the function randn generates normally distributed random variables).
Fitting parameters
The most complicated function is the one that fits a model to your data. For this I use the function fminunc, which does unconstrained minimization. The routine looks like this:
function [A V P bc] = bestfit(y,b)
x0(1) = 1; %# A
x0(2) = 1; %# V
x0(3) = 0.5; %# P
x0(4) = 0; %# bc
f = @(x) norm(y - targetfunction(x(1),x(2),x(3),x(4),b));
x = fminunc(f,x0);
A = x(1);
V = x(2);
P = x(3);
bc = x(4);
end
Let's go through it line by line. To minimize a function in Matlab, it needs to take a single vector as a parameter, so we have to pack our four parameters into a vector of initial guesses, which I do in the first four lines. I used values that are close to, but not the same as, the ones that I used to generate the data.
Then I define the function f that I want to minimize. It takes a single argument x, which it unpacks and feeds to targetfunction, along with the points b in our dataset. Hopefully the results are close to y. We measure how far they are from y by subtracting from y and applying the function norm, which squares every component, adds them up and takes the square root (i.e. it computes the square root of the sum of squared errors, so minimizing it gives a least-squares fit).
Then I call fminunc with our function to be minimized, and the initial guess for the parameters. This uses an internal routine to find the closest match for each of the parameters, and returns them in the vector x.
Finally, I unpack the parameters from the vector x.
Putting it all together
We now have all the components we need, so we just want one final function to tie them together. Here it is:
function master
%# Generate some data (you should read in your own data here)
[f b] = generateData(1000,1);
%# Find the best fitting parameters
[A V P bc] = bestfit(f,b);
%# Print them to the screen
fprintf('A = %f\n',A)
fprintf('V = %f\n',V)
fprintf('P = %f\n',P)
fprintf('bc = %f\n',bc)
%# Make plots of the data and the function we have fitted
plot(b,f,'.');
hold on
plot(sort(b),targetfunction(A,V,P,bc,sort(b)),'r','LineWidth',2)
end
If I run this function, I see this being printed to the screen:
>> master
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
A = 1.991727
V = 0.979819
P = 0.695265
bc = 0.067431
And a plot appears showing the noisy data points, with the fitted curve drawn through them in red.
That fit looks good enough to me. Let me know if you have any questions about anything I've done here.
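As an aside, if you ever redo this in Python, the same least-squares fit is available as scipy.optimize.curve_fit; here is a sketch with synthetic data mirroring generateData above:
import numpy as np
from scipy.optimize import curve_fit

def target(b, A, V, P, bc):
    return (A/2) * (1 - V * np.sin((b - bc) / P))

# synthetic noisy data, as in the Matlab generateData
rng = np.random.default_rng(0)
b = 2 * np.pi * rng.random(1000)
y = target(b, 2, 1, 0.7, 0) + rng.normal(size=1000)

p0 = [1, 1, 0.5, 0]                         # initial guess for A, V, P, bc
popt, pcov = curve_fit(target, b, y, p0=p0)
print(popt)                                 # fitted [A, V, P, bc]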
I am a bit surprised, as you mention f(a) but your function does not contain an a. In general, though, suppose you want to plot f(x) = cos(x)^2.
First determine for which values of x you want to make a plot, for example
xmin = 0;
stepsize = 1/100;
xmax = 6.5;
x = xmin:stepsize:xmax;
y = cos(x).^2;
plot(x,y)
However, note that this approach works just as well in Excel; you just have to do some work to get your x values and the function into the right cells.

Solving a system of equations using Python/Scipy for a set of measurements

I have a physical instrument of measurement (a force platform with load cells) which gives me three values: A, B and C. It happens, though, that these values - which should be orthogonal - are actually somewhat coupled, due to physical characteristics of the measuring device, which causes cross-talk between applied and returned values of force and torque.
Then, it is recommended that a calibration matrix be used to transform the measured values into a better estimate of the actual values, like this: actual(Fz, Mx, My) = C * measured(Fz, Mx, My), where C is a 3x3 calibration matrix.
The problem is that it is necessary to perform a SET of measurements, so that different measured(Fz, Mx, My) and actual(Fz, Mx, My) are least-squared to get some C matrix that works best for the system as a whole.
I can solve Ax = b problems with scipy.linalg.lstsq, or even scipy.linalg.solve (giving an exact solution) for ONE measurement, but how should I proceed to consider a set of different measurements, each one with its own equation giving a potentially different 3x3 matrix?
Any help is much appreciated, thanks for reading.
I posted a similar question containing just the mathematical part of this at math.stackexchange.com, and this answer solved the problem:
math.stackexchange.com/a/232124/27435
In case anyone has a similar problem in the future, here is an almost literal Scipy implementation of that answer (the first lines are initialization boilerplate):
import numpy
import scipy.linalg

### Origin of the coordinate system: upper left corner!
"""
1----------2
|          |
|          |
4----------3
"""

platform_width = 600
platform_height = 400

# positions of each load cell (one per corner)
loadcell_positions = numpy.array([[0, 0],
                                  [platform_width, 0],
                                  [platform_width, platform_height],
                                  [0, platform_height]])
platform_origin = numpy.array([platform_width, platform_height]) * 0.5

# applying a known force at known positions and taking the measurements
measurements_per_axis = 5
total_load = 50
results = []
for x in numpy.linspace(0, platform_width, measurements_per_axis):
    for y in numpy.linspace(0, platform_height, measurements_per_axis):
        position = numpy.array([x, y])
        for loadpos in loadcell_positions:
            moments = (platform_origin - loadpos) * total_load  # moment arm times load
            load = numpy.array([total_load])
            result = numpy.hstack([load, moments])
            results.append(result)
results = numpy.array(results)
noise = numpy.random.rand(*results.shape) - 0.5
measurements = results + noise

# expand each 3-vector measurement into a 3x9 block so that the nine unknown
# entries of the 3x3 calibration matrix form a single least-squares vector
expands = []
for n in range(measurements.shape[0]):
    m = measurements[n, :]  # measured (noisy) values
    expand = numpy.zeros((3, 9))
    expand[0, 0:3] = m
    expand[1, 3:6] = m
    expand[2, 6:9] = m
    expands.append(expand)
expands = numpy.vstack(expands)

# perform the actual regression of the measured values onto the actual ones
C = scipy.linalg.lstsq(expands, results.reshape((-1, 1)))
C = numpy.array(C[0]).reshape((3, 3))

# the result with pure noise (not actual coupling) should be
# very close to a 3x3 identity matrix (and is!)
print(C)
Hope this helps someone!
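One closing remark on the expansion step: each 3x9 block is just a Kronecker product, so the inner loop body could be replaced by numpy.kron (a small equivalence check):
import numpy
m = numpy.array([1.0, 2.0, 3.0])      # one measured (Fz, Mx, My) triple
expand = numpy.kron(numpy.eye(3), m)  # identical to the 3x9 block built above
print(expand.shape)                   # (3, 9)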