I am trying to use bvp4c to solve a system of 4 ODEs. The issue is that one of the boundaries is unknown. Can bvp4c handle this? In my code, L is the unknown I am solving for. The error message I get is printed below the code.
function mat4bvp
L = 8;
solinit = bvpinit(linspace(0,L,100),@mat4init);
sol = bvp4c(@mat4ode,@mat4bc,solinit);
sint = linspace(0,L);
Sxint = deval(sol,sint);
end
% ------------------------------------------------------------
function dtdpdxdy = mat4ode(s,y,L)
Lambda = 0.3536;
dtdpdxdy = [y(2)
-sin(y(1)) + Lambda*(L-s)*cos(y(1))
cos(y(1))
sin(y(1))];
end
% ------------------------------------------------------------
function res = mat4bc(ya,yb,L)
res = [ ya(1)
ya(2)
ya(3)
ya(4)
yb(1)];
end
% ------------------------------------------------------------
function yinit = mat4init(s)
yinit = [ cos(s)
0
0
0
];
end
Unfortunately I get the following error message:
>> mat4bvp
Not enough input arguments.
Error in mat4bvp>mat4ode (line 13)
-sin(y(1)) + Lambda*(L-s)*cos(y(1))
Error in bvparguments (line 105)
testODE = ode(x1,y1,odeExtras{:});
Error in bvp4c (line 130)
bvparguments(solver_name,ode,bc,solinit,options,varargin);
Error in mat4bvp (line 4)
sol = bvp4c(@mat4ode,@mat4bc,solinit);
One trick to transform a variable end point into a fixed one is to change the time scale. If x'(t) = f(t,x(t)) is the differential equation, set t = L*s, with s running from 0 to 1, and compute the associated differential equation for y(s) = x(L*s):
y'(s)=L*x'(L*s)=L*f(L*s,y(s))
The next trick to employ is to turn the global variable into a part of the differential equation by treating it as a constant function. The new system is
[ y'(s), L'(s) ] = [ L(s)*f(L(s)*s, y(s)), 0 ]
and the value of L appears as an additional free left or right boundary value, raising the number of variables (the dimension of the state vector) to the number of boundary conditions.
I do not have Matlab readily available; in Python, with the tools in scipy, this can be implemented as:
import numpy as np
from scipy.integrate import solve_bvp, odeint
import matplotlib.pyplot as plt
# The original function with the interval length as parameter
def fun0(t, y, L):
    Lambda = 0.3536
    return np.array([ y[1], -np.sin(y[0]) + Lambda*(L-t)*np.cos(y[0]), np.cos(y[0]), np.sin(y[0]) ])
# Wrapper function to apply both tricks to transform variable interval length to a fixed interval.
def fun1(s, y):
    L = y[-1]
    dydt = np.zeros_like(y)
    dydt[:-1] = L*fun0(L*s, y[:-1], L)
    return dydt
# Implement evaluation of the boundary condition residuals:
def bc(ya, yb):
    return [ ya[0], ya[1], ya[2], ya[3], yb[0] ]
# Define the initial mesh with 3 nodes:
x = np.linspace(0, 1, 3)
# This problem has multiple solutions. Try two initial guesses.
L_a=8
L_b=9
y_a = odeint(lambda y,t: fun1(t,y), [0,0,0,0,L_a], x)
y_b = odeint(lambda y,t: fun1(t,y), [0,0,0,0,L_b], x)
# Now we are ready to run the solver.
res_a = solve_bvp(fun1, bc, x, y_a.T)
res_b = solve_bvp(fun1, bc, x, y_b.T)
L_a = res_a.sol(0)[-1]
L_b = res_b.sol(0)[-1]
print "L_a=%.8f, L_b=%.8f" % ( L_a,L_b )
# Plot the two solutions found. The solutions are returned in spline form; use this to produce a smooth plot.
x_plot = np.linspace(0, 1, 100)
y_plot_a = res_a.sol(x_plot)[0]
y_plot_b = res_b.sol(x_plot)[0]
plt.plot(L_a*x_plot, y_plot_a, label='L=%.8f'%L_a)
plt.plot(L_b*x_plot, y_plot_b, label='L=%.8f'%L_b)
plt.legend()
plt.xlabel("t")
plt.ylabel("y")
plt.grid(); plt.show()
which produces a plot of the two solutions.
Trying different initial values for L finds other solutions on quite different scales, among them the values below (a minimal scan sketch follows the list):
L=0.03195111
L=0.05256775
L=0.05846539
L=0.06888907
L=0.08231966
L=4.50411522
L=6.84868060
L=20.01725616
L=22.53189063
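Such a scan can be automated along the following lines (a sketch reusing fun1, bc and x from the script above; the list of guesses is arbitrary):
# scan several initial guesses for the interval length L;
# each guess seeds the augmented state with a different constant L
for L_guess in [0.03, 0.06, 1.0, 5.0, 8.0, 20.0]:
    y_guess = odeint(lambda y, s: fun1(s, y), [0, 0, 0, 0, L_guess], x)
    res = solve_bvp(fun1, bc, x, y_guess.T)
    if res.success:
        print("L = %.8f" % res.sol(0)[-1])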
I am trying to solve the following 4th order BVP
y'''' = K - C*y
My x variable is a linspace with 100 nodes. As you can see, K is a vector of the same length (100) and makes the equation nonhomogeneous. When I run the solver, however, I get the following error:
Cell In [11], line 18, in fun(x, y)
17 def fun(x, y):
---> 18 ans = vector-np.multiply(C,y[0])
19 return np.vstack((y[1],y[2],y[3],ans))
ValueError: operands could not be broadcast together with shapes (100,) (99,)
Why does the solver suddenly change the length of y by 1 and how can I fix this error?
EDIT: I must add that the solver works fine when K is absent i.e. the equation is homogeneous.
from scipy.integrate import solve_bvp
import numpy as np
L = 10
nodes = 100
A = 1000
B = 1500
C = 0.05
x = np.linspace(0,L,nodes)
vector = np.ones(nodes)
def fun(x, y):
    ans = vector - np.multiply(C, y[0])
    return np.vstack((y[1], y[2], y[3], ans))
def bc(ya, yb):
    return np.array([ya[2], yb[2], ya[3]+A/B, yb[3]])
y_a = np.zeros((4, x.size))
res_a = solve_bvp(fun, bc, x, y_a)
res1 = res_a.sol(x)[0]
res2 = res_a.sol(x)[1]
res3 = B*res_a.sol(x)[2]
res4 = B*res_a.sol(x)[3]
The solver establishes in the first round a system of polynomial approximations over the nodes-1 = 99 segments of the first subdivision.
There is no guarantee that the subdivision remains unchanged in later solver rounds, so your ODE right-side function has to work with arbitrary x arrays. This means that parameters given as a function table need to be interpolated for the general x array. There are procedures for this: numpy.interp for direct, pointwise interpolation and scipy.interpolate.interp1d to generate interpolation functions.
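A minimal sketch of that fix for the code in the question (same variable names; np.interp maps the tabulated K onto whatever mesh the solver currently evaluates on):
from scipy.integrate import solve_bvp
import numpy as np

L = 10
nodes = 100
A = 1000
B = 1500
C = 0.05
x = np.linspace(0, L, nodes)
vector = np.ones(nodes)  # K tabulated over the initial 100-node mesh

def fun(xq, y):
    # xq is whatever array the solver evaluates on (e.g. the 99 midpoints),
    # so interpolate the table instead of using it directly
    K = np.interp(xq, x, vector)
    return np.vstack((y[1], y[2], y[3], K - C*y[0]))

def bc(ya, yb):
    return np.array([ya[2], yb[2], ya[3] + A/B, yb[3]])

y_a = np.zeros((4, x.size))
res_a = solve_bvp(fun, bc, x, y_a)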
I have written a function that, given parameters, can apply a piecewise linear fit, with arbitrarily many piecewise sections, to some data.
I am trying to fit the function to my data using scipy.optimize.curve_fit, but I am receiving an "OptimizeWarning: Covariance of the parameters could not be estimated" error. I believe this may be because of the nested lambda functions I am using to define the piecewise sections.
Is there an easy way to tweak my code to get round this, or a different scipy optimisation function that might be more suitable?
#The piecewise function
import numpy as np
import scipy.optimize

def piecewise_linear(x, *params):
    N = len(params)/2
    if N.is_integer(): N = int(N)
    else: raise ValueError()
    c = params[0]
    xbounds = params[1:N]
    grads = params[N:]
    #First we define our conditions, which are true if x is a member of a given
    #bin.
    conditions = []
    #first and last bins are a special case:
    cond0 = lambda x: x < xbounds[0]
    condl = lambda x: x >= xbounds[-1]
    conditions.append(cond0(x))
    for i in range(len(xbounds)-1):
        cond = lambda x: (x >= xbounds[i]) & (x < xbounds[i+1])
        conditions.append(cond(x))
    conditions.append(condl(x))
    #Next we define our linear regression function for each bin. The offset
    #for each bin depends on where the previous bin ends, so we define
    #the regression functions recursively:
    functions = []
    func0 = lambda x: grads[0]*x + c
    functions.append(func0)
    for i in range(len(grads)-1):
        func = (lambda j: lambda x: grads[j+1]*(x-xbounds[j])
                + functions[j](xbounds[j]))(i)
        functions.append(func)
    return np.piecewise(x, conditions, functions)
#Some data
x=np.arange(100)
y=np.array([*np.arange(0,19,1),*np.arange(20,59,2),\
*np.arange(60,20,-1),*np.arange(21,42,1)]) + np.random.randn(100)
#A first guess of parameters
cguess=0
boundguess=[20,30,50]
gradguess=[1,1,1,1]
p0=[cguess,*boundguess,*gradguess]
fit=scipy.optimize.curve_fit(piecewise_linear,x,y,p0=p0)
Here is example code that fits two straight lines to a curved data set with a breakpoint, where the line parameters and breakpoint are all fitted. This example uses scipy's Differential Evolution genetic algorithm to determine initial parameter estimates for the regression. That module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires bounds within which to search. In this example those search bounds are derived from the data itself. Note that it is much easier to find ranges for the initial parameter estimates than to give specific values.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings
xData = numpy.array([19.1647, 18.0189, 16.9550, 15.7683, 14.7044, 13.6269, 12.6040, 11.4309, 10.2987, 9.23465, 8.18440, 7.89789, 7.62498, 7.36571, 7.01106, 6.71094, 6.46548, 6.27436, 6.16543, 6.05569, 5.91904, 5.78247, 5.53661, 4.85425, 4.29468, 3.74888, 3.16206, 2.58882, 1.93371, 1.52426, 1.14211, 0.719035, 0.377708, 0.0226971, -0.223181, -0.537231, -0.878491, -1.27484, -1.45266, -1.57583, -1.61717])
yData = numpy.array([0.644557, 0.641059, 0.637555, 0.634059, 0.634135, 0.631825, 0.631899, 0.627209, 0.622516, 0.617818, 0.616103, 0.613736, 0.610175, 0.606613, 0.605445, 0.603676, 0.604887, 0.600127, 0.604909, 0.588207, 0.581056, 0.576292, 0.566761, 0.555472, 0.545367, 0.538842, 0.529336, 0.518635, 0.506747, 0.499018, 0.491885, 0.484754, 0.475230, 0.464514, 0.454387, 0.444861, 0.437128, 0.415076, 0.401363, 0.390034, 0.378698])
def func(xArray, breakpoint, slopeA, offsetA, slopeB, offsetB):
    returnArray = []
    for x in xArray:
        if x < breakpoint:
            returnArray.append(slopeA * x + offsetA)
        else:
            returnArray.append(slopeB * x + offsetB)
    return returnArray
# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)
def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)
    slope = 10.0 * (maxY - minY) / (maxX - minX) # times 10 for safety margin
    parameterBounds = []
    parameterBounds.append([minX, maxX]) # search bounds for breakpoint
    parameterBounds.append([-slope, slope]) # search bounds for slopeA
    parameterBounds.append([minY, maxY]) # search bounds for offsetA
    parameterBounds.append([-slope, slope]) # search bounds for slopeB
    parameterBounds.append([minY, maxY]) # search bounds for offsetB
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x
# by default, differential_evolution polishes its best result with a local minimizer
geneticParameters = generate_Initial_Parameters()
# call curve_fit (without bounds), using the genetic algorithm estimates as the initial guess
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Parameters:', fittedParameters)
print()
modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData
SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))
print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()
##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)
    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')
    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)
    # now the model as a line plot
    axes.plot(xModel, yModel)
    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label
    plt.show()
    plt.close('all') # clean up after using pyplot
graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
I am looking at the KalmanFilter from pykalman shown in examples:
pykalman documentation
Example 1
Example 2
and I am wondering about the difference between
observation_covariance=100,
vs
observation_covariance=1,
the documentation states
observation_covariance R: e(t)^2 ~ Gaussian (0, R)
How should the value be set here correctly?
Additionally, is it possible to apply the Kalman filter without intercept in the above module?
The observation covariance shows how much error you assume to be in your input data. The Kalman filter works well on normally distributed data. Under this assumption you can use the 3-sigma rule to calculate the covariance (in this case the variance) of your observation based on the maximum error in the observation.
The values in your question can be interpreted as follows:
Example 1
observation_covariance = 100
sigma = sqrt(observation_covariance) = 10
max_error = 3*sigma = 30
Example 2
observation_covariance = 1
sigma = sqrt(observation_covariance) = 1
max_error = 3*sigma = 3
So you need to choose the value based on your observation data. The more accurate the observation, the smaller the observation covariance.
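As a small numeric illustration (plain Python; the max_error value is an assumption you make about your sensor, not something prescribed by pykalman):
# 3-sigma rule: ~99.7% of Gaussian noise lies within 3*sigma of the mean,
# so treat the assumed worst-case observation error as 3*sigma
max_error = 30.0                     # assumed maximum error in the observation
sigma = max_error / 3.0
observation_covariance = sigma ** 2  # -> 100.0, matching Example 1
print(observation_covariance)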
Another point: you can tune your filter by manipulating the covariance, but I think it's not a good idea. The higher the observation covariance, the weaker the impact a new observation has on the filter state.
Sorry, I did not understand the second part of your question (about the Kalman Filter without intercept). Could you please explain what you mean?
You are trying to use a regression model and both intercept and slope belong to it.
---------------------------
UPDATE
I prepared some code and plots to answer your questions in details. I used EWC and EWA historical data to stay close to the original article.
First of all, here is the code (pretty much the same as in the examples above, but with different notation):
from pykalman import KalmanFilter
import numpy as np
import matplotlib.pyplot as plt
# reading data (quick and dirty)
Datum = []
EWA = []
EWC = []
for line in open('data/dataset.csv'):
    f1, f2, f3 = line.split(';')
    Datum.append(f1)
    EWA.append(float(f2))
    EWC.append(float(f3))
n = len(Datum)
# Filter Configuration
# both slope and intercept have to be estimated
# transition_matrix
F = np.eye(2) # identity matrix because x_(k+1) = x_(k) + noise
# observation_matrix
# H_k = [EWA_k 1]
H = np.vstack([np.matrix(EWA), np.ones((1, n))]).T[:, np.newaxis]
# transition_covariance
Q = [[1e-4, 0],
[ 0, 1e-4]]
# observation_covariance
R = 1 # max error = 3
# initial_state_mean
X0 = [0,
0]
# initial_state_covariance
P0 = [[ 1, 0],
[ 0, 1]]
# Kalman-Filter initialization
kf = KalmanFilter(n_dim_obs=1, n_dim_state=2,
transition_matrices = F,
observation_matrices = H,
transition_covariance = Q,
observation_covariance = R,
initial_state_mean = X0,
initial_state_covariance = P0)
# Filtering
state_means, state_covs = kf.filter(EWC)
# Restore EWC based on EWA and estimated parameters
EWC_restored = np.multiply(EWA, state_means[:, 0]) + state_means[:, 1]
# Plots
plt.figure(1)
ax1 = plt.subplot(211)
plt.plot(state_means[:, 0], label="Slope")
plt.grid()
plt.legend(loc="upper left")
ax2 = plt.subplot(212)
plt.plot(state_means[:, 1], label="Intercept")
plt.grid()
plt.legend(loc="upper left")
# check the result
plt.figure(2)
plt.plot(EWC, label="EWC original")
plt.plot(EWC_restored, label="EWC restored")
plt.grid()
plt.legend(loc="upper left")
plt.show()
I could not retrieve data using pandas, so I downloaded them and read from the file.
Here you can see the estimated slope and intercept:
To test the estimated data I restored the EWC value from the EWA using the estimated parameters:
About the observation covariance value
By varying the observation covariance value you tell the Filter how accurate the input data is (normally you just describe your confidence in the observation using some datasheets or your knowledge about the system).
Here are estimated parameters and the restored EWC values using different observation covariance values:
You can see that the filter follows the original function more closely the higher the confidence in the observations (smaller R). If the confidence is low (bigger R), the filter leaves the initial estimate (slope = 0, intercept = 0) very slowly and the restored function is far from the original one.
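If you want to reproduce that comparison, a minimal sketch looks like this (it reuses F, H, Q, X0, P0 and the EWA/EWC lists from the script above; the set of R values is arbitrary):
# re-run the filter with different observation covariances R
for R in [0.1, 1, 10, 100]:
    kf = KalmanFilter(n_dim_obs=1, n_dim_state=2,
                      transition_matrices = F,
                      observation_matrices = H,
                      transition_covariance = Q,
                      observation_covariance = R,
                      initial_state_mean = X0,
                      initial_state_covariance = P0)
    state_means, _ = kf.filter(EWC)
    # restore EWC from EWA with the parameters estimated under this R
    plt.plot(np.multiply(EWA, state_means[:, 0]) + state_means[:, 1],
             label="R = %g" % R)
plt.plot(EWC, label="EWC original")
plt.grid()
plt.legend(loc="upper left")
plt.show()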
About the frozen intercept
If you want to freeze the intercept for some reason, you need to change the whole model and all filter parameters.
In the normal case we had:
x = [slope; intercept] #estimation state
H = [EWA 1] #observation matrix
z = [EWC] #observation
Now we have:
x = [slope] #estimation state
H = [EWA] #observation matrix
z = [EWC-const_intercept] #observation
Results:
Here is the code:
from pykalman import KalmanFilter
import numpy as np
import matplotlib.pyplot as plt
# only slope has to be estimated (it will be manipulated by the constant intercept) - mathematically incorrect!
const_intercept = 10
# reading data (quick and dirty)
Datum = []
EWA = []
EWC = []
for line in open('data/dataset.csv'):
    f1, f2, f3 = line.split(';')
    Datum.append(f1)
    EWA.append(float(f2))
    EWC.append(float(f3))
n = len(Datum)
# Filter Configuration
# transition_matrix
F = 1 # identity matrix because x_(k+1) = x_(k) + noise
# observation_matrix
# H_k = [EWA_k]
H = np.matrix(EWA).T[:, np.newaxis]
# transition_covariance
Q = 1e-4
# observation_covariance
R = 1 # max error = 3
# initial_state_mean
X0 = 0
# initial_state_covariance
P0 = 1
# Kalman-Filter initialization
kf = KalmanFilter(n_dim_obs=1, n_dim_state=1,
transition_matrices = F,
observation_matrices = H,
transition_covariance = Q,
observation_covariance = R,
initial_state_mean = X0,
initial_state_covariance = P0)
# Creating the observation based on EWC and the constant intercept
z = EWC[:] # copy the list (not just assign the reference!)
z[:] = [x - const_intercept for x in z]
# Filtering
state_means, state_covs = kf.filter(z) # the estimation for the EWC data minus constant intercept
# Restore EWC based on EWA and estimated parameters
EWC_restored = np.multiply(EWA, state_means[:, 0]) + const_intercept
# Plots
plt.figure(1)
ax1 = plt.subplot(211)
plt.plot(state_means[:, 0], label="Slope")
plt.grid()
plt.legend(loc="upper left")
ax2 = plt.subplot(212)
plt.plot(const_intercept*np.ones((n, 1)), label="Intercept")
plt.grid()
plt.legend(loc="upper left")
# check the result
plt.figure(2)
plt.plot(EWC, label="EWC original")
plt.plot(EWC_restored, label="EWC restored")
plt.grid()
plt.legend(loc="upper left")
plt.show()
I am trying to compute the convolution of a sound signal without using the built-in conv function, using arrays instead. x is the input signal and h is the impulse response. However, when I run my main function, which calls my_conv, I get these errors:
Undefined function or variable 'nx'.
Error in my_conv (line 6)
ly=nx+nh-1;
Error in main_stereo (line 66)
leftchannel = my_conv(leftimp, mono); % convolution of left ear impulse response and mono
This is my function my_conv:
function [y]=my_conv(x,h)
x=x(:);
h=h(:);
lx=length(x);
lh=length(h);
ly=nx+nh-1;
Y=zeros(nh,ny);
for i =1:nh
Y((1:nx)+(i-1),i)=x;
end
y=Y*h;
What changes should I make to fix these errors and get this code running?
I am trying to implement the function in this code:
input_filename = 'speech.wav';
stereo_filename = 'stereo2.wav';
imp_filename = 'H0e090a.dat';
len_imp = 128;
fp = fopen(imp_filename, 'r', 'ieee-be');
data = fread(fp, 2*len_imp, 'short');
fclose(fp);
[mono,Fs] = audioread(input_filename);
if (Fs~=44100)
end
len_mono = length(mono);
leftimp = data(1:2:2*len_imp);
rightimp = data(2:2:2*len_imp);
leftchannel = my_conv(leftimp, mono);
rightchannel = my_conv(rightimp, mono);
leftchannel = reshape(leftchannel , length(leftchannel ), 1);
rightchannel = reshape(rightchannel, length(rightchannel), 1);
norml = max(abs([leftchannel; rightchannel]))*1.05;
audiowrite(stereo_filename, [leftchannel rightchannel]/norml, Fs);
As pointed out by @SardarUsama in comments, the error
Undefined function or variable 'nx'.
Error in my_conv (line 6) ly=nx+nh-1;
tells you that the variable nx has not been defined before its usage on the line ly=nx+nh-1. Given the naming of the variables and their usage, it looks like what you intended to do was:
nx = length(x);
nh = length(h);
ny = nx+nh-1;
After making these modifications and solving the first error, you are likely going to get another error telling you that
error: my_conv: operator *: nonconformant arguments
This error is due to the dimensions in the initialization of the Y matrix being swapped. It can be fixed by initializing Y with Y = zeros(ny,nh);. The resulting my_conv function follows:
function [y]=my_conv(x,h)
nx=length(x);
nh=length(h);
ny=nx+nh-1;
Y=zeros(ny,nh);
for i =1:nh
Y((1:nx)+(i-1),i)=x;
end
y=Y*h;
Note that storing every possible shift of one of the input vectors in the matrix Y, in order to compute the convolution as a matrix multiplication, is not very memory efficient (it requires O(N*M) storage). A more memory-efficient implementation computes each element of the output vector directly:
function [y]=my_conv(x,h)
nx=length(x);
nh=length(h);
if (nx < nh)
y = my_conv(h,x);
else
ny=nx+nh-1;
y = zeros(1,ny);
for i =1:ny
idh = [max(i-(ny-nh),1):min(i,nh)];
idx = [min(i,nx):-1:max(nx-(ny-i),1)];
y(i) = sum(x(idx).*h(idh));
end
end
An alternative implementation, which can be more computationally efficient for large arrays, makes use of the convolution theorem and the Fast Fourier Transform (FFT):
function [y]=my_conv2(x,h)
nx=length(x);
nh=length(h);
ny=nx+nh-1;
if (size(x,1)>size(x,2))
x = transpose(x);
end
if (size(h,1)>size(h,2))
h = transpose(h);
end
Xf = fft([x zeros(size(x,1),ny-nx)]);
Hf = fft([h zeros(size(h,1),ny-nh)]);
y = ifft(Xf .* Hf);
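The same convolution-theorem idea can be sanity-checked quickly, for example in Python/NumPy with made-up test signals (hypothetical data, not part of the original code):
import numpy as np
x = np.random.randn(1000)  # arbitrary test signal
h = np.random.randn(128)   # arbitrary impulse response
ny = len(x) + len(h) - 1
# zero-padded FFTs multiplied pointwise give the full linear convolution
y_fft = np.fft.ifft(np.fft.fft(x, ny) * np.fft.fft(h, ny)).real
print(np.allclose(y_fft, np.convolve(x, h)))  # expected: True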
I'm solving a DAE problem with the ode15i solver. I have 8 variables and 8 equations, and the system is complex enough that the only working solver so far is ode15i.
I've used this guide: http://se.mathworks.com/help/symbolic/set-up-your-dae-problem.html. However, the guide doesn't cover solving a problem with a varying input.
My system has a time-dependent input, a function. The function itself is quite simple, but the problem is that the time t in the DAE system is in symbolic form, so my input function can't produce the corresponding input value, because it can't evaluate a symbolic time value t.
Here is my input function for the DAE system:
function [delta] = delta(t)
t=vpa(t)
if t <= 1.01
delta = 0;
elseif t <= 3.61
delta = 0.1222;
elseif t <= 4.33
delta = 0;
elseif t <= 7.21
delta = -0.1222;
else
delta = 0;
end
The DAE problem:
syms v1(t) r1(t) r2(t) r3(t) r4(t) fii(t) sig(t) psi(t);
tspan = 0:0.01:10;
....
a11 = delta(t)-(v1(t)+r1(t)*l_1_1)/u;
....
eqs = [...]
vars = [...]
f = daeFunction(eqs,vars);
y0est = zeros(8,1);
yp0est = zeros(8,1);
opt = odeset('RelTol',10.0^(-7),'Abstol',10.0^(-7));
[y0,yp0] = decic(f,0,y0est, [], yp0est, [], opt);
[t,y] = ode15i(f,tspan, y0, yp0, opt);
The error I receive is as follows:
Conversion to logical from sym is not possible.
Error in delta (line 3)
if t <= 1.01
Error in Nonlin_painonsiirto_DUO2 (line 181)
a11 = delta(t)-(v1(t)+r1(t)*l_1_1)/u;
The system works if the delta input is a constant number or a trigonometric function of (t) such as:
delta = 0.0175;
delta = sin(t)*0.0175;
I've tried the delta function without the vpa(t) command and with the command double(t) instead, but neither helps. I also tried to use my system's time vector tspan = 0:0.01:10 as the input, which is given as the timespan for ode15i:
delta(tspan)
However, then it tries to compute the whole vector tspan, which results in an error, because the matrix dimensions don't agree.
Hopefully the problem here is understandable, thanks.
-Jere