In the following code from the OpenTURNS FORM example:
import openturns as ot
model = ot.SymbolicFunction(['x1', 'x2'], ['x1^2+x2'])
R = ot.CorrelationMatrix(2)
R[0,1] = -0.6
inputDist = ot.Normal([0.,0.], R)
inputDist.setDescription(['X1', 'X2'])
inputVector = ot.RandomVector(inputDist)
# Create the output random vector Y = model(X)
Y = ot.CompositeRandomVector(model, inputVector)
# Create the event Y > 4
threshold = 4.0
event = ot.ThresholdEvent(Y, ot.Greater(), threshold)
# Create a FORM algorithm
solver = ot.Cobyla()
startingPoint = inputDist.getMean()
algo = ot.FORM(solver, event, startingPoint)
# Run the algorithm and retrieve the result
algo.run()
result_form = algo.getResult()
print(result_form)
# Create the post-analytical importance sampling simulation algorithm
algo = ot.PostAnalyticalImportanceSampling(result_form)
algo.run()
print(algo.getResult())
result = algo.getResult()
# Create the post-analytical controlled importance sampling simulation algorithm
algo = ot.PostAnalyticalControlledImportanceSampling(result_form)
algo.run()
print(algo.getResult())
Is it possible to see the values of X1, X2 and Y during the optimisation?
I am hoping to implement this on a simulation that takes a few minutes to run, so it would be good to watch the steps of the optimisation process.
Thanks :-)
You just have to set the verbose flag in the solver:
solver.setVerbose(True)
then you will see how Cobyla steps through the search for the design point:
cobyla: the initial value of RHO is 1.000000E-01 and PARMU is set to zero.
cobyla: NFVALS = 1, F = 0.000000E+00, MAXCV = 3.999990E+00
cobyla: X = 0.000000E+00 0.000000E+00
cobyla: NFVALS = 2, F = 5.000000E-03, MAXCV = 4.049990E+00
cobyla: X = 1.000000E-01 0.000000E+00
cobyla: NFVALS = 3, F = 5.000000E-03, MAXCV = 3.919990E+00
cobyla: X = 0.000000E+00 1.000000E-01
cobyla: increase in PARMU to 3.370787E-02
cobyla: NFVALS = 4, F = 5.000000E-03, MAXCV = 3.897541E+00
cobyla: X =-5.299989E-02 8.479983E-02
...
Note that the optimization is done in the standard space. You will have to add logging in your code wrapper if you want the same information in the physical space.
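If you want to do that yourself, one option is to wrap the model in an ot.PythonFunction that prints every evaluation; here is a minimal sketch for the symbolic model above (the print format is just an example). If your OpenTURNS version provides ot.MemoizeFunction, wrapping the model in it and reading its input/output history after the run is another option.
import openturns as ot
# Minimal logging wrapper (sketch): every evaluation is printed in the physical space
def logged_model(X):
    y = X[0]**2 + X[1]
    print('X1 = %g, X2 = %g, Y = %g' % (X[0], X[1], y))
    return [y]
model = ot.PythonFunction(2, 1, logged_model)
# ... then build Y, the event and the FORM algorithm with this model as before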
Cheers
Régis
I am looking at the KalmanFilter from pykalman, shown in these examples:
pykalman documentation
Example 1
Example 2
and I am wondering about the difference between
observation_covariance=100,
vs
observation_covariance=1,
the documentation states
observation_covariance R: e(t)^2 ~ Gaussian (0, R)
How should this value be set correctly?
Additionally, is it possible to apply the Kalman filter without an intercept in the above module?
The observation covariance reflects how much error you assume to be in your input data. The Kalman filter works well on normally distributed data. Under this assumption you can use the 3-sigma rule to calculate the covariance (in this case the variance) of your observation from the maximum error you expect in the observation.
The values in your question can be interpreted as follows:
Example 1
observation_covariance = 100
sigma = sqrt(observation_covariance) = 10
max_error = 3*sigma = 30
Example 2
observation_covariance = 1
sigma = sqrt(observation_covariance) = 1
max_error = 3*sigma = 3
So you need to choose the value based on your observation data. The more accurate the observation, the smaller the observation covariance.
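For example, going from an assumed maximum measurement error to a covariance (a small sketch; the max_error value here is hypothetical and would come from your sensor specs or data):
# 3-sigma rule: ~99.7% of errors fall within +/- max_error
max_error = 30.0                        # hypothetical largest expected observation error
sigma = max_error / 3.0
observation_covariance = sigma ** 2     # = 100 for this example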
Another point: you can tune your filter by manipulating the covariance, but I don't think it's a good idea. The higher the observation covariance, the weaker the impact a new observation has on the filter state.
Sorry, I did not understand the second part of your question (about the Kalman Filter without intercept). Could you please explain what you mean?
You are trying to use a regression model and both intercept and slope belong to it.
---------------------------
UPDATE
I prepared some code and plots to answer your questions in details. I used EWC and EWA historical data to stay close to the original article.
First of all, here is the code (pretty much the same as in the examples above, but with different notation):
from pykalman import KalmanFilter
import numpy as np
import matplotlib.pyplot as plt
# reading data (quick and dirty)
Datum=[]
EWA=[]
EWC=[]
for line in open('data/dataset.csv'):
    f1, f2, f3 = line.split(';')
    Datum.append(f1)
    EWA.append(float(f2))
    EWC.append(float(f3))
n = len(Datum)
# Filter Configuration
# both slope and intercept have to be estimated
# transition_matrix
F = np.eye(2) # identity matrix because x_(k+1) = x_(k) + noise
# observation_matrix
# H_k = [EWA_k 1]
H = np.vstack([np.matrix(EWA), np.ones((1, n))]).T[:, np.newaxis]
# transition_covariance
Q = [[1e-4, 0],
[ 0, 1e-4]]
# observation_covariance
R = 1 # max error = 3
# initial_state_mean
X0 = [0,
0]
# initial_state_covariance
P0 = [[ 1, 0],
[ 0, 1]]
# Kalman-Filter initialization
kf = KalmanFilter(n_dim_obs=1, n_dim_state=2,
                  transition_matrices=F,
                  observation_matrices=H,
                  transition_covariance=Q,
                  observation_covariance=R,
                  initial_state_mean=X0,
                  initial_state_covariance=P0)
# Filtering
state_means, state_covs = kf.filter(EWC)
# Restore EWC based on EWA and estimated parameters
EWC_restored = np.multiply(EWA, state_means[:, 0]) + state_means[:, 1]
# Plots
plt.figure(1)
ax1 = plt.subplot(211)
plt.plot(state_means[:, 0], label="Slope")
plt.grid()
plt.legend(loc="upper left")
ax2 = plt.subplot(212)
plt.plot(state_means[:, 1], label="Intercept")
plt.grid()
plt.legend(loc="upper left")
# check the result
plt.figure(2)
plt.plot(EWC, label="EWC original")
plt.plot(EWC_restored, label="EWC restored")
plt.grid()
plt.legend(loc="upper left")
plt.show()
I could not retrieve the data using pandas, so I downloaded it and read it from a file.
Here you can see the estimated slope and intercept:
To test the estimates, I restored the EWC values from EWA using the estimated parameters:
About the observation covariance value
By varying the observation covariance you tell the filter how accurate the input data is (normally you describe your confidence in the observation using datasheets or your knowledge of the system).
Here are the estimated parameters and the restored EWC values for different observation covariance values:
You can see that the filter follows the original function more closely when the confidence in the observation is high (smaller R). If the confidence is low (bigger R), the filter leaves the initial estimate (slope = 0, intercept = 0) very slowly and the restored function ends up far from the original one.
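To reproduce the low-confidence behaviour yourself, here is a minimal sketch that reuses the variables from the first code listing above, just with a much larger observation covariance:
# Same filter as above, but with weaker trust in the EWC observations (R = 100)
kf_low_conf = KalmanFilter(n_dim_obs=1, n_dim_state=2,
                           transition_matrices=F,
                           observation_matrices=H,
                           transition_covariance=Q,
                           observation_covariance=100,
                           initial_state_mean=X0,
                           initial_state_covariance=P0)
state_means_low, _ = kf_low_conf.filter(EWC)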
About the frozen intercept
If you want to freeze the intercept for some reason, you need to change the whole model and all filter parameters.
In the normal case we had:
x = [slope; intercept] #estimation state
H = [EWA 1] #observation matrix
z = [EWC] #observation
Now we have:
x = [slope] #estimation state
H = [EWA] #observation matrix
z = [EWC-const_intercept] #observation
Results:
Here is the code:
from pykalman import KalmanFilter
import numpy as np
import matplotlib.pyplot as plt
# only the slope has to be estimated (the observations are shifted by the constant intercept) - mathematically incorrect!
const_intercept = 10
# reading data (quick and dirty)
Datum=[]
EWA=[]
EWC=[]
for line in open('data/dataset.csv'):
    f1, f2, f3 = line.split(';')
    Datum.append(f1)
    EWA.append(float(f2))
    EWC.append(float(f3))
n = len(Datum)
# Filter Configuration
# transition_matrix
F = 1 # scalar identity because x_(k+1) = x_(k) + noise
# observation_matrix
# H_k = [EWA_k]
H = np.matrix(EWA).T[:, np.newaxis]
# transition_covariance
Q = 1e-4
# observation_covariance
R = 1 # max error = 3
# initial_state_mean
X0 = 0
# initial_state_covariance
P0 = 1
# Kalman-Filter initialization
kf = KalmanFilter(n_dim_obs=1, n_dim_state=1,
                  transition_matrices=F,
                  observation_matrices=H,
                  transition_covariance=Q,
                  observation_covariance=R,
                  initial_state_mean=X0,
                  initial_state_covariance=P0)
# Creating the observation based on EWC and the constant intercept
z = EWC[:] # copy the list (not just assign the reference!)
z[:] = [x - const_intercept for x in z]
# Filtering
state_means, state_covs = kf.filter(z) # the estimation for the EWC data minus constant intercept
# Restore EWC based on EWA and estimated parameters
EWC_restored = np.multiply(EWA, state_means[:, 0]) + const_intercept
# Plots
plt.figure(1)
ax1 = plt.subplot(211)
plt.plot(state_means[:, 0], label="Slope")
plt.grid()
plt.legend(loc="upper left")
ax2 = plt.subplot(212)
plt.plot(const_intercept*np.ones((n, 1)), label="Intercept")
plt.grid()
plt.legend(loc="upper left")
# check the result
plt.figure(2)
plt.plot(EWC, label="EWC original")
plt.plot(EWC_restored, label="EWC restored")
plt.grid()
plt.legend(loc="upper left")
plt.show()
My calculation involves cosh(x) and sinh(x) where x is around 700 - 1000, which exceeds MATLAB's double-precision range, so the result is NaN. The problem in the code is that the argument 2*k_B*T./elastic_restor_coeff blows up when radius is small (below 5e-9 in the code), because elastic_restor_coeff scales with radius^3. My goal is to do another integral over a radius distribution from 1e-9 to 100e-9, which is still a work in progress because I am stuck at this problem.
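For reference, a small numpy sketch of the same overflow (the limit is the same for MATLAB doubles: exp(x) exceeds realmax ≈ 1.8e308 once x goes above about 709.8):
import numpy as np
x = 1000.0
print(np.cosh(x))               # inf: cosh(x) ~ exp(x)/2 overflows for x > ~710
print(np.cosh(x) / np.sinh(x))  # nan (inf/inf), although the true ratio tends to 1
# Ratios of such terms stay finite if formed in log space, e.g.
# cosh(y)/cosh(x) ~ exp(y - x) for large x and y:
print(np.exp(700.0 - 1000.0))   # ~5.1e-131, the well-behaved value of cosh(700)/cosh(1000)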
My workaround right now is to approximate the real part of chi_para with a step function when threshold2 reaches a value of about 300. The number 300 is obtained by using the lowest possible value of radius and reading the cut-off value from the plot. I don't think this approach is good enough for the actual calculation, since the value changes with radius, so I am looking for a better approximation method. Also, the imaginary part of chi_para is difficult to approximate since it looks like a pulse instead of a step.
Here is my code without an integration over a radius distribution.
k_B = 1.38e-23;
T = 296;
radius = [5e-9,10e-9, 20e-9, 30e-9,100e-9];
fric_coeff = 8*pi*1e-3.*radius.^3;
elastic_restor_coeff = 8*pi*1.*radius.^3;
time_const = fric_coeff/elastic_restor_coeff;
omega_ar = logspace(-6,6,60);
chi_para = zeros(1,length(omega_ar));
chi_perpen = zeros(1,length(omega_ar));
threshold = zeros(1,length(omega_ar));
threshold2 = zeros(1,length(omega_ar));
for i = 1:length(radius)
for k = 1:length(omega_ar)
omega = omega_ar(k);
fric_coeff = 8*pi*1e-3.*radius(i).^3;
elastic_restor_coeff = 8*pi*1.*radius(i).^3;
time_const = fric_coeff/elastic_restor_coeff;
G_para_func = @(t) ((cosh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))-1).*exp(1i.*omega.*t))./(cosh(2*k_B*T./elastic_restor_coeff)-1);
G_perpen_func = @(t) ((sinh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))).*exp(1i.*omega.*t))./(sinh(2*k_B*T./elastic_restor_coeff));
chi_para(k) = (1 + 1i*omega*integral(G_para_func, 0, inf));
chi_perpen(k) = (1 + 1i*omega*integral(G_perpen_func, 0, inf));
threshold(k) = 2*k_B*T./elastic_restor_coeff*omega;
threshold2(k) = 2*k_B*T./elastic_restor_coeff*(omega*time_const - 1);
end
figure(1);
semilogx(omega_ar,real(chi_para),omega_ar,imag(chi_para));
hold on;
figure(2);
semilogx(omega_ar,real(chi_perpen),omega_ar,imag(chi_perpen));
hold on;
end
Here is the simplified function that I would like to approximate:
where x is iterated in a loop and the maximum value of x is about 700.
I have a backwards recursion for a binomial tree. At each node an unknown C enters in such a way that at the starting node we get a formula, A(1,1), that depends upon C. The code is as follows:
A=sym(zeros(1,Steps));
B=zeros(1,Steps);
syms C; % The unknown that enters A at every node
tic
for t=Steps-1:-1:1
% Values needed in A and B
Lambda=1-exp(-(1./S(t,1:t).^b).*h);
Q=((1./D(t))./(1-Lambda)-d)/(u-d);
R=normcdf(a0+a1*Lambda);
% the backward recursion for A and B
A(1:t)=D(t)*C+D(t)*...
(Q.*(1-Lambda).*A(1:t) ...
+ (1-Q).*(1-Lambda).*A(2:t+1));
B(1:t)=Lambda.*(1-R)+D(t)*...
(Q.*(1-Lambda).*B(1:t)...
+ (1-Q.*(1-Lambda).*B(2:t+1)));
end
C = solve(A(1,1)==sym(B(1,1)),C);
This code takes around 4 seconds when Steps = 104. If, however, we remove C and make A a regular double matrix, it takes only about 0.02 seconds. Using syms thus increases the calculation time by a factor of 200, which seems too much to me. Any suggestions for speeding this up?
I am using MATLAB 2013b on a 13-inch MacBook Air (spring 2013). In case it is relevant, here is the code that comes before the part above:
a0 = 0.9;
a1 = -3.2557;
b = 1.2594;
S0=18.57;
sigma=0.6579;
h=1/104;
T=1;
Steps=T/h;
pl=T/h; % path length - number of steps to maturity
f=transpose(normrnd(0.04, 0.001, [1 pl]));
D=exp(-h*f); % discount values
u=exp(sigma*sqrt(h));
d=1/u;
u_row = repmat(cumprod([1 u*ones(1,pl-1)]),pl,1);
d_row = cumprod(tril(d*ones(pl),-1)+triu(ones(pl)),1);
path = tril(u_row.*d_row);
S=S0*path;
Unless I'm missing something, there's no need for symbolic math or an unknown variable. Because A starts at zero and the recursion is linear and homogeneous in C, the final A(1,1) is simply C times the value you get by running the recursion with C = 1, so you can run it with C = 1 and solve A(1,1) == B(1,1) for the actual value at the end. Here's the full code with some other improvements:
rng(1); % Always seed your random number generator
a0 = 0.9;
a1 = -3.2557;
b = 1.2594;
S0 = 18.57;
sigma = 0.6579;
h = 1/104;
T = 1;
Steps = T/h;
pl = T/h;
f = 0.04+0.001*randn(pl,1);
D = exp(-h*f);
u = exp(sigma*sqrt(h));
d = 1/u;
u_row = repmat(cumprod([1 u*ones(1,pl-1)]),pl,1);
d_row = cumprod(tril(d*ones(pl),-1)+triu(ones(pl)),1);
pth = tril(u_row.*d_row);
S = S0*pth;
A = zeros(1,Steps);
B = zeros(1,Steps);
tic
for t = Steps-1:-1:1
Lambda = 1-exp(-h./S(t,1:t).^b);
Q = ((1./D(t))./(1-Lambda)-d)/(u-d);
R = 0.5*erfc((-a0-a1*Lambda)/sqrt(2)); % Faster than normcdf
% Backward recursion for A and B
A = D(t)+D(t)*(Q.*(1-Lambda).*A(1:end-1) + ...
(1-Q).*(1-Lambda).*A(2:end));
B = Lambda.*(1-R)+D(t)*(Q.*(1-Lambda).*B(1:end-1) + ...
(1-Q.*(1-Lambda).*B(2:end)));
end
C = B/A
toc
This takes about 0.005 seconds to run on my MacBook Pro. There are certainly other improvements you could make. Many combinations of variables are used in multiple places (e.g., 1-Lambda or D(t)*(1-Lambda)) and could be calculated once; MATLAB may optimize some of this on its own. You could also try moving Lambda, Q, and R out of the loop, or at least calculating parts of them outside the loop and saving the results in arrays.
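As a quick sanity check of the C = 1 trick, here is a toy two-step version of the recursion in sympy (d and q are hypothetical stand-ins for the D(t) and Q.*(1-Lambda) factors); starting from A = 0 the result stays proportional to C, which is why C = B(1,1)/A(1,1) solves A(1,1) == B(1,1):
import sympy as sp
# Toy two-step backward recursion A <- d*C + d*q*A, starting from A = 0
C, d, q = sp.symbols('C d q')
A = 0
for _ in range(2):
    A = d*C + d*q*A
print(sp.factor(A))   # C*d*(d*q + 1): linear and homogeneous in C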
I'm looking for a way to speed up some simple two-port matrix calculations. See the code below for what I'm doing currently. In essence, I first create an [Nx1] frequency vector. I then loop through the frequency vector and create the [2x2] matrices H1 and H2 (all functions of f). After a bit of simple matrix math, including a matrix left division '\', I get my result pb as an [Nx1] vector. The problem is the loop: it takes a long time to calculate, and I'm looking for a way to improve the efficiency of the calculations. I tried assembling the problem using [2x2xN] transfer matrices, but the mtimes operation cannot handle 3-D multiplications.
Can anybody please give me an idea how I can approach such a calculation without the need for looping through f?
Many thanks: svenr
% calculate frequency and wave number vector
f = linspace(20,200,400);
w = 2.*pi.*f;
% calculation for each frequency w
for i=1:length(w)
H1(i,1) = {[1, rho*c*k(i)^2 / (crad*pi); 0,1]};
H2(i,1) = {[1, 1i.*w(i).*mp; 0, 1]};
HZin(i,1) = {H1{i,1}*H2{i,1}};
temp_mat = HZin{i,1}*[1; 0];
Zin(i,1) = temp_mat(1,1)/temp_mat(2,1);
temp_mat= H1{i,1}\[1; 1/Zin(i,1)];
pb(i,1) = temp_mat(1,1); Ub(i,:) = temp_mat(2,1);
end
Assuming that length(w) == length(k) returns true, that rho, c, crad and mp are all scalars, and that the last line should be Ub(i,1) = temp_mat(2,1) instead of Ub(i,:) = temp_mat(2,1):
temp1 = repmat(eye(2),[1 1 length(w)]);
temp2 = temp1;
temp1(1,2,:) = rho*c*(k.^2)/crad/pi;
temp2(1,2,:) = (1i.*w)*mp;
H1 = permute(num2cell(temp1,[1 2]),[3 2 1]);
H2 = permute(num2cell(temp2,[1 2]),[3 2 1]);
HZin = cellfun(@(a,b)(a*b),H1,H2,'UniformOutput',0);
temp_cell = cellfun(@(a,b)(a*b),HZin,repmat({[1;0]},length(w),1),'UniformOutput',0);
Zin_cell = cellfun(@(a)(a(1,1)/a(2,1)),temp_cell,'UniformOutput',0);
Zin = cell2mat(Zin_cell);
temp2_cell = cellfun(@(a)([1;1/a]),Zin_cell,'UniformOutput',0);
temp3_cell = cellfun(@(a,b)(pinv(a)*b),H1,temp2_cell,'UniformOutput',0);
temp4 = cell2mat(temp3_cell);
pb(:,1) = temp4(1:2:end-1);
Ub(:,1) = temp4(2:2:end);
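As an aside, the [2x2xN] formulation mentioned in the question maps directly onto numpy, where matmul and solve broadcast over the leading axis; the sketch below is purely illustrative, with made-up entries standing in for the rho*c*k.^2/(crad*pi) and 1i.*w.*mp terms and for the right-hand side:
import numpy as np
N = 400
H1 = np.tile(np.eye(2, dtype=complex), (N, 1, 1))    # stack of N 2x2 matrices
H2 = np.tile(np.eye(2, dtype=complex), (N, 1, 1))
H1[:, 0, 1] = np.linspace(0.1, 1.0, N)               # placeholder entries
H2[:, 0, 1] = 1j * np.linspace(1.0, 10.0, N)         # placeholder entries
HZin = H1 @ H2                                        # N matrix products, no loop
rhs = np.tile([[1.0], [0.5]], (N, 1, 1))              # placeholder right-hand sides
x = np.linalg.solve(H1, rhs)                          # batched equivalent of H1 \ rhs
print(HZin.shape, x.shape)                            # (400, 2, 2) (400, 2, 1)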
I need to perform some elementary histogram matching on 2 sets of 3D data. This is part of a larger algorithm.
My goal is to perform this by minimising the following cost function (a rough code sketch of this cost follows the definitions below):
|| cumpdf(f(A)) - cumpdf(B) ||^2
where:
cumpdf is the cumulative histogram
f() is the linear transformation a*A + b, where a and b are affine coefficients to be determined
A is the image to be transformed and B is the image to be matched
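Roughly, for a given (a, b), the cost could be evaluated like this (an illustrative numpy sketch; the function name is made up, and the shared bin edges are an assumption so that the two cumulative histograms are comparable):
import numpy as np
# Sketch of || cumpdf(f(A)) - cumpdf(B) ||^2 for f(x) = a*x + b
def matching_cost(a, b, A, B, nbins=500):
    fA = a * A + b
    lo = min(fA.min(), B.min())
    hi = max(fA.max(), B.max())
    edges = np.linspace(lo, hi, nbins + 1)                 # shared bins for both images
    cum_fA = np.cumsum(np.histogram(fA, bins=edges)[0]) / fA.size
    cum_B = np.cumsum(np.histogram(B, bins=edges)[0]) / B.size
    return np.sum((cum_fA - cum_B) ** 2)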
I am using lsqcurvefit however I have run into some trouble and therefore really need some help.
A(maskA==0)=0;
B(maskB==0)=0;
[na,~] = hist(A(maskA~=0),500);
na = na ./ numel(A(maskA~=0));
x_data = cumsum(na);
[nb,~] = hist(B(maskB~=0),500);
nb = nb ./ numel(B(maskB~=0));
y_data = cumsum(nb);
xo = [1.5 -200];
[coeff,~] = lsqcurvefit(#cost,xo,x_data,y_data);
function F = cost(x,xc)
F = x(1).*A + x(2);
[nc,~] = hist(C(maskA~=0),500);
nc = nc / numel(C(maskA~=0));
xc = cumsum(nc);
maskA and maskB just represent some indexing I need to do.
My question is: I know that the above is wrong, but I think it best represents what I want to do regarding the cost function and the goal. Some help would be much appreciated!