Stuck on numerically evaluating this integral; I only have experience with simple quadrature schemes, so bear with me if this looks amateurish. The nested integral is below, using a Gauss-Hermite scheme on (-inf,+inf) and a variant of a Gauss-Laguerre scheme on (0,+inf), over a Gaussian density and a gamma(v/2,2) density. I get to the point where I'm nearly finished with the integration, and the intermediate steps look good, but I'm stuck on how to combine the weights to evaluate the overall integral. I'd be really grateful for suggestions on modifying the code, or for other ideas for a better quadrature scheme that solves the problem.
\begin{equation}
\int_{-\infty}^{\infty}\int_{0}^{\infty} \prod_{i=1}^{n}\Phi\left(\frac{\sqrt{w/v}\,C_{i}-a_{i}z}{\sqrt{1-a_{i}^{2}}}\right)f_{Z}(z)\,f_{W}(w)\,dw\,dz
\end{equation}
% script defines nodes, weights, and parameters, then calls one main function and one subfunction
rho=0.3; nfirms=10; h=repmat(0.1,[1,nfirms]); T=1; R=0.4; v=8; alpha=v/2; GaussPts=15;
% Quadrature nodes - Gaussian and gamma(v/2), from Miranda and Fackler's CompEcon toolbox
[x_norm,w_norm] = qnwnorm(GaussPts,0,1);
[x_gamma,w_gamma] = qnwgamma(GaussPts,alpha);
L_mat = zeros(nfirms+1,GaussPts);
for i = 1:GaussPts
    L_mat(:,i) = TC_gamma(x_norm(i,:),x_gamma(i,:),h,rho,T,v,nfirms);
end
w_norm_mat= repmat(w_norm',nfirms+1,1);
w_gamma_mat = repmat(w_gamma',nfirms+1,1);
% need to weight L_mat by the Gaussian and the chi-squared, i.e. gamma(v/2,2), weights?
ucl = L_mat.*w_norm_mat; % ?? HERE
ucl2 = sum(ucl.*w_gamma_mat,2); % ?? HERE
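One idea I'm considering: treat the double integral as a full tensor product of the two one-dimensional rules, i.e. evaluate TC_gamma on every (z-node, w-node) pair rather than pairing node i with node i, and weight each evaluation by the product of the 1-D weights. A minimal sketch under that assumption (not sure this matches what's intended):
% Tensor-product sketch: integral ~ sum_i sum_j w_norm(i)*w_gamma(j)*g(z_i, w_j)
ucl = zeros(nfirms+1,1);
for i = 1:GaussPts        % Gauss-Hermite nodes for the Gaussian density
    for j = 1:GaussPts    % gamma-density nodes for W
        ucl = ucl + w_norm(i)*w_gamma(j)* ...
            TC_gamma(x_norm(i), x_gamma(j), h, rho, T, v, nfirms);
    end
end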
function [out] = TC_gamma(x_norm,x_gamma,h,rho,T,v,nfirms)
% calls the subfunction; its conditional probabilities feed the recursion
qki= Vec_CondPTC_gamma(x_norm,x_gamma,h,rho,T,v)' ;
fpdf=zeros(nfirms+1,nfirms+1);
% start at the first point on the tree
fpdf(1,1)=1;
for i = 2:nfirms+1
    fpdf(1,i) = fpdf(1,i-1)*(1-qki(:,i-1)); % no default for firm i-1
    for j = 2:nfirms+1
        fpdf(j,i) = fpdf(j,i-1)*(1-qki(:,i-1)) + fpdf(j-1,i-1)*qki(:,i-1);
    end
    fpdf(i,i) = fpdf(i-1,i-1)*qki(:,i-1); % all firms so far default
end
out=fpdf(:,end);
end % of function TC_gamma
function qki= Vec_CondPTC_gamma(x_norm,x_gamma,h,rho,T,v)
PD = (1-exp(-kron(h,T)));DB = tinv(PD,v);
a=rho.^0.5; sqrt1_a2 = sqrt(1-sum(a.*a,2));
aM = gtimes(a, x_norm'); Sqrt_W = sqrt(x_gamma/v); % sqrt(w/v) factor from the integrand
DB_times_W= gtimes(DB,Sqrt_W); DB_minus_aM = gminus(DB_times_W',aM);
qki=normcdf(grdivide(DB_minus_aM,sqrt1_a2));
end % of function Vec_CondPTC_gamma
This is my first go at ML (and MATLAB), and I'm following "Learning From Data" by Yaser S. Abu-Mostafa.
I'm trying to implement the perceptron algorithm. After going through the pseudocode and other people's solutions, I can't seem to fix my problem (I went through other threads too).
The algorithm separates the data fine; it works. However, when I try to plot the single separating line, it looks as if the '-1' cluster gets divided into a second cluster or more.
This is the code:
iterations = 100;
dim = 3;
X1=[rand(1,dim);rand(1,dim);ones(1,dim)]; % class '-1' (lower cluster)
X2=[rand(1,dim);1+rand(1,dim);ones(1,dim)]; % class '+1' (upper cluster)
X=[X1,X2];
Y=[-ones(1,dim),ones(1,dim)];
w=[0,0,0]';
% call perceptron
wtag=weight(X,Y,w,iterations);
% predict
ytag=wtag'*X;
% plot prediction over original data
figure;hold on
plot(X1(1,:),X1(2,:),'b.')
plot(X2(1,:),X2(2,:),'r.')
plot(X(1,ytag<0),X(2,ytag<0),'bo')
plot(X(1,ytag>0),X(2,ytag>0),'ro')
legend('class -1','class +1','pred -1','pred +1')
% Why don't I get just one line?
plot(X,Y);
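What I was expecting is the single boundary line implied by the weights, e.g. solving wtag(1)*x + wtag(2)*y + wtag(3) = 0 for y over the plotted x-range (my guess at the right approach, assuming wtag(2) is nonzero):
% attempt at plotting the single separating line from wtag
xs = linspace(min(X(1,:)), max(X(1,:)), 100);
ys = -(wtag(1)*xs + wtag(3))/wtag(2); % from wtag(1)*x + wtag(2)*y + wtag(3) = 0
plot(xs, ys, 'k-')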
The weight function (Perceptron):
function [w] = weight(X,Y,w_init,iterations)
%WEIGHT Perceptron learning: update w on each misclassified point
w = w_init;
for iteration = 1 : iterations % fixed number of passes over the data
for ii = 1 : size(X,2) %cycle through training set
if sign(w'*X(:,ii)) ~= Y(ii) %wrong decision?
w = w + X(:,ii) * Y(ii); %then add (or subtract) this point to w
end
end
sum(sign(w'*X)~=Y)/size(X,2) % display misclassification rate for this pass
end
I don't think the problem is in the second function, but I've added it regardless.
I'm pretty sure the algorithm splits the data into more than one cluster, but I can't tell why. Most of my learning so far has been math and theory rather than actual coding, so I'm probably missing something obvious.
I am completely new to MATLAB. I am trying to simulate the combined Wiener and Poisson process
Z(t) = lambda*W^2(t) - N(t)
where W is a Wiener process and N is a Poisson process. Why do I get "Subscripted assignment dimension mismatch"?
The code I am using is below:
T=500
dt=1
K=T/dt
W(1)=0
lambda=3
t=0:dt:T
for k=1:K
r=randn
W(k+1)=W(k)+sqrt(dt)*r
N=poissrnd(lambda*dt,1,k)
Z(k)=lambda*W.^2-N
end
plot(t,Z)
It is true that some indexing is missing, but I think you would benefit from rewriting your code in a more 'MATLAB way'. The following code exploits the fact that MATLAB's basic variables are matrices, and computes the result in a vectorized way. Try to get used to this style of writing, as it is the way to use MATLAB efficiently, and it gives shorter, more readable code:
T = 500;
dt = 1;
K = T/dt;
lambda = 3;
t = 1:dt:T;
sqdtr = sqrt(dt)*randn(K-1,1); % define sqrt(dt)*r as a vector
N = poissrnd(lambda*dt,K,1); % define N as a vector
W = cumsum([0; sqdtr],1); % cumulative sum instead of the loop
Z = lambda*W.^2-N; % combining the processes element-wise
plot(t,Z)
You forgot an index:
Z(k)=lambda*W.^2-N
It must be:
Z(k)=lambda*W(k).^2-N(k)
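Putting the fixes together (the Poisson draw moved outside the loop so N(k) is well-defined, plus preallocation), a minimal corrected version of your loop might look like this sketch:
T = 500; dt = 1; K = T/dt; lambda = 3;
t = 0:dt:T;
W = zeros(1,K+1);                 % Wiener path, W(1) = 0
N = poissrnd(lambda*dt, 1, K);    % one Poisson count per step
Z = zeros(1,K);
for k = 1:K
    W(k+1) = W(k) + sqrt(dt)*randn;
    Z(k) = lambda*W(k).^2 - N(k);
end
plot(t(1:K), Z)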
This post builds on my post about quickly evaluating analytic Jacobian in Matlab:
fast evaluation of analytical jacobian in MATLAB
The key difference is that now I am working with the Hessian, and I have to evaluate close to 700 matlabFunctions (instead of 1 matlabFunction, as I did for the Jacobian) each time the Hessian is evaluated. So there is an opportunity to do things a little differently.
I have tried two approaches so far, I am thinking about implementing a third, and I was wondering if anyone has any other suggestions. I will go through each method with a toy example, but first some preprocessing to generate these matlabFunctions:
PreProcessing:
% This part of the code is calculated once, it is not the issue
dvs = 5;
X=sym('X',[dvs,1]);
num = dvs - 1; % number of constraints
% multiple functions
for k = 1:num
f1(X(k+1),X(k)) = (X(k+1)^3 - X(k)^2*k^2);
c(k) = f1;
end
gradc = jacobian(c,X).'; % .' performs transpose
parfor k = 1:num
hessc{k} = jacobian(gradc(:,k),X);
end
parfor k = 1:num
hess_name = strcat('hessian_',num2str(k));
matlabFunction(hessc{k},'file',hess_name,'vars',X);
end
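Before the methods, one more variant I have been considering but have not benchmarked (sketched here as an idea, not tested code): since the Hessian is only ever used as the multiplier-weighted sum, the sum itself could be emitted as a single matlabFunction with the multipliers as extra symbolic inputs, so the optimization loop calls one generated file instead of ~700:
L = sym('L',[num,1]);            % symbolic stand-ins for the Lagrange multipliers
Hsum = sym(zeros(dvs));
for k = 1:num
    Hsum = Hsum + L(k)*hessc{k}; % weighted sum done symbolically, once
end
matlabFunction(Hsum,'file','hessian_weighted','vars',{X,L});
% later, inside the optimization loop: H = hessian_weighted(x_dv, lambda);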
METHOD #1: Evaluate functions in series
%% Now we use the functions to run an "optimization." Just for an example the "optimization" is just a for loop
fprintf('This is test A, where the functions are evaluated in series!\n');
tic
for q = 1:10
x_dv = rand(dvs,1); % these are the design variables
lambda = rand(num,1); % these are the lagrange multipliers
x_dv_cell = num2cell(x_dv); % for passing large design variables
for k = 1:num
hess_name = strcat('hessian_',num2str(k));
function_handle = str2func(hess_name);
H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
end
H = sum(H_temp,3);
end
fprintf('The time for test A was:\n')
toc
METHOD #2: Evaluate functions in parallel
%% Try to run a parfor loop
fprintf('This is test B, where the functions are evaluated in parallel!\n');
tic
for q = 1:10
x_dv = rand(dvs,1); % these are the design variables
lambda = rand(num,1); % these are the lagrange multipliers
x_dv_cell = num2cell(x_dv); % for passing large design variables
parfor k = 1:num
hess_name = strcat('hessian_',num2str(k));
function_handle = str2func(hess_name);
H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
end
H = sum(H_temp,3);
end
fprintf('The time for test B was:\n')
toc
RESULTS:
METHOD #1 = 0.008691 seconds
METHOD #2 = 0.464786 seconds
DISCUSSION of RESULTS
This result makes sense: the functions evaluate very quickly, so running them in parallel wastes a lot of time setting up and sending the jobs out to the different MATLAB workers (and then getting the data back from them). I see the same result on my actual problem.
METHOD #3: Evaluating the functions using the GPU
I have not tried this yet, but I am interested to see what the performance difference is. I am not yet familiar with doing this in Matlab and will add it once I am done.
Any other thoughts? Comments? Thanks!
I was wondering if there are established, robust libraries or FEX-like packages to deal with scalar conservation laws (say 1D) in MATLAB.
I am currently dealing with 1D non-linear, non-local conservation laws, and the diffusive error of first-order schemes is killing me; moreover, a lot of the physics is missed. Thus, I am wondering if there is some robust tool already out there, so as to avoid cooking up some code myself (ideally, something like boost::odeint for scheme-agnostic, high-order ODE integration in C++).
Any help appreciated.
EDIT: Apologies for the lack of clarity. Here, by conservation laws I mean general hyperbolic partial differential equations of the form
u_t(t,x) + F_x(t,x) = 0
where u = u(t,x) is an intensive conserved variable (say scalar, 1D; e.g. mass density, energy density, ...) and F = F(t,x) is its flux. I am therefore not interested in the kind of conservation properties Hamiltonian systems feature (energy, currents, ...) (thanks to @headmyshoulder for his comment).
I cited boost::odeint as a conceptual reference for a robust, generic library addressing a mathematical problem (integration of ODEs). I am therefore looking for a package implementing Godunov-type methods and the like.
I am currently working on new methods for shock-turbulence simulations and doing lots of code testing/validation in MATLAB. Unfortunately, I haven't found a general library that does what you're hoping for, but a basic Godunov or MUSCL code is relatively straightforward to implement. This paper has a good overview of some useful methods:
[1] Kurganov, Alexander and Eitan Tadmor (2000), "New High-Resolution Central Schemes for Nonlinear Conservation Laws and Convection-Diffusion Equations", J. Comput. Phys., 160, 214-282.
Here are a few examples from that paper for a 1D, equally spaced grid on a periodic domain, solving the inviscid Burgers equation. The methods generalize easily to systems of equations, dissipative (viscous) systems, and higher dimensions, as outlined in [1]. These methods rely on the following functions:
Flux term:
function f = flux(u)
%flux term for Burgers equation: F(u) = u^2/2;
f = u.^2/2;
Minmod function:
function m = minmod(a,b)
%minmod function:
m = (sign(a)+sign(b))/2.*min(abs(a),abs(b));
Methods
Nessyahu-Tadmor scheme:
A 2nd order scheme
function unew = step_u(dx,dt,u)
%%% Nessyahu-Tadmor scheme
ux = minmod((u-circshift(u,[0 1]))/dx,(circshift(u,[0 -1])-u)/dx);
f = flux(u);
fx = minmod((f-circshift(f,[0 1]))/dx,(circshift(f,[0 -1])-f)/dx);
umid = u-dt/2*fx;
fmid = flux(umid);
unew = (u + circshift(u,[0 -1]))/2 + (dx)/8*(ux-circshift(ux,[0 -1])) ...
-dt/dx*( circshift(fmid,[0 -1])-fmid );
This method computes a new u value at the x_{j+1/2} grid points, so it also requires a grid shift at each step. The main function should be something like:
clear all
% Set up grid
nx = 256;
xmin=0; xmax=2*pi;
x=linspace(xmin,xmax,nx);
dx = x(2)-x(1);
%initialize
u = exp(-4*(x-pi*1/2).^2)-exp(-4*(x-pi*3/2).^2);
%CFL number:
CFL = 0.25;
t = 0;
dt = CFL*dx/max(abs(u(:)));
while (t<2)
u = step_u(dx,dt,u);
x=x+dx/2;
% handle grid shifts
if x(end)>=xmax+dx
x(end)=0;
x=circshift(x,[0 1]);
u=circshift(u,[0 1]);
end
t = t+dt;
%plot
figure(1)
clf
plot(x,u,'k')
title(sprintf('time, t = %1.2f',t))
if ~exist('YY','var')
YY=ylim;
end
axis([xmin xmax YY])
drawnow
end
Kurganov-Tadmor scheme
The Kurganov-Tadmor scheme of [1] has several advantages over the NT scheme, including lower numerical dissipation and a semi-discrete form that allows the use of any time integration method you choose. Using the same spatial discretization as above, it can be written as an ODE, du/dt = RHS(u). The right-hand side of this ODE can be computed by the function:
function RHS = KTrhs(dx,u)
%%% Kurganov-Tadmor scheme
ux = minmod((u-circshift(u,[0 1]))/dx,(circshift(u,[0 -1])-u)/dx);
uplus = u-dx/2*ux;
uminus = circshift(u+dx/2*ux,[0 1]);
a = max(abs(rhodF(uminus)),abs(rhodF(uplus)));
RHS = -( flux(circshift(uplus,[0 -1]))+flux(circshift(uminus,[0 -1])) ...
-flux(uplus)-flux(uminus) )/(2*dx) ...
+( circshift(a,[0 -1]).*(circshift(uplus,[0 -1])-circshift(uminus,[0 -1])) ...
-a.*(uplus-uminus) )/(2*dx);
This function also relies on knowing the spectral radius of the Jacobian of F(u) (rhodF in the code above). For the inviscid Burgers equation this is just
function rho = rhodF(u)
%spectral radius of dF/du; for Burgers it is |u|
rho = abs(u);
The main program of the KT scheme could be something like:
clear all
nx = 256;
xmin=0; xmax=2*pi;
x=linspace(xmin,xmax,nx);
dx = x(2)-x(1);
%initialize
u = exp(-4*(x-pi*1/2).^2)-exp(-4*(x-pi*3/2).^2);
%CFL number:
CFL = 0.25;
t = 0;
dt = CFL*dx/max(abs(u(:)));
while (t<3)
% 4th order Runge-Kutta time stepping
k1 = KTrhs(dx,u);
k2 = KTrhs(dx,u+dt/2*k1);
k3 = KTrhs(dx,u+dt/2*k2);
k4 = KTrhs(dx,u+dt*k3);
u = u+dt/6*(k1+2*k2+2*k3+k4);
t = t+dt;
%plot
figure(1)
clf
plot(x,u,'k')
title(sprintf('time, t = %1.2f',t))
if ~exist('YY','var')
YY=ylim;
end
axis([xmin xmax YY])
drawnow
end
I've been trying to use MATLAB to solve equations like this:
B = alpha*Y0*sqrt(epsilon)/(pi*ln(b/a)*sqrt(epsilon_t))*integral from
0 to pi of
(2*sinint(k0*sqrt(epsilon*(a^2+b^2-2abcos(theta))-sinint(2*k0*sqrt(epsilon)*a*sin(theta/2))-sinint(2*k0*sqrt(epsilon)*b*sin(theta/2)))
with regard to theta
Where epsilon is the unknown.
I know how to symbolically solve equations with unknown embedded in an integral by using int() and solve(), but using the symbolic integrator int() takes too long for equations this complicated. When I try to use quad(), quadl() and quadgk(), I have trouble dealing with how the unknown is embedded in the integral.
This sort of thing gets complicated really fast. Although it is possible to do it all in a single inline equation, I would advise you to split it up into multiple nested functions, if only for readability.
The best example of why readability is important: you have a bracketing problem in the equation you posted; there aren't enough closing brackets, so I can't be entirely sure what the equation looks like in mathematical notation :)
Anyway, here's one way to do it with the version I --think-- you meant:
function test
% some random values for testing
Y0 = rand;
b = rand;
a = rand;
k0 = rand;
alpha = rand;
epsilon_t = rand;
% D is your B
D = -0.015;
% define SIMPLE anonymous function
Bb = @(ep) F(ep).*main_integral(ep) - D;
% aaaand...solve it!
sol = fsolve(Bb, 1)
% The anonymous function above is only simple, because of these:
% the main integral
function val = main_integral(epsilon)
% we need to loop through epsilon, due to how quadgk evaluates things
val = zeros(size(epsilon));
for ii = 1:numel(epsilon)
ep = epsilon(ii);
% NOTE how the sinint's all have a different function as argument:
val(ii) = quadgk(@(th)...
2*sinint(A(ep,th)) - sinint(B(ep,th)) - sinint(C(ep,th)), ...
0, pi);
end
end
% factor in front of integral
function f = F(epsilon)
f = alpha*Y0*sqrt(epsilon)./(pi*log(b/a)*sqrt(epsilon_t)); end
% first sinint argument
function val = A(epsilon, theta)
val = k0*sqrt(epsilon*(a^2+b^2-2*a*b*cos(theta))); end
% second sinint argument
function val = B(epsilon, theta)
val = 2*k0*sqrt(epsilon)*a*sin(theta/2); end
% third sinint argument
function val = C(epsilon, theta)
val = 2*k0*sqrt(epsilon)*b*sin(theta/2); end
end
The solution above will still be quite slow, but I think that's pretty normal for integrals this complicated.
I don't think implementing your own sinint will help much, as most of the speed loss is due to the for loops over non-builtin functions... If it's speed you want, I'd go for a MEX implementation with your own Gauss-Kronrod adaptive quadrature routine.
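Boiled down, the pattern the code above uses for an unknown embedded in an integral is worth seeing on its own: the inner anonymous function integrates over theta with the parameter fixed, and the outer one hands the residual to fsolve. A minimal sketch with a toy integrand and toy target value (both placeholders, not the actual equation):
g = @(theta, ep) sin(ep*theta);                    % toy integrand standing in for the real one
I = @(ep) integral(@(theta) g(theta, ep), 0, pi);  % the integral as a function of ep
resid = @(ep) I(ep) - 0.5;                         % toy equation: I(ep) = 0.5
ep_sol = fsolve(resid, 1)                          % solve resid(ep) = 0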