I'm trying to solve some ODEs in MATLAB, and since the variables in the equations are populations, they need to be constrained to be positive. So I tried using odeset() before calling the equation solver to make them non-negative, but on plotting the values afterwards they are actually negative at times (in the code below it is the magenta line). What am I doing wrong?
Here's some code:
%Lots of variables
includeJ=1;
cullLIRate=1/2000;
cullDIRate=1/2000;
N = 16800;
beta = 2e-7;
delta = 0.5;
gamma = 1/50;
sigma = 1/400;
mu = 1/365;
maxTime = 30*365;
kappa = N;
gR = 0.05;
mJ = 1/3650;
initJPerAdult = 10;
numInitE = 1000;
TSpan = [0,maxTime];
initState = [N-numInitE,numInitE,0,0,0,initJPerAdult*N];
%IMPORTANT BIT HERE
options = odeset('NonNegative', 1:6)
scirSoln = ode45(@equation,TSpan,initState,[],beta,delta,gamma,sigma,mu,kappa,gR,mJ,cullLIRate,cullDIRate,includeJ);
timeToPlot = [0:max(TSpan)/1000:max(TSpan)];
scirVals = deval(scirSoln,timeToPlot);
plot(timeToPlot,scirVals(1,:));
hold on;
plot(timeToPlot,scirVals(3,:),'k');
plot(timeToPlot,scirVals(4,:),'g');
plot(timeToPlot,scirVals(6,:),'m');
The code for equation(...) is:
function retVal = equation(t,y,beta,delta,gamma,sigma,mu,kappa,gR,mJ,cullLIRate,cullDIRate,includeJ)
retVal = zeros(6,1);
S = y(1);
E = y(2);
LI = y(3);
DI = y(4);
R = y(5);
J = y(6);
retVal(1)= mJ * J - beta * S * (delta * LI + DI);
retVal(2) = beta * S * (delta * LI + DI) - gamma * E;
retVal(3) = gamma * E - (cullLIRate + sigma) * LI;
retVal(4) = sigma * LI - (mu + cullDIRate) * DI;
retVal(5) = mu * DI + cullLIRate* LI + cullDIRate * DI;
retVal(6) = gR * S * (1 - S / kappa) - mJ * J;
end
You are not passing your options structure (the variable you created with odeset) to the ode45 solver.
The syntax for ode45 is: [T,Y] = ode45(ODEFUN,TSPAN,Y0,OPTIONS,P1,P2,...)
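So the call would look something like this (keeping your other arguments as they are):
% Pass the options structure as the 4th argument so 'NonNegative' takes effect
options = odeset('NonNegative', 1:6);
scirSoln = ode45(@equation, TSpan, initState, options, beta, delta, gamma, sigma, mu, kappa, gR, mJ, cullLIRate, cullDIRate, includeJ);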
Glad it worked! :)
I have a problem in which I have to implement the following equation in MATLAB:
i(t) = A2 * sin(wr*t) * exp(-alpha*t),   for t in [0, 10] with step 0.5 s
My approach is as follows:
clc;
clear;
% Given Data
Vs = 220;
L = 5e-3;
C = 10e-6;
R = 22;
Vo = 50;
% a)
alpha = R / (2 * L);
omega_not = 1 / sqrt(L*C);
omega_r = sqrt( omega_not^2 - alpha^2 );
A2 = Vs / (omega_r * L);
t = 1:0.5:10;
i = A2 * sin( omega_r * t ) .* exp(-alpha * t);
% b)
t1 = pi / omega_r;
% c)
plot(t, i);
But it yields all the values of current equal to zero. Please help me solve the problem.
I think the problem is this part of the expression:
exp(-alpha * t)
When I run your code, -alpha equals -2200, so the smallest-magnitude exponent in exp(-alpha * t) over your t range is -2200. The exponential of such a large negative number underflows to zero in double precision, which is why every value of i comes out as zero.
>> exp(-200)
ans =
1.3839e-87
>> exp(-1000)
ans =
0
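Since alpha = 2200, the envelope exp(-alpha*t) has essentially died out after a few milliseconds, so every sample on your 0.5 s grid underflows. If the goal is to actually see the damped oscillation, you would have to sample on a millisecond scale, for example (just a sketch, with an assumed 5 ms window):
t = 0:1e-5:5e-3;                              % 0 to 5 ms in 10 us steps (assumed range)
i = A2 * sin(omega_r * t) .* exp(-alpha * t); % same expression as in the question
plot(t, i); xlabel('t (s)'); ylabel('i(t)');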
I have an equation which goes like this:
Here, I_L(lambdap) is the modified Bessel function. This, multiplied by the exponential function, can be written in MATLAB as besseli(L,lambdap,1). "i" stands for the square root of -1. I want to solve:
1+pt+it=0
where I have to vary 'k' and find values of 'w'. I had posted a similar problem at Mathematica Stack Exchange, but I couldn't solve it fully, though I did get a clue (please go through the comments at the Mathematica Stack Exchange site). I could not convert my equation to the code that was posted in that clue. Any help in this regard will be highly appreciated.
Thanks in advance...
I've never attempted this before, but... is this returning a suitable result?
syms w k;
fun = 1 + pt(w,k) + it(w,k);
sol = vpasolve(fun == 0,w,k);
disp(sol.w);
disp(sol.k);
function res = pt(w,k)
eps_l0 = w / (1.22 * k);
lam_k = 0.25 * k^2;
res = sym('res',[5 1]);
res_off = 1;
for L = -2:2
gam = besseli(L,lam_k) * exp(-lam_k);
eps_z = (w - L) / (1.22 * k);
zeta = 1i * sqrt(pi()) * exp(-eps_z^2) * (1 + erfc(1i * eps_z));
res(res_off,:) = ((25000 * gam) / k^2) * (1 + (eps_l0 * zeta));
res_off = res_off + 1;
end
res = sum(res);
end
function res = it(w,k)
eps_l0 = (w - (0.86 * k)) / (3.46 * k);
lam_k = 0.03 * k^2;
res = sym('res',[5 1]);
res_off = 1;
for L = -2:2
gam = besseli(L,lam_k) * exp(-lam_k);
eps_z = (w - (8 * L) - (0.86 * k)) / (3.46 * k);
zeta = 1i * sqrt(pi()) * exp(-eps_z^2) * (1 + erfc(1i * eps_z));
res(res_off,:) = ((2000000 * gam) / k^2) * (1 + (eps_l0 * zeta));
res_off = res_off + 1;
end
res = sum(res);
end
EDIT
For numeric k and symbolic w:
syms w;
for k = -3:3
fun = 1 + pt(w,k) + it(w,k);
sol = vpasolve(fun == 0,w);
disp(sol); % vpasolve returns the root directly when solving for a single variable
end
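If you need to keep the roots rather than just display them, a small extension could look like this (a sketch; k = 0 is skipped because pt and it both divide by k, and it assumes vpasolve finds a root for each k):
syms w;
ks = [-3 -2 -1 1 2 3];     % k values to scan, omitting k = 0
roots_w = cell(size(ks));  % cell array, since vpasolve may return zero or several roots
for n = 1:numel(ks)
fun = 1 + pt(w,ks(n)) + it(w,ks(n));
roots_w{n} = vpasolve(fun == 0,w);
end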
I am currently stuck on a problem in MATLAB. What I have done is that I have an equation that is passed into another function, which solves it by the bisection method.
But I have a multiplier that I am trying to implement which somehow leads to the function crashing.
Before I introduced the multiplier it all worked. I tried breaking it down by entering the multiplier value manually, and it didn't work.
P_{1} = 0.6;
P_{2} = 0.2;
P_{3} = 0.2;
a_1 = 4/3;
a_2 = -7/3;
b_1 = -1/3;
b_2 = 4/3;
persistent multiplier
multiplier = exp(a_1 * 44 + a_2 * 14 + 0);
eqn = @(x) ((a_1 * x + b_1)^a_1) * ((a_2 * x + b_2)^a_2) * x ...
-(P_{1}^a_1) * (P_{2}^a_2) * P_{3} * multiplier;
Q_{3} = Bisectionmethod(a_1, a_2, b_1, b_2, eqn);
Here is the calculating part of the bisection method.
x_lower = max(0, -b_1 / a_1);
x_upper = -b_2 / a_2;
x_mid = (x_lower + x_upper)/2;
% Conditional statement encompassing the method of bisection
while abs(eqn(x_mid)) > 10^(-10)
if (eqn(x_mid) * eqn(x_upper)) < 0
x_lower = x_mid;
else
x_upper = x_mid;
end
x_mid = (x_lower + x_upper)/2;
end
Based on the information you provided, this is what I came up with:
function Q = Stackoverflow
persistent multiplier
P{1} = 0.6;
P{2} = 0.2;
P{3} = 0.2;
a1 = 4/3;
a2 = -7/3;
b1 = -1/3;
b2 = 4/3;
multiplier = exp(a1 * 44 + a2 * 14 + 0);
eqn = @(x) ((a1 .* x + b1).^a1) .* ((a2 .* x + b2).^a2) .* x -(P{1}.^a1) .* (P{2}.^a2) .* P{3} .* multiplier;
Q{3} = Bisectionmethod(eqn, max([0, -b1/a1]), -b2/a2, 1E-10);
end
function XOut = Bisectionmethod(f, xL, xH, EPS)
if sign(f(xL)) == sign(f(xH))
XOut = [];
error('Cannot bisect interval because can''t ensure the function crosses 0.')
end
x = [xL, xH];
while abs(diff(x)) > EPS
% Replace whichever endpoint has the same sign as f at the midpoint
x(sign(f(mean(x))) == sign(f(x))) = mean(x);
end
XOut = mean(x);
end
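As a quick way to see whether the multiplier is the culprit (exp(a1 * 44 + a2 * 14) = exp(26), which is on the order of 1e11), you can print eqn at the two endpoints before bisecting. This is just a sketch, meant to be placed inside the Stackoverflow function where eqn, a1, a2, b1 and b2 are in scope:
% Bisectionmethod needs these two values to have opposite signs
xL = max([0, -b1/a1]);
xH = -b2/a2;
fprintf('eqn(xL) = %g, eqn(xH) = %g\n', eqn(xL), eqn(xH));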
I'm trying my hand at regularized LR (logistic regression), simply using these formulas in MATLAB:
The cost function:
J(theta) = (1/m) * sum( -y_i*log(h(x_i)) - (1-y_i)*log(1-h(x_i)) ) + (lambda/(2*m)) * sum(theta_j^2)
The gradient:
∂J(theta)/∂theta_0 = (1/m) * sum( (h(x_i)-y_i)*x_i_0 )                       if j = 0
∂J(theta)/∂theta_j = (1/m) * sum( (h(x_i)-y_i)*x_i_j ) + (lambda/m)*theta_j   if j >= 1
This is not MATLAB code, just the formulas.
So far I've done this:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
J = 0;
grad = zeros(size(theta));
temp_theta = [];
%cost function
%get the regularization term
for jj = 2:length(theta)
temp_theta(jj) = theta(jj)^2;
end
theta_reg = lambda/(2*m)*sum(temp_theta);
temp_sum =[];
%for the sum in the cost function
for ii =1:m
temp_sum(ii) = -y(ii)*log(sigmoid(theta'*X(ii,:)'))-(1-y(ii))*log(1-sigmoid(theta'*X(ii,:)'));
end
tempo = sum(temp_sum);
J = (1/m)*tempo+theta_reg;
%regulatization
%theta 0
reg_theta0 = 0;
for jj=1:m
reg_theta0(jj) = (sigmoid(theta'*X(m,:)') -y(jj))*X(jj,1)
end
reg_theta0 = (1/m)*sum(reg_theta0)
grad_temp(1) = reg_theta0
%for the rest of thetas
reg_theta = [];
thetas_sum = 0;
for ii=2:size(theta)
for kk =1:m
reg_theta(kk) = (sigmoid(theta'*X(m,:)') - y(kk))*X(kk,ii)
end
thetas_sum(ii) = (1/m)*sum(reg_theta)+(lambda/m)*theta(ii)
reg_theta = []
end
for i=1:size(theta)
if i == 1
grad(i) = grad_temp(i)
else
grad(i) = thetas_sum(i)
end
end
end
The cost function is giving correct results, but I have no idea why the gradient (one step) is not: the cost gives J = 0.6931, which is correct, but the gradient grad = 0.3603 -0.1476 0.0320 is not. The regularization sum starts from 2 because the parameter theta(1) does not have to be regularized. Any help? I guess there is something wrong with the code, but after 4 days I can't see it. Thanks
Vectorized:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
hx = sigmoid(X * theta);
m = length(X);
J = (sum(-y' * log(hx) - (1 - y')*log(1 - hx)) / m) + lambda * sum(theta(2:end).^2) / (2*m);
grad =((hx - y)' * X / m)' + lambda .* theta .* [0; ones(length(theta)-1, 1)] ./ m ;
end
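As a quick sanity check (a sketch with made-up data, assuming a sigmoid.m helper such as g = 1./(1 + exp(-z)) is on the path):
X = [1 0.5 1.2; 1 -0.3 0.4; 1 2.0 -1.1];  % hypothetical toy set: intercept column plus 2 features
y = [1; 0; 1];
theta = zeros(3, 1);
lambda = 1;
[J, grad] = costFunctionReg(theta, X, y, lambda);
% With theta = 0, hx = 0.5 everywhere, so J should be -log(0.5) = 0.6931
% (the value quoted in the question) and the regularization term contributes nothing.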
I used more variables, so you can see clearly what comes from the regular formula and what comes from the added regularization cost. Additionally, it is good practice to use vectorization instead of loops in MATLAB/Octave; vectorized code generally runs much faster than the equivalent loops.
function [J, grad] = costFunctionReg(theta, X, y, lambda)
m = length(y); % number of training examples
%Hypotheses
hx = sigmoid(X * theta);
%%The cost without regularization
J_partial = (-y' * log(hx) - (1 - y)' * log(1 - hx)) ./ m;
%%Regularization Cost Added
J_regularization = (lambda/(2*m)) * sum(theta(2:end).^2);
%%Cost when we add regularization
J = J_partial + J_regularization;
%Grad without regularization
grad_partial = (1/m) * (X' * (hx -y));
%%Grad Cost Added
grad_regularization = (lambda/m) .* theta(2:end);
grad_regularization = [0; grad_regularization];
grad = grad_partial + grad_regularization;
end
Finally got it, after rewriting it again like for the 4th time, this is the correct code:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
m = length(y); % number of training examples
J = 0;
grad = zeros(size(theta));
temp_theta = [];
for jj = 2:length(theta)
temp_theta(jj) = theta(jj)^2;
end
theta_reg = lambda/(2*m)*sum(temp_theta);
temp_sum =[];
for ii =1:m
temp_sum(ii) = -y(ii)*log(sigmoid(theta'*X(ii,:)'))-(1-y(ii))*log(1-sigmoid(theta'*X(ii,:)'));
end
tempo = sum(temp_sum);
J = (1/m)*tempo+theta_reg;
%regulatization
%theta 0
reg_theta0 = 0;
for i=1:m
reg_theta0(i) = ((sigmoid(theta'*X(i,:)'))-y(i))*X(i,1)
end
theta_temp(1) = (1/m)*sum(reg_theta0)
grad(1) = theta_temp
sum_thetas = []
thetas_sum = []
for j = 2:size(theta)
for i = 1:m
sum_thetas(i) = ((sigmoid(theta'*X(i,:)'))-y(i))*X(i,j)
end
thetas_sum(j) = (1/m)*sum(sum_thetas)+((lambda/m)*theta(j))
sum_thetas = []
end
for z=2:size(theta)
grad(z) = thetas_sum(z)
end
% =============================================================
end
Hope it helps someone, and if anyone has any comments on how I can do it better, please share. :)
Here is an answer that eliminates the loops
m = length(y); % number of training examples
predictions = sigmoid(X*theta);
reg_term = (lambda/(2*m)) * sum(theta(2:end).^2);
calcErrors = -y.*log(predictions) - (1 -y).*log(1-predictions);
J = (1/m)*sum(calcErrors)+reg_term;
% prepend a 0 column to our reg_term matrix so we can use simple matrix addition
reg_term = [0 (lambda*theta(2:end)/m)'];
grad = sum(X.*(predictions - y)) / m + reg_term;
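One thing to note: X.*(predictions - y) relies on implicit expansion, which needs MATLAB R2016b or later. An equivalent gradient that avoids it (and returns a column vector shaped like theta) would be:
% Same gradient using matrix multiplication instead of implicit expansion
grad = (X' * (predictions - y)) / m + reg_term';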
Can someone help me with how to display a Gabor filter in MATLAB? I can display it, but it's not what I want. This is my code:
[Gf,gabout] = gaborfilter1(B,sx,sy,f,theta(j));
G{m,n,i,j} = Gf;
And this is the Gabor filter function:
function [Gf,gabout] = gaborfilter(I,Sx,Sy,f,theta);
if isa(I,'double')~=1
I = double(I);
end
for x = -fix(Sx):fix(Sx)
for y = -fix(Sy):fix(Sy)
xPrime = x * cos(theta) + y * sin(theta);
yPrime = y * cos(theta) - x * sin(theta);
Gf(fix(Sx)+x+1,fix(Sy)+y+1) = exp(-.5*((xPrime/Sx)^2+(yPrime/Sy)^2))*cos(2*pi*f*xPrime);
end
end
Imgabout = conv2(I,double(imag(Gf)),'same');
Regabout = conv2(I,double(real(Gf)),'same');
gabout = sqrt(Imgabout.*Imgabout + Regabout.*Regabout);
Then, I call imshow with this code:
imshow(G{m,n,i,j},[]);
And these are the results:
But I want this result instead. Can someone help me solve this?
Use the following function. I hope this is useful.
----------------------------------------------------------------
gfs = GaborFilter(51,0.45,0.05,6,4);
n=0;
for s=1:6
for d=1:4
n=n+1;
subplot(6,4,n)
imshow(real(squeeze(gfs(s,d,:,:))),[])
end
end
Sample Image
----------------------------------------------------------------
function gfs = GaborFilter(winLen,uh,ul,S,D)
% gfs(SCALE, DIRECTION, :, :)
winLen = winLen + mod(winLen, 2) -1;
x0 = (winLen + 1)/2;
y0 = x0;
if S==1
a = 1;
su = uh/sqrt(log(4));
sv = su;
else
a = (uh/ul)^(1/(S-1));
su = (a-1)*uh/((a+1)*sqrt(log(4)));
if D==1
tang = 1;
else
tang = tan(pi/(2*D));
end
sv = tang * (uh - log(4)*su^2/uh)/sqrt(log(4) - (log(4)*su/uh)^2);
end
sx = 1/(2*pi*su);
sy = 1/(2*pi*sv);
coef = 1/(2*pi*sx*sy);
gfs = zeros(S, D, winLen, winLen);
for d = 1:D
theta = (d-1)*pi/D;
for s = 1:S
scale = a^(-(s-1));
gab = zeros(winLen);
for x = 1:winLen
for y = 1:winLen
X = scale * ((x-x0)*cos(theta) + (y-y0)*sin(theta));
Y = scale * (-(x-x0)*sin(theta) + (y-y0)*cos(theta));
gab(x, y) = -0.5 * ( (X/sx).^2 + (Y/sy).^2 ) + (2*pi*1j*uh)*X ;
end
end
gfs(s, d, :, :) = scale * coef * exp(gab);
end
end
Replace the "cos" component by its complex counterpart, i.e. use the imaginary argument complex(0, 2*pi*f*xPrime) inside the exponential instead of cos(2*pi*f*xPrime), and also multiply the equation by a scaling factor of 1/sqrt(2*Sy*Sx).
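In code terms, that would mean changing the kernel line inside gaborfilter to something like this (a sketch of the suggestion above; the real part of this complex kernel reproduces the original cos-based filter):
Gf(fix(Sx)+x+1,fix(Sy)+y+1) = (1/sqrt(2*Sy*Sx)) * exp(-.5*((xPrime/Sx)^2+(yPrime/Sy)^2)) * exp(1i*2*pi*f*xPrime);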