I got this code for LARS but a variable seems undefined? - matlab

I got this code for LARS, but when I run it, it says X is undefined. I can't understand what X is. Why is there an error?
function [beta, A, mu, C, c, gamma] = lars(X, Y, option, t, standardize)
% Least Angle Regression (LAR) algorithm.
% Ref: Efron et al. (2004). Least angle regression. Annals of Statistics.
% option = 'lar' implements the vanilla LAR algorithm (default);
% option = 'lasso' solves the lasso path with a modified LAR algorithm.
% t -- a vector of increasing positive real numbers. If given, LARS
% returns the solution at t.
%
% Output:
% A -- a sequence of indices that indicate the order in which variables
%      enter the active set;
% beta -- history of estimated LARS coefficients;
% mu -- history of estimated mean vectors;
% C -- history of maximal current absolute correlations;
% c -- history of current correlations;
% gamma -- history of LARS step sizes.
% Note: history is traced by rows. If t is given, beta is just the
% estimated coefficient vector at the constraint ||beta||_1 = t.
%
% Remarks:
% 1. LARS was originally proposed to estimate a sparse coefficient vector
% in a noisy over-determined linear system. LARS outputs estimates for all
% shrinkage/constraint parameters (homotopy).
%
% 2. LARS is well suited for Basis Pursuit (BP) purposes in the real
% domain. It automatically terminates when the current correlations of the
% inactive variables are all zero. The recovered coefficient vector is the
% last row of beta with the *lasso* option. Hence, this function provides
% a fast and efficient solution for the ell_1 minimization problem.
% Ref: Donoho and Tsaig (2006). Fast solution of ell_1 norm minimization
% problems when the solution may be sparse.
if nargin < 5, standardize = true; end
if nargin < 4, t = Inf; end
if nargin < 3, option = 'lar'; end
if strcmpi(option, 'lasso'), lasso = 1; else, lasso = 0; end
eps = 1e-10; % Effective zero
[n,p] = size(X);
if standardize,
X = normalize(X);
Y = Y-mean(Y);
end
m = min(p,n-1); % Maximal number of variables in the final active set
T = length(t);
beta = zeros(1,p);
mu = zeros(n,1); % Mean vector
gamma = []; % LARS step lengths
A = [];
Ac = 1:p;
nVars = 0;
signOK = 1;
i = 0;
mu_old = zeros(n,1);
t_prev = 0;
beta_t = zeros(T,p);
ii = 1;
tt = t;
% LARS loop
while nVars < m,
i = i+1;
c = X'*(Y-mu); % Current correlation
C = max(abs(c)); % Maximal current absolute correlation
if C < eps || isempty(t), break; end % Early stopping criteria
if 1 == i, addVar = find(C==abs(c)); end
if signOK,
A = [A,addVar]; % Add one variable to active set
nVars = nVars+1;
end
s_A = sign(c(A));
Ac = setdiff(1:p,A); % Inactive set
nZeros = length(Ac);
X_A = X(:,A);
G_A = X_A'*X_A; % Gram matrix
invG_A = inv(G_A);
L_A = 1/sqrt(s_A'*invG_A*s_A);
w_A = L_A*invG_A*s_A; % Coefficients of equiangular vector u_A
u_A = X_A*w_A; % Equiangular vector
a = X'*u_A; % Angles between x_j and u_A
beta_tmp = zeros(p,1);
gammaTest = zeros(nZeros,2);
if nVars == m,
gamma(i) = C/L_A; % Move to the least squares projection
else
for j = 1:nZeros,
jj = Ac(j);
gammaTest(j,:) = [(C-c(jj))/(L_A-a(jj)), (C+c(jj))/(L_A+a(jj))];
end
[gamma(i) min_i min_j] = minplus(gammaTest);
addVar = unique(Ac(min_i));
end
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Update coefficient estimates
% Check the sign feasibility of lasso
if lasso,
signOK = 1;
gammaTest = -beta(i,A)'./w_A;
[gamma2 min_i min_j] = minplus(gammaTest);
if gamma2 < gamma(i), % The case when sign consistency gets violated
gamma(i) = gamma2;
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Correct the coefficients
beta_tmp(A(unique(min_i))) = 0;
A(unique(min_i)) = []; % Delete the zero-crossing variable (keep the ordering)
nVars = nVars-1;
signOK = 0;
end
end
if Inf ~= t(1),
t_now = norm(beta_tmp(A),1);
if t_prev < t(1) && t_now >= t(1),
beta_t(ii,A) = beta(i,A) + L_A*(t(1)-t_prev)*w_A'; % Compute coefficient estimates corresponding to a specific t
t(1) = [];
ii = ii+1;
end
t_prev = t_now;
end
mu = mu_old + gamma(i)*u_A; % Update mean vector
mu_old = mu;
beta = [beta; beta_tmp'];
end
if 1 < ii,
noCons = (tt > norm(beta_tmp,1));
if 0 < sum(noCons),
beta_t(noCons,:) = repmat(beta_tmp',sum(noCons),1);
end
beta = beta_t;
end
% Normalize columns of X to have mean zero and length one.
function sX = normalize(X)
[n,p] = size(X);
sX = X-repmat(mean(X),n,1);
sX = sX*diag(1./sqrt(ones(1,n)*sX.^2));
% Find the minimum and its index over the (strictly) positive part of X
% matrix
function [m, I, J] = minplus(X)
% Remove complex elements and reset to Inf
[I,J] = find(0~=imag(X));
for i = 1:length(I),
X(I(i),J(i)) = Inf;
end
X(X<=0) = Inf;
m = min(min(X));
[I,J] = find(X==m);

You can find more information in the related paper:
Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert. Least angle regression. Ann. Statist. 32 (2004), no. 2, 407--499. doi:10.1214/009053604000000067.
http://projecteuclid.org/euclid.aos/1083178935.
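As for the error itself: X is never defined inside lars; it is the first input argument (the n-by-p predictor matrix), and Y is the n-by-1 response vector, so both have to be supplied when you call the function. A minimal sketch of a call, using made-up random data just for illustration:
n = 50; p = 10;                                   % sample size and number of predictors (made up)
X = randn(n, p);                                  % design matrix
Y = X*[3; -2; zeros(p-2,1)] + 0.1*randn(n, 1);    % response from a sparse ground truth plus noise
[beta, A] = lars(X, Y, 'lasso');                  % rows of beta trace the lasso path; A is the order of entry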

Related

How to run the k-means algorithm on SIFT keypoints in MATLAB

I need to run the k-means algorithm on the key points produced by the SIFT algorithm in MATLAB. I want to cluster the key points in the image, but I do not know how to do it.
First, put the key points into X, with the x coordinates in the first column and the y coordinates in the second column, like this:
X = [reshape(keypxcoord,numel(keypxcoord),1), reshape(keypycoord,numel(keypycoord),1)];
Then, if you have the Statistics Toolbox, you can just use the built-in 'kmeans' function like this:
output = kmeans(X,num_clusters)
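If you also want the centroid locations and a quick visual check, kmeans can return the cluster index of each point together with the centroids; a small sketch (num_clusters is whatever you choose):
num_clusters = 5;                               % example value
[idx, C] = kmeans(X, num_clusters);             % idx: cluster of each key point, C: centroid coordinates
figure; hold on
scatter(X(:,1), X(:,2), 15, idx, 'filled');     % key points colored by cluster
plot(C(:,1), C(:,2), 'kx', 'MarkerSize', 12, 'LineWidth', 2);   % centroids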
Otherwise, write your own kmeans function:
function [ min_group, mu ] = mykmeans( X,K )
%MYKMEANS
% X = D-by-N matrix: N observations, each a D-element column vector
% K = number of centroids
assert(K > 0);
D = size(X,1); %No. of r.v.
N = size(X,2); %No. of observations
group_size = zeros(1,K);
min_group = zeros(1,N);
step = 0;
%% init centroids
mu = kpp(X,K);
%% 2-phase iterative approach (local then global)
while step < 400
%% phase 1, batch update
old_group = min_group;
% computing distances
d2 = distances2(X,mu);
% reassignment all points to closest centroids
[~, min_group] = min(d2,[],1);
% recomputing centroids (K number of means)
for k = 1 : K
group_size(k) = sum(min_group==k);
% check empty group
%if group_size(k) == 0
assert(group_size(k)>0);
%else
mu(:,k) = sum(X(:,min_group==k),2)/group_size(k);
%end
end
changed = sum(min_group ~= old_group);
p1_converged = changed <= N*0.001;
%% phase 2, individual update
changed = 0;
for n = 1 : N
d2 = distances2(X(:,n),mu);
[~, new_group] = min(d2,[],1);
% recomputing centroids of affected groups
k = min_group(n);
if (new_group ~= k)
mu(:,k)=(mu(:,k)*group_size(k)-X(:,n));
group_size(k) = group_size(k) - 1;
mu(:,k)=mu(:,k)/group_size(k);
mu(:,new_group) = mu(:,new_group)*group_size(new_group)+ X(:,n);
group_size(new_group) = group_size(new_group) + 1;
mu(:,new_group)=mu(:,new_group)/group_size(new_group);
min_group(n) = new_group;
changed = changed + 1;
end
end
%% check convergence
if p1_converged && changed <= N*0.001
break;
else
step = step + 1;
end
end
end
function d2 = distances2(X, mu)
K = size(mu,2);
N = size(X,2);
d2 = zeros(K,N);
for j = 1 : K
d2(j,:) = sum((X - repmat(mu(:,j),1,N)).^2,1);
end
end
function mu = kpp( X,K )
% kmeans++ init
D = size(X,1); %No. of r.v.
N = size(X,2); %No. of observations
mu = zeros(D, K);
mu(:,1) = X(:,round(rand(1) * (size(X, 2)-1)+1));
for k = 2 : K
% computing distances between centroids and observations
d2 = distances2(X, mu(:,1:k-1));
% assignment
[min_dist, ~] = min(d2,[],1);
% select new centroids by selecting point with the cumulative dist
% value (distance) larger than random value (falls in range between
% dist(n-1) : dist(n), dist(0)= 0)
rv = sum(min_dist) * rand(1);
for n = 1 : N
if min_dist(n) >= rv
mu(:,k) = X(:,n);
break;
else
rv = rv - min_dist(n);
end
end
end
end
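Note that mykmeans expects observations as columns (it reads N = size(X,2)), whereas the X built above has one key point per row, so you would call it with the transpose, for example:
[min_group, mu] = mykmeans(X', 5);   % X' is 2-by-N here; 5 clusters chosen arbitrarily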

Fast approach in matlab to estimate linear regression with AR terms

I am trying to estimate regression and AR parameters for (loads of) linear regressions with AR error terms. (You could also think of this as an MA process with exogenous variables.) The model is
y_t = x_t*beta + u_t, where
u_t = rho_1*u_{t-1} + ... + rho_p*u_{t-p} + e_t, with lags of length p.
I am following the official MATLAB recommendations and use regARIMA to set up a number of regressions and to extract the regression and AR parameters (see the reproducible example below).
The problem: regARIMA is slow! For 5 regressions, MATLAB needs 14.24 sec, and I intend to run a large number of different regression models. Is there any quicker method?
y = rand(100,1);
r2 = rand(100,1);
r3 = rand(100,1);
r4 = rand(100,1);
r5 = rand(100,1);
exo = [r2 r3 r4 r5];
tic
for p = 0:4
Mdl = regARIMA(3,0,0);
[EstMdl, ~, LogL] = estimate(Mdl,y,'X',exo,'Display','off');
end
toc
Unlike the regARIMA function, which uses maximum likelihood, the Cochrane-Orcutt procedure relies on iterated OLS regressions. There are a few particularities about when this approach is valid (refer to the link posted), but for the aim of this question the approach is valid, and fast!
I modified James LeSage's code, which covers only AR lags of order 1, to cover lags of order p.
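For intuition, here is a minimal sketch of the Cochrane-Orcutt iteration for the AR(1) case, assuming y and x are already in the workspace (the function below generalizes this to AR(p)):
rho = 0;
for it = 1:100
    ystar = y(2:end)   - rho*y(1:end-1);       % quasi-differenced response
    xstar = x(2:end,:) - rho*x(1:end-1,:);     % quasi-differenced regressors
    beta  = (xstar'*xstar) \ (xstar'*ystar);   % OLS on the transformed data
    e     = y - x*beta;                        % residuals on the original data
    rho_new = (e(1:end-1)'*e(1:end-1)) \ (e(1:end-1)'*e(2:end));   % AR(1) fit of the residuals
    if abs(rho_new - rho) < 1e-4, break; end   % stop when rho has converged
    rho = rho_new;
end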
function result = olsc(y,x,arterms)
% PURPOSE: computes Cochrane-Orcutt OLS regression for AR(p) errors
%---------------------------------------------------
% USAGE: results = olsc(y,x,arterms)
% where: y = dependent variable vector (nobs x 1)
% x = independent variables matrix (nobs x nvar)
%---------------------------------------------------
% RETURNS: a structure
% results.meth = 'olsc'
% results.beta = bhat estimates
% results.rho = rho estimate
% results.tstat = t-stats
% results.trho = t-statistic for rho estimate
% results.yhat = yhat
% results.resid = residuals
% results.sige = e'*e/(n-k)
% results.rsqr = rsquared
% results.rbar = rbar-squared
% results.iter = niter x 3 matrix of [rho converg iteration#]
% results.nobs = nobs
% results.nvar = nvars
% results.y = y data vector
% --------------------------------------------------
% SEE ALSO: prt_reg(results), plt_reg(results)
%---------------------------------------------------
% written by:
% James P. LeSage, Dept of Economics
% University of Toledo
% 2801 W. Bancroft St,
% Toledo, OH 43606
% jpl@jpl.econ.utoledo.edu
% do error checking on inputs
if (nargin ~= 3); error('Wrong # of arguments to olsc'); end;
[nobs nvar] = size(x);
[nobs2 junk] = size(y);
if (nobs ~= nobs2); error('x and y must have same # obs in olsc'); end;
% ----- setup parameters
ITERMAX = 100;
converg = 1.0;
rho = zeros(arterms,1);
iter = 1;
% xtmp = lag(x,1);
% ytmp = lag(y,1);
% truncate 1st observation to feed the lag
% xlag = x(1:nobs-1,:);
% ylag = y(1:nobs-1,1);
yt = y(1+arterms:nobs,1);
xt = x(1+arterms:nobs,:);
xlag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
xlag(:,nvar*(tt-1)+1:nvar*(tt-1)+nvar) = x(arterms-tt+1:nobs-tt,:);
end
ylag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
ylag(:,tt) = y(arterms-tt+1:nobs-tt,:);
end
% setup storage for iteration results
iterout = zeros(ITERMAX,3);
while (converg > 0.0001) & (iter < ITERMAX),
% step 1, using intial rho = 0, do OLS to get bhat
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*beta;
% truncate 1st observation to account for the lag
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
% step 2, update estimate of rho using residuals
% from step 1
res_rho = (elagt'*elagt)\elagt' * et;
rho_last = rho;
rho = res_rho;
converg = sum(abs(rho - rho_last));
% iterout(iter,1) = rho;
iterout(iter,2) = converg;
iterout(iter,3) = iter;
iter = iter + 1;
end; % end of while loop
if iter == ITERMAX
% error('ols_corc did not converge in 100 iterations');
fprintf('ols_corc did not converge in 100 iterations\n');
end;
result.iter= iterout(1:iter-1,:);
% after convergence produce a final set of estimates using rho-value
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
result.beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*result.beta;
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
u = et - elagt*rho;
result.vare = std(u)^2;
result.meth = 'olsc';
result.rho = rho;
result.iter = iterout(1:iter-1,:);
% % compute t-statistic for rho
% varrho = (1-rho*rho)/(nobs-2);
% result.trho = rho/sqrt(varrho);
(I did not adapt the t-test in the last two lines for rho vectors of length p, but this should be straightforward to do.)
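A hypothetical call with the data from the question (an intercept column is added by hand, and arterms = 3 mirrors the AR(3) error structure of the regARIMA example):
y   = rand(100,1);
exo = [rand(100,1) rand(100,1) rand(100,1) rand(100,1)];
res = olsc(y, [ones(100,1) exo], 3);
res.beta    % regression coefficients (including the intercept)
res.rho     % AR coefficients of the error term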

All my weights for gradient descent become 0 on feature expansion

I have 2 features, which I expand to contain all possible combinations of the two features up to order 6. When I run MATLAB's fminunc, it returns a weight vector where all elements are 0.
The dataset is here
clear all;
clc;
data = load("P2-data1.txt");
m = length(data);
para = 0; % regularization parameter
%% Augment Feature
y = data(:,3);
new_data = newfeature(data(:,1), data(:,2), 3);
[~, n] = size(new_data);
betas1 = zeros(n,1); % initial weights
options = optimset('GradObj', 'on', 'MaxIter', 400);
[beta_new, cost] = fminunc(@(t)(regucostfunction(t, new_data, y, para)), betas1, options);
fprintf('Cost at theta found by fminunc: %f\n', cost);
fprintf('theta: \n');
fprintf(' %f \n', beta_new); % get all 0 here
% Compute accuracy on our training set
p_new = predict(beta_new, new_data);
fprintf('Train Accuracy after feature augmentation: %f\n', mean(double(p_new == y)) * 100);
fprintf('\n');
%% the functions are defined below
function g = sigmoid(z) % running properly
g = zeros(size(z));
g=ones(size(z))./(ones(size(z))+exp(-z));
end
function [J,grad] = regucostfunction(theta,x,y,para) % CalculateCost(x1,betas1,y);
m = length(y); % number of training examples
J = 0;
grad = zeros(size(theta));
hyp = sigmoid(x*theta);
err = (hyp - y)';
grad = (1/m)*(err)*x;
sum = 0;
for k = 2:length(theta)
sum = sum+theta(k)^2;
end
J = (1/m)*((-y' * log(hyp) - (1 - y)' * log(1 - hyp)) + para*(sum) );
end
function p = predict(theta, X)
m = size(X, 1); % Number of training examples
p = zeros(m, 1);
index = find(sigmoid(theta'*X') >= 0.5);
p(index,1) = 1;
end
function out = newfeature(X1, X2, degree)
out = ones(size(X1(:,1)));
for i = 1:degree
for j = 0:i
out(:, end+1) = (X1.^(i-j)).*(X2.^j);
end
end
end
data contains two columns of real values followed by a third column of 0/1 labels.
The functions used are newfeature, which returns the expanded features, and regucostfunction, which computes the cost. When I used the same approach with the original features, it worked, so I think the problem here has to do with some coding issue.
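For reference, a quick way to see what newfeature produces, here for two sample points and degree 2 (the columns are 1, x1, x2, x1^2, x1*x2, x2^2):
X1 = [1; 2];
X2 = [3; 4];
disp(newfeature(X1, X2, 2))
%  1  1  3  1  3   9
%  1  2  4  4  8  16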

Error in FDM for a coupled PDEs

Here is the code, which tries to solve coupled PDEs using the finite difference method:
clear;
Lmax = 1.0; % Maximum length
Wmax = 1.0; % Maximum width
Tmax = 2.; % Maximum time
% Parameters needed to solve the equation
K = 30; % Number of time steps
n = 3; % Number of space steps
m =30; % Number of space steps
M = 2;
N = 1;
Pr = 1;
Re = 1;
Gr = 5;
maxn=20; % The wave-front: intermediate point from which u=0
maxm = 20;
maxk = 20;
dt = Tmax/K;
dx = Lmax/n;
dy = Wmax/m;
%M = a*B1^2*l/(p*U)
b =1/(1+M*dt);
c =dt/(1+M*dt);
d = dt/((1+M*dt)*dy);
%Gr = gB*(T-T1)*l/U^2;
% Initial value of the function u (amplitude of the wave)
for i = 1:n
if i < maxn
u(i,1)=1.;
else
u(i,1)=0.;
end
x(i) =(i-1)*dx;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for j = 1:m
if j < maxm
v(j,1)=1.;
else
v(j,1)=0.;
end
y(j) =(j-1)*dy;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for k = 1:K
if k < maxk
T(k,1)=1.;
else
T(k,1)=0.;
end
z(k) =(k-1)*dt;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Value at the boundary
%for k=0:K
%end
% Implementation of the explicit method
for k=0:K % Time loop
for i=1:n % Space loop
for j=1:m
u(i,j,k+1) = b*u(i,j,k)+c*Gr*T(i,j,k+1)+d*[((u(i,j+1,k)-u(i,j,k))/dy)^(N-1)*((u(i,j+1,k)-u(i,j,k))/dy)]-d*[((u(i,j,k)-u(i,j-1,k))/dy)^(N-1)*((u(i,j,k)-u(i,j-1,k))/dy)]-d*[u(i,j,k)*((u(i,j,k)-u(i-1,j,k))/dx)+v(i,j,k)*((u(i,j+1,k)-u(i,j,k))/dy)];
v(i,j,k+1) = dy*[(u(i-1,j,k+1)-u(i,j,k+1))/dx]+v(i,j-1,k+1);
T(i,j,k+1) = T(i,j,k)+(dt/(Pr*Re))*{(T(i,j+1,k)-2*T(i,j,k)+T(i,j-1,k))/dy^2-Pr*Re{u(i,j,k)*((T(i,j,k)-T(i-1,j,k))/dx)+v(i,j,k)*((T(i,j+1,k)-T(i,j,k))/dy)}};
end
end
end
% Graphical representation of the wave at different selected times
plot(x,u(:,1),'-',x,u(:,10),'-',x,u(:,50),'-',x,u(:,100),'-')
title('graphs')
xlabel('X')
ylabel('Y')
But I am getting this error
Subscript indices must either be real positive integers or logicals.
I am trying to implement the coupled equations shown in the original post (as images), together with their boundary conditions.
Can someone please help me out!
Thanks
To be quite honest, it looks like you started with something that's way over your head, just typed everything down in one go without thinking much, and now you are surprised that it doesn't work...
In the future, please break down problems like these into waaaay smaller chunks that you can individually plot, check, test, etc. Better yet, try simpler problems first (wave equation, heat equation, ...), gradually working your way up to this.
I say this so harshly, because there were quite a number of fairly basic things wrong with your code:
you've used braces ({}) and brackets ([]) exactly as they are written in the equation. In MATLAB, braces are a constructor for a special container object called a cell array, and brackets are used to construct arrays and matrices. To group things like in the equation, you always have to use parentheses (()); see the short example after this list
You had quite a number of parentheses wrong, which became apparent when I re-grouped and broke up those huge unintelligible lines into multiple lines that humans can actually read with understanding
you forgot to take the absolute values in the 3rd and 4th terms of u
you looped over k = 0:K and j = 1:m and then happily indexed everything with k and j-1. MATLAB is 1-based, meaning the first element of anything is element 1, and indexing with 0 is an error
you've initialized 3 vectors u, v and T, but then indexed them in the loop as if they were 3D arrays
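A tiny illustration of that first point (the values are arbitrary):
c = {1, 'two'};    % braces build a cell array (a container for mixed types)
v = [1, 2, 3];     % brackets build a numeric array/matrix
r = 2*(1 + 3);     % parentheses group expressions (and also index into arrays)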
Now, I've managed to come up with the following code, which runs OK and at least more or less agrees with the equations shown. But I think it still doesn't make much sense because I get only zeros out (except for the initial values).
But, with this feedback, you should be able to correct any problems left.
Lmax = 1.0; % Maximum length
Wmax = 1.0; % Maximum width
Tmax = 2.; % Maximum time
% Parameters needed to solve the equation
K = 30; % Number of time steps
n = 3; % Number of space steps
m = 30; % Number of space steps
M = 2;
N = 1;
Pr = 1;
Re = 1;
Gr = 5;
maxn = 20; % The wave-front: intermediate point from which u=0
maxm = 20;
maxk = 20;
dt = Tmax/K;
dx = Lmax/n;
dy = Wmax/m;
%M = a*B1^2*l/(p*U)
b = 1/(1+M*dt);
c = dt/(1+M*dt);
d = dt/((1+M*dt)*dy);
%Gr = gB*(T-T1)*l/U^2;
% Initial value of the function u (amplitude of the wave)
u = zeros(n,m,K+1);
x = zeros(n,1);
for i = 1:n
if i < maxn
u(i,1)=1.;
else
u(i,1)=0.;
end
x(i) =(i-1)*dx;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
v = zeros(n,m,K+1);
y = zeros(m,1);
for j = 1:m
if j < maxm
v(1,j,1)=1.;
else
v(1,j,1)=0.;
end
y(j) =(j-1)*dy;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
T = zeros(n,m,K+1);
z = zeros(K,1);
for k = 1:K
if k < maxk
T(1,1,k)=1.;
else
T(1,1,k)=0.;
end
z(k) =(k-1)*dt;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Value at the boundary
%for k=0:K
%end
% Implementation of the explicit method
for k = 2:K % Time loop
for i = 2:n % Space loop
for j = 2:m-1
u(i,j,k+1) = b*u(i,j,k) + ...
c*Gr*T(i,j,k+1) + ...
d*(abs(u(i,j+1,k) - u(i,j ,k))/dy)^(N-1)*((u(i,j+1,k) - u(i,j ,k))/dy) - ...
d*(abs(u(i,j ,k) - u(i,j-1,k))/dy)^(N-1)*((u(i,j ,k) - u(i,j-1,k))/dy) - ...
d*(u(i,j,k)*((u(i,j ,k) - u(i-1,j,k))/dx) +...
v(i,j,k)*((u(i,j+1,k) - u(i ,j,k))/dy));
v(i,j,k+1) = dy*(u(i-1,j,k+1)-u(i,j,k+1))/dx + ...
v(i,j-1,k+1);
T(i,j,k+1) = T(i,j,k) + dt/(Pr*Re) * (...
(T(i,j+1,k) - 2*T(i,j,k) + T(i,j-1,k))/dy^2 - Pr*Re*(...
u(i,j,k)*((T(i,j,k) - T(i-1,j,k))/dx) + v(i,j,k)*((T(i,j+1,k) - T(i,j,k))/dy))...
);
end
end
end
% Graphical representation of the wave at different selected times
figure, hold on
plot(x, u(:, 1), '-',...
x, u(:, 10), '-',...
x, u(:, 50), '-',...
x, u(:,100), '-')
title('graphs')
xlabel('X')
ylabel('Y')

Issue changing a script to a function in MATLAB

I am trying to pass a value (or a set of values) of 'rho_real' to this code and receive P_Mpa as an output (or a set of outputs); however, the initial 'rho_real' does not seem to get changed when I run the function.
Any help is greatly appreciated.
function P_Mpa = Density_vs_Pressure_Ethane (rho_real)
% calculating Pressure vs Density curve using
mi = [1.6069,1.6069];
sigmai = [3.5206,3.5206];
% epsilon = [2.873268218e-21,2.873268218e-21];
epsilon_ki = [191.42,191.42];
xi = [0.5,0.5]; % mole fraction of component i
T = 298.15; % temperature of the system
% k_bolt = 1.3806488e-23; % Boltzmann constant = 1.3806488 × 10-23 m2 kg s-2 K-1
% Initial guess for density
rho_real = 10000;
rho = rho_real*6.022e-7;
% Initial Kij
Kij = 0;
% Temperature-dependent segment diameter di of component i - matrix save
di = zeros(1,2);
giihs = zeros (1,2);
rho_giihs = zeros (1,2);
m_bar = 0;
zi0=0;
zi1=0;
zi2=0;
zi3=0;
for i=1:2
m_bar=m_bar+((xi(1,i))*(mi(1,i)));
di(1,i) = (sigmai(1,i))*(1-(0.12*exp(-3*(epsilon_ki(1,i))/T)));
% Calculating the zeta terms for the hard-sphere compressibility (rechecked)
zi0 = zi0+((pi/6)*rho)*(xi(1,i)*(mi(1,i))*((di(1,i))^0));
zi1 = zi1+((pi/6)*rho)*(xi(1,i)*(mi(1,i))*((di(1,i))^1));
zi2 = zi2+((pi/6)*rho)*(xi(1,i)*(mi(1,i))*((di(1,i))^2));
zi3 = zi3+((pi/6)*rho)*(xi(1,i)*(mi(1,i))*((di(1,i))^3));
end
for i=1:2
giihs (1,i)= (1/(1-zi3))+((((di(1,i))^2)/(2*(di(1,i))))*(3*zi2)/((1-zi3)^2))+((di(1,i)^2)/(2*(di(1,i)^2))^2)*((2*(zi2)^2)/((1-zi3)^3));
% Derivative of site-to-site radial distribution function of hard spheres
% with respect to density
rho_giihs (1,i) = ((zi3/((1-zi3)^2)))+(((((di(1,i))^2)/(2*(di(1,i)))))*...
((((3*zi2)/((1-zi3)^2)))+((6*zi2*zi3)/((1-zi3)^3))))+...
(((((di(1,i))^2)/(2*(di(1,i))))^2)*(((4*(zi2^2))/((1-zi3)^3))+...
(((6*zi2^2)*zi3)/((1-zi3)^4))));
end
eta = zi3;
% Calculating the hard sphere compressibility factor (rechecked)
zhs = (zi3/(1-zi3)+((3*zi1*zi2)/((zi0*(1-zi3)^2)))+(((3*(zi2^3))-(zi3*(zi2^3)))/(zi0*((1-(zi3)^3)))));
zhc=0;
for i = 1:2
% Calculating the hard chain compressibility factor (Rechecked)
zhc= zhc+((xi(1,i)*(mi(1,i)-1)*((giihs(1,i))^-1)*((rho_giihs(1,i)))));
end
zhc=m_bar*zhs-zhc;
% a0 constants
a0 = [0.9105631445,0.6361281449,2.6861347891,-26.547362491,97.759208784,-159.59154087,91.297774084]; % Checked and Correct Sadowski 2001
% a1 constants
a1 = [-0.3084016918,0.1860531159,-2.5030047259,21.419793629,-65.255885330,83.318680481,-33.746922930]; % Checked and Correct Sadowski 2001
% a2 constants
a2 = [-0.0906148351,0.4527842806,0.5962700728,-1.7241829131,-4.1302112531,13.776631870,-8.6728470368]; % Checked and Correct Sadowski 2001
% b0 constants
b0 = [0.7240946941,2.2382791861,-4.0025849485,-21.003576815,26.855641363,206.55133841,-355.60235612]; % Checked and Correct Sadowski 2001
% b1 constants
b1 = [-0.5755498075,0.6995095521,3.8925673390,-17.215471648,192.67226447,-161.82646165,-165.20769346]; % Checked and Correct Sadowski 2001
% b2 constants
b2 = [0.0976883116,-0.2557574982,-9.1558561530,20.642075974,-38.804430052,93.626774077,-29.666905585];
aim = zeros(1,7);
bim = zeros(1,7);
for i = 1:7
aim(1,i)=a0(1,i)+(((m_bar-1)/m_bar)*a1(1,i))+(((m_bar-1)/m_bar)*((m_bar-2)/m_bar))*a2(1,i); % Checked and Correct Sadowski 2001 (rechecked)
bim(1,i)=b0(1,i)+(((m_bar-1)/m_bar)*b1(1,i))+(((m_bar-1)/m_bar)*((m_bar-2)/m_bar))*b2(1,i);
end % close the for i = 1:7 loop
c1 = ((1 + (m_bar*(((8*eta)-(2*eta^2))/(1-eta)^4)+((1-m_bar)*(((20*eta)-(27*eta^2)+(12*eta^3)-(2*eta^4))/(((1-eta)*(2-eta))^2)))))^-1);
c2 = ((-c1^2)*(m_bar*(((-4*eta^2)+(20*eta)+8)/((1-eta)^5))+((1-m_bar)*(((2*eta^3)+(12*eta^2)-(48*eta)+40)/(((1-eta)*(2-eta))^3)))));
% Integrals of perturbation theory without/with respect to eta (rechecked)
dI1 = 0;
dI2 = 0;
I1 = 0;
I2 = 0;
for j = 1:7
I1 = I1+ aim(1,j)*eta^(j-1);
I2 = I2+ bim(1,j)*eta^(j-1);
dI1 = dI1 + (aim(1,j)*((j-1)+1)*(eta^(j-1)));
dI2 = dI2 + (bim(1,j)*((j-1)+1)*(eta^(j-1)));
end
% Segment abbreviations
m2e = 0;
m2e2 = 0;
sigma_ij = zeros(1,2);
epsilon_ij = zeros (1,2);
for i = 1:2
for j = 1:2
% Mixing rules for sigma and eta respectively.
sigma_ij(i,j)=0.5*(sigmai(1,i)+sigmai(1,j));
epsilon_ij(i,j)=sqrt((epsilon_ki(1,i)*epsilon_ki(1,j))*(1-Kij));
m2e=m2e+(xi(1,i)*xi(1,j)*mi(1,i)*mi(1,j)*((epsilon_ij(i,j)/(T)))*(sigma_ij(i,j)^3));
m2e2=m2e2+(xi(1,i)*xi(1,j)*mi(1,i)*mi(1,j)*(((epsilon_ij(i,j)/(T)))^2)*(sigma_ij(i,j)^3));
end
end
% The dispersion contribution to the compressibility factor
zdis = (-2*pi*rho*dI1*m2e)-(pi*rho*m_bar*((c1*dI2)+(c2*eta*I2))*m2e2);
% Compressibility factor (rechecked)
z = 1 + zhc + zdis;
zdis;
zhc;
P = z*8.314*T*rho/6.022e-7; % (rechecked)
P_Mpa = P*1e-6;
In the line below % Initial guess for density, you set the value of rho_real to 10000. From then on, rho_real will of course be 10000 no matter what the input argument was. If you remove this line, your function will behave like your script, with the variable rho_real defined by the input argument.
MATLAB does have a way of providing default values for arguments. Your function could test how many arguments were provided; if rho_real was left out, i.e. if the number of arguments nargin is less than 1, only then do you set rho_real:
if nargin < 1
    rho_real = 10000;
end
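On newer MATLAB releases (R2019b and later), an arguments block is another way to declare a default value; a minimal sketch of how the function header could look:
function P_Mpa = Density_vs_Pressure_Ethane(rho_real)
    arguments
        rho_real (1,1) double = 10000   % default used when no input is given
    end
    % ... rest of the function body unchanged ...
end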