MATLAB error: Error using - Matrix dimensions must agree

I am trying to write code to classify data. I implemented a sigmoid function and then use it when calculating the cost. I keep getting errors, and I have a feeling it is because of the sigmoid function. I would like the sigmoid function to return a vector, but it keeps returning a scalar.
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
% g = SIGMOID(z) computes the sigmoid of z.
% You need to return the following variables correctly
g=zeros(size(z));
m=ones(size(z));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
% vector or scalar).
g=1/(m+exp(-z));
This is my cost function:
m = length(y); % number of training examples
% You need to return the following variables correctly
grad=(1/m)*((X*(sigmoid(X*theta)-y))); % this is the derivative in gradient descent
J=(1/m)*(-(transpose(y)*log(sigmoid((X*theta))))-(transpose(1-y)*log(sigmoid((X*theta))))); % this is the cost function
The dimensions of X are 100x4; theta is 4x1; y is 100x1.
Thank you.
Errors:
Program paused. Press enter to continue.
sigmoid answer: 0.500000
Error using -
Matrix dimensions must agree.
Error in costFunction (line 11)
grad=(1/m)*((X*(sigmoid(X*theta)-y)));
Error in ex2 (line 69)
[cost, grad] = costFunction(initial_theta, X, y);

Please replace g=1/(m+exp(-z)); with g=1./(m+exp(-z)); in your sigmoid function. A plain / is matrix right division (mrdivide), not elementwise division, so 1/(m+exp(-z)) does not return one value per element of z; the elementwise operator ./ does, and g then keeps the size of z. For example, with a matrix input:
z = [2,3,4;5,6,7] ;
%SIGMOID Compute sigmoid function
% g = SIGMOID(z) computes the sigmoid of z.
% You need to return the following variables correctly
g=zeros(size(z));
m=ones(size(z));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
% vector or scalar).
g=1./(m+exp(-z));
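Two further notes. First, the ones matrix m is unnecessary: the scalar 1 already expands elementwise, so the whole function can be reduced to a sketch like this:
function g = sigmoid(z)
%SIGMOID Compute sigmoid function elementwise
%   Works for scalars, vectors and matrices.
g = 1./(1 + exp(-z)); % scalar 1 expands elementwise; no ones(size(z)) needed
end
Second, assuming the usual logistic-regression formulas, the gradient line will still fail after the sigmoid fix, because X is 100x4 while sigmoid(X*theta)-y is 100x1; you most likely want the transpose of X, and the second log term of J normally uses 1-sigmoid:
grad = (1/m) * X' * (sigmoid(X*theta) - y); % (4x100)*(100x1) -> 4x1
J = (1/m) * ( -y'*log(sigmoid(X*theta)) - (1-y)'*log(1 - sigmoid(X*theta)) );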


Interpolation using Chebyshev points

Interpolate the Runge function of Example 10.6 at Chebyshev points for n from 10 to 170
in increments of 10. Calculate the maximum interpolation error on the uniform evaluation
mesh x = -1:.001:1 and plot the error vs. polynomial degree as in Figure 10.8 using
semilogy. Observe spectral accuracy.
The Runge function is given by: f(x) = 1 / (1 + 25x^2)
My code so far:
x = -1:0.001:1;
n = 170;
i = 10:10:170;
cx = cos(((2*i + 1)/(2*(n+1)))*pi); %chebyshev pts
y = 1 ./ (1 + 25*x.^2); %true fct
%chebyshev polynomial, don't know how to construct using matlab
yc = polyval(c, x); %graph of approx polynomial fct
plot(x, yc);
mErr = (1 / ((2.^n).*(n+1)!))*%n+1 derivative of f evaluated at max x in [-1,1], not sure how to do this
%plotting stuff
I know very little MATLAB, so I am struggling to construct the interpolating polynomial. I did some searching, but I was confused by the available functions, as I didn't find one that simply takes in the points and the polynomial to be interpolated. I am also a bit confused about whether, in this case, I should be using i = 0:1:n with n = 10:10:170, or whether n is fixed here. Any help is appreciated, thank you.
Since you know very little about MATLAB, I will try to explain everything step by step:
First, to visualize the Runge function, you can type:
f = @(x) 1./(1+25*x.^2); % Runge function
% plot Runge function over [-1,1];
x = -1:1e-3:1;
y = f(x);
figure;
plot(x,y); title('Runge function'); xlabel('x'); ylabel('y');
The @(x) part of the code is a function handle, a very useful feature of MATLAB. Notice the function is properly vectorized, so it can receive a scalar or an array as its argument. The plot function is straightforward.
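For example, you can evaluate the handle directly, at a scalar or at an array (a quick check, using the f defined above):
f(0)       % returns 1
f([0 0.2]) % returns [1 0.5], computed elementwise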
To understand the Runge phenomenon, consider a linearly spaced vector of [-1,1] of 10 elements and use these points to obtain the interpolating (Lagrange) polynomial. You get the following:
% 10 linearly spaced points
xc = linspace(-1,1,10);
yc = f(xc);
p = polyfit(xc,yc,9); % gives the coefficients of the polynomial of degree 9
hold on; plot(xc,yc,'o',x,polyval(p,x));
The polyfit function does a polynomial curve fitting - it obtains the coefficients of the interpolating polynomial, given the poins x,y and the degree of the polynomial n. You can easily evaluate the polynomial at other points with the polyval function.
Observe that, close to the endpoints of the domain, you get an oscillating polynomial, and the interpolation is not a good approximation of the function. In fact, you can plot the absolute error, comparing the value of the function f(x) and the interpolating polynomial p(x):
plot(x,abs(y-polyval(p,x))); xlabel('x');ylabel('|f(x)-p(x)|');title('Error');
This error can be reduced if, instead of using a linearly spaced vector, you use other points for the interpolation. A good choice is the Chebyshev nodes, which should reduce the error. Indeed, notice that:
% find 10 Chebyshev nodes and mark them on the plot
n = 10;
k = 1:n; % node index
xc = cos((2*k-1)/2/n*pi); % Chebyshev nodes
yc = f(xc); % function evaluated at Chebyshev nodes
hold on;
plot(xc,yc,'o')
% find polynomial to interpolate data using the Chebyshev nodes
p = polyfit(xc,yc,n-1); % gives the coefficients of the polynomial of degree n-1 = 9
plot(x,polyval(p,x),'--'); % plot polynomial
legend('Runge function','Chebyshev nodes','interpolating polynomial','location','best')
Notice how the error is reduced close to the endpoints of the domain. You no longer get the highly oscillatory behaviour of the interpolating polynomial. If you plot the error, you will observe:
plot(x,abs(y-polyval(p,x))); xlabel('x');ylabel('|f(x)-p(x)|');title('Error');
If you now change the number of Chebyshev nodes, you will get an even better approximation. A small modification of the code lets you run it again for different numbers of nodes. You can store the maximum error and plot it as a function of the number of nodes:
n=1:20; % number of nodes
% pre-allocation for speed
e_ln = zeros(1,length(n)); % error for the linearly spaced interpolation
e_cn = zeros(1,length(n)); % error for the chebyshev nodes interpolation
for ii=1:length(n)
% linearly spaced vector
x_ln = linspace(-1,1,n(ii)); y_ln = f(x_ln);
p_ln = polyfit(x_ln,y_ln,n(ii)-1);
e_ln(ii) = max( abs( y-polyval(p_ln,x) ) );
% Chebyshev nodes
k = 1:n(ii); x_cn = cos((2*k-1)/2/n(ii)*pi); y_cn = f(x_cn);
p_cn = polyfit(x_cn,y_cn,n(ii)-1);
e_cn(ii) = max( abs( y-polyval(p_cn,x) ) );
end
figure
plot(n,e_ln,n,e_cn);
xlabel('no of points'); ylabel('maximum absolute error');
legend('linearly spaced','Chebyshev nodes','location','best')
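To match the original assignment (degrees n = 10 to 170 in steps of 10, plotted with semilogy), the same loop idea applies. Here is a sketch, reusing f, x and y from above. Be aware that polyfit will warn that such high-degree fits are badly conditioned, and in double precision the monomial-basis fit loses accuracy well before degree 170 (a barycentric interpolation formula would be more stable), but this keeps the code close to the answer above:
nn = 10:10:170;               % polynomial degrees required by the assignment
e_cn = zeros(size(nn));
for ii = 1:length(nn)
    n = nn(ii);
    k = 1:n+1;                             % n+1 Chebyshev nodes for degree n
    x_cn = cos((2*k-1)/(2*(n+1))*pi);      % Chebyshev nodes on [-1,1]
    y_cn = f(x_cn);
    p_cn = polyfit(x_cn, y_cn, n);
    e_cn(ii) = max(abs(y - polyval(p_cn, x)));
end
figure;
semilogy(nn, e_cn, 'o-');
xlabel('polynomial degree n'); ylabel('maximum absolute error');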

Demeaned Returns for Covariance (Matlab)

I've got this code:
function [sigma,shrinkage]=covMarket(x,shrink)
% function sigma=covmarket(x)
% x (t*n): t iid observations on n random variables
% sigma (n*n): invertible covariance matrix estimator
%
% This estimator is a weighted average of the sample
% covariance matrix and a "prior" or "shrinkage target".
% Here, the prior is given by a one-factor model.
% The factor is equal to the cross-sectional average
% of all the random variables.
% The notation follows Ledoit and Wolf (2003)
% This version: 04/2014
% de-mean returns
t=size(x,1);
n=size(x,2);
meanx=mean(x);
x=x-meanx(ones(t,1),:);
xmkt=mean(x')';
sample=cov([x xmkt])*(t-1)/t;
covmkt=sample(1:n,n+1);
varmkt=sample(n+1,n+1);
sample(:,n+1)=[];
sample(n+1,:)=[];
prior=covmkt*covmkt'./varmkt;
prior(logical(eye(n)))=diag(sample);
if (nargin < 2 | shrink == -1) % compute shrinkage parameters
c=norm(sample-prior,'fro')^2;
y=x.^2;
p=1/t*sum(sum(y'*y))-sum(sum(sample.^2));
% r is divided into diagonal
% and off-diagonal terms, and the off-diagonal term
% is itself divided into smaller terms
rdiag=1/t*sum(sum(y.^2))-sum(diag(sample).^2);
z=x.*xmkt(:,ones(1,n));
v1=1/t*y'*z-covmkt(:,ones(1,n)).*sample;
roff1=sum(sum(v1.*covmkt(:,ones(1,n))'))/varmkt...
-sum(diag(v1).*covmkt)/varmkt;
v3=1/t*z'*z-varmkt*sample;
roff3=sum(sum(v3.*(covmkt*covmkt')))/varmkt^2 ...
-sum(diag(v3).*covmkt.^2)/varmkt^2;
roff=2*roff1-roff3;
r=rdiag+roff;
% compute shrinkage constant
k=(p-r)/c;
shrinkage=max(0,min(1,k/t))
else % use specified number
shrinkage = shrink;
end
% compute the estimator
sigma=shrinkage*prior+(1-shrinkage)*sample;
end
It's part of the MATLAB code from Ledoit/Wolf (2003). I don't understand why the returns are demeaned before calculating the covariance. Is this MATLAB-specific? In my opinion, there is no need to demean the returns before calling the cov function (the function does it on its own).
Thanks for the help in advance!
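One thing worth noting (it is not MATLAB-specific): cov does indeed demean internally, which you can check directly:
x = randn(100, 5);
xd = x - repmat(mean(x), size(x, 1), 1); % explicitly demeaned copy
norm(cov(x) - cov(xd))                   % essentially zero: cov is shift-invariant
However, in this function the demeaned x is reused further down, outside of cov: for example in y = x.^2 and z = x.*xmkt(:,ones(1,n)) when estimating the shrinkage intensity. Those higher-moment terms are not demeaned automatically by anything, so the explicit demeaning at the top is not redundant, even though cov alone would not need it.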

How to integrate this function in MATLAB

I'm new to MATLAB.
How can I integrate this code?
p2= polyfit(x,y,length(x));
from= x(1);
to= x(length(x));
I need the integral of p2.
I tried using the integral function:
value = integral(p2,from,to);
but I got
Error using integral (line 82) First input argument must be a function
handle.
Error in poly_integral (line 5)
value = integral(p2,from,to);
That is because p2, in your code, is not a function. It is just a vector of coefficients. The first argument to integral needs to be a handle to the function that you want to integrate.
Judging from your code, it seems that you want to define a function that evaluates the polynomial p2. If so, you could do something like the following example:
% take an example set of x and y
x = linspace(0, pi, 1000); % uniform samples between 0 to pi
y = sin(x); % assume, for sake of example, output is sine function of input
% polynomial fit
p2 = polyfit(x,y,4); % 4th order polynomial
% Note that, in general, the order should be much smaller than length(x).
% So you probably should review this part of your code as well.
% define a function to evaluate the polynomial
fn = @(x) polyval(p2, x);
% this means: fn(x0) is the same as polyval(p2, x0)
% compute integral
value = integral(fn,x(1),x(end));
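For this toy example, the result should come out close to 2, the exact value of the integral of sin(x) from 0 to pi, up to the error of the fourth-order polynomial fit.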
You can also use the polyint function to get the coefficients of the antiderivative, which lets you integrate the polynomial exactly:
p2 = polyfit(x,y,length(x));
int = diff(polyval(polyint(p2),x([1 end])));
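Using the example data from the answer above, a quick sketch of this exact approach (with a small polynomial degree, per the earlier note):
x = linspace(0, pi, 1000);
y = sin(x);
p2 = polyfit(x, y, 4);                          % keep the degree much smaller than length(x)
P = polyint(p2);                                % coefficients of the antiderivative
value = polyval(P, x(end)) - polyval(P, x(1));  % same as diff(polyval(P, x([1 end])))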

Confidence intervals for linear curve fit under constraints in MATLAB

I have fitted a straight line to a dataset with 68 samples, under the constraint that the line passes through (x0,y0) using the function lsqlin in MATLAB. How can I find the confidence intervals for this?
My code (Source):
I import the dataset containing x and y vectors from a mat file, which also contains the values of constraints x0 and y0.
n = 1; % Degree of polynomial to fit
V(:,n+1) = ones(length(x),1,class(x)); %V=Vandermonde matrix for 'x'
for j = n:-1:1
V(:,j) = x.*V(:,j+1);
end
d = y; % 'd' is the vector of target values, 'y'.
% There are no inequality constraints in this case, i.e.,
A = [];b = [];
% We use linear equality constraints to force the curve to hit the required point. In
% this case, 'Aeq' is the Vandermonde matrix for 'x0'
Aeq = x0.^(n:-1:0);
% and 'beq' is the value the curve should take at that point
beq = y0;
%%
[p, resnorm, residual, exitflag, output, lambda] = lsqlin(V, d, A, b, Aeq, beq);
%%
% We can then use POLYVAL to evaluate the fitted curve
yhat = polyval( p, x );
The function bootci can be used to find confidence intervals when using lsqlin. Here's how it can be used:
ci = bootci(1000, {@(x,y) func(x,y), x, y}, 'type', 'student');
The first argument is the number of bootstrap resamples to draw; it is not tied to the number of data points (a few hundred to a few thousand is typical).
The function in the second argument computes the statistic for which you want confidence intervals. Here that statistic is the vector of coefficients of our fitted line, so func(x,y) should return the regression coefficients produced by lsqlin. The inputs to this function are the dataset vectors x and y.
The 'type' name-value pair selects how the confidence interval is computed (here, studentized). To judge which type is appropriate, it helps to look at the distribution of the residuals, for example with a histogram:
histogram(residual);
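In case it helps, here is a sketch of what func could look like for the constrained fit above; the name fitCoeffs and the fixed degree are just one possible way to wrap the lsqlin call (put it in its own file, or as a local function):
function p = fitCoeffs(x, y, x0, y0)
% Fit a straight line to (x, y), constrained to pass through (x0, y0).
n = 1;                            % degree of the polynomial
V = [x(:), ones(length(x), 1)];   % Vandermonde matrix for degree 1
Aeq = x0.^(n:-1:0);               % equality constraint: the line hits (x0, y0)
beq = y0;
p = lsqlin(V, y(:), [], [], Aeq, beq);
end
% and then, for example:
ci = bootci(1000, {@(x,y) fitCoeffs(x, y, x0, y0), x, y}, 'type', 'student');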

Regression in MATLAB

I have this MATLAB code for regression with one independent variable, but what if I have two independent variables (x1 and x2)? How should I modify this code for polynomial regression?
x = linspace(0,10,200)'; % independent variable
y = x + 1.5*sin(x) + randn(size(x,1),1); % dependent variable
A = [x.^0, x]; % design matrix: a column of ones and a column of x
w = (A'*A)\(A'*y); % solve the normal equation
y2 = A*w; % restore the dependent variable
r = y-y2; % find the vector of regression residuals
plot(x, [y y2]);
MATLAB has a built-in function for polynomial regression, polyfit. Have you tried that?
http://www.mathworks.com/help/techdoc/data_analysis/f1-8450.html
http://www.mathworks.com/help/toolbox/stats/bq_676m-2.html#bq_676m-3
But if you want to work out your own formulation, you should probably look at a textbook or some online resources on regression, e.g.
http://www.edwardtufte.com/tufte/dapp/DAPP3a.pdf
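If you want to extend the normal-equation code from the question to two regressors, the only change is adding one column per variable to the design matrix. A sketch with made-up data for x1 and x2:
x1 = linspace(0, 10, 200)';               % first independent variable
x2 = rand(200, 1) * 5;                    % second independent variable (example data)
y  = 2 + 0.7*x1 - 1.2*x2 + randn(200, 1); % dependent variable
A  = [ones(size(x1)), x1, x2];            % design matrix: intercept plus two regressors
w  = (A'*A) \ (A'*y);                     % normal equations; w = [intercept; b1; b2]
y2 = A*w;                                 % fitted values
r  = y - y2;                              % residuals
For a genuinely polynomial model in two variables, you would also append columns such as x1.^2, x1.*x2 and x2.^2 to A.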