Solving least squares using MATLAB

Assume we want to determine the coefficients of a polynomial that approximates the tangent function between 0 and 1, as follows:
-A is an m×n Vandermonde matrix. Its entries are populated using m values between 0 and 1 (given as input).
-The corresponding vector b is calculated using the tangent function.
-x is calculated by typing x = A\b in MATLAB.
Now, using MATLAB, the computed x is substituted into Ax. The result is plotted and is pretty close to the tangent function. But if I use the polyval function with degree n−1 (in MATLAB) to calculate b, the resulting plot is significantly different from the original b. I cannot understand the reason for such a significant difference between the results of these two methods.
Here is the code:
clear all;
format long;
m = 60;                    % number of sample points
n = 11;                    % number of polynomial coefficients
t = linspace(0,1,m);       % sample points in [0,1]
A = fliplr(vander(t));
A = A(:,1:n);              % keep the first n columns
b = tan(t');               % right-hand side from the tangent function
x = A\b;                   % least-squares solution
y = polyval(x, t);         % evaluate via polyval
plot(t,y,'r')
y2 = A*x;                  % evaluate directly from the fit
hold on;
plot(t,y2,'g.');
hold on;
plot(t,tan(t),'--b');
Any insight would be appreciated. Thank you.

After A = fliplr(vander(t)), the matrix A is equal to
1 t(1) t(1)^2 ...
1 t(2) t(2)^2 ...
...
1 t(m) t(m)^2 ...
That ordering is not what polyval expects, because polyval takes the coefficients in descending powers. You don't need to flip the columns of A; keep vander's descending order and take the last n columns instead:
A= vander(t);
A= A(:,end-n+1:end);
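For completeness, here is a minimal sketch (assuming the same m, n, t and b as in the question) showing the two equivalent fixes: build A in descending powers as above, or keep the ascending-power matrix and simply reverse the coefficient vector before calling polyval.
% Minimal sketch, assuming the same m, n, t and b as in the question.
A_desc = vander(t);
A_desc = A_desc(:, end-n+1:end);   % powers n-1 down to 0 (descending)
x_desc = A_desc \ b;               % already in the order polyval expects
y_desc = polyval(x_desc, t);
% Alternative: keep the ascending-power matrix and flip the coefficients.
A_asc = fliplr(vander(t));
A_asc = A_asc(:, 1:n);             % powers 0 up to n-1 (ascending)
x_asc = A_asc \ b;
y_asc = polyval(flipud(x_asc), t); % flipud reverses the column of coefficients
% Both y_desc and y_asc now agree with the least-squares fit A_asc*x_asc (up to rounding).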

Related

Construction of a cubic spline with natural boundary conditions without built-in MATLAB functions

As part of a project, I have to construct a cubic spline with natural boundary conditions without using any built-in MATLAB functions such as spline or csape.
I tried programming the following function.
While I'm pretty sure it's correct up to the point where it calculates the coefficients q, I can't figure out how to eventually get the actual cubic polynomials. What I am getting right now as an output when calling the function is 9 distinct values for S.
Any help or hints would be appreciated.
function S=cubic_s(x,y)
n=length(x);
%construction of the tri-diagonal matrix
for j=1:n
V(j,1)=1;
V(j,2)=4;
V(j,3)=1;
end
%the first row should be (1,0,...,0) and the last (0,0,...,0,1)
V(1,2)=1; V(n,2)=1; V(2,3)=0; V(n-1,1)=0;
d=[-1 0 1];
A=spdiags(V,d,n,n);
%construction of the vector b
b=zeros(n,1);
%the first and last elements of b must equal 0
b(1)=0; b(n)=0;
%distance between two consecutive points
h=(x(n)-x(1))/(n-1);
for j=2:n-1
b(j,1)=(6/h^2)*(y(j+1)-2*y(j)+y(j-1));
end
%solving for the coefficients q
q=A\b;
%finding the polynomials with the formula for the cubic spline
for j=1:n-1
for z=x(j):0.01:x(j+1)
S(j)=(q(j)/(6*h))*(x(j+1)-z)^3+(q(j+1)/(6*h))*(z-x(j))^3+(z-x(j))* (y(j+1)/h-(q(j+1)*h)/6)+(x(j+1)-z)*(y(j)/h-(q(j)*h)/6);
end
end
You should save S at every z step; see the picture and the code below.
function plot_spline
x = 0:10;
y = [1 4 3 7 1 5 2 1 6 2 3];
[XX,YY] = cubic_s(x,y);
plot(x,y,'*r', XX,YY,'-k')
function [XX,YY]=cubic_s(x,y)
n=length(x);
%construction of the tri-diagonal matrix
for j=1:n
V(j,1)=1;
V(j,2)=4;
V(j,3)=1;
end
%the first row should be (1,0,...,0) and the last (0,0,...,0,1)
V(1,2)=1; V(n,2)=1; V(2,3)=0; V(n-1,1)=0;
d=[-1 0 1];
A=spdiags(V,d,n,n);
%construction of the vector b
b=zeros(n,1);
%the first and last elements of b must equal 0
b(1)=0; b(n)=0;
%distance between two consecutive points
h=(x(n)-x(1))/(n-1);
for j=2:n-1
b(j,1)=(6/h^2)*(y(j+1)-2*y(j)+y(j-1));
end
%solving for the coefficients q
q=A\b;
%finding the polynomials with the formula for the cubic spline
enum = 1;
for j=1:n-1
for z=x(j):0.01:x(j+1)
YY(enum)=(q(j)/(6*h))*(x(j+1)-z)^3+(q(j+1)/(6*h))*(z-x(j))^3+(z-x(j))* (y(j+1)/h-(q(j+1)*h)/6)+(x(j+1)-z)*(y(j)/h-(q(j)*h)/6);
XX(enum)=z;
enum = enum+1;
end
end

Plot cubic roots in MATLAB

I would like to plot the roots of the cubic equation x^3 + A*x^2 + 1 = 0 in MATLAB. I know that there are 3 real roots for A < -1.88 and 1 real root for A > -1.88. I would like to plot the 3 real roots as a function of A and, when the equation switches to 1 real root and 2 complex ones, plot the real root and the real part of the complex conjugate solutions, all in the same plot (perhaps as 2-3 graphs).
I am a MATLAB beginner, though. I tried
syms x A
r = solve(x^3 + A*x^2+1 == 0, x);
ezplot(vpa(r(1)),[-10,10])
ezplot(vpa(r(2)),[-10,10])
ezplot(vpa(r(3)),[-10,10])
but vpa doesn't know how to numerically evaluate r.
There's no need to do symbolic math for this:
A = (-3:0.01:0)'; % create a vector of values for A
r = arrayfun(@(A)real(roots([1 A 0 1])),A,'uni',false); % calculate the polynomial roots for all values of A
r = [r{:}]; % convert result to a numeric array
plot(A,r'); % plot the result
grid on;
title('Real parts of Polynomial');
xlabel('A');
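As a hedged follow-up sketch (not part of the original answer): plotting the imaginary parts alongside the real parts makes it easy to see where the switch from three real roots to one happens, around A ≈ -1.88. Note that roots does not return the roots in a fixed order, so the curves may swap branches.
A = (-3:0.01:0)';                                 % sweep of A values
R = arrayfun(@(a) roots([1 a 0 1]).', A, 'uni', false);
R = cell2mat(R);                                  % one row of three roots per value of A
plot(A, real(R), 'b', A, imag(R), 'r--');         % solid: real parts, dashed: imaginary parts
grid on; xlabel('A');
title('Roots of x^3 + A x^2 + 1: real (solid) and imaginary (dashed) parts');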

Transform random draws from a bivariate normal into the unit square in Matlab

I have an n×2 matrix r in MATLAB holding n draws from a bivariate normal distribution:
n = 10000;
m1 = 0.3;
m2 = -m1;
v1 = 0.2;
v2 = 2;
rho=0.5;
mu = [m1, m2];
sigma = [v1,rho*sqrt(v1)*sqrt(v2);rho*sqrt(v1)*sqrt(v2),v2];
r = mvnrnd(mu,sigma,n);
I want to normalise these draws to the unit square [0,1]^2
First option
rmax1=max(r(:,1));
rmin1=min(r(:,1));
rmax2=max(r(:,2));
rmin2=min(r(:,2));
rnew=zeros(n,2);
for i=1:n
rnew(i,1)=(r(i,1)-rmin1)/(rmax1-rmin1);
rnew(i,2)=(r(i,2)-rmin2)/(rmax2-rmin2);
end
Second option
rmin1, rmax1, rmin2, rmax2 may be quite variable due to the sampling process. An alternative is to apply the 68–95–99.7 rule, and I am asking for some help on how to generalise it to a bivariate normal (in particular, Step 1 below). Here's my idea:
%Step 1: transform the draws in r into draws from a bivariate normal
%with variance-covariance matrix equal to the 2x2 identity matrix
%and mean equal to mu
%How?
%Let t be the transformed vector
%Step 2: apply the 68–95–99.7 rule to each column of t
tmax1=mu(1)+3*1;
tmin1=mu(1)-3*1;
tmax2=mu(2)+3*1;
tmin2=mu(2)-3*1;
tnew=zeros(n,2);
for i=1:n
tnew(i,1)=(t(i,1)-tmin1)/(tmax1-tmin1);
tnew(i,2)=(t(i,2)-tmin2)/(tmax2-tmin2);
end
%Step 3: discard potential values (very few) outside [0,1]
In your case the x and y coordinates of the random vector are correlated, so it's not just a transformation in x and in y independently. You first need to rotate your samples so that x and y become uncorrelated (the covariance matrix then becomes diagonal; you don't need it to be the identity, since you normalize later anyway). Then you can apply the transformation you call the "second option" to the new x and y independently. In short, you need to diagonalize the covariance matrix.
As a side note, your code adds/subtracts 3 times 1 instead of 3 times the standard deviation. Also, you can avoid the for loop by using (e.g.) MATLAB's bsxfun, which applies an operation between a matrix and a vector:
t = bsxfun(@minus,r,mean(r,1)); % center the data
[v, d] = eig(sigma); % find the directions for projection
t = t * v; % the projected data is uncorrelated
sigma_new = sqrt(diag(d)); % that's the std in the new coordinates
% now transform each coordinate independently
tmax1 = 3*sigma_new(1);
tmin1 = -3*sigma_new(1);
tmax2 = 3*sigma_new(2);
tmin2 = -3*sigma_new(2);
tnew = bsxfun(@minus, t, [tmin1, tmin2]);
tnew = bsxfun(@rdivide, tnew, [tmax1-tmin1, tmax2-tmin2]);
You still need to discard the few samples which are out of [0,1], as you wrote.
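For that last discard step, a minimal sketch (assuming tnew from the code above) could be:
inside = all(tnew >= 0 & tnew <= 1, 2);  % rows with both coordinates inside [0,1]
tnew = tnew(inside, :);                  % drop the few samples outside the unit square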

How can I use the Second Derivative test to find Maxima and Minima in MATLAB

I'm looking to create a function that takes an equation and marks the maxima and/or minima, and asymptotes on a graph.
From Calc 1, I remember using the Second Derivative Test.
I started by solving for the roots of the first derivative, but I am not sure how I can plot where the points in this vector intersect my original equation.
syms x;        %// Symbolic variable
f = sin(x)     %// Define the function
df = diff(f)   %// First derivative
ddf = diff(df) %// Second derivative
I found the roots for the x-values of these points with dfRoots = solve(df)
and then created a vector of them called dfRoots_realDouble
dfRoots_double = double(dfRoots);
dfRoots_realDouble = real(dfRoots_double);
dfRoots_realDouble represents the x-values I need, but I don't know how to plot them as points and separate the minima from the maxima, etc.
I'd like to be able to put in any function, like sin(6*(x^5) + 7*(x^3)+8*(x^2)) on [-1,1], and have it highlight all the maxima with one marker and all the minima with another.
If you sort the roots, you will have an alternating vector of maxima and minima. Use the second derivative to determine whether your vector of roots starts with a maximum or a minimum, and then split the vector.
roots = sort([-3 4 2 -5]);   % example critical points (x-values where df = 0)
is_max = (subs(ddf, x, roots(1)) < 0)
if is_max
max_points = roots(1:2:end)
min_points = roots(2:2:end)
else
max_points = roots(2:2:end)
min_points = roots(1:2:end)
end
% plot with two different symbols
plot(max_points, subs(f, x, max_points), 'or',...
min_points, subs(f, x, min_points), '*k')
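To tie the pieces together, here is a self-contained, hedged sketch using a hypothetical example function f = x^3 - 3*x (chosen so that solve returns a finite set of critical points). It is not the exact code from the question, just one way to combine the steps above.
syms x
f   = x^3 - 3*x;                 % hypothetical example function
df  = diff(f);                   % first derivative
ddf = diff(df);                  % second derivative
crit = sort(real(double(solve(df == 0, x))));   % sorted real critical points
is_max = double(subs(ddf, x, crit(1))) < 0;     % classify the first critical point
if is_max
    max_points = crit(1:2:end);  min_points = crit(2:2:end);
else
    max_points = crit(2:2:end);  min_points = crit(1:2:end);
end
fplot(f, [-2, 2]); hold on;
plot(max_points, double(subs(f, x, max_points)), 'or');   % maxima
plot(min_points, double(subs(f, x, min_points)), '*k');   % minima
hold off;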

Square wave function for Matlab

I'm new to programming in Matlab. I'm trying to figure out how to calculate the following function:
I know my code is off; I just wanted to start with some form of the function. I have attempted to write out the sum in the program below.
function [g] = square_wave(n)
g = symsum(((sin((2k-1)*t))/(2k-1)), 1,n);
end
Any help would be much appreciated.
Update:
My code as of now:
function [yout] = square_wave(n)
syms n;
f = n^4;
df = diff(f);
syms t k;
f = 1; %//Define frequency here
funcSum = (sin(2*pi*(2*k - 1)*f*t) / (2*k - 1));
funcOut = symsum(func, v, start, finish);
xsquare = (4/pi) * symsum(funcSum, k, 1, Inf);
tVector = 0 : 0.01 : 4*pi; %// Choose a step size of 0.01
yout = subs(xsquare, t, tVector);
end
Note: This answer was partly inspired by a previous post I wrote here: How to have square wave in Matlab symbolic equation. However, it isn't quite the same, which is why I'm providing an answer here.
Alright, so it looks like you got the first bit of the question right. However, when you're multiplying things together you need to use the * operator, so 2k - 1 should be 2*k - 1. Ignoring this, you are using symsum correctly given that square wave equation. The input to this function is a single parameter, n. What you see in the equation is a Fourier series representation of a square wave. A bastardized version of the theory is that you can represent a periodic function as an infinite summation of sinusoidal functions, each weighted by a certain amount.
n controls the total number of sinusoids to add into the equation. The more sinusoids you have, the more the function is going to look like a square wave. In the question, they want you to play around with the value of n. If n becomes very large, the sum should start approaching what looks like a square wave.
The symsum will represent this Fourier series as a function of t. What you need to do now is substitute values of t into this expression to get the output amplitude at each value of t. They define that for you already: it's a vector from 0 to 4*pi with 1001 points in between.
Define this vector, then use subs to substitute the time values into the symsum expression and, when you're done, cast the result back to double so that you actually get a numeric vector.
As such, your function should simply be this:
function [g] = square_wave(n)
syms t k; %// Define t and k
f = sin((2*k-1)*t)/(2*k-1); %// Define function
F = symsum(f, k, 1, n); %// Define Fourier Series
tVector = linspace(0, 4*pi, 1001); %// Define time points
g = double(subs(F, t, tVector)); %// Get numeric output
end
The first line defines t and k to be symbolic because they are symbolic in the expression. Next, I define f to be the term inside the summation with respect to t and k. The line after that defines the actual sum itself: we sum f with respect to k, as the summation calls for, from 1 up to n. Last but not least, we define a time vector from 0 to 4*pi with 1001 points in between and use subs to substitute every value in this vector for t in the Fourier series. The result is a 1001-element symbolic vector, which I then cast to double to get a numerical result: your desired output.
To show you that this works, we can try this with n = 20. Do this in the command prompt now:
>> g = square_wave(20);
>> t = linspace(0, 4*pi, 1001);
>> plot(t, g);
We get:
Therefore, if you make n higher, say 200 as they suggest, you'll see that the wave eventually looks like what you expect from a square wave.
If you don't have the Symbolic Math Toolbox, which symsum, syms and subs rely on, we can do it completely numerically. What you'll have to do is define a meshgrid of points for pairs of t and k, substitute each pair into the sequence equation for the Fourier series, and sum up all of the results.
As such, you'd do something like this:
function [g] = square_wave(n)
tVector = linspace(0, 4*pi, 1001); %// Define time points
[t,k] = meshgrid(tVector, 1:n); %// Define meshgrid
f = sin((2*k-1).*t)./(2*k-1); %// Define Fourier Series
g = sum(f, 1); %// Sum up for each time point
end
The first line of code defines our time points from 0 to 4*pi. The next line of code defines a meshgrid of points. How this works is that for t, each column holds one time point repeated n times, so the first column is n copies of 0, up to the last column, which is n copies of 4*pi. Similarly for k, each row holds one value of k, so the first row is 1001 ones, followed by 1001 twos, up to 1001 copies of n. The implication is that each column of t and k now holds exactly the (t, k) pairs needed to compute the Fourier series terms for the time value that is unique to that column.
As such, you'd simply use the sequence equation and do element-wise multiplication and division, then sum along each individual column to finally get the square wave output. With the above code, you will get the same result as above, and it'll be much faster than symsum because we're doing it numerically now and not doing it symbolically which has a lot more computational overhead.
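A short usage sketch for the numeric version, mirroring how the symbolic one was plotted above:
g = square_wave(200);            % numeric version from above
t = linspace(0, 4*pi, 1001);     % same time points used inside the function
plot(t, g);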
Here's what we get when n = 200:
This code with n = 200 ran in milliseconds, whereas the symsum equivalent took almost 2 minutes on my machine (Mac OS X 10.10.3 Yosemite, 16 GB RAM, Intel Core i7 2.3 GHz).