Manual prediction of Gaussian Regression SVM in MATLAB

I trained an SVM with a Gaussian kernel using the Regression Learner app in MATLAB. The training worked really well and the RMSE is small.
Now, I exported the model back to the MATLAB workspace (trainedModel), and I can use the predict function to get estimates for new values. However, I would like to implement the prediction function manually, because I need to port it to a different programming language and therefore cannot rely on MATLAB's predict function. Following the MATLAB explanation, I implemented the following equation:
f(x) = sum_n alpha_n * G(x_n, x) + bias
with the Gaussian kernel
G(x_j, x) = exp(-||x_j - x||^2)
This is my code for a [0.5 1 50] input:
bias  = trainedModel.RegressionSVM.Bias;
alpha = trainedModel.RegressionSVM.Alpha;          % already the difference of Lagrange multipliers
SV    = trainedModel.RegressionSVM.SupportVectors;
Mu    = trainedModel.RegressionSVM.Mu;             % standardization means
Sg    = trainedModel.RegressionSVM.Sigma;          % standardization scales
input = ([0.5 1 50] - Mu) ./ Sg;                   % standardize the query point
f = bias;                                          % renamed from sum, which shadows the builtin
for n = 1:length(alpha)
    G = exp(-norm((SV(n,:)'-input))^2);            % Gaussian kernel value
    f = f + alpha(n) .* G;
end
disp(f)
(Note that alpha is already the difference of the Lagrange multipliers, according to the documentation.)
However, the predicted results are completely wrong. I think something is wrong with G, because its values are very small (on the order of 10^-25), but I cannot figure out the error.

The mistake was very small... The culprit is the transpose of the support-vector row: SV(n,:)' is a column vector, so SV(n,:)' - input expands into a matrix (implicit broadcasting of the - operator), and norm then silently computes a matrix norm instead of a vector norm. Therefore, changing the following line:
G = exp(-norm((SV(n,:)'-input))^2);
to
G = exp(-norm((SV(n,:)-input))^2);
solved the problem.
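For the record, the loop can also be replaced by a vectorized expression. A minimal sketch, assuming the variables from the snippet above are in the workspace (and R2016b+ for implicit expansion):
sqDist = sum((SV - input).^2, 2);   % squared distance from the input to every support vector
f = alpha' * exp(-sqDist) + bias;   % kernel expansion in one line
disp(f)
Note also that if the model was trained with a kernel scale other than 1 (see trainedModel.RegressionSVM.KernelParameters.Scale), the standardized predictors must additionally be divided by that scale before taking the norm.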

Related

Linear Regression in MATLAB without fitlm

I am tasked to perform a prediction analysis. This requires performing a linear regression on several (~10) predictor variables and coming up with a coefficient for each of them plus a constant.
So the final equation will be of this form: y = c + c1*x1 + c2*x2 + c3*x3 + ...
Now, I know you can use the fitlm function in MATLAB, which is available with the Statistics and Machine Learning Toolbox; however, at this point I don't know if we will be purchasing it. How do I perform linear regression without it?
You can use the closed-form solution of linear least squares:
C = inv(X'*X)*X'*y
In the above, make the first column of X all ones, and the following columns x1, x2, ...
C will contain the corresponding constants; the first entry in C is c.
From: https://www.mathworks.com/help/matlab/data_analysis/linear-regression.html
You can write your predictor variables as a design matrix X using X = [ones(length(x1),1), x1, x2, x3, ..., xn], formulate the response variables Y as the equation Y = X*B, and do a matrix left-division using mldivide, B = X\Y, to find your regression coefficients.
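Putting the two answers together, a minimal runnable sketch (assuming column vectors x1, x2, x3 and a response y of equal length are already in the workspace):
X = [ones(length(x1),1), x1, x2, x3];  % design matrix with an intercept column
B = X \ y;                             % coefficients [c; c1; c2; c3]; backslash is numerically stabler than inv()
yhat = X * B;                          % fitted values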

How to set up solving of multiple ODEs in MATLAB properly?

I have the task of writing an M-script that sets up (and finally solves) a set of 19 differential equations and all of the equation coefficients. I am not sure of the best way to input those equations.
I don't think this would be suited for Simulink. Other examples use a function of the form @(t,x), where t is time and x is the vector of all variables.
There are loads of "examples" online, but they don't seem appropriate for such a large set of long equations.
Take, for example, this example of solving 3 equations. Even for the 3 simple equations they solve, the functions get messy:
f = @(t,x) [-x(1)+3*x(3); -x(2)+2*x(3); x(1)^2-2*x(3)];
Using this notation, getting up to x(19) and cross-referencing all instances of x would be a mess.
I would like your help, and a simple example of how I could write these equations line by line, maybe using the Symbolic Math Toolbox, and then put them into an array that I can forward to the solver.
As I said, I know there are examples online, but they tackle only the most basic systems and really don't scale well if you want clean and easily readable code.
I would like something similar to Wolfram Alpha, where you type variable names as they are (instead of x(1), x(2), ..., x(19)) and, if possible, get all solution vectors with their variable names.
You don't have to use an anonymous function handle as an ODE function; you can create a separate function file (as shown in the odefun section of the ode45 documentation).
For example, your odefun can look like:
function dy = myode(t,y)
    % first unpack state variables
    i_d2 = y(1);
    i_q2 = y(2);
    ...
    gamma2 = y(end-1);
    omega2 = y(end);
    % determine all coefficients
    c34 = expression_for_c34;
    ...
    c61 = expression_for_c61;
    % determine state derivatives
    i_d2_dot = expression;
    ...
    omega2_dot = expression;
    % pack state derivatives in the same order as the state vector
    % (preallocate so the solver receives a column vector)
    dy = zeros(size(y));
    dy(1) = i_d2_dot;
    ...
    dy(end) = omega2_dot;
end
From this myode function you can also call other functions, e.g. to determine the value of some coefficients based on the current state. Next, integrate the system using a suitable ODE solver:
[t,y] = ode45(@myode, tspan, y0);
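Regarding the wish for named variables: with the Symbolic Math Toolbox you can write the equations with real names and let MATLAB convert them into solver form via odeToVectorField and matlabFunction. A hedged sketch using the 3-equation example from above (the equations are just placeholders for your 19):
syms x1(t) x2(t) x3(t)
eqns = [diff(x1) == -x1 + 3*x3;
        diff(x2) == -x2 + 2*x3;
        diff(x3) == x1^2 - 2*x3];
[V, S] = odeToVectorField(eqns);            % S lists which named variable maps to Y(i)
f = matlabFunction(V, 'vars', {'t','Y'});   % turn the symbolic right-hand side into @(t,Y)
[t, Y] = ode45(f, [0 5], [1 1 1]);          % columns of Y follow the order given in S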

MATLAB - Meaning of Gaussian distribution data (in Neural Network)

I'm a newbie to MATLAB and now I'm trying to create 2-D Gaussian-distributed data to train my neural network. I just found this code in the official documentation.
mu = [0 0];
Sigma = [.25 .3; .3 1];
x1 = -3:.2:3; x2 = -3:.2:3;
[X1,X2] = meshgrid(x1,x2);
F = mvnpdf([X1(:) X2(:)],mu,Sigma);
I know "mu" is average of the data. Sigma is something related to
Standard deviation. But I just don't get what is the idea of mesgrid and the interval(x1,x2). And the Geometric meaning of these code.
Also, can someone explain me why is guassian distribution so important in machine learning and data science? Cause all the course keep saying and saying this term.
Meshgrid is a basic MATLAB function that is in no way specifically related to neural networks or a Gaussian distribution. Check the MATLAB documentation to find out more about it.
The Gaussian distribution (also known as the normal distribution) is important for data science because it comes with several nice statistical properties. Unfortunately, it is hard to describe them all in a compact way, and this would also not be a question about programming, but rather about statistics.
I think the code you provide seems confusing to you because you expect it to generate samples whereas it merely returns values of the Gaussian PDF (probability density function) for some given pairs of (x1,x2).
For example, F = mvnpdf([a b], mu, Sigma) returns the probability density at x1=a and x2=b, given that they follow a multivariate Gaussian distribution with mean mu and covariance matrix Sigma.
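To see the geometric picture the question asks about, you can reshape F back onto the grid and plot it as a surface; a small sketch continuing the code from the question:
F = reshape(F, length(x2), length(x1));  % one density value per grid point
surf(x1, x2, F)                          % bell-shaped surface over the (x1,x2) plane
xlabel('x1'); ylabel('x2'); zlabel('pdf')
Each row of [X1(:) X2(:)] is one (x1,x2) grid point, so meshgrid simply enumerates all points of the rectangle on which the density is evaluated.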
Since this is Stack Overflow, I am focusing on the MATLAB aspect of your question: for generating 100 samples of a 2-D Gaussian you can use something like the following (taken from the MATLAB help for the randn function):
mu = [1 2];
Sigma = [1 .5; .5 2];
R = chol(Sigma);                        % upper-triangular factor with R'*R == Sigma
z = repmat(mu,100,1) + randn(100,2)*R;  % shift and correlate standard-normal draws
The array z = [x1, x2] contains the x1 and x2 vectors you are looking for.
A statistics textbook or Wikipedia can convince you that the above code indeed generates such samples. The last line of code relies on one of the nice properties of the Gaussian distribution (or of any other elliptical distribution): an affine transformation of a Gaussian random vector is again Gaussian.
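As a quick sanity check (a sketch), the sample statistics should approximate the chosen parameters:
mean(z)   % should be close to [1 2]
cov(z)    % should be close to [1 .5; .5 2]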

Fitting a 2D Gaussian to 2D Data Matlab

I have a vector of x and y coordinates drawn from two separate unknown Gaussian distributions. I would like to fit these points with a two-dimensional Gaussian function (a bell-shaped surface in three dimensions) and evaluate it at any x and y.
So far the only manner I've found of doing this is using a Gaussian Mixture model with a maximum of 1 component (see code below) and going into the handle of ezcontour to take the X, Y, and Z data out.
The problems with this method are, firstly, that it's a very ugly, roundabout way of getting this done, and secondly, that the ezcontour command only gives me a 60x60 grid, whereas I need a much higher resolution.
Does anyone know a more elegant and useful method that will allow me to find the underlying Gauss function and extract its value at any x and y?
Code:
GaussDistribution = fitgmdist([varX varY],1); % not exactly the intention of fitgmdist, but it gets the job done
h = ezcontour(@(x,y)pdf(GaussDistribution,[x y]),[-500 -400],[-40 40]);
The multivariate Gaussian distribution in general form (I am not allowed to upload a picture, so here is the formula in MATLAB notation):
1/((2*pi)^(D/2)*sqrt(det(Sigma)))*exp(-1/2*(x-Mu)*Sigma^-1*(x-Mu)');
where D is the data dimension (2 in your case), Sigma is the covariance matrix, and Mu is the mean of the data.
Here is an example in which a Gaussian is fitted to two vectors of randomly generated samples from normal distributions with parameters N1(4,7) and N2(-2,4):
Data = [random('norm',4,7,30,1), random('norm',-2,4,30,1)]; % 30 samples per dimension
X = -25:.2:25;
Y = -25:.2:25;
D = length(Data(1,:));   % data dimension (2)
Mu = mean(Data);         % sample mean
Sigma = cov(Data);       % sample covariance
P_Gaussian = zeros(length(X),length(Y));
for i = 1:length(X)
    for j = 1:length(Y)
        x = [X(i),Y(j)];
        P_Gaussian(i,j) = 1/((2*pi)^(D/2)*sqrt(det(Sigma)))...
            *exp(-1/2*(x-Mu)*Sigma^-1*(x-Mu)');
    end
end
mesh(X,Y,P_Gaussian')    % transpose so rows correspond to Y, as mesh expects
Run the code in MATLAB. For the sake of clarity I wrote the code like this; it can be written more efficiently from a programming point of view.
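For instance, since the question already uses fitgmdist, the Statistics and Machine Learning Toolbox is available, and the same surface can be computed without loops; a vectorized sketch:
[Xg, Yg] = meshgrid(X, Y);              % grid over the evaluation range
P = mvnpdf([Xg(:) Yg(:)], Mu, Sigma);   % density at every grid point at once
P = reshape(P, size(Xg));
mesh(X, Y, P)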

Implementing logistic regression with L2 regularization in Matlab

MATLAB has built-in logistic regression via mnrfit; however, I need to implement logistic regression with L2 regularization. I'm completely at a loss as to how to proceed. I've found some good papers and website references with a bunch of equations, but I am not sure how to implement the gradient descent algorithm needed for the optimization.
Is there easily available sample code in MATLAB for this? I've found some libraries and packages, but they are all part of larger packages and call so many convoluted functions that one can get lost just going through the trace.
Here is an annotated piece of code for plain gradient descent for logistic regression. To introduce regularisation, you will want to update the cost and gradient equations as shown in the sketch after the code. In this code, theta are the parameters, X are the class predictors, y are the class labels, and alpha is the learning rate.
I hope this helps :)
function [theta,J_store] = logistic_gradientDescent(theta,X,y,alpha,numIterations)
    % Initialize some useful values
    m = length(y);                  % number of training examples
    n = size(X,2);                  % number of features
    J_store = zeros(numIterations,1);
    for iter = 1:numIterations
        % predict the class probabilities using the current weights (theta)
        Z = X*theta;
        h = sigmoid(Z);
        % the usual (unregularised) cost function
        J = (1/m).*sum(-y.*log(h) - (1-y).*log(1-h));
        J_store(iter) = J;
        % gradient of the cost given the current weights, without regularisation
        grad = [(1/m) .* sum(repmat((h - y),1,n).*X)]';
        theta = theta - alpha.*grad;
    end
end

function g = sigmoid(z)
    % logistic function; base MATLAB has no built-in sigmoid for doubles
    g = 1./(1 + exp(-z));
end
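To add the L2 penalty the question asks about, update the cost and gradient inside the loop. A minimal sketch, where lambda is an assumed regularisation-strength parameter (it would be passed as an extra function argument) and theta(1), the bias term, is conventionally left unpenalised:
J    = J + (lambda/(2*m)) * sum(theta(2:end).^2);   % L2-penalised cost
grad = grad + (lambda/m) .* [0; theta(2:end)];      % L2-penalised gradient (bias unpenalised)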
end