Double integral of the equation in Matlab

How can I implement this equation in Matlab, where A and B are m-by-m matrices?
for example:
A = [3.45 32.54 78.2; 8.4 33.1 4.66; 68.2 9.336 33.87 ]
B = [6.45 36.54 28.24; 85.4 323.1 74.66; 98.2 93.336 38.55 ]
My code:
f1 = @(A) (abs(A)).^2;
f2 = @(B) (abs(B)).^2;
Q = integral2(f1, 0, 1, 0, 1) * integral2(f2, 0, 1, 0, 1);
But when I run the code I get the error "Too many input arguments."
What is the problem with the code?

After the clarification of your question, let me change my post.
What you are after is numerical integration of a function that has already been sampled on a fixed grid, with the function values stored in the two-dimensional M-by-M matrices A and B. I suppose that you also have the associated grid points; say they are denoted xc and yc. Then, if you have sufficiently fine sampling of a smooth function, the integral is approximated by:
xc = linspace(0,1,M);
yc = linspace(0,1,M);
Q = trapz(xc,trapz(yc, abs(A).^2)) * trapz(xc,trapz(yc, abs(B).^2 ));
To test that, I made a simple example that evaluates the area of a circle in polar coordinates, i.e. the integral of r' dr' dphi for phi from 0 to 2*pi and r' from 0 to r, which equals pi*r^2.
To do that with the trapezoidal method, using N samples for r and M samples for phi, we have:
r = 2; % Pick a value for r
M = 100; % Pick M samples for the angular coordinate from 0 to 2*pi
N = 101; % Pick N samples for the radial coordinate from 0 to r
phic = linspace(0,2*pi,M); % take M samples uniformly for example
rc = linspace(0,r,N); % take N samples uniformly for example
integrand = repmat(rc,M,1); % Make MxN matrix, phi along rows, r along columns
I = trapz(rc, trapz(phic, integrand));
So for the case r = 2, it indeed gives 4*pi.
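As for the error in your original code: integral2 expects an integrand that accepts two array arguments (x and y) and returns an array of the same size, while f1 = @(A) ... takes only one input, which is why MATLAB reports "Too many input arguments." Here is a minimal sketch of the expected signature, with a purely illustrative integrand (not your actual A and B):
% Hypothetical two-variable integrand, just to illustrate the signature integral2 expects
f = @(x,y) abs(x + 1i*y).^2;      % element-wise, so it works on the arrays integral2 passes in
Q = integral2(f, 0, 1, 0, 1);     % integrate over the unit square [0,1] x [0,1]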

Related

Curve fitting of a complex variable in Matlab

I want to solve the following system of equations, shown in the image below ("The matrix system"), where the components of the matrix A are complex numbers, the angle theta runs from 0 to 2*pi in m divisions, and n = 9. The known value is z = x + iy. Suppose the x and y parts of z are
z =
0 1.0148
0.1736 0.9848
0.3420 0.9397
0.5047 0.8742
0.6748 0.8042
0.8419 0.7065
0.9919 0.5727
1.1049 0.4022
1.1757 0.2073
1.1999 0
1.1757 -0.2073
1.1049 -0.4022
0.9919 -0.5727
0.8419 -0.7065
0.6748 -0.8042
0.5047 -0.8742
0.3420 -0.9397
0.1736 -0.9848
0 -1.0148
How do you solve this iteratively? Note that the first component of the desired constants must equal 1. I am working with Matlab.
You can apply simple multilinear regression for complex-valued data.
Step 1. Get the matrix ready for linear regression
Your linear system
A*alpha = z
written without matrices becomes
a_i1*alpha_1 + a_i2*alpha_2 + ... + a_in*alpha_n = z_i,   for i = 1, ..., m
which, since alpha_1 = 1, rearranged yields
a_i2*alpha_2 + ... + a_in*alpha_n = z_i - a_i1
If you rewrite it with matrices, you get
R*b = Y
Step 2. Apply multiple linear regression
Let the system above be
R*b = Y
where Y = z minus the first column of A, R is A without its first column, and b = [alpha_2; ...; alpha_n].
Now you can apply linear regression, which returns the best fit for b as
b = (R* R)^(-1) R* Y
where R* is the conjugate transpose of R.
In MATLAB
Y = Z - A(:,1); % Calculate Y subtracting the first col of A from Z
R = A(:,:); R(:,1) = []; % Calculate R as an exact copy of A, just without first column
Rs = ctranspose(R); % Calculate R-star (conjugate transpose of R)
alpha = (Rs*R)^(-1)*Rs*Y; % Finally apply multiple linear regression
alpha = cat(1, 1, alpha); % Add alpha1 back, whose value is 1
or, if you prefer built-ins, have a look at the regress function:
Y = Z - A(:,1); % Calculate Y subtracting the first col of A from Z
R = A(:,:); R(:,1) = []; % Calculate R as an exact copy of A, just without first column
alpha = regress(Y, R); % Finally apply multiple linear regression
alpha = cat(1, 1, alpha); % Add alpha1 back, whose value is 1
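As a quick sanity check (a toy sketch with a synthetic complex A, since the exact structure of your matrix is not shown here), the normal-equation step recovers a known coefficient vector whose first entry is 1:
% Toy verification of the complex least-squares step with made-up data
rng(0);
m = 19; n = 9;
A = randn(m, n) + 1i*randn(m, n);                 % stand-in for your complex matrix
alpha_true = [1; randn(n-1,1) + 1i*randn(n-1,1)]; % first coefficient fixed to 1
Z = A * alpha_true;                               % known right-hand side
Y = Z - A(:,1);                                   % move the alpha_1 = 1 term across
R = A(:, 2:end);                                  % drop the first column
alpha = [1; (R'*R) \ (R'*Y)];                     % ' is the conjugate transpose in MATLAB
disp(norm(alpha - alpha_true));                   % should be close to zero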

How to generate frequency response given b,a coefficients of the system?

I have the following system, specified by the set of coefficients:
b = [1 2 3];
a = [1 .5 .25];
In the Z-domain, this system has the transfer function
H(z) = Y(z)/X(z)
So the frequency response is just this transfer function evaluated on the unit circle:
H(e^jw) = Y(e^jw)/X(e^jw)
Do I just substitute e^jw for z in my transfer function to obtain the frequency response of the system mathematically, on paper? That seems a bit ridiculous from my (a student's) point of view.
Have you tried freqz()? It returns the frequency response vector, h, and the corresponding angular frequency vector, w, for the digital filter with numerator and denominator polynomial coefficients stored in b and a, respectively.
In your case, simply follow the help:
[h,w]=freqz(b,a);
You do substitute e^jw for z; this isn't ridiculous. Then you just sweep w from -pi to pi. Your magnitude response is the absolute value of the result.
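For instance, a minimal sketch of that sweep for the coefficients in the question (the variable names are only illustrative):
b = [1 2 3];
a = [1 .5 .25];
w = linspace(-pi, pi, 1024);                 % sweep the digital frequency
z = exp(1j*w);                               % points on the unit circle
H = (b(1) + b(2)./z + b(3)./z.^2) ./ ...     % B(z) evaluated at z = e^{jw}
    (a(1) + a(2)./z + a(3)./z.^2);           % A(z) evaluated at z = e^{jw}
plot(w, abs(H));                             % magnitude response
xlabel('\omega (rad/sample)'); ylabel('|H(e^{j\omega})|');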
As Alessiox mentioned, freqz is the command you want to use in MATLAB.
It would indeed be as simple as substituting exp(j*w) into your transfer function. There are of course different ways to implement this in Matlab. For the purpose of illustration, I will assume the b's are the coefficients of the x sequence and the a's are the coefficients of the y sequence, so that the b's end up in the numerator and the a's in the denominator:
A direct evaluation with Matlab could be done with:
b = [1 2 3];
a = [1 .5 .25];
N = 513; % number of points at which to evaluate the transfer function
w = linspace(0,2*pi,N);
num = 0;
for i = 1:length(b)
    num = num + b(i) * exp(-j*(i-1)*w);   % b(i) multiplies z^-(i-1)
end
den = 0;
for i = 1:length(a)
    den = den + a(i) * exp(-j*(i-1)*w);   % a(i) multiplies z^-(i-1)
end
H = num ./ den;
This would be equivalent to the following which makes use of the builtin polyval:
N = 513; % number of points at which to evaluate the transfer function
w = linspace(0,2*pi,N);
H = polyval(fliplr(b),exp(-j*w))./polyval(fliplr(a),exp(-j*w));
Also, this is really evaluating the transfer function at discrete equally spaced angular frequencies w = 2*pi*k/N which corresponds to the Discrete Fourier Transform (DFT). As such it could also be done with:
N = 512;
H = fft(b,N) ./ fft(a,N);
Incidentally this is what freqz does, so you could also get the same result with:
N = 512;
H = freqz(b,a,N,'whole');
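If you want to convince yourself that these formulations agree, a quick check (assuming the same b and a as above are still in the workspace):
% Quick consistency check between the DFT-based and freqz evaluations
N = 512;
H_fft   = fft(b, N) ./ fft(a, N);          % DFT-based evaluation
H_freqz = freqz(b, a, N, 'whole');         % freqz over the whole unit circle
disp(max(abs(H_fft(:) - H_freqz(:))));     % should be on the order of machine precision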

Gradient descent with multiple variables without matrices

I'm new to Matlab and machine learning, and I tried to write a gradient descent function without using matrix operations.
m is the number of examples in my training set
n is the number of features for each example
The function gradientDescentMulti takes 5 arguments:
X: an m-by-n matrix
y: an m-dimensional vector
theta: an n-dimensional vector
alpha: a real number (the learning rate)
num_iters: the number of iterations
I already have a solution using matrix multiplication
function theta = gradientDescentMulti(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
for iter = 1:num_iters
    gradJ = 1/m * (X'*X*theta - X'*y);
    theta = theta - alpha * gradJ;
end
end
The result after iterations:
theta =
1.0e+05 *
3.3430
1.0009
0.0367
But now I tried to do the same without matrix multiplication; this is the function:
function theta = gradientDescentMulti(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
n = size(X, 2); % number of features
for iter = 1:num_iters
    new_theta = zeros(1, n);
    %// for each feature, found the new theta
    for t = 1:n
        S = 0;
        for example = 1:m
            h = 0;
            for example_feature = 1:n
                h = h + (theta(example_feature) * X(example, example_feature));
            end
            S = S + ((h - y(example)) * X(example, n)); %// Sum each feature for this example
        end
        new_theta(t) = theta(t) - alpha * (1/m) * S; %// Calculate new theta for this example
    end
    %// only at the end of the function, update all theta simultaneously
    theta = new_theta'; %// Transpose new_theta (horizontal vector) to theta (vertical vector)
end
end
The result: all the thetas are the same :/
theta =
1.0e+04 *
3.5374
3.5374
3.5374
If you look at the gradient update rule, it may be more efficient to actually compute the hypothesis of all of your training examples first, then subtract the ground-truth value of each training example from it and store these differences in an array or vector. Once you do this, you can compute the update rule very easily. It doesn't appear that you're doing this in your code. (Incidentally, the immediate bug is that your inner sum uses X(example, n) instead of X(example, t), so every feature accumulates against the same last column, which is why all entries of theta come out equal.)
As such, I rewrote the code, but with a separate array that stores the differences between the hypothesis and the ground-truth value for each training example. Once I do this, I compute the update rule for each feature separately:
for iter = 1 : num_iters
    %// Compute hypothesis differences with ground truth first
    h = zeros(1, m);
    for t = 1 : m
        %// Compute hypothesis
        for tt = 1 : n
            h(t) = h(t) + theta(tt)*X(t,tt);
        end
        %// Compute difference between hypothesis and ground truth
        h(t) = h(t) - y(t);
    end
    %// Now update parameters
    new_theta = zeros(1, n);
    %// for each feature, find the new theta
    for tt = 1 : n
        S = 0;
        %// For each sample, compute products of hypothesis difference
        %// and the right feature of the sample and accumulate
        for t = 1 : m
            S = S + h(t)*X(t,tt);
        end
        %// Compute gradient descent step
        new_theta(tt) = theta(tt) - (alpha/m)*S;
    end
    theta = new_theta'; %// Transpose new_theta (horizontal vector) to theta (vertical vector)
end
When I do this, I get the same answers as using the matrix formulation.
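If you want to verify that equivalence yourself, here is a small toy sketch (synthetic data and illustrative names, not part of the original exercise) that runs the vectorized update and the per-feature loop update side by side:
% Compare the vectorized gradient step against the explicit per-feature loop
rng(0);
m = 50; n = 3;
X = [ones(m,1), randn(m, n-1)];             % design matrix with a bias column
y = X * [1; 2; 3] + 0.01*randn(m, 1);       % synthetic targets
alpha = 0.1;
theta_vec  = zeros(n, 1);
theta_loop = zeros(n, 1);
for iter = 1:100
    % Vectorized update
    theta_vec = theta_vec - (alpha/m) * (X'*X*theta_vec - X'*y);
    % Loop-based update: hypothesis differences first, then one theta per feature
    h = X * theta_loop - y;
    new_theta = zeros(n, 1);
    for t = 1:n
        new_theta(t) = theta_loop(t) - (alpha/m) * sum(h .* X(:, t));
    end
    theta_loop = new_theta;
end
disp(max(abs(theta_vec - theta_loop)));     % should be ~0 (up to rounding)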

Matlab SVM custom kernel function

In the Matlab SVM tutorial, it says
You can set your own kernel function, for example, kernel, by setting 'KernelFunction','kernel'. kernel must have the following form:
function G = kernel(U,V)
where:
U is an m-by-p matrix.
V is an n-by-p matrix.
G is an m-by-n Gram matrix of the rows of U and V.
When I followed the custom SVM kernel example, I set a break point in the mysigmoid.m function. However, I found that U and V were in fact 1-by-p vectors and G was a scalar.
Why doesn't MATLAB process the kernel with matrices?
My custom kernel function is
function G = mysigmoid(U,V)
% Sigmoid kernel function with slope gamma and intercept c
gamma = 0.5;
c = -1;
G = tanh(gamma*U*V' + c);
end
My Matlab script is
%% Train SVM Classifiers Using a Custom Kernel
rng(1); % For reproducibility
n = 100; % Number of points per quadrant
r1 = sqrt(rand(2*n,1)); % Random radius
t1 = [pi/2*rand(n,1); (pi/2*rand(n,1)+pi)]; % Random angles for Q1 and Q3
X1 = [r1.*cos(t1), r1.*sin(t1)]; % Polar-to-Cartesian conversion
r2 = sqrt(rand(2*n,1));
t2 = [pi/2*rand(n,1)+pi/2; (pi/2*rand(n,1)-pi/2)]; % Random angles for Q2 and Q4
X2 = [r2.*cos(t2), r2.*sin(t2)];
X = [X1; X2]; % Predictors
Y = ones(4*n,1);
Y(2*n + 1:end) = -1; % Labels
% Plot the data
figure(1);
gscatter(X(:,1),X(:,2),Y);
title('Scatter Diagram of Simulated Data');
SVMModel1 = fitcsvm(X,Y,'KernelFunction','mysigmoid','Standardize',true);
% Compute the scores over a grid
d = 0.02; % Step size of the grid
[x1Grid,x2Grid] = meshgrid(min(X(:,1)):d:max(X(:,1)),...
min(X(:,2)):d:max(X(:,2)));
xGrid = [x1Grid(:),x2Grid(:)]; % The grid
[~,scores1] = predict(SVMModel1,xGrid); % The scores
figure(2);
h(1:2) = gscatter(X(:,1),X(:,2),Y);
hold on;
h(3) = plot(X(SVMModel1.IsSupportVector,1),X(SVMModel1.IsSupportVector,2),...
'ko','MarkerSize',10);
% Support vectors
contour(x1Grid,x2Grid,reshape(scores1(:,2),size(x1Grid)),[0,0],'k');
% Decision boundary
title('Scatter Diagram with the Decision Boundary');
legend({'-1','1','Support Vectors'},'Location','Best');
hold off;
CVSVMModel1 = crossval(SVMModel1);
misclass1 = kfoldLoss(CVSVMModel1);
disp(misclass1);
Kernels add dimensions to a feature. If you have, for example, one feature per sample, x = {a}, the kernel expands it into something like x = {a_1, ..., a_q}. As you are doing this for all of your data at once, you end up with an M-by-P matrix (M is the number of examples in your training set and P is the number of features). The second matrix it asks for is N-by-P, where N is the number of examples in the training/test set.
That said, your output should be M-by-N. Since it is instead a scalar, it means that you have U = 1-by-M and V = N-by-1 with N = M. To get an output of M-by-N, logic follows that you should simply transpose your inputs.
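For reference, you can check the documented shapes by calling your mysigmoid directly with matrix arguments (a quick sketch, independent of how fitcsvm batches its internal calls):
% Check that mysigmoid returns the documented m-by-n Gram matrix
U = randn(5, 2);        % m-by-p (5 samples with 2 features each)
V = randn(7, 2);        % n-by-p (7 samples with 2 features each)
G = mysigmoid(U, V);    % tanh(gamma*U*V' + c), so G is 5-by-7
disp(size(G));          % prints [5 7]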

Correlation between two signals with two different sizes

I am trying to calculate the correlation between two signals, where it returns 1 if both are the same and a value between 0 and 1 otherwise. The problem is that the two signals have different sizes, so resampling is needed. I already did it, but the output is not correct. Can anyone help me implement it in an efficient way?
My code:
MaxRow = max(size(A,1),size(B,1));
MaxCol = max(size(A,2),size(B,2));
NewA = resample(A,MaxRow,size(A,1));
NewB = resample(B,MaxRow,size(B,1));
NewA = resample(NewA',MaxCol,size(A,2))';
NewB = resample(NewB',MaxCol,size(B,2))';
for s = 1:MaxRow
    a = NewA(s,:);
    b = NewB(s,:);
    c(s) = real(corr(a', b'));
end
c(isnan(c)) = 0;
score = mean(c);
Here is a toy example.
% Example Data
x = 0:9;
y = 1:0.1:10;
% Check if y is longer
if length(x) < length(y)
    x = interp1( x, linspace( 1, length(x), length(y) ) ); % Resample x
else
    y = interp1( y, linspace( 1, length(y), length(x) ) ); % Resample y
end
% Get corrcoeff
c = abs( corrcoef( x, y ) ); % Corrcoeff solution here
c = c(2,1);
% Get MSE
m = mse( x - y ); % MSE solution here
linspace generates length(y) equally spaced query points between 1 and length(x).
Essentially, interp1 resamples one variable to the length of the other; the if statement checks which one needs resampling. The function corrcoef computes the correlation coefficient of the two signals. Since corrcoef lies between -1 and 1, we take the absolute value to map it into [0, 1]. If you don't care about scaling and bias, corrcoef will work for you.
If you plan to use MSE instead, you can apply the mse function to the error (x - y), but it will not be between 0 and 1. Any comparison method will work with this resampling code.
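If your A and B really are 2-D matrices as in your snippet, one possible way to reuse the same idea (a sketch only; the use of interp2 with linear interpolation is my assumption, not part of the original answer) is to resample both onto a common grid and correlate the flattened matrices:
% Resample both matrices onto a common grid, then take one overall correlation
[rA, cA] = size(A);  [rB, cB] = size(B);
r = max(rA, rB);  c = max(cA, cB);
[XqA, YqA] = meshgrid(linspace(1, cA, c), linspace(1, rA, r));
[XqB, YqB] = meshgrid(linspace(1, cB, c), linspace(1, rB, r));
NewA = interp2(A, XqA, YqA, 'linear');   % assumes A is real-valued
NewB = interp2(B, XqB, YqB, 'linear');
cc = corrcoef(NewA(:), NewB(:));         % 2-by-2 correlation matrix
score = abs(cc(2, 1));                   % single similarity score in [0, 1]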