Monte Carlo integration with a for loop in MATLAB

I'm trying to write a function which calculates an integral using the Monte Carlo method in MATLAB. I'm not familiar enough with MATLAB to understand why the result of the integration is different each time I run it. This is my code:
f=@(x)exp(-(x-3).^2);
N = 1000; %random samples
a = 0; % lower bound
b = 3; %upper bound
x2=linspace(0,3,1000);
syms z % zero vector holder to find max y value
z = zeros(size(x2));
z = f(x2);
y = f(b).*rand(1,1000);
x = a +(b-a)*rand(1,N);
count = 0;
for k=1:numel(x);
%produce random x coordinate
if y(k) <= f(x);
count= count +1;
end
end
count;
i = (b-a)/N*sum(f(x));
When I run this, the value of i changes each time, but I want the integral to be calculated using the count from the for loop. Thanks

Your i calculation at the end is wrong; it should be along the lines of
count/numel(x) * max(z) * (b-a)
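For reference, here is a minimal self-contained sketch of that hit-or-miss estimate (note that inside the loop the comparison should use f(x(k)) rather than f(x), so that each sample point is tested against the curve at its own x coordinate):
f = @(x) exp(-(x-3).^2);
a = 0; b = 3; % integration bounds
N = 1000; % number of random samples
ymax = max(f(linspace(a,b,1000))); % height of the bounding box
x = a + (b-a)*rand(1,N); % random x coordinates in [a,b]
y = ymax*rand(1,N); % random y coordinates in [0,ymax]
count = 0;
for k = 1:N
if y(k) <= f(x(k)) % point falls under the curve
count = count + 1;
end
end
I_hit = count/N * ymax * (b-a) % hit-or-miss estimate
I_mean = (b-a)/N * sum(f(x)) % sample-mean estimate (the i in your code)
Both estimates will still vary slightly from run to run because the samples are random; that spread is inherent to Monte Carlo integration and shrinks as N grows.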

Related

Problem with integral calculation via the trapz function in MATLAB

Can someone help me with this problem? I have to calculate an integral with MATLAB's trapz function, increasing the number of mesh intervals N until the result reaches a given tolerance.
N = 1; %Initial number of mesh intervals
t = [t0,tf]; %Create an initial mesh
y = L(t); %Evaluate the function at mesh points
I = trapz(t,y); %Compute the integral numerically using trapz
epsilon = I;
while epsilon >= tol % Until I reach tolerance
N = N+1;
tstep = (tf-t0)/N;
t1 = t0:tstep:tf;
y1 = L(t1);
I_new = trapz(t1,y1);
epsilon = I_new - I;
I = I_new;
end
The problem is that I always get the same value of I_new; it never changes.
Take the absolute value in your epsilon calculation to avoid negative values in the while loop condition, i.e.
epsilon = abs(I_new - I);
or
abs(epsilon) >= tol
I would also recommend setting your initial epsilon value to inf: epsilon = inf;
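Putting those two changes together, a minimal sketch of the corrected refinement loop might look like the following (L, t0, tf and tol are placeholders here, since they are defined elsewhere in your script):
L = @(t) exp(-t.^2); % placeholder integrand; use your own L
t0 = 0; tf = 2; % placeholder integration limits
tol = 1e-6; % placeholder tolerance
N = 1; % initial number of mesh intervals
t = [t0,tf]; % initial mesh
y = L(t); % evaluate the function at mesh points
I = trapz(t,y); % first estimate
epsilon = inf; % force at least one refinement
while epsilon >= tol
N = N+1;
tstep = (tf-t0)/N;
t1 = t0:tstep:tf; % refined mesh with N intervals
y1 = L(t1);
I_new = trapz(t1,y1);
epsilon = abs(I_new - I); % absolute change between refinements
I = I_new;
end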

How to evaluate function of two variables with different x and y vectors

I've been trying to evaluate a function in matlab. I want my x vector to go from 0 to 1000 and my y vector to go from 0 to 125. They should both have a length of 101.
The equation to be evaluated is z(x,y) = ay + bx, with a=10 and b=20.
a = 10;
b = 20;
n = 101;
dx = 10; % Interval length
dy = 1.25;
x = zeros(1,n);
y = zeros(1,n);
z = zeros(n,n);
for i = 1:n;
x(i) = dx*(i-1);
y(i) = dy*(i-1);
for j = 1:n;
z(i,j) = a*dy*(j-1) + b*dx*(j-1);
end
end
I get an answer, but I'm not sure whether I handled the indices in the nested for loop correctly.
See MATLAB's linspace function.
a=10;
b=20;
n=101;
x=linspace(0,1000,n);
y=linspace(0,125,n);
z=a*y+b*x;
This is easier and takes care of the interval spacing for you. From the linspace documentation,
y = linspace(x1,x2,n) generates n points. The spacing between the points is (x2-x1)/(n-1).
Edit:
As others have pointed out, my solution above makes a vector, not the matrix which the OP seems to want. As @obchardon pointed out, you can use meshgrid to make a 2D grid of x and y points and generate a matrix of z. The updated approach would be:
a=10;
b=20;
n=101;
x=linspace(0,1000,n);
y=linspace(0,125,n);
[X,Y] = meshgrid(x,y);
z=a*Y+b*X;
(You may swap the order of x and y depending on whether you want each variable to vary along the rows or the columns of z.)
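As a quick sanity check (a minimal sketch; surf is just one way to look at the result), you can confirm that the matrix has the expected size and values:
a = 10; b = 20; n = 101;
x = linspace(0,1000,n);
y = linspace(0,125,n);
[X,Y] = meshgrid(x,y);
z = a*Y + b*X;
size(z) % should be 101-by-101
z(1,1) % z at x=0, y=0: expected 0
z(end,end) % z at x=1000, y=125: expected 10*125 + 20*1000 = 21250
surf(X,Y,z) % visualize the surface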

Issue with Discrete Double Fourier Series in MATLAB

The formula for the discrete double Fourier series that I'm attempting to code in MATLAB is:
The coefficients in front of the trigonometric sum (the Fourier amplitudes) are what I'm trying to extract by fitting the data with the double Fourier series above. With my current code the original function is not reconstructed, so my coefficients cannot be correct. I'm not certain if this is of any significance or insight, but the second term of the A coefficients (Akn(1)) is 13 orders of magnitude larger than any other coefficient.
Any suggestions, modifications, or comments about my program would be greatly appreciated.
%data = csvread('digitized_plot_data.csv',1);
%xdata = data(:,1);
%ydata = data(:,2);
%x0 = xdata(1);
lambda = 20; %km
tau = 20; %s
vs = 7.6; %km/s (velocity of the CHAMP satellite)
L = 4; %S
% Number of terms to use:
N = 100;
% set up matrices:
M = zeros(length(xdata),1+2*N);
M(:,1) = 1;
for k=1:N
for n=1:N %error using *, inner matrix dimensions must agree...
M(:,2*n) = cos(2*pi/lambda*k*vs*xdata).*cos(2*pi/tau*n*xdata);
M(:,2*n+1) = sin(2*pi/lambda*k*vs*xdata).*sin(2*pi/tau*n*xdata);
end
end
C = M\ydata;
%least squares coefficients:
A0 = C(1);
Akn = C(2:2:end);
Bkn = C(3:2:end);
% reconstruct original function values (verification check):
y = A0;
for k=1:length(Akn)
y = y + Akn(k)*cos(2*pi/lambda*k*vs*xdata).*cos(2*pi/tau*n*xdata) + Bkn(k)*sin(2*pi/lambda*k*vs*xdata).*sin(2*pi/tau*n*xdata);
end
% plotting
hold on
plot(xdata,ydata,'ko')
plot(xdata,yk,'b--')
legend('Data','Least Squares','location','northeast')
xlabel('Centered Time Event [s]'); ylabel('J[\muA/m^2]'); title('Single FAC Event (50 Hz)')

Logistic regression - Calculating cost function returns wrong results

I just started taking Andrew Ng's course on Machine Learning on Coursera.
The topic of the third week is logistic regression, so I am trying to implement the cost function
J(theta) = -(1/m) * sum over i of [ y_i*log(h_theta(x_i)) + (1 - y_i)*log(1 - h_theta(x_i)) ].
The hypothesis is defined as h_theta(x) = g(theta' * x), where g is the sigmoid function g(z) = 1/(1 + e^(-z)).
This is how my function looks at the moment:
function [J, grad] = costFunction(theta, X, y)
m = length(y); % number of training examples
S = 0;
J = 0;
for i=1:m
Yi = y(i);
Xi = X(i,:);
H = sigmoid(transpose(theta).*Xi);
S = S + ((-Yi)*log(H)-((1-Yi)*log(1-H)));
end
J = S/m;
end
Given the following values
X = [magic(3) ; magic(3)];
y = [1 0 1 0 1 0]';
[j g] = costFunction([0 1 0]', X, y)
j returns 0.6931 2.6067 0.6931 even though the result should be j = 2.6067. I am assuming that there is a problem with Xi, but I just can't see the error.
I would be very thankful if someone could point me in the right direction.
You are supposed to apply the sigmoid function to the dot product of your parameter vector (theta) and input vector (Xi, which in this case is a row vector). So, you should change
H = sigmoid(transpose(theta).*Xi);
to
H = sigmoid(theta' * Xi'); % or sigmoid(Xi * theta)
Of course, you need to make sure that the bias input 1 is added to your inputs (a column of 1s prepended to X, since each row of X is one training example).
Next, think about how you can vectorize this entire operation so that it can be written without any loops. That way it would be considerably faster.
function [J, grad] = costFunction(theta, X, y)
m = length(y);
J = 0;
grad = zeros(size(theta));
J=(1/m)*((-y'*(log(sigmoid(X*theta))))-((1-y)'*(log(1-(sigmoid(X*theta))))));
grad=(1/m)*(X'*((sigmoid(X*theta))-y));
end
The above code snippet works fine for the logistic regression cost and gradient functions, provided the sigmoid function itself is working correctly.
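As a quick check, you can call the corrected function with the test values from the question (the expected value j = 2.6067 is the one quoted above):
X = [magic(3) ; magic(3)];
y = [1 0 1 0 1 0]';
theta = [0 1 0]';
[j, g] = costFunction(theta, X, y);
% j should now be the scalar 2.6067 rather than a 1-by-3 vector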

Finding optimal weight factor for SOR

I am using the SOR method and need to find the optimal weight factor. I think a good way to go about this is to run my SOR code with a number of omega values between 0 and 2, store the number of iterations for each of them, and then see which count is lowest and which omega it corresponds to. Being a novice programmer, however, I am unsure how to go about this.
Here is my SOR code:
function [x, l] = SORtest(A, b, x0, TOL,w)
[m n] = size(A); % assigning m and n to number of rows and columns of A
l = 0; % counter variable
x = [0;0;0]; % introducing solution matrix
max_iter = 200;
while (l < max_iter) % loop until max # of iters.
l = l + 1; % increasing counter variable
for i=1:m % looping through rows of A
sum1 = 0; sum2 = 0; % introducing sum1 and sum2
for j=1:i-1 % looping through columns
sum1 = sum1 + A(i,j)*x(j); % computing sum using x
end
for j=i+1:n
sum2 = sum2 + A(i,j)*x0(j); % computing sum using more recent values in x0
end
x(i) =(1-w)*x0(i) + w*(-sum1-sum2+b(i))/A(i,i); % assigning elements to the solution matrix.
end
if abs(norm(x) - norm(x0)) < TOL % checking tolerance
break
end
x0 = x; % assigning x to x0 before relooping
end
end
That's pretty easy to do. Simply loop through values of w and determine the total number of iterations at each w. Each time the function finishes, check whether this is the current minimum number of iterations required to get a solution; if it is, update what the final solution would be. Once we have iterated over all w, the result is the solution vector that produced the smallest number of iterations to converge. Bear in mind that for SOR the relaxation factor must satisfy 0 < w < 2, so we can't include 0 or 2 in the range. As such, do something like this:
omega_vec = 0.01:0.01:1.99;
final_x = x0;
min_iter = intmax;
for w = omega_vec
[x, iter] = SORtest(A, b, x0, TOL, w);
if iter < min_iter
min_iter = iter;
final_x = x;
end
end
The loop checks whether the total number of iterations at each w is less than the current minimum. If it is, it records this count and the corresponding solution vector. The solution vector that required the fewest iterations over all w will be stored in final_x.
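If you also want to see how the iteration count varies with w, as described in the question, a minimal sketch would store the count for every w and pick the minimum afterwards (A, b, x0 and TOL are assumed to be defined as in your existing script):
omega_vec = 0.01:0.01:1.99;
iters = zeros(size(omega_vec)); % iteration count for each omega
for idx = 1:numel(omega_vec)
[~, iters(idx)] = SORtest(A, b, x0, TOL, omega_vec(idx));
end
[min_iter, best_idx] = min(iters); % fewest iterations and its position
w_opt = omega_vec(best_idx); % corresponding optimal relaxation factor
plot(omega_vec, iters) % visualize iterations vs omega
xlabel('\omega'); ylabel('iterations to converge')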