How can I code a contour integral representation of cosine? - matlab

I am pretty new to MATLAB computation and would appreciate any help on this. Firstly, I am trying to evaluate a cosine function using the Maclaurin expansion [cos(z) = 1 − (z^2)/2! + (z^4)/4! − ...], let's say plotted over one cycle from 0 to 2π. This first plot (as seen in Figure 1) would serve as a "reference" or "contour" for what I want to do next.
Figure 1 - "The first integral representation"
Now, my next problem comes from writing cos(z) in MATLAB in terms of an integral in the complex plane.
Figure 2 - "showing cos(z) as an integral in the complex plane"
Here I would choose an equally spaced set of points 'sn' along the contour, from 'sL' to 'sU', separated by '∆s'. This is Euler's method.
I am trying to write a code to numerically approximate the integral
along a suitable contour and plot the approximation against the
exact value of cos(z). I would like to do this twice - once for
z ∈ [0, 6π] and once for complex-valued z in the range z ∈ [0 + i, 6π + i]. Then I want to plot both the real and imaginary parts of the computed cos(z).
I am aware of the steps I am looking to implement which I'll bullet-point here.
Choose γ, SL, SU , N.
Step through z from z lower to z upper (use a different
number of steps (other than N) for this).
For each value of z compute cos z by stepping along the contour in
N discrete steps from SL to SU .
For each value of sn along the contour, compute the integrand e^(sn − z^2/(4*sn))/sqrt(sn) and add it to the rolling sum [I have attached Figure 3 showing the formula for the integrand if it's not clear!]
Figure 3 - "The exponential integrand I am looking to compute"
Now I will show what I have attempted in MATLAB!
% "Contour Integral Representation of Cosine"
% Numerically approximate cos(z) by stepping along a truncated contour
N       = 1000;            % number of steps along the contour
M       = 500;             % number of z samples (deliberately different from N)
z_lower = 0;
z_upper = 6*pi;
%==========================%
z1    = linspace(z_lower, z_upper, M);
gamma = 1;                 % real offset of the contour
Sl    = gamma - 10*1i;     % lower end of the contour
Su    = gamma + 10*1i;     % upper end of the contour (assumed symmetric to Sl)
ds    = (Su - Sl)/(N - 1); % contour step (assumed uniform)
cosz  = zeros(size(z1));   % results; avoids shadowing the built-in 'sum'
%==========================%
for k = 1:M
    z = z1(k);
    total = 0;
    for Sn = linspace(Sl, Su, N)
        total = total + (exp(Sn - z^2/(4*Sn))/sqrt(Sn))*ds;  % integrand of Figure 3
    end
    cosz(k) = total*sqrt(pi)/(2*pi*1i);                      % prefactor sqrt(pi)/(2*pi*i)
end
plot(z1, real(cosz), z1, cos(z1))  % approximation vs exact cos(z)
Edit 1: This figure shows what I am expecting - the numerical method will not be exactly the same as the "symbolic" integration, let's say. In Figure 4, the black cosine wave is the same as in Figure 1 and the blue is the numerical integration method.
Figure 4 - "End Result of what I expect to plot"
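For reference, the inner loop over the contour samples depends on z only through the exp(−z^2/(4*s)) factor, so it can be vectorized. This is only a sketch, reusing the same (assumed) contour endpoints Sl, Su, step count N and z samples z1 as above:
s    = linspace(Sl, Su, N);        % fixed contour samples
ds   = s(2) - s(1);                % uniform contour step
w    = exp(s)./sqrt(s)*ds;         % z-independent part of each term
cosz = zeros(size(z1));
for k = 1:numel(z1)
    cosz(k) = sqrt(pi)/(2*pi*1i)*sum(w.*exp(-z1(k)^2./(4*s)));
end
plot(z1, real(cosz), z1, cos(z1))  % numerical approximation vs exact cos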

Related

Plotting a linear decision boundary

I have a set of data points (40 x 2), and I've derived the formula for the decision boundary which ends up like this :
wk*X + w0 = 0
wk is a 1 x 2 vector and X is a 2 x 1 point from the data point set; essentially X = (xi,yi), where i = 1,2,...,40. I have the values for wk and w0.
I'm trying to plot the line wk*X + w0 = 0 but I have no idea how to plot the actual line. In the past, I've done this by finding the minimum and maximum of the data points and just connecting them together but that's definitely not the right approach.
wk*X is simply the dot product between two vectors, and so the equation becomes:
w1*x + w2*y + w0 = 0
... assuming a general point (x,y). If we rearrange this equation and solve for y, we get:
y = -(w1/w2)*x - (w0/w2)
As such, this defines an equation of the line where the slope is -(w1/w2) with an intercept -(w0/w2). All you have to do is define a bunch of linearly spaced points within a certain range, take each point and substitute this into the above equation and get an output. You'd plot all of these output points in the figure as well as the actual points themselves. You make the space or resolution between points small enough so that we are visualizing a line when we connect all of the points together.
To determine the range or limits of this line, figure out what the smallest and largest x value is in your data, define a set of linearly spaced points between these and plot your line using the equation of the line we just talked about.
Something like this could work assuming that you have a matrix of points stored in X as you have mentioned, and w1 and w2 are defined in the vector wk and w0 is defined separately:
x = linspace(min(X(:,1)), max(X(:,1)));
y = -(wk(1)/wk(2))*x - (w0/wk(2));
plot(X(:,1), X(:,2), 'b.', x, y);
linspace determines a linearly spaced array of points from a beginning to an end, and by default 100 points are generated. We then create the output values of the line given these points and we plot the individual points in blue as well as the line itself on top of these points.
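As a quick usage example, here is the same snippet run end-to-end with made-up values; the wk, w0 and X below are hypothetical placeholders, not values from the question:
wk = [2 -1];               % hypothetical weight vector
w0 = 0.5;                  % hypothetical bias
X  = randn(40, 2);         % placeholder 40 x 2 data set
x  = linspace(min(X(:,1)), max(X(:,1)));
y  = -(wk(1)/wk(2))*x - (w0/wk(2));
plot(X(:,1), X(:,2), 'b.', x, y);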

Square wave function for Matlab

I'm new to programming in Matlab. I'm trying to figure out how to calculate the following function:
I know my code is off, I just wanted to start with some form of the function. I have attempted to write out the sum of the function in the program below.
function [g] = square_wave(n)
g = symsum(((sin((2k-1)*t))/(2k-1)), 1,n);
end
Any help would be much appreciated.
Update:
My code as of now:
function [yout] = square_wave(n)
syms n;
f = n^4;
df = diff(f);
syms t k;
f = 1; %//Define frequency here
funcSum = (sin(2*pi*(2*k - 1)*f*t) / (2*k - 1));
funcOut = symsum(func, v, start, finish);
xsquare = (4/pi) * symsum(funcSum, k, 1, Inf);
tVector = 0 : 0.01 : 4*pi; %// Choose a step size of 0.01
yout = subs(xsquare, t, tVector);
end
Note: This answer was partly inspired by a previous post I wrote here: How to have square wave in Matlab symbolic equation - However, it isn't quite the same, which is why I'm providing an answer here.
Alright, so it looks like you got the first bit of the question right. However, when you're multiplying things together, you need to use the * operator... and so 2k - 1 should be 2*k - 1. Ignoring this, you are symsuming correctly given that square wave equation. The input into this function is only one parameter - n. What you see in the above equation is a Fourier Series representation of a square wave: a bastardized version of the theory is that you can represent a periodic function as an infinite summation of sinusoidal functions, with each function weighted by a certain amount.
n controls the total number of sinusoids to add into the equation. The more sinusoids you have, the more the function is going to look like a square wave. In the question, they want you to play around with the value of n. If n becomes very large, it should start approaching what looks like to be a square wave.
The symsum will represent this Fourier Series as a function with respect to t. What you need to do now is you need to substitute values of t into this expression to get the output amplitude for each value t. They define that for you already where it's a vector from 0 to 4*pi with 1001 points in between.
Define this vector, then you'll need to use subs to substitute the time values into the symsum expression and when you're done, cast them back to double so that you actually get a numeric vector.
As such, your function should simply be this:
function [g] = square_wave(n)
syms t k; %// Define t and k
f = sin((2*k-1)*t)/(2*k-1); %// Define function
F = symsum(f, k, 1, n); %// Define Fourier Series
tVector = linspace(0, 4*pi, 1001); %// Define time points
g = double(subs(F, t, tVector)); %// Get numeric output
end
The first line defines t and k to be symbolic because t and k are symbolic in the expression. Next, I'll define f to be the term inside the summation with respect to t and k. The line after that defines the actual sum itself. We use f and sum with respect to k, as that is what the summation calls for, and we sum from 1 up to n. Last but not least, we define a time vector from 0 to 4*pi with 1001 points in between and we use subs to substitute the value of t in the Fourier Series with all values in this vector. The result should be a 1001-element vector, which I then cast to double to get a numerical result - your desired output.
To show you that this works, we can try this with n = 20. Do this in the command prompt now:
>> g = square_wave(20);
>> t = linspace(0, 4*pi, 1001);
>> plot(t, g);
We get:
Therefore, if you make n go higher... so 200 as they suggest, you'll see that the wave will eventually look like what you expect from a square wave.
If you don't have the Symbolic Math Toolbox, which symsum, syms and subs rely on, we can do it completely numerically. What you'll have to do is define a meshgrid of points for pairs of t and k, substitute each pair into the sequence equation for the Fourier Series and sum up all of the results.
As such, you'd do something like this:
function [g] = square_wave(n)
tVector = linspace(0, 4*pi, 1001); %// Define time points
[t,k] = meshgrid(tVector, 1:n); %// Define meshgrid
f = sin((2*k-1).*t)./(2*k-1); %// Define Fourier Series
g = sum(f, 1); %// Sum up for each time point
end
The first line of code defines our time points from 0 to 4*pi. The next line of code defines a meshgrid of points. How this works is that for t, each column holds one unique time point repeated n times, so the first column is all zeroes and the last column is all 4*pi values. Similarly for k, each row holds one value of k, so the first row is 1001 ones, the next row is 1001 twos, and so on up to 1001 copies of n. Each column of t and k therefore pairs one time value with every k from 1 to n, which is exactly what we need to compute the terms of the Fourier series at the time that is unique to that column.
As such, you'd simply use the sequence equation and do element-wise multiplication and division, then sum along each individual column to finally get the square wave output. With the above code, you will get the same result as above, and it'll be much faster than symsum because we're doing it numerically now and not doing it symbolically which has a lot more computational overhead.
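For example, you can call the numeric version the same way as before (assuming this square_wave is the one on your path):
>> g = square_wave(200);
>> t = linspace(0, 4*pi, 1001);
>> plot(t, g);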
Here's what we get when n = 200:
This code with n=200 ran in milliseconds whereas the symsum equivalent took almost 2 minutes on my machine - Mac OS X 10.10.3 Yosemite, 16 GB RAM, Intel Core i7 2.3 GHz.

Finding values of x for a given y when approaching a limit

I'm trying to find two x values for each y value on a plot that is very similar to a Gaussian fn. The difficulty is that I need to be able to find the values of x for several values of y even when the gaussian fn is very close to zero.
I can't post an image due to being a new user, however think of a gaussian function and then the regions where it is close to zero on either side of the peak. This part where the fn is very close to reaching zero is where I need to find the x values for a given y.
What I've tried:
When the fn is discrete: I have tried interp1, however I get the error that it is not strictly monotonic increasing because of the many values that are close to zero.
When I fit a two-term gaussian:
I use fzero (fzero(function-yvalue)) however I get a lot of NaN's. These might be from me not having a close enough 'guess' value??
Does anyone have any other suggestions for me to try? Or how to improve what I've already attempted?
Thanks everyone
EDIT:
I've added a picture below. The data that I actually have is the blue line, while the fitted eqn is in red. The eqn should be accurate enough.
Again, I'm trying to pick out x values for a given y where y is very small (approaching 0).
I've tried splitting the function into left and right halves for the interpolation and fzero method.
Thanks for your responses anyway, I'll have a look at bisection.
Fitting a Gaussian seems to be ineffective, as its deviation (in the x-coordinate) from the real data is noticeable.
Since your data is already presented as a numeric vector y, the straightforward find(y>y0) seems adequate. Here is sample code, in which the y-values are produced from a perturbed Gaussian.
x = 0:1:700;
y = 2000*exp(-((x-200)/50).^2 - sin(x/100).^2); % imitated data
plot(x,y)
y0 = 1e-2; % the y-value to look for
i = min(find(y>y0)); % first entry above y0
if i == 1
    x1 = x(i);
else
    x1 = x(i) - y(i)*(x(i)-x(i-1))/(y(i)-y(i-1)); % linear interpolation
end
i = max(find(y>y0)); % last entry above y0
if i == numel(y)
    x2 = x(i);
else
    x2 = x(i) - y(i)*(x(i)-x(i+1))/(y(i)-y(i+1)); % linear interpolation
end
fprintf('Roots: %g, %g \n', x1, x2)
Output: Roots: 18.0659, 379.306
The curve looks much like your plot.
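If you need this for several y-levels (as in the question), the same idea can be wrapped in a loop. A small sketch, reusing the x and y vectors from above and assuming every level is crossed away from the ends of the data:
y0_list = [1e-2, 1e-1, 1, 10];                 % assumed target levels
for y0 = y0_list
    i1 = find(y > y0, 1, 'first');             % first entry above y0
    i2 = find(y > y0, 1, 'last');              % last entry above y0
    x1 = interp1(y(i1-1:i1), x(i1-1:i1), y0);  % crossing on the rising side
    x2 = interp1(y(i2:i2+1), x(i2:i2+1), y0);  % crossing on the falling side
    fprintf('y0 = %g: roots %g, %g\n', y0, x1, x2);
end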

Unnormalization of ellipse coefficients after direct ellipse fitting

I am trying to understand the normalization and "unnormalization" steps in the direct least squares ellipse fitting algorithm developed by Fitzgibbon, Pilu and Fisher (improved by Halir and Flusser).
EDITED: More details about the theory added. Is the eigenvalue problem where the confusion stems from?
Short theory:
The ellipse is represented by an implicit second order polynomial (general conic equation):
F(x, y) = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
where the coefficient vector is A = [a b c d e f]' and each data point contributes a row x_i = [x_i^2, x_i*y_i, y_i^2, x_i, y_i, 1].
To constrain this general conic to an ellipse, the coefficients must satisfy the quadratic constraint:
4*a*c - b^2 = 1
which is equivalent to:
A'*C*A = 1
where C is a matrix of zeros except C(1,3) = C(3,1) = 2 and C(2,2) = -1.
The design matrix D is composed of the rows x_i for all data points.
The minimization of the distance between the conic and the data points can be expressed by a generalized eigenvalue problem (some theory has been omitted). Denoting the scatter matrix S = D'*D, we now have the system:
S*A = lambda*C*A,   A'*C*A = 1
If we solve this system, the eigenvector corresponding to the single positive eigenvalue is the correct answer.
The code:
The code snippets here are directly from the MATLAB code provided by the authors:
http://research.microsoft.com/en-us/um/people/awf/ellipse/fitellipse.html
The data input is a series of (x,y) points. The points are normalized by subtracting the mean and dividing by the standard deviation (in this case, computed as half the range). I'm assuming this normalization allows for a better fit of the data.
% normalize data
% X and Y are the vectors of data points, not normalized
mx = mean(X);
my = mean(Y);
sx = (max(X)-min(X))/2;
sy = (max(Y)-min(Y))/2;
x = (X-mx)/sx;
y = (Y-my)/sy;
% Build design matrix
D = [ x.*x x.*y y.*y x y ones(size(x)) ];
% Build scatter matrix
S = D'*D;
% Build 6x6 constraint matrix
C(6,6) = 0; C(1,3) = -2; C(2,2) = 1; C(3,1) = -2;
[gevec, geval] = eig(S,C);
% Find the negative eigenvalue
I = find(real(diag(geval)) < 1e-8 & ~isinf(diag(geval)));
% Extract eigenvector corresponding to negative eigenvalue
A = real(gevec(:,I));
After this, the normalization is reversed on the coefficients:
par = [
A(1)*sy*sy, ...
A(2)*sx*sy, ...
A(3)*sx*sx, ...
-2*A(1)*sy*sy*mx - A(2)*sx*sy*my + A(4)*sx*sy*sy, ...
-A(2)*sx*sy*mx - 2*A(3)*sx*sx*my + A(5)*sx*sx*sy, ...
A(1)*sy*sy*mx*mx + A(2)*sx*sy*mx*my + A(3)*sx*sx*my*my ...
- A(4)*sx*sy*sy*mx - A(5)*sx*sx*sy*my ...
+ A(6)*sx*sx*sy*sy ...
]';
At this point, I'm not sure what happened. Why is the unnormalization of the last three coefficients of A (d, e, f) dependent on the first three coefficients? How do you mathematically show where these unnormalization equations come from?
The 2 and 1 coefficients in the unnormalization lead me to believe the constraint matrix must be involved somehow.
Please let me know if more detail is needed on the method...it seems I'm missing how the normalization has propagated through the matrices and eigenvalue problem.
Any help is appreciated. Thanks!
First, let me formalize the problem in a homogeneous space (as used in Richard Hartley and Andrew Zisserman's book Multiple View Geometry):
Assume that,
P=[X,Y,1]'
is our point in the unnormalized space, and
p=lambda*[x,y,1]'
is our point in the normalized space, where lambda is an unimportant free scale (in homogeneous space [x,y,1]=[10*x,10*y,10] and so on).
Now it is clear that we can write
x = (X-mx)/sx;
y = (Y-my)/sy;
as a simple matrix equation like:
p=H*P; %(equation (1))
where
H=[1/sx, 0, -mx/sx;
0, 1/sy, -my/sy;
0, 0, 1];
Also we know that an ellipse with the equation
A(1)*x^2 + A(2)*xy + A(3)*y^2 + A(4)*x + A(5)*y + A(6) = 0 %(first representation)
can be written in matrix form as:
p'*C*p=0 %you can easily verify this by matrix multiplication
where
C=[A(1), A(2)/2, A(4)/2;
A(2)/2, A(3), A(5)/2;
A(4)/2, A(5)/2, A(6)]; %second representation
and
p=[x,y,1]'
and it is clear that these two representations of an ellipse are exactly the same and equivalent.
Also we know that the vector A=[A(1),A(2),A(3),A(4),A(5),A(6)] is a type-1 representation of the ellipse in the normalized space.
So we can write:
p'*C*p=0
where p is the normalized point and C is as defined previously.
Now we can use the "equation (1): p=HP" to derive some good result:
(H*P)'*C*(H*P)=0
=====>
P'*H'*C*H*P=0
=====>
P'*(H'*C*H)*P=0
=====>
P'*(C1)*P=0 %(equation (2))
We see that the equation (2) is an equation of an ellipse in the unnormalized space where C1 is the type-2 representation of ellipse and we know that:
C1=H'*C*H
And also, because equation (2) is a zero equation, we can multiply it by any non-zero number. So we multiply it by sx^2*sy^2 and we can write:
C1=sx^2*sy^2*H'*C*H
And finally we get the result
C1=[ A(1)*sy^2, (A(2)*sx*sy)/2, (A(4)*sx*sy^2)/2 - A(1)*mx*sy^2 - (A(2)*my*sx*sy)/2;
(A(2)*sx*sy)/2, A(3)*sx^2, (A(5)*sx^2*sy)/2 - A(3)*my*sx^2 - (A(2)*mx*sx*sy)/2;
-(- (A(4)*sx^2*sy^2)/2 + (A(2)*my*sx^2*sy)/2 + A(1)*mx*sx*sy^2)/sx, -(- (A(5)*sx^2*sy^2)/2 + A(3)*my*sx^2*sy + (A(2)*mx*sx*sy^2)/2)/sy, (mx*(- (A(4)*sx^2*sy^2)/2 + (A(2)*my*sx^2*sy)/2 + A(1)*mx*sx*sy^2))/sx + (my*(- (A(5)*sx^2*sy^2)/2 + A(3)*my*sx^2*sy + (A(2)*mx*sx*sy^2)/2))/sy + A(6)*sx^2*sy^2 - (A(4)*mx*sx*sy^2)/2 - (A(5)*my*sx^2*sy)/2]
which can be converted back into the type-1 (coefficient-vector) representation, giving exactly the result we were looking for:
[ A(1)*sy^2, A(2)*sx*sy, A(3)*sx^2, A(4)*sx*sy^2 - 2*A(1)*mx*sy^2 - A(2)*my*sx*sy, A(5)*sx^2*sy - 2*A(3)*my*sx^2 - A(2)*mx*sx*sy, A(2)*mx*my*sx*sy + A(1)*mx^2*sy^2 + A(3)*my^2*sx^2 + A(6)*sx^2*sy^2 - A(4)*mx*sx*sy^2 - A(5)*my*sx^2*sy]
If you are curious how I managed to calculate these time-consuming equations, I can give you the MATLAB code to do it for you, as follows:
syms sx sy mx my
syms a b c d e f
C=[a, b/2, d/2;
b/2, c, e/2;
d/2, e/2, f];
H=[1/sx, 0, -mx/sx;
0, 1/sy, -my/sy;
0, 0, 1];
C1=sx^2*sy^2*H.'*C*H
par = [C1(1,1), 2*C1(1,2), C1(2,2), 2*C1(1,3), 2*C1(2,3), C1(3,3)] % coefficient vector in the unnormalized space
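As a quick numerical sanity check of the unnormalization (the numbers below are arbitrary assumptions, not a real fit): the unnormalized vector par evaluated on the original points (X, Y) should equal sx^2*sy^2 times the normalized conic evaluated on the normalized points:
A  = randn(6,1);                       % arbitrary conic coefficients
mx = 3; my = -1; sx = 2; sy = 0.5;     % arbitrary normalization parameters
X  = randn(10,1); Y = randn(10,1);     % arbitrary test points
x  = (X - mx)/sx;  y = (Y - my)/sy;    % normalized points
par = [ A(1)*sy*sy, A(2)*sx*sy, A(3)*sx*sx, ...
        -2*A(1)*sy*sy*mx - A(2)*sx*sy*my + A(4)*sx*sy*sy, ...
        -A(2)*sx*sy*mx - 2*A(3)*sx*sx*my + A(5)*sx*sx*sy, ...
        A(1)*sy*sy*mx*mx + A(2)*sx*sy*mx*my + A(3)*sx*sx*my*my ...
        - A(4)*sx*sy*sy*mx - A(5)*sx*sx*sy*my + A(6)*sx*sx*sy*sy ]';
lhs = [X.*X, X.*Y, Y.*Y, X, Y, ones(size(X))]*par;            % conic on original points
rhs = sx^2*sy^2*([x.*x, x.*y, y.*y, x, y, ones(size(x))]*A);  % scaled conic on normalized points
max(abs(lhs - rhs))                                            % should be ~ machine precision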

How do you plot elliptic curves over a finite field using matlab

I need to draw an elliptic curve over the finite field F17(in other words, I want to draw some specific dots on the curve), but somehow I don't get it right.
The curve is defined by the equation:
y^2 = x^3 +x + 1 (mod 17)
I tried the way below, but it can't work.
for x = 0:16, plot(x, mod(sqrt(x^3+x+1), 16),'r')', end
Can someone help ?
[Update]
According to Nathan and Bill's suggestions, here is a slightly modified version.
x = 0:18
plot(mod(x,16), mod(sqrt(x.^3+x+1), 16),'ro')
However, I feel the figure is WRONG, e.g., y is not an integer when x=4.
You have to test all points that fulfill the equation y^2 = x^3 +x + 1 (mod 17). Since it is a finite field, you cannot simply take the square root on the right side.
This is how I would go about it:
a=0:16 %all points of your finite field
left_side = mod(a.^2,17) %left side of the equation
right_side = mod(a.^3+a+1,17) %right side of the equation
points = [];
%testing if left and right side are the same
%(you could probably do something nicer here)
for i = 1:length(right_side)
    I = find(left_side == right_side(i));
    for j = 1:length(I)
        points = [points; a(i), a(I(j))];
    end
end
plot(points(:,1),points(:,2),'ro')
set(gca,'XTick',0:1:16)
set(gca,'YTick',0:1:16)
grid on;
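The double loop can also be replaced by a fully vectorized membership test over all candidate (x, y) pairs; here is a short sketch of that variant for the same curve over F17:
a = 0:16;                                               % all field elements
[xg, yg] = meshgrid(a, a);                              % every candidate (x, y) pair
on_curve = mod(yg.^2, 17) == mod(xg.^3 + xg + 1, 17);   % test the curve equation
plot(xg(on_curve), yg(on_curve), 'ro')
set(gca, 'XTick', 0:16, 'YTick', 0:16), grid on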
Matlab works with vectors natively. Your syntax was close, but it needs to be vectorized:
x = 0:16
plot(x, mod(sqrt(x.^3+x+1), 16),'r')
Note the . in x.^3. This tells Matlab to cube each element of x individually, as opposed to raising the vector x to the 3rd power, which doesn't mean anything.
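For example, a trivial illustration of the difference:
x = 0:4;
x.^3     % element-wise cube: 0 1 8 27 64
% x^3    % would error: matrix power is only defined for square matrices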
You can use this code if you want to plot over the real numbers:
syms x y;
v=y^2-x^3-x-1;
ezplot(v, [-1,3,-5,5]);
But, for the plot in modular arithmetic, you can first write the code below:
X=[]; for x=[0:16], z=[x; mod(x^3+x+1,17)]; X=[X, z]; end, X,
Then, you can plot X with a coordinate matrix.