How to plot a sine uniformly in MATLAB? - matlab

If I plot a sine like this
x=0:0.05:2*pi;
y=sin(x);
plot(x,y,'.-')
I'm getting an obviously non-uniform density of points. Please see the attachment.
What I want is for the points to be equidistant from each other. So I need to define the x array somehow... or is there another way?

The point density is uniform in x. If you want the points to be uniform in y, you could use:
y=-1:.05:1;
plot(asin(y),y,'o')
But then the points aren't uniform in x.
EDIT: Just for fun, or for any future readers: to get points uniformly spaced along the curve, note that the distance between consecutive points is d=sqrt(h^2+(f(x+h)-f(x))^2), which is approximately d=h*sqrt(1+f'(x)^2), i.e. h=d/sqrt(1+cos(x)^2) in this case. The curve length is the integral of sqrt(1+f'(x)^2), which here equals 4*sqrt(2)*ellipticE(1/2) = 7.6404:
N = 100;
d = 7.6404/N;          % target spacing: curve length / number of points
x = zeros(1,N);
for n = 2:N
    x(n) = x(n-1) + d/sqrt(1+cos(x(n-1))^2);
end
y = sin(x);
plot(x,y,'x')
You can check that the distance between points is approximately constant by looking at sqrt(diff(x).^2+diff(y).^2). It's only approximate because the derivative (evaluated at the left endpoint of each interval) is used for the distance, but this gets better as N increases. To make the distances exact, we'd need to numerically solve a trig equation for each point. The curve length is also affected by the approximation, and the last point (at 2*pi) tends to be missed.
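For completeness, here is a small numerical check of both claims (reusing x, y, and d from the code above; integral is used to confirm the quoted curve length):
seg = sqrt(diff(x).^2 + diff(y).^2);            % spacing between consecutive points
[min(seg) max(seg)]                             % both should be close to d
L = integral(@(t) sqrt(1+cos(t).^2), 0, 2*pi)   % total curve length, approx. 7.6404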

Related

How to calculate the point-by-point radius of curvature of a trajectory that is not a proper function

With MATLAB I'm trying to calculate the "radius of curvature" signal of a trajectory obtained from GPS data projected onto the local Cartesian plane.
The value of the signal at the n-th point is that of the osculating circle tangent to the trajectory at that point.
By convention the signal amplitude has to be negative for a left turn and positive for a right turn.
For trajectories whose graph is a proper function, I build the "sign" signal by evaluating the difference between the y coordinate of the center of the osculating circle and the y coordinate of the trajectory:
for i = 1:length(yCenter)-1
    aux = Y_m(closestIndex_head:closestIndex_tail);
    if yCenter(i) - aux(i) > 0
        sign(i) = -1;
    else
        sign(i) = +1;
    end
end
yCenter contains the y-coordinates of the centers of all osculating circles, one for each point of the trajectory;
Y_m contains the y-coordinates of every point of the trajectory.
The above simple method works as long as the trajectory's graph is a proper function (for every x there is only one y).
The trajectory I'm working on looks like this:
and the sign signal has some anomalies:
The sign seems to change within a turn.
I've tried to correct the sign using the sine of the angle between the tangent vector and the trajectory, the sign of the tangent of the angle, and other similar approaches, but I'm still seeing some anomalies:
I'm pretty sure those anomalies come from the fact that the graph is not a proper function, and that the solution lies in the angle of the tangent vector, but something is still missing.
Any advice will be really appreciated,
thank you.
Alessandro
To track a general 2D curve, you should use an expression for the curvature that is appropriate for arbitrarily parametrized 2D curves.
While implementing the equation from Wikipedia, you can approximate the derivatives with discrete differences. Given the x and y coordinates, this could be implemented as follows:
% approximate 1st derivatives of x & y with discrete differences
dx = 0.5*(x(3:end)-x(1:end-2));
dy = 0.5*(y(3:end)-y(1:end-2));
dl = sqrt(dx.^2 + dy.^2);
xp = dx./dl;
yp = dy./dl;
% approximate 2nd derivatives of x & y with discrete differences
xpp = (x(3:end)-2*x(2:end-1)+x(1:end-2))./(dl.^2);
ypp = (y(3:end)-2*y(2:end-1)+y(1:end-2))./(dl.^2);
% compute the (signed) curvature
curvature = (xp.*ypp - yp.*xpp) ./ ((xp.^2 + yp.^2).^(1.5));
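If the signed radius of curvature from the question is what's ultimately needed, it is just the reciprocal of this signed curvature. Note that with this formula and the usual axis orientation (y up, parameter increasing along the direction of travel), a left turn gives positive curvature, so the question's convention (negative on left turns) would simply flip the sign:
radius = 1./curvature;     % signed radius of the osculating circle
radius_conv = -radius;     % sign flipped to match the question's convention (assumed)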
For demonstration purposes I've also constructed a synthetic test signal (which can be used to recreate the same conditions), but you can obviously use your own data instead:
N = 200;   % number of samples per segment (value assumed for the demo)
z1 = linspace(2,1,N).*exp(1i*linspace(0.75*pi,-0.25*pi,N));
z2 = 2*exp(-1i*0.25*pi) + linspace(1,2,N).*exp(1i*linspace(0.75*pi,2.25*pi,N));
z = cat(2,z1,z2);   % join the two segments into one trajectory
x = real(z);
y = imag(z);
With the corresponding curvature results:

How to calculate normalized euclidean distance on two vectors?

Let's say I have the following two vectors:
x = [(10-1).*rand(7,1) + 1; randi(10,1,1)];
y = [(10-1).*rand(7,1) + 1; randi(10,1,1)];
The first seven elements are continuous values in the range [1,10]. The last element is an integer in the range [1,10].
Now I would like to compute the Euclidean distance between x and y. I think the integer element is a problem because all the other elements can get very close to each other, but the integer element always has spacings of one. So there is a bias towards the integer element.
How can I calculate something like a normalized euclidean distance on it?
According to Wolfram Alpha and the following answer from Cross Validated, the normalized (squared) Euclidean distance is defined as 0.5 * ||(x - mean(x)) - (y - mean(y))||^2 / (||x - mean(x)||^2 + ||y - mean(y)||^2).
You can calculate it with MATLAB by using:
0.5*(std(x-y)^2) / (std(x)^2+std(y)^2)
Alternatively, you can use:
0.5*((norm((x-mean(x))-(y-mean(y)))^2)/(norm(x-mean(x))^2+norm(y-mean(y))^2))
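The two expressions are equivalent, since std(x-y)^2 = norm((x-mean(x))-(y-mean(y)))^2/(n-1) and std(x)^2+std(y)^2 = (norm(x-mean(x))^2+norm(y-mean(y))^2)/(n-1), so the (n-1) factors cancel. A quick sanity check, reusing the x and y from the question:
d1 = 0.5*(std(x-y)^2) / (std(x)^2+std(y)^2);
d2 = 0.5*((norm((x-mean(x))-(y-mean(y)))^2)/(norm(x-mean(x))^2+norm(y-mean(y))^2));
abs(d1 - d2)    % should be on the order of eps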
I would rather normalise x and y before calculating the distance, and then the vanilla Euclidean distance would suffice.
In your example
x_norm = (x -1) / 9; % normalised x
y_norm = (y -1) / 9; % normalised y
dist = norm(x_norm - y_norm); % Euclidean distance between normalised x, y
However, I am not sure whether having an integer element contributes to some sort of bias, but we have already gotten kind of off-topic for Stack Overflow :)
From Euclidean Distance - raw, normalized and double‐scaled coefficients
SYSTAT, Primer 5, and SPSS provide Normalization options for the data so as to permit an investigator to compute a distance coefficient which is essentially "scale free". Systat 10.2's normalised Euclidean distance produces its "normalisation" by dividing each squared discrepancy between attributes or persons by the total number of squared discrepancies (or sample size).
Frankly, I can see little point in this standardization – as the final coefficient still remains scale-sensitive. That is, it is impossible to know whether the value indicates high or low dissimilarity from the coefficient value alone.
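If one wanted to reproduce the Systat-style normalisation that this quote describes (dividing each squared discrepancy by the sample size), one possible reading in MATLAB would be:
d_systat = sqrt(sum((x - y).^2) / numel(x));   % RMS difference; still scale-sensitive, as noted above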

Plotting a closed integral function in MATLAB

I need some help. I have to generate a curve using MATLAB. The plot is defined by the following formula (an analytic expression):
where the meaning of the variables is as follows: R is the distributed resistive function, S is the distributive conductive function, k is the sheet resistance, and r(x,y) is the distance between (x,y) and the perimeter element dl, with the integration made around the whole perimeter of the chip.
A square foil as shown in the figure, with sides a = 10 arbitrary units long and a unit sheet resistance (k = 1 ohm), is used for our consideration. The plot of the function R(x,y) is supposed to come out like this...
I literally have no clue how to plot this function. I could not even figure out how to define the distance function r(x,y) with respect to dl. On top of that, it is complicated further by the closed integral. Please help me. Any help in even simplifying the expression is also welcome. Is there any possible closed-form expression for such a square structure?
Thanks in advance. The link to the paper is here.
Reconstructing the math
The definition of the function R is not particularly clear, but I guess (judging also from the code below) what they mean is
1/R(p) = integral over the boundary dOmega of 1/(k*||q - p||) dl(q),
with dOmega being the boundary of the foil and p = [px,py] a point on the foil.
Imagine that for a point p on the sheet you compute R(p) by going around the boundary of the foil (what they call the perimeter), your position being q, integrating one divided by (k times the distance from you (q) to the point p), and taking the reciprocal of the result.
I guess you could analytically compute the integral for this rectangular sheet, but if you just want to plot the function, you could simply approximate the integral by defining a finite number of points on the boundary, evaluating the integrand at those points, taking the mean, and multiplying by the perimeter. [In the same way you could approximate integral(f(x), x=0...pi) by pi*(f(0)+f(pi/2)+f(pi))/3.]
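As a tiny illustration of that bracketed idea in MATLAB (with sin chosen here as an arbitrary example integrand):
f = @(t) sin(t);                        % example integrand
approx = pi * mean(f([0, pi/2, pi]))    % length of interval times mean of samples
exact  = integral(f, 0, pi)             % = 2; more sample points tighten the estimate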
Alternative representation using coordinates:
If you are only familiar with integrals along the real line in coordinate representation, you could write this in the following way, which is frankly quite ugly:
Plotting an approximation
%% Config:
xlen = 10;
ylen = 10;
k = 1;
%% Setting up points on the boundary of the square
x = linspace(0,xlen,50);
y = linspace(0,ylen,50);
perimeter = 2*(xlen+ylen);
boundary = [x(1)*ones(length(y),1),   y'; ...
            x',                       y(1)*ones(length(x),1); ...
            x(end)*ones(length(y),1), y'; ...
            x',                       y(end)*ones(length(x),1)];
%% Function definition
norm2 = @(X) sqrt(sum(X.^2,2));
R = @(p) 1/(perimeter*mean(1./(k*norm2(bsxfun(@minus,boundary,p)))));
%% Plotting
[X_grid,Y_grid] = ndgrid(x,y);
R_grid = zeros(size(X_grid));
for ii = 1:length(x)
    for jj = 1:length(y)
        R_grid(ii,jj) = R([x(ii),y(jj)]);
    end
end
surf(X_grid, Y_grid, R_grid);
axis vis3d;
This will give you the following plot:

"Frequency" shift in discrete FFT in MATLAB

(Disclaimer: I thought about posting this on the Math/Stats Stack Exchange sites, but found similar questions there that were moved to SO, so here I am)
The context:
I'm using fft/ifft to determine probability distributions for sums of random variables.
So e.g. I have two uniform probability distributions; in the simplest case, two uniform distributions on the interval [0,1].
To get the probability distribution for the sum of two random variables sampled from these two distributions, one can calculate the product of the Fourier transforms of each probability density.
Doing the inverse FFT on this product, you get back the probability density for the sum.
An example:
function usumdist_example()
x = linspace(-1, 2, 1e5);
dx = diff(x(1:2));
NFFT = 2^nextpow2(numel(x));
% take two uniform distributions on [0,0.5]
intervals = [0, 0.5;
             0, 0.5];
figure();
hold all;
for i = 1:size(intervals,1)
    % construct the prob. dens. function
    P_x = x >= intervals(i,1) & x <= intervals(i,2);
    plot(x, P_x);
    % for each pdf, get the characteristic function fft(pdf,NFFT)
    % and form the product of all char. functions in Y
    if i==1
        Y = fft(P_x,NFFT) / NFFT;
    else
        Y = Y .* fft(P_x,NFFT) / NFFT;
    end
end
y = ifft(Y, NFFT);
x_plot = x(1) + (0:dx:(NFFT-1)*dx);
plot(x_plot, y / max(y), '.');
end
My issue: the shape of the resulting prob. dens. function is perfect.
However, the x-axis does not match the x I create at the beginning; it is shifted.
In the example, the peak is at 1.5, while it should be at 0.5.
The shift changes if I e.g. add a third random variable or if I modify the range of x.
But I can't figure out how.
I'm afraid it might have to do with the fact that I have negative x values, while Fourier transforms usually work in a time/frequency domain, where frequencies < 0 don't make sense.
I'm aware I could e.g. find the peak and shift it to its proper place, but that seems nasty and error-prone...
Glad about any ideas!
The problem is that your x origin is -1, not 0. You expect the center of the triangular pdf to be at 0.5, because that's twice the value of the center of the uniform pdf. However, the correct reasoning is: the center of the uniform pdf is 1.25 above your minimum x, so the center of the triangle ends up 2*1.25 = 2.5 above the minimum x, that is, at 1.5.
In other words: although your original x axis is (-1, 2), the convolution (or the FFT) behaves as if it were (0, 3). In fact, the FFT knows nothing about your x axis; it only uses the y samples. Since your uniform pdf is zero over the first samples, that zero interval of width 1 is doubled in width when you do the convolution (or the FFT). I suggest drawing the convolution on paper to see this (draw the original signal, the signal reflected about the y axis, then displace the latter and see when the two begin to overlap). So you need a correction in the x_plot line to compensate for this increased width of the zero interval: use
x_plot = 2*x(1) + (0:dx:(NFFT-1)*dx);
and then plot(x_plot, y / max(y), '.') will give the correct graph:
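Regarding the remark in the question that the shift changes when a third random variable is added: the same reasoning appears to generalize. A hedged sketch, assuming all m pdfs are sampled on the same x grid defined above:
m = 3;                                   % number of convolved random variables (example)
x_plot = m*x(1) + (0:dx:(NFFT-1)*dx);    % the origin shifts by x(1) once per variable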

FFTW and fft with MATLAB

I have a weird problem with the discrete FFT. I know that the Fourier transform of a Gauss function exp(-x^2/2) is again the same Gauss function exp(-k^2/2). I tried to test that with some simple code in MATLAB and FFTW, but I get strange results.
First, the imaginary part of the result is not zero (in MATLAB), as it should be.
Second, the absolute value of the real part is a Gauss curve, but without the absolute value half of the modes have a negative coefficient. More precisely, every second mode has a coefficient that is the negative of what it should be.
Third, the peak of the resulting Gauss curve (after taking the absolute value of the real part) is not at one but much higher. Its height is proportional to the number of points on the x-axis, but the proportionality factor is not 1; it is nearly 1/20.
Could anyone explain to me what I am doing wrong?
Here is the MATLAB code that I used:
function [nooutput,M] = fourier_test
Nx = 512; % number of points in x direction
Lx = 50; % width of the window containing the Gauss curve
x = linspace(-Lx/2,Lx/2,Nx); % creating an equidistant grid on the x-axis
input_1d = exp(-x.^2/2); % Gauss function as an input
input_1d_hat = fft(input_1d); % computing the discrete FFT
input_1d_hat = fftshift(input_1d_hat); % ordering the modes such that the peak is centred
plot(real(input_1d_hat), '-')
hold on
plot(imag(input_1d_hat), 'r-')
The answer is basically what Paul R suggests in his second comment: you introduce a phase shift (linearly dependent on the frequency) because the center of the Gaussian described by input_1d is effectively at a sample index n > 0 (where n+1 is the MATLAB index into input_1d). If instead you center your data (such that input_1d(1) corresponds to the center of the Gaussian) as follows, you get a phase-corrected Gaussian in the frequency domain:
Nx = 512; % number of points in x direction
Lx = 50; % width of the window containing the Gauss curve
x = linspace(-Lx/2,Lx/2,Nx); % creating an equidistant grid on the x-axis
%%%%%%%%%%%%%%%%
x=fftshift(x); % <-- center
%%%%%%%%%%%%%%%%
input_1d = exp(-x.^2/2); % Gauss function as an input
input_1d_hat = fft(input_1d); % computing the discrete FFT
input_1d_hat = fftshift(input_1d_hat); % ordering the modes such that the peak is centered
plot(real(input_1d_hat), '-')
hold on
plot(imag(input_1d_hat), 'r-')
From the definition of the DFT, if the Gaussian is not centered such that its maximum occurs at index 0, you will see a phase twist. The effect of fftshift is to perform a circular shift, i.e. to swap the left and right halves of the dataset, which is equivalent to shifting the center of the peak to index 0.
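A small illustration of that circular shift, just to make the reordering concrete:
fftshift(1:6)      % returns [4 5 6 1 2 3]: left and right halves swapped
fftshift(-3:2)     % returns [0 1 2 -3 -2 -1]: the origin moves to the first sample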
As for the amplitude scaling, that is an issue with the definition of the DFT implemented in MATLAB. From the documentation for fft:
For length N input vector x, the DFT is a length N vector X, with elements
X(k) = sum_{n=1}^{N} x(n)*exp(-j*2*pi*(k-1)*(n-1)/N),   1 <= k <= N.
The inverse DFT (computed by IFFT) is given by
x(n) = (1/N) * sum_{k=1}^{N} X(k)*exp( j*2*pi*(k-1)*(n-1)/N),   1 <= n <= N.
Note that in the forward step the summation is not normalized by N. Therefore, if you increase the number of points Nx in the summation while keeping the width Lx of the window containing the Gaussian constant, you will increase X(k) proportionately.
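A side note on the factor of roughly 1/20 mentioned in the question: multiplying the DFT by the grid spacing dx approximates the continuous Fourier transform, whose peak value for exp(-x^2/2) is sqrt(2*pi), about 2.507. With Lx = 50 the expected DFT peak is therefore about Nx*sqrt(2*pi)/Lx, i.e. roughly Nx/20:
dx = Lx/(Nx-1);                         % spacing of the linspace grid above
peak_dft  = max(real(input_1d_hat))     % grows proportionally with Nx
peak_cont = peak_dft * dx               % approx. sqrt(2*pi), independent of Nx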
As for signal leaking into the imaginary frequency dimension, that is due to the discrete form of the DFT, which results in truncation and other effects, as noted again by Paul R. If you reduce Lx while keeping Nx constant, you should see a reduction in the amount of signal in the imaginary dimension relative to the real dimension (compare the spectra while keeping peak intensities in the real dimension equal).
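If you want to try that comparison quickly, here is a rough sketch (reusing the centered approach from the code above, with two arbitrarily chosen window widths):
Nx = 512;
Lx_values = [50, 25];
ratio = zeros(size(Lx_values));
for m = 1:numel(Lx_values)
    Lx = Lx_values(m);
    x  = fftshift(linspace(-Lx/2, Lx/2, Nx));       % centered grid, as above
    H  = fftshift(fft(exp(-x.^2/2)));               % spectrum of the Gaussian
    ratio(m) = max(abs(imag(H))) / max(real(H));    % relative imaginary content
end
ratio    % expected to be smaller for the narrower window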
You'll find additional answers to similar questions here and here.