I am trying to make a pole/zero plot of a simple polynomial (1+z)^(2p) for p=7. My code is as follows:
p = 7;
rCoeffs = [1 1];
for ii=1:2*p-1
rCoeffs = conv(rCoeffs, [1 1]);
end
zplane(real(rCoeffs),1);
The plot displays the following:
I don't understand why the zeros are complex numbers. I think all the zeros should be located at z=-1, but this plot shows a circle. This doesn't happen when p is small, yet I have seen a few plots online, apparently generated by zplane, that show a large number of zeros at a single point.
Basically, with this setup you're looking for the 14 roots of that polynomial. Unfortunately, no general closed-form solution exists for polynomial equations of degree 5 or greater, so the roots must be found numerically. The numerical solution is only approximate, and a root repeated 14 times is extremely sensitive to rounding: tiny perturbations of the coefficients spread the single root at z = -1 into a small circle of complex roots around it, which is exactly what zplane is showing.
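As a quick check (a minimal sketch using only roots and the same coefficient construction as in the question), you can compute the roots of the coefficient vector directly and see how far they land from -1:
p = 7;
rCoeffs = 1;
for ii = 1:2*p
    rCoeffs = conv(rCoeffs, [1 1]);   % build the coefficients of (1+z)^(2p)
end
r = roots(rCoeffs);                   % 14 roots, all analytically equal to -1
disp(max(abs(r + 1)))                 % radius of the small circle of computed roots (nonzero)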
I would add that Nathan's method works as intended, and if you change it slightly, you will see all the solutions to the above equation.
z = tf('z',1)
H = (1+z)^(2*7);
[p,z1] = pzmap(H)
z1 % solution to H = 0
(1+z1).^(2*7) % evaluate H elementwise at the computed zeros (should be approximately 0)
An easier way:
p = 7
z = zpk('z',0.1);
H = (1+z)^(2*p);
pzmap(H)
This is part of a larger project, so I will try to keep only the relevant parts (the variables and my attempt at the calculation).
I want to calculate the root mean squared error between Zi_cubic and Z_actual
RMSE formula: RMSE = sqrt( sum((Zi_cubic - Z_actual).^2) / N ), where N is the number of valid (non-NaN) elements.
Given/already established variables
rng('default');
% Set up 2,000 random numbers between -1 & +1 as our x & y values
n=2000;
x = 2*(rand(n,1)-0.5);
y = 2*(rand(n,1)-0.5);
z = x.^5+y.^3;
% Interpolate to a regular grid
d = -1:0.01:1;
[Xi,Yi] = meshgrid(d,d);
Zi_cubic = griddata(x,y,z,Xi,Yi,'cubic');
Z_actual = Xi.^5+Yi.^3;
My attempt at a calculation
My approach is to:
1. Arrange Zi_cubic and Z_actual as column vectors
2. Take the difference
3. Square each element of the difference
4. Sum all the elements of the squared difference (D4) using nansum
5. Divide by the number of finite elements in D4
6. Take the square root
D1 = reshape(Zi_cubic,[numel(Zi_cubic),1]);
D2 = reshape(Z_actual,[numel(Z_actual),1]);
D3 = D1 - D2;
D4 = D3.^2;
D5 = nansum(D4)
d6 = sum(isfinite(D4))
D6 = D5/d6
D7 = sqrt(D6)
Apparently this is wrong. I'm either mis-applying the RMSE formula or I don't understand what I'm telling MATLAB to do.
Any help would be appreciated. Thanks in advance.
Your RMSE is fine (in my book). The only thing that seems possibly off is the meshgrid and griddata. Your inputs to griddata are vectors and you are asking for a matrix output. That is fine, but you're potentially undersampling your input space. In other words, you are giving n samples as inputs, but perhaps you are expected to give n^2 samples as inputs? Here's some sample code for a smaller n to demonstrate this effect more clearly:
rng('default');
% Set up 2,000 random numbers between -1 & +1 as our x & y values
n=100; %Reduced because scatter is slow to plot
x = 2*(rand(n,1)-0.5);
y = 2*(rand(n,1)-0.5);
z = x.^5+y.^3;
S = 100;
subplot(1,2,1)
scatter(x,y,S,z)
%More data, more accurate ...
[x2,y2] = meshgrid(x,y);
z2 = x2.^5+y2.^3;
subplot(1,2,2)
scatter(x2(:),y2(:),S,z2(:))
The second plot should be a lot cleaner and thus will likely provide a more accurate estimate of Z_actual later on.
I also thought you might be running into some issues with floating-point numbers when calculating RMSE, but that appears not to be the case. Here's some alternative code showing how I would write RMSE.
d = Zi_cubic(:) - Z_actual(:);
mask = ~isnan(d);
n_valid = sum(mask);
rmse = sqrt(sum(d(mask).^2)/n_valid);
Notice that (:) linearizes a matrix into a column vector. It is also worth using more descriptive variable names than D1-D7.
In the end, though, these are just suggestions and your code looks fine.
PS - I'm assuming that you are supposed to be using cubic interpolation as that is another place you could perhaps deviate from what's expected ...
I have data that look like this:
These are curves of the same process but with different parameters.
I need to find the index (or x value) for certain y values (say, 10).
For the blue curve, this is easy: I'm using min to find the index:
[~, idx] = min(abs(y - target));
where y denotes the data and target the wanted value.
This approach works fine since I know that there is an intersection, and only one.
Now what to do with the red curve? I don't know beforehand whether there will be two intersections, so my idea of finding the first one and then stripping some of the data is not feasible.
How can I solve this?
Please note that the curves can shift in the x direction, so checking the found solution against its x range is not really an option (it could work for the data I have, but since there is more to come, this solution is probably not the best).
Shamelessly stolen from here:
function x0 = data_zeros(x,y)
% Indices of Approximate Zero-Crossings
% (you can also use your own 'find' method here, although it has
% this pesky difference of 1-missing-element because of diff...)
dy = find(y(:).*circshift(y(:), [-1 0]) <= 0);
% Do linear interpolation of near-zero-crossings
x0 = NaN(size(dy,1)-1,1);
for k1 = 1:size(dy,1)-1
b = [[1;1] [x(dy(k1)); x(dy(k1)+1)]] \ ...
[y(dy(k1)); y(dy(k1)+1)];
x0(k1) = -b(1)/b(2);
end
end
Usage:
% Some data
x = linspace(0, 2*pi, 1e2);
y = sin(x);
% Find zeros
xz = data_zeros(x,y);
% Plot original data and zeros found
figure(1), hold on
plot(x, y);
plot(xz, zeros(size(xz)), '+r');
axis([0,2*pi -1,+1]);
The gist: multiply all data points with their consecutive data points. Any of these products that is negative therefore has opposite sign, and gives you an approximate location of the zero. Then use linear interpolation between the same two points to get a more precise answer, and store that.
NOTE: for zeros exactly at the endpoints, this approach will not work. Therefore, it may be necessary to check those manually.
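For example, a minimal sketch of that manual endpoint check (assuming x and y are vectors and x0 is the column vector returned by data_zeros):
if y(1) == 0,   x0 = [x(1); x0];   end   % zero sitting exactly on the first sample
if y(end) == 0, x0 = [x0; x(end)]; end   % zero sitting exactly on the last sample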
Subtract the desired number from your curve, i.e. if you want the values at 10, compute data-10, then use an equality-within-tolerance check, something like
TOL = 1e-4;
IDX = 1:numel(data(:,1)); % Assuming you have column data
IDX = IDX(abs(data-10)<=TOL);
where logical indexing has been used.
I figured out a way: The answer by b3 in this question did the trick.
idx = find(diff(y > target));
Easy as can be :) The exact x value can then be found by interpolation. For me, this is fine since I don't need exact values.
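If you ever do need the exact crossing points, here is a minimal sketch of the interpolation step (assuming x and y are vectors of the same orientation and each crossing falls between two samples):
idx = find(diff(y > target));                   % index just before each crossing
x_cross = x(idx) + (target - y(idx)) .* ...
          (x(idx+1) - x(idx)) ./ (y(idx+1) - y(idx)); % linear interpolation to y = target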
I'm trying to find the line which best fits the data. I use the code below, but now I want to place the data into an array sorted so that the points closest to the line come first. How can I do this? Also, is polyfit the correct function to use for this?
x=[1,2,2.5,4,5];
y=[1,-1,-.9,-2,1.5];
n=1;
p = polyfit(x,y,n)
f = polyval(p,x);
plot(x,y,'o',x,f,'-')
PS: I'm using Octave 4.0 which is similar to Matlab
You can first compute the error between the real value y and the predicted value f
err = abs(y-f);
Then sort the error vector
[val, idx] = sort(err);
And use the sorted indexes to have your y values sorted
y2 = y(idx);
Now y2 has the same values as y, but with the ones closest to the fitted line first.
Do the same for x to compute x2 so you have a correspondence between x2 and y2
x2 = x(idx);
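Putting the pieces together with the data from the question (a sketch; the variable names just follow the question and this answer):
x = [1, 2, 2.5, 4, 5];
y = [1, -1, -0.9, -2, 1.5];
p = polyfit(x, y, 1);            % fit a line
f = polyval(p, x);               % predicted values on the line
[~, idx] = sort(abs(y - f));     % order points by distance to the fitted line
x2 = x(idx);                     % points closest to the line come first
y2 = y(idx);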
Sembei Norimaki did a good job of explaining your primary question, so I will look at your secondary question: is polyfit the right function?
The best-fit line, in the least-squares sense, is the line that minimizes the squared error between the model and the data.
If it must be a "line" we could use polyfit, which fits a polynomial. Of course, a "line" is just a first-degree polynomial, and first-degree polynomials have some properties that make them easy to deal with. The first-order polynomial (linear) equation you are looking for comes in this form:
y = mx + b
where y is your dependent variable and x is your independent variable. So the challenge is this: find the m and b such that the modeled y is as close to the actual y as possible. As it turns out, the error associated with a linear fit is convex, meaning it has a single minimum. To calculate this minimum, it is simplest to combine the bias and the x vectors as follows:
Xcombined = [x.' ones(length(x),1)];
then utilize the normal equation, derived from minimizing the error:
beta = inv(Xcombined.'*Xcombined)*(Xcombined.')*(y.') % equivalently (and more robustly): beta = Xcombined \ y.'
Great, now our line is defined as Y = Xcombined*beta. To draw the line, simply sample over some range of x and include the bias term:
Xplot = [[0:.1:5].' ones(length([0:.1:5].'),1)];
Yplot = Xplot*beta;
plot(Xplot, Yplot);
So why does polyfit work so poorly? Well, I can't say for sure, but my hypothesis is that you need to transpose your x and y matrices. I would guess that would give you a much more reasonable line.
x = x.';
y = y.';
then try
p = polyfit(x,y,n)
I hope this helps. A wise man once told me (and as I learn every day), don't trust an algorithm you do not understand!
Here's some test code that may help someone else dealing with linear regression and least squares
%https://youtu.be/m8FDX1nALSE matlab code
%https://youtu.be/1C3olrs1CUw good video to work out by hand if you want to test
function [a0 a1] = rtlinreg(x,y)
x=x(:);
y=y(:);
n=length(x);
a1 = (n*sum(x.*y) - sum(x)*sum(y))/(n*sum(x.^2) - (sum(x))^2); %a1 this is the slope of linear model
a0 = mean(y) - a1*mean(x); %a0 is the y-intercept
end
x=[65,65,62,67,69,65,61,67]'
y=[105,125,110,120,140,135,95,130]'
[a0 a1] = rtlinreg(x,y); %a1 is the slope of linear model, a0 is the y-intercept
x_model =min(x):.001:max(x);
y_model = a0 + a1.*x_model; %y=-186.47 +4.70x
plot(x,y,'x',x_model,y_model)
I want to simulate the movements of 3 robots/agents in space, and I would like to generate 3 different trajectories with one constraint: at a certain time T all the trajectories must have the same tangent.
I want something like in the following picture:
I need to do it through MATLAB and/or SIMULINK.
Thanks a lot for your help.
I do not know if this is enough for what I need, but I think I figured out something.
What I did is fit a polynomial to some points while constraining the derivative of the polynomial at a certain point to be equal to 0.
I used the following function:
http://www.mathworks.com/matlabcentral/fileexchange/54207-polyfix-x-y-n-xfix-yfix-xder-dydx-
It is pretty easy, but it saved me some work.
And, if you try the following:
% points you want your functions to pass through
p1 = [1 1];
p2 = [1 3];
degreePoly = 3; % degree of the fit (matches the 4-coefficient p vectors shown below)
% First function
x1 = linspace(0,4);
y1 = linspace(0,4);
p = polyfix(x1,y1,degreePoly,p1(1),p1(2),[1],[0]);
% p = [-0.0767 0.8290 -1.4277 1.6755];
figure
plot(x1,polyval(p,x1))
xlim([0 3])
ylim([0 3])
grid on
hold on
% Second function
x2 = linspace(0,4);
y2 = linspace(0,4);
p = polyfix(x2,y2,degreePoly,[1],[3],[1],[0])
% p = [0.4984 -2.7132 3.9312 1.2836];
plot(x2,polyval(p,x2))
xlim([0 3])
ylim([0 3])
grid on
If you don't have the polyfix function and you don't want to download it, you can try the same code after commenting out the polyfix lines:
p = polyfix(x1,y1,degreePoly,p1(1),p1(2),[1],[0]);
p = polyfix(x2,y2,degreePoly,p2(1),p2(2),[1],[0]);
And uncommenting the lines:
% p = [-0.0767 0.8290 -1.4277 1.6755];
% p = [0.4984 -2.7132 3.9312 1.2836];
You will get this:
Now I will use this polynomial as the position (x,y) of my robots over time, and I think I should be done. The x of the polynomial will also act as the time; this way I am sure that the robots will arrive at the zero-derivative point at the same time.
What do you think? Does it make sense?
Thanks again.
To have the same tangents at all times, the curves (or trajectories) need to be parallel at all times. Generate a trajectory for one object and keep the other objects at a fixed distance from it, following that master trajectory.
In the image provided, if we look at the time just after the three dots, the trajectories will not be the same (or parallel), since the objects move in different directions. So I guess you have to keep them parallel at all times.
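For what it's worth, here is a minimal sketch of that idea (the master curve and the offset distances d are just placeholder choices): generate one trajectory, compute its unit normal, and shift the other robots along that normal by a fixed distance so all three curves stay parallel.
t  = linspace(0, 2*pi, 200);       % time
x  = t;                            % master trajectory
y  = sin(t);
dx = gradient(x, t);               % tangent components along the master curve
dy = gradient(y, t);
L  = hypot(dx, dy);
nx = -dy ./ L;                     % unit normal to the master curve
ny =  dx ./ L;
d  = [0 0.3 0.6];                  % fixed offsets for the three robots
figure, hold on
for k = 1:numel(d)
    plot(x + d(k)*nx, y + d(k)*ny) % parallel (offset) trajectories
end
axis equal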
I'm trying to make a density plot of the zeros of random monic quadratic polynomials in the complex plane. In fact, I am plotting! But there is one little detail bugging me: the values on the axes do not match the points in the plot. Here is my code.
n=2;
p = [1 random('Uniform', -1, 1, [1,n])]
roots(p)
z = zeros(0);
n = 2;
for j=1:10000
p = [1 random('Uniform', -1, 1, [1,n])];
R = roots(p);
z = [ z, R.' ];
end
Re = real(z);
Im = imag(z);
[values, centers] = hist3([Im(:) Re(:)],[1000 1000]);
imagesc(centers{:}, values,[0,10]);
colorbar
axis equal
axis xy
cmap = summer(max(values(:)));
cmap(1:1,:) = 0;
colormap(cmap);
Now here is the plot generated by this code.
You can try this code and check max(Re) and max(Im), which correspond to the maximum values of the real part (x axis) and imaginary part (y axis). I get max(Re) = 1.6076 (always something close to 1.5) and max(Im) = 0.9993 (always something close to 1). These values don't match the plot, where the two axes appear to be swapped.
If I try the scatter function (losing density and all the nice visual), I have the right values. The following command generates the figure below.
scatter(Re(1,:), Im(1,:),'.')
This clearly shows that the first plot is actually correct (not rotated, as I first thought), except for the axes values. I need help to fix this. Thanks.
PS: I got the commands to make this plot from the answer here. Note the comments there. I explicitly asked for a solution to this problem and got one. The given solution actually worked in some cases but failed in this one; I don't know why.
I believe what you are looking for should be:
imagesc(centers{[2,1]}, values,[0,10]);
Incidentally, the reason you did not spot the problem in the other post is that the sample images all happen to be more or less square.
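To see why the swap is needed (a short sketch using the variables from the question): hist3 was called with [Im(:) Re(:)], so the rows of values correspond to the imaginary part and the columns to the real part, while imagesc(x,y,C) expects the x-axis (column) centers first.
[values, centers] = hist3([Im(:) Re(:)],[1000 1000]); % rows = Im bins, columns = Re bins
imagesc(centers{[2,1]}, values, [0,10]);              % x axis = Re centers, y axis = Im centers
axis xy                                               % imaginary axis increasing upward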