I'm embarrassed to ask this as I believe I may be missing something obvious, but I just can't see where I am going wrong. As part of a larger program I am investigating the application of discretisation methods to approximate the convection equation on a square wave. However, I have noticed that in some cases the bounds of my square wave are applied incorrectly. It should have an initial condition of 1 where 10.25<=x<=10.5 and 0 everywhere else. This is an example of the issue:
L=20; %domain length
dx=0.01; %space step
nx=(L/dx); %number of steps in space
x=0:dx:(nx-1)*dx; %steps along x
sqr=zeros(1,nx); %pre-allocate array space
for j=1:nx
if ((10.25<=x(j))&&(x(j)<=10.5))
sqr(j)=1;
else
sqr(j)=0;
end
d=plot(x,sqr,'r','LineWidth',2); axis([10.1 10.6 0 1.1]);
drawnow;
In this case, the wave displays incorrectly, with x=10.5 taking a value of 0 and not 1, like so:
https://dl.dropboxusercontent.com/u/8037738/project/square_wave.png
The weird thing is that if I change the domain length to a different value it sometimes displays correctly. This is when the domain length is set to 30 and it displays correctly:
https://dl.dropboxusercontent.com/u/8037738/project/square_wave_correct.png
I really don't understand because my x array is always discretised in 0.01 intervals, so it will never 'miss' 10.5 when it loops through. I hope I have explained this problem adequately enough and if it is a stupid mistake on my part I apologise in advance.
You forgot the end that closes the for loop, after you finish with the if:
L=20; %domain length
dx=0.01; %space step
nx=(L/dx); %number of steps in space
x=0:dx:(nx-1)*dx; %steps along x
sqr=zeros(1,nx); %pre-allocate array space
eps = 1.e-8;
for j=1:nx
if ((10.25-eps<=x(j))&&(x(j)<=10.5+eps))
sqr(j)=1;
else
sqr(j)=0;
end
end
d=plot(x,sqr,'r','LineWidth',2); axis([10.1 10.6 0 1.1]);
drawnow;
I also recommend using a small tolerance when comparing floating-point values, as in the x(j)<=10.5 test.
If you look at the difference at that point, MATLAB tells you
x(1051)-10.5
ans =
1.776356839400250e-15
so because of round-off the value of x(j) was slightly larger than 10.5, not equal as you were expecting.
Your issue is that the x values do not fall exactly on the space step. For example, if you look at the following:
j=1051;
format long;
x(j)
ans =
10.500000000000002
The answer from sebas fixes this issue by adding a tolerance to each of the bounds. You could also try rounding x(j) to the nearest hundredth before performing the logic in the if statement:
roundn(x(j), -2)
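Here is a minimal sketch of that rounding approach, written with round(x(j)*100)/100 so it runs on any release (roundn needs the Mapping Toolbox, and the two-argument round(x(j),2) only exists on fairly recent releases -- treat those availability notes as my assumptions):
for j=1:nx
    xr = round(x(j)*100)/100;   % snap x(j) to the nearest 0.01 before comparing
    if ((10.25<=xr)&&(xr<=10.5))
        sqr(j)=1;
    else
        sqr(j)=0;
    end
end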
I am writing a report for a class and am having some issues with the lines of an unstable plot going beyond the boundary of the graph and overlapping the title and xlabel. This is despite specifying a ylim from -2 to 2. Is there a good way to solve this issue?
Thanks!
plot(X,u(:,v0),X,u(:,v1),X,u(:,v2),X,u(:,v3),X,u(:,v4))
titlestr = sprintf('Velocity vs. Distance of %s function using %s: C=%g, imax=%g, dx=%gm, dt=%gsec',ICFType,SDType,C,imax,dx,dt);
ttl=title(titlestr);
ylabl=ylabel("u (m/s)");
xlabl=xlabel("x (m)");
ylim([-2 2])
lgnd=legend('t=0','t=1','t=2','t=3','t=4');
ttl.FontSize=18;
ylabl.FontSize=18;
xlabl.FontSize=18;
lgnd.FontSize=18;
EDIT: Minimum reproducible example
mgc=randi([-900*10^10,900*10^10], [1000,2]);
mgc=mgc*1000000;
plot(mgc(:,1),mgc(:,2))
ylim([-1,1])
This is odd. It really looks like a bug... partly.
The reason is probably that the angles of the lines are so narrow that MATLAB runs into rounding errors when it calculates the points to draw: you give it very small limits on very large numbers. You can see that you don't run into this problem when you don't scale the matrix mgc:
mgc = randi([-900*10^10,900*10^10], [1000,2]);
plot(mgc(:,1),mgc(:,2))
ylim([-1,1])
but if you scale it further, you run into this problem...
mgc = randi([-900*10^10,900*10^10], [1000,2]);
plot(mgc(:,1)*1e6,mgc(:,2)*1e6)
ylim([-1,1])
While those numbers are nowhere near the maximum number a double can represent (type realmax in the command window to see that it is a number with over 300 digits!), limiting the plot to [-1,1] on one of the axes -- note that you get the same phenomenon on the x-axis -- makes MATLAB run into precision problems.
First of all, you see that it plots far fewer lines than before (in my case)... even though I only asked it to zoom on the y-axis. The thing is that MATLAB does not recalculate the lines for that section but really zooms into it (I guess this may cause resolution errors with regard to pixels?).
Well, let's have a look at the data. (Pro tip: you can get the data of a line from a MATLAB figure by calling this snippet:
datObj = findobj(gcf,'-property','YData','-property','XData');
X = datObj.XData;
Y = datObj.YData;
xlm = get(gca,'XLim'); % get the current x-limits
) We see that it represents the original data set, which is not surprising as you can also zoom out again.
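A quick hedged check that the retrieved line data really is the full original data set and not a pre-clipped subset:
numel(X)          % 1000: the line object still holds every original point
[min(Y) max(Y)]   % spans the full (huge) data range, not just the visible [-1,1] band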
Note that this only occurs if you have such a chaotic, jagged line. If you sort the data, it does not happen.
quick fix:
Now, what happens, if we calculate the exact points for this section?
m = diff(Y)./diff(X);            % slope of each line segment
n = Y(1:end-1)-m.*X(1:end-1);    % offset of each line segment
xMinus1 = (-1-n)./m;             % x-coordinate where each segment crosses y = -1
xPlus1  = ( 1-n)./m;             % x-coordinate where each segment crosses y = +1
% plot each segment between its crossings of y = -1 and y = +1
plot([xMinus1;xPlus1],(ones(length(xMinus1),2).*[-1 1]).')
xlim(xlm); % limit to exact same scale as before
The different colors indicate that they are now individual lines and not a single wild chaos;)
It seems Max pretty much hit the nail on the head as to why this error occurs. Per Enrico's advice I went ahead and submitted a bug report. MathWorks responded saying they were unsure it was "unexpected behavior" and would look into it more shortly. They also suggested a temporary workaround (which, in my case, may end up being permanent).
This workaround is to put
set(gca,'ClippingStyle','rectangle');
directly after the plotting line.
Below is the minimum reproducible example with this workaround applied.
mgc=randi([-900*10^10,900*10^10], [1000,2]);
mgc=mgc*1000000;
plot(mgc(:,1),mgc(:,2))
set(gca,'ClippingStyle','rectangle');
ylim([-1,1])
I'm implementing the k-means algorithm in MATLAB without using the built-in kmeans function. The stopping criterion is that the new centroids don't change between iterations, but I cannot implement it in MATLAB. Can anybody help?
Thanks
Setting "no change" as a stopping criterion is a bad idea. There are a few main reasons you shouldn't use a zero-change condition:
Even for a well-behaved function, the difference between zero change and a very small change (say 1e-5) could be 1000+ iterations, so you are wasting time trying to get the centroids exactly the same. Computers usually keep far more digits than we are interested in; if you only need 1 digit of accuracy, why wait for the computer to find an answer to within 1e-31?
Computers have floating-point errors everywhere. Try some easily reversible matrix operations, like a = rand(3,3); b = a*a*inv(a); a-b. Theoretically this should be 0, but you will see it isn't. These errors alone could prevent your program from ever stopping.
Dithering. Let's say we have a 1-D k-means problem with 3 numbers that we want to split into 2 groups. One iteration the grouping can be a,b vs c; the next iteration it could be a vs b,c; the next a,b vs c again, and so on. This is of course a simplified example, but there can be instances where a few data points dither between clusters, and you end up with a never-ending algorithm. Since those few points keep being reassigned, the change will never be 0.
The solution is to use a delta threshold: subtract the current values from the previous ones, and if the difference is less than a threshold, you are done. This on its own is powerful, but as with any loop you need a backup escape plan, and that is a max_iterations variable. Look at MATLAB's documentation for kmeans: even it has a MaxIter option (default 100), so even if your k-means doesn't converge, at least it won't run endlessly. Something like this might work:
%problem specific
max_iter = 100;
%choose a small number appropriate to your problem
thresh = 1e-3;
%ensures it runs the first time
delta_mu = thresh + 1;
num_iter = 0;
%do your kmeans in the loop
while (delta_mu > thresh && num_iter < max_iter)
%save these right away (curr_mu must be initialized to your starting centroids before the loop)
old_mu = curr_mu;
%calculate new means and variances, this is the standard kmeans iteration
%then store the values in a variable called curr_mu
curr_mu = newly_calculate_values;
%use the two norm to find the delta as a single number. no matter what
%the original dimensionality of mu was. If old_mu -new_mu was
% 0 the norm is still 0. so it behaves well as a distance measure.
delta_mu = norm(old_mu - curr_mu,2);
num_iter = num_iter + 1;
end
Edit: in case you don't know, the 2-norm is essentially the Euclidean distance.
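Two quick sketches you can try in the command window, one showing the round-off from point 2 above and one showing the 2-norm as a distance:
% point 2: "reversible" matrix operations are not exactly reversible in floating point
a = rand(3,3);
b = a*a*inv(a);           % mathematically b == a
max(abs(a(:) - b(:)))     % nonzero, on the order of 1e-16
% the 2-norm as a distance measure
norm([3 4], 2)            % 5, the Euclidean length of [3 4]
norm([1 2; 3 4] - [1 2; 3 4], 2)   % 0 when old and new centroids are identical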
The Problem:
I have a problem in Matlab with a part of a function that calculates the value of
atan(c*tan(x)).
For some real c and x, where x is an angle. Usually this returns values between -pi/2 and pi/2, but I want it to keep track of "how often the angle has gone around": for example, with c=1 we get the same result for x=pi/2 and x=3*pi/2.
This is what I want to avoid: in this case I would want it to return the value 3*pi/2 for x=3*pi/2 (I know that for c=1 atan and tan would cancel out, but for arbitrary real c they do not).
In other words, I want to make atan(c*tan(x)) continuous on R in x.
How I tried to fix it:
I simply added a function that keeps an eye on the argument of the tangent:
atan(c*tan(x))+pi.*(floor((x./pi)+(1/2)))
This works for values that are not too near the poles of the tangent function, but once x lies very near a pole it breaks down (jumps of height pi are observed). This is especially a problem for my calculations, as x is near one of the poles practically all the time.
What still seems to be the problem:
IMHO the problem is that the tangent has a "much higher resolution" near its poles than the floor function, which makes the correction jump too early or too late.
My question:
Is there another way to fix this problem that is not sensitive to x lying near the poles of the tangent?
There are multiple possible solutions for this problem. A simple one would be to use a special case for x near the poles pi/2 + k*pi, letting atan(c*tan(x)) = +-pi/2 depending on whether x is slightly smaller or slightly larger than the pole, respectively.
function Out = ContTangents(In,RidgeParam)
InOverPi = In./pi;
Rounded = round(InOverPi);                                    % nearest integer to In/pi
CloseRounded = round(InOverPi+1/2)-1/2;                       % nearest half-integer to In/pi, i.e. the nearest pole of tan (in units of pi)
ArctanResult = zeros(size(In));
IsCloseVal = abs(InOverPi - CloseRounded) < 0.00001;          % inputs lying very near a pole
CloseVal = sign(RidgeParam*(CloseRounded-InOverPi)) * pi/2;   % limiting value +-pi/2 used at the poles
NotCloseVal = atan(RidgeParam*tan(In));                       % ordinary case away from the poles
ArctanResult(IsCloseVal) = CloseVal(IsCloseVal);
ArctanResult(~IsCloseVal) = NotCloseVal(~IsCloseVal);
Out = ArctanResult + pi*(Rounded + 1/2);                      % add the branch offset so the result stays continuous
This solution appears to produce nice-looking curves, and as far as I can see it avoids discontinuities and singularities. At least, it seems nice when I run
figure, plot(ContTangents(-10:.001*pi:10,2))
Edit: Some parts of the function are modified. When running ContTangents(pi/2-exp(-36),2) (the problem value in the OP's example below), I found this very peculiar behavior. Is this a bug in Matlab?
K>> InOverPi<0.5
ans =
1
K>> round(InOverPi)
ans =
1
This could be worked around by redefining your own round function which doesn't fail at the point .5-eps(.25). In fact, due to the entertaining behavior of floating point at the limit, you can't even define that number as such; you have to proceed as follows.
format hex
(pi/2-exp(-36))/pi
ans =
3fdfffffffffffff
-eps(.25)+.25+.25
ans =
3fdfffffffffffff
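A minimal sketch of such a replacement (myRound is a hypothetical name, not from the original answer; it rounds halfway cases toward +inf, which is enough for this use and avoids the problem at .5-eps(.25)):
% hypothetical replacement for round(): no intermediate v+0.5 addition, so .5-eps(.25) rounds down to 0
myRound = @(v) floor(v) + (v - floor(v) >= 0.5);
myRound(-eps(.25)+.25+.25)   % 0, as expected
myRound(0.5)                 % 1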
I have a matrix of numbers for one of the variables in an fsolve equation, so when I run MATLAB I am hoping to get back a matrix, but instead I get a scalar. I even tried a for loop, but this gave me an error about size, so that is not the solution. I am including the code to get some feedback as to what I am doing wrong.
z=0.1;
bubba =[1 1.5 2];
bubba = bubba';
joe = 0:0.1:1.5;
joe = repmat(joe,3,1);
bubba = repmat(bubba,1,length(joe));
for x=1:1:16
eqn0 = @(psi0) (joe - bubba.*(sqrt((psi0+z))));
result0(x) = fsolve(eqn0,0.1,options);
end
Note: I need the joe variable later for plotting, so I clipped that part of the code.
Based on your earlier comments, let me take a shot at a solution... still not sure this is what you want:
z = 0.1;      % as in the question
bubba = [1 1.5 2];
joe = 0:0.1:1.5;
for xi = 1:numel(joe)
    for xj = 1:numel(bubba)
        eqn0 = @(psi0) (joe(xi) - bubba(xj).*sqrt(psi0 + z));
        result(xi,xj) = fsolve(eqn0, 0.1, options);   % options as defined by the OP
    end
end
It is pedestrian, but is it what you want? I can't access MATLAB right now; otherwise I might come up with something more efficient.
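As an aside, a sketch: for this particular equation you don't strictly need fsolve at all, since joe(xi) - bubba(xj)*sqrt(psi0 + z) = 0 rearranges to psi0 = (joe(xi)/bubba(xj))^2 - z, which vectorises directly (assuming that rearranged form really is the equation you want solved):
z = 0.1;
bubba = [1 1.5 2];
joe = 0:0.1:1.5;
% closed-form root of joe - bubba*sqrt(psi0 + z) = 0 for every (bubba, joe) pair
[J, B] = meshgrid(joe, bubba);   % 3-by-16 grids of all combinations
result = (J./B).^2 - z;          % psi0 for each pair, no solver needed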
To elaborate on my comment:
psi0 is the independent variable in your solver. You set its dimension to [1 1] when you use a scalar as the second argument of fsolve(eqn0, 0.1, options); this tells MATLAB to optimize the scalar psi0, starting at a value of 0.1. The result will be a scalar: the value that minimizes the function
0.1 * sqrt(psi0 + 0.1)
since you had set z=0.1
You should get a value of -0.1 returned for every iteration of your loop, since nothing changes between iterations. There is not enough information right now to figure out which factor you would like to be a matrix; especially since your expression for eqn0 involves matrix-valued terms, it's hard to know what you expect the dimensionality of the result to be.
I hope that you will use this initial answer as a springboard to modify your question so it can be answered properly!?
I am calculating the overlap of two bivariate normal distributions using the following function:
function [ oa ] = bivariate_overlap_integral(mu_x1,mu_y1,mu_x2,mu_y2)
%calculating pdf. Using x as vector because of MATLAB requirements for integration
bpdf_vec1=@(x,y,mu_x,mu_y)(exp(-((x-mu_x).^2)./2.-((y-mu_y)^2)/2)./(2*pi));
%calculating overlap of the two distributions at the point x,y
overlap_point = @(x,y) min(bpdf_vec1(x,y,mu_x1,mu_y1),bpdf_vec1(x,y,mu_x2,mu_y2));
%calculating overall overlap area
oa=dblquad(overlap_point,-100,100,-100,100);
You can see that this involves taking a double integral (x: -100 to 100, y: -100 to 100; ideally -inf to inf, but this suffices for the moment) of the function overlap_point, which is the minimum of the two PDFs given by bpdf_vec1 for the two distributions at the point (x, y).
Now, the PDF is never 0, so I would expect that the larger the area of integration, the larger the end result should become, obviously with negligible differences after a certain point. However, it appears that sometimes, when I decrease the size of the interval, the result grows. For instance:
>> mu_x1=0;mu_y1=0;mu_x2=5;mu_y2=0;
>> bpdf_vec1=@(x,y,mu_x,mu_y)(exp(-((x-mu_x).^2)./2.-((y-mu_y)^2)/2)./(2*pi));
>> overlap_point = @(x,y) min(bpdf_vec1(x,y,mu_x1,mu_y1),bpdf_vec1(x,y,mu_x2,mu_y2));
>> dblquad(overlap_point,-10,10,-10,10)
ans =
0.0124
>> dblquad(overlap_point,-100,100,-100,100)
ans =
1.4976e-005 -----> strange, as theoretically it cannot be smaller than the first answer
>> dblquad(overlap_point,-3,3,-3,3)
ans =
0.0110 -----> makes sense that the result is less than the first answer as the
interval is decreased
Here we can check that the overlap is (close to) 0 at the border points of the interval.
>> overlap_point (100,100)
ans =
0
>> overlap_point (-100,100)
ans =
0
>> overlap_point (-100,-100)
ans =
0
>> overlap_point (100,-100)
ans =
0
Does this perhaps have to do with the implementation of dblquad, or am I making a mistake somewhere? I use MATLAB R2011a.
Thanks
CONGRATULATIONS! You win the award for being the 12 millionth person to ask essentially this question. :) What I'm trying to say is this is an issue that everyone stumbles over at first. Honestly, this question gets asked over and over again, so really, this question should be marked as a dup.
What happens with these things is that a bivariate normal is essentially a delta function when viewed from far enough away. And you don't really need to spread that region out too far, since the normal density drops off fast. It is essentially zero over most of the domain you are trying to integrate over, at least to within the tolerances employed.
So if the quadrature happens to hit some sample points near the areas where there is mass, you may get a realistic estimate of your integral. But if all the tool sees are numbers that are essentially zero over the entire domain, it concludes the integral is zero. Remember, adaptive integration tools are NOT omniscient. They know nothing about your function; it is a black box to them. These are NOT symbolic tools.
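A quick hedged way to see how little of the domain actually carries any mass, using the overlap_point handle from the question and an arbitrary 201-by-201 grid:
[xx, yy] = meshgrid(linspace(-100, 100, 201));
vals = arrayfun(overlap_point, xx, yy);   % evaluate the overlap on a coarse grid
nnz(vals > 1e-12) / numel(vals)           % fraction of grid points carrying non-negligible mass: tiny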
BTW, this is NOT something I'd expect to see consistently different for Octave versus MATLAB. It is only an issue of the adaptive integrator, and where it chooses to set its sample points down.
OK, here are my octave results.
>format long
>z10 = dblquad(overlap_point,-10,10,-10,10)
z10 = 0.0124193303284589
>z100 = dblquad(overlap_point,-100,100,-100,100)
z100 = 0.0124193303245106
>z100 - z10
ans = -3.94835379669001e-012
>z10a = dblquad(overlap_point,-10,10,-10,10,1e-8)
z10a = 0.0124193306514996
>z100a = dblquad(overlap_point,-100,100,-100,100,1e-8)
z100a = 0.0124193306527155
>z100a-z10a
ans = 1.21588676627038e-012
BTW. I've noticed this type of problem before with numerical solutions. Sometimes you make a change that you expect will improve the accuracy of the result (in this case by making your limits closer to the ideal case of the full plane), but instead you get the opposite effect and the result becomes less accurate. What's happening in this case is that by going out "wider", to -100..100, you're shifting the focus away from where the really important action is happening in your function, which is close to the origin. At some point the implementation of dblquad that you're using must start increasing the intersample distance as you increase the limits, and then it starts missing some important stuff close to the origin.
Maybe someone running a later version of MATLAB can check this out and see if it's been improved.
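For what it's worth, a sketch for newer releases: integral2 (available in newer versions, I believe since R2012a) replaces dblquad and lets you set both tolerances explicitly. Note that integral2 evaluates the integrand on matrices of x and y, so the (y-mu_y)^2 in bpdf_vec1 has to become element-wise (.^2):
% vectorised pdf and overlap handles (element-wise powers in both x and y)
bpdf_vec1 = @(x,y,mu_x,mu_y) exp(-((x-mu_x).^2)./2 - ((y-mu_y).^2)./2)./(2*pi);
overlap_point = @(x,y) min(bpdf_vec1(x,y,mu_x1,mu_y1), bpdf_vec1(x,y,mu_x2,mu_y2));
% integral2 with explicit tolerances instead of dblquad's defaults
oa = integral2(overlap_point, -100, 100, -100, 100, 'AbsTol',1e-10, 'RelTol',1e-8);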