Plotting piecewise functions in Maple

How would I plot this function in maple?
f(x)= 1 if x is rational, 0 otherwise.
Also, I want this to be within the interval 0 <= x <= 1.

That function is not piecewise, and it cannot be plotted by any software; it's theoretically impossible. The best you could do is plot a line segment from (0,1) to (1,1) to represent the rationals and another from (0,0) to (1,0) to represent the irrationals.

This is an old question now but is a good place to clarify just what a computer program could mean by "rational" and "irrational".
As a first attempt you could try to define your desired function this way:
f1 := x -> `if`(x::rational, 1, 0):
A few test cases seem to be giving us what we want:
> f1(3), f1(3/2), f1(Pi), f1(sqrt(2));
1, 1, 0, 0
However we then run into this:
> f1(1.5);
0
What gives? Since f1(3/2) = 1, we might expect f1(1.5) to be the same. The explanation is that the check x::rational tests whether the input x is of the Maple type rational, meaning an integer or a fraction. A Maple fraction is an ordered pair of integers (numerator and denominator), which is structurally different from a floating-point number.
A broader interpretation of the mathematical meaning of 'rational' would include the floating-point numbers. So we can broaden that definition and write:
f2 := x -> `if`(x::{rational,float}, 1, 0):
and then we have the desired f2(1.5)=1.
But both of these are useless for plotting. When Maple plots something, it generates a set of sample points from the specified interval, all of which will be floating-point numbers. Of our previously-defined functions, f1 will then return zero for all these points, while f2 will return 1.
You won't do any better with other software. If you could take a truly uniform sample of n points from a real interval, the resulting points would (with probability 1) all be irrational, in fact transcendental. Almost all such numbers cannot be represented on a computer because they admit no compact representation, so any software that attempts such sampling will simply return a collection of n results with terminating decimal expansions, all of them rational.
As Carl suggested, you can generate something resembling the plot you want with
> plot([0,1]);

Related

Maple unable to evaluate function in whole range of plot

Maple can helpfully work out the solution to Laplace's equation in a square region and give me the answer in closed form (as an infinite sum). If I try to plot this function of two variables as a 3d plot, it gives me most of the surface but not all of it.
Here is the Maple code which produces the solution and turns it into an expression suitable for plotting
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
the result of which is the solution expressed as an infinite sum. All good so far. Plotting it via
plot3d(vxy,x=0..1,y=0..1);
gives a result which is fine for x over the full range (0<x<1), but only for y between 0 and around 0.9.
I have tried to evalf at some points in the missing region and Maple can't give me numerical values there. Is there any way to get Maple to "try a bit harder" to evaluate those numbers?
You could try truncating the sum to a fixed number of terms.
Compare
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=subs(infinity=100,sol1);
plot3d(rhs(vxy),x=0..1,y=0..1);
With
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
plot3d(vxy,x=0..1,y=0..1);
I'm not a huge fan of chopping the infinite sum at some finite upper bound for n without at least demonstrating, either symbolically or numerically, that it is justified, i.e., that the chopping does not give a false impression of convergence.
So, you asked how to make it work "harder". I'll take that to mean that you too might prefer to let evalf/Sum itself decide whether each infinite numeric sum converges, rather than manually truncating it at some finite upper bound for n.
For fun, and caution, I also divide both numerator and denominator of K by the exp call, which can be much larger than 1. That may not be necessary here.
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0:
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100:
sol1:=pdsolve({lapeq,bcs}):
vxy:=eval(v(x,y),sol1):
K:=op(1,vxy):
J:=simplify(combine(numer(K)/exp(2*Pi*n)))
/simplify(combine(denom(K)/exp(2*Pi*n))):
F:=subs(__d=J,
        proc(x,y) local k, m, n, r;
          if y<0.8 then
            r:=Sum(__d,n=1..infinity);
          else
            UseHardwareFloats:=false;
            m := ceil(1*abs(y/0.80)^16);
            r:=add(Sum(eval(__d,n=m*n-k),n=1..infinity),
                   k=0..m-1);
          end if;
          evalf(r);
        end proc):
plot3d( F, 0..1, 0..0.99 );
Naturally this is slower than mere chopping of terms to obtain a finite sum. And you might be satisfied with some technique that establishes that the excluded terms' sums are negligible.

Does the rand function ever produce values of 0 or 1 in MATLAB/Octave?

I'm looking for a function that will generate random values between 0 and 1, inclusive. I have generated 120,000 random values using the rand() function in Octave, but have never once got 0 or 1 as output. Does rand() ever produce such values? If not, is there another function I can use to achieve the desired result?
If you read the documentation of rand in both Octave and MATLAB, it says the values lie on the open interval (0,1), so no, it shouldn't generate the numbers 0 or 1.
However, you can instead generate a set of random integers and then normalize them so that they lie in [0,1]. Use something like randi (MATLAB docs, Octave docs), which generates integer values from 1 up to a given maximum. Define that maximum number, then subtract 1 and divide by the maximum minus 1 to get values in [0,1] inclusive:
max_num = 10000; %// Define maximum number
N = 1000; %// Define size of vector
out = (randi(max_num, N, 1) - 1) / (max_num - 1); %// Output
If you want this to act more like rand but including 0 and 1, make the max_num variable quite large.
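As a quick sanity check, here is a sketch with a deliberately small max_num so that both endpoints show up in a modest sample:
max_num = 10;                                       %// deliberately small so 0 and 1 both appear
out = (randi(max_num, 1e5, 1) - 1) / (max_num - 1); %// same scaling as above
[min(out), max(out)]                                %// expected: 0 1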
Mathematically, if you sample from a (continuous) uniform distribution on the closed interval [0,1], the values 0 and 1 (or any particular value, in fact) occur with probability strictly zero.
Programmatically,
If you have a random generator that produces values of type double on the closed interval [0 1], the probability of getting the value 0, or 1, is not zero, but it's so small it can be neglected.
If the random generator produces values from the open interval (0, 1), the probability of getting a value 0, or 1, is strictly zero.
So the probability is either strictly zero or so small it can be neglected. Therefore, you shouldn't worry about it: in either case the probability is zero for practical purposes. Even if rand were of the first kind above, and thus could produce 0 and 1, it would produce them with probability so small that you would "never" see those values.
Does that sound strange? Well, the same happens with any number. You "never" see rand output exactly 1/4, either. There are so many possible outputs, all of them equally likely, that the probability of any given output is virtually zero.
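A quick numeric illustration (just a sketch): even a value that rand can represent exactly, such as 0.25, essentially never shows up in a large sample.
nnz(rand(1e7, 1) == 0.25)   % count of exact matches; expected: 0 in practice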
rand produces numbers from the open interval (0,1), which does not include 0 or 1, so you should never get those values. This was more clearly documented in previous versions, but it's still stated in the help text for rand (type help rand rather than doc rand).
However, since it produces doubles, there are only a finite number of values that it will actually produce. The precise set varies depending on the RNG algorithm used. For Mersenne twister, the default algorithm, the possible values are all multiples of 2^(-53), within the open interval (0,1). (See doc RandStream.list, and then "Choosing a Random Number Generator" for info on other generators).
Note that 2^(-53) is eps/2. Therefore, it's equivalent to drawing from the closed interval [2^(-53), 1-2^(-53)], or [eps/2, 1-eps/2].
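A quick way to check the multiples-of-2^(-53) claim empirically (just a sketch, assuming the default Mersenne twister generator):
r = rand(1e5, 1);
all(r == round(r * 2^53) * 2^(-53))   % every sample is an exact multiple of 2^(-53); expected: 1 (true)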
You can scale this interval to [0,1] by subtracting eps/2 and dividing by 1-eps. (Use format hex to display enough precision to check that at the bit level).
So x = (rand-eps/2)/(1-eps) should give you values on the closed interval [0,1].
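To see the endpoint mapping at the bit level, here is a sketch assuming the [eps/2, 1-eps/2] range described above:
format hex
lo = eps/2;  hi = 1 - eps/2;   % smallest and largest values rand can return
(lo - eps/2) / (1 - eps)       % 0000000000000000, i.e. exactly 0
(hi - eps/2) / (1 - eps)       % 3ff0000000000000, i.e. exactly 1
format short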
But I should give a word of caution: they've put a lot of effort into making sure that the output of rand gives each representable double within (0,1) an appropriate probability, and I don't think you're going to get the same nice properties on [0,1] if you apply the scaling I suggested. My knowledge of floating-point math and RNGs isn't up to explaining why, or what you might do about that.
I just tried this:
octave:1> max(rand(10000000,1))
ans = 1.00000
octave:2> min(rand(10000000,1))
ans = 3.3788e-08
It did not give me exactly 0, so watch out for floating-point operations.
Edit
Even though I said to watch out for floating-point operations, I fell for that myself. As @eigenchris pointed out:
format long g
octave:1> a=max(rand(1000000,1))
a = 0.999999711020176
It yields a floating-point number close to one, but not equal to it, as you can now see after changing the display precision, as @rayryeng suggested.
Although not a direct answer to the question here, I find it helpful to link to this SO post, Octave - random generate number, which has a one-liner to generate 1s and 0s using r = rand > 0.5.
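For completeness, a sketch of that one-liner and the roughly equal counts it produces:
r = rand(1e5, 1) > 0.5;      % logical vector of 0s and 1s
[sum(r == 0), sum(r == 1)]   % roughly 50000 each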

Variable precision arithmetic for symbolic integral in Matlab

I am trying to calculate some integrals whose integrands contain very large exponents. An example is:
(-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2 ...
./( exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)) ...
/ (2*sqrt(pi))
where p is a constant (1000 being a typical value), and I need the integral for x=[-inf,inf]. If I use the integral function for numeric integration I get NaN as a result. I can avoid that if I set the limits of the integration to something like [-20,20] and use a low p (<100), but ideally I need the full range.
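For reference, a minimal runnable sketch of the numeric attempt described above (the p values and limits are just illustrative):
p = 1000;
f = @(x) (-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2 ...
       ./( exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)) ...
       / (2*sqrt(pi));
integral(f, -Inf, Inf)   % NaN: away from x = +-sqrt(p) both exponentials underflow, so the integrand evaluates to 0/0
p = 50;                  % a low p with finite limits does work
f = @(x) (-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2 ...
       ./( exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)) ...
       / (2*sqrt(pi));
integral(f, -20, 20)     % finite result close to 1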
I have also tried setting syms x and using int and vpa, but in this case vpa returns:
1.0 - 1.0*numeric::int((1125899906842624*(exp(-(x - 10*10^(1/2))^2) - exp(-(x + 10*10^(1/2))^2))^2)/(3991211251234741*(exp(-(x - 10*10^(1/2))^2) + exp(-(x + 10*10^(1/2))^2)))
without calculating a value. Again, if I set the limits of the integration to lower values I do get a result (also for low p), but I know that result is wrong; e.g., for x=[-100,100] and p=1000 the result is >1, which must be wrong since the integral should be asymptotic to 1 (or, alternatively, its value should lie in [0,1)).
Am I doing something wrong with vpa or is there another way to calculate high precision values for my integrals?
First, you're doing something that makes solving symbolic problems more difficult and less accurate. The variable pi is a floating-point value, not an exact symbolic representation of the fundamental constant. In Matlab symbolic math code, you should always use sym('pi'). Do the same for any other special numeric values you use, e.g., sqrt(sym('2')) and exp(sym('1')), or they will get converted to an approximate rational fraction by default (the source of the strange large numbers you see in the code in your question). For further details, I recommend reading through the documentation for the sym function.
Applying the above, here's a runnable example:
syms x;
p = 1000;
f = (-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2./(exp(-(x+sqrt(p)).^2)...
+exp(-(x-sqrt(p)).^2))/(2*sqrt(sym('pi')));
Now vpa(int(f,x,-100,100)) and vpa(int(f,x,-1e3,1e3)) return exactly 1.0 (to 32 digits of precision, see below).
Unfortunately, vpa(int(f,x,-Inf,Inf)) does not return an answer, but a call to the underlying MuPAD function numeric::int. As I explain in this answer, this is what can happen when int cannot obtain a result. Normally, it should try to evaluate the integral numerically, but your function appears to be ill-defined at ±∞, resulting in divide-by-zero issues that the variable precision quadrature methods can't handle well. You can evaluate the integral at wider bounds by increasing the variable precision using the digits function (just remember to set digits back to the default of 32 when done). Setting digits(128) allowed me to evaluate vpa(int(f,x,-1e4,1e4)). You can also evaluate your integral over a wider range more efficiently via 2*vpa(int(f,x,0,1e4)) at lower effective digits settings.
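A sketch of that workaround, reusing f and x from the block above (the digits setting and bounds are just what is reported here, not a general recipe):
digits(128);                 % raise the variable-precision working accuracy
vpa(int(f, x, -1e4, 1e4))    % now evaluates instead of returning an unevaluated numeric::int call
2*vpa(int(f, x, 0, 1e4))     % cheaper alternative: exploit the symmetry about x = 0
digits(32);                  % restore the default precision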
If your goal is to see exactly how much less than one p = 1000 corresponds to, you can use something like vpa(1-2*int(f,x,0,1e4)). At digits(128), this returns
0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000086457415971094118490438229708839420392402555445545519907545198837816908450303280444030703989603548138797600750757834260181259102
Applying double to this shows that it is approximately 8.6e-89.

What is this code doing? Machine Learning

I'm just learning MATLAB and I have a snippet of code whose syntax I don't understand. Here x is an n x 1 vector.
Code is below
p = (min(x):(max(x)/300):max(x))';
The p vector is used a few lines later to plot the function
plot(p,pp*model,'r');
It generates an arithmetic progression.
An arithmetic progression is a sequence of numbers in which each term equals the previous term plus a constant, and that constant stays the same throughout the sequence.
In your code,
min(x) is the initial value of the sequence
max(x) / 300 is the increment amount
max(x) is the stopping criterion. When the next incremented value would exceed this stopping criterion, no more items are generated for the sequence.
I cannot comment on this particular choice of initial value and increment amount, without seeing the surrounding code where it was used.
However, from a naive perspective, MATLAB has a linspace command which does something similar, but not exactly the same.
It certainly looks to me like an odd thing to be doing. Basically, it's creating a vector of values p that ranges from the smallest to the largest value of x, which is fine, but the step between successive values is max(x)/300.
If min(x)=300 and max(x)=300.5 then this would only give 1 point for p.
On the other hand, if min(x)=-1000 and max(x)=0.3 then p would have thousands of elements.
In fact, it's even worse. If max(x) is negative, the step max(x)/300 is negative too, so the colon expression would have to step downward from min(x) even though max(x) lies above it; the result is an empty p, and the later plot call will fail.
I think p must be used to create pp or model somehow as well so that the plot works, and without knowing how, I can't suggest how to fix this, but I can't think of a good reason why it would be done like this. Using linspace(min(x),max(x),300), or setting the step to (max(x)-min(x))/299, would make more sense to me; see the sketch below.
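A small side-by-side sketch of the original expression versus the linspace suggestion, using a made-up x:
x = [2; 5; 7; 3; 9];                    % hypothetical n x 1 data
p1 = (min(x):(max(x)/300):max(x))';     % original: step of max(x)/300, element count depends on the data
p2 = linspace(min(x), max(x), 300)';    % always exactly 300 evenly spaced points from min(x) to max(x)
[numel(p1), numel(p2)]                  % e.g. 234 versus 300 for this particular x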
This code examines an array named x, and finds its minimum value min(x) and its maximum value max(x). It takes the maximum value and divides it by the constant 300.
It doesn't give max(x)/300 an explicit name, but for the sake of explanation, I'm naming it "incr", short for increment.
And, it creates a vector named p. When min(x) is 0, so that max(x) lands exactly on the grid, p looks like this:
p = [min(x), min(x) + incr, min(x) + 2*incr, ..., min(x) + 299*incr, max(x)];
In general the sequence stops at the largest value min(x) + k*incr that does not exceed max(x), so the last element is not necessarily max(x).

Why does the inverse equality not hold in MATLAB?

MATLAB does not seem to satisfy the matrix identity for inverses, that is,
(A*B*C)^(-1) = C^(-1) * B^(-1) * A^(-1)
in MATLAB,
if inv(A*B*C) == inv(C)*inv(B)*inv(A)
disp('satisfied')
end
The equality does not hold. When I switched to format long, I could see that the values differ in the last digits, but it still fails even when I use format rat.
Why is that so?
Very likely a floating point error. Note that the format function affects only how numbers display, not how MATLAB computes or saves them. So setting it to rat won't help the inaccuracy.
I haven't tested it, but you may try the Fractions Toolbox for exact rational number arithmetic, which should give exact equality for the expression above.
Consider this (MATLAB R2011a):
>> a = 1e10;
>> b = inv(a)*inv(a)
b =
1.0000e-020
>> c = inv(a*a)
c =
1.0000e-020
>> b==c
ans =
0
>> format hex
>> b
b =
3bc79ca10c924224
>> c
c =
3bc79ca10c924223
When MATLAB calculates the intermediate quantities inv(a) or a*a (whether a is a scalar or a matrix), by default it stores them as the closest double-precision floating-point number, which is not exact. So when these slightly inaccurate intermediate results are used in subsequent calculations, there will be round-off error.
Instead of comparing floating-point numbers for exact equality, as in inv(A*B*C) == inv(C)*inv(B)*inv(A), it's often better to compare the absolute difference to a threshold, such as abs(inv(A*B*C) - inv(C)*inv(B)*inv(A)) < thresh. Here thresh can be a suitably small number, or some expression involving eps, which gives the spacing between adjacent double-precision numbers at the magnitude you're working with.
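A sketch of such a tolerance-based check (the matrices and the threshold here are just illustrative):
A = rand(3); B = rand(3); C = rand(3);
lhs = inv(A*B*C);
rhs = inv(C)*inv(B)*inv(A);
thresh = 1e-10;                        % illustrative; in practice scale with eps and the matrix norms
if max(abs(lhs(:) - rhs(:))) < thresh
    disp('satisfied to within floating-point tolerance')
end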
The format command only controls the display of results at the command line, not the way in which results are internally stored. In particular, format rat does not make MATLAB do calculations symbolically. For this, you might take a look at the Symbolic Math Toolbox. format hex is often even more useful than format long for diagnosing floating point precision issues such as the one you've come across.