Suppose we want to calculate (a + b)^2 in two different ways, that is
(a + b) * (a + b)
a^2 + 2*a*b + b^2
Now, suppose a = 1.4 and b = -2.7. If we plug those two numbers into the formulas with format long, we obtain 1.690000000000001 in both cases. That is, if I run the following script:
a = 1.4;
b = -2.7;
format long
r = (a + b) * (a + b)
r2 = a^2 + 2*a*b + b^2
abs_diff = abs(r - r2)
I obtain
r = 1.690000000000001
r2 = 1.690000000000001
abs_diff = 6.661338147750939e-16
What's going on here? I could foresee getting different results for r and r2 (because MATLAB would be executing different floating-point operations), but not a nonzero absolute difference between two values that print identically.
I also noticed that the relative errors of r and r2 are different, that is, if I do
rel_err1 = abs(1.69 - r) / 1.69
rel_err2 = abs(1.69 - r2) / 1.69
I obtain
rel_err1 = 3.941620205769786e-16
rel_err2 = 7.883240411539573e-16
This only makes me think that r and r2 are not actually the same value. Is there a way to see them in full then, if they really are different? If not, what's happening?
Also, neither relative error is less than eps/2; does this mean that an overflow has happened? If so, where?
Note: this is a specific case. I understand that we're dealing with floating-point numbers and rounding errors, but I would like to understand them better by going through this example.
Don't rely on the output of format long to conclude that two numbers are equal...
a = 1.4;
b = -2.7;
r1 = (a + b) * (a + b);
r2 = a^2 + 2*a*b + b^2;
r3 = (a+b)^2;
Instead you can check their hex representation using:
>> num2hex([r1 r2 r3])
ans =
3ffb0a3d70a3d70d
3ffb0a3d70a3d710
3ffb0a3d70a3d70d
or the printf family of functions:
>> fprintf('%bx\n', r1, r2, r3)
3ffb0a3d70a3d70d
3ffb0a3d70a3d710
3ffb0a3d70a3d70d
or even:
>> format hex
>> disp([r1; r2; r3])
3ffb0a3d70a3d70d
3ffb0a3d70a3d710
3ffb0a3d70a3d70d
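You can also quantify the gap directly. A small follow-up sketch (using r1 and r2 from above) that measures the difference in units of the spacing between adjacent doubles (ulps) near that value:
ulp_gap = abs(r1 - r2) / eps(r1)   % equals 3: r1 and r2 are 3 representable doubles apart
which matches the hex dumps above, whose last digits differ by 3 (...70d vs ...710).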
Floating point arithmetic is not associative.
While mathematically these two are equal, they are not in floating point maths.
r = (a + b) * (a + b)
r2 = a^2 + 2*a*b + b^2
The order in which operations are executed in floating-point maths is very relevant. That is why, when you do floating-point maths, you need to be very careful about the order of your multiplications/divisions, especially when working with very big numbers together with really small numbers.
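As a minimal illustration of that order sensitivity (a toy example of mine, not the numbers from the question): adding a tiny value to a huge one loses it, so regrouping the same sum changes the result.
big   = 2^53;              % from 2^53 upwards, consecutive doubles are 2 apart
small = 1;
s1 = (big + small) - big   % 0: the 1 is lost when it is added to big first
s2 = (big - big) + small   % 1: grouping the big terms first preserves the 1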
Suppose that gcd(e, m) = g. Find an integer d such that (e * d) = g mod m
Where m and e are greater than or equal to 1.
The following problem seems to be solvable algebraically, but when I try it, the result I get is not always an integer. Sometimes the solution for d is an integer and sometimes it isn't. How can I approach this problem?
d can be computed with the extended Euclidean algorithm; see e.g. here:
https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
The a,b on that page are your e,m, and your d will be the x.
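If it helps, here is a minimal MATLAB sketch of the iterative extended Euclidean algorithm (the function name egcd and the variable names are mine, not from the question; it assumes integer inputs):
function [g, x, y] = egcd(a, b)
% Returns g = gcd(a, b) together with integers x, y such that a*x + b*y = g.
% For the question, [g, d] = egcd(e, m) gives a d with e*d = g (mod m).
    x0 = 1; y0 = 0;   % Bezout coefficients tracked for a
    x1 = 0; y1 = 1;   % Bezout coefficients tracked for b
    while b ~= 0
        q = floor(a / b);
        [a, b]   = deal(b, a - q*b);      % Euclidean division step
        [x0, x1] = deal(x1, x0 - q*x1);   % update the coefficients in lockstep
        [y0, y1] = deal(y1, y0 - q*y1);
    end
    g = a; x = x0; y = y0;
end
For example, [g, d] = egcd(3, 5) returns g = 1 and d = 2, and indeed 3*2 = 6, which is 1 mod 5.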
Perhaps you are assuming that both e and m are integers, but the problem allows them to be non-integers? There is only one case that gives an integer solution when both e and m are integers.
Why strictly integer output is not a reasonable outcome if e != m:
When you look at a fraction like 3/7 say, and refer to its denominator as the numerator's "divisor", this is a loose sense of the word from a classical math-y perspective. When you talk about the gcd (greatest common divisor), the "d" refers to an integer that divides the numerator (an integer) evenly, resulting in another integer: 4 is a divisor of 8, because 8/4 = 2 and 2 is an integer. A computer science or discrete mathematics perspective might frame a divisor as a number d that for a given number a gives 0 when we take a % d (a mod d for discrete math). Can you see that the absolute value of a divisor can't exceed the absolute value of the numerator? If it did, you would get pieces of pie, instead of whole pies - example:
4 % a = 0 for a in Z (Z being the set of integers) while |a| <= 4 (in math-y notation, that set is: {a ∈ Z : |a| <= 4}), but
4 % a != 0 for a in Z while |a| > 4 (math-y: {a ∈ Z : |a| > 4}),
because when we divide 4 by stuff bigger than it, like 5, we get fractions (i.e. |4/a| < 1 when |a| > 4). Don't worry too much about the absolute value stuff if it throws you off - it is there to account for working with negative numbers since they are integers as well.
So, even the "greatest" of divisors for any given integer will be smaller than the integer. Otherwise it's not a divisor (see above, or Wikipedia on divisors).
Look at gcd(e, m) = g:
By the definition of % (mod for math people), for any two numbers number1 and number2, number1 % number2 never makes number1 bigger: number1 % number2 <= number1.
So substitute: (e * d) = g % m --> (e * d) <= g
By the paragraphs above and definition of gcd being a divisor of both e and m: g <= e, m.
To make (e * d) <= g such that d and g are both integers, knowing that g <= e since g is a divisor of e, we have to make the left side smaller to match g. You can only make an integer smaller with multiplication if the other multiplicand is 0 or a fraction. The problem specifies that d is an integer, so we have one case that works - the d = 0 case - and infinitely many that give a contradiction - a contradiction with e, m, and d all being integers.
If e == m:
This is the d = 0 case:
If e == m, then gcd(e, m) = e = m - example: greatest common divisor of 3 and 3 is 3
Then (e * d) = g % m is (e * d) = m % m and m % m = 0 so (e * d) = 0 implying d = 0
How to code a function that will find d when either of e or m might be NON-integer:
A lot of divisor problems are done iteratively, like "find the gcd" or "find a prime number". That works in part because those problems deal strictly with integers, which you can step through one by one. With this problem, we need to allow e or m to be non-integer in order to have a solution for cases other than e = m. The non-integers are dense, however - there is no "next" candidate to step to - so an iterative search would never finish. With this problem, you really just want a formula, and possibly some cases. You might set it up like this:
If e == m
return 0 # since (e * d) = m % m -> d = 0
Else
return g / e
Lastly:
Another thing that might be useful depending on what you do with this problem is the fact that the right-hand-side is always either g or 0, because g <= m since g is a divisor of m (see all the stuff above). In the cases where g < m, g % m = g. In the case where g == m, g % m = 0.
The #asp answer with the link to the Wikipedia page on the Euclidean Algorithm is good.
The #aidenhjj comment about trying the math-specific version of StackOverflow is good.
In case this is for a math class and you aren't used to coding: <=, >=, ==, and != are computer speak for ≤, ≥, "are equal", and "not equal" respectively.
Good luck.
I need to rewrite a symbolic expression in terms of a specific subexpression.
Consider the following scenario:
expression f with 2 variables a, b
subexpression c = a / b
syms a b c
f = b / (a + b) % = 1 / (1 + a/b) = 1 / (1 + c) <- what I need
Is there a way to achieve this?
Edit:
The step from 1 / (1 + a/b) to 1 / (1 + c) can be achieved by calling
subs(1 / (1 + a/b),a/b,c)
So a better formulated question is:
Is there a way to tell MATLAB to 'simplify' b / (a + b) into 1 / (1 + a/b)?
Just calling simplify(b / (a + b)) makes no difference.
Simplification to your desired form is not automatically guaranteed and, in my experience, isn't likely to be achieved directly through simplify, since the simplification rules seem to prefer rational polynomial forms. However,
if you know the proper reducing ratio, you can substitute and simplify
>> syms a b c
>> f = b / (a + b);
>> simplify(subs(f,a,c*b))
ans =
1/(c + 1)
>> simplify(subs(f,b,a/c))
ans =
1/(c + 1)
And then re-substitute without simplification, if desired:
>> subs(simplify(subs(f,a,c*b)),c,a/b)
ans =
1/(a/b + 1)
>> subs(simplify(subs(f,b,a/c)),c,a/b)
ans =
1/(a/b + 1)
This question is connected to this one. Suppose again the following code:
syms x
f = 1/(x^2+4*x+9)
Now taylor allows the function f to be expanded about infinity:
ts = taylor(f,x,inf,'Order',100)
But the following code
c = coeffs(ts)
produces errors, because the series does not contain positive powers of x (it contains negative powers of x).
In such a case, what code should be used?
Since the Taylor expansion around infinity was likely performed with the substitution y = 1/x and an expansion around 0, I would explicitly make that substitution so the powers become positive for use with coeffs:
syms x y
f = 1/(x^2+4*x+9);
ts = taylor(f,x,inf,'Order',100);
[c,ty] = coeffs(subs(ts,x,1/y),y);
tx = subs(ty,y,1/x);
The output from taylor is not a polynomial in x here (it contains negative powers of x), so coeffs won't work in this case. One thing you can try is using collect (you may get the same or similar result from using simplify):
syms x
f = 1/(x^2 + 4*x + 9);
ts = series(f,x,Inf,'Order',5) % Puiseux series of f about Inf (five terms here)
c = collect(ts)
which returns
ts =
1/x^2 - 4/x^3 + 7/x^4 + 8/x^5 - 95/x^6
c =
(x^4 - 4*x^3 + 7*x^2 + 8*x - 95)/x^6
Then you can use numden to extract the numerator and denominator from either c or ts:
[n,d] = numden(ts)
which returns the following polynomials:
n =
x^4 - 4*x^3 + 7*x^2 + 8*x - 95
d =
x^6
coeffs can then be used on the numerator. You may find other functions listed here helpful as well.
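For instance, a small sketch using the n and x from above (requesting the second output of coeffs pairs each coefficient with its term, so you don't need to assume a particular ordering):
[cn, tn] = coeffs(n, x);   % cn(k) is the coefficient of the term tn(k)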
I want to write a program which will calculate the statistics in the Bertrand paradox.
My approach: I select two points inside the circle and draw the line through them; that is my chord. Then I want to calculate how many of these chords are longer than sqrt(3); but when I run this script, some of the chords are longer than 2! (The radius of my circle is 1.)
I don't know what is wrong with it; can anybody help me?
Please see this link for the formula used.
r1 = rand(1,1000000);
teta1 = 2*pi * rand(1,1000000);
x1 = r1 .* (cos(teta1));
y1 = r1 .* (sin(teta1));
r2 = rand(1,1000000);
teta2 = 2*pi * rand(1,1000000);
x2 = r2 .* (cos(teta2));
y2 = r2 .* (sin(teta2));
%solve this equation : solve('(t*x2 +(1-t)*x1)^2 +(t*y2 +(1-t)*y1)^2 =1', 't');
t1= ((- x1.^2.*y2.^2 + x1.^2 + 2*x1.*x2.*y1.*y2 - 2*x1.*x2 - x2.^2.*y1.^2 + x2.^2 + y1.^2 - 2*y1.*y2 + y2.^2).^(1/2) - x1.*x2 - y1.*y2 + x1.^2 + y1.^2)/(x1.^2 - 2*x1.*x2 + x2.^2 + y1.^2 - 2*y1.*y2 + y2.^2);
t2= -((- x1.^2.*y2.^2 + x1.^2 + 2*x1.*x2.*y1.*y2 - 2*x1.*x2 - x2.^2.*y1.^2 + x2.^2 + y1.^2 - 2*y1.*y2 + y2.^2).^(1/2) + x1.*x2 + y1.*y2 - x1.^2 - y1.^2)/(x1.^2 - 2*x1.*x2 + x2.^2 + y1.^2 - 2*y1.*y2 + y2.^2);
length = abs(t1-t2) .* sqrt((x2-x1).^2 + (y2-y1).^2);
hist(length)
flag = 0;
for check = length
if( check > sqrt(3) )
flag = flag + 1;
end
end
prob = (flag/1000000)^2;
Your formula for length is probably to blame for the nonsensical results, and given its length, it is easier to replace it than to debug. Here is another way to find the length of chord passing through two points (x1,y1) and (x2,y2):
Find the distance of the chord from the center
Use the Pythagorean theorem to find its length
In Matlab code, this is done by
distance = abs(x1.*y2-x2.*y1)./sqrt((x2-x1).^2+(y2-y1).^2);
length = 2*sqrt(1-distance.^2);
The formula for distance involves abs(x1.*y2-x2.*y1), which is twice the area of the triangle with vertices (0,0), (x1,y1), and (x2,y2). Dividing this quantity by the base of the triangle, sqrt((x2-x1).^2+(y2-y1).^2), yields its height.
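As a quick sanity check of those two lines, here is a chord whose length is known in closed form: the horizontal chord through (-1/2, 1/2) and (1/2, 1/2) lies at distance 1/2 from the center of the unit circle, so its length should be 2*sqrt(1 - 1/4) = sqrt(3).
x1 = -0.5; y1 = 0.5;
x2 =  0.5; y2 = 0.5;
distance = abs(x1.*y2-x2.*y1)./sqrt((x2-x1).^2+(y2-y1).^2)   % 0.5
chord    = 2*sqrt(1-distance.^2)                             % sqrt(3), about 1.7321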
Also, putting 1000000 samples into mere 10 bins is a waste of information: you get a crude histogram for all that effort. Better to use hist(length,100).
Finally, your method of selecting two points through which to pass a line does not take them from the uniform distribution on the disk. If you want uniform distribution over the disk, use
r1 = sqrt(rand(1,1000000));
r2 = sqrt(rand(1,1000000));
because for a uniformly distributed point, the square of the distance to the center is uniformly distributed in [0,1].
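If you want to convince yourself of that, here is a tiny standalone check (separate from the rest of the script): with r = sqrt(rand), the fraction of points within radius t of the center should be close to the area fraction t^2.
r = sqrt(rand(1,1e6));
mean(r <= 0.5)   % close to 0.25 = 0.5^2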
Lastly, I've no idea why you square in prob = (flag/1000000)^2.
Here is your code with aforementioned modifications.
r1 = sqrt(rand(1,1000000));
teta1 = 2*pi * rand(1,1000000);
x1 = r1 .* (cos(teta1));
y1 = r1 .* (sin(teta1));
r2 = sqrt(rand(1,1000000));
teta2 = 2*pi * rand(1,1000000);
x2 = r2 .* (cos(teta2));
y2 = r2 .* (sin(teta2));
distance = abs(x1.*y2-x2.*y1)./sqrt((x2-x1).^2+(y2-y1).^2);
length = 2*sqrt(1-distance.^2);
hist(length,100)
flag = 0;
for check = length
if( check > sqrt(3) )
flag = flag + 1;
end
end
prob = flag/1000000;
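As an aside (a suggestion, not required for correctness): the counting loop can be replaced by a single vectorized comparison, which is much faster for 10^6 samples. Note that the script uses length as a variable name, shadowing the built-in function, so the comparison operates on that vector:
prob = mean(length > sqrt(3));   % fraction of chords longer than sqrt(3)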
I am trying to minimize (globally) 3 functions that use common variables. I tried to combine them into one function and minimize that using L-BFGS-B (I need to set boundaries for the variables), but it has proven very difficult to balance each term with weightings, i.e. when one is minimised the others are not. I also tried to use the SLSQP method to minimize one of them while setting the others as constraints, but the constraints are often ignored/not met.
Here is what needs to be minimized. All the maths is done in meritscalculation, and meritoflength, meritofROC, meritofproximity and heightorder are returned from the calculations as globals.
def lengthmerit(x0):
meritscalculation(x0)
print meritoflength
return meritoflength
def ROCmerit(x0):
meritscalculation(x0)
print meritofROC
return meritofROC
def proximitymerit(x0):
meritscalculation(x0)
print meritofproximity+heightorder
return meritofproximity+heightorder
I want to minimize all of them using a common x0 (with boundaries) as the independent variable. Is there a way to achieve this?
Is this what you want to do ?
minimize a * amerit(x) + b * bmerit(x) + c * cmerit(x)
over a, b, c, x:
a + b + c = 1
a >= 0.1, b >= 0.1, c >= 0.1 (say)
x in xbounds
If x is say [x0 x1 .. x9], set up a new variable abcx = [a b c x0 x1 .. x9],
constrain a + b + c = 1 with a penalty term added to the objective function,
and minimize this:
def fabc( abcx ):
""" abcx = a, b, c, x
-> a * amerit(x) + ... + penalty 100 (a + b + c - 1)^2
"""
a, b, c, x = abcx[0], abcx[1], abcx[2], abcx[3:] # split
fa = a * amerit(x)
fb = b * bmerit(x)
fc = c * cmerit(x)
penalty = 100 * (a + b + c - 1) ** 2 # 100 ?
f = fa + fb + fc + penalty
print "fabc: %6.2g = %6.2g + %6.2g + %6.2g + %6.2g a b c: %6.2g %6.2g %6.2g" % (
f, fa, fb, fc, penalty, a, b, c )
return f
and bounds = [[0.1, 0.5]] * 3 + xbounds, i.e. each of a b c in 0.1 .. 0.5 or so.
The long prints should show you why one of a, b, c approaches 0 --
maybe one of amerit(), bmerit(), cmerit() is way bigger than the others?
Plots instead of prints would be easy too.
Summary:
1) formulate the problem clearly on paper, as at the top
2) translate that into python.
Here is the result of some scaling and weighting.
objective function:
merit_function = wa*meritoflength*1e3 + wb*meritofROC + wc*meritofproximity + wd*heightorder*10 + 1000*(wa + wb + wc + wd - 1)**2
input:
abcdex=np.array(( 0.5, 0.5, 0.1, 0.3, 0.1...))
output:
fun: array([ 7.79494644])
x: array([ 4.00000000e-01, 2.50000000e-01, 1.00000000e-01,
2.50000000e-01...])
meritoflength:    0.00465499380753   # target 1e-5, usually starts at 0.1
meritofROC:       23.7317956542      # target ~1, range <33
heightorder:      0                  # target: strictly 0, range <28
meritofproximity: 0.0                # target: less than 0.02, range <0.052
I realised after a few runs that all the weightings tend to stay at the minimum values of the bounds, and I'm back to manually tuning the scaling problem I started with.
Is there a possibility that my optimisation function isn't finding the true global minimum?
Here is how I minimised it:
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bnds, "tol":1e0 }
ret = basinhopping(merit_function, abcdex, minimizer_kwargs=minimizer_kwargs, niter=10)
zoom = ret['x']
res = minimize(merit_function, zoom, method = 'L-BFGS-B', bounds=bnds, tol=1e-6)