Logarithm/Exponential of real numbers in cvc4

I am looking for a solver that can provide models of formulas on real numbers involving logarithms or exponentials.
Can cvc4 handle functions which contain logarithms or exponentials of real numbers? Similarly, can cvc4 express the constant e?
According to this question, z3 can only handle constant exponents, which does not help me.
This question only asks about logarithms for integers.

I am unfamiliar with cvc4, but perhaps I have some useful properties of logarithms that you may be able to exploit given your limitations.
Technically speaking, no computer (no matter how powerful) can represent e exactly, because it is transcendental (it is not the root of any nonzero polynomial equation with rational coefficients).
If you are limited to taking logarithms of integers only, you can express e as a fractional approximation and solve it that way. The formula ends up a bit longer than taking the logarithm directly, but the advantage is that you can effectively compute a logarithm whose base is any rational number while only ever taking logarithms of whole numbers.
Let e be approximated by the fraction a/b where both a and b are integers.
(a/b)^n = x
log(base a/b)(x) = n
This doesn't really get you anywhere, so we have to take a different route that requires a bit more algebra.
(a/b)^n = x
(a^n)/(b^n) = x
a^n = x * b^n
log(base a)(x * b^n) = n
log(base a)(x) + log(base a)(b^n) = n
log(base a)(x) + n*log(base a)(b) = n
log(base a)(x) = n - n*log(base a)(b)
log(base a)(x) = n * (1 - log(base a)(b))
n = log(base a)(x) / (1 - log(base a)(b))
In other words, log(base a)(x) / (1 - log(base a)(b)) is an approximation of ln(x), where a/b is an approximation of e. Obviously, this approximation of ln(x) gets closer to the real value as a/b more closely approximates e. Note that I kept this in a general form: a/b could represent any rational number, not just e.
If this doesn't answer your question fully, I hope it at least helps.
Just tried an arbitrary example.
Taking a and b as 27183 and 10000 respectively, I tried this quick calculation:
log(base 27183)(82834) / (1 - log(base 27183)(10000)) = 11.32452...
ln(82834) = 11.32459...
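A quick numerical check of that formula (a Python sketch; the helper name log_base is mine):

import math

def log_base(base, x):
    # log of x in the given base
    return math.log(x) / math.log(base)

a, b = 27183, 10000                      # a/b ~ e
x = 82834

approx = log_base(a, x) / (1 - log_base(a, b))
print(approx)          # ~11.3245, the approximation of ln(x)
print(math.log(x))     # ~11.3246, the true ln(x)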

Calculating d value in RSA

I saw a couple of questions about this, but most of them were answered in an unhelpful way or didn't get a proper answer at all. I have these variables:
p = 31
q = 23
e (public key exponent) = 223
phi = (p-1)*(q-1) = 660
Now I need to calculate the variable d (which I know equals 367). The problem is that I don't know how. I found this equation on the internet, but it doesn't work (or I can't use it):
e·d ≡ 1 (mod ϕ(n))
When I see that equation, I think it means this:
d = (1 mod ϕ(n)) / e
But apparently it doesn't, because (1 mod ϕ(n)) / e = 1 % 660 / 223 = 1/223 != 367.
Maybe I don't understand it and did something wrong; that's why I'm asking.
I did some more research and found a second equation:
d=1/e mod ϕ(n)
or
d=e^-1 mod ϕ(n)
But in the end it gives the same result:
1/e mod ϕ(n) = 1/223 % 660 = 1/223 != 367
Then I saw someone saying that to solve that equation you need the extended Euclidean algorithm. If anyone knows how to use it to solve this problem, I'd be very thankful for your help.
If you want to calculate something like a / b mod p, you can't just divide and take the remainder. Instead, you have to find b^-1 such that b^-1 = 1/b mod p (b^-1 is the modular multiplicative inverse of b mod p). If p is a prime, you can use Fermat's little theorem: for any prime p, a^p = a mod p, which is equivalent to a^(p-2) = 1/a mod p. So, instead of a / b mod p, you compute a * b^(p-2) mod p, and b^(p-2) can be computed in O(log p) using exponentiation by squaring.
If p is not a prime, the modular multiplicative inverse exists if and only if GCD(b, p) = 1. In that case we can use the extended Euclidean algorithm to solve the equation bx + py = 1 in logarithmic time. Once we have bx + py = 1, we can take it mod p to get bx = 1 mod p, i.e. x = 1/b mod p, so x is our b^-1. If GCD(b, p) ≠ 1, b^-1 mod p doesn't exist.
Using either Fermat's little theorem or the extended Euclidean algorithm gives the same result in the same time complexity, but the extended Euclidean algorithm also works when the modulus is not a prime (it only has to be coprime with the number we want to divide by).
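To make the extended Euclidean route concrete for the numbers in the question, here is a minimal Python sketch (the function names are my own); for e = 223 and phi = 660 it prints 367:

def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(e, phi):
    g, x, _ = extended_gcd(e, phi)
    if g != 1:
        raise ValueError("no inverse: e and phi are not coprime")
    return x % phi

print(mod_inverse(223, 660))   # 367, since 223 * 367 = 81841 = 124*660 + 1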

Does f(x) = 2*x + 1 belong to $o(x)$?

Suppose a function f: R -> R is defined as
f(x) = mx + c for some m, c > 0 and x in R. Does f(x) belong to o(x)?
If the answer is "NO", can we conclude that o(x) does not properly contain the set of sub-linear functions?
The reason I'm asking this:
It is easy to see that f(x) is sub-linear because
f(x1) + f(x2) = mx1 + c + mx2 + c = m(x1+x2) + 2c > m(x1+x2) + c = f(x1+x2).
But lim x -> infinity f(x)/x = m (2 in the example), which is not 0. In this sense f(x) is not in o(x). But o(x) is supposed to represent the set of sub-linear functions. That's where my confusion comes from.
No, f(x) = 2x + 1 ∉ o(x).
I think your confusion comes from the definition of sublinear. Linear algebra and computer science use two different meanings here:
In linear algebra, sublinear functions are a generalization of linear functions, i.e. every linear function is a sublinear function. As you have shown in the question, your f(x) satisfies the subadditivity criterion.
In computer science, linear and sublinear describe the asymptotic behavior. A sublinear function is a function which grows slower than every linear function, given a large enough input. Thus, no linear function is a sublinear function.
Thus, your f(x) is sublinear w.r.t. linear algebra, but it is not sublinear w.r.t. computer science.
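For reference, the asymptotic definition written out in LaTeX, together with the limit for this particular f:

f \in o(g) \iff \lim_{x \to \infty} \frac{f(x)}{g(x)} = 0,
\qquad
\lim_{x \to \infty} \frac{2x + 1}{x} = 2 \neq 0
\quad\Rightarrow\quad 2x + 1 \notin o(x).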

matlab wrong modulo result when the dividend is raised to a power

Just wondering... I tried doing the operation (111^11) mod 143 by hand (with the multiply-and-square method) and I got the result 67. I also checked in several online tools that this is correct. Yet, in matlab, plugging in:
mod(111^11,143)
gives 127! Is there any particular reason for this? I didn't find anything in the documentation...
The value of 111^11 (about 3.1518e+022) exceeds the maximum integer that is guaranteed to be represented exactly as a double, which is 2^53 (about 9.0072e+015). So the result is spoilt by insufficient numerical precision.
To achieve the correct result, use symbolic computation:
>> syms x y z
>> r = mod(x^y, z);
>> subs(r, [x y z], [111 11 143])
ans =
67
Alternatively, for this specific operation (modulo of a large number that is expressed as a product of small numbers), you can do the computation very easily using the following fact (where ∗ denotes product):
mod(a∗b, z) = mod(mod(a,z)∗mod(b,z), z)
That is, you can apply the modulo operation to factors of your large number and the final result is unchanged. If you choose factors sufficiently small so that they can be represented exactly as double, you can do the computation numerically without any loss of precision.
For example: using the decomposition 111^11 = 111^4*111^4*111^3, since all factors are small enough, gives the correct result:
>> mod((mod(111^4, 143))^2 * mod(111^3, 143), 143)
ans =
67
Similarly, using 111^2 and 111 as factors,
>> mod((mod(111^2, 143))^5 * mod(111, 143), 143)
ans =
67
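The same square-and-multiply idea, reducing mod 143 after every step so no intermediate ever gets large, can be written in a few lines. A Python sketch (mod_pow is my own name; Python's built-in three-argument pow does the same thing):

def mod_pow(base, exp, m):
    # Exponentiation by squaring, reducing mod m at each step
    # so intermediates stay below m^2.
    result = 1
    base %= m
    while exp > 0:
        if exp & 1:
            result = (result * base) % m
        base = (base * base) % m
        exp >>= 1
    return result

print(mod_pow(111, 11, 143))   # 67
print(pow(111, 11, 143))       # 67, built-in modular power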
From the matlab website, they recommend using powermod(b, e, m) (b^e mod m):
"If b and m are numbers, the modular power b^e mod m can also be computed by the direct call b^e mod m. However, powermod(b, e, m) avoids the overhead of computing the intermediate result b^e and computes the modular power much more efficiently." ...
Another way is to use symfun
syms x y z
f = symfun(mod(x^y,z), [x y z])
f(111,11,143)

Reverse multiplication of 32-bit numbers

I have two large signed 32-bit numbers (java ints) being multiplied together such that they'll overflow. Actually, I have one of the numbers, and the result. Can I determine what the other operand was?
knownResult = unknownOperand * knownOperand;
Why? I have a string and a suffix being hashed with fnv1a. I know the resulting hash and the suffix, I want to see how easy it is to determine the hash of the original string.
This is the core of fnv1a:
hash ^= byte
hash *= PRIME
It depends. If the multiplier is even, at least one bit must inevitably be lost. So I hope that prime isn't 2.
If it's odd, then you can absolutely reverse it: just multiply by the modular multiplicative inverse of the multiplier to undo the multiplication.
There is an algorithm to calculate the modular multiplicative inverse modulo a power of two in Hacker's Delight.
For example, if the multiplier was 3, then you'd multiply by 0xaaaaaaab to undo (because 0xaaaaaaab * 3 = 1). For 0x01000193, the inverse is 0x359c449b.
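A quick sanity check of those numbers (a Python sketch; 32-bit wrap-around is emulated with a mask, and 0x811c9dc5, the FNV-1a offset basis, is just used as an example value):

MASK  = 0xFFFFFFFF
PRIME = 0x01000193                       # the 32-bit FNV prime
INV   = 0x359c449b                       # its inverse mod 2^32

assert (PRIME * INV) & MASK == 1         # they really are inverses

h_before  = 0x811c9dc5
h_after   = (h_before * PRIME) & MASK    # the hash *= PRIME step
recovered = (h_after * INV) & MASK       # undo it
print(recovered == h_before)             # True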
You want to solve the equation y = prime * x for x, which you do by division in the finite ring modulo 2^32: x = y / prime.
Technically you do that by multiplying y with the multiplicative inverse of the prime modulo 2^32, which can be computed by the extended Euclidean algorithm.
Uh, division? Or am I not understanding the question?
It's not the fastest method, but something very easy to memorise is this:
unsigned inv(unsigned x) {
    /* x must be odd, otherwise xx never reaches 1 and the loop does not terminate */
    unsigned xx = x * x;
    while (xx != 1) {
        x *= xx;    /* accumulates x**(1 + 2 + 4 + ...) */
        xx *= xx;   /* xx = x**(2**k) */
    }
    return x;
}
It returns x**(2**n-1) (as in x*(x**2)*(x**4)*(x**8)*..., or x**(1+2+4+8+...)). As the loop exit condition implies, x**(2**n) is 1 when n is big enough, provided x is odd.
So, x**(2**n-1) equals x**(2**n)/x equals 1/x equals the thing you multiply x by to get the value 1 (mod 2**n). Which you then apply:
knownResult = unknownOperand * knownOperand
knownResult * inv(knownOperand) = unknownOperand * knownOperand * inv(knownOperand)
knownResult * inv(knownOperand) = unknownOperand * 1
or simply:
unknownOperand = knownResult * inv(knownOperand);
But there are faster ways, as given in other answers here. This one's just easy to remember.
Also, obligatory SO "use a library function" answer: BN_mod_inverse().

problem with arithmetic using logarithms to avoid numerical underflow

I have two lists of fractions;
say A = [ 1/212, 5/212, 3/212, ... ]
and B = [ 4/143, 7/143, 2/143, ... ].
If we define A' = a[0] * a[1] * a[2] * ... and B' = b[0] * b[1] * b[2] * ...
I want to calculate the value of A' / B'.
My trouble is that A and B are both quite long and each value is small, so calculating the products causes numerical underflow very quickly...
I understand that turning the product into a sum through logarithms can help me determine which of A' or B' is greater,
i.e. max( log(a[0])+log(a[1])+..., log(b[0])+log(b[1])+... ),
but I need the actual ratio...
My best bet to date is to keep the numbers represented as fractions, i.e. A = [ [1,212], [5,212], [3,212], ... ], and implement my own arithmetic, but it's getting clumsy and I have a feeling there is a (simple) way with logarithms that I'm just missing...
The numerators for A and B don't come from a sequence; they might as well be random for the purpose of this question. If it helps, the denominators for all values in A are the same, as are all the denominators for B.
Any ideas most welcome!
Mat
You could calculate it in a slightly different order:
A' / B' = a[0] / b[0] * a[1] / b[1] * a[2] / b[2] * ...
If you want to keep it in logarithms, remember that A/B corresponds to log A - log B, so after you've summed the logarithms of A and B, you can find the ratio of the larger to the smaller by exponentiating your log base with max(logsumA, logsumB)-min(logsumA,logsumB).
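A minimal sketch of that in Python (the short lists stand in for the real, much longer ones):

import math

A = [1/212, 5/212, 3/212]
B = [4/143, 7/143, 2/143]

log_A = sum(math.log(a) for a in A)      # log of the product, no underflow
log_B = sum(math.log(b) for b in B)

ratio = math.exp(log_A - log_B)          # A'/B', fine as long as the difference itself isn't huge
print(ratio)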
Separate out the numerators and denominators, since the denominators are the same across each whole sequence. Compute the ratio of numerators element-by-element (rather as @Mark suggests), and finally multiply the result by the right power of denominator-of-B / denominator-of-A.
Or, if that threatens integer overflow in computing the product of the numerators or powers of the denominators, something like:
A'/B' = (numerator(A[0]) / numerator(B[0])) * (denominator(B) / denominator(A)) * ...
I've probably got some of the fractions upside-down, but I guess you can figure that out?
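If an exact answer is acceptable (and the lists aren't astronomically long), Python's fractions module does this bookkeeping for you. A sketch with made-up values, assuming A and B have equal length:

from fractions import Fraction

A = [Fraction(1, 212), Fraction(5, 212), Fraction(3, 212)]
B = [Fraction(4, 143), Fraction(7, 143), Fraction(2, 143)]

ratio = Fraction(1)
for a, b in zip(A, B):    # element-by-element ratios, as suggested above
    ratio *= a / b

print(ratio)              # exact rational value of A'/B'
print(float(ratio))       # decimal approximation, if that's what's needed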