Why does `evalb((3*x + 1)^5 * (3*x - 1)/x^6 = (3 + 1/x)^5 * (3 - 1/x)) assuming (0 < x)` return `false` in Maple?

The question should perhaps be why does (3*x + 1)^5 * (3*x - 1) / x^6 = (3 + 1/x)^5 * (3 - 1/x) evaluate to false in Maple, even with the assumption that x > 0. The same expression evaluates to true in Mathematica and, of course, the statement itself is mathematically true under the previous assumption.
Maple's help pages don't give any clue as to why this happens, and I would like someone to explain this behaviour before I conclude that Maple's evalb() is somehow broken. This is the type of question I've been asking myself lately, since I'm deciding whether I should learn Maple or drop it and focus on learning Mathematica.
Thank you in advance.

If both sides of the equation are of type numeric (or extended complex numeric, etc.), then evalb will test the equality. But for your symbolic expressions the evalb command will only test whether they are the very same expression (by comparing memory addresses).
But they are not the very same expression. They are merely two different symbolic expressions for which you wish to do a mathematical test of equivalence.
restart;
expr:=(3*x+1)^5*(3*x-1)/x^6=(3+1/x)^5*(3-1/x):
is(expr);
true
testeq(expr);
true
simplify((rhs-lhs)(expr));
0
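(For comparison, and not part of the original answer: the same structural-versus-mathematical distinction appears in Python's SymPy. A minimal sketch, assuming SymPy is installed:)

from sympy import symbols, simplify

x = symbols('x', positive=True)          # the 0 < x assumption
lhs = (3*x + 1)**5 * (3*x - 1) / x**6
rhs = (3 + 1/x)**5 * (3 - 1/x)
print(lhs == rhs)                        # False: structural comparison, like evalb
print(simplify(lhs - rhs) == 0)          # True: mathematical equivalence, like is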

Related

Boolean Expression Evaluation For ExNor Gate

I know that an xnor expression can be broken up as follows:
X xnor Y = X'Y' + XY
But I know that the sum of a variable and its complement (x + x') is always 1, so shouldn't xnor always be equal to 1?
I got confused in this question when I was trying to solve the following problem:
Z=X*Y where * denotes xnor. Evaluate Z*X.
My answer was 1, which according to the provided solution is incorrect; the correct answer given was Y.
How? Thanks for the help.
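The resolution is that X'Y' + XY is not of the form x + x': its two product terms involve different combinations of X and Y, so XNOR is 1 exactly when X and Y agree, not always. A quick truth-table check, sketched here in Python (not from the original thread), confirms the given answer of Y:

def xnor(a, b):
    # XNOR(a, b) = a'b' + ab: 1 exactly when a == b
    return 1 if a == b else 0

for x in (0, 1):
    for y in (0, 1):
        z = xnor(x, y)          # Z = X * Y, with * denoting xnor
        assert xnor(z, x) == y  # Z * X always equals Y
print("Z*X = Y for every input combination")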

which hash functions are orthogonal to each other?

I'm interested in multi-level data integrity checking and correcting, where multiple error-correcting codes are used (they can be 2 of the same type of code). I'm under the impression that a system using 2 codes would achieve maximum effectiveness if the 2 hash codes being used were orthogonal to each other.
Is there a list of which codes are orthogonal to what? Or do you need to use the same hashing function but with different parameters or usage?
I expect that the first level ecc will be a reed-solomon code, though I do not actually have control over this first function, hence I cannot use a single code with improved capabilities.
Note that I'm not concerned with encryption security.
Edit: This is not a duplicate of
"When are hash functions orthogonal to each other?", since that question essentially asks what the definition of orthogonal hash functions is. I want examples of hash functions that are orthogonal.
I'm not certain it is even possible to enumerate all orthogonal hash functions. However, you only asked for some examples, so I will endeavour to provide some as well as some intuition as to what properties seem to lead to orthogonal hash functions.
From a related question, these two functions are orthogonal to each other:
Domain: Reals --> Codomain: Reals
f(x) = x + 1
g(x) = x + 2
This is a pretty obvious case. It is easier to determine orthogonality if the hash functions are (both) perfect hash functions such as these are. Please note that the term "perfect" is meant in the mathematical sense, not in the sense that these should ever be used as hash functions.
It is a more or less trivial case for perfect hash functions to satisfy orthogonality requirements. Whenever the functions are injective they are perfect hash functions and are thus orthogonal. Similar examples:
Domain: Integers --> Codomain: Integers
f(x) = 2x
g(x) = 3x
In this case, each function is injective but not bijective: every element of the domain maps to exactly one element of the codomain, but there are many elements of the codomain that are not mapped to at all. These are still adequate for both perfect hashing and orthogonality. (Note that if the domain/codomain were the reals, these would be bijections.)
Functions that are not injective are more tricky to analyze. However, it is always the case that if one function is injective and the other is not, they are not orthogonal:
Domain: Reals --> Codomain: Reals
f(x) = e^x // Injective -- every x produces a unique value
g(x) = x^2 // Not injective -- every number other than 0 can be produced by two different x's
One trick is thus to check whether one function is injective and the other is not. But what if neither is injective? I do not presently know of a general algorithm that will determine this other than brute force.
Domain: Naturals --> Codomain: Naturals
j(x) = ceil(sqrt(x))
k(x) = ceil(x / 2)
Neither function is injective, in this case because of the presence of the obviously non-injective ceil combined with a restricted domain. (In practice most hash functions will not have a domain more permissive than the integers.) Testing out values shows that each function has non-unique results where the other does not; for example:
j(1) = ceil(sqrt(1)) = ceil(1) = 1
j(2) = ceil(sqrt(2)) = ceil(~1.41) = 2
k(1) = ceil(1 / 2) = ceil(0.5) = 1
k(2) = ceil(2 / 2) = ceil(1) = 1
But what about these functions?
Domain: Integers --> Codomain: Reals
m(x) = cos(x^3) % 117
n(x) = ceil(e^x)
In these cases, neither function is injective (due to the modulus and the ceil), but when do they have a collision? More importantly, for what pairs of values of x do they both have a collision? These questions are hard to answer. I would suspect they are not orthogonal, but without a specific counterexample I am not sure I could prove it.
These are not the only hash functions you could encounter, of course. So the trick to determining orthogonality is: first, check whether both functions are injective; if so, they are orthogonal. Second, check whether exactly one is injective; if so, they are not orthogonal. Third, try to identify the pieces of each function that make it non-injective, work out their periods or special cases (such as x = 0), and try to construct counterexamples. Fourth, visit Math Stack Exchange and hope someone can tell you where they break orthogonality, or prove that they don't.
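For the brute-force step, here is a minimal sketch in Python. It assumes one plausible formalization (the linked question discusses the definition): f and g are orthogonal when no two distinct inputs collide under both functions at once. It only checks a finite sample, so it can refute orthogonality but never prove it for an infinite domain:

from itertools import combinations
from math import ceil, sqrt

def orthogonal_on(f, g, inputs):
    # True iff no pair of distinct inputs collides under both f and g
    return all(not (f(a) == f(b) and g(a) == g(b))
               for a, b in combinations(inputs, 2))

j = lambda x: ceil(sqrt(x))
k = lambda x: ceil(x / 2)
print(orthogonal_on(j, k, range(1, 200)))  # verdict on this sample only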

Simplifying a 9 variable boolean expression

I am trying to create a tic-tac-toe program as a mental exercise and I have the board states stored as booleans like so:
http://i.imgur.com/xBiuoAO.png
I would like to simplify this boolean expression...
(a&b&c) | (d&e&f) | (g&h&i) | (a&d&g) | (b&e&h) | (c&f&i) | (a&e&i) | (g&e&c)
My first thoughts were to use a Karnaugh Map but there were no solvers online that supported 9 variables.
And here's the question:
First of all, how would I know if a boolean condition is already as simple as possible?
And second: what is the above boolean condition simplified?
2. Simplified condition:
The original expression
a&b&c|d&e&f|g&h&i|a&d&g|b&e&h|c&f&i|a&e&i|g&e&c
can be simplified to the following, knowing that & binds more tightly than |:
e&(d&f|b&h|a&i|g&c)|a&(b&c|d&g)|i&(g&h|c&f)
which is 4 characters shorter and performs at worst 18 & and | evaluations (the original counted 23).
There is no shorter boolean formula (see point below). If you switch to matrices, maybe you can find another solution.
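As a sanity check, the equivalence of the two formulas is easy to verify exhaustively; a quick sketch in Python over all 2^9 assignments (not part of the original argument):

from itertools import product

for a, b, c, d, e, f, g, h, i in product((False, True), repeat=9):
    original = ((a and b and c) or (d and e and f) or (g and h and i)
                or (a and d and g) or (b and e and h) or (c and f and i)
                or (a and e and i) or (g and e and c))
    factored = ((e and ((d and f) or (b and h) or (a and i) or (g and c)))
                or (a and ((b and c) or (d and g)))
                or (i and ((g and h) or (c and f))))
    assert original == factored
print("formulas agree on all 512 assignments")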
1. Making sure we got the smallest formula
Normally, it is very hard to find the smallest formula. See this recent paper if you are more interested. But in our case, there is a simple proof.
We will reason about a formula being the smallest with respect to the formula size, where for a variable a, size(a)=1, for a boolean operation size(A&B) = size(A|B) = size(A) + 1 + size(B), and for negation size(!A) = size(A) (thus we can suppose that we have Negation Normal Form at no cost).
With respect to that size, our formula has size 37.
The proof that you cannot do better consists in first remarking that there are 8 rows to check, and that there is always a pair of letters distinguishing any 2 different rows. Since we can regroup these 8 checks into no fewer than 3 conjuncts, each with a remaining outer variable, the number of variables in the final formula must be at least 8*2 + 3 = 19, from which we can deduce the minimal tree size.
Detailed proof
Let us suppose that a given formula F is the smallest and in NNF format.
F cannot contain negated variables like !a. For that, remark that F should be monotonic: if it returns "true" (there is a winning row), then changing one of the variables from false to true should not change that result. According to Wikipedia, F can therefore be written without negation. Even better, we can prove that we can remove the negations: following this answer, we could convert to and from DNF format, removing negated variables along the way or replacing them with true.
F cannot contain a sub-tree like a disjunction of two variables a|b.
For this formula to be useful and not exchangeable with either a or b, it would mean that there are contradicting assignments such that for example
F[a|b] = true and F[a] = false, which by monotonicity forces a = false and b = true. Also, in this case, turning b to false makes the whole formula false, because false = F[a] = F[a|false] >= F[a|b](b = false).
Therefore there is a row passing through b which is the cause of the truth, and it cannot go through a; hence, for example, e = true and h = true.
And the check of this row goes through the expression a|b when testing b. But that means that with a, e, h true and all other variables set to false, F is still true, which contradicts the purpose of the formula.
Every subtree looking like a&b checks a unique row, so the last letter should appear just above the corresponding disjunction, as in (a&b|...)&{c somewhere here for sure}; otherwise this leaf is useless and either a or b can be removed safely. Indeed, suppose that c does not appear above, and consider the game where a, b and c are true and all other variables are false. Then the expression where c was supposed to appear returns false, so a&b is always useless, and removing it yields a shorter expression.
There are 8 independent branches, so there are at least 8 subtrees of type a&b. We cannot regroup them using a disjunction of 2 conjunctions, since a, f and h never share the same rows, so there must be at least 3 outer variables. That makes 8*2 + 3 = 19 variable occurrences in the final formula.
A tree with 19 variables cannot have fewer than 18 operators, so in total the size has to be at least 19 + 18 = 37.
You can have variants of the above formula.
QED.
One option is doing the Karnaugh map manually. Since you have 9 variables, that makes for a 2^4 by 2^5 grid, which is rather large, and by the looks of the equation, probably not very interesting either.
By inspection, it doesn't look like a Karnaugh map will give you any useful information (Karnaugh maps basically reduce expressions such as ((!a)&b) | (a&b) into b), so in that sense of simplification, your expression is already as simple as it can get. But if you want to reduce the number of computations, you can factor out a few variables using the distributivity of the AND operators over ORs.
The best way to think of this is how a person would think of it. No person would say to themselves, "a and b and c, or if d and e and f," etc. They would say "Any three in a row, horizontally, vertically, or diagonally."
Also, instead of doing eight checks (3 rows, 3 columns, and 2 diagonals), you can do just four checks (three rows and one diagonal), then rotate the board 90 degrees, then do the same checks again.
Here's what you end up with. These functions all assume that the board is a three-by-three matrix of booleans, where true represents a winning symbol, and false represents a not-winning symbol.
def win?(board)
  winning_row_or_diagonal?(board) ||
    winning_row_or_diagonal?(rotate_90(board))
end

def winning_row_or_diagonal?(board)
  winning_row?(board) || winning_diagonal?(board)
end

def winning_row?(board)
  3.times.any? do |row_number|
    # scan across row `row_number`: step the column index, not the row index
    three_in_a_row?(board, row_number, 0, 0, 1)
  end
end

def winning_diagonal?(board)
  three_in_a_row?(board, 0, 0, 1, 1)
end

def three_in_a_row?(board, x, y, delta_x, delta_y)
  3.times.all? do |i|
    board[x + i * delta_x][y + i * delta_y]
  end
end

def rotate_90(board)
  board.transpose.map(&:reverse)
end
The matrix rotate is from here: https://stackoverflow.com/a/3571501/238886
Although this code is quite a bit more verbose, each function is clear in its intent. Rather than a long boolean expression, the code now expresses the rules of tic-tac-toe.
You know it's as simple as possible when there are no common sub-terms to extract (e.g. if you had "a&b" in two different trios).
You know your tic tac toe solution must already be as simple as possible because any pair of boxes can belong to at most only one winning line (only one straight line can pass through two given points), so (a & b) can't be reused in any other win you're checking for.
(Also, "simple" can mean a lot of things; specifying what you mean may help you answer your own question.)

A system or language to reverse equations?

Let's say I have a simple equation as follows:
x = 50 * (20 * item)
Is there a system or language to reverse this automatically? So let's say I have x as an input and want to find out which item it is.
item = (x - 50) / 20
A very trivial and probably misleading example, but just to illustrate the point. I'm not talking about solving equations with least-squares fitting or the like, but instead a system that can reverse equations, effectively generating what you would otherwise have to code by hand. I'm not a mathematician, so forgive the incorrect terminology (if any).
A colleague of mine said Matlab can do this natively. Does anyone have any idea how? Can I actually specify which inputs I have and the output I'm looking for? Or do I simply have to use the built-in math functions for a hand-written reversed equation?
You're probably looking for http://www.mathworks.com/help/symbolic/solve.html
You should first declare the symbolic variables: syms x item
and then: solve(x == 50 * (20 * item), item) should give you what you're looking for.
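For what it's worth, the same thing can be done in Python with SymPy (a sketch, assuming SymPy is installed; not part of the original answer):

from sympy import symbols, solve, Eq

x, item = symbols('x item')
print(solve(Eq(x, 50 * (20 * item)), item))  # [x/1000]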

Problem with arithmetic using logarithms to avoid numerical underflow (take 2)

I have two lists of fractions;
say A = [ 1/212, 5/212, 3/212, ... ]
and B = [ 4/143, 7/143, 2/143, ... ].
If we define A' = a[0] * a[1] * a[2] * ... and B' = b[0] * b[1] * b[2] * ...
I want to calculate a normalised value of A' and B'
i.e. specifically the values of A' / (A'+B') and B' / (A'+B')
My trouble is that A and B are both quite long and each value is small, so calculating the products causes numerical underflow very quickly...
I understand turning the product into a sum through logarithms can help me determine which of A' or B' is greater
ie max( log(a[0])+log(a[1])+..., log(b[0])+log(b[1])+... )
and that using logs I can calculate the value of A' / B', but how do I get A' / (A'+B')?
My best bet to date is to keep the number representations as fractions, i.e. A = [ [1,212], [5,212], [3,212], ... ], and implement my own arithmetic, but it's getting clumsy and I have a feeling there is a (simple) way with logarithms that I'm just missing...
The numerators for A and B don't come from a sequence; they might as well be random for the purpose of this question. If it helps, the denominators for all values in A are the same, as are all the denominators for B.
Any ideas most welcome!
( ps. I asked a similar question 24 hours ago regarding the ratio A'/B' but it was actually the wrong question to ask. I'm actually after A'/(A'+B'). Sorry, my mistake. )
I see a few ways here.
First of all you can notice that
A' / (A'+B') = 1 / (1 + B'/A')
and you know how to calculate B'/A' with logarithms.
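Concretely, that identity lets you stay in log space the whole time; a minimal sketch in Python (the list values are just the ones abbreviated in the question):

import math

A = [1/212, 5/212, 3/212]
B = [4/143, 7/143, 2/143]
log_a = sum(math.log(v) for v in A)  # log(A')
log_b = sum(math.log(v) for v in B)  # log(B')

# A'/(A'+B') = 1 / (1 + B'/A') = 1 / (1 + exp(log B' - log A'))
share_a = 1.0 / (1.0 + math.exp(log_b - log_a))
print(share_a, 1.0 - share_a)  # the two normalised values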
Another way is to implement your own rational arithmetic, but you don't need to go far with it. Since you know that the denominators are the same across each array, that immediately gives you
numerator(A') = numerator(a[0]) * numerator(a[1]) * ...
denominator(A') = denominator(a[0]) ^ A.length
All you now need to do is sum A' and B', which is easy, and then multiply A' by 1/(A'+B'), which is also easy. The hardest part is normalizing the resulting value, which is done by dividing out the gcd (computed with the modulo-based Euclidean algorithm) and is trivial.
Alternatively, since you're most likely using some popular scripting language, note that most of them have classes for rational arithmetic built in; Python and Ruby have them for sure.
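For instance, Python's built-in fractions module keeps everything exact, so underflow never enters the picture (a sketch; math.prod requires Python 3.8+):

from fractions import Fraction
from math import prod

A = [Fraction(1, 212), Fraction(5, 212), Fraction(3, 212)]
B = [Fraction(4, 143), Fraction(7, 143), Fraction(2, 143)]

a_prod, b_prod = prod(A), prod(B)   # A' and B' as exact rationals
print(a_prod / (a_prod + b_prod))   # A' / (A' + B'), exact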
My best bet to date is to keep the number representations as fractions and implement my own arithmetic but it's getting clumsy
What language are you using? If you can overload operators, it should be really easy to make up a Fraction class that you can treat as a number pretty much everywhere.
As an example, determining whether a fraction A / B is larger than C / D is basically comparing whether A * D is larger than B * C.
Both A and B have the same denominator in each fraction you mention. Is that true for every term in the list? If that's so, why don't you factor that out when you calculate the product? The denominator will simply be X^n, when X is the value and n is the number of terms in the list.
If you do that, you'll have the opposite problem: overflow in the numerator. You know that it can't be larger than max(X)^n, where max(X) is the maximum value in the numerators and n is the number of terms in the list. If you can calculate that, you can see whether your computer will have a problem. You can't put 10 pounds of anything in a 5 pound bag.
Unfortunately, the properties of logarithms limit you to the following simplifications:
log(x*y) = log(x) + log(y)
and
log(x/y) = log(x) - log(y)
So you're stuck with log(A' + B'): the logarithm of a sum, which has no such simplification.
If you're using a language that supports infinite precision numbers (e.g., Java BigDecimal) it might make your life a little easier. But there's still a good argument for doing some thinking before you compute. Why use brute force when you can be elegant?
Well... if you know A'/(A'+B'), then B'/(A'+B') should be one minus that. I personally would not use logarithms; I would use the actual fractions. I would also use some sort of BigInt class to represent the numerator and denominator. Which language are you using? Python can be a good fit.