Boolean Algebra Proof

I am teaching myself Boolean Algebra.
I was hoping someone could correct the following if I'm wrong.
Question:
Using Boolean Algebra prove that A(A+B)=A.
A(A+B) would mean A and (A or B).
My Answer:
A(A+B) = A(A(1+B)) = A(A1) = AA = A.

Distribute A first, as such:
A(A+B) = A
AA + AB = A      (distributive law)
A + AB = A       (idempotent law: AA = A)
A(1+B) = A       (factor out A)
A(1) = A         (annulment: 1 + B = 1)
A = A            (identity: A(1) = A)
You seem to have skipped a couple of steps within your first step: you essentially stated A+B = A(1+B), which is not always correct.

Let me introduce you to propositional logic. We use the notation ∧, ∨, and ≡ to denote and, or, and logically equivalent respectively.
Below is your equation rewritten in this notation:
A ∧ (A ∨ B) ≡ A
To complete the proof, 3 laws are applied: the distributive law at line 2, the idempotent law at line 3, and the absorption law at line 4:
A ∧ (A ∨ B)
≡ (A ∧ A) ∨ (A ∧ B)    (distributive law)
≡ A ∨ (A ∧ B)          (idempotent law)
≡ A                    (absorption law)
And this completes the proof.
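If you want a machine-checked sanity test as well, the identity can be verified by brute-force case analysis over booleans in Coq (Coq appears in the related questions below; the lemma name here is mine):

Lemma absorption_check : forall a b : bool, andb a (orb a b) = a.
Proof. intros [|] [|]; reflexivity. Qed.

Each of the four true/false combinations computes to a trivial equality, so this is exactly the truth-table argument written as a proof script.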

Related

Reusing lia tactic to prove decidability

I have an abstract syntax for Presburger arithmetic, along with a fixpoint function determining a given formula's propositional denotation (you can see it here: https://gist.github.com/d4hines/d9a0c674f324cab46d2cf0967bde1ac3).
I'd like to prove that the truth value of any given formula is decidable. Since it's Presburger arithmetic, I know it must be decidable. I've heard that the decision procedures for Presburger arithmetic are very complicated. I'd like to reuse the existing one in Coq.
How can I do this?
Thanks!
There are a few reasons why lia will not be of great help to you.
A small true goal like exists x : Z, 2 < x < 4 is not solved by lia: this tactic is not complete for Presburger arithmetic.
Even if lia were complete for Presburger arithmetic, it would act as an oracle: giving you an answer every time you need one for a true formula. But when presented with a false formula, lia only tells Coq "I can't do it"; it does not say "I have a proof that this can't be done". In other words, the information that a proof procedure is complete may not be stored in the Coq system as a Coq-readable proof.
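A minimal illustration of the first point (assuming a Coq recent enough that the Fail command can wrap a tactic):

Require Import ZArith Lia.
Open Scope Z_scope.

Goal exists x : Z, 2 < x < 4.
Proof.
  Fail lia.       (* lia does not search for existential witnesses *)
  exists 3. lia.  (* once the witness is supplied, lia closes the goal *)
Qed.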

Coq QArith division by zero is zero, why?

I noticed that in Coq's definition of rationals the inverse of zero is defined to zero. (Usually, division by zero is not well-defined/legal/allowed.)
Require Import QArith.
Lemma inv_zero_is_zero: (/ 0) == 0.
Proof. unfold Qeq. reflexivity. Qed.
Why is it so?
Could it cause problems in calculations with rationals, or is it safe?
The short answer is: yes, it is absolutely safe.
When we say that division by zero is not well-defined, what we actually mean is that zero doesn't have a multiplicative inverse. In particular, we can't have a function that computes a multiplicative inverse for zero. However, it is possible to write a function that computes the multiplicative inverse for all other elements, and returns some arbitrary value when such an inverse doesn't exist (e.g. for zero). This is exactly what this function is doing.
Having this inverse operator be defined everywhere means that we'll be able to define other functions that compute with it without having to argue explicitly that its argument is different from zero, making it more convenient to use. Indeed, imagine what a pain it would be if we made this function return an option instead, failing when we pass it zero: we would have to make our entire code monadic, making it harder to understand and reason about. We would have a similar problem if we wrote a function that requires a proof that its argument is non-zero.
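To make the contrast concrete, here is roughly what the option-returning alternative would look like (Qinv_opt and Qdiv_opt are hypothetical names, not part of QArith):

Require Import QArith.

(* Hypothetical safe inverse: fails on zero instead of returning a junk value. *)
Definition Qinv_opt (q : Q) : option Q :=
  if Qeq_dec q 0 then None else Some (/ q).

(* Every caller now has to thread the option through. *)
Definition Qdiv_opt (p q : Q) : option Q :=
  match Qinv_opt q with
  | Some r => Some (p * r)
  | None => None
  end.

Every function built on top of Qdiv_opt inherits the option, which is the monadic contagion described above.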
So, what's the catch? Well, when trying to prove anything about a function that uses the inverse operator, we will have to add explicit hypotheses saying that we're passing it an argument that is different from zero, or argue that its argument can never be zero. The lemmas about this function then get additional preconditions, e.g.
forall q, q <> 0 -> q * (/ q) = 1
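For the concrete Q case, something along these lines goes through against the standard library; note that QArith states it with the setoid equality ==, and the lemma name Qmult_inv_r is my recollection of the one in QArith, so treat it as an assumption:

Require Import QArith.

(* The guarded version: q * / q == 1 only holds away from zero. *)
Lemma inv_cancel : forall q : Q, ~ q == 0 -> q * / q == 1.
Proof. intros q Hq. apply Qmult_inv_r. exact Hq. Qed.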
Many other libraries are structured like that, cf. for instance the definition of the field axioms in the algebra library of MathComp.
There are some cases where we want to internalize the additional preconditions required by certain functions as type-level constraints. This is what we do for instance when we use length-indexed vectors and a safe get function that can only be called on numbers that are in bounds. So how do we decide which one to go for when designing a library, i.e. whether to use a rich type with a lot of extra information and prevent bogus calls to certain functions (as in the length-indexed case) or to leave this information out and require it as explicit lemmas (as in the multiplicative inverse case)? Well, there's no definite answer here, and one really needs to analyze each case individually and decide which alternative will be better for that particular case.
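For contrast, here is the type-level alternative mentioned above, sketched with the standard library's length-indexed vectors (Vector and Fin):

Require Import Coq.Vectors.Vector.

(* A vector of length 2; valid indices have type Fin.t 2, so an
   out-of-bounds lookup is a type error rather than a runtime failure. *)
Definition v : Vector.t nat 2 :=
  Vector.cons nat 10 1 (Vector.cons nat 20 0 (Vector.nil nat)).

Compute Vector.nth v Fin.F1.          (* = 10 *)
Compute Vector.nth v (Fin.FS Fin.F1). (* = 20 *)
(* An index of type Fin.t 3 simply would not type-check against v. *)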

What forms of goal in Coq are considered to be "true"?

When I prove some theorem, my goal evolves as I apply more and more tactics. Generally speaking the goal tends to split into subgoals, which are simpler. At some final point Coq decides that the goal is proven. What may this "proven" goal look like? These goals seem to be fine:
a = a. (* Any object is identical to itself (?) *)
myFunc x y = myFunc x y. (* Result of the same function with the same params is always the same (?) *)
What else can appear here, or could it be that my examples are fundamentally wrong?
In other words, when I finally apply reflexivity, Coq just says ** Got it ** without any explanation. Is there any way to get more details on what it actually did or why it decided that the goal is proven?
You're actually facing a very general notion that seems not so general because Coq has some user-friendly facility for reasoning with equality in particular.
In general, Coq accepts a goal as solved as soon as it receives a term whose type is the type of the goal: it has been convinced the proposition is true because it has been convinced the type that this proposition describes is inhabited, and what convinced it is the actual witness you helped build along your proof.
For the particular case of inductive datatypes, the two ways you are going to be able to prove the proposition P a b c are:
by constructing a term of type P a b c, using the constructors of the inductive type P, and providing all the necessary arguments.
or by reusing an existing proof or an axiom in the environment whose type you can get to match P a b c.
For the even more particular case of equality proofs (equality is just an inductive datatype in Coq), the same two ways I list above degenerate to this:
the only constructor of equality is eq_refl, and to apply it you need to show that the two sides are judgementally equal. For most purposes, this corresponds to goals that look like T a b c = T a b c, but it is actually a slightly broader notion of equality (see below). For these, all you have to do is apply the eq_refl constructor. In a nutshell, that is what reflexivity does!
the second case consists in proving that the equality holds because you have other equalities in your context, nothing special here.
Now one part of your question was: when does Coq accept that two sides of an equality are equal by reflexivity?
If I am not mistaken, the answer is when the two sides of the equality are αβδιζ-convertible.
What this roughly means is that there is a way to make them syntactically equal by repeated applications of:
α : sane renaming of bound variables
β : computing reducible expressions
δ : unfolding definitions
ι : simplifying matches
ζ : expanding let-bound expressions
[someone please correct me if more rules apply or if I got one wrong]
For instance some of the things that are not captured by these rules are:
equality of functions that do more or less the same thing in different ways:
(fun x => 0 + x) = (fun x => x + 0)
quicksort = mergesort
equality of terms that are stuck reducing but are provably equal:
forall n, 0 + n = n + 0
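You can watch the difference directly (the Fail command, assuming a recent Coq, lets the script record that reflexivity gets stuck; the induction afterwards is the standard fix):

Goal forall n : nat, 0 + n = n.
Proof. intros n. reflexivity. Qed.  (* 0 + n δβι-reduces to n *)

Goal forall n : nat, n + 0 = n.
Proof.
  intros n.
  Fail reflexivity.  (* n + 0 is stuck: the match is on the variable n *)
  induction n as [|n IH]; simpl; [reflexivity | now rewrite IH].
Qed.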

which hash functions are orthogonal to each other?

I'm interested in multi-level data integrity checking and correcting, where multiple error-correcting codes are used (they can be 2 of the same type of code). I'm under the impression that a system using 2 codes would achieve maximum effectiveness if the 2 hash codes being used were orthogonal to each other.
Is there a list of which codes are orthogonal to what? Or do you need to use the same hashing function but with different parameters or usage?
I expect that the first-level ECC will be a Reed-Solomon code, though I do not actually have control over this first function, hence I cannot use a single code with improved capabilities.
Note that I'm not concerned with encryption security.
Edit: This is not a duplicate of
When are hash functions orthogonal to each other? because that question essentially asks what the definition of orthogonal hash functions is. I want examples of hash functions that are orthogonal.
I'm not certain it is even possible to enumerate all orthogonal hash functions. However, you only asked for some examples, so I will endeavour to provide some as well as some intuition as to what properties seem to lead to orthogonal hash functions.
From a related question, these two functions are orthogonal to each other:
Domain: Reals --> Codomain: Reals
f(x) = x + 1
g(x) = x + 2
This is a pretty obvious case. It is easier to determine orthogonality if the hash functions are (both) perfect hash functions, such as these. Please note that the term "perfect" is meant in the mathematical sense, not in the sense that these should ever be used as hash functions.
It is a more or less trivial case for perfect hash functions to satisfy orthogonality requirements. Whenever the functions are injective they are perfect hash functions and are thus orthogonal. Similar examples:
Domain: Integers --> Codomain: Integers
f(x) = 2x
g(x) = 3x
In this case, each function is injective but not bijective: every element of the domain maps to a unique element of the codomain, but there are many elements of the codomain that are not mapped to at all. These are still adequate for both perfect hashing and orthogonality. (Note that if the Domain/Codomain were Reals, these would be bijections.)
Functions that are not injective are more tricky to analyze. However, it is always the case that if one function is injective and the other is not, they are not orthogonal:
Domain: Reals --> Codomain: Reals
f(x) = e^x // Injective -- every x produces a unique value
g(x) = x^2 // Not injective -- every positive number can be produced by two different x's
So one trick is thus to know that one function is injective and the other is not. But what if neither is injective? I do not presently know of an algorithm for the general case that will determine this other than brute force.
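Here is what such a brute-force check can look like over a finite slice of the domain, sketched in Coq to match the other snippets on this page (orthogonal_upto is a name I made up, and it encodes one common definition of orthogonality: collisions under both functions imply equal inputs):

Require Import Arith List.

(* Over all pairs (x, y) in [0, n): if f and g both collide on x and y,
   then x and y must in fact be equal. *)
Definition orthogonal_upto (n : nat) (f g : nat -> nat) : bool :=
  forallb (fun x =>
    forallb (fun y =>
      implb (Nat.eqb (f x) (f y) && Nat.eqb (g x) (g y))
            (Nat.eqb x y))
      (seq 0 n))
    (seq 0 n).

Compute orthogonal_upto 50 (fun x => 2 * x) (fun x => 3 * x). (* = true  *)
Compute orthogonal_upto 50 (fun x => x / 2) (fun x => x / 3). (* = false *)

Note that a bounded check can only refute orthogonality by finding a colliding pair; a true answer says nothing beyond the tested range.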
Domain: Naturals --> Codomain: Naturals
j(x) = ceil(sqrt(x))
k(x) = ceil(x / 2)
Neither function is injective, in this case because of the presence of the obviously non-injective ceil, composed with sqrt and with halving, combined with a restricted domain. (In practice most hash functions will not have a domain more permissive than integers.) Testing out values will show that j will have non-unique results when k will not and vice versa:
j(1) = ceil(sqrt(1)) = ceil(1) = 1
j(2) = ceil(sqrt(2)) = ceil(~1.41) = 2
k(1) = ceil(1 / 2) = ceil(0.5) = 1
k(2) = ceil(2 / 2) = ceil(1) = 1
But what about these functions?
Domain: Integers --> Codomain: Reals
m(x) = cos(x^3) % 117
n(x) = ceil(e^x)
In these cases, neither of the functions is injective (due to the modulus and the ceil), but when do they have a collision? More importantly, for which pairs of values of x do they both have a collision? These questions are hard to answer. I would suspect they are not orthogonal, but without a specific counterexample, I'm not sure I could prove that.
These are not the only hash functions you could encounter, of course. So the trick to determining orthogonality is first to see if both functions are injective. If so, they are orthogonal. Second, see if exactly one is injective. If so, they are not orthogonal. Third, identify the pieces of each function that make it non-injective, try to determine their period or special cases (such as x = 0), and try to come up with counterexamples. Fourth, visit Mathematics Stack Exchange and hope someone can tell you where they break orthogonality, or prove that they don't.

Is this language regular? {a^i b^j| i=j mod 19}

I know that {a^i b^j | i = j} is not regular and I can prove it with the pumping lemma; similarly I can use the pumping lemma to prove this one not regular too. But I think I have seen a similar problem where such a language is actually regular. And because I'm not confident about my knowledge of the pumping lemma I'm asking this bad question. Sorry.
This is how I prove it: let w be a^p b^(19k+p); clearly this is in the language. Then I pump a, making it a^(p+1) b^(19k+p), which is no longer in the language. Therefore, it's not regular.
Is my proof right?
Take a look at this answer. In short, you're not pumping the string correctly. The pumping lemma states that your string w can be divided as w = xyz where |xy| ≤ p, and y is not empty. You can then pump the string as xy^i z for all i ≥ 0.
The key here is that the pumping lemma states that there exists a division of the string w satisfying these properties; you do not get to choose the division, and you can only pump the string as xy^i z.
However, this language is regular, so the pumping lemma can't help here: it can only be used to prove that a language is not regular, never that it is regular (its condition is necessary but not sufficient). To show that a language is regular you can construct a DFA, NFA or regular expression that describes your language exactly (see also the DFA sketch below). One such regular expression is:
(a^19)*(e|ab|aabb|aaabbb|...|a^18b^18)(b^19)*
where e is the empty string.
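Equivalently, a small DFA for this language just tracks the count difference modulo 19 together with whether a b has been seen yet; below is an executable sketch in Coq, in the spirit of the other snippets on this page (all names are mine):

Require Import Arith List.
Import ListNotations.

Inductive sym := A | B.

(* State: Some (seen_b, d) with d = (#a - #b) mod 19; None is the dead
   state, reached when an 'a' appears after a 'b'. *)
Definition step (s : option (bool * nat)) (c : sym) : option (bool * nat) :=
  match s, c with
  | Some (false, d), A => Some (false, (d + 1) mod 19)
  | Some (_, d), B => Some (true, (d + 18) mod 19)  (* subtracting 1 mod 19 *)
  | _, _ => None
  end.

Definition accepts (w : list sym) : bool :=
  match fold_left step w (Some (false, 0)) with
  | Some (_, 0) => true
  | _ => false
  end.

Compute accepts [A; B].               (* = true:  1 ≡ 1 (mod 19)  *)
Compute accepts (repeat A 20 ++ [B]). (* = true:  20 ≡ 1 (mod 19) *)
Compute accepts [A; A; B].            (* = false: 2 ≢ 1 (mod 19)  *)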
I suspect that your language is an example from an introductory course in automata or computation. If you're interested, the Myhill-Nerode theorem is not often covered in introductory material, but in this case it offers a very easy proof of regularity: consider the distinguishing extensions b, bb, bbb, ..., b^19, and the proof follows relatively easily from that.