How would you simplify this boolean expression? I don't know how to apply the boolean laws with the implication sign.
(pq -> r)'
and
(p -> (q'r))'
The implication rule, as stated on Wikipedia or in any other book on logic, is:
p -> q = p' + q (not p or q)
Therefore, when you apply the rule to your first example, the step-by-step solution is:
((pq)' + r)'
Distribute the outer negation: by De Morgan, the negation of a disjunction is the conjunction of the negations, giving ((pq)')' r'; the double negation then cancels, yielding pqr'. So:
(pq -> r)' = pqr'
Your second example is doable once you understand this one, so I'll leave it to you :)
If I have a snippet similar to this one:
Inductive Q : Set :=
| NULL : Q
| CELL : nat -> Q -> Q -> Q
.
Definition someFunc (q: Q) :=
match q with
| NULL => True
| CELL 0 -> q1 -> q2 => True
| CELL z -> q1 -> q2 => (someFunc q1) || (someFunc q2)
end.
Will the case where q is 0 -> Q -> Q -> Q exhaust the match and finish the function execution, or will it drop down to the next branch and destructure 0 as z?
The title of your question is not representative of the mistakes you made in the question itself. You think you have a problem with exhaustive pattern-matching (you don't), while you have a mere problem of syntax and basic typing.
You managed to define a type of trees (called Q) with two constructors. That part is good.
You attempted to define a function, which by the look of it, is a recursive function. When defining a recursive function, you should use the keyword Fixpoint.
When writing a pattern in a pattern-match construct, you have to respect the syntax of function application. So what you wrote as CELL 0 -> q1 -> q2 should be written CELL 0 q1 q2.
The values returned in the different branches of the pattern-match construct are of different types: True has type Prop, while an expression of the form (someFunc q1) || (someFunc q2) has type bool. These are two different types, and there are two different notions of "or" connective for them: one is written ||, the other one (which you should have used here) is written \/.
So here is an example of code that is similar to yours, but does not suffer with syntax or typing problems.
Inductive Q : Set :=
| NULL : Q
| CELL : nat -> Q -> Q -> Q
.
Fixpoint someFunc (q: Q) :=
match q with
| NULL => True
| CELL 0 q1 q2 => True
| CELL z q1 q2 => (someFunc q1) \/ (someFunc q2)
end.
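As a quick sanity check (this example is mine, not part of the original answer), the Prop computed by someFunc can be established for a small tree:
Example someFunc_example : someFunc (CELL 1 (CELL 0 NULL NULL) NULL).
Proof.
simpl. (* the goal reduces to True \/ True *)
left. exact I.
Qed.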
Maybe reading a few more examples of existing Coq code would help you get a better feel for the syntax. The supplementary material of the Coq'Art book provides a few of these examples (in particular, you can look at the material for chapter 1 and chapter 6).
Given a type (like List) in Coq, how do I figure out what the equality symbol "=" means for that type? What commands should I type to find its definition?
The equality symbol is just special infix syntax for the eq predicate. Perhaps surprisingly, it is defined the same way for every type, and we can even ask Coq to print it for us:
Print eq.
(* Answer: *)
Inductive eq (A : Type) (x : A) : A -> Prop :=
| eq_refl : eq x x.
This definition is so minimal that it might be hard to understand what is going on. Roughly speaking, it says that the most basic way to show that two expressions are equal is by reflexivity -- that is, when they are exactly the same. For instance, we can use eq_refl to prove that 5 = 5 or [4] = [4]:
Check eq_refl : 5 = 5.
Check eq_refl : [4] = [4].
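(As an aside, not in the original answer: for the [4] = [4] example to parse, you may first need Require Import List. Import ListNotations., which provides the bracket notation for lists.)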
There is more to this definition than meets the eye. First, Coq considers any two expressions that are equivalent up to simplification to be equal. In these cases, we can use eq_refl to show that they are equal as well. For instance:
Check eq_refl : 2 + 2 = 4.
This works because Coq knows the definition of addition on the natural numbers and is able to mechanically simplify the expression 2 + 2 until it arrives at 4.
Furthermore, the above definition tells us how to use an equality to prove other facts. Because of the way inductive types work in Coq, we can show the following result:
eq_elim :
forall (A : Type) (x y : A),
x = y ->
forall (P : A -> Prop), P x -> P y
Paraphrasing, when two things are equal, any fact that holds of the first one also holds of the second one. This principle is roughly what Coq uses under the hood when you invoke the rewrite tactic.
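As an illustration (a sketch of mine, not part of the original answer), eq_elim can be proved directly, essentially by rewriting with the equality:
Lemma eq_elim :
  forall (A : Type) (x y : A),
    x = y ->
    forall (P : A -> Prop), P x -> P y.
Proof.
  intros A x y Hxy P HPx.
  rewrite <- Hxy. (* turn the goal P y back into P x *)
  exact HPx.
Qed.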
Finally, equality interacts with other types in interesting ways. You asked what the definition of equality for list was. We can show that the following lemmas are valid:
forall A (x1 x2 : A) (l1 l2 : list A),
x1 :: l1 = x2 :: l2 -> x1 = x2 /\ l1 = l2
forall A (x : A) (l : list A),
x :: l <> nil.
In words:
if two nonempty lists are equal, then their heads and tails are equal;
a nonempty list is different from nil.
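Here is a short proof sketch for these two lemmas (mine, not part of the original answer); congruence and discriminate exploit exactly the injectivity and disjointness properties described just below:
Require Import List.

Lemma cons_inj : forall A (x1 x2 : A) (l1 l2 : list A),
  x1 :: l1 = x2 :: l2 -> x1 = x2 /\ l1 = l2.
Proof.
  intros A x1 x2 l1 l2 H.
  split; congruence. (* constructors are injective *)
Qed.

Lemma cons_not_nil : forall A (x : A) (l : list A),
  x :: l <> nil.
Proof.
  intros A x l H.
  discriminate H. (* different constructors are disjoint *)
Qed.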
More generally, if T is an inductive type, we can show that:
if two expressions starting with the same constructor are equal, then their arguments are equal (that is, constructors are injective); and
two expressions starting with different constructors are always different (that is, different constructors are disjoint).
These facts are not, strictly speaking, part of the definition of equality, but rather consequences of the way inductive types work in Coq. Unfortunately, it doesn't work as well for other kinds of types in Coq; in particular, the notion of equality for functions in Coq is not very useful, unless you are willing to add extra axioms into the theory.
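For example (again a sketch of mine, and it relies on the extra axiom just mentioned), the FunctionalExtensionality module lets you prove two pointwise-equal functions equal:
Require Import FunctionalExtensionality.

Lemma negb_negb_id : (fun b => negb (negb b)) = (fun b : bool => b).
Proof.
  apply functional_extensionality. (* uses the functional extensionality axiom *)
  intros [|]; reflexivity.
Qed.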
I have a string a, and on comparison with string b, if they are equal the result is a string c, else it is a string x. I know from a hypothesis that fun x <= fun c. How do I prove the statement below? fun is some function which takes a string and returns a nat.
fun (if a == b then c else x) <= S (fun c)
The logic seems obvious, but I am unable to split the if statement in Coq. Any help would be appreciated.
Thanks!
Let me complement Yves' answer by pointing out a general "view" pattern that works well in many situations where case analysis is needed. I will use the built-in support in math-comp, but the technique is not specific to it.
Let's assume your initial goal:
From mathcomp Require Import all_ssreflect.
Variables (T : eqType) (a b : T).
Lemma u : (if a == b then 0 else 1) = 2.
Proof.
Now, you could use case_eq + simpl to arrive at the next step; however, you can also match using more specialized "view" lemmas. For example, you could use ifP:
ifP : forall (A : Type) (b : bool) (vT vF : A),
if_spec b vT vF (b = false) b (if b then vT else vF)
where if_spec is:
Inductive if_spec (A : Type) (b : bool) (vT vF : A) (not_b : Prop) : bool -> A -> Set :=
IfSpecTrue : b -> if_spec b vT vF not_b true vT
| IfSpecFalse : not_b -> if_spec b vT vF not_b false vF
That looks a bit confusing; the important bit is the indices of the type family, bool -> A -> Set. The first exercise is: "prove the ifP lemma!"
Indeed, if we use ifP in our proof, we get:
case: ifP.
Goal 1: (a == b) = true -> 0 = 2
Goal 2: (a == b) = false -> 1 = 2
Note that we didn't have to specify anything! Indeed, lemmas of the form { A } + { B } are just special cases of this view pattern. This trick works in many other situations; for example, you can also use eqP, which has a spec relating the boolean equality with the propositional one. If you do:
case: eqP.
you'll get:
Goal 1: a = b -> 0 = 2
Goal 2: a <> b -> 1 = 2
which is very convenient. In fact, eqP is basically a generic version of the type_dec principle.
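To see the view close an actual goal, here is a tiny, self-contained variant (my own sketch, assuming math-comp is installed) where both branches are provable and the first one actually uses the equation:
From mathcomp Require Import all_ssreflect.

Lemma u_example (T : eqType) (a b : T) : (if a == b then a else b) = b.
Proof.
by case: eqP => [-> | _]. (* first goal: a = b is introduced and rewritten; second: discarded *)
Qed.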
If you can write an if-then-else statement, it means that the test expression a == b has a type with two constructors (like bool or sumbool). I will first assume the type is bool. In that case, the best approach during a proof is to enter the following command.
case_eq (a == b); intros hyp_ab.
This will generate two goals. In the first one, you will have an hypothesis
hyp_ab : a == b = true
that asserts that the test succeeds and the goal conclusion has the following shape (the if-then-else is replaced by the then branch):
fun c <= S (fun c)
In the second goal, you will have an hypothesis
hyp_ab : a == b = false
and the goal conclusion has the following shape (the if-then-else is replaced by the else branch).
fun x <= S (fun c)
You should be able to carry on from there.
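Here is a small, self-contained sketch of that bool case (my own example: it stands in String.eqb for == and String.length for fun, since the original definitions are not shown, and assumes a recent Coq where the String library provides eqb):
Require Import String.

Lemma example_if (a b c x : string) :
  String.length x <= String.length c ->
  String.length (if String.eqb a b then c else x) <= S (String.length c).
Proof.
  intros Hx.
  case_eq (String.eqb a b); intros hyp_ab; simpl.
  - (* hyp_ab : String.eqb a b = true; goal: String.length c <= S (String.length c) *)
    apply le_S. apply le_n.
  - (* hyp_ab : String.eqb a b = false; goal: String.length x <= S (String.length c) *)
    apply le_S. exact Hx.
Qed.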
On the other hand, the String library from Coq has a function string_dec with return type {a = b}+{a <> b}. If your notation a == b is a pretty notation for string_dec a b, it is better to use the following tactic:
destruct (a == b) as [hyp_ab | hyp_ab].
The behavior will be quite close to what I described above, only easier to use.
Intuitively, when you reason about an if-then-else statement, you use a command like case_eq, destruct, or case that leads you to study the two execution paths separately, remembering in a hypothesis why you took each of these paths.
I understood destruct as breaking an inductive definition into its constructors. I recently saw case_eq and I couldn't understand what it does differently.
1 subgoals
n : nat
k : nat
m : M.t nat
H : match M.find (elt:=nat) n m with
| Some _ => true
| None => false
end = true
______________________________________(1/1)
cc n (M.add k k m) = true
In the above context, if I do destruct (M.find n m) it breaks H into the true and false cases, whereas case_eq (M.find n m) leaves H intact and adds a separate hypothesis M.find (elt:=nat) n m = Some v, which I can use to rewrite and get the same effect as destruct.
Can someone please explain the difference between the two tactics and when each one should be used?
The first basic tactic in the family of destruct and case_eq is called case. This tactic modifies only the conclusion. When you type case A and A has a type T which is inductive, the system replaces A in the goal's conclusion by instances of all the constructors of type T, adding universal quantifications for the arguments of these constructors, if needed. This creates as many goals as there are constructors in type T. The formula A disappears from the goal and if there is any information about A in an hypothesis, the link between this information and all the new constructors that replace it in the conclusion gets lost. In spite of this, case is an important primitive tactic.
Losing the link between information in the hypotheses and instances of A in the conclusion is a big problem in practice, so developers came up with two solutions: case_eq and destruct.
Personally, when writing the Coq'Art book, I proposed that we write a simple tactic on top of case that keeps a link between A and the various constructor instances in the form of an equality. This is the tactic now called case_eq. It does the same thing as case but adds an extra implication in the goal, where the premise of the implication is an equality of the form A = ... and where ... is an instance of each constructor.
At about the same time, the tactic destruct was proposed. Instead of limiting the effect of replacement to the goal's conclusion, destruct also replaces all instances of A appearing in the hypotheses with instances of constructors of type T. In a sense, this is cleaner because it avoids relying on the extra concept of equality, but it is still incomplete because the expression A may be a compound expression f B, and if B appears in a hypothesis but not f B, the link between A and B will still be lost.
Illustration
Definition my_pred (n : nat) := match n with 0 => 0 | S p => p end.
Lemma example n : n <= 1 -> my_pred n <= 0.
Proof.
case_eq (my_pred n).
Gives the two goals
------------------
n <= 1 -> my_pred n = 0 -> 0 <= 0
and
------------------
forall p, my_pred n = S p -> n <= 1 -> S p <= 0
The extra equality is very useful here.
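For completeness, here is one way to finish the illustration (a sketch of mine, not part of the original answer; it repeats the definition so it is self-contained, assumes a recent Coq where Require Import Lia provides the lia tactic, and the exact order of the generated premises may vary slightly across Coq versions):
Require Import Lia.

Definition my_pred (n : nat) := match n with 0 => 0 | S p => p end.

Lemma example n : n <= 1 -> my_pred n <= 0.
Proof.
case_eq (my_pred n).
- (* my_pred n = 0: the conclusion is just 0 <= 0 *)
  intros; apply le_n.
- (* my_pred n = S p: the equation contradicts n <= 1 *)
  intros p Hpred Hn.
  destruct n as [| n']; simpl in Hpred.
  + discriminate Hpred.
  + (* Hpred : n' = S p and Hn : S n' <= 1 are contradictory *)
    lia.
Qed.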
In this question I suggested that the developer use case_eq (a == b) when (a == b) has type bool, because this type is inductive and not very informative (its constructors have no arguments). But when (a == b) has type {a = b}+{a <> b} (which is the case for the string_dec function), the constructors have arguments that are proofs of interesting properties, and the extra universal quantification over the arguments of the constructors is enough to give the relevant information, in this case a = b in the first goal and a <> b in the second goal.
Ever since I learned a little bit of Coq, I have wanted to write a Coq proof of the so-called division algorithm, which is actually a logical proposition: forall n m : nat, exists q : nat, exists r : nat, n = q * m + r
I recently accomplished that task using what I learned from Software Foundations.
Coq being a system for developing constructive proofs, my proof is in effect a method to construct suitable values q and r from values m and n.
Coq has an intriguing facility for "extracting" an algorithm in Coq's algorithm language (Gallina) to general-purpose functional programming languages including Haskell.
Separately I have managed to write the divmod operation as a Gallina Fixpoint and extract that. I want to note carefully that that task is not what I'm considering here.
Adam Chlipala has written in Certified Programming with Dependent Types that "Many fans of the Curry-Howard correspondence support the idea of extracting programs from proofs. In reality, few users of Coq and related tools do any such thing."
Is it even possible to extract the algorithm implicit in my proof to Haskell? If it is possible, how would it be done?
Thanks to Prof. Pierce's summer 2012 video 4.1, which Dan Feltey suggested, we see that the key is that the theorem to be extracted must provide a member of Type rather than the usual kind of proposition, which lives in Prop.
For the particular theorem the affected construct is the inductive Prop ex and its notation exists. Similarly to what Prof. Pierce has done, we can state our own alternate definitions ex_t and exists_t that replace occurrences of Prop with occurrences of Type.
Here is the usual definition of ex and exists, restated much as they are defined in Coq's standard library.
Inductive ex (X:Type) (P : X->Prop) : Prop :=
ex_intro : forall (witness:X), P witness -> ex X P.
Notation "'exists' x : X , p" := (ex _ (fun x:X => p))
(at level 200, x ident, right associativity) : type_scope.
Here are the alternate definitions.
Inductive ex_t (X:Type) (P : X->Type) : Type :=
ex_t_intro : forall (witness:X), P witness -> ex_t X P.
Notation "'exists_t' x : X , p" := (ex_t _ (fun x:X => p))
(at level 200, x ident, right associativity) : type_scope.
Now, somewhat unfortunately, it is necessary to repeat both the statement and the proof of the theorem using these new definitions.
What in the world??
Why is it necessary to restate the theorem and to repeat its proof, when they differ only in using an alternative definition of the quantifier??
I had hoped to use the existing theorem in Prop to prove the theorem over again in Type. That strategy fails when Coq rejects the proof tactic inversion for a Prop in the environment when that Prop uses exists and the goal is a Type that uses exists_t. Coq reports "Error: Inversion would require case analysis on sort Set which is not allowed for inductive definition ex." This behavior occurred in Coq 8.3; I am not certain that it still occurs in Coq 8.4.
I think the need to repeat the proof is actually profound, although I doubt that I personally am quite managing to perceive its profundity. It involves the facts that Prop is "impredicative" and Type is not impredicative, but rather, tacitly "stratified". Impredicativity is (if I understand correctly) vulnerability to Russell's paradox, in which the set S of sets that are not members of themselves can neither be a member of S, nor a non-member of S. Type avoids Russell's paradox by tacitly creating a sequence of higher types that contain lower types. Because Coq is drenched in the formulae-as-types interpretation of the Curry-Howard correspondence, and if I am getting this right, we can even understand stratification of types in Coq as a way to avoid Gödel incompleteness, the phenomenon that certain formulae express constraints on formulae such as themselves and thereby become unknowable as to their truth or falsehood.
Back on planet Earth, here is the repeated statement of the theorem using "exists_t".
Theorem divalg_t : forall n m : nat, exists_t q : nat,
exists_t r : nat, n = plus (mult q m) r.
As I have omitted the proof of divalg, I will also omit the proof of divalg_t. I will only mention that we do have the good fortune that proof tactics including "exists" and "inversion" work just the same with our new definitions "ex_t" and "exists_t".
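As a tiny check of that claim (my own example, not in the original post), the exists tactic does apply to a goal stated with exists_t:
Lemma ex_t_demo : exists_t n : nat, n = 0.
Proof.
exists 0. (* the exists tactic also works for the single-constructor type ex_t *)
reflexivity.
Qed.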
Finally, the extraction itself is accomplished easily.
Extraction Language Haskell.
Extraction "divalg.hs" divalg_t.
The resulting Haskell file contains a number of definitions, the heart of which is the reasonably nice code below. And I was only slightly hampered by my near-total ignorance of the Haskell programming language. Note that Ex_t_intro creates a result whose type is Ex_t; O and S are the zero and the successor function from Peano arithmetic; beq_nat tests Peano numbers for equality; nat_rec is a higher-order function that performs recursion over a natural number using the function among its arguments. The definition of nat_rec is not shown here. At any rate it is generated by Coq according to the inductive type "nat" that was defined in Coq.
divalg :: Nat -> Nat -> Ex_t Nat (Ex_t Nat ())
divalg n m =
case m of {
O -> Ex_t_intro O (Ex_t_intro n __);
S m' ->
nat_rec (Ex_t_intro O (Ex_t_intro O __)) (\n' iHn' ->
case iHn' of {
Ex_t_intro q' hq' ->
case hq' of {
Ex_t_intro r' _ ->
let {k = beq_nat r' m'} in
case k of {
True -> Ex_t_intro (S q') (Ex_t_intro O __);
False -> Ex_t_intro q' (Ex_t_intro (S r') __)}}}) n}
Update 2013-04-24: I know a bit more Haskell now. To assist others in reading the extracted code above, I'm presenting the following hand-rewritten code that I claim is equivalent and more readable. I'm also presenting the extracted definitions Nat, O, S, and nat_rec that I did not eliminate.
-- Extracted: Natural numbers (non-negative integers)
-- in the manner in which Peano defined them.
data Nat =
O
| S Nat
deriving (Eq, Show)
-- Extracted: General recursion over natural numbers,
-- an interpretation of Nat in the manner of higher-order abstract syntax.
nat_rec :: a1 -> (Nat -> a1 -> a1) -> Nat -> a1
nat_rec f f0 n =
case n of {
O -> f;
S n0 -> f0 n0 (nat_rec f f0 n0)}
-- Given non-negative integers n and m, produce (q, r) with n = q * m + r.
divalg_t :: Nat -> Nat -> (Nat, Nat)
divalg_t n O = (O, n) -- n/0: Define quotient 0, remainder n.
divalg_t n (S m') = divpos n m' -- n/(S m')
where
-- Given non-negative integers n and m',
-- and defining m = m' + 1,
-- produce (q, r) with n = q * m + r
-- so that q = floor (n / m) and r = n % m.
divpos :: Nat -> Nat -> (Nat, Nat)
divpos n m' = nat_rec (O, O) (incrDivMod m') n
-- Given a non-negative integer m' and
-- a pair of non-negative integers (q', r') with r' <= m',
-- and defining m = m' + 1,
-- produce (q, r) with q*m + r = q'*m + r' + 1 and r <= m'.
incrDivMod :: Nat -> Nat -> (Nat, Nat) -> (Nat, Nat)
incrDivMod m' _ (q', r')
| r' == m' = (S q', O)
| otherwise = (q', S r')
The current copy of Software Foundations dated July 25, 2012, answers this quite concisely in the late chapter "Extraction2". The answer is that it can certainly be done, much like this:
Extraction Language Haskell.
Extraction "divalg.hs" divalg.
One more trick is necessary. Instead of a Prop, divalg must be a Type. Otherwise it will be erased in the process of extraction.
Uh oh, #Anthill is correct: I haven't answered the question, because I don't know how to explain how Prof. Pierce accomplished that in his NormInType.v variant of his Norm.v and MoreStlc.v.
OK, here's the rest of my partial answer anyway.
Where "divalg" appears above, it will be necessary to provide a space-separated list of all of the propositions (which must each be redefined as a Type rather than a Prop) on which divalg relies. For a thorough, interesting, and working example of a proof extraction, one may consult the chapter Extraction2 mentioned above. That example extracts to OCaml, but adapting it for Haskell is simply a matter of using Extraction Language Haskell as above.
In part, the reason that I spent some time not knowing the above answer is that I have been using the copy of Software Foundations dated October 14, 2010, that I downloaded in 2011.