Exercise 6.7 in Coq'Art, or the final exercise of the Logic chapter in Software Foundations: show that the following are equivalent.
Definition peirce := forall P Q:Prop, ((P->Q)->P)->P.
Definition classic := forall P:Prop, ~~P -> P.
Definition excluded_middle := forall P:Prop, P\/~P.
Definition de_morgan_not_and_not := forall P Q:Prop, ~(~P/\~Q)->P\/Q.
Definition implies_to_or := forall P Q:Prop, (P->Q)->(~P\/Q).
The solution set expresses this by a circular chain of implications, using five separate lemmas. But "TFAE" proofs are common enough in mathematics that I'd like to have an idiom to express them. Is there one in Coq?
This type of pattern is very easy to express in Coq, although setting up the infrastructure to do so might take some effort.
First, we define a proposition that expresses that all propositions in a list are equivalent:
Require Import Coq.Lists.List. Import ListNotations.
Definition all_equivalent (Ps : list Prop) : Prop :=
forall n m : nat, nth n Ps False -> nth m Ps True.
Next, we want to capture the standard pattern for proving this kind of result: if each proposition in the list implies the next one, and the last implies the first, we know they are all equivalent. (We could also have a more general pattern, where we replace a straight list of implications with a graph of implications between the propositions, whose transitive closure generates a complete graph. We'll avoid that in the interest of simplicity.) The premise of this pattern is easy to express; it is just a code transcription of the English explanation above.
Fixpoint all_equivalent'_aux
(first current : Prop) (rest : list Prop) : Prop :=
match rest with
| [] => current -> first
| P :: rest' => (current -> P) /\ all_equivalent'_aux first P rest'
end.
Definition all_equivalent' (Ps : list Prop) : Prop :=
match Ps with
| first :: second :: rest =>
(first -> second) /\ all_equivalent'_aux first second rest
| _ => True
end.
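As a quick sanity check (my addition, not needed for the rest of the development), on a three-element list this premise unfolds to the expected cycle of implications:

Goal forall P Q R : Prop,
  all_equivalent' [P; Q; R] = ((P -> Q) /\ (Q -> R) /\ (R -> P)).
Proof. reflexivity. Qed.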
The difficult part is showing that this premise implies the conclusion we want:
Lemma all_equivalentP Ps : all_equivalent' Ps -> all_equivalent Ps.
Showing that this lemma holds probably requires some ingenuity to find a strong enough inductive generalization. I can't quite prove it right now, but I might add a solution to this answer later if you want.
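For the original exercise, once all_equivalentP is established (or temporarily admitted), the statement reduces to the five implications of the cycle. A sketch:

Lemma classical_tfae :
  all_equivalent
    [peirce; classic; excluded_middle; de_morgan_not_and_not; implies_to_or].
Proof.
  apply all_equivalentP; simpl; repeat split.
  (* Five goals remain: peirce -> classic, classic -> excluded_middle,
     excluded_middle -> de_morgan_not_and_not,
     de_morgan_not_and_not -> implies_to_or, and implies_to_or -> peirce. *)
Abort.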
I'm fairly new to Coq, and was doing some Katas on CodeWars for fun and learning.
I'm stuck with one of them and want to hear some ideas from you.
So, I have:
Record iso (A B : Set) : Set :=
bijection {
A_to_B : A -> B;
B_to_A : B -> A;
A_B_A : forall a : A, B_to_A (A_to_B a) = a;
B_A_B : forall b : B, A_to_B (B_to_A b) = b
}.
(* nat_plus_nat : a set having size(nat) more elements than nat. (provided in preloaded) *)
Inductive nat_plus_nat : Set := left (n : nat) | right (n : nat).
Theorem nat_iso_natpnat : iso nat nat_plus_nat.
I have an idea, but I can't implement it, and I don't know if it's feasible. Basically, I want to map every odd nat to one constructor (left, for example) and every even nat to the other (right, for example). Will this work? If not, how can it be done?
Right now I'm stuck on the fact that A_to_B defined as fun n => if odd n then left n else right n and B_to_A defined as fun n => match n with | left n' => n' | right n' => n' end won't give me enough facts to eliminate some cases.
You need to get the math right first: find two functions that are inverses of each other.
Your initial intent is correct: odd numbers go to one side and even numbers to the other, but what you store on each side should cover all the natural numbers, so you will probably have to divide by 2 somewhere.
On the Coq side, you should load the Arith library by starting with the following line:
Require Import Arith.
This way, you can benefit from existing functions, like Nat.div2 and Nat.even and all the existing theorems about them. To find the relevant theorems, I suggest commands like:
Search Nat.even 2.
Search Nat.div2.
Last hint: proving properties of Nat.div2 by induction is rather difficult for beginners. Try to use the existing theorems as much as possible. If you do choose to perform a proof by induction concerning div2, look at the sources in the file theories/Arith/Div2.v: the author of that file designed a specific induction principle called ind_0_1_SS just for that purpose.
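To make the hint a bit more concrete, here is one possible pair of candidate functions (the names n_to_npn and npn_to_n are mine, and the two round-trip proofs are left out; Search Nat.div2 should turn up the facts about div2 and doubling that they need):

Require Import Arith.

(* Even naturals go to the left, odd ones to the right; each side stores the
   "half", so every natural number appears on each side. *)
Definition n_to_npn (n : nat) : nat_plus_nat :=
  if Nat.even n then left (Nat.div2 n) else right (Nat.div2 n).

Definition npn_to_n (x : nat_plus_nat) : nat :=
  match x with
  | left k => 2 * k        (* even numbers come back from the left side *)
  | right k => S (2 * k)   (* odd numbers come back from the right side *)
  end.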
I can define the following inductive type:
Inductive T : Type -> Type :=
| c1 : forall (A : Type), A -> T A
| c2 : T unit.
But then the command Check (c1 (T nat)) fails with the message: The term T nat has type Type@{max(Set, Top.3+1)} while it is expected to have type Type@{Top.3} (universe inconsistency).
How can I tweak the above inductive definition so that c1 (T nat) does not cause a universe inconsistency, and without setting universe polymorphism on?
The following works, but I would prefer a solution without adding equality:
Inductive T (A : Type) : Type :=
| c1 : A -> T A
| c2' : A = unit -> T A.
Definition c2 : T unit := c2' unit eq_refl.
Check (c1 (T nat)).
(*
c1 (T nat)
: T nat -> T (T nat)
*)
Let me first answer the question of why we get the universe inconsistency in the first place.
Universe inconsistencies are the errors that Coq reports to avoid proofs of False via Russell's paradox, which results from considering the set of all sets which do not contain themselves.
There's a variant which is more convenient to formalize in type theory called Hurkens' paradox; see Coq.Logic.Hurkens for more details. There is a specialization of Hurkens' paradox which proves that no type can retract to a smaller type. That is, given
U := Type@{u}
A : U
down : U -> A
up : A -> U
up_down : forall (X:U), up (down X) = X
we can prove False.
This is almost exactly the setup of your Inductive type. Annotating your type with universes, you start with
Inductive T : Type@{i} -> Type@{j} :=
| c1 : forall (A : Type@{i}), A -> T A
| c2 : T unit.
Note that we can invert this inductive; we may write
Definition c1' (A : Type@{i}) (v : T A) : A
:= match v with
| c1 A x => x
| c2 => tt
end.
Lemma c1'_c1 (A : Type@{i}) : forall v, c1' A (c1 A v) = v.
Proof. reflexivity. Qed.
Suppose, for a moment, that c1 (T nat) typechecked. Since T nat : Type@{j}, this would require j <= i. If it gave us that j < i, then we would be set: we could write c1 Type@{j}. And this is exactly the setup for the variant of Hurkens' paradox that I mentioned above! We could define
u := j
U := Type@{j}
A := T Type@{j}
down : U -> A := c1 Type@{j}
up : A -> U := c1' Type@{j}
up_down := c1'_c1 Type@{j}
and hence prove False.
Coq needs to implement a rule for avoiding this paradox. As described here, the rule is that for each (non-parameter) argument to a constructor of an inductive, if the type of the argument has a sort in universe u, then the universe of the inductive is constrained to be >= u. In this case, this is stricter than Coq needs to be. As mentioned by SkySkimmer here, Coq could recognize arguments which appear directly in locations which are indices of the inductive, and disregard those in the same way that it disregards parameters.
So, to finally answer your question, I believe the following are your only options:
You can Set Universe Polymorphism so that in T (T nat), your two Ts take different universe arguments. (Equivalently, you can write Polymorphic Inductive; see the sketch after this list.)
You can take advantage of how Coq treats parameters of inductive types specially, which mandates using equality in your case. (The need for equality is a general consequence of turning an indexed inductive type into a parameterized one, that is, of moving arguments from after the : to before it.)
You can pass Coq the flag -type-in-type to entirely disable universe checking.
You can fix bug #7929, which I reported as part of digging into this question, to make Coq handle arguments of constructors which appear in index-position in the inductive in the same way it handles parameters of inductive types.
(You can find another edge case of the system, and manage to trick Coq into ignoring the universes you want to slip past it, and probably find a proof of False in the process. (Possibly involving module subtyping, see, e.g., this recent bug in modules with universes.))
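For concreteness, here is a minimal sketch of the first option (in a fresh file, so that it does not clash with the definitions above):

Polymorphic Inductive T : Type -> Type :=
| c1 : forall (A : Type), A -> T A
| c2 : T unit.

Check (c1 (T nat)).
(* Accepted: each occurrence of T is instantiated at its own universe levels. *)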
I am learning Coq and I'd like to use it to formalize regular language theory, especially finite automata. Let's say I have a structure for an automaton as follows:
Record automata : Type := {
dfa_set_states : list state;
init_state : state;
end_state : state;
dfa_func: state -> terminal -> state;
}.
Where state is an inductive type as:
Inductive state:Type :=
S.
And the type terminal is
Inductive terminal:Type :=
a | b.
I am trying to define it so that later I'll be able to generalize the definition to any regular language. For now, I want to construct an automaton which recognizes the language a*b*, that is, all words consisting of a block of a's followed by a block of b's over the alphabet {a,b}. Does anyone have an idea on how to build some kind of fixpoint function that will run over the word (which I see as a list of terminal) and tell me whether that automaton recognizes the word or not? Any idea/help will be greatly appreciated.
Thanks in advance,
Erick.
Because you're restricting yourself to regular languages, this is quite simple: you just have to use a fold. Here is a sample:
Require Import Coq.Lists.List.
Import ListNotations.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Record dfa (S A : Type) := DFA {
initial_state : S;
is_final : S -> bool;
next : S -> A -> S
}.
Definition run_dfa S A (m : dfa S A) (l : list A) : bool :=
is_final m (fold_left (next m) l (initial_state m)).
This snippet is a little bit different from your original definition in that the state and alphabet components are now type parameters of the DFA, and in that I have replaced the end state with a predicate that answers whether we are in an accepting state or not. The run_dfa function simply iterates the transition function of the DFA starting from the initial state, and then tests whether the last state is an accepting state.
You can use this infrastructure to describe pretty much any regular language. For instance, here is an automaton for recognizing a*b*:
Inductive ab := A | B.
Inductive ab_state : Type :=
ReadA | ReadB | Fail.
Definition ab_dfa : dfa ab_state ab := {|
initial_state := ReadA;
is_final s := match s with Fail => false | _ => true end;
next s x :=
match s, x with
| ReadB, A => Fail
| ReadA, B => ReadB
| _, _ => s
end
|}.
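As a quick sanity check (not part of the original answer), we can run the automaton on a couple of concrete inputs:

Compute run_dfa ab_dfa [A; A; B].   (* = true  : "aab" is in a*b* *)
Compute run_dfa ab_dfa [B; A].      (* = false : an A after a B leads to Fail *)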
We can prove that this automaton does what we expect. Here is a theorem that says that it accepts strings of the sought language:
Lemma ab_dfa_complete n m : run_dfa ab_dfa (repeat A n ++ repeat B m) = true.
Proof.
unfold run_dfa. rewrite fold_left_app.
assert (fold_left (next ab_dfa) (repeat A n) (initial_state ab_dfa) = ReadA) as ->.
{ now simpl; induction n as [| n IH]; simpl; trivial. }
destruct m as [|m]; simpl; trivial.
induction m as [|m IH]; simpl; trivial.
Qed.
We can also state a converse, that says that it accepts only strings of that language, and nothing else. I have left the proof out; it shouldn't be hard to figure it out.
Lemma ab_dfa_sound l :
run_dfa ab_dfa l = true ->
exists n m, l = repeat A n ++ repeat B m.
Unfortunately, there is not much we can do with this representation besides running the automaton. In particular, we cannot minimize an automaton, test whether two automata are equivalent, etc. To support such operations, these functions would also need to take as arguments lists that enumerate all elements of the state and alphabet types S and A.
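One way to set that up (my own sketch, not part of the answer above) is to bundle such enumerations with the automaton; algorithms like equivalence checking could then quantify over these lists:

(* A DFA packaged with enumerations of its states and letters, together with
   proofs that the enumerations are complete. *)
Record fin_dfa (S A : Type) := FinDFA {
  fd_auto : dfa S A;
  fd_states : list S;
  fd_letters : list A;
  fd_states_complete : forall s : S, In s fd_states;
  fd_letters_complete : forall a : A, In a fd_letters
}.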
I have been struggling on this for a while now. I have an inductive type:
Definition char := nat.
Definition string := list char.
Inductive Exp : Set :=
| Lit : char -> Exp
| And : Exp -> Exp -> Exp
| Or : Exp -> Exp -> Exp
| Many : Exp -> Exp.
from which I define a family of types inductively:
Inductive Language : Exp -> Set :=
| LangLit : forall c:char, Language (Lit c)
| LangAnd : forall r1 r2: Exp, Language(r1) -> Language(r2) -> Language(And r1 r2)
| LangOrLeft : forall r1 r2: Exp, Language(r1) -> Language(Or r1 r2)
| LangOrRight : forall r1 r2: Exp, Language(r2) -> Language(Or r1 r2)
| LangEmpty : forall r: Exp, Language (Many r)
| LangMany : forall r: Exp, Language (Many r) -> Language r -> Language (Many r).
The rationale here is that given a regular expression r : Exp I am attempting to represent the language associated with r as a type Language r, and I am doing so with a single inductive definition.
I would like to prove:
Lemma L1 : forall (c:char)(x:Language (Lit c)),
x = LangLit c.
(In other words, the type Language (Lit c) has only one element, i.e. the language of the regular expression 'c' consists of the single string "c". Of course I need to define some semantics converting elements of Language r to string.)
Now the specifics of this problem are not important and simply serve to motivate my question: let us use nat instead of Exp and let us define a type List n which represents the lists of length n:
Parameter A:Set.
Inductive List : nat -> Set :=
| ListNil : List 0
| ListCons : forall (n:nat), A -> List n -> List (S n).
Here again I am using a single inductive definition to define a family of types List n.
I would like to prove:
Lemma L2: forall (x: List 0),
x = ListNil.
(in other words, the type List 0 has only one element).
I have run out of ideas on this one.
Normally when attempting to prove (negative) results with inductive types (or predicates), I would use the elim tactic (having made sure all the relevant hypotheses are inside my goal (generalize) and that only variables occur in the type constructors). But elim is no good in this case.
If you are willing to accept more than just the basic logic of Coq, you can just use the dependent destruction tactic, available in the Program library (I've taken the liberty of rephrasing your last example in terms of standard-library vectors):
Require Coq.Vectors.Vector.
Require Import Program.
Lemma l0 A (v : Vector.t A 0) : v = @Vector.nil A.
Proof.
now dependent destruction v.
Qed.
If you inspect the term, you'll see that this tactic relied on the JMeq_eq axiom to get the proof to go through:
Print Assumptions l0.
Axioms:
JMeq_eq : forall (A : Type) (x y : A), x ~= y -> x = y
Fortunately, it is possible to prove l0 without having to resort to features outside of Coq's basic logic, by making a small change to the statement of the previous lemma.
Lemma l0_gen A n (v : Vector.t A n) :
match n return Vector.t A n -> Prop with
| 0 => fun v => v = @Vector.nil A
| _ => fun _ => True
end v.
Proof.
now destruct v.
Qed.
Lemma l0' A (v : Vector.t A 0) : v = @Vector.nil A.
Proof.
exact (l0_gen A 0 v).
Qed.
We can see that this new proof does not require any additional axioms:
Print Assumptions l0'.
Closed under the global context
What happened here? The problem, roughly speaking, is that in Coq we cannot directly perform case analysis on terms of dependent types whose indices have a specific shape (such as 0, in your case). Instead, we must prove a more general statement where the problematic indices are replaced by variables. This is exactly what the l0_gen lemma is doing. Notice how we had to make the match on n return a function that abstracts on v. This is another instance of what is known as the "convoy pattern". Had we written
match n with
| 0 => v = @Vector.nil A
| _ => True
end.
Coq would see the v in the 0 branch as having type Vector.t A n, making that branch ill-typed.
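The same generalization goes through for the List type from the question (a sketch, assuming the Parameter A and the List definition given there):

Lemma L2_gen n (x : List n) :
  match n return List n -> Prop with
  | 0 => fun x => x = ListNil
  | _ => fun _ => True
  end x.
Proof.
  now destruct x.
Qed.

Lemma L2 (x : List 0) : x = ListNil.
Proof.
  exact (L2_gen 0 x).
Qed.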
Coming up with such generalizations is one of the big pains of doing dependently typed programming in Coq. Other systems, such as Agda, make it possible to write this kind of code with much less effort, but it was only recently shown that this can be done without relying on the extra axioms that Coq wanted to avoid including in its basic theory. We can only hope that this will be simplified in future versions.
Ever since I learned a little bit of Coq I wanted to learn to write a Coq proof of the so-called division algorithm that is actually a logical proposition: forall n m : nat, exists q : nat, exists r : nat, n = q * m + r
I recently accomplished that task using what I learned from Software Foundations.
Coq being a system for developing constructive proofs, my proof is in effect a method to construct suitable values q and r from values m and n.
Coq has an intriguing facility for "extracting" an algorithm in Coq's algorithm language (Gallina) to general-purpose functional programming languages including Haskell.
Separately I have managed to write the divmod operation as a Gallina Fixpoint and extract that. I want to note carefully that that task is not what I'm considering here.
Adam Chlipala has written in Certified Programming with Dependent Types that "Many fans of the Curry-Howard correspondence support the idea of extracting programs from proofs. In reality, few users of Coq and related tools do any such thing."
Is it even possible to extract the algorithm implicit in my proof to Haskell? If it is possible, how would it be done?
Thanks to Prof. Pierce's summer 2012 video 4.1, as Dan Feltey suggested, we see that the key is that the theorem to be extracted must provide a member of Type rather than the usual kind of proposition, which is Prop.
For the particular theorem the affected construct is the inductive Prop ex and its notation exists. Similarly to what Prof. Pierce has done, we can state our own alternate definitions ex_t and exists_t that replace occurrences of Prop with occurrences of Type.
Here is the usual definition of ex and its exists notation, essentially as they appear in Coq's standard library.
Inductive ex (X:Type) (P : X->Prop) : Prop :=
ex_intro : forall (witness:X), P witness -> ex X P.
Notation "'exists' x : X , p" := (ex _ (fun x:X => p))
(at level 200, x ident, right associativity) : type_scope.
Here are the alternate definitions.
Inductive ex_t (X:Type) (P : X->Type) : Type :=
ex_t_intro : forall (witness:X), P witness -> ex_t X P.
Notation "'exists_t' x : X , p" := (ex_t _ (fun x:X => p))
(at level 200, x ident, right associativity) : type_scope.
Now, somewhat unfortunately, it is necessary to repeat both the statement and the proof of the theorem using these new definitions.
What in the world??
Why is it necessary to make a reiterated statement of the theorem and a reiterated proof of the theorem, that differ only by using an alternative definition of the quantifier??
I had hoped to use the existing theorem in Prop to prove the theorem over again in Type. That strategy fails when Coq rejects the proof tactic inversion for a Prop in the environment when that Prop uses exists and the goal is a Type that uses exists_t. Coq reports "Error: Inversion would require case analysis on sort Set which is not allowed for inductive definition ex." This behavior occurred in Coq 8.3. I am not certain whether it still occurs in Coq 8.4.
I think the need to repeat the proof is actually profound, although I doubt that I personally am quite managing to perceive its profundity. It involves the facts that Prop is "impredicative" and Type is not impredicative, but rather, tacitly "stratified". Impredicativity is (if I understand correctly) what opens the door to Russell's paradox, in which the set S of sets that are not members of themselves can neither be a member of S nor a non-member of S. Type avoids Russell's paradox by tacitly creating a sequence of higher types that contain lower types. Because Coq is drenched in the formulae-as-types interpretation of the Curry-Howard correspondence, and if I am getting this right, we can even understand the stratification of types in Coq as a way to avoid Gödel incompleteness, the phenomenon that certain formulae express constraints on formulae such as themselves and thereby become unknowable as to their truth or falsehood.
Back on planet Earth, here is the repeated statement of the theorem using "exists_t".
Theorem divalg_t : forall n m : nat, exists_t q : nat,
exists_t r : nat, n = plus (mult q m) r.
As I have omitted the proof of divalg, I will also omit the proof of divalg_t. I will only mention that we do have the good fortune that proof tactics including "exists" and "inversion" work just the same with our new definitions "ex_t" and "exists_t".
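As an aside, the standard library's sigT type, with notation { x : X & P }, plays the same role as our hand-rolled ex_t and also lives in Type. Assuming the same proof script carries over, the theorem could equivalently be stated as follows; extraction treats sigT just like ex_t, since both are ordinary inductives in Type.

Theorem divalg_sig : forall n m : nat,
  {q : nat & {r : nat & n = plus (mult q m) r}}.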
Finally, the extraction itself is accomplished easily.
Extraction Language Haskell.
Extraction "divalg.hs" divalg_t.
The resulting Haskell file contains a number of definitions, the heart of which is the reasonably nice code, below. And I was only slightly hampered by my near-total ignorance of the Haskell programming language. Note that Ex_t_intro creates a result whose type is Ex_t; O and S are the zero and the successor function from Peano arithmetic; beq_nat tests Peano numbers for equality; nat_rec is a higher-order function that recurs over the function among its arguments. The definition of nat_rec is not shown here. At any rate it is generated by Coq according to the inductive type "nat" that was defined in Coq.
divalg :: Nat -> Nat -> Ex_t Nat (Ex_t Nat ())
divalg n m =
case m of {
O -> Ex_t_intro O (Ex_t_intro n __);
S m' ->
nat_rec (Ex_t_intro O (Ex_t_intro O __)) (\n' iHn' ->
case iHn' of {
Ex_t_intro q' hq' ->
case hq' of {
Ex_t_intro r' _ ->
let {k = beq_nat r' m'} in
case k of {
True -> Ex_t_intro (S q') (Ex_t_intro O __);
False -> Ex_t_intro q' (Ex_t_intro (S r') __)}}}) n}
Update 2013-04-24: I know a bit more Haskell now. To assist others in reading the extracted code above, I'm presenting the following hand-rewritten code that I claim is equivalent and more readable. I'm also presenting the extracted definitions Nat, O, S, and nat_rec that I did not eliminate.
-- Extracted: Natural numbers (non-negative integers)
-- in the manner in which Peano defined them.
data Nat =
O
| S Nat
deriving (Eq, Show)
-- Extracted: recursion over natural numbers;
-- nat_rec is the recursor (fold) for Nat.
nat_rec :: a1 -> (Nat -> a1 -> a1) -> Nat -> a1
nat_rec f f0 n =
case n of {
O -> f;
S n0 -> f0 n0 (nat_rec f f0 n0)}
-- Given non-negative integers n and m, produce (q, r) with n = q * m + r.
divalg_t :: Nat -> Nat -> (Nat, Nat)
divalg_t n O = (O, n) -- n/0: Define quotient 0, remainder n.
divalg_t n (S m') = divpos n m' -- n/(S m')
where
-- Given non-negative integers n and m',
-- and defining m = m' + 1,
-- produce (q, r) with n = q * m + r
-- so that q = floor (n / m) and r = n % m.
divpos :: Nat -> Nat -> (Nat, Nat)
divpos n m' = nat_rec (O, O) (incrDivMod m') n
-- Given a non-negative integer m' and
-- a pair of non-negative integers (q', r') with r' <= m',
-- and defining m = m' + 1,
-- produce (q, r) with q*m + r = q'*m + r' + 1 and r <= m'.
incrDivMod :: Nat -> Nat -> (Nat, Nat) -> (Nat, Nat)
incrDivMod m' _ (q', r')
| r' == m' = (S q', O)
| otherwise = (q', S r')
The current copy of Software Foundations dated July 25, 2012, answers this quite concisely in the late chapter "Extraction2". The answer is that it can certainly be done, much like this:
Extraction Language Haskell.
Extraction "divalg.hs" divalg.
One more trick is necessary. Instead of a Prop, divalg must be a Type. Otherwise it will be erased in the process of extraction.
Uh oh, @Anthill is correct, I haven't answered the question because I don't know how to explain how Prof. Pierce accomplished that in his NormInType.v variant of his Norm.v and MoreStlc.v.
OK, here's the rest of my partial answer anyway.
Where "divalg" appears above, it will be necessary to provide a space-separated list of all of the propositions (which must each be redefined as a Type rather than a Prop) on which divalg relies. For a thorough, interesting, and working example of a proof extraction, one may consult the chapter Extraction2 mentioned above. That example extracts to OCaml, but adapting it for Haskell is simply a matter of using Extraction Language Haskell as above.
In part, the reason that I spent some time not knowing the above answer is that I have been using the copy of Software Foundations dated October 14, 2010, that I downloaded in 2011.