Universe inconsistency with sumbool in Coq

I have defined and proven the following lemma:
NM.In k m -> {NM.In k m0}+{NM.In k m1}.
I can also prove a symmetric lemma for:
{NM.In k m0}+{NM.In k m1} -> NM.In k m
However when I try to combine them into one as:
NM.In k m <-> {NM.In k m0}+{NM.In k m1}.
I got the following error:
The term "sumbool (#NM.In CarrierA k m0) (#NM.In CarrierA k m1)" has type
"Set" while it is expected to have type "Prop" (universe inconsistency: Cannot enforce
Set = Prop).
How could this be solved?

As Daniel pointed out, the problem is that the <-> connective only takes propositions as arguments, while sumbool lives in Set. This can be circumvented in a few ways: you can replace sumbool with or, or you can replace iff with a computationally relevant connective:
Variables A B C : Prop.
Check (({A} + {B} -> C) * (C -> {A} + {B}))%type.
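For concreteness, here is a small self-contained sketch of both workarounds, stated over abstract propositions A, B, C rather than the NM.In statements from the question (the names prop_version and type_version are just illustrative):
Section Workarounds.
  Variables A B C : Prop.

  (* Option 1: stay in Prop by replacing sumbool with or; iff then typechecks. *)
  Definition prop_version : Prop := C <-> A \/ B.

  (* Option 2: keep sumbool, but replace iff with a pair of implications,
     which lives in Type rather than Prop. *)
  Definition type_version : Type :=
    ((C -> {A} + {B}) * ({A} + {B} -> C))%type.
End Workarounds.
For the original lemma this means stating it either as NM.In k m <-> (NM.In k m0 \/ NM.In k m1), or as a product of the two implications that were already proven separately.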

What is the difference between the `destruct` and `case_eq` tactics in Coq?

I understood destruct as breaking an inductive definition into its constructors. I recently saw case_eq and I could not understand what it does differently.
1 subgoals
n : nat
k : nat
m : M.t nat
H : match M.find (elt:=nat) n m with
| Some _ => true
| None => false
end = true
______________________________________(1/1)
cc n (M.add k k m) = true
In the above context, if I do destruct (M.find n m), it breaks H into the true and false cases, whereas case_eq (M.find n m) leaves H intact and adds a separate hypothesis M.find (elt:=nat) n m = Some v, which I can rewrite to get the same effect as destruct.
Can someone please explain the difference between the two tactics and when each one should be used?
The first basic tactic in the family of destruct and case_eq is called case. This tactic modifies only the conclusion. When you type case A and A has an inductive type T, the system replaces A in the goal's conclusion by instances of all the constructors of type T, adding universal quantifications for the arguments of these constructors if needed. This creates as many goals as there are constructors in type T. The formula A disappears from the goal, and if there is any information about A in a hypothesis, the link between this information and all the new constructors that replace it in the conclusion gets lost. In spite of this, case is an important primitive tactic.
Losing the link between information in the hypotheses and instances of A in the conclusion is a big problem in practice, so developers came up with two solutions: case_eq and destruct.
Personally, when writing the Coq'Art book, I proposed that we write a simple tactic on top of case that keeps a link between A and the various constructor instances in the form of an equality. This is the tactic now called case_eq. It does the same thing as case but adds an extra implication in the goal, where the premise of the implication is an equality of the form A = ... and ... is an instance of each constructor.
At about the same time, the tactic destruct was proposed. Instead of limiting the replacement to the goal's conclusion, destruct also replaces all instances of A appearing in the hypotheses with instances of constructors of type T. In a sense, this is cleaner because it avoids relying on the extra concept of equality, but it is still incomplete because the expression A may be a compound expression f B, and if B appears in a hypothesis but not f B, the link between A and B will still be lost.
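Here is a small runnable sketch of that caveat (is_zero, b, and H are just illustrative names): after destruct (is_zero b), the hypothesis H still talks about b, but the connection between b and the branch we are in is gone, so the second goal cannot be closed without the extra equality that case_eq would have kept.
Definition is_zero (n : nat) : bool :=
  match n with 0 => true | _ => false end.

Goal forall b : nat, b < 1 -> is_zero b = true.
Proof.
  intros b H.
  destruct (is_zero b).
  - reflexivity.
  - (* Stuck: H : b < 1 can no longer be related to is_zero b. *)
Abort.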
Illustration
Definition my_pred (n : nat) := match n with 0 => 0 | S p => p end.
Lemma example n : n <= 1 -> my_pred n <= 0.
Proof.
case_eq (my_pred n).
This gives the two goals
------------------
n <= 1 -> my_pred n = 0 -> 0 <= 0
and
------------------
forall p, my_pred n = S p -> n <= 1 -> S p <= 0
The extra equality is very useful here.
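For instance, here is a sketch of one way the proof could be finished; the equation produced by case_eq is what lets us relate the chosen branch back to n. The sketch restates the definition so it is self-contained, and uses lia only to discharge the arithmetic (assuming a Coq version where Require Import Lia is available):
Require Import Lia.

Definition my_pred (n : nat) := match n with 0 => 0 | S p => p end.

Lemma example_complete n : n <= 1 -> my_pred n <= 0.
Proof.
  case_eq (my_pred n).
  - (* my_pred n = 0 : the conclusion reduces to 0 <= 0. *)
    intros; apply le_n.
  - (* my_pred n = S p : impossible once we also know n <= 1. *)
    intros p Hp Hn.
    destruct n as [| n']; simpl in *; lia.
Qed.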
In this question I suggested that the developer use case_eq (a == b) when (a == b) has type bool, because this type is inductive and not very informative (its constructors have no arguments). But when (a == b) has type {a = b} + {a <> b} (which is the case for the string_dec function), the constructors have arguments that are proofs of interesting properties, and the extra universal quantifications for these arguments are enough to give the relevant information: a = b in the first goal and a <> b in the second.
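As a small illustration of that last point: with a sumbool such as string_dec a b, a plain case analysis already gives the relevant proof in each branch, with no need for an extra equality (string_dec comes from the standard String library):
Require Import Coq.Strings.String.

Lemma string_dec_example (a b : string) : a = b \/ a <> b.
Proof.
  (* Destructing the sumbool directly exposes the proofs carried by
     its constructors. *)
  destruct (string_dec a b) as [Heq | Hneq].
  - left; exact Heq.
  - right; exact Hneq.
Qed.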

How to express subset relation in Coq?

How can I describe in Coq that one set Y is a subset of another set X?
I tested the following:
Definition subset (Y X:Set) : Prop :=
forall y:Y, y:X.
Here I am trying to express that if an element y is in Y, then y is in X. But this generates type errors about y, not surprisingly.
Is there an easy way to define subset in Coq?
Here is how it is done in the standard library (Coq.Logic.ClassicalChoice):
Definition subset (U:Type) (P Q:U->Prop) : Prop := forall x, P x -> Q x.
Unary predicates P and Q represent some subsets of the (universal) set U, so the above definition reads: whenever some x is in P, it is in Q at the same time.
A somewhat similar definition can be found in Coq.MSets.MSetInterface:
Definition Subset s s' := forall a : elt, In a s -> In a s'.
where In has type elt -> t -> Prop, which means that some element of type elt is a member of a set of type t.
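As a quick usage sketch of the first definition (the concrete predicates are only for illustration):
Definition subset (U : Type) (P Q : U -> Prop) : Prop :=
  forall x, P x -> Q x.

Example le2_subset_le3 : subset nat (fun n => n <= 2) (fun n => n <= 3).
Proof. intros x Hx. apply le_S. exact Hx. Qed.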

Using `apply with` without giving names of parameters in Coq?

When using the Coq apply ... with tactic, the examples I have seen all explicitly give the names of the variables to instantiate. For example, given a theorem about the transitivity of equality:
Theorem trans_eq : forall (X:Type) (n m o : X),
n = m -> m = o -> n = o.
To apply it:
Example test: forall n m: nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m.
apply trans_eq with (m := 1). Qed.
Note that in the last line, apply trans_eq with (m := 1), I have to remember that the name of the variable to instantiate is m, rather than o or n or some other name such as y.
To me, whether m n o or x y z are used in the original statement of the theorem shouldn't matter, because they are like dummy variables or the formal parameters of a function. And sometimes I can't remember the specific names that I, or somebody else in a different file, chose when stating the theorem.
Is there a way by which I can refer to the variables e.g. by their position and use something like:
apply trans_eq with (#1 := 1)
in the above example?
By the way, I tried: apply trans_eq with (1 := 1). and got Error: No such binder.
Thanks.
You can specialize the lemma with the right arguments. The _ is used for all arguments that we don't want to specialize (because they can be inferred). The @ is required to specialize implicit arguments.
Example test: forall n m: nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m.
apply (@trans_eq _ _ 1).
Qed.
You can omit the binder names after with, so in your case do apply trans_eq with 1.
Example test: forall n m: nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m.
apply trans_eq with 1; auto.
Qed.
I've changed your original example a little to conclude the proof.
Why this works
To understand why this works, check the manual under Bindings:
Tactics that take a term as an argument may also accept bindings to
instantiate some parameters of the term by name or position. The
general form of a term with bindings is term_tac with bindings, where
bindings can take two different forms:
bindings ::= (ident | natural := term)+
           | one_term+
What is shown in this example is the form one_term, which is described as follows:
in the case of apply, or of constructor and its variants, only instances for the dependent products that are not bound in the conclusion of term_tac are required.
Which is why only one term needs to be supplied.
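For completeness, the grammar above also allows a natural number in a binding; as I understand the manual, it refers to the n-th non-dependent premise of the lemma rather than to a quantified variable. A sketch reusing trans_eq from the question (assumed proved and in scope):
Example test_numbered : forall n m : nat,
  n = 1 -> 1 = m -> n = m.
Proof.
  intros n m eq1 eq2.
  (* (1 := eq1) fills the first non-dependent premise of trans_eq with
     eq1 : n = 1, which forces the middle term to be 1; only 1 = m remains. *)
  apply trans_eq with (1 := eq1).
  exact eq2.
Qed.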

Stuck in the construction of a very simple function

I am learning Coq. I am stuck on a quite silly problem (which has no motivation, it is really silly). I want to build a function from ]2, +oo[ to the set of integers mapping x to x - 3. That should be simple... In any language I know, it is simple. But not in Coq. First I write (I explain in a lot of detail so that someone can point out what I don't understand about the behaviour of Coq):
Definition f : forall n : nat, n > 2 -> nat.
I get a subgoal
============================
forall n : nat, n > 2 -> nat
which means that Coq wants a map from a proof of n > 2 to the set of integers. Fine. So I want to tell it that n = 3 + p for some integer p, and then return the integer p. I write:
intros n H.
And I get the context/subgoal
n : nat
H : n > 2
============================
nat
Then I suppose that I have proved n = 3 + p for some integer p by
cut(exists p, 3 + p = n).
I get the context/subgoal
n : nat
H : n > 2
============================
(exists p : nat, 3 + p = n) -> nat
subgoal 2 (ID 6) is:
exists p : nat, 3 + p = n
I move the hypothesis in the context by
intro K.
I obtain:
n : nat
H : n > 2
K : exists p : nat, 3 + p = n
============================
nat
subgoal 2 (ID 6) is:
exists p : nat, 3 + p = n
I will prove the existence of p later. Now I want to finish the proof by exact p. So I first need to do a
destruct K as (p,K).
and I obtain the error message
Error: Case analysis on sort Set is not allowed for inductive
definition ex.
And I am stuck.
You are absolutely right! Writing this function should be easy in any reasonable programming language, and, fortunately, Coq is no exception.
In your case, it is much easier to define your function by simply ignoring the proof argument you are supplying:
Definition f (n : nat) : nat := n - 3.
You may then wonder "but wait a second, the natural numbers aren't closed under subtraction, so how can this make sense?". Well, in Coq, subtraction on the natural numbers isn't really subtraction: it is actually truncated. If you try to subtract, say, 3 from 2, you get 0 as an answer:
Goal 2 - 3 = 0. reflexivity. Qed.
What this means in practice is that you are always allowed to "subtract" two natural numbers and get a natural number back, but in order for this subtraction to make sense, the first argument needs to be at least as large as the second. We then get lemmas such as the following (available in the standard library):
le_plus_minus_r : forall n m, n <= m -> n + (m - n) = m
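For instance, here is a sketch of how that lemma recovers n from n - 3 under the question's hypothesis (assuming le_plus_minus_r is in scope after Require Import Arith; in recent Coq versions it may be deprecated in favour of Nat lemmas):
Require Import Arith.

Lemma f_spec (n : nat) : 2 < n -> 3 + (n - 3) = n.
Proof.
  intros H.
  apply le_plus_minus_r.
  (* 2 < n is definitionally 3 <= n. *)
  exact H.
Qed.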
In most cases, working with a function that is only partially correct, such as this definition of subtraction, is good enough. If you want, however, you can restrict the domain of f to make its properties more pleasant. I've taken the liberty of writing the following script with the ssreflect library, which makes defining this kind of function easier:
Require Import Ssreflect.ssreflect Ssreflect.ssrfun Ssreflect.ssrbool.
Require Import Ssreflect.ssrnat Ssreflect.eqtype.
Definition f (n : {n | 2 < n}) : nat :=
val n - 3.
Definition finv (m : nat) : {n | 2 < n} :=
Sub (3 + m) erefl.
Lemma fK : cancel f finv.
Proof.
move=> [n Pn] /=; apply/val_inj=> /=.
by rewrite /f /= addnC subnK.
Qed.
Lemma finvK : cancel finv f.
Proof.
by move=> n; rewrite /finv /f /= addnC addnK.
Qed.
Now, f takes as an argument a natural number n that is greater than 2 (the {x : T | P x} form is syntactic sugar for the sig type from the standard library, which is used for forming types that work like subsets). By restricting the argument type, we can write an inverse function finv that takes an arbitrary nat and returns another number that is greater than 2. Then we can prove the lemmas fK and finvK, which assert that f and finv are inverses of each other.
In the definition of f, we use val, which is ssreflect's idiom for extracting the element out of a member of a type such as {n | 2 < n}. The Sub function in finv does the opposite, packaging a natural number with a proof that it is greater than 2 and returning an element of {n | 2 < n}. Here, we rely crucially on the fact that < is expressed in ssreflect as a boolean computation, so that Coq can use its computation rules to check that erefl, a proof of true = true, is also a valid proof of 2 < 3 + m.
To conclude, the mysterious error message you got at the end has to do with Coq's rules governing computational types, which live in Type, and propositional types, which live in Prop. Coq's rules forbid you from using proofs of propositions to build elements that have computational content (such as natural numbers), except in very particular cases. If you wanted, you could still finish your definition by using {p | 3 + p = n} instead of exists p, 3 + p = n -- both mean the same thing, except that the former lives in Type while the latter lives in Prop.
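For reference, here is a sketch of how the original interactive definition could be completed using {p | 3 + p = n}; lia is just one convenient way to discharge the arithmetic side condition:
Require Import Lia.

Definition f' : forall n : nat, n > 2 -> nat.
Proof.
  intros n H.
  cut {p | 3 + p = n}.
  - (* The intermediate statement lives in Type, so destructing it to
       extract p is allowed. *)
    intros [p _]. exact p.
  - exists (n - 3). lia.
Defined.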

Can I extract a Coq proof as a Haskell function?

Ever since I learned a little bit of Coq I wanted to learn to write a Coq proof of the so-called division algorithm that is actually a logical proposition: forall n m : nat, exists q : nat, exists r : nat, n = q * m + r
I recently accomplished that task using what I learned from Software Foundations.
Coq being a system for developing constructive proofs, my proof is in effect a method to construct suitable values q and r from values m and n.
Coq has an intriguing facility for "extracting" an algorithm in Coq's algorithm language (Gallina) to general-purpose functional programming languages including Haskell.
Separately I have managed to write the divmod operation as a Gallina Fixpoint and extract that. I want to note carefully that that task is not what I'm considering here.
Adam Chlipala has written in Certified Programming with Dependent Types that "Many fans of the Curry-Howard correspondence support the idea of extracting programs from proofs. In reality, few users of Coq and related tools do any such thing."
Is it even possible to extract the algorithm implicit in my proof to Haskell? If it is possible, how would it be done?
Thanks to Prof. Pierce's summer 2012 video 4.1, which Dan Feltey suggested, we see that the key is that the theorem to be extracted must provide a member of Type rather than the usual kind of proposition, which is Prop.
For the particular theorem the affected construct is the inductive Prop ex and its notation exists. Similarly to what Prof. Pierce has done, we can state our own alternate definitions ex_t and exists_t that replace occurrences of Prop with occurrences of Type.
Here is the usual redefinition of ex and exists, similar to how they are defined in Coq's standard library.
Inductive ex (X:Type) (P : X->Prop) : Prop :=
ex_intro : forall (witness:X), P witness -> ex X P.
Notation "'exists' x : X , p" := (ex _ (fun x:X => p))
(at level 200, x ident, right associativity) : type_scope.
Here are the alternate definitions.
Inductive ex_t (X:Type) (P : X->Type) : Type :=
ex_t_intro : forall (witness:X), P witness -> ex_t X P.
Notation "'exists_t' x : X , p" := (ex_t _ (fun x:X => p))
(at level 200, x ident, right associativity) : type_scope.
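A quick way to see the difference these definitions make: the witness of an ex_t can be projected out by ordinary pattern matching, which is exactly what the elimination restriction on Prop forbids for ex (get_witness_t is just an illustrative name, defined over the ex_t given above):
Definition get_witness_t (X : Type) (P : X -> Type) (e : ex_t X P) : X :=
  match e with ex_t_intro w _ => w end.
The analogous definition over ex is rejected with essentially the same "case analysis on sort" error quoted below.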
Now, somewhat unfortunately, it is necessary to repeat both the statement and the proof of the theorem using these new definitions.
What in the world??
Why is it necessary to make a reiterated statement of the theorem and a reiterated proof of the theorem, that differ only by using an alternative definition of the quantifier??
I had hoped to use the existing theorem in Prop to prove the theorem over again in Type. That strategy fails when Coq rejects the proof tactic inversion for a Prop in the environment when that Prop uses exists and the goal is a Type that uses exists_t. Coq reports "Error: Inversion would require case analysis on sort Set which is not allowed for inductive definition ex." This behavior occurred in Coq 8.3. I am not certain that it still occurs in Coq 8.4.
I think the need to repeat the proof is actually profound although I doubt that I personally am quite managing to perceive its profundity. It involves the facts that Prop is "impredicative" and Type is not impredicative, but rather, tacitly "stratified". Impredicativity is (if I understand correctly) vulnerability to Russell's paradox, the paradox that the set S of all sets that are not members of themselves can neither be a member of S nor a non-member of S. Type avoids Russell's paradox by tacitly creating a sequence of higher types that contain lower types. Because Coq is drenched in the formulae-as-types interpretation of the Curry-Howard correspondence, and if I am getting this right, we can even understand the stratification of types in Coq as a way to avoid Gödel incompleteness, the phenomenon that certain formulae express constraints on formulae such as themselves and thereby become unknowable as to their truth or falsehood.
Back on planet Earth, here is the repeated statement of the theorem using "exists_t".
Theorem divalg_t : forall n m : nat, exists_t q : nat,
exists_t r : nat, n = plus (mult q m) r.
As I have omitted the proof of divalg, I will also omit the proof of divalg_t. I will only mention that we do have the good fortune that proof tactics including "exists" and "inversion" work just the same with our new definitions "ex_t" and "exists_t".
Finally, the extraction itself is accomplished easily.
Extraction Language Haskell.
Extraction "divalg.hs" divalg_t.
The resulting Haskell file contains a number of definitions, the heart of which is the reasonably nice code below. I was only slightly hampered by my near-total ignorance of the Haskell programming language. Note that Ex_t_intro creates a result whose type is Ex_t; O and S are zero and the successor function from Peano arithmetic; beq_nat tests Peano numbers for equality; nat_rec is a higher-order function that recurses using the function among its arguments. The definition of nat_rec is not shown here. At any rate it is generated by Coq according to the inductive type "nat" that was defined in Coq.
divalg :: Nat -> Nat -> Ex_t Nat (Ex_t Nat ())
divalg n m =
case m of {
O -> Ex_t_intro O (Ex_t_intro n __);
S m' ->
nat_rec (Ex_t_intro O (Ex_t_intro O __)) (\n' iHn' ->
case iHn' of {
Ex_t_intro q' hq' ->
case hq' of {
Ex_t_intro r' _ ->
let {k = beq_nat r' m'} in
case k of {
True -> Ex_t_intro (S q') (Ex_t_intro O __);
False -> Ex_t_intro q' (Ex_t_intro (S r') __)}}}) n}
Update 2013-04-24: I know a bit more Haskell now. To assist others in reading the extracted code above, I'm presenting the following hand-rewritten code that I claim is equivalent and more readable. I'm also presenting the extracted definitions Nat, O, S, and nat_rec that I did not eliminate.
-- Extracted: Natural numbers (non-negative integers)
-- in the manner in which Peano defined them.
data Nat =
O
| S Nat
deriving (Eq, Show)
-- Extracted: General recursion over natural numbers,
-- an interpretation of Nat in the manner of higher-order abstract syntax.
nat_rec :: a1 -> (Nat -> a1 -> a1) -> Nat -> a1
nat_rec f f0 n =
case n of {
O -> f;
S n0 -> f0 n0 (nat_rec f f0 n0)}
-- Given non-negative integers n and m, produce (q, r) with n = q * m + r.
divalg_t :: Nat -> Nat -> (Nat, Nat)
divalg_t n O = (O, n) -- n/0: Define quotient 0, remainder n.
divalg_t n (S m') = divpos n m' -- n/(S m')
where
-- Given non-negative integers n and m',
-- and defining m = m' + 1,
-- produce (q, r) with n = q * m + r
-- so that q = floor (n / m) and r = n % m.
divpos :: Nat -> Nat -> (Nat, Nat)
divpos n m' = nat_rec (O, O) (incrDivMod m') n
-- Given a non-negative integer m' and
-- a pair of non-negative integers (q', r') with r' <= m',
-- and defining m = m' + 1,
-- produce (q, r) with q*m + r = q'*m + r' + 1 and r <= m'.
incrDivMod :: Nat -> Nat -> (Nat, Nat) -> (Nat, Nat)
incrDivMod m' _ (q', r')
| r' == m' = (S q', O)
| otherwise = (q', S r')
The current copy of Software Foundations dated July 25, 2012, answers this quite concisely in the late chapter "Extraction2". The answer is that it can certainly be done, much like this:
Extraction Language Haskell.
Extraction "divalg.hs" divalg.
One more trick is necessary. Instead of a Prop, divalg must be a Type. Otherwise it will be erased in the process of extraction.
Uh oh, @Anthill is correct, I haven't answered the question because I don't know how to explain how Prof. Pierce accomplished that in his NormInType.v variant of his Norm.v and MoreStlc.v.
OK, here's the rest of my partial answer anyway.
Where "divalg" appears above, it will be necessary to provide a space-separated list of all of the propositions (which must each be redefined as a Type rather than a Prop) on which divalg relies. For a thorough, interesting, and working example of a proof extraction, one may consult the chapter Extraction2 mentioned above. That example extracts to OCaml, but adapting it for Haskell is simply a matter of using Extraction Language Haskell as above.
In part, the reason that I spent some time not knowing the above answer is that I have been using the copy of Software Foundations dated October 14, 2010, that I downloaded in 2011.