apply dependently typed constructor in hypothesis - coq

Consider the following inductive definition:
Inductive T (n : nat) : nat -> Prop :=
| c1 : T n n.
This works:
Theorem a : T 1 1.
Proof.
apply c1.
Qed.
But this doesn't:
Theorem b : T 1 1 -> True.
Proof.
intros H.
apply c1 in H.
Both calls to apply seem equivalent to me, but the latter fails with Error: Statement without assumptions. Why? And how can I make it work?

This question arose from my poor understanding of the apply tactic. @gallais's comment made me realize why apply c1 in H doesn't really make sense: the tactic essentially behaves like function application to H, and c1 H is not well-typed. For future reference, here is an example in which apply ... in would make sense. If we had another constructor c2 like this:
Inductive T (n : nat) : nat -> Prop :=
| c1 : T n n
| c2 : forall x, T n x -> T n (S x).
Then apply c2 in H would transform something of type T n x into something of type T n (S x), for example:
Theorem b : T 1 1 -> True.
Proof.
intros H.
apply c2 in H.
transforms the hypothesis H : T 1 1 into H : T 1 2.
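Put together, a complete script for this variant might look as follows (the name b' is only chosen to avoid clashing with b above; exact I then closes the trivial goal):
Theorem b' : T 1 1 -> True.
Proof.
intros H.
apply c2 in H. (* H : T 1 1 becomes H : T 1 2 *)
exact I.
Qed.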

There is a difference between proving H and proving H -> True. The Coq standard library already defines a proof of True, called I.
Hence:
Theorem b : T 1 1 -> True.
Proof.
intros; apply I.
Qed.
But please also note that:
Theorem prove_I : forall p : Prop, p -> True.
Proof.
intros; apply I.
Qed.
As a consequence, H -> True does not prove H: there is no correlation between the inhabitants of the two types.

The apply tactic works differently depending on whether it targets the goal or a hypothesis. Applying H : A -> B has the following semantics:
if the goal is B, then apply H turns it into the goal A;
if there is a hypothesis H1 : A, then apply H in H1 turns it into H1 : B.
Both directions are logically sound, so neither introduces an inconsistency.
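As a minimal sketch of both directions (the section, the lemma names and the hypothesis AB are made up for illustration):
Section ApplyDirections.
Variables A B : Prop.
Hypothesis AB : A -> B.
(* Backward: applying AB to the goal B leaves the premise A to prove. *)
Lemma backward : A -> B.
Proof. intro a. apply AB. exact a. Qed.
(* Forward: apply AB in a turns the hypothesis a : A into a : B. *)
Lemma forward : A -> B.
Proof. intro a. apply AB in a. exact a. Qed.
End ApplyDirections.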

Related

Destruct hypothesis: general case

It's pretty clear what destruct H does if H contains a conjunction or a disjunction, but I can't figure out what it does in the general case. It does something bizarre, especially when H : a -> b.
Some examples:
Lemma demo : forall (x y: nat), x=4 -> x=4.
Proof.
intros. destruct H.
The hypothesis is just destroyed:
1 subgoal
x, y : nat
______________________________________(1/1)
x = x
Another one:
Lemma demo : forall (x y: nat), (x = 4 -> x=4) -> True.
Proof.
intros. destruct H.
Now I have two branches:
1 subgoal
x, y : nat
______________________________________(1/1)
x = 4
1 subgoal
x, y : nat
______________________________________(1/1)
True
Third example. It's not provable but it still doesn't make sense to me:
Lemma demo : forall (x y: nat), (x = 4 -> x = 4) -> x = 4.
Proof.
intros. destruct H.
Now I have to prove x = x in the second branch!
2 subgoals
x, y : nat
______________________________________(1/2)
x = 4
______________________________________(2/2)
x = x
So, I clearly don't understand what destruct H does.
The cases you are referring to fall in two categories. If H : A and A is inductively or coinductively defined (e.g., conjunction and disjunction), then destruct H generates one subgoal for each constructor in that type, with additional hypotheses determined by the arguments of that constructor. On the other hand, if H : A -> B, then destruct H generates one subgoal where you have to prove A, and then continues recursively as if H : B. This is roughly equivalent to the following calls:
assert (H' : A); [ |specialize (H H'); destruct H].
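For instance, here is a small provable variant of the question's examples where both steps can be seen at once (the lemma name is made up):
Lemma destruct_impl_demo : forall x : nat, (x = x -> x = 4) -> x = 4.
Proof.
intros x H.
destruct H.
- reflexivity. (* first subgoal: prove the premise x = x *)
- reflexivity. (* the continuation destructs H : x = 4, which, as explained below, turns the goal x = 4 into x = x *)
Qed.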
The missing piece of the puzzle is that equality itself is defined as an inductive type:
Inductive eq (A : Type) (a : A) : A -> Prop :=
| eq_refl : eq A a a
When you destruct something of type x = 4, Coq generates one case for each constructor of that type. But there is only one constructor, eq_refl. When considering that case, Coq also automatically replaces occurrences of the RHS of the destructed equality with the LHS (since both sides are equal for that constructor). In your first and third examples, this leads to replacing 4 in the goal with x.
Most of the time, you do not want to destruct an equality hypothesis, since this replacement behavior is not very useful. It is usually better to use the rewrite tactic, since it allows you to rewrite from right-to-left or left-to-right.
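A small sketch of that advice (the lemma name is made up):
Lemma demo_rewrite : forall x y : nat, x = 4 -> x + y = 4 + y.
Proof.
intros x y H.
rewrite H. (* replaces x with 4 in the goal *)
reflexivity.
Qed.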

Rewriting with John Major's equality

John Major's equality comes with the following lemma for rewriting:
Check JMeq_ind_r.
(*
JMeq_ind_r
: forall (A : Type) (x : A) (P : A -> Prop),
P x -> forall y : A, JMeq y x -> P y
*)
It is easy to generalize it as follows:
Lemma JMeq_ind2_r
: forall (A:Type)(x:A)(P:forall C,C->Prop),
P A x -> forall (B:Type)(y:B), @JMeq B y A x -> P B y.
Proof.
intros.
destruct H0.
assumption.
Qed.
However I need something a bit different:
Lemma JMeq_ind3_r
: forall (A:Type)(x:A*A) (P:forall C,C*C->Prop),
P A x -> forall (B:Type)(y:B), @JMeq (B*B) y (A*A) x -> P B y.
Proof.
intros.
Fail destruct H0.
Abort.
Is JMeq_ind3_r provable?
If not:
Is it safe to assume it as an axiom?
Is it reducible to a simpler and safe axiom?
It's not provable. JMeq is essentially two equality proofs bundled together, one for the types and one for the values. In this case, the hypothesis gives us that A * A = B * B. From that it is not provable that A = B, so we cannot convert P A x into P B y.
If A * A = B * B implied A = B, that would mean that the pair type constructor is injective. Type constructor injectivity in general (i.e. for all type constructors) is inconsistent with classical logic and also with univalence. For some type constructors injectivity is provable, but not for pairs.
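The type-level half of the bundle can indeed be extracted on its own; here is a sketch (the lemma name is made up; JMeq is from Coq.Logic.JMeq):
Require Import Coq.Logic.JMeq.
Lemma JMeq_types : forall (A B : Type) (x : A) (y : B), JMeq x y -> A = B.
Proof.
intros A B x y H.
destruct H. (* the only constructor forces the two types to coincide *)
reflexivity.
Qed.
With x : A * A and y : B * B this gives exactly A * A = B * B, and that is as far as it goes.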
Is it safe to assume it as an axiom?
If you use classical logic or univalence then it isn't. Otherwise, it probably is, but I would instead try to rephrase the problem so that type constructor injectivity does not come up.

How to prove False from obviously contradictory assumptions

Suppose I want to prove the following theorem:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
This one is trivial, since m cannot be both a successor and zero, as assumed. However, I found it quite tricky to prove, and I don't know how to do it without an auxiliary lemma:
Lemma succ_neq_zero_lemma : forall n : nat, O = S n -> False.
Proof.
intros.
inversion H.
Qed.
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
intros.
symmetry in H.
apply (succ_neq_zero_lemma n).
transitivity m.
assumption.
assumption.
Qed.
I am pretty sure there is a better way to prove this. What is the best way to do it?
You just need to substitute for m in the first equation:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
intros n m H1 H2; rewrite <- H2 in H1; inversion H1.
Qed.
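Essentially the same substitution also goes through with discriminate, which is the tactic dedicated to equalities between terms that start with different constructors (the primed name just avoids a clash):
Theorem succ_neq_zero' : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
intros n m H1 H2.
rewrite <- H2 in H1. (* H1 : S n = 0 *)
discriminate H1.
Qed.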
There's a very easy way to prove it:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
congruence.
Qed.
The congruence tactic is a decision procedure for ground equalities over uninterpreted symbols. It is complete for uninterpreted symbols and for datatype constructors, so in cases like this one it can derive a contradiction directly from the hypotheses S n = m and 0 = m.
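As another tiny illustration of the kind of goal it closes on its own (the statement is made up):
Theorem succ_chain : forall n m : nat, S n = S m -> m = 0 -> S n = 1.
Proof.
congruence.
Qed.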
It might be useful to know how congruence works.
To prove that two terms built from different constructors are in fact different, just create a function that returns True in one case and False in the other cases, and then use it to derive True = False from the absurd hypothesis. I think this is explained in Coq'Art.
Example not_congruent: 0 <> 1.
intros C. (* now our goal is 'False' *)
pose (fun m => match m with 0 => True | S _ => False end) as f.
assert (Contra: f 1 = f 0) by (rewrite C; reflexivity).
(* the goal False is convertible to f 1, so rewrite it to f 0, i.e. True *)
change (f 1). rewrite Contra. exact I.
Qed.

Coq induction start at specific nat

I'm trying to learn Coq, so please assume I know nothing about it.
If I have a lemma in Coq that starts
forall n m:nat, n>=1 -> m>=1 ...
and I want to proceed by induction on n, how do I start the induction at 1? Currently, when I use the induction n. tactic, it starts at zero, and this makes the base statement false, which makes it hard to proceed.
Any hints?
The following is a proof that a property P holds for all n >= 1, provided that P holds for 1 and that P is preserved by the successor (that is, P n implies P (S n)).
Require Import Omega.
Parameter P : nat -> Prop.
Parameter p1 : P 1.
Parameter pS : forall n, P n -> P (S n).
Goal forall n, n>=1 -> P n.
We begin the proof by induction.
induction n; intro.
A false base case is no problem if you have a false hypothesis lying around, in this case 0 >= 1.
- exfalso. omega.
The inductive case is tricky, because to access a proof of P n, we first have to prove that n >= 1. The trick is to do a case analysis on n. If n = 0, then we can trivially prove the goal P 1. If n >= 1, we can access P n and then prove the rest.
- destruct n.
+ apply p1.
+ assert (S n >= 1) by omega.
intuition.
apply pS.
trivial.
Qed.
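An alternative sketch of the same idea, with the same P, p1 and pS as above: peel off the 0 case first and then do the induction on the predecessor, so that the base case is literally P 1.
Goal forall n, n>=1 -> P n.
Proof.
intros n Hn.
destruct n as [|m].
- exfalso. omega. (* 0 >= 1 is absurd *)
- clear Hn. induction m as [|m IH].
+ apply p1. (* the induction now starts at 1 *)
+ apply pS. exact IH. (* step from P (S m) to P (S (S m)) *)
Qed.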

Prove equality on Sigma-types

I have defined a Sigma-type that looks like this:
{ R : nat -> nat -> bool | Reflexive R }
I have two elements r1 r2 : { R : nat -> nat -> bool | Reflexive R }, and I need to prove r1 = r2. How can I do that?
If you want to show such an equality, you need to (1) show that the underlying functions are equal (i.e., the R component of your sigma type), and (2) show that the corresponding proofs are equal. There are two problems, however.
The first one is that equality of functions is too weak in Coq. According to common mathematical practice, we expect two functions to be equal if they yield equal results for any inputs. This principle is known as functional extensionality:
Axiom functional_extensionality :
forall A (B : A -> Type)
(f g : forall a, B a),
(forall x, f x = g x) ->
f = g.
As natural as it sounds, however, this principle is not provable in Coq's logic! Roughly speaking, the only way two functions can be equal is if they can be converted to syntactically equal terms according to the computation rules of the logic. For instance, we can show that fun n : nat => 0 + n and fun n : nat => n are equal because + is defined in Coq by pattern-matching on the first argument, and the first argument in the first term is 0.
Goal (fun n : nat => 0 + n) = (fun n : nat => n). reflexivity. Qed.
We could expect to show that fun n => n + 0 and fun n => n are equal by similar means. However, Coq does not accept this, because + cannot be simplified when the first argument is a variable.
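For the record, the axiom is packaged in the standard library as Coq.Logic.FunctionalExtensionality, and with it the n + 0 version does go through; a minimal sketch (plus_n_O : forall n, n = n + 0 comes from the prelude):
Require Import Coq.Logic.FunctionalExtensionality.
Goal (fun n : nat => n + 0) = (fun n : nat => n).
Proof.
apply functional_extensionality. intros n.
simpl. (* goal: n + 0 = n *)
symmetry. apply plus_n_O.
Qed.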
The other problem is that the notion of equality on proofs is not very interesting either. The only way one can show that two proofs are equal is, again, syntactic equality. Intuitively, however, one would like to argue by proof irrelevance, a principle that states that proofs of the same thing are always equal:
Axiom proof_irrelevance :
forall (P : Prop) (p q : P), p = q.
but, again, this principle is not provable in the logic. Fortunately, Coq's logic was designed to allow one to add these principles as axioms in a sound way. One then gets the following proof:
Axiom functional_extensionality :
forall A (B : A -> Type)
(f g : forall a, B a),
(forall a, f a = g a) ->
f = g.
Axiom proof_irrelevance :
forall (P : Prop) (p q : P), p = q.
Lemma l (r1 r2 : { R : nat -> nat -> bool |
forall n, R n n = true }) :
(forall n1 n2, proj1_sig r1 n1 n2 = proj1_sig r2 n1 n2) ->
r1 = r2.
Proof.
destruct r1 as [r1 H1], r2 as [r2 H2].
simpl.
intros H.
assert (H' : r1 = r2).
{ apply functional_extensionality.
intros n1.
apply functional_extensionality.
intros n2.
apply H. }
subst r2.
rename r1 into r.
f_equal.
apply proof_irrelevance.
Qed.
Even though axioms can be useful, one might like to avoid them. In this case, it is actually possible to prove this lemma just with functional extensionality, but you do need at least that. If you want to avoid using axioms, and r1 and r2 are not equal up to computation, you'll have to use a different equivalence relation on your type, and do your formalization using that relation instead, e.g.
Definition rel_equiv (r1 r2 : { R : nat -> nat -> bool | forall n, R n n = true }) : Prop :=
forall n1 n2, proj1_sig r1 n1 n2 = proj1_sig r2 n1 n2.
The standard library has good support for rewriting with equivalence relations (generalized, or "setoid", rewriting).
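If you go that route, a natural next step is to register the relation as an Equivalence so that setoid rewriting can use it; a minimal sketch, assuming the rel_equiv definition above (the instance name is made up):
Require Import Coq.Classes.RelationClasses.
Instance rel_equiv_Equivalence : Equivalence rel_equiv.
Proof.
split.
- intros r n1 n2. reflexivity.
- intros r1 r2 H n1 n2. symmetry. apply H.
- intros r1 r2 r3 H12 H23 n1 n2.
transitivity (proj1_sig r2 n1 n2); [apply H12 | apply H23].
Qed.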