I have to admit that I'm not very good at coinduction. I'm trying to show a bisimulation principle on co-natural numbers, but I'm stuck on a pair of (symmetric) cases.
CoInductive conat :=
| cozero : conat
| cosucc : conat -> conat.
CoInductive conat_eq : conat -> conat -> Prop :=
| eqbase : conat_eq cozero cozero
| eqstep : forall m n, conat_eq m n -> conat_eq (cosucc m) (cosucc n).
Section conat_eq_coind.
Variable R : conat -> conat -> Prop.
Hypothesis H1 : R cozero cozero.
Hypothesis H2 : forall (m n : conat), R (cosucc m) (cosucc n) -> R m n.
Theorem conat_eq_coind : forall m n : conat,
R m n -> conat_eq m n.
Proof.
cofix conat_eq_coind. intros m n H.
destruct m, n.
simpl in H1.
- exact eqbase.
- admit.
- admit.
- specialize (H2 _ _ H).
specialize (conat_eq_coind _ _ H2).
exact (eqstep _ _ conat_eq_coind).
Admitted.
End conat_eq_coind.
This is what the context looks like when focused on the first admitted case:
1 subgoal
R : conat -> conat -> Prop
H1 : R cozero cozero
H2 : forall m n : conat, R (cosucc m) (cosucc n) -> R m n
conat_eq_coind : forall m n : conat, R m n -> conat_eq m n
n : conat
H : R cozero (cosucc n)
______________________________________(1/1)
conat_eq cozero (cosucc n)
Thoughts?
I do not understand what you are trying to prove here; the statement is false. Take, for example, the trivial relation that is always True:
Theorem conat_eq_coind_false :
~ forall (R : conat -> conat -> Prop) (H1 : R cozero cozero)
(H2 : forall (m n : conat), R (cosucc m) (cosucc n) -> R m n)
(m n : conat) (H3 : R m n), conat_eq m n.
Proof.
intros contra.
specialize (contra (fun _ _ => True) I (fun _ _ _ => I)
cozero (cosucc cozero) I).
inversion contra.
Qed.
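The principle becomes provable if you instead require that R m n determines the shape of both m and n, which is how such coinduction principles are usually stated. Here is a sketch along those lines (the hypothesis Hstep and the primed names are mine):
Section conat_eq_coind'.
Variable R : conat -> conat -> Prop.
Hypothesis Hstep : forall m n, R m n ->
  (m = cozero /\ n = cozero) \/
  (exists m' n', m = cosucc m' /\ n = cosucc n' /\ R m' n').

Theorem conat_eq_coind' : forall m n : conat, R m n -> conat_eq m n.
Proof.
  cofix CIH. intros m n H.
  (* the step hypothesis tells us which constructor case we are in *)
  destruct (Hstep _ _ H) as [[-> ->] | (m' & n' & -> & -> & H')].
  - exact eqbase.
  - (* the corecursive call is guarded by the eqstep constructor *)
    apply eqstep. apply CIH. exact H'.
Qed.
End conat_eq_coind'.
The rewrites happen before the constructor is applied, so the corecursive call CIH sits directly under eqstep and the guardedness check at Qed should go through.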
I have a definition of conat which can represent both finite and infinite values, a conversion from nat, a definition of infinity, and a bisimulation relation:
CoInductive conat : Set := O' | S' (n : conat).
Fixpoint toCo (n : nat) : conat := match n with
| O => O'
| S n' => S' (toCo n') end.
CoFixpoint inf : conat := S' inf.
CoInductive bisim : conat -> conat -> Prop :=
| OO : bisim O' O'
| SS : forall n m : conat, bisim n m -> bisim (S' n) (S' m).
Notation "x == y" := (bisim x y) (at level 70).
I want to prove that conat is either finite or infinite (I'm not 100% sure that this is the correct formulation):
(* This is the goal *)
Theorem fin_or_inf : forall n : conat, (exists m : nat, toCo m == n) \/ (n == inf).
I haven't been able to prove it so far, but I did prove another statement, namely that if a conat is not finite, then it is infinite (again, not 100% sure about the formulation):
(* I have a proof for this *)
Theorem not_fin_then_inf : forall n : conat, ~ (exists m : nat, toCo m == n) -> (n == inf).
I have no idea how to go from not_fin_then_inf to fin_or_inf though.
Is my definition of fin_or_inf correct?
Can I prove fin_or_inf, either using not_fin_then_inf or not using it?
Also, I found that bridging the gap between the two theorems involves decidability of bisimulation (or an extension thereof). I think the decidability lemma could be stated as
Lemma bisim_dec : forall n m : conat, n == m \/ ~ (n == m).
Can I prove bisim_dec or any similar statement on bisimulation?
Edit
The original motivation to prove "either finite or infinite" was to prove commutativity and associativity of coplus:
CoFixpoint coplus (n m : conat) := match n with
| O' => m
| S' n' => S' (coplus n' m)
end.
Notation "x ~+ y" := (coplus x y) (at level 50, left associativity).
Theorem coplus_comm : forall n m, n ~+ m == m ~+ n.
Theorem coplus_assoc : forall n m p, n ~+ m ~+ p == n ~+ (m ~+ p).
Following the same route as for nat's + does not work, because it requires transitivity of == together with a lemma analogous to plus_n_Sm, and that makes the cofix call unguarded. Otherwise, I have to destruct both n and m, and then I'm stuck at the goal n ~+ S' m == m ~+ S' n.
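For context, a cofixpoint such as coplus only reduces when a match forces it, so proofs about it typically go through the usual one-step decomposition helper (the names conat_decompose and conat_decompose_eq below are my own):
Definition conat_decompose (c : conat) : conat :=
  match c with
  | O' => O'
  | S' c' => S' c'
  end.

Lemma conat_decompose_eq : forall c, c = conat_decompose c.
Proof. intros [|c']; reflexivity. Qed.
After destructing the first argument, rewriting with conat_decompose_eq at an occurrence of ~+ lets simpl expose one constructor of the sum.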
If I choose an alternative definition of coplus:
CoFixpoint coplus (n m : conat) := match n, m with
| O', O' => O'
| O', S' m' => S' m'
| S' n', O' => S' n'
| S' n', S' m' => S' (S' (coplus n' m'))
end.
Then coplus_comm is easy, but coplus_assoc becomes near-impossible to prove instead.
Can I indeed prove coplus_comm with the first definition of coplus, or coplus_assoc with the second?
I am facing a pretty strange problem: Coq doesn't want to move a forall variable into the context.
In earlier exercises it did:
Example and_exercise :
forall n m : nat, n + m = 0 -> n = 0 /\ m = 0.
Proof.
intros n m.
It generates:
n, m : nat
============================
n + m = 0 -> n = 0 /\ m = 0
But when we have a forall inside a forall, it doesn't work:
(* Auxiliary definition *)
Fixpoint All {T : Type} (P : T -> Prop) (l : list T) : Prop :=
(* ... *)
Lemma All_In :
forall T (P : T -> Prop) (l : list T),
(forall x, In x l -> P x) <->
All P l.
Proof.
intros T P l. split.
- intros H.
After this we get:
T : Type
P : T -> Prop
l : list T
H : forall x : T, In x l -> P x
============================
All P l
But how do I move x out of H and destruct it into smaller pieces? I tried:
destruct H as [x H1].
But it gives an error:
Error: Unable to find an instance for the variable x.
What does this error mean? How do I fix it?
The problem is that forall is nested to the left of an implication rather than the right. It does not make sense to introduce x from a hypothesis of the form forall x, P x, just like it wouldn't make sense to introduce the n in plus_comm : forall n m, n + m = m + n into the context of another proof. Instead, you need to use the H hypothesis by applying it at the right place. I can't give you the answer to this question, but you might want to refer to the dist_not_exists exercise in the same chapter.
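To illustrate the general pattern on an unrelated toy lemma (the statement below is mine, not the exercise): a hypothesis starting with forall is something you apply or specialize, and Coq finds the instantiation of x by unification.
Lemma use_forall_hyp (P Q : nat -> Prop) :
  (forall x, P x -> Q x) -> P 0 -> Q 0.
Proof.
  intros H HP.
  apply H.   (* x is instantiated to 0 by unification *)
  exact HP.
Qed.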
For the inductive type nat, the generated induction principle uses the constructors O and S in its statement:
Inductive nat : Set := O : nat | S : nat -> nat
nat_ind
: forall P : nat -> Prop,
P 0 ->
(forall n : nat, P n -> P (S n)) -> forall n : nat, P n
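Just to make that concrete, nat_ind can be applied directly; here is a throwaway example lemma:
(* toy example, only to show nat_ind in action *)
Lemma plus_n_O_example : forall n : nat, n + 0 = n.
Proof.
  apply nat_ind.
  - reflexivity.
  - intros n IH. simpl. rewrite IH. reflexivity.
Qed.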
But for le, the generated statement does not use the constructors le_n and le_S:
Inductive le (n : nat) : nat -> Prop :=
le_n : n <= n | le_S : forall m : nat, n <= m -> n <= S m
le_ind
: forall (n : nat) (P : nat -> Prop),
P n ->
(forall m : nat, n <= m -> P m -> P (S m)) ->
forall n0 : nat, n <= n0 -> P n0
However, it is possible to state and prove an induction principle following the same shape as the one for nat:
Lemma le_ind' : forall n (P : forall m, le n m -> Prop),
P n (le_n n) ->
(forall m (p : le n m), P m p -> P (S m) (le_S n m p)) ->
forall m (p : le n m), P m p.
Proof.
fix H 6; intros; destruct p.
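(* le_n case: exactly hypothesis H0 *)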
apply H0.
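(* le_S case: the step hypothesis H1 followed by the recursive call H, whose two premises are discharged by H0 and H1 *)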
apply H1, H.
apply H0.
apply H1.
Qed.
I guess the generated one is more convenient, but how does Coq choose the shape of its generated induction principle? If there are any rules, I cannot find them in the reference manual. What about other proof assistants such as Agda?
You can manually generate an induction principle for an inductive type by using the command Scheme (see the documentation).
The command comes in two flavours:
Scheme scheme := Induction for I Sort Prop generates the standard induction scheme for an inductive type I.
Scheme scheme := Minimality for I Sort Prop generates a simplified induction scheme more suited to inductive predicates.
If you define an inductive type in Type, the generated induction principle is of the first kind. If you define an inductive type in Prop (i.e. an inductive predicate), the generated induction principle is of the second kind.
To obtain the induction principle that you want in the case of le, you can define it in Type:
Inductive le (n : nat) : nat -> Type :=
| le_n : le n n
| le_S : forall m : nat, le n m -> le n (S m).
Check le_ind.
(* forall (n : nat) (P : forall n0 : nat, le n n0 -> Prop),
P n (le_n n) ->
(forall (m : nat) (l : le n m), P m l -> P (S m) (le_S n m l)) ->
forall (n0 : nat) (l : le n n0), P n0 l
*)
or you can manually ask Coq to generate the expected induction principle:
Inductive le (n : nat) : nat -> Prop :=
| le_n : le n n
| le_S : forall m : nat, le n m -> le n (S m).
Check le_ind.
(* forall (n : nat) (P : nat -> Prop),
P n ->
(forall m : nat, le n m -> P m -> P (S m)) ->
forall n0 : nat, le n n0 -> P n0
*)
Scheme le_ind2 := Induction for le Sort Prop.
Check le_ind2.
(* forall (n : nat) (P : forall n0 : nat, le n n0 -> Prop),
P n (le_n n) ->
(forall (m : nat) (l : le n m), P m l -> P (S m) (le_S n m l)) ->
forall (n0 : nat) (l : le n n0), P n0 l
*)
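A scheme generated this way can then be selected explicitly in a proof with induction ... using. Here is a toy lemma (the name le_weaken and its statement are mine, just to show the invocation):
Lemma le_weaken : forall n m (p : le n m), le n (S m).
Proof.
  intros n m p.
  induction p using le_ind2.
  - apply le_S, le_n.
  - apply le_S. assumption.
Qed.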
Consider the following toy exercise:
Theorem swap_id: forall (m n : nat), m = n -> (m, n) = (n, m).
Proof.
intros m n H.
At this point I have the following:
1 subgoal
m, n : nat
H : m = n
______________________________________(1/1)
(m, n) = (n, m)
I would like to split the goal into two subgoals, m = n and n = m. Is there a tactic which does that?
You can solve this with the f_equal tactic:
Theorem test: forall (m n : nat), m = n -> (m, n) = (n, m).
Proof.
intros m n H. f_equal.
This leaves the following proof state:
2 subgoals
m, n : nat
H : m = n
______________________________________(1/2)
m = n
______________________________________(2/2)
n = m
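The two subgoals can then be closed with the hypothesis, for instance:
- exact H.
- symmetry. exact H.
Qed.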
When using induction, I'd like to have hypotheses n = 0 and n = S n' to separate the cases.
Section x.
Variable P : nat -> Prop.
Axiom P0: P 0.
Axiom PSn : forall n, P n -> P (S n).
Theorem Pn: forall n:nat, P n.
Proof. intros n. induction n.
- (* = 0 *)
apply P0.
- (* = S n *)
apply PSn. assumption.
Qed.
In theory I could do this with induction n eqn: Hn, but that seems to mess up the inductive hypothesis:
Theorem Pn2: forall n:nat, P n.
Proof. intros n. induction n eqn: Hn.
- (* Hn : n = 0 *)
apply P0.
- (* Hn : n = S n0 *)
(*** 1 subgoals
P : nat -> Prop
n : nat
n0 : nat
Hn : n = S n0
IHn0 : n = n0 -> P n0
______________________________________(1/1)
P (S n0)
****)
Abort.
End x.
Is there an easy way to get what I want here?
Matt was almost right; you just forgot to generalize your goal a bit by reverting the remembered n:
Theorem Pn2: forall n:nat, P n.
Proof. intros n. remember n. revert n0 Heqn0.
induction n as [ | p hi]; intros m heq.
- (* heq : m = 0 *) subst. apply P0.
- (* heq : m = S p *)
(*
1 subgoal
P : nat -> Prop
p : nat
hi : forall n0 : nat, n0 = p -> P n0
m : nat
heq : m = S p
______________________________________(1/1)
P m
*) subst; apply (PSn p). apply hi. reflexivity.
Qed.
Ooo, I think I figured it out!
Applying the induction principle changes your goal from (P n) to (P (constructor n')), so I think in general you can just match against the goal to create the equation n = constructor n'.
Here's a tactic that I think does this:
(* like set (a:=b) except introduces a name and hypothesis *)
Tactic Notation
"provide_name" ident(n) "=" constr(v)
"as" simple_intropattern(H) :=
assert (exists n, n = v) as [n H] by (exists v; reflexivity).
Tactic Notation
"induction_eqn" ident(n) "as" simple_intropattern(HNS)
"eqn:" ident(Hn) :=
let PROP := fresh in (
pattern n;
match goal with [ |- ?FP _ ] => set ( PROP := FP ) end;
induction n as HNS;
match goal with [ |- PROP ?nnn ] => provide_name n = nnn as Hn end;
unfold PROP in *; clear PROP
).
It works for my example:
Theorem Pn_3: forall n:nat, P n.
Proof.
intros n.
induction_eqn n as [|n'] eqn: Hn.
- (* n: nat, Hn: n = 0; Goal: P 0 *)
apply P0.
- (* n': nat, IHn': P n';
n: nat, Hn: n = S n'
Goal: P (S n') *)
apply PSn. exact IHn'.
Qed.
I'm not sure if this is any easier than what you have done in your second attempt, but you can first "remember" n.
Theorem Pn: forall n:nat, P n.
Proof. intro n. remember n. induction n.
- (*P : nat -> Prop
n0 : nat
Heqn0 : n0 = 0
============================
P n0
*)
subst. apply P0.
- (* P : nat -> Prop
n : nat
n0 : nat
Heqn0 : n0 = S n
IHn : n0 = n -> P n0
============================
P n0
*)