I'm finding myself stuck on a Coq proof.
Preliminary definitions:
Require Import Coq.Bool.Bool.
Require Import Coq.Arith.Arith.
Require Import Coq.Arith.EqNat.
Require Import Coq.omega.Omega.
Require Import Coq.Lists.List.
Require Export Coq.Strings.String.
Import ListNotations.
Definition total_map (A:Type) := string -> A.
Definition state := total_map nat.
Inductive sinstr : Type :=
| SPush : nat -> sinstr
| SLoad : string -> sinstr
| SPlus : sinstr
| SMinus : sinstr
| SMult : sinstr.
Definition s_execute_instr (st : state) (stack : list nat)
           (instr : sinstr)
  : option (list nat) :=
  match instr with
  | SPush n => Some (n :: stack)
  | SLoad x => Some (st x :: stack)
  | SPlus => match stack with
             | x :: y :: stack' => Some (x+y :: stack')
             | _ => None
             end
  | SMinus => match stack with
              | x :: y :: stack' => Some (y-x :: stack')
              | _ => None
              end
  | SMult => match stack with
             | x :: y :: stack' => Some (x*y :: stack')
             | _ => None
             end
  end.
Fixpoint s_execute (st : state) (stack : list nat)
         (prog : list sinstr)
  : option (list nat) :=
  match prog with
  | [] => Some stack
  | instr :: prog' => match (s_execute_instr st stack instr) with
                      | Some stack' => s_execute st stack' prog'
                      | None => None
                      end
  end.
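As a quick sanity check that these definitions compute as intended, here is a small example; the name s_execute_example and the placeholder state fun _ => 0 are arbitrary. Note that SMinus subtracts the top of the stack from the element below it:
Example s_execute_example :
  (* push 5, push 3, then SMinus computes 5 - 3 *)
  s_execute (fun _ => 0) [] [SPush 5; SPush 3; SMinus] = Some [2].
Proof. reflexivity. Qed.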
And here is my attempt at a theorem and its proof:
Theorem s_execute_relational : forall (l1 l2: list sinstr) (sk sk': list nat) (st : state),
s_execute st sk l1 = Some sk' ->
s_execute st sk (l1 ++ l2) = s_execute st sk' l2.
Proof.
intros l1 l2 sk sk' st H.
induction l1 as [|l1' l1].
- inversion H. reflexivity.
-
The current status is:
l1' : sinstr
l1, l2 : list sinstr
sk, sk' : list nat
st : state
H : s_execute st sk (l1' :: l1) = Some sk'
IHl1 : s_execute st sk l1 = Some sk' -> s_execute st sk (l1 ++ l2) = s_execute st sk' l2
============================
s_execute st sk ((l1' :: l1) ++ l2) = s_execute st sk' l2
I've gone down this path because I think I need to use induction somehow, but at this point I'm not sure how to proceed.
I tried induction on l2 as well, but that doesn't seem to get me anywhere either:
Theorem s_execute_relational : forall (l1 l2: list sinstr) (sk sk': list nat) (st : state),
s_execute st sk l1 = Some sk' ->
s_execute st sk (l1 ++ l2) = s_execute st sk' l2.
Proof.
intros l1 l2 sk sk' st H.
induction l2 as [|l2' l2].
- simpl. rewrite <- H. replace (l1 ++ []) with l1.
+ reflexivity.
+ symmetry. apply app_nil_r.
-
l1 : list sinstr
l2' : sinstr
l2 : list sinstr
sk, sk' : list nat
st : state
H : s_execute st sk l1 = Some sk'
IHl2 : s_execute st sk (l1 ++ l2) = s_execute st sk' l2
============================
s_execute st sk (l1 ++ l2' :: l2) =
s_execute st sk' (l2' :: l2)
It feels strange asking this type of question on SO because it's not really a reusable question and the title is bad, but I'm not sure how to improve on that front either.
You should not introduce all the variables as you do in the first line. You should first look at your recursive functions and ask yourself two questions:
What are the recursive functions and their "structurally recursive arguments" here? You may have noticed that when Coq accepts a recursive definition, it tells you with respect to which argument it is structurally recursive.
What happens to the arguments that are not structurally recursive: do they change between recursive calls or not?
Answer to question 1:
In your case, we have two main recursive functions List.app and s_execute. The recursive argument to s_execute is l1 on the left hand side of the implication. The recursive argument to s_execute is l1 ++ l2 in the left-hand side of the final equality, and the recursive argument to s_execute is only l2 in the right hand side. Because l1 ++ l2 is in position of a recursive argument, we can now look at the recursive argument of app by looking at its code, and we see that the argument that decreases structurally at the recursive call is again l1. This gives a strong feeling that induction should be performed on l1.
Answer to question 2:
s_execute takes three arguments. The state does not change during execution, but the stack does. So you can fix st for the whole proof, but the stack argument should not be fixed. A similar observation appears for app: the second argument does not change during recursive calls.
Practically, you can start your proof with
intros l1 l2.
induction l1 ....
Don't go any further in the intros, because the stack should be left flexible, you will need this flexibility when using the induction hypothesis.
Just for fun, you can try introducing more arguments, but then you have to free the flexible ones by using the revert tactic, like so:
intros l1 l2 sk sk' st; revert sk.
induction l1 as ....
Here only sk has to be freed (or unfixed, or reverted).
This is actually a very good question, and the need to avoid fixing arguments that will need to change in uses of the induction hypothesis pops up regularly in formal proofs.
Later edit
Here is how I started your proof:
Theorem s_execute_relational : forall (l1 l2: list sinstr) (sk sk': list nat) (st : state),
s_execute st sk l1 = Some sk' ->
s_execute st sk (l1 ++ l2) = s_execute st sk' l2.
Proof.
intros l1 l2 sk sk' st; revert sk.
induction l1 as [ | n l1 Ih].
simpl; intros sk [= skk']; rewrite skk'; easy.
Now we are in the induction step case. The stack is still universally quantified in the goal's conclusion. So the induction hypothesis and the goal are actually talking about two potentially different stacks. The next step is to fix an arbitrary stack to reason on the conclusion.
intros sk.
Then we compute,
simpl.
We are reasoning about a symbolic execution of the code, and we don't know what (s_execute_instr st sk n) will return, so we need to cover both cases; this is what the destruct step does.
destruct (s_execute_instr st sk n) as [sk1 | ].
In the first case (for the execution of (s_execute_instr st sk n)), a new stack sk1 appears, on which execution will proceed, and we know that
executing l1 from that stack leads exactly to Some sk'. Let's give the name complete_l1 to that new hypothesis. Now it happens that the proof can be finished by instantiating the induction hypothesis on this new stack.
intros complete_l1.
now rewrite (Ih sk1).
There remains the other case produced by the destruct step, but this case contains a self-inconsistent assumption of the form None = Some sk'. The easy tactic knows how to get rid of this (actually easy relies on discriminate, which implements what I like to call the non-confusion property of data types).
easy.
Qed.
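For reference, here is the same script assembled in one place from the fragments above:
Theorem s_execute_relational : forall (l1 l2: list sinstr) (sk sk': list nat) (st : state),
    s_execute st sk l1 = Some sk' ->
    s_execute st sk (l1 ++ l2) = s_execute st sk' l2.
Proof.
  intros l1 l2 sk sk' st; revert sk.
  induction l1 as [ | n l1 Ih].
    simpl; intros sk [= skk']; rewrite skk'; easy.
  intros sk.
  simpl.
  destruct (s_execute_instr st sk n) as [sk1 | ].
    intros complete_l1.
    now rewrite (Ih sk1).
  easy.
Qed.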
Please tell me: what was missing in your attempt? Was it the destruct step?
Eventually, I figured it out.
Theorem s_execute_relational : forall (l1 l2: list sinstr) (sk sk': list nat) (st : state),
s_execute st sk l1 = Some sk' ->
s_execute st sk (l1 ++ l2) = s_execute st sk' l2.
Proof.
intros l1.
induction l1 as [| l1' l1].
- intros l2 sk sk' st H. simpl.
inversion H. reflexivity.
- intros l2 sk sk' st H.
assert (forall (x:sinstr) (xs ys: list sinstr), (x::xs) ++ys = x::(xs++ys)) as app_comm_cons.
{
auto.
}
rewrite app_comm_cons.
simpl in *.
(* destruct with eqn: so that the hypothesis H can be rewritten too;
   a plain destruct only updates the goal, not H *)
destruct (s_execute_instr st sk l1') as [sk1 | ] eqn:E; rewrite E in H; simpl in H.
+ eapply IHl1. apply H.
+ inversion H.
Qed.
Related
I have been working my way through Logical Foundations and have gotten very stuck on the transitivity of subsequences exercise.
(** **** Exercise: 2 stars, advanced (subsequence)
A list is a _subsequence_ of another list if all of the elements
in the first list occur in the same order in the second list,
possibly with some extra elements in between. For example,
- (Optional, harder) Prove [subseq_trans] that subsequence is
transitive -- that is, if [l1] is a subsequence of [l2] and [l2]
is a subsequence of [l3], then [l1] is a subsequence of [l3].
Hint: choose your induction carefully! *)
Inductive subseq : list nat -> list nat -> Prop :=
| sseq_e (l2 : list nat) : subseq ([]) (l2)
| sseq_m (l1 l2 : list nat) (n : nat) (H: subseq l1 l2) : subseq (n::l1) (n::l2)
| sseq_nm (l1 l2 : list nat) (n : nat) (H: subseq l1 l2) : subseq (l1) (n::l2)
.
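As a sanity check on the definition, here is a small example derivation; subseq_example is just an ad-hoc name, and it assumes the book's list notations are in scope:
Example subseq_example : subseq [1;3] [1;2;3].
Proof.
  apply sseq_m.   (* match the 1s *)
  apply sseq_nm.  (* skip the 2 *)
  apply sseq_m.   (* match the 3s *)
  apply sseq_e.
Qed.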
Theorem subseq_trans : forall (l1 l2 l3 : list nat),
subseq l1 l2 ->
subseq l2 l3 ->
subseq l1 l3.
Proof.
intros l1 l2 l3 H H0.
induction H.
- apply sseq_e.
- induction l3.
-- inversion H0.
-- inversion H0.
--- apply sseq_m.
I am having trouble getting the right induction hypothesis. I have tried a number of approaches and end up with a situation where, in my assumptions, I have something like subseq l2 (x::l3), but then I need to prove subseq l2 l3, which seems like a dead end. Any pointers in the right direction would be much appreciated.
I have something like subseq l2 x::l3 but then I need to prove subseq l2 l3 which seems like a dead end.
That experience suggests generalizing the induction hypothesis over l3.
(* near the beginning of the proof *)
generalize dependent l3.
induction H.
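For completeness, here is one way the proof can be finished along those lines. This is only a sketch against the subseq definition above; the names in the as patterns (Hs, IH, m, l3', ...) are arbitrary, and the cons cases use an inner induction on l3:
Theorem subseq_trans : forall (l1 l2 l3 : list nat),
  subseq l1 l2 ->
  subseq l2 l3 ->
  subseq l1 l3.
Proof.
  intros l1 l2 l3 H H0.
  generalize dependent l3.
  induction H as [l2' | l1' l2' n Hs IH | l1' l2' n Hs IH].
  - (* l1 = [] *)
    intros l3 H0. apply sseq_e.
  - (* l1 = n :: l1', l2 = n :: l2' *)
    intros l3. induction l3 as [| m l3' IHl3]; intros H0.
    + inversion H0.
    + inversion H0; subst.
      * apply sseq_m. apply IH. assumption.
      * apply sseq_nm. apply IHl3. assumption.
  - (* l2 = n :: l2' *)
    intros l3. induction l3 as [| m l3' IHl3]; intros H0.
    + inversion H0.
    + inversion H0; subst.
      * apply sseq_nm. apply IH. assumption.
      * apply sseq_nm. apply IHl3. assumption.
Qed.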
Disclaimer: This is not a homework question.
I am trying to implement my own version of rev_append in Coq, and then to prove that it is equivalent to the built in version. The following is my implementation.
Fixpoint my_rev_append (l1 l2 : list nat) : (list nat) * (list nat) :=
match l1 with
| nil => (l1, l2)
| hd :: tl => my_rev_append tl (hd :: l2)
end.
Then I tried to prove that it is equivalent to rev_append
Theorem my_rev_append_correct : forall (l1 l2 : list nat),
my_rev_append l1 l2 = (nil, (rev_append l1 l2)).
Proof.
intros l1 l2.
induction l1.
reflexivity.
And then I hit the following goal, from which I do not see a way to move forward.
IHl1 : my_rev_append l1 l2 = (nil, rev_append l1 l2)
============================
my_rev_append (a :: l1) l2 = (nil, rev_append (a :: l1) l2)
It is not possible to use IHl1, because the RHS of the current subgoal is (nil, rev_append (a :: l1) l2), which does not contain (nil, rev_append l1 l2). I tried running the simpl tactic on it, but that didn't help: IHl1 is still not applicable.
I totally understand that I can prove this by changing the | nil => (l1, l2) line in my_rev_append into | nil => l2. However, is there any possibility of proving this theorem without changing the definition of my_rev_append?
Your definition has l2 varying through the induction. Therefore, the proof of the theorem should also have l2 varying through the induction. To do this, do not introduce l2 before starting the induction, leaving it in the goal. The inductive hypothesis, whose type is modeled on this goal, then allows you to pass a different value for it in the recursive case.
Theorem my_rev_append_correct : forall (l1 l2 : list nat), my_rev_append l1 l2 = (nil, rev_append l1 l2).
Proof.
induction l1 as [ | x l1 rec]; intros l2.
- reflexivity.
- apply rec.
Qed.
Consider the following proposition in Coq:
Inductive subseq : list nat -> list nat -> Prop :=
| nil_s : forall (l: list nat), subseq nil l
| cons_in l1 l2 x (H: subseq l1 l2) : subseq (x :: l1) (x :: l2)
| cons_nin l1 l2 x (H: subseq l1 l2) : subseq l1 (x :: l2)
.
Lemma subseq_remove_rewritten: forall (x:nat) (l1' l1 l2 : list nat),
subseq l1' l2 ->
l1' = (x :: l1) ->
subseq l1 l2.
Proof.
intros x l1' l1 l2 H1 H2.
induction H1.
- discriminate.
- injection H2 as H3 H4.
rewrite H4 in H1.
apply cons_nin. apply H1.
- apply IHsubseq in H2.
apply cons_nin. apply H2.
Qed.
Lemma subseq_remove: forall (x:nat) (l1 l2 : list nat),
subseq (x :: l1) l2 ->
subseq l1 l2.
Proof.
intros x l1 l2 H.
apply subseq_remove_rewritten with (x:=x) (l1':=x :: l1).
apply H.
reflexivity.
Qed.
I worked in Isabelle before Coq. There, originally, the induction tactic could not solve this goal directly, and the trick was to come up with a lemma like subseq_remove_rewritten and then prove the original goal. This is the situation described in the manual Isabelle/HOL: A Proof Assistant for Higher-Order Logic. Later, the tactic became smarter and one can write patterns on which to abstract. So the proof is written like this:
lemma
assumes "subseq (x # l1) l2"
shows "subseq l1 l2"
using assms
apply(induction "x # l1" "l2" rule: subseq.induct)
apply simp
apply(intro subseq.intros(3),simp)
by (intro subseq.intros(3))
I was wondering if Coq has a similar way to avoid proving a lemma like subseq_remove_rewritten and go directly to prove subseq_remove.
You can use the dependent induction tactic (documented here). For example:
Require Import Coq.Lists.List.
Import ListNotations.
Require Import Coq.Program.Equality. (* Needed to use the tactic *)
Inductive subseq : list nat -> list nat -> Prop :=
| nil_s : forall (l: list nat), subseq nil l
| cons_in l1 l2 x (H: subseq l1 l2) : subseq (x :: l1) (x :: l2)
| cons_nin l1 l2 x (H: subseq l1 l2) : subseq l1 (x :: l2)
.
Lemma subseq_remove: forall (x:nat) (l1 l2 : list nat),
subseq (x :: l1) l2 ->
subseq l1 l2.
Proof.
intros x l1 l2 H.
dependent induction H generalizing x.
- now apply cons_nin.
- eauto using cons_nin.
Qed.
Unfortunately, though this tactic has been around for a while, it is still described as experimental in the reference manual, and I don't know if the Coq developers have any plans of improving it in the future. It has a few deficiencies, such as not allowing the user to name the variables and hypotheses used in the induction proof. I personally prefer to add the equality assumptions to the proof myself, as in your first attempt, or to reformulate the definition of subseq as a Fixpoint, so that you can invert the hypothesis by simplification. For example:
Require Import Coq.Lists.List.
Import ListNotations.
Fixpoint subseq (l1 l2 : list nat) : Prop :=
match l1, l2 with
| [], _ => True
| x1 :: l1, [] => False
| x1 :: l1, x2 :: l2 => x1 = x2 /\ subseq l1 l2 \/ subseq (x1 :: l1) l2
end.
Lemma subseq_nin x2 l1 l2 : subseq l1 l2 -> subseq l1 (x2 :: l2).
Proof. destruct l1 as [|x1 l1]; simpl; eauto. Qed.
Lemma subseq_remove: forall (x:nat) (l1 l2 : list nat),
subseq (x :: l1) l2 ->
subseq l1 l2.
Proof.
intros x l1 l2 H.
induction l2 as [|x2 l2 IH]; try easy.
destruct H as [[<- H]|H]; eauto using subseq_nin.
Qed.
In the Logic chapter, a tail-recursive version of the list reverse function is introduced. We need to prove that it works correctly:
Fixpoint rev_append {X} (l1 l2 : list X) : list X :=
match l1 with
| [] => l2
| x :: l1' => rev_append l1' (x :: l2)
end.
(* Tail recursion rev *)
Definition tr_rev {X} (l : list X) : list X :=
rev_append l [].
But before proving it I wanted to prove a lemma:
Lemma rev_append_app: forall (X: Type) (x: X) (l : list X),
rev_append l [x] = rev_append l [] ++ [x].
Proof.
intros X x l. induction l as [| h t IH].
- simpl. reflexivity.
- simpl.
Here I am stuck:
X : Type
x, h : X
t : list X
IH : rev_append t [x] = rev_append t [ ] ++ [x]
============================
rev_append t [h; x] = rev_append t [h] ++ [x]
What to do next?
As you noticed during your attempted proof, when taking the induction step from rev_append l [x] to rev_append (h :: t) [x], you end up with the term rev_append t [h; x] after simplification. The induction step does not lead towards the base case of the rev_append function, but to another recursive invocation that you cannot simplify.
Notice how the induction hypothesis that you would like to apply makes a statement about rev_append t [x] for some fixed x, but in your goal, the extra h list element before it gets in the way, and the induction hypothesis is of no use.
This is what Bubbler's answer was referring to when stating that your induction hypothesis is not strong enough: it only makes a statement about the case where the second argument is a list with a single element. But even after just the induction step (one recursive application), that list already has at least two elements!
As suggested by Bubbler, the helper lemma rev_append l (l1 ++ l2) = rev_append l l1 ++ l2 is stronger and does not have this problem: when used as the induction hypothesis, it can be applied to rev_append t [h; x] as well, allowing you to prove equality with rev_append t [h] ++ [x].
When attempting to prove the helper lemma, you may get stuck (like I did) in the same way as when proving rev_append_app itself. The crucial bit of advice that helped me proceed was to be careful which of the universally quantified variables you introduce before you start the induction. If you specialize any of them too early on, you might weaken your induction hypothesis and become stuck again. You may need to change the order of these quantified variables or use the generalize dependent tactic (see the Tactics chapter of Logic Foundations).
You can see that the induction hypothesis IH is not strong enough to prove the goal. Here what you need is a more general statement to prove in the first place. You can find more exercises dedicated to this topic here. (Actually, tail-recursive reverse is one of the exercises.)
In your case, the fully generalized statement could be as follows:
Lemma rev_append_app': forall (X: Type) (l l1 l2 : list X),
rev_append l (l1 ++ l2) = rev_append l l1 ++ l2.
Proving this by induction is trivial. Then you can prove your own statement as a corollary of this one:
Corollary rev_append_app: forall (X: Type) (x: X) (l : list X),
rev_append l [x] = rev_append l [] ++ [x].
Proof. intros. apply (rev_append_app' _ _ [] [x]). Qed.
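For completeness, the induction on rev_append_app' can be carried out like this; it is a sketch against the rev_append defined above, keeping l1 and l2 quantified until after the induction on l:
Lemma rev_append_app' : forall (X: Type) (l l1 l2 : list X),
  rev_append l (l1 ++ l2) = rev_append l l1 ++ l2.
Proof.
  intros X l; induction l as [| h t IH]; intros l1 l2.
  - reflexivity.
  - simpl.
    (* the accumulator changed from l1 to h :: l1, so instantiate the IH accordingly *)
    apply (IH (h :: l1) l2).
Qed.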
use the generalize dependent tactic like this:
Lemma rev_append_app: forall (X: Type) (l l1: list X) (x : X),
rev_append l (l1 ++ [x]) = rev_append l l1 ++ [x].
intros.
generalize dependent l1.
induction l as [| h t IH].
- intros.
easy.
- intros.
apply (IH (h::l1)).
Qed.
I'm currently working through the Logical Foundations book and I'm stuck on the last part of Exercise: 4 stars, advanced (subsequence) (subseq_trans).
Here is my definition for subseq:
Inductive subseq { X : Type } : list X -> list X -> Prop :=
| s1 : forall l, subseq [] l
| s2 : forall (x : X) (l l': list X), subseq l l' -> subseq l (x :: l')
| s3 : forall (x : X) (l l' : list X), subseq l l' -> subseq (x :: l) (x :: l').
And here is my proof for subseq_trans:
Theorem subseq_trans : forall (X : Type) (l1 l2 l3 : list X),
subseq l1 l2 -> subseq l2 l3 -> subseq l1 l3.
Proof.
intros X l1 l2 l3 H H'.
generalize dependent H.
generalize dependent l1.
induction H'.
{ intros l1 H. inversion H. apply s1. }
{ intros l1 H. apply s2. apply IHH'. apply H. }
{ intros l1 H. apply s2. apply IHH'. apply s2 in H. (* Unable to find an instance for the variable x. *) }
Here is the proof context before the failed apply:
1 subgoal
X : Type
x : X
l, l' : list X
H' : subseq l l'
IHH' : forall l1 : list X, subseq l1 l -> subseq l1 l'
l1 : list X
H : subseq l1 (x :: l)
______________________________________(1/1)
subseq l1 l
I have tried explicitly instantiating x like this:
apply s2 with (x:=x) in H
But that gives me:
No such bound variable x (possible names are: x0, l0 and l'0).
Thanks in advance.
As diagnosed by @tbrk, this is a renaming done by Coq in the presence of maximal implicit arguments (see this issue). This is due to the declaration of {X : Type} in the definition of subseq.
One solution is to use @ to turn all implicit arguments into non-implicit ones and avoid this renaming issue. This would give:
apply @s2 with (x:=x) in H.
You may find the eapply tactic useful to see what is going on.
...
{ intros l1 H. apply s2. apply IHH'. eapply s2 in H.
gives subseq l1 (?1 :: x :: l), where you can instantiate the ?1 with whatever you want, but, as you can now see, applying s2 forward from that assumption doesn't advance the proof.
Another possibility is to apply s2 to x and then to the assumption H:
apply (s2 x) in H.
I also find it strange that apply s2 with (x:=x) does not work. Coq seems to be doing some renaming behind the scenes, probably to avoid confusion with the x in the proof context. The following sequence applies without error:
rename x into y. apply s2 with (x:=y) in H.