Proving a theorem in Coq using almost only rewrites - no "cleverness"

I'm trying to formulate a problem so that only rewriting will be sufficient
to prove the goal. I want to avoid "clever" uses of Propositions and instead use
bools that can be computed by Coq.
I define a boolean test function member that returns true iff an element is in a list,
and different that returns true iff no element is in both lists.
I want to prove that I can rewrite different into an expression only using member.
Theorem different_member: forall xs ys y,
different xs ys = ((negb (member y ys)) || negb (member y xs)).
(The (negb X || Y) form is the boolean implication).
As a warm-up and reality check I want to prove
Theorem diff_mem1:
forall xs ys,
different xs ys = true -> forall y, member y xs = true -> ~ member y ys = true.
The way to proceed is by induction on xs, but I keep messing up on the final step.
Very grateful for some help on both of the theorems! Here is the relevant part of development.
Require Import Arith.
Require Import List.
Require Import Bool.
Import List.ListNotations.
Open Scope list.
Open Scope bool.
Fixpoint member x ys :=
  match ys with
  | [] => false
  | y :: ys' => (beq_nat x y) || (member x ys')
  end.
Lemma mem1: forall x, member x [] = false.
Proof. auto. Qed.
Lemma mem2: forall x y l, member x (y::l) = (beq_nat x y) || (member x l).
Proof. auto. Qed.
Fixpoint different xs ys :=
  match xs with
  | [] => true
  | x::xs' => (negb (member x ys)) && (different xs' ys)
  end.
Lemma diff1: forall ys, different [] ys = true.
Proof. auto. Qed.
Lemma diff2: forall x xs ys,
different (x::xs) ys = (negb (member x ys)) && (different xs ys).
Proof. auto. Qed.
Theorem diff_mem1: forall xs ys, different xs ys = true -> forall y, member y xs = true -> ~ member y ys = true.
Proof.
Abort.
Theorem different_member:
forall xs ys y, different xs ys =
((negb (member y ys)) || negb (member y xs)).
Proof.
Abort.
EDIT:
Here is a proof of the diff_mem1 theorem. (sleeping on it, and thinking before starting to bang on it in ProofGeneral sometimes helps...). The proof of the other theorem follows the same structure.
However, the question and ultimate goal remains how to solve it entirely with rewrites and hints, so that one could (almost) just do induction xs; auto.
Theorem diff_mem1: forall xs ys,
different xs ys = true -> forall y, member y xs = true -> ~ member y ys = true.
Proof.
induction xs as [|a xs]; intros ys Hdiff y Hxs Hys.
- inversion Hxs.
- (* we assume y is a member of ys, and of (a::xs) *)
(* it is also assumed that (a::xs) is different from ys *)
(* consider the cases y=a and y<>a *)
remember (beq_nat y a) as Q; destruct Q.
+ (* this case is absurd since y is a member of both ys and (y::xs) *)
apply eq_sym in HeqQ; apply beq_nat_true in HeqQ.
subst a.
simpl in Hdiff.
rewrite Hys in Hdiff.
inversion Hdiff.
+ (* this case is also absurd since y is a member of both ys and xs *)
simpl in Hdiff, Hxs.
rewrite <- HeqQ in Hxs.
simpl in Hxs.
rewrite Bool.andb_true_iff in Hdiff; destruct Hdiff as [_ Hdiff1].
destruct (IHxs ys Hdiff1 y Hxs Hys).
Qed.
EDIT2:
I will close this as @Arthur gave the correct answer as to why I failed in the initial attempt, but for completeness I want to add a solution along the lines of what I was aiming for.
I wrote an Ltac tactic my_simple_rewrite that does a number of try rewrite lemma_x in * steps (about 20 different lemmas, rewritten only from left to right). They are simple lemmas about bools, plus mem1, mem2, diff1 and diff2 from above. To prove the theorem I used it, specified only the induction variable xs and which bool expressions to do case analysis on (using a home-made Ltac bool_destruct), and got the following proof.
Theorem different_member:
forall xs ys, different xs ys = true ->
forall y, ((negb (member y ys)) || negb (member y xs)) = true.
Proof.
induction xs as [| a xs]; intros; my_simple_rewrite.
- congruence.
- try
match goal with
| [ HH:_ |- _] => (generalize (IHxs ys HH y); intro IH)
end;
bool_destruct (member a ys);
bool_destruct (member y ys);
bool_destruct (member a xs);
bool_destruct (member y xs);
bool_destruct (beq_nat y a);
my_simple_rewrite;
congruence.
Qed.
The idea is that this can almost be automated. Picking which terms to destruct can be automated, and notice that it tries to instantiate the induction hypothesis with whatever hypothesis it can find ("if it works, fine! otherwise try the next alternative...").
For future reference, the full development is in https://gist.github.com/larsr/10b6f4817b5117b335cc
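Tactics along these lines could look roughly like the following sketch. It is only illustrative: the actual my_simple_rewrite in the gist rewrites with about 20 lemmas, and the real bool_destruct may differ in detail.
(* Illustrative sketch only - the real definitions live in the gist linked above. *)
Ltac my_simple_rewrite :=
  rewrite ?mem1, ?mem2, ?diff1, ?diff2,
          ?Bool.orb_false_r, ?Bool.orb_true_r,
          ?Bool.andb_true_r, ?Bool.andb_false_r in *.
(* Case analysis on a boolean expression, keeping the equation for congruence. *)
Ltac bool_destruct b :=
  let H := fresh "Hb" in destruct b eqn:H.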

The problem with your result is that it doesn't hold. For instance, try
Compute different [2] [1; 2]. (* false *)
Compute (negb (member 1 [2]) || negb (member 1 [1; 2])). (* true *)
This happens because, in order to obtain the converse, we need the right-hand side to be valid for all y. The correct form is:
forall xs ys,
  different xs ys = true <->
  (forall y, negb (member y xs) || negb (member y ys) = true).
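For concreteness, here is a minimal sketch (not part of the original answer) of the right-to-left direction of this equivalence, using only the member/different definitions and the Arith/Bool imports from the question; the name member_different is chosen here just for illustration.
Lemma member_different : forall xs ys,
  (forall y, negb (member y xs) || negb (member y ys) = true) ->
  different xs ys = true.
Proof.
  induction xs as [|a xs IH]; intros ys H; simpl.
  - reflexivity.
  - apply andb_true_intro; split.
    + (* instantiating the hypothesis at y := a shows that a is not in ys *)
      specialize (H a). simpl in H.
      rewrite <- beq_nat_refl in H. simpl in H.
      exact H.
    + (* the hypothesis restricted to xs feeds the induction hypothesis *)
      apply IH. intros y.
      specialize (H y). simpl in H.
      destruct (beq_nat y a) eqn:Hya; rewrite Hya in H; simpl in H.
      * (* y = a: then H says y is not in ys *)
        rewrite H. apply orb_true_r.
      * (* y <> a: the hypothesis is already in the right shape *)
        exact H.
Qed.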
Nevertheless, you are right that specifying certain results as boolean equations makes them more convenient to use in many situations. This style is heavily used, for instance, in the Ssreflect library, where they write theorems such as:
eqn_leq : forall m n, (m == n) = (m <= n) && (n <= m)
Here, the == and <= operators are boolean functions that test for equality and order on natural numbers. The first one is generic, and works for any type that is declared with a boolean equality function (an eqType in Ssreflect).
Here's a version of your theorem using Ssreflect:
Require Import Ssreflect.ssreflect Ssreflect.ssrfun Ssreflect.ssrbool.
Require Import Ssreflect.ssrnat Ssreflect.eqtype Ssreflect.seq.
Section Different.
Variable T : eqType.
Implicit Types xs ys : seq T.
Fixpoint disjoint xs ys :=
match xs with
| [::] => true
| x :: xs' => (x \notin ys) && disjoint xs' ys
end.
Lemma disjointP xs ys :
reflect (forall x, x \in xs -> x \notin ys)
(disjoint xs ys).
Proof.
elim: xs=> [|x xs IH] /=; first exact: ReflectT.
apply/(iffP andP)=> [[x_nin /IH {IH} IH] x'|xsP].
by rewrite inE=> /orP [/eqP ->|] //; auto.
apply/andP; rewrite xsP /= ?inE ?eqxx //.
apply/IH=> x' x'_in; apply: xsP.
by rewrite inE x'_in orbT.
Qed.
End Different.
I've renamed different to disjoint, and have used the Ssreflect list membership operators \in and \notin, available for lists with elements in any eqType. Note that the statement of disjointP has an implicit conversion from bool to Prop (which maps b to b = true), and that it is stated with the reflect predicate, which you can think of as being like the "if and only if" connective, but relating a Prop to a bool.
Ssreflect makes extensive use of the reflect predicate and the view mechanism (the / signs you see in the proof script) to convert between boolean and propositional statements of the same fact. Thus, although we can't state the equivalence as a plain boolean equality, we can keep much of the convenience with the reflect predicate. For instance:
Goal forall n, n \in [:: 1; 2; 3] -> n \notin [:: 4; 5; 6].
Proof. by apply/disjointP. Qed.
What happened here is that Coq used disjointP to convert the above goal to disjoint [:: 1; 2; 3] [:: 4; 5; 6] (the [:: ... ] is just Ssreflect notation for lists), and could find that that goal was true just by computation.

Related

Theorem that finding in a list works properly

I am proving a theorem about finding an element in a list. I got stuck proving that if FIND actually returns something, then the predicate holds for the returned element. What kind of lemmas or strategy may help for proving this kind of theorem? It looks like induction on the list alone is not enough in this case, but the theorem is surely true.
(*FIND P = OPTION_MAP (SND :num # α -> α ) ∘ INDEX_FIND (0 :num) P*)
Require Import List.
Require Import Nat.
Fixpoint INDEX_FIND {a:Type} (i:nat) (P:a->bool) (l:list a) :=
  match l with
  | nil => None
  | (h::t) => if P h then Some (i,h) else INDEX_FIND (S i) P t
  end.
Definition FIND {a:Type} (P:a->bool) (l:list a)
:= (option_map snd) (INDEX_FIND 0 P l).
Theorem find_prop {a:Type} P l (x:a):
(FIND P l) = Some x
->
(P x)=true.
Proof.
unfold FIND.
unfold option_map.
induction l.
+ simpl.
intro H. inversion H.
+ simpl.
destruct (P a0).
- admit.
- admit.
Admitted.
(This is a translation of a definition from HOL4, which also lacks this kind of theorem.)
HOL version of the theorem:
Theorem find_prop:
FIND (P:α->bool) (l:α list) = SOME x ⇒ P x
Proof
cheat
QED
It looks like what you are missing is an equation relating P a0 and its destructed value. This can be obtained with the eqn: variant of destruct: destruct (P a0) eqn:H.
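To see the hint in action, here is a plain-Coq sketch (not part of this answer) that uses destruct ... eqn: together with a generalization over the starting index, which the next answer also relies on; the primed names just avoid clashing with the definitions above.
Lemma index_find_prop' {a:Type} (P:a->bool) (x:a) (l:list a) :
  forall i j, INDEX_FIND i P l = Some (j, x) -> P x = true.
Proof.
  induction l as [|h t IH]; intros i j H; simpl in H.
  - (* empty list: None = Some _ is absurd *)
    discriminate H.
  - (* record the equation for P h and push it into H *)
    destruct (P h) eqn:Ph; rewrite Ph in H; simpl in H.
    + inversion H; subst; exact Ph.
    + exact (IH (S i) j H).
Qed.
Theorem find_prop' {a:Type} (P:a->bool) (l:list a) (x:a) :
  FIND P l = Some x -> P x = true.
Proof.
  unfold FIND, option_map.
  destruct (INDEX_FIND 0 P l) as [[i y]|] eqn:Hfind; simpl; intros H.
  - inversion H; subst. eapply index_find_prop'; exact Hfind.
  - discriminate H.
Qed.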
You may want to try to strengthen the property before proving your theorem. Using the SSReflect proof language, you can try the following route.
Lemma index_find_prop {a:Type} P (x:a) l :
forall i j, (INDEX_FIND i P l) = Some (j, x) -> P x = true.
Proof.
elim: l => [//=|x' l' IH i j].
rewrite /INDEX_FIND.
case Px': (P x').
- by case=> _ <-.
- exact: IH.
Qed.
Lemma opt_snd_inv A B X x :
option_map (@snd A B) X = Some x -> exists j, X = Some (j, x).
Proof.
case: X => ab; last by [].
rewrite (surjective_pairing ab) /=.
case=> <-.
by exists ab.1.
Qed.
Theorem find_prop {a:Type} P l (x:a):
(FIND P l) = Some x -> (P x)=true.
Proof.
rewrite /FIND => /(@opt_snd_inv _ _ (INDEX_FIND 0 P l) x) [j].
exact: index_find_prop.
Qed.
I'm confident there are shorter proofs ;)

How to prove that if a list equals its reverse, then its tail also equals its reverse?

Theorem rev_cons :
forall X x (l : list X),
x :: l = rev (x :: l) -> l = rev l.
This is just so intuitive to me that it blows my mind that I can't make any headway on it. I start off with an induction on l, solve the base case using reflexivity and immediately get stuck on the other.
What exactly am I missing here?
I don't think it's true. Case in point:
Require Import List.
Axiom rev_cons :
forall X x (l : list X),
x :: l = rev (x :: l) -> l = rev l.
Theorem argh : False.
assert (H := rev_cons _ 1 (2 :: 1 :: nil) eq_refl).
inversion H.
Qed.

Prove inequality of complex objects

I have a pair of maps that are trivially incompatible. I'm wondering what's the graceful/automated way to get a proof of it.
Require Import Coq.Strings.String.
(* Prelude: the total_map data structure from Software Foundations, slightly modified *)
Definition total_map := string -> nat.
Definition empty_st : total_map := (fun _ => 0).
Definition t_update (m : total_map) k v := fun k' => if string_dec k k' then v else m k'.
Notation "a '!->' x" := (t_update empty_st a x) (at level 100).
Notation "x '!->' v ';' m" := (t_update m x v) (at level 100, v at next level, right associativity).
(* The actual goal I'm trying to solve *)
Definition X: string := "X".
Definition Y: string := "Y".
Goal forall n, (X !-> n; Y !-> n) <> (X !-> 1; Y !-> 2).
Proof.
intros n contra.
remember (X !-> n; Y !-> n) as st.
remember (st X) as n1.
assert (n1 = n). { rewrite Heqn1; rewrite Heqst; cbv; reflexivity. }
assert (n1 = 1). { rewrite Heqn1; rewrite contra; cbv; reflexivity. }
remember (st Y) as n2.
assert (n2 = n). { rewrite Heqn2; rewrite Heqst; cbv; reflexivity. }
assert (n2 = 2). { rewrite Heqn2; rewrite contra; cbv; reflexivity. }
congruence.
Qed.
In order to automate this, you need to have a precise description of your proof strategy. Here is one possible proof strategy:
To prove an inequality of total_maps:
First introduce the equality hypothesis.
Then, for every key that's been added to either map, add the hypothesis that the value associated to that key is the same in both maps.
Then simplify all such equality hypotheses by unfolding t_update, using that string_dec x x is true, and seeing if any other string_decs compute down.
Finally, solve the goal by congruence.
We can automate each of these steps. Altogether, it becomes:
Require Import Coq.Strings.String.
(* Prelude: the total_map data structure from Software Foundations, slightly modified *)
Definition total_map := string -> nat.
Definition empty_st : total_map := (fun _ => 0).
Definition t_update (m : total_map) k v := fun k' => if string_dec k k' then v else m k'.
Notation "a '!->' x" := (t_update empty_st a x) (at level 100).
Notation "x '!->' v ';' m" := (t_update m x v) (at level 100, v at next level, right associativity).
(* Automation *)
(* 1. First introduce the equality hypothesis. *)
Ltac start_proving_inequality H :=
intro H.
(* 2. Then, for every key that's been added to either map, add the hypothesis that the value associated to that key is the same in both maps. *)
(* To do this, we need a tactic that will pose a proof only if it does not already exist. *)
Ltac unique_pose_proof lem :=
let T := type of lem in
lazymatch goal with
| [ H : T |- _ ] => fail 0 "A hypothesis of type" T "already exists"
| _ => pose proof lem
end.
(* Maybe move this elsewhere? *)
Definition t_get (m : total_map) k := m k.
Ltac saturate_with_keys H :=
repeat match type of H with
| context[t_update _ ?k ?v]
=> unique_pose_proof (f_equal (fun m => t_get m k) H)
end.
(* 3. Then simplify all such equality hypotheses by unfolding `t_update`, using that `string_dec x x` is true, and seeing if any other `string_dec`s compute down. *)
Require Import Coq.Logic.Eqdep_dec.
Lemma string_dec_refl x : string_dec x x = left eq_refl.
Proof.
destruct (string_dec x x); [ apply f_equal | congruence ].
apply UIP_dec, string_dec.
Qed.
(* N.B. You can add more cases here to deal with other sorts of ways you might reduce [t_get] here *)
Ltac simplify_t_get_t_update_in H :=
repeat first [ progress cbv [t_get t_update empty_st] in H
| match type of H with
| context[string_dec ?x ?x] => rewrite (string_dec_refl x) in H
| context[string_dec ?x ?y]
=> let v := (eval cbv in (string_dec x y)) in
(* check that it fully reduces *)
lazymatch v with left _ => idtac | right _ => idtac end;
progress change (string_dec x y) with v in H
end ].
Ltac simplify_t_get_t_update :=
(* first we must change hypotheses of the form [(fun m => t_get m k) m = (fun m => t_get m k) m'] into [t_get _ _ = t_get _ _] *)
cbv beta in *;
repeat match goal with
| [ H : t_get _ _ = t_get _ _ |- _ ] => progress simplify_t_get_t_update_in H
end.
(* 4. Finally, solve the goal by `congruence`. *)
Ltac finish_proving_inequality := congruence.
(* Now we put it all together *)
Ltac prove_total_map_inequality :=
let H := fresh in
start_proving_inequality H;
saturate_with_keys H;
simplify_t_get_t_update;
finish_proving_inequality.
(* The actual goal I'm trying to solve *)
Definition X: string := "X".
Definition Y: string := "Y".
Goal forall n, (X !-> n; Y !-> n) <> (X !-> 1; Y !-> 2).
intros.
prove_total_map_inequality.
Qed.
Based on Jason Gross's answer and the fact that total_map returns values in a type with decidable equality, I've put together a bit of automation to deal with this. Note that this problem would probably be a very good fit for small-scale reflection.
(* TODO: don't bring trivial (n = n) or duplicated hypotheses into scope *)
(* Given two maps left and right, plus a lemma that they are equal, plus some key: assert that the values of the maps agree at the specified key *)
Ltac invert_total_map_equality_for_id lemma left right id :=
let H := fresh "H" in
assert (left id = right id) as H by (rewrite lemma; reflexivity);
cbv in H.
(* Recurse on the LHS map, extracting keys *)
Ltac invert_total_map_equality_left lemma left right left_remaining :=
match left_remaining with
| t_update ?left_remaining' ?id _ =>
invert_total_map_equality_for_id lemma left right id;
invert_total_map_equality_left lemma left right left_remaining'
| _ => idtac
end.
(* Recurse on the RHS map, extracting keys; move on to LHS once we've done all RHS keys *)
Ltac invert_total_map_equality_right lemma left right right_remaining :=
match right_remaining with
| t_update ?right_remaining' ?id _ =>
invert_total_map_equality_for_id lemma left right id;
invert_total_map_equality_right lemma left right right_remaining'
| _ => invert_total_map_equality_left lemma left right left
end.
(* Given a lemma that two total maps are equal, assert that their values agree at each defined key *)
Ltac invert_total_map_equality lem :=
let T := type of lem in
match T with
| ?left = ?right =>
match type of left with
| string -> nat =>
match type of right with
| string -> nat =>
invert_total_map_equality_right lem left right right
end
end
end.
Goal forall n, (X !-> n; Y !-> n) <> (X !-> 1; Y !-> 2).
Proof.
unfold not; intros.
invert_total_map_equality H.
congruence.
Qed.

Proving `forall x xs ys, subseq (x :: xs) ys -> subseq xs ys` in Coq

I have the following definition
Inductive subseq : list nat -> list nat -> Prop :=
| empty_subseq : subseq [] []
| add_right : forall y xs ys, subseq xs ys -> subseq xs (y::ys)
| add_both : forall x y xs ys, subseq xs ys -> subseq (x::xs) (y::ys)
.
Using this, I wish to prove the following lemma
Lemma del_l_preserves_subseq : forall x xs ys, subseq (x :: xs) ys -> subseq xs ys.
So, I tried looking at the proof of subseq (x :: xs) ys by doing destruct H.
Proof.
intros. induction H.
3 subgoals (ID 209)
x : nat
xs : list nat
============================
subseq xs [ ]
subgoal 2 (ID 216) is:
subseq xs (y :: ys)
subgoal 3 (ID 222) is:
subseq xs (y :: ys)
Why does the first subgoal ask me to prove subseq xs []? Shouldn't the destruct tactic know that the proof cannot be of the form empty_subseq since the type contains x :: xs and not []?
In general how do I prove the lemma that I am trying to prove?
Shouldn't the destruct tactic know that the proof cannot be of the form empty_subseq since the type contains x :: xs and not []?
In fact, destruct doesn't know that much. In the empty_subseq case it simply instantiates ys to [] and forgets that the first index was x :: xs. In general, this frequently leads to lost information in the context. Better alternatives:
Use inversion instead of destruct.
Use remember to ensure both type indices of subseq are variables before destruct. (remember (x :: xs) as xxs in H.) This more explicit goal management also works well with induction.
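For completeness, here is a sketch (not from the original answer) of the remember approach: the equation produced by remember is generalized before the induction so that it becomes part of the induction hypothesis. The primed name avoids clashing with the proof below.
Lemma del_l_preserves_subseq' : forall x xs ys, subseq (x :: xs) ys -> subseq xs ys.
Proof.
  intros x xs ys H.
  remember (x :: xs) as xxs eqn:Heq.
  (* push x, xs and the equation back into the goal,
     so that the induction hypothesis is general enough *)
  generalize dependent xs. generalize dependent x.
  induction H as [| y xs' ys' Hsub IH | x' y xs' ys' Hsub IH]; intros x xs Heq.
  - (* empty_subseq: [] = x :: xs is impossible *)
    discriminate Heq.
  - (* add_right: recurse with the same equation, then re-add y *)
    apply add_right. exact (IH x xs Heq).
  - (* add_both: the equation identifies the heads and the tails *)
    injection Heq as Hx Hxs; subst.
    apply add_right. exact Hsub.
Qed.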
Li-yao's answer was actually useful. This is a proof of the lemma.
Lemma del_l_preserves_subseq : forall x xs ys, subseq (x :: xs) ys -> subseq xs ys.
Proof.
intros x xs ys.
induction ys as [|y ys'].
- intros. inversion H. (* Inversion will detect that no constructor matches the type of H *)
- intros. inversion H. (* Inversion will automatically discharge the first case *)
+ (* When [subseq (x :: xs) ys'] holds *)
apply IHys' in H2. now apply add_right.
+ (* When [subseq xs ys'] holds *)
now apply add_right.
Qed.

Call a theorem using let-in

I have a function f returning a pair. Then I prove some results about it.
In my lemmas, my first attempt to get each component was using let (x, y) := f z in. But then, using these lemmas seems cumbersome: apply does not work directly, and I have to add the lemma to the hypotheses with pose proof (or a variant of it) and destruct f z to be able to use it. Is there a way to use let-in smoothly in lemmas? Or is it discouraged because it is painful to use?
To complete my question, here are the other attempts I made to write lemmas about f. I tried using fst (f z) and snd (f z) directly, but I also found it cumbersome. Finally, I started my lemmas with forall x y, (x,y) = f z ->.
Here is a concrete example.
Require Import List. Import ListNotations.
Fixpoint split {A} (l:list A) :=
  match l with
  | [] => ([], [])
  | [a] => ([a], [])
  | a::b::l => let (l1, l2) := split l in (a::l1, b::l2)
  end.
Lemma split_in : forall {A} (l:list A) x,
let (l1, l2) := split l in
In x l1 \/ In x l2 <-> In x l.
Lemma split_in2 : forall {A} (l:list A) x,
In x (fst (split l)) \/ In x (snd (split l)) <-> In x l.
Lemma split_in3 : forall {A} (l:list A) x l1 l2,
(l1, l2) = split l ->
In x l1 \/ In x l2 <-> In x l.
You have found what I believe is the correct solution. let (l1, l2) := ... in ... will block reduction and break everything. Whether you use split_in2 or split_in3 depends on what your starting point is.
Note, however, that turning on Primitive Projections and redefining prod as a primitive record will make it so that split_in and split_in2 are actually the same theorem, because split l and (fst (split l), snd (split l)) are judgmentally equal. You can do this with
Set Primitive Projections.
Record prod {A B} := pair { fst : A ; snd : B }.
Arguments prod : clear implicits.
Arguments pair {A B}.
Add Printing Let prod.
Notation "x * y" := (prod x y) : type_scope.
Notation "( x , y , .. , z )" := (pair .. (pair x y) .. z) : core_scope.
Hint Resolve pair : core.
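As a quick sanity check of the judgmental-equality claim (not in the original answer, and assuming eta-conversion for primitive records, which is what this construction relies on), the following goal should be provable by plain reflexivity after the definitions above:
(* Eta for the primitive record: p and pair (fst p) (snd p) are convertible. *)
Goal forall A B (p : prod A B), p = pair (fst p) (snd p).
Proof. reflexivity. Qed.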