Remove useless hypothesis from context - coq

Sometimes I have a hypothesis in my proof context that I've used already, and now I know I won't need it anymore. In order to keep my context tidy while I work on the proof, I'd like to remove this hypothesis. Is there a tactic to do that?

If you use the SSReflect proof language, you can clear a hypothesis H by using the {H} notation. This can be done inline after many tactics such as move or rewrite, as in:
rewrite foo bar => {H}
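For instance, here is a minimal self-contained sketch (the goal and the hypothesis names are made up) that uses HP one last time and then discards both hypotheses inline with a clear switch:
From Coq Require Import ssreflect.
Goal forall P Q : Prop, P -> (P -> Q) -> Q.
Proof.
move=> P Q HP HPQ.
(* push (HPQ HP) onto the goal, drop both hypotheses, then reintroduce it as HQ *)
move: (HPQ HP) => {HP HPQ} HQ.
exact: HQ.
Qed.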

Use the clear tactic:
Before:
1 goal
stuff ...
H : T
============================
goal
clear H.
After:
1 goal
stuff ...
============================
goal
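For instance, a minimal sketch (goal and hypothesis names are made up):
Goal forall P Q : Prop, P -> Q -> Q.
Proof.
intros P Q H H0.
(* H is no longer needed; drop it to keep the context tidy *)
clear H.
exact H0.
Qed.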

Finding rewrite rules

I have a hard time finding the available rewrite rules for my situation. As I don't want to bother you with each rewrite question, I was wondering: do you have some tips for finding suitable rewrite rules?
Do you have any tips on how to solve, and/or search for rewrite rules for, the following example:
1 subgoal
H: P
H0: Q
__________
R
And say I have Lemma Join: P /\ Q = R
In order to do this rewrite, I suppose I first need to combine H and H0 into P /\ Q.
So how would you solve or find the rewrite rules for such a case?
Another example
H: a <= b
____________
b < a
I am confident there should exist some commutativity rewrite rule for this, but how can I best find this rule?
Many thanks in advance!
First a tip so you don't run into this problem later: don't confuse equality of types with logical equivalence. What you usually mean in your first example above is that P /\ Q <-> R, not that the type P /\ Q is definitionally the same type as R.
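For instance, if you state the lemma as an equivalence instead (Join below is a hypothetical hypothesis), you can still rewrite with it in a Prop goal once the Setoid machinery is loaded:
Require Import Setoid.
Section Equiv.
Variables P Q R : Prop.
Hypothesis Join : P /\ Q <-> R.
Goal P -> Q -> R.
Proof.
intros H H0.
(* rewrite right-to-left with the equivalence: the goal R becomes P /\ Q *)
rewrite <- Join.
split; assumption.
Qed.
End Equiv.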
With regard to your question about finding lemmas in the library: yes, it is very important to be able to find things there. Coq's Search command lets you find all (Required) lemmas that contain a certain pattern somewhere in them, or a particular string. The latter is useful because the library tends to have a somewhat predictable naming scheme; for instance, names of lemmas about decidability often contain the string "dec", commutativity lemmas are often called something with "comm", etc.
Try, for example, searching for decidability lemmas about integers, i.e. lemmas that have the term Z somewhere inside them and whose name contains the string "dec".
Require Import ZArith.
Search "dec" Z.
Back to your question: in your case you want to find a lemma that ends with "something and something", so you can use the pattern ( _ /\ _ ):
Search ( _ /\ _ ).
However, you get awfully many hits, because many lemmas end with "something and something".
In your particular case, you perhaps want to narrow the search to
Search (?a -> ?b -> ?a /\ ?b).
but be careful when you are using pattern variables, because perhaps the lemma you were looking for had the arguments in the other order.
In this particular case you found the lemma
conj: forall [A B : Prop], A -> B -> A /\ B
which is not really a lemma but the actual constructor for the inductive type. It is just a function. And remember, every theorem/lemma etc. in type theory is "just a function". Even rewriting is just function application.
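For instance, in the first example above (assuming R really stands for P /\ Q), conj closes the goal directly:
Goal forall P Q : Prop, P -> Q -> P /\ Q.
Proof.
intros P Q H H0.
exact (conj H H0).
Qed.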
Anyway, take seriously the task of learning to find lemmas, and to read the output from Search. It will help you a lot.
By the way, the pattern-matching syntax is like the term syntax but with holes or variables. You will also use it when writing Ltac tactics, so it is useful to know for many reasons.

Trivial lemma on real numbers in Coq

I want to prove the following lemma.
Require Import Reals.Reals.
Open Scope R_scope.
Lemma trivial_lemma (r1 r2:R) : r1 - (r1 - r2) = r2.
Proof.
rewrite <- Ropp_minus_distr.
rewrite Ropp_plus_distr.
rewrite Ropp_involutive.
rewrite Ropp_minus_distr'.
Abort.
I know Rplus_opp_l, but I cannot apply it to my goal because of r2.
Please tell me your solution.
First, you should know that the ring tactic solves this kind of goal automatically. In the long run, you should rely on this tactic often if you wish to be productive.
Second, it appears (through Search) that the library does not contain many lemmas about subtraction. In this case, you may have to unfold this operator to end up with a goal that uses the more primitive addition and opposite operations. Here is a sequence of rewrites that does the job.
unfold Rminus.
rewrite Ropp_plus_distr.
rewrite Ropp_involutive.
rewrite <- Rplus_assoc.
rewrite Rplus_opp_r.
rewrite Rplus_0_l.
easy.
The fact that the library does not contain basic lemmas like this is an indication that the library designers intend users to rely more on the ring tactic.
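For comparison, here is the ring-based proof suggested above (same lemma, renamed to avoid a clash):
Require Import Reals.Reals.
Open Scope R_scope.
Lemma trivial_lemma' (r1 r2 : R) : r1 - (r1 - r2) = r2.
Proof. ring. Qed.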

Coq: viewing proof term during proof script writing

So, I've got a proof that looks like this:
induction t; intros; inversion H; crush.
It solves all my goals, but when I do Qed, I get the following error:
Cannot guess decreasing argument of fix.
So somewhere in the generated proof term, there's non-well-founded recursion. The problem is, I have no idea where.
Is there a way to debug this kind of error, or to see the (possibly non-halting) proof term that the tactic script generates?
You can use Show Proof. to view the proof term so far.
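For instance (with a made-up goal):
Goal forall P Q : Prop, P -> Q -> P /\ Q.
Proof.
intros P Q HP HQ.
split.
Show Proof.
(* prints the partial proof term, with holes standing for the two remaining goals *)
Abort.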
Another command that can help with seeing where the recursion went wrong is Guarded., which runs the termination checker on the proof term so far. You'll need to break apart the tactic script into independent sentences to use it, though. Here's an example:
Fixpoint f (n:nat) : nat.
Proof.
apply plus.
exact (f n).
Guarded.
(* fails with:
Error:
Recursive definition of f is ill-formed.
...
*)
Defined.
You can use the Show Proof. command inside proof mode to print the proof term produced so far.
In addition to the other excellent answers, I also want to point out that using induction inside an interactive-mode Fixpoint is usually a mistake, because you're recursing twice. Writing fixpoints in interactive mode is often tricky because most automation tools will happily make a recursive call at every possible opportunity, even when it would be ill-founded.
I would advise using Definition instead of Fixpoint and using induction in the proof script. This invokes the explicit recursor, which allows for much better control of automation. The disadvantage is decreased flexibility, since fixpoints have fewer restrictions than recursors; but as we've seen, that is both a blessing and a curse.
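Here is a minimal sketch of that style with a hypothetical double function: the recursion goes through the nat recursor invoked by induction, so there is no guard condition to worry about.
Definition double (n : nat) : nat.
Proof.
induction n as [| n' IH].
- exact 0. (* double 0 = 0 *)
- exact (S (S IH)). (* double (S n') = S (S (double n')) *)
Defined.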

Tactics with variable arity

Say I want to have a tactic to clear multiple hypotheses at once, writing something like clear_multiple H1, H2, H3. I tried to do that using pairs, like the following:
Ltac clear_multiple arg :=
match arg with
| (?f, ?s) => clear s; clear_multiple f
| ?f => clear f
end.
But then the problem is that I have to place parentheses to build a pair:
Variable A: Prop.
Goal A -> A -> A -> True.
intros.
clear_multiple (H, H0, H1).
My question is: how can I do that without using pairs?
I checked this question, but it is not exactly what I want, since the number of arguments I want is not known.
You might like to know that the clear tactic can take multiple arguments, so you do not need to define a new tactic: you can just write clear H H0 H1.
Of course, you might want to define such n-ary tactics for other tasks. Coq has a tactic notation mechanism that supports such definitions. Unfortunately, these notations are not too powerful: you can only pass a list of arguments of a certain kind to a tactic that expects multiple arguments (like clear); I don't think you can get a list that you can iterate over programmatically.
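For instance, here is a sketch of such a notation (clear_hyps is a made-up name, and hyp_list is, if I recall correctly, the argument class that accepts a list of hypotheses); it simply forwards the whole list to clear:
Tactic Notation "clear_hyps" hyp_list(Hs) := clear Hs.
Goal True -> True -> True -> True.
Proof.
intros H H0 H1.
clear_hyps H H0 H1.
exact I.
Qed.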

When is the `:` (colon) necessary in ssreflect/Coq?

I am trying to understand the exact meaning of the : (colon) in Coq/ssreflect proofs in terms of non-ssreflect Coq.
I read that it has something to do with moving things to the goal (like generalize?) and is the opposite of =>, which moves things to the hypotheses. However, I often find it confusing because proofs work either with or without the :. Below is an example from a tutorial:
Lemma tmirror_leaf2 t : tmirror (tmirror t) = Leaf -> t = Leaf.
Proof.
move=> e.
by apply: (tmirror_leaf (tmirror_leaf e)).
Qed.
where,
tmirror_leaf
: forall t, tmirror t = Leaf -> t = Leaf
is a lemma that says if the mirror of a tree is a leaf, then the tree is a leaf.
I don't understand why we need the : here and not merely do the Coq apply. In fact, if I remove the :, it works just fine. Why does it make a difference?
Indeed, apply: H1 ... Hn is for all practical purposes equivalent to move: H1 ... Hn; apply. A more interesting use of apply is apply/H and its variations, which can interpret views.
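For instance, here is a small sketch of that equivalence on a throwaway lemma (names are made up); both proofs below do the same thing:
From Coq Require Import ssreflect.
Lemma demo1 (P Q : Prop) : (P -> Q) -> P -> Q.
Proof. move=> H HP. apply: H. exact: HP. Qed.
(* the same proof, spelled with move: followed by the argument-free apply *)
Lemma demo2 (P Q : Prop) : (P -> Q) -> P -> Q.
Proof. move=> H HP. move: H; apply. exact: HP. Qed.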
I think I found the answer while reading the SSReflect documentation. Essentially, SSReflect redefines tactics like apply so that they operate on the first variable of the goal instead of on something in the context. That's why the : is used in the SSReflect style apply: XX. (which is equivalent to move: XX; apply.), and it also works if the : is omitted, since that is the traditional Coq way.
Quoting the documentation:
Furthermore, SSReflect redefines the basic Coq tactics case, elim, and apply so that they can take better advantage of ':' and '=>'. These Coq tactics require an argument from the context but operate on the goal. Their SSReflect counterparts use the first variable or constant of the goal instead, so they are "purely deductive": they do not use or change the proof context. There is no loss since ':' can readily be used to supply the required variable.