I have a proof which concludes in two cases which look like this:
+ rewrite H. apply lemma1.
+ apply lemma1.
While this is relatively simple, I would like to combine the two cases into a single tactic. What I want to do, in English, is: "try to rewrite and, if that fails, do nothing; then apply lemma1."
So a related question is "What is the tactic that does nothing?"
Here is one of my attempts:
try (rewrite H || nil); apply lemma1.
I don't know how to figure out what the "empty tactic" in Ltac is, nor how to find out its name.
Here is another, where I "distributed out" lemma1.
do 2 try (rewrite H; apply lemma1 || apply lemma1).
which also didn't prove the second case.
I think you do not need a "do nothing" tactic in this case, because when the tactic given to try fails, try itself does nothing. If I understand you correctly, you want just try rewrite H; apply lemma1. Since ; binds more loosely than try, this parses as (try rewrite H); apply lemma1: it will try to rewrite with H and then apply lemma1; when the rewrite fails, it will simply apply lemma1 directly. So it applies lemma1 in either case.
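Here is a minimal, self-contained sketch; lemma1 and H here are stand-ins for whatever your proof provides:

Lemma lemma1 : 2 + 2 = 4.
Proof. reflexivity. Qed.

Goal forall x, x = 2 + 2 -> x = 4 /\ 2 + 2 = 4.
Proof.
intros x H; split.
(* In the first case, rewrite H succeeds and turns the goal into 2 + 2 = 4;
in the second it fails, try swallows the failure, and apply lemma1 closes
the goal directly. *)
+ try rewrite H; apply lemma1.
+ try rewrite H; apply lemma1.
Qed.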
Related
Sometimes I have a hypothesis in my proof context that I've used already, and now I know I won't need it anymore. In order to keep my context tidy while I work on the proof, I'd like to remove this hypothesis. Is there a tactic to do that?
If you use the SSReflect proof language, you can clear a hypothesis H by using the {H} notation. This can be done inline after many tactics such as move or rewrite, as in:
rewrite foo bar => {H}
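For instance, here is a minimal sketch (the goal and names are purely illustrative):

From Coq Require Import ssreflect.

Goal forall n : nat, n = 0 -> 0 = n.
Proof.
move=> n H.
rewrite H => {H}. (* rewrite with H, then discard it from the context *)
done.
Qed.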
Use the clear tactic:
Before:
1 goal
stuff ...
H : T
============================
goal
clear H.
After:
1 goal
stuff ...
============================
goal
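As a minimal illustration (the statement is arbitrary):

Goal forall n : nat, n = n -> n = n.
Proof.
intros n H.
clear H. (* H : n = n disappears from the context *)
reflexivity.
Qed.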
I want to prove the following lemma.
Require Import Reals.Reals.
Open Scope R_scope.
Lemma trivial_lemma (r1 r2:R) : r1 - (r1 - r2) = r2.
Proof.
rewrite <- Ropp_minus_distr.
rewrite Ropp_plus_distr.
rewrite Ropp_involutive.
rewrite Ropp_minus_distr'.
Abort.
I know Rplus_opp_l, but I cannot apply it to my goal because of r2.
Please tell me your solution.
First you should know that the automatic tactic ring solves this kind of goal automatically. In the long run, you should rely on this tactic often if you wish to be productive.
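For instance, ring closes the original goal in one step:

Require Import Reals.Reals.
Open Scope R_scope.

Lemma trivial_lemma (r1 r2 : R) : r1 - (r1 - r2) = r2.
Proof. ring. Qed.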
Second, it appears (through Search) that the library does not contain many lemmas about subtraction. In this case, you may have to unfold this operator to end up with a goal that uses the more primitive addition and opposite operations. Here is a sequence of rewrites that does the job.
unfold Rminus.            (* r1 + - (r1 + - r2) = r2 *)
rewrite Ropp_plus_distr.  (* r1 + (- r1 + - - r2) = r2 *)
rewrite Ropp_involutive.  (* r1 + (- r1 + r2) = r2 *)
rewrite <- Rplus_assoc.   (* r1 + - r1 + r2 = r2 *)
rewrite Rplus_opp_r.      (* 0 + r2 = r2 *)
rewrite Rplus_0_l.        (* r2 = r2 *)
easy.
The fact that the library does not contain basic lemmas like this is an indication that the library designers intend users to rely more on the ring tactic.
I know that the repeat tactical applies a tactic multiple times until it fails.
The repeat tactical takes another tactic and keeps applying this tactic until it fails.
and the try tactical does nothing when its argument "fails":
If T is a tactic, then try T is a tactic that is just like T except that, if T fails, try T successfully does nothing at all (instead of failing).
does that mean if I were to do something like:
repeat (try reflexivity).
if reflexivity fails, then try does nothing (but does not fail), so repeat just keeps applying try reflexivity forever? Is this correct? Or what is going on?
The reason I ask is because I saw this theorem:
Theorem In10 : In 10 [1;2;3;4;5;6;7;8;9;10].
Proof.
repeat (try (left; reflexivity); right).
Qed.
when I asked a related question: Are Coq tacticals right associative or left associative?
source: https://softwarefoundations.cis.upenn.edu/lf-current/Imp.html
The actual semantics of repeat are that it stops if the tactic fails to make progress.
https://coq.inria.fr/distrib/current/refman/proof-engine/ltac.html?highlight=repeat#coq:tacn.repeat
So a simple use of repeat and try will not create an infinite loop if no change happens to your goal, even though the tactic does succeed.
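Here is a quick check, on an arbitrarily chosen goal:

Goal True.
Proof.
repeat (try reflexivity). (* reflexivity fails, try succeeds without making progress, so repeat stops at once *)
exact I.
Qed.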
However, it is indeed possible to make repeat go into an infinite loop, as long as it makes progress at each iteration. For instance, the following script tries to build a list by always applying the cons constructor, rather than finishing at some point with nil:
Theorem there_exists_a_list_of_nat : list nat.
Proof.
repeat right.
It will indeed loop forever (make sure you know how to interrupt a running computation before you try it).
This pattern does not lead to an infinite loop, because repeat t stops when t fails to make progress, not only when it fails. The documentation (https://coq.inria.fr/refman/proof-engine/ltac.html#coq:tacn.repeat) adds this in a follow-up sentence, though it could certainly be clearer.
The additional explanation in Software Foundations is wrong: it claims that repeat goes into an infinite loop when given a tactic that always succeeds, and gives repeat simpl as an example. But repeat simpl terminates: after at most one round, simpl no longer changes the goal, so repeat stops.
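A quick check of that claim, again on an arbitrary goal:

Goal 1 + 1 = 2.
Proof.
repeat simpl. (* simplifies the goal to 2 = 2, then stops because a second simpl makes no progress *)
reflexivity.
Qed.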
So, I've got a proof that looks like this:
induction t; intros; inversion H; crush.
It solves all my goals, but when I do Qed, I get the following error:
Cannot guess decreasing argument of fix.
So somewhere in the generated proof term, there's non-well-founded recursion. The problem is, I have no idea where.
Is there a way to debug this kind of error, or to see the (possibly non halting) proof term that the tactics script generates?
You can use Show Proof. to view the proof term so far.
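For instance, on an arbitrarily chosen statement:

Goal True /\ True.
Proof.
split.
Show Proof. (* prints the partial term with holes for the two remaining subgoals, e.g. (conj ?Goal ?Goal0) *)
all: exact I.
Qed.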
Another command that can help with seeing where the recursion went wrong is Guarded., which runs the termination checker on the proof term so far. You'll need to break apart the tactic script into independent sentences to use it, though. Here's an example:
Fixpoint f (n:nat) : nat.
Proof.
apply plus.
(* fill the first argument of plus with an unguarded recursive call *)
exact (f n).
Guarded.
(* fails with:
Error:
Recursive definition of f is ill-formed.
...
*)
Abort. (* Defined. would be rejected with the same error *)
You can use the Show Proof. command inside proof mode to print the proof term produced so far.
In addition to the other excellent answers, I also want to point out that using induction inside an interactive-mode Fixpoint is usually a mistake, because you're recursing twice. Writing fixpoints in interactive mode is often tricky because most automation tools will happily make a recursive call at every possible opportunity, even when it would be ill-founded.
I would advise using Definition instead of Fixpoint, and using induction in the proof script. This invokes the explicit recursor, which allows for much better control of automation. The disadvantage is decreased flexibility, since fixpoints have fewer restrictions than recursors; but as we've seen, that is both a blessing and a curse.
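Here is a minimal sketch of that pattern (double is just an illustrative function):

Definition double (n : nat) : nat.
Proof.
induction n as [| n' IH].
- exact 0. (* base case *)
- exact (S (S IH)). (* build on the recursive result, via the recursor *)
Defined.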
I am having trouble understanding the concept of multiple successes in Coq's (8.5p1, ch9.2) branching and backtracking behavior. For example, from the documentation:
Backtracking branching
We can branch with the following structure:
expr1 + expr2
Tactics can be seen as having several successes. When a tactic fails, it asks for more successes of the prior tactics. expr1 + expr2 has all the successes of v1 [the value of expr1] followed by all the successes of v2 [the value of expr2].
What I don't understand, is why do we need multiple successes in the first place? Isn't one success good enough to finish a proof?
Also from the documentation, it seems that there are less costly branching rules that are somehow "biased", including
first [ expr1 | ... | exprn ]
and
expr1 || expr2
Why do we need the more costly option + instead of always using these more efficient tacticals?
The problem is that when you try to discharge a goal, a choice that looks right at first may be rejected later, when further subgoals turn out to be unprovable. If you accumulate all the successes, then you can backtrack to wherever you made a wrong choice and explore another branch of the search tree.
Here is a silly example. Let's say I want to prove this goal:
Goal exists m, m = 1.
Now, it's a fairly simple goal so I could do it manually but let's not. Let's write a tactic that, when confronted with an exists, tries all the possible natural numbers. If I write:
Ltac existNatFrom n :=
exists n || existNatFrom (S n).
Ltac existNat := existNatFrom O.
then as soon as I have run existNat, the system commits to the first successful choice. In particular, this means that despite the recursive definition of existNatFrom, when calling existNat I'll always get O and only O.
The goal cannot be solved:
Goal exists m, m = 1.
Fail (existNat; reflexivity).
Abort.
On the other hand, if I use (+) instead of (||), I'll go through all possible natural numbers (in a lazy manner, by using backtracking). So writing:
Ltac existNatFrom' n :=
exists n + existNatFrom' (S n).
Ltac existNat' := existNatFrom' O.
means that I can now prove the goal:
Goal exists m, m = 1.
existNat'; reflexivity.
Qed.
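The biased tacticals commit in the same way: first also picks the first branch that succeeds and never revisits that choice. On the same goal:

Goal exists m, m = 1.
Fail (first [ exists O | exists (S O) ]; reflexivity).
(exists O + exists (S O)); reflexivity.
Qed.

The biased forms are cheaper precisely because they discard the alternative successes that + keeps around.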