Using `apply with` without giving names of parameters in Coq?

When using the Coq apply ... with tactic, the examples I have seen all involve explicitly giving the names of the variables to instantiate. For example, given a theorem about the transitivity of equality:
Theorem trans_eq : forall (X:Type) (n m o : X),
n = m -> m = o -> n = o.
To apply it:
Example test: forall n m: nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m.
apply trans_eq with (m := 1). Qed.
Note that in the last line apply trans_eq with (m := 1)., I have to remember that the name of the variable to instantiate is m, rather than o or n or some other name such as y.
To me, whether m n o or x y z are used in the original statement of the theorem shouldn't matter, because they are like dummy variables or formal parameters of a function. And sometimes I can't remember the specific names I used or somebody else put down in a different file when defining the theorem.
Is there a way by which I can refer to the variables e.g. by their position and use something like:
apply trans_eq with (#1 := 1)
in the above example?
By the way, I tried: apply trans_eq with (1 := 1). and got Error: No such binder.
Thanks.

You can specialize the lemma with the right arguments. The _ is used for all arguments that we don't want to specialize (because they can be inferred). The @ is required to be able to specialize implicit arguments.
Example test: forall n m: nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m.
apply (@trans_eq _ _ 1).
Qed.
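For comparison, here is a fully explicit variant (just a sketch; the hypothesis names H1 and H2 are mine) in which every argument of trans_eq is supplied, so nothing is left to inference:
Example test_explicit : forall n m : nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m H1 H2.
(* X := nat and the three endpoints n, 1, m are given explicitly. *)
exact (@trans_eq nat n 1 m H1 H2).
Qed.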

You can omit the binder names after with, so in your case do apply trans_eq with 1.
Example test: forall n m: nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m.
apply trans_eq with 1; auto.
Qed.
I've changed your original example a little to conclude the proof.
Why this works
To understand why this works, check the manual under Bindings:
Tactics that take a term as an argument may also accept bindings to
instantiate some parameters of the term by name or position. The
general form of a term with bindings is termtac with bindings where
bindings can take two different forms:
bindings ::= ((ident | natural) := term)+
           | one_term+
What is shown in this example is the form one_term, which is described as follows:
in the case of apply, or of constructor and its variants, only instances for the dependent products that are not bound in the conclusion of termtac are required.
Which is why only one term needs to be supplied.
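The other form, (natural := term), refers to the non-dependent premises of the lemma by position instead of by binder name; here is a small sketch (the hypothesis names H1 and H2 are mine), still using trans_eq:
Example test_positional : forall n m : nat,
n = 1 -> 1 = m -> n = m.
Proof.
intros n m H1 H2.
(* 1 and 2 number the non-dependent premises n = m and m = o of trans_eq;
   supplying both also determines m by unification. *)
apply trans_eq with (1 := H1) (2 := H2).
Qed.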

Cardinality of finFieldType / Euler's criterion

I am trying to prove a very limited form of Euler's criterion:
Variable F : finFieldType.
Hypothesis HF : (1 != -1 :> F).
Lemma euler (a : F) : a^+(#|F|.-1./2) = -1 -> forall x, x^+2 != a.
I have the bulk of the proof done already, but I am left with odd (#|F|.-1) = 0, that is, #|F|.-1 is even. (I'm not interested in characteristic 2). I can't seem to find useful facts in the math comp library about the cardinality of finFieldTypes. For example, I would expect a lemma saying there exists a p such that prime p and #|F| = p. Am I missing something here?
By the way, I could also have totally missed an already existing proof of Euler's criterion in the math comp library itself.
I am not aware of a proof of Euler's criterion, but I found two lemmas, in finfield, that give you the two results that you expected in order to finish your proof (the second one may not be presented as you expected):
First, you have the following lemma which gives you the prime p corresponding to the characteristic of your field F (as long as it is a finFieldType):
Lemma finCharP : {p | prime p & p \in [char F]}.
Then, another lemma gives you the cardinality argument:
Let n := logn p #|R|.
Lemma card_primeChar : #|R| = (p ^ n)%N.
The problem with the second lemma is that your field should be recognized as a PrimeCharType, which roughly corresponds to a ringType with an explicit characteristic. But given the first lemma, you are able to give such a structure to your field (which canonically has a ringType) on the fly. A possible proof could be the following:
Lemma odd_card : ~~ odd (#|F|.-1).
Proof.
suff : odd (#|F|) by have /ltnW/prednK {1}<- /= := finRing_gt1 F.
have [p prime_p char_F] := (finCharP F); set F_pC := PrimeCharType char_F.
have H : #|F| = #|F_pC| by []; rewrite H card_primeChar -H odd_exp => {H F_pC}.
apply/orP; right; have := HF; apply: contraR=> /(prime_oddPn prime_p) p_eq2.
by move: char_F; rewrite p_eq2=> /oppr_char2 ->.
Qed.

What is difference between `destruct` and `case_eq` tactics in Coq?

I understand destruct as breaking an inductive definition into its constructors. I recently saw case_eq and I couldn't understand what it does differently.
1 subgoals
n : nat
k : nat
m : M.t nat
H : match M.find (elt:=nat) n m with
| Some _ => true
| None => false
end = true
______________________________________(1/1)
cc n (M.add k k m) = true
In the above context, if I do destruct (M.find n m) it breaks H into true and false, whereas case_eq (M.find n m) leaves H intact and adds a separate proposition M.find (elt:=nat) n m = Some v, which I can rewrite to get the same effect as destruct.
Can someone please explain the difference between the two tactics and when each one should be used?
The first basic tactic in the family of destruct and case_eq is called case. This tactic modifies only the conclusion. When you type case A and A has a type T which is inductive, the system replaces A in the goal's conclusion by instances of all the constructors of type T, adding universal quantifications for the arguments of these constructors, if needed. This creates as many goals as there are constructors in type T. The formula A disappears from the goal and if there is any information about A in a hypothesis, the link between this information and all the new constructors that replace it in the conclusion gets lost. In spite of this, case is an important primitive tactic.
Losing the link between information in the hypotheses and instances of A in the conclusion is a big problem in practice, so developers came up with two solutions: case_eq and destruct.
Personally, when writing the Coq'Art book, I proposed that we write a simple tactic on top of case that keeps a link between A and the various constructor instances in the form of an equality. This is the tactic now called case_eq. It does the same thing as case but adds an extra implication in the goal, where the premise of the implication is an equality of the form A = ... and where ... is an instance of each constructor.
At about the same time, the tactic destruct was proposed. Instead of limiting the effect of the replacement to the goal's conclusion, destruct also replaces all instances of A appearing in the hypotheses with instances of constructors of type T. In a sense, this is cleaner because it avoids relying on the extra concept of equality, but it is still incomplete because the expression A may be a compound expression f B, and if B appears in a hypothesis but not f B, the link between A and B will still be lost.
Illustration
Definition my_pred (n : nat) := match n with 0 => 0 | S p => p end.
Lemma example n : n <= 1 -> my_pred n <= 0.
Proof.
case_eq (my_pred n).
Gives the two goals
------------------
n <= 1 -> my_pred n = 0 -> 0 <= 0
and
------------------
forall p, my_pred n = S p -> n <= 1 -> S p <= 0
The extra equality is very useful here.
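For contrast, here is a sketch (my addition, reusing the my_pred example above) of what plain case does on the same goal; destruct (my_pred n) leads to essentially the same two goals, because my_pred n does not occur in n <= 1:
Lemma example' n : n <= 1 -> my_pred n <= 0.
Proof.
case (my_pred n).
(* Two goals:
     n <= 1 -> 0 <= 0
     forall p, n <= 1 -> S p <= 0
   The fact that S p came from my_pred n is gone, so nothing connects p
   back to n <= 1 and the second goal is unprovable. *)
Abort.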
In this question I suggested that the developer use case_eq (a == b) when (a == b) has type bool, because this type is inductive and not very informative (the constructors have no arguments). But when (a == b) has type {a = b}+{a <> b} (which is the case for the string_dec function), the constructors have arguments that are proofs of interesting properties, and the extra universal quantifications for the arguments of the constructors are enough to give the relevant information: a = b in the first goal and a <> b in the second goal.
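As a quick sketch of that last point (my example, not from the original answer), case on a sumbool such as string_dec already hands you the interesting facts, with no extra equality to remember:
Require Import String.
Lemma sumbool_demo (a b : string) : a = b \/ a <> b.
Proof.
case (string_dec a b).
(* Two goals: a = b -> a = b \/ a <> b and a <> b -> a = b \/ a <> b. *)
- intros Hab. now left.
- intros Hab. now right.
Qed.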

Adding complete disjunctive assumption in Coq

In mathematics, we often proceed as follows: "Now let us consider two cases, the number k can be even or odd. For the even case, we can say exists k', 2k' = k..."
This captures the general idea of reasoning about an entire set of objects by disassembling it into several disjoint subsets that can be used to reconstruct the original set.
How is this reasoning principle captured in Coq, considering we do not always have an assumption that is one of the subsets we want to decompose into?
Consider the following example for demonstration:
forall n, Nat.Even n -> P n.
Here we can naturally do inversion on Nat.Even n to get n = 2*x (and an automatically-false eliminated assumption that n = 2*x + 1). However, suppose we have the following:
forall n, P n
How can I state: "let us consider even ns and odd ns"? Do I need to first show that we have the decidability result forall n : nat, even n \/ odd n? That is, introduce a new (local or global) lemma listing all the required subsets? What are the best practices?
Indeed, to reason about a splitting of a class of objects in Coq you need to show an algorithm splitting them, unless you want to reason classically (there is nothing wrong with that).
IMO, a key point is getting such decidability hypotheses "for free". For instance, you could implement odd : nat -> bool as a boolean function, as is done in some libraries; then you get the splitting for free.
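For instance, here is a minimal sketch (my addition, using the standard library's boolean Nat.even) of what "for free" means: the boolean test alone gives a complete case split, with the corresponding equation in context in each branch:
Require Import PeanoNat.
Lemma even_or_not (P : nat -> Prop) :
(forall n, Nat.even n = true -> P n) ->
(forall n, Nat.even n = false -> P n) ->
forall n, P n.
Proof.
intros Heven Hodd n.
(* Destructing the boolean splits the goal into two cases, each with
   the equation E about Nat.even n in the context. *)
destruct (Nat.even n) eqn:E; [apply Heven | apply Hodd]; exact E.
Qed.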
[edit]
You can use some slightly more convenient techniques for pattern matching, by encoding the pertinent cases as inductives:
Require Import PeanoNat Nat Bool.
CoInductive parity_spec (n : nat) : Type :=
| parity_spec_odd : odd n = true -> parity_spec n
| parity_spec_even: even n = true -> parity_spec n
.
Lemma parityP n : parity_spec n.
Proof.
destruct (even n) eqn:H; [now right|left].
now rewrite <- Nat.negb_even, H.
Qed.
Lemma test n : even n = true \/ odd n = true.
Proof. now case (parityP n); auto. Qed.

Apply partially instantiated lemma

Let us assume that we want to prove the following (totally contrived) lemma.
Lemma lem : (forall n0 : nat, 0 <= n0 -> 0 <= S n0) -> forall n, le 0 n.
We want to apply nat_ind to prove it. Here is a possible proof:
Proof.
intros H n. apply nat_ind. constructor. exact H.
Qed.
But why not use H directly in the apply tactic, with something like apply (nat_ind _ _ H) or eapply (nat_ind _ _ H)? The first one fails, and the second one hides the remaining goal in an existential variable.
Is it possible, with apply or its derivatives, to skip hypotheses in order to specify the other arguments, while keeping the skipped ones as regular goals in the remainder of the proof?
If you do
intros. refine (nat_ind _ _ H _).
then you only have
0 <= 0
left. Is that useful in your case?
Another approach (more universal than in my other answer) would be using the apply ... with ... construct, like this:
intros H n.
apply nat_ind with (2 := H).
Here, 2 is referring to the inductive step parameter of nat_ind (see the Coq v8.5 reference manual, 8.1.3):
In a bindings list of the form (ref_1 := term_1) ... (ref_n := term_n), ref is either an ident or a num. ... If ref_i is some number n, this number denotes the n-th non dependent premise of the term, as determined by the type of term.
This partial proof
intros H n.
apply nat_ind, H.
will give you 0 <= 0 as the only subgoal left.
This approach uses the apply tactic, but does not answer the question in its generality, since it will work only if you want to instantiate the last parameter (which is the case for the example in the question).
Here is quote from the Coq reference manual:
apply term_1 , ... , term_n
This is a shortcut for apply term_1 ; [ .. | ... ; [ .. | apply term_n ]... ], i.e. for the successive applications of term_(i+1) on the last subgoal generated by apply term_i, starting from the application of term_1.
Also, since it's just syntactic sugar, the solution may be considered cheating (and, I guess, abuse of the original intent of the Coq tactics developers) in the context of the question.
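Spelled out according to the quoted expansion, the comma form is just a goal dispatch; here is a sketch (same lemma as above, restated so it is self-contained):
Lemma lem' : (forall n0 : nat, 0 <= n0 -> 0 <= S n0) -> forall n, le 0 n.
Proof.
intros H n.
(* The expansion of `apply nat_ind, H`: apply H on the last subgoal
   generated by apply nat_ind. *)
apply nat_ind; [ .. | apply H ].
(* Only the base case 0 <= 0 is left. *)
constructor.
Qed.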

Stuck in the construction of a very simple function

I am learning Coq. I am stuck on a quite silly problem (which has no motivation, it is really silly). I want to build a function from ]2,+oo] to the set of integers mapping x to x-3. That should be simple... In any language I know, it is simple. But not in Coq. First, I write (I explain with a lot of details so that someone can explain what I don't understand in the behaviour of Coq)
Definition f : forall n : nat, n > 2 -> nat.
I get a subgoal
============================
forall n : nat, n > 2 -> nat
which means that Coq wants a map from a proof of n > 2 to the set of integers. Fine. So I want to tell it that n = 3 + p for some integer p, and then return the integer p. I write:
intros n H.
And I get the context/subgoal
n : nat
H : n > 2
============================
nat
Then I suppose that I have proved n = 3 + p for some integer p by
cut(exists p, 3 + p = n).
I get the context/subgoal
n : nat
H : n > 2
============================
(exists p : nat, 3 + p = n) -> nat
subgoal 2 (ID 6) is:
exists p : nat, 3 + p = n
I move the hypothesis in the context by
intro K.
I obtain:
n : nat
H : n > 2
K : exists p : nat, 3 + p = n
============================
nat
subgoal 2 (ID 6) is:
exists p : nat, 3 + p = n
I will prove the existence of p later. Now I want to finish the proof by exact p. So I first need to do a
destruct K as (p,K).
and I obtain the error message
Error: Case analysis on sort Set is not allowed for inductive
definition ex.
And I am stuck.
You are absolutely right! Writing this function should be easy in any reasonable programming language, and, fortunately, Coq is no exception.
In your case, it is much easier to define your function by simply ignoring the proof argument you are supplying:
Definition f (n : nat) : nat := n - 3.
You may then wonder "but wait a second, the natural numbers aren't closed under subtraction, so how can this make sense?". Well, in Coq, subtraction on the natural numbers isn't really subtraction: it is actually truncated. If you try to subtract, say, 3 from 2, you get 0 as an answer:
Goal 2 - 3 = 0. reflexivity. Qed.
What this means in practice is that you are always allowed to "subtract" two natural numbers and get a natural number back, but in order for this subtraction to make sense, the first argument needs to be greater than or equal to the second. We then get lemmas such as the following (available in the standard library):
le_plus_minus_r : forall n m, n <= m -> n + (m - n) = m
In most cases, working with a function that is partially correct, such as this definition of subtraction, is good enough. If you want, however, you can restrict the domain of f to make its properties more pleasant. I've taken the liberty of doing the following script with the ssreflect library, which makes writing this kind of function easier:
Require Import Ssreflect.ssreflect Ssreflect.ssrfun Ssreflect.ssrbool.
Require Import Ssreflect.ssrnat Ssreflect.eqtype.
Definition f (n : {n | 2 < n}) : nat :=
val n - 3.
Definition finv (m : nat) : {n | 2 < n} :=
Sub (3 + m) erefl.
Lemma fK : cancel f finv.
Proof.
move=> [n Pn] /=; apply/val_inj=> /=.
by rewrite /f /= addnC subnK.
Qed.
Lemma finvK : cancel finv f.
Proof.
by move=> n; rewrite /finv /f /= addnC addnK.
Qed.
Now, f takes as an argument a natural number n that is greater than 2 (the {x : T | P x} form is syntactic sugar for the sig type from the standard library, which is used for forming types that work like subsets). By restricting the argument type, we can write an inverse function finv that takes an arbitrary nat and returns another number that is greater than 2. Then, we can prove the lemmas fK and finvK, which assert that f and finv are inverses of each other.
In the definition of f, we use val, which is ssreflect's idiom for extracting the element out of a member of a type such as {n | 2 < n}. The Sub function in finv does the opposite, packaging a natural number n with a proof that 2 < n and returning an element of {n | 2 < n}. Here, we rely crucially on the fact that < is expressed in ssreflect as a boolean computation, so that Coq can use its computation rules to check that erefl, a proof of true = true, is also a valid proof of 2 < 3 + m.
To conclude, the mysterious error message you got at the end has to do with Coq's rules governing computational types, which live in Type, and propositional types, which live in Prop. Coq's rules forbid you from using proofs of propositions to build elements that have computational content (such as natural numbers), except in very particular cases. If you wanted, you could still finish your definition by using {p | 3 + p = n} instead of exists p, 3 + p = n -- both mean the same thing, except that the former lives in Type while the latter lives in Prop.
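To make that last suggestion concrete, here is a sketch (my addition) of the original interactive definition with the cut stated in Type; it reuses the le_plus_minus_r lemma quoted above (the name may be deprecated in recent standard libraries, with Nat.sub_add as the modern counterpart up to argument order):
Require Import Arith.
Definition f_sig : forall n : nat, n > 2 -> nat.
Proof.
intros n H.
(* {p | 3 + p = n} lives in Type, so destructing it to build a nat is allowed. *)
cut {p | 3 + p = n}.
- intros K. destruct K as [p _]. exact p.
- exists (n - 3).
  (* le_plus_minus_r : forall n m, n <= m -> n + (m - n) = m,
     and n > 2 is definitionally 3 <= n. *)
  apply le_plus_minus_r. exact H.
Defined. (* Defined, not Qed, so that f_sig can compute. *)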