Suppose I have a premise like this:
H2: ~ a b c <> a b c
And I wish to change it to:
a b c = a b c
Where
a is Term -> Term -> Term
b and c are both Term
How can I do it? Thanks!
If you unfold the definitions of ~ and <>, your hypothesis has the following type:
H2: (a b c = a b c -> False) -> False
Therefore, what you wish to achieve is what logicians usually call "double negation elimination". It is not an intuitionistically-provable theorem, and is therefore defined in the Classical module of Coq (see http://coq.inria.fr/distrib/V8.4/stdlib/Coq.Logic.Classical_Prop.html for details):
Classical.NNPP : forall (p : Prop), ~ ~ p -> p
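A minimal sketch of how it would be used on a hypothesis of that shape (here Term, a, b and c are placeholder assumptions standing in for your actual development):
Require Import Classical.
Section NNPPSketch.
Variable Term : Type.
Variable a : Term -> Term -> Term.
Variables b c : Term.
Goal ~ a b c <> a b c -> a b c = a b c.
Proof.
intro H2.
apply NNPP in H2. (* H2 : a b c = a b c *)
exact H2.
Qed.
End NNPPSketch.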
I assume your actual problem is more involved than a b c = a b c, but for the sake of mentioning it, if you really care about obtaining that particular hypothesis, you can safely prove it without even looking at H2:
assert (abc_refl : a b c = a b c) by reflexivity.
If your actual example is not immediately reflexive and the equality is actually false, maybe you want to turn your goal into showing that H2 is absurd. You can do so by eliminating H2 (elim H2., which basically does a cut on the False type) and then introducing the resulting equality; you will end up in the context:
H2 : ~ a b c <> a b c
EQ : a b c = a b c
=====================
False
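A self-contained sketch of this route, again with placeholder assumptions for Term, a, b and c; the script is aborted exactly at the point where the context above is reached:
Section ElimSketch.
Variable Term : Type.
Variable a : Term -> Term -> Term.
Variables b c : Term.
Goal ~ a b c <> a b c -> a b c = a b c.
Proof.
intro H2.
elim H2. (* goal becomes: a b c <> a b c *)
intro EQ. (* EQ : a b c = a b c, goal: False *)
Abort.
End ElimSketch.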
I'm not sure whether all of this helps; you may have oversimplified your problem, in which case I can't provide more insight into what your real problem is.
Just a little thought to add to Ptival's answer: if your desired goal were not trivially solved by reflexivity, you could still make progress provided you had decidable equality on your Term, for example by applying this little lemma:
Section S.
Parameter T : Type.
Parameter T_eq_dec : forall (x y : T), {x = y} + {x <> y}.
Lemma not_ne : forall (x y : T), ~ (x <> y) -> x = y.
Proof.
intros.
destruct (T_eq_dec x y); auto.
unfold not in *.
assert False.
apply (H n).
contradiction.
Qed.
End S.
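Hypothetical usage, stated here over T (in your setting you would prove the same lemma for Term, given decidable equality on it); the names f, b and c are only illustrative:
Goal forall (f : T -> T -> T) (b c : T), ~ f b c <> f b c -> f b c = f b c.
Proof.
intros f b c H2.
apply not_ne in H2. (* H2 : f b c = f b c *)
exact H2.
Qed.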
Related
I have a function Z -> Z -> whatever which I treat as a sort of map from (Z, Z) to whatever; let's call its type FF.
Here whatever is a simple sum type constructible from nix or inj_whatever.
I initialize this map with some data, in the fashion of:
Definition i (x y : Z) (f : FF) : FF :=
fun x' y' =>
if andb (x =? x') (y =? y')
then inj_whatever
else f x' y'.
The =? represents boolean decidable equality on Z, from Coq's ZArith.
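For reference, here is a minimal preamble under which the snippet above type-checks; the exact shape of whatever and FF is an assumption on my part, since the question does not spell them out:
Require Import ZArith.
Open Scope Z_scope.
(* Assumed shape of the sum type described above. *)
Inductive whatever : Type :=
| nix
| inj_whatever.
(* The map type: a curried function from pairs of integers to whatever. *)
Definition FF := Z -> Z -> whatever.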
Now I would like to have equality on two such FFs; I don't mind invoking functional_extensionality. What I would like is to have Coq computationally decide the equality of two FFs.
For example, suppose we do something along the lines of:
Definition empty : FF := fun x y => nix.
Now we add some arbitrary values to make foo and foo'; these are equivalent under functional extensionality:
Definition foo := i 0 0 (i 0 (-42) (i 56 1 empty)).
Definition foo' := i 0 (-42) (i 56 1 (i 0 0 empty)).
What is a good way to have Coq automatically determine that foo = foo'? Ltac-level stuff? An actual terminating computation? Do I need to restrict the domain to a finite one?
The domain restriction is a bit of an intricate one. I manipulate the maps with functions f : FF -> FF, where f can extend the subset of Z x Z on which the computation is defined. Come to think of it, it can't really be f : FF -> FF, but rather f : FF -> FF_1, where FF_1 is defined on a subset of Z x Z extended by a small constant. So applying f n times yields FF_n, whose domain is that of FF extended by n * constant. In other words, f slowly (by a constant factor) expands the domain on which FF is defined.
As I said in the comment, more specifics are needed in order to elaborate a satisfactory answer. See the example below --- intended as a step-by-step description --- on how to play with equality on restricted function ranges using mathcomp:
From mathcomp Require Import all_ssreflect all_algebra.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
(* We need this in order for the computation to work. *)
Section AllU.
Variable n : nat.
(* Bounded and unbounded fun *)
Definition FFb := {ffun 'I_n -> nat}.
Implicit Type (f : FFb).
Lemma FFP1 f1 f2 : reflect (f1 = f2) [forall x : 'I_n, f1 x == f2 x].
Proof. exact/(equivP eqfunP)/ffunP. Qed.
Lemma FFP2 f1 f2 :
[forall x : 'I_n, f1 x == f2 x] = all [fun x => f1 x == f2 x] (enum 'I_n).
Proof.
by apply/eqfunP/allP=> [eqf x he|eqf x]; apply/eqP/eqf; rewrite ?enumT.
Qed.
Definition f_inj (f : nat -> nat) : FFb := [ffun x => f (val x)].
Lemma FFP3 (f1 f2 : nat -> nat) :
all [fun x => f1 x == f2 x] (iota 0 n) -> f_inj f1 = f_inj f2.
Proof.
move/allP=> /= hb; apply/FFP1; rewrite FFP2; apply/allP=> x hx /=.
by rewrite !ffunE; apply/hb; rewrite mem_iota ?ltn_ord.
Qed.
(* Exercise, derive bounded eq from f_inj f1 = f_inj f2 *)
End AllU.
The final lemma should indeed allow you to reduce equality of functions to a computational, fully runnable Gallina function.
A simpler version of the above, and likely more useful to you, is:
Lemma FFP n (f1 f2 : nat -> nat) :
[forall x : 'I_n, f1 x == f2 x] = all [pred x | f1 x == f2 x] (iota 0 n).
Proof.
apply/eqfunP/allP=> eqf x; last by apply/eqP/eqf; rewrite mem_iota /=.
by rewrite mem_iota; case/andP=> ? hx; have /= -> := eqf (Ordinal hx).
Qed.
But it depends on how your (absent) condition on range restriction is specified.
After your edit, I think I should add a note on the more general topic of map equality: indeed, you can define a more specific type of map than A -> B and then build a decision procedure.
Most typical map types [including the ones in the stdlib] will work, as long as they support "binding retrieval", so that equality reduces to checking the finitely many bound values.
In fact, the maps in Coq's standard library already provide such a computational equality function.
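For instance, with the FMap interface (here FMapAVL keyed by Z; the concrete maps are only an illustration), equality of two maps is a boolean function you can simply compute:
Require Import ZArith OrderedTypeEx FMapAVL.
Module M := FMapAVL.Make(Z_as_OT).
Definition m1 := M.add 0%Z true (M.add 1%Z false (M.empty bool)).
Definition m2 := M.add 1%Z false (M.add 0%Z true (M.empty bool)).
(* M.equal checks that both maps have the same bindings, given a boolean
   test on values; here it is expected to compute to true. *)
Compute M.equal Bool.eqb m1 m2.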
OK, this is a rather brutal solution which does not attempt to avoid doing the same case distinctions multiple times, but it is fully automated.
We start with a tactic which inspects whether two integers are equal (using Z.eqb) and translates the results to a proposition which omega can deal with.
Ltac inspect_eq y x :=
let p := fresh "p" in
let q := fresh "q" in
let H := fresh "H" in
assert (p := proj1 (Z.eqb_eq x y));
assert (q := proj1 (Z.eqb_neq x y));
destruct (Z.eqb x y) eqn: H;
[apply (fun p => p eq_refl) in p; clear q|
apply (fun p => p eq_refl) in q; clear p].
We can then write a tactic which fires the first occurrence of i it can find. This may introduce contradictory assumptions in the context: e.g. if a previous match has revealed x = 0 but we now call inspect_eq x 0, the second branch will have both x = 0 and x <> 0 in the context. Such a branch is automatically dismissed by omega.
Ltac fire_i x y := match goal with
| [ |- context[i ?x' ?y' _ _] ] =>
unfold i at 1; inspect_eq x x'; inspect_eq y y'; (omega || simpl)
end.
We can then put everything together: call functional extensionality twice, repeat fire_i until there's nothing else to inspect and conclude by reflexivity (indeed all the branches with contradictions have been dismissed automatically!).
Ltac eqFF :=
let x := fresh "x" in
let y := fresh "y" in
intros;
apply functional_extensionality; intro x;
apply functional_extensionality; intro y;
repeat fire_i x y; reflexivity.
We can see that it discharges your lemma without any issue:
Lemma foo_eq : foo = foo'.
Proof.
unfold foo, foo'; eqFF.
Qed.
Here is a self-contained gist with all the imports and definitions.
When attempting to formalize the class corresponding to an algebraic structure (for example, the class of all monoids), a natural design is to create a type monoid (a:Type) as a product type which models all the required fields (an element e:a, an operator app : a -> a -> a, proofs that the monoid laws are satisfied, etc.). In doing so, we are creating a map monoid : Type -> Type. A possible drawback of this approach is that, given a monoid m : monoid a (a monoid with support type a) and m' : monoid b (a monoid with support type b), we cannot even write the equality m = m' (let alone prove it), because it is ill-typed.
An alternative design would be to create a type monoid where the support type is just another field a : Type, so that given m m' : monoid, it is always meaningful to ask whether m = m'. Somehow, one would like to argue that if m and m' have the same supports (a m = a m'), the operators are equal (app m = app m', which may be achieved thanks to some extensional equality axiom), and the proof fields do not matter (because we have some proof irrelevance axiom), etc., then m = m'. Unfortunately, we can't even express the equality app m = app m' because it is ill-typed...
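For concreteness, here is a sketch of the two designs just described (field names are illustrative and most laws are omitted):
(* Design 1: the support type is a parameter, giving a map monoid : Type -> Type. *)
Record monoid (a : Type) : Type := {
e : a;
app : a -> a -> a;
app_assoc : forall x y z : a, app x (app y z) = app (app x y) z
}.
(* Design 2: the support type is just another field, so any two monoids
   inhabit the same type and m = m' is always well-typed. *)
Record monoid' : Type := {
carrier : Type;
e' : carrier;
app' : carrier -> carrier -> carrier
}.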
To simplify the problem, suppose we have:
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
I would like to have results of the form:
forall (a b:Type) (x:a) (y:b), a = b -> x = y -> make a x = make b y.
This statement is ill-typed, so we can't have it.
I may have axioms allowing me to prove that two types a and b are the same, and I may be able to show that x and y are indeed the same too, but I want a tool allowing me to conclude that make a x = make b y. Any suggestion is welcome.
A low-tech way to prove this is to insert a manual type-cast, using the provided equality. That is, instead of having an assumption x = y, you have an assumption (CAST q x) = y. Below I explicitly write the cast as a match, but you could also make it look nicer by defining a function to do it.
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
Lemma ex : forall (a b:Type) (x:a) (y:b) (q: a = b),
  (match q in _ = T return T with eq_refl => x end) = y ->
  make a x = make b y.
Proof.
destruct q.
intros q.
congruence.
Qed.
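The function alluded to above can be defined once and for all; a sketch (reusing myType and make from the snippet above, with cast as an arbitrary name):
Definition cast {a b : Type} (q : a = b) (x : a) : b :=
  match q in _ = T return T with eq_refl => x end.
Lemma ex_cast : forall (a b:Type) (x:a) (y:b) (q: a = b),
  cast q x = y -> make a x = make b y.
Proof.
destruct q.
intro H.
rewrite <- H.
reflexivity.
Qed.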
There is a nicer way to hide most of this machinery by using "heterogeneous equality", also known as JMeq. I recommend the Equality chapter of CPDT for a detailed introduction. Your example becomes:
Require Import Coq.Logic.JMeq.
Infix "==" := JMeq (at level 70, no associativity).
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
Lemma ex : forall (a b:Type) (x:a) (y:b), a = b -> x == y -> make a x = make b y.
Proof.
intros.
rewrite H0.
reflexivity.
Qed.
In general, although this particular theorem can be proved without axioms, if you do the formalization in this style you are likely to encounter goals that cannot be proven in Coq without axioms about equality. In particular, injectivity for this kind of dependent record is not provable. The JMeq library will automatically use an axiom JMeq_eq about heterogeneous equality, which makes it quite convenient.
In the Coq Tutorial, sections 1.3.1 and 1.3.2, there are two elim applications:
The first one:
1 subgoal
A : Prop
B : Prop
C : Prop
H : A /\ B
============================
B /\ A
After applying elim H,
Coq < elim H.
1 subgoal
A : Prop
B : Prop
C : Prop
H : A /\ B
============================
A -> B -> B /\ A
The second one:
1 subgoal
H : A \/ B
============================
B \/ A
After applying elim H,
Coq < elim H.
2 subgoals
H : A \/ B
============================
A -> B \/ A
subgoal 2 is:
B -> B \/ A
There are three questions. First, in the second example, I don't understand what inference rule (or logical identity) is applied to the goal to generate the two subgoals. It is clear to me for the first example, though.
Second, according to the Coq manual, elim is related to inductive types. It therefore seems that elim should not apply here at all, because I feel there are no inductive types in the two examples (forgive me for not knowing the definition of inductive types). Why can elim be applied here?
Third, what does elim do in general? The two examples here don't show a common pattern for elim. The official manual seems to be written for very advanced users, since it defines each term in terms of several other terms that are in turn defined by even more terms, and its language is ambiguous.
Thank you so much for answering!
Jian, first let me note that the manual is open source and available at https://github.com/coq/coq ; if you feel that the wording / definition order could be improved please open an issue there or feel free to submit a pull request.
Regarding your questions, I think you would benefit from reading some more comprehensive introduction to Coq such as "Coq'art", "Software Foundations" or "Programs and Proofs" among others.
In particular, the elim tactic tries to apply the so-called "elimination principle" for a particular type. It is called elimination because, in a sense, the rule allows you to "get rid" of that particular object, allowing you to continue with the proof [I recommend reading Dummett for a more thorough discussion of the origins of the logical connectives].
In particular, the elimination rule for the ∨ connective is usually written by logicians as follows:
           A      B
           ⋮      ⋮
  A ∨ B    C      C
  ─────────────────
          C
that is to say, if we can derive C independently from A and B, then we can derive it from A ∨ B. This looks obvious, doesn't it?
Going back to Coq, it turns out that this rule has a computational interpretation thanks to the "Curry-Howard-Kolmogorov" equivalence. In fact, Coq doesn't provide most of the standard logical connectives as built-ins, but it allows us to define them by means of "Inductive" datatypes, similar to those in Haskell or OCaml.
In particular, the definition of ∨ is:
Inductive or (A B : Prop) : Prop :=
| or_introl : A -> A \/ B
| or_intror : B -> A \/ B
that is to say, or A B is a piece of data that contains either an A or a B, together with a "tag" that allows us to "match" on it to know which one we really have.
Now, the "elimination principle for or" has type:
or_ind : forall A B P : Prop, (A -> P) -> (B -> P) -> A \/ B -> P
The great thing about Coq is that such a principle is not a "built-in", just a regular program! Think: could you write the code of the or_ind function yourself? I'll give you a hint:
Definition or_ind A B P (hA : A -> P) (hB : B -> P) (orW : A \/ B) :=
match orW with
| or_introl aW => ?
| or_intror bW => ?
end.
Once this function is defined, all that elim does is apply it, properly instantiating the variable P.
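For instance (sticking to the first example so as not to spoil the exercise below), elim H on H : A /\ B amounts to applying and_ind by hand; a small sketch:
Goal forall A B C : Prop, A /\ B -> B /\ A.
Proof.
intros A B C H.
apply (@and_ind A B (B /\ A)); [ | exact H ].
(* the remaining goal is A -> B -> B /\ A, exactly what elim H produces *)
intros a b; split; assumption.
Qed.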
Exercise: solve your second example using apply and or_ind instead of elim. Good luck!
Say I have a hypothesis like this:
FooProp a b
I want to change the hypothesis to this form:
exists a, FooProp a b
How can I do this?
I know I can do assert (exists a, FooProp a b) by eauto, but I'm trying to find a solution that doesn't require me to explicitly write down the entire hypothesis; this is bad for automation and is just generally a headache when the hypotheses are nontrivial. Ideally I'd like to write intro_exists a in H1 or something; it really should be that simple.
EDIT: Why? Because I have a lemma like this:
Lemma find_instr_in:
forall c i,
In i c <-> (exists z : Z, find_instr z c = Some i).
And a hypothesis like this:
H1: find_instr z c = Some i
And I'm trying to rewrite like this:
rewrite <- find_instr_in in H1
This fails with the error Found no subterm matching "exists z, ..." .... But if I first do assert (exists z, find_instr z c = Some i) by eauto., the rewrite works.
How about something like this:
Ltac intro_exists' a H :=
pattern a in H; apply ex_intro in H.
Tactic Notation "intro_exists" ident(a) "in" ident(H) := intro_exists' a H.
Section daryl.
Variable A B : Type.
Variable FooProp : A -> B -> Prop.
Goal forall a b, FooProp a b -> False.
intros.
intro_exists a in H.
Admitted.
End daryl.
The key to this is the pattern tactic, which finds occurrences of a term and abstracts them into a function applied to an argument. So pattern a converts the type of H from FooProp a b to (fun x => FooProp x b) a. After that, Coq can figure out what you mean when you apply ex_intro.
Edit:
All that being said, in your concrete case I would actually recommend a different approach, which is not to state your lemma like that. Instead, it is convenient to split it into two lemmas, one for each direction. The forward direction is just the same, but the backward direction should be restated as follows:
forall c i z,
find_instr z c = Some i -> In i c.
If you do this, then the rewrite will succeed without needing to introduce the existential.
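A minimal sketch of how the one-directional statement is used; instr and find_instr below are placeholders standing in for the real definitions:
Require Import ZArith List.
Section OneDirection.
Variable instr : Type.
Variable find_instr : Z -> list instr -> option instr.
Hypothesis find_instr_in' :
  forall c i z, find_instr z c = Some i -> In i c.
Goal forall c i z, find_instr z c = Some i -> In i c.
Proof.
intros c i z H1.
apply find_instr_in' in H1. (* no existential needed *)
exact H1.
Qed.
End OneDirection.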
I encountered a strange situation using setoid_replace where a proof step of the form:
setoid_replace (a - c + d) with b by my_tactic
fails with Error: No matching clauses for match goal, but after appending an extra idtac to the tactic:
setoid_replace (a - c + d) with b by (my_tactic; idtac)
the proof succeeds. My understanding of idtac was that it was essentially a no-op. Why does the presence of idtac make a difference here?
Here's the full code. I'm using Coq 8.4pl6 through Proof General.
Require Import QArith.
Open Scope Q.
Lemma rearrange_eq_r a b c d :
a == b -> b + d == a + c -> c == d.
Proof.
intro a_eq_b; rewrite a_eq_b; symmetry; now apply Qplus_inj_l with (z := b).
Qed.
Ltac rearrange :=
match goal with
| [ H : _ == _ |- _ == _ ] => apply rearrange_eq_r with (1 := H); ring
end.
Lemma test_rearrange a b c d e (H0 : e < b) (H1 : b + c == a + d) : e < a - c + d.
Proof.
(* Why is the extra 'idtac' required in the line below? *)
setoid_replace (a - c + d) with b by (rearrange; idtac).
assumption.
Qed.
Note: as Matt observes, idtac doesn't seem to be special here: it seems that any tactic (including fail!) can be used in place of idtac to make the proof succeed.
Thanks to Jason Gross on the Coq bug tracker for explaining this. This has to do with order of evaluation in the Ltac tactic language. In the failing case, the match in rearrange is being applied to the inequality in the immediate goal rather than to the equality generated by setoid_replace. Here's Jason's response on the bug report:
This is because the [match] is evaluated before the [setoid_replace]
is run. It is one of the unfortunate trip-ups of Ltac that things
like [match] and [let ... in ...] are evaluated eagerly until a
statement with semicolons, or other non-match non-let-in statement is
reached. If you add [idtac; ] before the [match] in [rearrange], your
problem will go away.