Case analysis on max - ssreflect - coq

I have the following in my goal:
maxr x0 x
I would like to do a case analysis to consider what happens in the case that x0 is greatest and in the case that x is greatest. Is this possible in ssreflect?
In general it would be something like (for example, with an if statement):
have [ something | something'] := ifP
However, I cannot find the appropriate syntax to do a case analysis with max.

You can use lerP or ltrP:
Check lerP.
(*
lerP
: forall (R : realDomainType) (x y : R),
ler_xor_gt (R:=R) x y (minr y x) (minr x y)
(maxr y x) (maxr x y) `|(x - y)%R| `|(y - x)%R|
(x <= y)%R (y < x)%R
*)
In action:
From mathcomp Require Import all_ssreflect all_algebra.
Import Num.Def Num.Theory.
Goal forall a b : int, maxr a b = a.
move=> a b.
case: lerP.
(*
2 goals (ID 4070)
a, b : int
============================
(a <= b)%R -> b = a
goal 2 (ID 4071) is:
(b < a)%R -> a = a
*)
Abort.
I found them with Search maxr., which is not the fastest way (it shows several results), but at least it works.
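For a variant where both branches actually close, here is a small hedged follow-up sketch (assuming the same mathcomp version and imports as above): lerP rewrites maxr a b and maxr b a at the same time, so each subgoal becomes trivial.
From mathcomp Require Import all_ssreflect all_algebra.
Import Num.Def Num.Theory.
(* Hedged sketch: after case: lerP the subgoals are
   (a <= b)%R -> b = b and (b < a)%R -> a = a, both closed by done. *)
Goal forall a b : int, maxr a b = maxr b a.
Proof. by move=> a b; case: lerP. Qed.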

Related

Beta expansion in coq: can I make a term into a function, abstracting over another given term?

I want to rewrite a term, as a function in a sort of beta expansion (inverse of beta reduction).
So, for example, in the term a + 1 = RHS I would like to rewrite it as (fun x => x + 1) a = RHS. Obviously, the two terms are equal by beta reduction, but I can't figure out how to automate it.
The tactic pattern comes very close to what I want, except it only applies to a full goal, and I can't see how I would use it in a term inside an equality.
Similarly, I thought I could use the context holes. Here is my best attempt
Ltac betaExpansion term a :=
  let T := type of a in
  match term with
  | context hole [a] =>
      idtac hole;
      let f := fun x => context hole [x] in
      remember (fun x : T => f x) as f'
  end.
Goal forall a : nat, a + 1 = 0.
  intros a.
  match goal with
  | |- ?LHS = _ =>
      betaExpansion LHS a (* Error: Variable f should be bound to a term but is bound to a tacvalue. *)
  end.
This obviously fails, because f is a tacvalue when I really need a normal value. Can I somehow evaluate the expression to make it a value?
You should have a look at the pattern tactic: pattern t replaces all occurrences of t in the goal by a beta-expanded variable.
You may also use the change ... with ... at tactic.
Goal forall (a:nat) , a+1 = 2* (a+1) - (a+1).
intro x; change (x+1) with ((fun z => z) (x+1)) at 1 3.
(*
x : nat
============================
(fun z : nat => z) (x + 1) = 2 * (x + 1) - (fun z : nat => z) (x + 1)
*)
Or, more automatically
Ltac betaexp term i :=
  let x := fresh "x" in
  let T := type of term in
  change term with ((fun x : T => x) term) at i.
Goal forall a : nat, a + 1 = a + 1.
  intro x; betaexp (x + 1) ltac:(1).
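For the exact shape the question asks for (turning a + 1 into (fun x => x + 1) a, abstracting over a itself), here is a hedged sketch along the same lines; it relies only on the fact that the two terms are convertible, so plain change accepts the replacement.
Goal forall a : nat, a + 1 = 0.
  intro a.
  change (a + 1) with ((fun x => x + 1) a).
  (*
  a : nat
  ============================
  (fun x : nat => x + 1) a = 0
  *)
Abort.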

Finding a well founded relation to prove termination of a function that stops decreasing at some point

Suppose we have:
Require Import ZArith Program.
Program Fixpoint range (from to : Z) {measure f R} : list Z :=
  if from <? to
  then from :: range (from + 1) to
  else [].
I'd like to convince Coq that this terminates. I tried measuring the size of the range as abs (to - from); however, this doesn't quite work, because once the range is empty (that is, from >= to) the measure simply starts increasing again.
I've also tried measuring with:
Definition get_range (from to : Z) : option nat :=
  let range := (to - from) in
  if (range <? 0)
  then None
  else Some (Z_to_nat (Z.abs range) (Z.abs_nonneg range)).
using my custom:
Definition preceeds_eq (l r : option nat) : Prop :=
  match l, r with
  | None,   None   => False
  | None,   Some _ => True
  | Some _, None   => False
  | Some x, Some y => x < y
  end.
and the cast:
Definition Z_to_nat (z : Z) (p : 0 <= z) : nat.
Proof.
  dependent destruction z.
  - exact (0%nat).
  - exact (Pos.to_nat p).
  - assert (Z.neg p < 0) by apply Zlt_neg_0.
    contradiction.
Defined.
But this runs into the issue that I cannot show None < None, and making preceeds_eq reflexive makes the relation not well founded, which brings me back to the same problem.
Is there a way to convince Coq that range terminates? Is my approach completely broken?
If you map the length of your interval to nat using the Z.abs_nat or Z.to_nat functions, and decide whether the range is non-empty using a function with a more informative result type (Z_lt_dec), then the solution becomes very simple:
Require Import ZArith Program.
Program Fixpoint range (from to : Z) {measure (Z.abs_nat (to - from))} : list Z :=
  if Z_lt_dec from to
  then from :: range (from + 1) to
  else [].
Next Obligation. apply Zabs_nat_lt; auto with zarith. Qed.
Using Z_lt_dec instead of its boolean counterpart has the benefit of propagating the proof of from < to into the context, which makes it easy to deal with the proof obligation.
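For reference, the arithmetic fact behind the generated obligation can be stated and proved on its own; this is a hedged restatement (not part of the original answer), using the same lemma and hint database as the obligation proof above.
Require Import ZArith.
Open Scope Z_scope.
(* The measure strictly decreases on the recursive call when from < to:
   Zabs_nat_lt reduces the goal to 0 <= to - (from + 1) < to - from,
   which auto with zarith closes. *)
Goal forall from to : Z,
  from < to -> (Z.abs_nat (to - (from + 1)) < Z.abs_nat (to - from))%nat.
Proof. intros; apply Zabs_nat_lt; auto with zarith. Qed.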

Computing with a finite subset of an infinite representation in Coq

I have a function Z -> Z -> whatever, which I treat as a sort of map from (Z, Z) to whatever; let's call its type FF.
Here whatever is a simple sum type whose values are built from nix or inj_whatever.
This map I initialize with some data, in the fashion of:
Definition i (x y : Z) (f : FF) : FF :=
  fun x' y' =>
    if andb (x =? x') (y =? y')
    then inj_whatever
    else f x' y'.
The =? represents boolean decidable equality on Z, from Coq's ZArith.
Now I would like to have equality on two such FFs; I don't mind invoking functional_extensionality. What I would like is to have Coq computationally decide the equality of two FFs.
For example, suppose we do something along the lines of:
Definition empty : FF := fun x y => nix.
Now we add some arbitrary values to make foo and foo', which are equivalent under functional extensionality:
Definition foo := i 0 0 (i 0 (-42) (i 56 1 empty)).
Definition foo' := i 0 (-42) (i 56 1 (i 0 0 empty)).
What is a good way to automatically have Coq determine that foo = foo'? Ltac-level stuff? Actual terminating computation? Do I need to restrict the domain to a finite one?
The domain restriction is a bit intricate. I manipulate the maps with functions f : FF -> FF, where f can extend the subset of Z x Z on which the computation is defined. Come to think of it, it can't really be f : FF -> FF, but rather f : FF -> FF_1, where FF_1 is defined on a subset of Z x Z extended by a small constant. So, after applying f n times, one ends up with FF_n, which corresponds to the domain restriction of FF plus n * constant added to the domain. In other words, f slowly (by a constant factor) expands the domain on which FF is defined.
As I said in the comment, more specifics are needed in order to elaborate a satisfactory answer. See the example below --- intended as a step-by-step description --- on how to play with equality on restricted function ranges using mathcomp:
From mathcomp Require Import all_ssreflect all_algebra.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
(* We need this in order for the computation to work. *)
Section AllU.
Variable n : nat.
(* Bounded and unbounded fun *)
Definition FFb := {ffun 'I_n -> nat}.
Implicit Type (f : FFb).
Lemma FFP1 f1 f2 : reflect (f1 = f2) [forall x : 'I_n, f1 x == f2 x].
Proof. exact/(equivP eqfunP)/ffunP. Qed.
Lemma FFP2 f1 f2 :
  [forall x : 'I_n, f1 x == f2 x] = all [fun x => f1 x == f2 x] (enum 'I_n).
Proof.
by apply/eqfunP/allP=> [eqf x he|eqf x]; apply/eqP/eqf; rewrite ?enumT.
Qed.
Definition f_inj (f : nat -> nat) : FFb := [ffun x => f (val x)].
Lemma FFP3 (f1 f2 : nat -> nat) :
  all [fun x => f1 x == f2 x] (iota 0 n) -> f_inj f1 = f_inj f2.
Proof.
move/allP=> /= hb; apply/FFP1; rewrite FFP2; apply/allP=> x hx /=.
by rewrite !ffunE; apply/hb; rewrite mem_iota ?ltn_ord.
Qed.
(* Exercise, derive bounded eq from f_inj f1 = f_inj f2 *)
End AllU.
The final lemma should indeed allow you to reduce equality of functions to a computational, fully runnable Gallina function.
A simpler version of the above, and likely more useful to you is:
Lemma FFP n (f1 f2 : nat -> nat) :
  [forall x : 'I_n, f1 x == f2 x] = all [pred x | f1 x == f2 x] (iota 0 n).
Proof.
apply/eqfunP/allP=> eqf x; last by apply/eqP/eqf; rewrite mem_iota /=.
by rewrite mem_iota; case/andP=> ? hx; have /= -> := eqf (Ordinal hx).
Qed.
But it depends on how your (as yet unstated) condition on range restriction is specified.
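As a small hedged illustration of that computational side (not from the original answer): once a goal is in the all-over-iota form with concrete functions, it is closed by computation alone.
From mathcomp Require Import all_ssreflect.
(* Hedged demo: the boolean check is an ordinary Gallina computation,
   so it evaluates to true and the goal closes trivially. *)
Example all_over_iota_demo : all [pred x | x + 0 == 0 + x] (iota 0 10).
Proof. by []. Qed.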
After your edit, I think I should add a note on the more general topic of map equality: indeed, you can define a more specific type of maps than A -> B and then build a decision procedure.
Most typical map types [including the ones in the stdlib] will work, as long as they support the operation of "binding retrieval", so that you can reduce equality to a check of the finitely many bound values.
In fact, the maps in Coq's standard library already provide such a computational equality function.
OK, this is a rather brutal solution which does not attempt to avoid doing the same case distinctions multiple times, but it is fully automated.
We start with a tactic which inspects whether two integers are equal (using Z.eqb) and translates the result into propositions which omega can deal with.
Ltac inspect_eq y x :=
  let p := fresh "p" in
  let q := fresh "q" in
  let H := fresh "H" in
  assert (p := proj1 (Z.eqb_eq x y));
  assert (q := proj1 (Z.eqb_neq x y));
  destruct (Z.eqb x y) eqn: H;
  [ apply (fun p => p eq_refl) in p; clear q
  | apply (fun p => p eq_refl) in q; clear p ].
We can then write a tactic which fires on the first occurrence of i it can find. This may introduce contradictory assumptions in the context: e.g., if a previous match has revealed x = 0 but we now call inspect_eq x 0, the second branch will have both x = 0 and x <> 0 in the context. Such a branch is automatically dismissed by omega.
Ltac fire_i x y := match goal with
  | [ |- context[i ?x' ?y' _ _] ] =>
    unfold i at 1; inspect_eq x x'; inspect_eq y y'; (omega || simpl)
  end.
We can then put everything together: call functional extensionality twice, repeat fire_i until there's nothing else to inspect and conclude by reflexivity (indeed all the branches with contradictions have been dismissed automatically!).
Ltac eqFF :=
  let x := fresh "x" in
  let y := fresh "y" in
  intros;
  apply functional_extensionality; intro x;
  apply functional_extensionality; intro y;
  repeat fire_i x y; reflexivity.
We can see that it discharges your lemma without any issue:
Lemma foo_eq : foo = foo'.
Proof.
unfold foo, foo'; eqFF.
Qed.
Here is a self-contained gist with all the imports and definitions.
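The gist itself is not reproduced here; for completeness, a hedged sketch of the setup the snippet above assumes, reconstructed from the question rather than taken from the gist (FF, i, empty, foo and foo' are exactly as defined in the question).
Require Import ZArith Omega FunctionalExtensionality.
Open Scope Z_scope.
(* Hedged reconstruction: a two-constructor type playing the role of
   "whatever" from the question. *)
Inductive whatever : Type := nix | inj_whatever.
Definition FF := Z -> Z -> whatever.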

Using 'convoy pattern' to obtain proof inside code of equality of pattern match

It is a standard example in beginners' textbooks on category theory to argue that a preorder gives rise to a category (where the hom-set hom(x,y) is a singleton or empty depending on whether x <= y). When attempting to formalize this idea in Coq, it is natural to view an arrow as a triple (x,y,pxy) where x y : A (A being a type on which we have a preorder) and pxy is a proof that x <= y. So naturally, when attempting to define the composition of two arrows (x,y,pxy) and (y',z,pyz), we need to return Some arrow whenever y = y' (or None otherwise). This implies that we must be able to test for equality within the function, and compute a proof (the last field of our triple, which may rely on the fact that things are equal).
For the sake of this question, suppose I have:
Parameter eq_dec : forall {A:Type}, A -> A -> bool.
and:
Axiom eq_dec_correct : forall (A:Type) (x y:A),
eq_dec x y = true -> x = y. (* don't care about equivalence here *)
and let us assume I am attempting something simpler than defining composition between arrows, by writing a function which returns a proof that x = y whenever x = y.
Definition test {A:Type} (x y : A) : option (x = y) :=
  match eq_dec x y with
  | true  => Some (eq_dec_correct A x y _)
  | false => None
  end.
This doesn't work of course, but probably gives you the idea of what I am trying to achieve. Any suggestion is greatly appreciated.
EDIT: OK, it seems this is a case of the 'convoy pattern'. I have found this link, which suggested the following to me:
Definition test (A:Type) (x y:A) : option (x = y) :=
  match eq_dec x y as b return eq_dec x y = b -> option (x = y) with
  | true  => fun p => Some (eq_dec_correct A x y p)
  | false => fun _ => None
  end (eq_refl (eq_dec x y)).
This seems to be working. It is a bit magical and confusing but I'll get my head round it.
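As an aside (not from the original post): the convoy pattern is only needed here because eq_dec returns a bare bool. With an informative decider of type {x = y} + {x <> y}, such as the standard library's Nat.eq_dec, the proof is carried by the result itself; here is a hedged sketch with a hypothetical name test_sumbool.
Require Import PeanoNat.
(* Hedged sketch: the left branch of the sumbool already carries the
   equality proof, so no dependent match trickery is required. *)
Definition test_sumbool {A : Type}
  (dec : forall x y : A, {x = y} + {x <> y})
  (x y : A) : option (x = y) :=
  match dec x y with
  | left p  => Some p
  | right _ => None
  end.
Check test_sumbool Nat.eq_dec 1 1. (* : option (1 = 1) *)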

Messing around with category theory

Motivation: I am attempting to study category theory while creating a Coq formalization of the ideas I find in whatever textbook I follow. In order to make this formalization as simple as possible, I figured I should identify objects with their identity arrows, so a category can be reduced to a set (class, type) of arrows X with a source mapping s : X -> X, a target mapping t : X -> X, and a composition mapping product : X -> X -> option X, which is a partial mapping defined exactly when t f = s g. Obviously the structure (X, s, t, product) should satisfy various properties. For the sake of clarity, I am spelling out the formalization I chose below, but there is no need, I think, to follow it in order to read my question:
Record Category {A:Type} : Type := category
  { source : A -> A
  ; target : A -> A
  ; product : A -> A -> option A
  ; proof_of_ss : forall f:A, source (source f) = source f
  ; proof_of_ts : forall f:A, target (source f) = source f
  ; proof_of_tt : forall f:A, target (target f) = target f
  ; proof_of_st : forall f:A, source (target f) = target f
  ; proof_of_dom: forall f g:A, target f = source g <-> product f g <> None
  ; proof_of_src: forall f g h:A, product f g = Some h -> source h = source f
  ; proof_of_tgt: forall f g h:A, product f g = Some h -> target h = target g
  ; proof_of_idl: forall a f:A,
      a = source a ->
      a = target a ->
      a = source f ->
      product a f = Some f
  ; proof_of_idr: forall a f:A,
      a = source a ->
      a = target a ->
      a = target f ->
      product f a = Some f
  ; proof_of_asc: forall f g h fg gh:A,
      product f g = Some fg ->
      product g h = Some gh ->
      product fg h = product f gh
  }.
I have no idea how practical this is and how far it will take me. I see this as an opportunity to learn category theory and Coq at the same time.
Problem: My first objective was to create a 'Category' which would resemble as much as possible the category Set. In a set theoretic framework, I would probably consider the class of triplets (a,b,f) where f is a map with domain a and range a subset of b. With this in mind I tried:
Record Arrow : Type := arrow
  { dom : Type
  ; cod : Type
  ; arr : dom -> cod
  }.
So that Arrow becomes my base type on which I could attempt building a structure of category. I start embedding Type into Arrow:
Definition id (a : Type) : Arrow := arrow a a (fun x => x).
which allows me to define the source and target mappings:
Definition domain (f:Arrow) : Arrow := id (dom f).
Definition codomain (f:Arrow) : Arrow := id (cod f).
Then I move on to defining a composition on Arrow:
Definition compose (f g : Arrow) : option Arrow :=
  match f with
  | arrow a b f' =>
    match g with
    | arrow b' c g' =>
      match b with
      | b' => Some (arrow a c (fun x => g' (f' x)))
      | _  => None
      end
    end
  end.
However, this code is illegal as I get the error:
The term "f' x" has type "b" while it is expected to have type "b'".
Question: I have the feeling I am not going to get away with this; using Type naively would lead me into some sort of Russell paradox which Coq will not allow. However, just in case, is there a way to define compose on Arrow?
Your encoding does not work in plain Coq because of the constructive nature of the theory: it is not possible to compare two sets for equality. If you absolutely want to follow this approach, Daniel's comment sketches a solution: you need to assume a strong classical principle to be able to check whether the endpoints of two arrows match, and then manipulate an equality proof to make Coq accept the definition.
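For concreteness, here is a hedged sketch (not from the answer) of what that classical route could look like: postulate a non-constructive equality test on Type and transport along the resulting proof. The names type_eq_dec and compose_classical are hypothetical, and this is meant as an illustration rather than a recommendation.
(* Hedged sketch: a postulated decider on Type, used to cast f' x from
   b to b' along the equality proof before composing. *)
Axiom type_eq_dec : forall a b : Type, {a = b} + {a <> b}.
Definition compose_classical (f g : Arrow) : option Arrow :=
  match f, g with
  | arrow a b f', arrow b' c g' =>
    match type_eq_dec b b' with
    | left e =>
      Some (arrow a c (fun x =>
        g' (match e in (_ = T) return T with eq_refl => f' x end)))
    | right _ => None
    end
  end.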
Another approach is to have separate types for arrows and objects, and use type dependency to express the compatibility requirement on arrow endpoints. This definition requires only three axioms, and considerably simplifies the construction of categories:
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Record category : Type := Category {
  obj  : Type;
  hom  : obj -> obj -> Type;
  id   : forall {X}, hom X X;
  comp : forall X Y Z, hom X Y -> hom Y Z -> hom X Z;
  (* Axioms *)
  idL : forall X Y (f : hom X Y), comp id f = f;
  idR : forall X Y (f : hom X Y), comp f id = f;
  assoc : forall X Y Z W (f : hom X Y) (g : hom Y Z) (h : hom Z W),
    comp f (comp g h) = comp (comp f g) h
}.
We can now define the category of sets and ask Coq to automatically prove the axioms for us.
Require Import Coq.Program.Tactics.
Program Definition Sets : category := {|
  obj := Type;
  hom X Y := X -> Y;
  id X := fun x => x;
  comp X Y Z f g := fun x => g (f x)
|}.
(This does not lead to any circularity paradoxes, because of Coq's universe mechanism: Coq understands that the Type used in this definition is actually smaller than the one used to define category.)
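As a small hedged sanity check (not in the original answer), composition in Sets computes as ordinary function composition once the object arguments are spelled out explicitly with @ (they cannot be inferred from bare function types).
(* Hedged check; this should evaluate to 2. *)
Eval compute in @comp Sets nat nat nat (fun n => n + 1) S 0.
(* = 2 : nat *)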
This encoding is sometimes inconvenient due to the lack of extensionality in Coq's theory, because it prevents certain axioms from holding. Consider the category of groups, for example, where the morphisms are functions that commute with the group operations. A reasonable definition of these morphisms could be as follows (assuming that there is some type group representing groups, where * denotes multiplication and 1 denotes the neutral element).
Record group_morphism (X Y : group) : Type := {
  mor : X -> Y;
  mor_1 : mor 1 = 1;
  mor_m : forall x1 x2, mor (x1 * x2) = mor x1 * mor x2
}.
The problem is that the properties mor_1 and mor_m interfere with the notion of equality for elements of group_morphism, making the proofs for associativity and identity that worked for Sets break. There are two solutions:
1. Adopt extra axioms into the theory so that the required properties still go through. In the above example, you would need proof irrelevance:
proof_irrelevance : forall (P : Prop) (p q : P), p = q.
2. Change the category axioms so that the identities are valid up to some equivalence relation specific to that category, instead of the plain Coq equality. This approach is followed here, for example.