How to use Coq GenericMinMax to prove facts about the reals

I'm trying to prove
Theorem T20d :forall (x y:R), (0<x /\ 0<y) -> 0 < Rmin x y.
with
Lemma min_glb_lt n m p : p < n -> p < m -> p < min n m.
which is in Coq.Structures.GenericMinMax
which I imported with Require Import Coq.Structures.GenericMinMax
However, I still get "reference min_glb_lt not found" when I try to use it. I suspect I need to open a scope, but I don't know which one.

First of all, the GenericMinMax library defines generic structures, so you can't use them directly to solve a concrete problem. That library mostly contains functors. In other words, it provides interfaces which you need to implement to be able to use them.
In our case, we need to implement the MinMaxLogicalProperties functor (or some other functor that includes this one), because it includes the required lemma.
Several parts of the Coq standard library provide such implementations. Luckily for us, it has already been done for the reals in the file Rminmax.v, inside the module R, by this line specifically:
Include UsualMinMaxProperties R_as_OT RHasMinMax.
So, we can use it like so:
Require Import Reals.
Require Import Rminmax. Import R.
Local Open Scope R_scope.
Theorem T20d (x y : R) :
(0 < x /\ 0 < y) -> 0 < Rmin x y.
Proof.
intros [? ?].
now apply min_glb_lt.
Qed.
Alternatively, we could've referred to the lemma by its qualified name R.min_glb_lt -- that would've let us get rid of Import R.
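For instance, the same proof might be written with the qualified name like this (a sketch of that variant; T20d' is just a renamed copy, and only the lemma reference changes):

Require Import Reals.
Require Import Rminmax.
Local Open Scope R_scope.
Theorem T20d' (x y : R) :
(0 < x /\ 0 < y) -> 0 < Rmin x y.
Proof.
intros [? ?].
now apply R.min_glb_lt.
Qed.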


Unable to prove that only 0 is less than 1 with mathcomp's finTypes

This question might seem very stupid, but I'm unable to prove that the only natural number
less than 1 is 0. I'm using mathcomp's finType library, and the goal that I want to prove is:
Lemma ord0_eq1 (a : ordinal 1) : a = ord0.
The problem is that if I destruct a and ord0 I obtain the following goal:
∀ (m : nat) (i : (m < 1)%N), Ordinal i = Ordinal (ltn0Sn 0)
Now I can use case and derive a contradiction if m is not equal to 0. But if m is equal to 0, I get:
∀ i : (0 < 1)%N, Ordinal i = Ordinal (ltn0Sn 0)
And the only way to prove this equality is to prove that forall i : 0 < 1, i = (ltn0Sn 0).
But I don't know how to prove equality between two proofs of the same Prop without using
proof irrelevance, and I don't want to add an axiom to my theory.
There must be some way to use Ssreflect's reflection capabilities to solve this goal, but I
haven't found anything: I can get to the point where I need to prove the equality between two
proofs of an equality and I could use UIP (uniqueness of identity proofs), but that's another
axiom and I don't want to use it.
I can't believe that I have to add an axiom to prove this goal: the
less-than relation can only be determined by computation. There should be
a way to prove that forall (m n : nat) (a b : m < n), a = b without UIP or proof irrelevance.
Thanks
EDIT: I am using mathcomp's ssrnat library and not Coq's Arith module. The "<" notation is bound to ssrnat's ltn, not to Arith's lt.
The predicate defining the ordinal type is a boolean equality, hence satisfies proof irrelevance. In cases like this, you can appeal to val_inj:
From mathcomp Require Import all_ssreflect.
Lemma ord0_eq1 (a : ordinal 1) : a = ord0.
Proof. by apply/val_inj; case: (val a) (valP a). Qed.
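The general statement the question asks for (forall (m n : nat) (a b : m < n), a = b) also holds without any axiom, since m < n is a boolean proposition. A sketch of that fact, relying on eq_irrelevance from eqtype.v (equality proofs between elements of an eqType are unique); ltn_proof_irrelevance is a made-up name:

From mathcomp Require Import all_ssreflect.

Lemma ltn_proof_irrelevance (m n : nat) (a b : m < n) : a = b.
Proof. exact: eq_irrelevance. Qed.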

Use condition in a ssreflect finset comprehension

I have a function f that takes an x of finType A and a proof of P x and returns an element of finType B. I want to build the finset of f x for all x satisfying P, which could be written like this:
From mathcomp
Require Import ssreflect ssrbool fintype finset.
Variable A B : finType.
Variable P : pred A.
Variable f : forall x : A, P x -> B.
Definition myset := [set (@f x _) | x : A & P x].
However, this fails as Coq does not fill the placeholder with the information on the right, and I do not know how to provide it explicitly.
I did not find any indication of how to do that, either in the ssreflect code or in the book. I realize that I could probably do it by using a sigma-type {x : A ; P x} and modifying f a bit, but it feels more convoluted than it should be. Is there a simple / readable way to do that?
Actually the easiest way to make things work in the way you propose is to use a sigma type:
Definition myset := [set f (tagged x) | x : { x : A | P x }].
But indeed, f is a bit of a strange function; I guess we'll need to know more details about your use case to understand where you are going.
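In case it helps, here is one way the sigma-type approach might be spelled out in full (a sketch; myset' is a made-up name, and it uses val/valP to recover the element and its proof from the subtype):

From mathcomp Require Import all_ssreflect.

Section Sketch.
Variables (A B : finType) (P : pred A).
Variable f : forall x : A, P x -> B.

(* {x : A | P x} is itself a finType when A is one, so it can index a set
   comprehension; val projects out the element and valP its proof of P. *)
Definition myset' : {set B} := [set f (val x) (valP x) | x : {x : A | P x}].
End Sketch.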

Extracting integer from int_Ring type in mathcomp's ssralg

A bit of setup for the question: the notation `_i denotes the i-th component of a sequence, but is also meant to denote the i-th coefficient of a polynomial. The following code outputs Negz 2 : int_ZmodType:
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Eval compute in my_seq`_1.
The type of my_seq is seq int. The type int has constructors Posz and Negz.
The header of
https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/poly.v
informs us that Poly s is a polynomial with coefficients from the sequence s. It also says that p`_i is the i-th coefficient of a polynomial p. I expected the following code to output Negz 2:
Definition my_polynomial := Poly my_seq.
Eval compute in my_polynomial`_1.
The resulting term is not Negz 2, though it does have type int_Ring. There is a projection polyseq that returns the coefficient sequence of a polynomial. Indeed, the type of polyseq my_polynomial is seq int_Ring. However, Eval compute in (polyseq my_polynomial)`_1. gives the same mess.
In transitioning from the concrete type int to int_Ring, has the value of the integer been lost? Or, is there a way to recover the value of an int from an int_Ring? The way int_Ring is packaged, it doesn't look like it's possible, because the constructors don't reference elements. However, the same can be said of int_ZmodType. For reference, those types are defined in
https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/ssralg.v
This is not completely answering the question, but I managed to prove that the coefficient is indeed Negz 2. I give the proof here for the record. Note that I am not familiar at all with ssreflect, so there may be better and more natural ways to do this.
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Eval compute in my_seq`_1.
Definition my_polynomial := Poly my_seq.
Example test : my_polynomial `_1 = Negz 2.
Proof.
cbn.
rewrite 2!polyseq_cons. cbn.
rewrite 2!size_polyC. cbn.
rewrite polyseqC. cbn. reflexivity.
Qed.
EDIT: As explained in the comments below, there exist simpler proofs of this fact.
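For the record, one such simpler proof goes through the coef_Poly lemma from poly.v, which states that the i-th coefficient of Poly s is the i-th entry of s (a sketch reusing the definitions above; test' is a made-up name):

Example test' : my_polynomial`_1 = Negz 2.
Proof. by rewrite /my_polynomial coef_Poly. Qed.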
Here's a version of your code that works:
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Definition my_poly := @Polynomial _ my_seq erefl.
Compute my_poly`_1.
Instead of using the simpler Poly wrapper function defined in the library, this version directly calls the constructor of the polynomial type. If you look at the definition of this type, you'll see that a polynomial is simply a record containing the sequence of coefficients of the polynomial, plus a proof of a boolean equality asserting that the last element of this sequence (the leading coefficient) is not zero. (The second argument in the above expression is a proof of true = true, which Coq accepts as a proof of (last 1 polyseq != 0) = true by the rules of computation.) You can check manually that there is nothing preventing the expression we're computing from reducing, so Coq is able to figure out the answer.
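For reference, the definition of the polynomial type in poly.v looks roughly like this (paraphrased; R is the coefficient ring fixed by the enclosing section):

Record polynomial := Polynomial {polyseq :> seq R; _ : last 1 polyseq != 0}.

So a polynomial really is just its coefficient sequence bundled with a boolean proof about its last element, as described above.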
To see what is wrong with your original attempt, we have to unfold it a little bit. I have included the relevant definitions here in order, expanding some notations:
Poly s := foldr cons_poly (polyC 0) s
polyC c := insubd poly_nil [:: c]
(* from eqtype.v *)
insubd {T : Type} {P : pred T} {sT : subType T P} u0 (x : T) : sT :=
odflt u0 (insub x)
insub {T : Type} {P : pred T} {sT : subType T P} (x : T) : option sT
:= if @idP (P x) is ReflectT Px then @Some sT (Sub x Px) else None
And here we find the culprit: Poly is defined in terms of insub, which in turn is defined by case analysis on idP, which is an opaque lemma! And Coq's reduction gets stuck when an opaque term gets in the way. (In case you are curious, what is going on here is that insub is testing, using idP, whether the leading coefficient of the polynomial is indeed different from zero, and, if so, using that fact to build the polynomial.)
The problem is that many definitions in ssreflect were not made to compute fully inside the logic. This is due to two reasons. One is performance: by allowing everything to fully reduce, we can make type checking much slower. The other is that ssreflect is tailored for convenience of reasoning, so many definitions are not the most efficient. The CoqEAL library was developed to connect definitions with better computational behavior to ones that are easier to reason about, like in ssreflect; unfortunately, I don't know if the project is still being maintained.

Adding complete disjunctive assumption in Coq

In mathematics, we often proceed as follows: "Now let us consider two cases, the number k can be even or odd. For the even case, we can say exists k', 2k' = k..."
Which expands to the general idea of reasoning about an entire set of objects by disassembling it into several disjunct subsets that can be used to reconstruct the original set.
How is this reasoning principle captured in Coq, considering that we do not always have an assumption telling us which of the subsets we are in?
Consider the follow example for demonstration:
forall n, Nat.Even n -> P n.
Here we can naturally do inversion on Nat.Even n to get n = 2*x (and an automatically-false eliminated assumption that n = 2*x + 1). However, suppose we have the following:
forall n, P n
How can I state "let us consider even ns and odd ns"? Do I need to first prove a decidability result such as forall n : nat, even n \/ odd n? That is, introduce a new (local or global) lemma listing all the required subsets? What are the best practices?
Indeed, to reason about a splitting of a class of objects in Coq you need to exhibit an algorithm that splits them, unless you want to reason classically (there is nothing wrong with that).
IMO, a key point is getting such decidability hypotheses "for free". For instance, you could implement odd : nat -> bool as a boolean function, as it is done in some libraries, then you get the splitting for free.
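For instance, the standard library already provides a Prop-level parity split, so the case analysis can be introduced on the fly without any extra axiom (a sketch using Nat.Even_or_Odd from PeanoNat; by_parity is a made-up name):

Require Import PeanoNat.

(* Sketch: destructing the disjunction splits a goal about an arbitrary n
   into an even case and an odd case. *)
Lemma by_parity (P : nat -> Prop)
  (HE : forall n, Nat.Even n -> P n)
  (HO : forall n, Nat.Odd n -> P n) :
  forall n, P n.
Proof.
intros n; destruct (Nat.Even_or_Odd n) as [E | O].
- now apply HE.
- now apply HO.
Qed.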
[edit]
You can use some slightly more convenient techniques for pattern matching, by encoding the pertinent cases as inductives:
Require Import PeanoNat Nat Bool.
CoInductive parity_spec (n : nat) : Type :=
| parity_spec_odd : odd n = true -> parity_spec n
| parity_spec_even: even n = true -> parity_spec n
.
Lemma parityP n : parity_spec n.
Proof.
case (even n) eqn:H; [now right|left].
now rewrite <- Nat.negb_even, H.
Qed.
Lemma test n : even n = true \/ odd n = true.
Proof. now case (parityP n); auto. Qed.
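Once parityP is available, a goal about an arbitrary n can be split into the two parity cases directly. A small usage sketch (even_n_or_Sn is a made-up example; Nat.even_succ relates even (S n) to odd n):

Lemma even_n_or_Sn n : even n = true \/ even (S n) = true.
Proof.
case (parityP n); intros H.
- right. now rewrite Nat.even_succ.
- now left.
Qed.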

`No more subgoals, but there are non-instantiated existential variables` in Coq proof language?

I was following the (incomplete) examples in chapter 11 of the Coq 8.5pl1 reference manual, about the mathematical/declarative proof language. In the example below for iterated equalities (~= and =~), I got a warning "Insufficient Justification" when rewriting 4 into 2 + 2, and eventually got an error saying:
No more subgoals, but there are non-instantiated existential
variables:
?Goal : [x : R  H : x = 2  _eq0 : 4 = x * x |- 2 + 2 = 4]
You can use Grab Existential Variables.
Example:
Goal forall x, x = 2 -> x + x = x * x.
Proof.
proof. Show.
let x:R.
assume H: (x = 2). Show.
have ( 4 = 4). Show.
~= (2*2). Show.
~= (x*x) by H. Show.
=~ (2+2). Show. (*Problem Here: Insufficient Justification*)
=~ H':(x + x) by H.
thus thesis by H'.
end proof.
Fail Qed.
I'm not familiar with the mathematical proof language in Coq and couldn't understand why this happens. Can someone help explain how to fix the error?
--EDIT--
@Vinz
I had these random imports before the example:
Require Import Reals.
Require Import Fourier.
Your proof would work for nat or Z, but it fails in the case of R.
From the Coq Reference Manual (v8.5):
The purpose of a declarative proof language is to take the opposite approach where intermediate states are always given by the user, but the transitions of the system are automated as much as possible.
It looks like the automation fails for 4 = 2 + 2. I don't know exactly what kind of automation the declarative proof engine uses, but, for instance, the auto tactic cannot prove most simple equalities over the reals, like this one:
Open Scope R_scope.
Goal 2 + 2 = 4. auto. Fail Qed.
And as @ejgallego points out, we can prove 2 * 2 = 4 using auto only by chance:
Open Scope R_scope.
Goal 2 * 2 = 4. auto. Qed.
(* `reflexivity.` would do here *)
However, the field tactic works like a charm. So one approach would be to tell the declarative proof engine to use the field tactic:
Require Import Coq.Reals.Reals.
Open Scope R_scope.
Unset Printing Notations. (* to better understand what we prove *)
Goal forall x, x = 2 -> x + x = x * x.
Proof.
proof.
let x : R.
assume H: (x = 2).
have (4 = 4).
~= (x*x) by H.
=~ (2+2) using field. (* we're using the `field` tactic here *)
=~ H':(x + x) by H.
thus thesis by H'.
end proof.
Qed.
The problem here is that Coq's standard reals are defined in an axiomatic way.
Thus, + : R -> R -> R, *, etc. are abstract operations and will never compute. What does this mean? It means that Coq doesn't have a reduction rule for +, contrary to, for instance, the nat case, where Coq knows that:
0 + n ~> n
S n + m ~> S (n + m)
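Concretely (a small illustration of the difference), addition on nat reduces to a value, while the same expression on R stays stuck:

Require Import Reals.
Eval compute in (2 + 2)%nat.  (* reduces to 4 *)
Open Scope R_scope.
Eval compute in (2 + 2)%R.    (* stays an unreduced term built from Rplus and R1 *)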
Thus, the only way to manipulate + for the real numbers is to manually apply the corresponding axioms that characterize the operator, see:
https://coq.inria.fr/library/Coq.Reals.Rdefinitions.html
https://coq.inria.fr/library/Coq.Reals.Raxioms.html
This is what field, omega, etc. do. Even 0 + 1 = 1 is not provable by computation.
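To make that concrete, here is a tiny sketch: the equality 0 + 1 = 1 over R does not hold by reduction, but applying the characterizing axiom closes it immediately:

Require Import Reals.
Open Scope R_scope.
Goal 0 + 1 = 1.
Proof.
(* reflexivity would fail here: Rplus is axiomatic and never reduces *)
apply Rplus_0_l.
Qed.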
Anton's example 2 * 2 = 4 works by chance. Actually, Coq has to parse the numeral 4 into a suitable representation built from the real operations, and it turns out that 4 is parsed as Rmult (Rplus R1 R1) (Rplus R1 R1) (for efficiency), which is syntactically the same as the left-hand side of the previous equality.