using addf_div for rat_numDomainType - coq

I am trying to apply the addf_div theorem from math-comp's ssralg to the following:
1 / a%:R + 1 / a%:R. I want to show that this is 2 / a%:R, but addf_div is stated over fieldTypes and can't be applied. Is there a way to apply addf_div to this goal?
Here, a is a nat.

I'm not at all an expert with ssralg, but I managed to get this direct proof, which I'm pretty sure can be simplified quite a bit.
From mathcomp Require Import all_ssreflect all_algebra.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Open Scope ring_scope.
Import GRing.Theory.
Variable (R : numFieldType).
Variable (a : nat).
Definition a' : R := a%:R.
Lemma foo : 1 / a' + 1 / a' = 2%:R / a'.
Proof. by rewrite -mulrDr -mulr2n mulrnAr -mulrnAl. Qed.
Note that addf_div can be used inside the proof too, but using it doesn't seem to make the proof simpler.
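For reference, this is (roughly) the shape of addf_div in GRing.Theory; it is stated over a fieldType and needs both denominators to be nonzero, which is why it does not apply directly to the goal above without an extra hypothesis such as a != 0%N:
Check addf_div.
(* addf_div : forall (F : fieldType) (x1 y1 x2 y2 : F),
     y1 != 0 -> y2 != 0 -> x1 / y1 + x2 / y2 = (x1 * y2 + x2 * y1) / (y1 * y2) *)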

Extracting integer from int_Ring type in mathcomp's ssralg

A bit of setup for the question: the notation `_i is defined to be the i-th component of a sequence, but it is also meant to be the i-th coefficient of a polynomial. The following code outputs Negz 2 : int_ZmodType:
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Eval compute in my_seq`_1.
The type of my_seq is seq int. The type int has constructors Posz and Negz.
The header of
https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/poly.v
informs us that Poly s is a polynomial with coefficients from the sequence s. It also says that p`_i is the i-th coefficient of a polynomial p. I expected the following code to output Negz 2:
Definition my_polynomial := Poly my_seq.
Eval compute in my_polynomial`_1.
The resulting term is not Negz 2, though it does have type int_Ring. There is a sequence constructor polyseq for polynomials. Indeed, the type of polyseq my_polynomial is seq int_Ring. However, doing Eval compute in (polyseq my_polynomial)`_1. gives the same mess.
In transitioning from the concrete type int to int_Ring, has the value of the integer been lost? Or, is there a way to recover the value of an int from an int_Ring? The way int_Ring is packaged, it doesn't look like it's possible, because the constructors don't reference elements. However, the same can be said of int_ZmodType. For reference, those types are defined in
https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/ssralg.v
This does not completely answer the question, but I managed to prove that the coefficient is indeed Negz 2. I give the proof here for the record. Note that I am not at all familiar with ssreflect, so there may be better and more natural ways to do this.
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Eval compute in my_seq`_1.
Definition my_polynomial := Poly my_seq.
Example test : my_polynomial `_1 = Negz 2.
Proof.
cbn.
rewrite 2!polyseq_cons. cbn.
rewrite 2!size_polyC. cbn.
rewrite polyseqC. cbn. reflexivity.
Qed.
EDIT: As explained in the comments below, there exist simpler proofs of this fact.
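For instance, assuming the coef_Poly lemma from poly.v (which states (Poly s)`_i = s`_i), something along these lines should close the goal directly, since the remaining equality on sequences holds by computation:
Example test' : my_polynomial`_1 = Negz 2.
Proof. by rewrite /my_polynomial coef_Poly. Qed.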
Here's a version of your code that works:
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Definition my_poly := @Polynomial _ my_seq erefl.
Compute my_poly`_1.
Instead of using the simpler Poly wrapper function defined in the library, this version directly calls the constructor of the polynomial type. If you look at the definition of this type, you'll see that a polynomial is simply a record containing the sequence of coefficients of the polynomial, plus a proof of a boolean equality asserting that the last element of this sequence (the leading coefficient) is not zero. (The erefl argument in the above expression is a proof that true = true, which Coq accepts as a proof of (last 1 polyseq != 0) = true by the rules of computation.) You can check manually that there is nothing preventing the expression we're computing from reducing, so Coq is able to figure out the answer.
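For reference, this is (roughly) how the type is declared in poly.v, inside a section with Variable R : ringType:
Record polynomial := Polynomial {polyseq :> seq R; _ : last 1 polyseq != 0}.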
To see what is wrong with your original attempt, we have to unfold it a little bit. I have included the relevant definitions here in order, expanding some notations:
Poly s := foldr cons_poly (polyC 0) s
polyC c := insubd poly_nil [:: c]
(* from eqtype.v *)
insubd {T : Type} {P : pred T} {sT : subType T P} u0 (x : T) : sT :=
odflt u0 (insub x)
insub {T : Type} {P : pred T} {sT : subType T P} (x : T) : option sT
:= if @idP (P x) is ReflectT Px then @Some sT (Sub x Px) else None
And here we find the culprit: Poly is defined in terms of insub, which in turn is defined by case analysis on idP, which is an opaque lemma! And Coq's reduction gets stuck when an opaque term gets in the way. (In case you are curious, what is going on here is that insub is testing, using idP, whether the leading coefficient of the polynomial is indeed different from zero, and, if so, using that fact to build the polynomial.)
The problem is that many definitions in ssreflect were not made to compute fully inside the logic, for two reasons. One is performance: allowing everything to reduce fully can make type checking much slower. The other is that ssreflect is tailored for convenience of reasoning, so many definitions are not the most efficient ones computationally. The CoqEAL library was developed to connect definitions with better computational behavior to ones that are easier to reason about, like the ones in ssreflect; unfortunately, I don't know if the project is still being maintained.

How to use Coq GenericMinMax to prove facts about the reals

I'm trying to prove
Theorem T20d :forall (x y:R), (0<x /\ 0<y) -> 0 < Rmin x y.
with
Lemma min_glb_lt n m p : p < n -> p < m -> p < min n m.
which is in Coq.Structures.GenericMinMax
which I imported with Require Import Coq.Structures.GenericMinMax
However, I still get the error "reference min_glb_lt not found" when I try to use it. I suspect I need to open a scope, but I don't know which one.
First of all, the GenericMinMax library defines generic structures, so you can't use them directly to solve a concrete problem. That library mostly contains functors. In other words, it provides interfaces which you need to implement to be able to use them.
In our case, we need to implement the MinMaxLogicalProperties functor (or some other functor that includes this one), because it includes the required lemma.
Several Coq standard libraries provide such implementations. Luckily for us, this has already been done for the reals in the file Rminmax.v, inside the module R -- this line specifically:
Include UsualMinMaxProperties R_as_OT RHasMinMax.
So, we can use it like so:
Require Import Reals.
Require Import Rminmax. Import R.
Local Open Scope R_scope.
Theorem T20d (x y : R) :
(0 < x /\ 0 < y) -> 0 < Rmin x y.
Proof.
intros [? ?].
now apply min_glb_lt.
Qed.
Alternatively, we could've referred to the lemma by its qualified name R.min_glb_lt -- that would've let us get rid of Import R.
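Here is the same proof written that way, as a sketch (nothing else changes):
Require Import Reals.
Require Import Rminmax.
Local Open Scope R_scope.
Theorem T20d' (x y : R) :
(0 < x /\ 0 < y) -> 0 < Rmin x y.
Proof.
intros [? ?].
now apply R.min_glb_lt.
Qed.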

What's the difference between logical (Leibniz) equality and local definition in Coq?

I am having trouble understanding the difference between an equality and a local definition. For example, when reading the documentation about the set tactic:
remember term as ident
This behaves as set ( ident := term ) in * and
using a logical (Leibniz’s) equality instead of a local definition
Indeed,
set (ca := c + a) in *. e.g. generates ca := c + a : Z in the context, while
remember (c + a ) as ca. generates Heqca : ca = c + a in the context.
In case 2, I can make use of the generated hypothesis, e.g. rewrite Heqca, while in case 1 I cannot use rewrite ca.
What's the purpose of case 1, and how does it differ from case 2 in terms of practical usage?
Also, if the difference between the two is fundamental, why is remember described as a variant of set in the documentation (8.5p1)?
You could think of set a := b + b in H as rewriting H to be:
(fun a => H[b+b/a]) (b+b)
or
let a := b + b in
H[b+b/a]
That is, it replaces all occurrences of the matched pattern b+b by a fresh variable a, which is then instantiated to the value of the pattern. In this regard, both H and the rewritten hypothesis remain equal by "conversion".
Indeed, remember is in a sense a variant of set; however, its implications are very different. In this case, remember will introduce a new proof of the equality eq_refl : b + b = b + b, then it will abstract away the left-hand part. This is convenient for having enough freedom in pattern matching, etc. Here is remember expressed in terms of more atomic tactics:
Lemma U b c : b + b = c + c.
Proof.
assert (b + b = b + b). reflexivity.
revert H.
generalize (b+b) at 1 3.
intros n H.
In addition to @ejgallego's answer.
Yes, you cannot rewrite a (local) definition, but you can unfold it:
set (ca := c + a) in *.
unfold ca.
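Here is a small self-contained sketch (using Z, as in the question's context) showing the local definition in action:
Require Import ZArith.
Open Scope Z_scope.
Goal forall c a : Z, c + a = a + c.
Proof.
intros c a.
set (ca := c + a) in *.
(* the context now contains  ca := c + a : Z,  and the goal reads  ca = a + c *)
unfold ca.
apply Z.add_comm.
Qed.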
As for the differences in their practical use -- they are quite different. For example, see this answer by @eponier. It relies on the remember tactic so that induction works as we'd like. But if we replace remember with set, it fails:
Inductive good : nat -> Prop :=
| g1 : good 1
| g3 : forall n, good n -> good (n * 3)
| g5 : forall n, good n -> good (n + 5).
Require Import Omega.
The variant with remember works:
Goal ~ good 0.
remember 0 as n.
intro contra. induction contra; try omega.
apply IHcontra; omega.
Qed.
and the variant with set doesn't (because we didn't introduce any free variables to work with):
Goal ~ good 0.
set (n := 0). intro contra.
induction contra; try omega.
Fail apply IHcontra; omega.
Abort.

`No more subgoals, but there are non-instantiated existential variables` in Coq proof language?

I was following the (incomplete) examples in Coq 8.5p1's reference manual, chapter 11, about the mathematical/declarative proof language. In the example below for iterated equalities (~= and =~), I got a warning Insufficient Justification when rewriting 4 into 2+2, and eventually got an error saying:
No more subgoals, but there are non-instantiated existential
variables:
?Goal : [x : R  H : x = 2  _eq0 : 4 = x * x  |- 2 + 2 = 4]
You can use Grab Existential Variables.
Example:
Goal forall x, x = 2 -> x + x = x * x.
Proof.
proof. Show.
let x:R.
assume H: (x = 2). Show.
have ( 4 = 4). Show.
~= (2*2). Show.
~= (x*x) by H. Show.
=~ (2+2). Show. (*Problem Here: Insufficient Justification*)
=~ H':(x + x) by H.
thus thesis by H'.
end proof.
Fail Qed.
I'm not familiar with the mathematical proof language in Coq and couldn't understand why this happens. Can someone help explain how to fix the error?
--EDIT--
@Vinz
I had these random imports before the example:
Require Import Reals.
Require Import Fourier.
Your proof would work for nat or Z, but it fails in the case of R.
From the Coq Reference Manual (v8.5):
The purpose of a declarative proof language is to take the opposite approach where intermediate states are always given by the user, but the transitions of the system are automated as much as possible.
It looks like the automation fails for 4 = 2 + 2. I don't know what kind of automation the declarative proof engine uses, but, for instance, the auto tactic is unable to prove most simple equalities over the reals, such as this one:
Open Scope R_scope.
Goal 2 + 2 = 4. auto. Fail Qed.
And, as @ejgallego points out, we can prove 2 * 2 = 4 using auto only by chance:
Open Scope R_scope.
Goal 2 * 2 = 4. auto. Qed.
(* `reflexivity.` would do here *)
However, the field tactic works like a charm. So one approach would be to suggest to the declarative proof engine that it use the field tactic:
Require Import Coq.Reals.Reals.
Open Scope R_scope.
Unset Printing Notations. (* to better understand what we prove *)
Goal forall x, x = 2 -> x + x = x * x.
Proof.
proof.
let x : R.
assume H: (x = 2).
have (4 = 4).
~= (x*x) by H.
=~ (2+2) using field. (* we're using the `field` tactic here *)
=~ H':(x + x) by H.
thus thesis by H'.
end proof.
Qed.
The problem here is that Coq's standard reals are defined in an axiomatic way.
Thus, + : R -> R -> R, *, etc. are abstract operations and will never compute. What does this mean? It means that Coq doesn't have a rule for what to do with +, contrary, for instance, to the nat case, where Coq knows that:
0 + n ~> n
S n + m ~> S (n + m)
Thus, the only way to manipulate + for the real numbers is to manually apply the corresponding axioms that characterize the operator; see:
https://coq.inria.fr/library/Coq.Reals.Rdefinitions.html
https://coq.inria.fr/library/Coq.Reals.Raxioms.html
This is what field, omega, etc. do. Even 0 + 1 = 1 is not provable by computation.
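For instance, a minimal sketch:
Require Import Reals.
Open Scope R_scope.
Eval compute in 0 + 1. (* stays as 0 + 1 : Rplus is a parameter with no reduction rule *)
Goal 0 + 1 = 1.
Proof.
(* reflexivity fails here, since the two sides are not convertible; *)
(* ring (or field) proves it by applying the ring axioms of R instead *)
ring.
Qed.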
Anton's example 2 * 2 = 4 works by chance. Actually, Coq has to parse the numeral 4 into a suitable representation built from the real operations, and it turns out that 4 is parsed as Rmult (Rplus R1 R1) (Rplus R1 R1) (to be more efficient), which is exactly the left-hand side of that equality.

Eval compute is incomplete when own decidability is used in Coq

The Eval compute command does not always evaluate to a simple expression.
Consider the code:
Require Import Coq.Lists.List.
Require Import Coq.Arith.Peano_dec.
Import ListNotations.
Inductive I : Set := a : nat -> I | b : nat -> nat -> I.
Lemma I_eq_dec : forall x y : I, {x = y}+{x <> y}.
Proof.
repeat decide equality.
Qed.
And, if I execute the following command:
Eval compute in (if (In_dec eq_nat_dec 10 [3;4;5]) then 1 else 2).
Coq tells me that the result is 2. However, when I execute the following expression:
Eval compute in (if (In_dec I_eq_dec (a 2) [(a 1);(a 2)]) then 1 else 2).
I get a long expression where the In-predicate seems to be unfolded, but no result is given.
What do I have to change to obtain the answer 1 in the last Eval compute line?
In Coq there are two terminator commands for proof scripts: Qed and Defined. The difference between them is that the former creates opaque terms, which cannot be unfolded, even by Eval compute, whereas the latter creates transparent terms, which can be unfolded as usual. Thus, you just have to put Defined in place of Qed:
Require Import Coq.Lists.List.
Require Import Coq.Arith.Peano_dec.
Import ListNotations.
Inductive I : Set := a : nat -> I | b : nat -> nat -> I.
Lemma I_eq_dec : forall x y : I, {x = y}+{x <> y}.
Proof.
repeat decide equality.
Defined.
Eval compute in (if (In_dec I_eq_dec (a 2) [(a 1);(a 2)]) then 1 else 2).
I personally find the sumbool type {A} + {B} not very nice for expressing decidable propositions, precisely because proofs and computation are too tangled together; in particular, proofs affect how terms reduce. I find it better to follow the Ssreflect style: separate proofs from computation, and relate the two via a special predicate:
Inductive reflect (P : Prop) : bool -> Set :=
| ReflectT of P : reflect P true
| ReflectF of ~ P : reflect P false.
This gives a convenient way of saying that a boolean computation returns true iff some property holds. Ssreflect provides support for conveniently switching between the computational (boolean) view and the logical view.
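As a minimal sketch of that style (using nat, which already comes with a boolean equality in Ssreflect), the membership test below is an ordinary boolean computation, and the same boolean also serves directly as a proposition:
From mathcomp Require Import all_ssreflect.
Eval compute in (10 \in [:: 3; 4; 5]). (* should reduce to false *)
Eval compute in (4 \in [:: 3; 4; 5]). (* should reduce to true *)
Goal 4 \in [:: 3; 4; 5].
Proof. by []. Qed.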
If you want to evaluate your proofs, you need to make them transparent, which you do by finishing them with the Defined command. The Qed command makes them opaque, meaning that their computational content cannot be used for reduction.
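A tiny sketch of the difference, with hypothetical lemma names:
Lemma nat_eq_dec_opaque (n m : nat) : {n = m} + {n <> m}.
Proof. decide equality. Qed.
Lemma nat_eq_dec_transparent (n m : nat) : {n = m} + {n <> m}.
Proof. decide equality. Defined.
Eval compute in (if nat_eq_dec_opaque 1 1 then true else false).
(* gets stuck on the opaque proof term *)
Eval compute in (if nat_eq_dec_transparent 1 1 then true else false).
(* = true *)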