How do I calculate the sqrt of a natural or rational number in Coq? - coq

I'm learning Coq and I'm trying to make my own Point and Line data types. I'd like to make a function that returns the length of a line, but I can't seem to find a sqrt function that will actually return a calculation. I tried using Coq.Reals.R_sqrt, but apparently that's only used for abstract math, so it won't run the calculation.
So then I tried importing Coq.Numbers.Natural.Abstract.NSqrt and Coq.Numbers.NatInt.NZSqrt, but neither put a sqrt function into the environment.
This is what I have so far...
Require Import Coq.QArith.QArith_base.
Require Import Coq.Numbers.NatInt.NZSqrt.
Require Import Coq.Numbers.Natural.Abstract.NSqrt.
Require Import Coq.ZArith.BinInt.
Inductive Point : Type :=
  point : Q -> Q -> Point.

Inductive Line : Type :=
  line : Point -> Point -> Line.

Definition line_fst (l : Line) :=
  match l with
  | line x y => x
  end.

Definition line_snd (l : Line) :=
  match l with
  | line x y => y
  end.

Definition point_fst (p : Point) :=
  match p with
  | point x y => x
  end.

Definition point_snd (p : Point) :=
  match p with
  | point x y => y
  end.

(* The reference sqrt was not found in the current environment. *)
Definition line_length (l : Line) :=
  sqrt (
    (minus (point_snd (line_fst l)) (point_fst (line_fst l)))^2
    +
    (minus (point_snd (line_snd l)) (point_fst (line_snd l)))^2
  ).

Example line_example : (
  line_length (line (point 0 0) (point 0 2)) = 2
).

If you want your square root to compute, there are two things you can do.
1. Use the CoRN library. Then you will get real number functions that actually compute. However, those real numbers will just be functions that you can ask for approximations to a certain precision, so they might not be what you are looking for. Also, I think CoRN is not terribly well maintained nowadays.
2. Use the natural number square root from the standard library that you already mentioned, and use it to calculate the square roots that you want to a certain precision yourself. You get this function by importing BinNat:
Coq < Require Import BinNat.
[Loading ML file z_syntax_plugin.cmxs ... done]
Coq < Eval compute in N.sqrt 1000.
= 31%N
: N
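For example, here is a small sketch of the second option: to approximate the square root of a rational a/b to k decimal digits, scale the numerator before taking the natural-number square root. The helper approx_sqrt is a name I made up for this example, not something from the standard library.

Require Import BinNat.
Open Scope N_scope.

(* floor (10^k * sqrt (a / b)), computed entirely in N;
   approx_sqrt is an ad-hoc helper, not part of the standard library *)
Definition approx_sqrt (a b k : N) : N :=
  N.sqrt (a * 10 ^ k * 10 ^ k / b).

Eval compute in approx_sqrt 2 1 3. (* = 1414%N, i.e. sqrt 2 ≈ 1.414 *)
Eval compute in approx_sqrt 1 2 3. (* = 707%N,  i.e. sqrt (1/2) ≈ 0.707 *)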

Related

How to solve the very simple Syntax error: 'end' expected after [branches] (in [term_match]). in Coq? (in Gallina and not ltac)

I was trying to write a very simple program to sum nats in a list (copy pasted from here):
Fixpoint sum (l : list nat) : nat :=
  match l with
  | [] => 0
  | x :: xs => x + sum xs
  end.
but my local Coq and jsCoq complain:
Syntax error: 'end' expected after [branches] (in [term_match]).
Why is this? (Note that I didn't even write this implementation myself, though my own implementation looks pretty much the same.)
I've implemented recursive functions before:
Inductive my_nat : Type :=
| O : my_nat
| S : my_nat -> my_nat.
Fixpoint add_left (n m : my_nat) : my_nat :=
  match n with
  | O => m
  | S n' => S (add_left n' m)
  end.
which doesn't complain...
I did see this question How to match a "match" expression? but it seems to address some special issue in ltac and I am not using ltac.
The location of the error is on the [], which suggests that Coq does not understand that notation. Upon finding undefined notation, the parser has no idea what to do and produces an essentially meaningless error message.
To have standard list notation be defined, you need to import it from the standard library via:
Require Import List.
Import ListNotations.
The stdlib module List contains the module ListNotations, which defines [] (and more generally [ x ; y ; .. ; z ]). List also defines the notation x :: xs.
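With those two lines in place, the asker's definition is accepted exactly as written:

Require Import List.
Import ListNotations.

(* Same definition as in the question; it parses once the list notations are in scope. *)
Fixpoint sum (l : list nat) : nat :=
  match l with
  | [] => 0
  | x :: xs => x + sum xs
  end.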
When picking up excerpts from a development, you also have to find the syntax-modifying commands that affect those excerpts: module imports, scope openings, argument declarations (for implicits), notations, and coercions. In the current case, the file is actually provided by the author of the exercises through this pointer.

Coq: About "%" and "mod" as a notation symbol

I'm trying to define a notation for modulo equivalence relation:
Inductive mod_equiv : nat -> nat -> nat -> Prop :=
| mod_intro_same : forall m n, mod_equiv m n n
| mod_intro_plus_l : forall m n1 n2, mod_equiv m n1 n2 -> mod_equiv m (m + n1) n2
| mod_intro_plus_r : forall m n1 n2, mod_equiv m n1 n2 -> mod_equiv m n1 (m + n2).
(* 1 *) Notation "x == y 'mod' z" := (mod_equiv z x y) (at level 70).
(* 2 *) Notation "x == y % z" := (mod_equiv z x y) (at level 70).
(* 3 *) Notation "x == y %% z" := (mod_equiv z x y) (at level 70).
All three notations are accepted by Coq. However, I can't use the notation to state a theorem in some cases:
(* 1 *)
Theorem mod_equiv_sym : forall (m n p : nat), n == p mod m -> p == n mod m.
(* Works fine as-is, but gives error if `Arith` is imported before:
Syntax error: 'mod' expected after [constr:operconstr level 200] (in [constr:operconstr]).
*)
(*************************************)
(* 2 *)
Theorem mod_equiv_sym : forall (m n p : nat), n == p % m -> p == n % m.
(* Gives the following error:
Syntax error: '%' expected after [constr:operconstr level 200] (in [constr:operconstr]).
*)
(*************************************)
(* 3 *)
Theorem mod_equiv_sym : forall (m n p : nat), n == p %% m -> p == n %% m.
(* Works just fine. *)
The notation mod is defined under both Coq.Init.Nat and Coq.Arith.PeanoNat at top level. Why is the new notation x == y 'mod' z fine in one environment but not in the other?
The notation % seems to conflict with the built-in % notation, yet the Coq parser gives almost the same error message as the mod case, and the message isn't very helpful in either case. Is this intended behavior? IMO, if the parser can't understand a notation inside such a trivial context, the notation shouldn't have been accepted in the first place.
Your first question has an easy answer. The initial state of Coq is (in part) determined by Coq.Init.Prelude, which (as of this answer) contains the line
Require Coq.Init.Nat.
This is to say, Coq.Init.Nat isn't imported, only made available with Require. Notations are only active if the module containing them is imported. If the notation is declared local (Local Notation ...) then even importing the module doesn't activate the notation.
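Here is a small self-contained illustration of that rule; the module M and the +++ notation are made up for the example:

Module M.
Notation "x +++ y" := (x + y) (at level 50, left associativity).
End M.

(* At this point the notation is not active: writing
   Check (1 +++ 2). here would be a syntax error. *)

Import M.
Check (1 +++ 2). (* now parses, and means 1 + 2 *)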
The second question is trickier and delves into how Coq parses notations. Let's start with an example that works. In Coq.Init.Notations (which is actually imported in Coq.Init.Prelude), the notation "x <= y < z" is reserved.
Reserved Notation "x <= y < z" (at level 70, y at next level).
In Coq.Init.Peano (which is also imported), a meaning is given to the notation. We won't really worry about that part, since we're mostly concerned with parsing.
To see what effect reserving a notation has, you can use the vernacular command Print Grammar constr.. This will display a long list of everything that goes into parsing a constr (a basic unit of Coq's grammar). The entry for this notation is found a ways down the list.
| "70" RIGHTA
[ SELF; "?="; NEXT
[...]
| SELF; "<="; NEXT; "<"; NEXT
[...]
| SELF; "="; NEXT ]
We see that the notation is right associative (RIGHTA)[1] and lives at level 70. We also see that the three variables in the notation, x, y and z, are parsed at level 70 (SELF), level 71 (NEXT) and level 71 (NEXT) respectively.[2]
During parsing, Coq starts at level 0 and looks at the next token. Until there's a token that should be parsed at the current level, the level is increased. So notations with lower levels take precedence over those with higher level. (This is conceptually how it works - it's probably optimized a bit).
When a complex notation is found, such as after "5 <= ", the parser remembers the grammar of that notation[3]: SELF; "<="; NEXT; "<"; NEXT. After "5 <=", we parse y at level 71, meaning that if nothing works at less than level 71, we stop trying to parse y and move on.
After that, the next token has to be "<", then we parse z at level 71 if it is.
The great thing about levels is that they allow interaction with other notations without needing parentheses. For example, in the code 1 * 2 <= 3 + 4 < 5 * 6, we don't need parentheses because * and + are declared at lower levels (40 and 50 respectively). So when we're parsing y (at level 71), we're able to parse all of 3 + 4 before moving on to < z. Additionally, when we parse z, we can capture 5 * 6 because * parses at a lower level than the parsing level for z.
Alright, now that we understand that, we can figure out what's going on in your case.
When Arith (or Nat) is imported, mod becomes a notation. Specifically, we have a left associative notation at level 40 whose grammar is SELF; "mod"; NEXT (use Print Grammar constr. to check). When you define your mod notation, the entry is right associative at level 70 with grammar SELF; "=="; constr:operconstr LEVEL "200"; "mod"; NEXT. The middle section just means that y is parsed at level 200 (as a constr - just like everything else we've talked about).
Thus, when parsing n == p mod m, we parse n == fine, then start parsing y at level 200. Since Arith's mod is only at level 40, that's how we'll parse p mod m. But then our x == y mod z notation is left hanging. We're at the end of the statement and mod z still hasn't been parsed.
Syntax error: 'mod' expected after [constr:operconstr level 200] (in [constr:operconstr]).
(does the error make more sense now?)
If you really want to use your mod notation while still using Arith's mod notation, you'll need to parse y at a lower level. Since x mod y is at level 40, we could make y be level 39 with
Notation "x == y 'mod' z" := (mod_equiv z x y) (at level 70, y at level 39).
Since arithmetic operations are at levels 40 and above, that means that we'd need to write 5 == (3 * 4) mod 7 using parentheses.
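Putting that suggestion together with the asker's definition gives a sketch of a setup that parses even with Arith imported (the Check commands are just parse and type tests):

Require Import Arith.

Inductive mod_equiv : nat -> nat -> nat -> Prop :=
| mod_intro_same : forall m n, mod_equiv m n n
| mod_intro_plus_l : forall m n1 n2, mod_equiv m n1 n2 -> mod_equiv m (m + n1) n2
| mod_intro_plus_r : forall m n1 n2, mod_equiv m n1 n2 -> mod_equiv m n1 (m + n2).

Notation "x == y 'mod' z" := (mod_equiv z x y) (at level 70, y at level 39).

Check (forall (m n p : nat), n == p mod m -> p == n mod m).
Check (5 == (3 * 4) mod 7). (* arithmetic inside y needs parentheses *)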
For your "%" notation, it's going to be difficult. "%" is normally used for scope delimitation (e.g. (x + y)%nat) and binds very tightly at level 1. You could make y parse at level 0, but that means that no notations at all can be used for y without parentheses. If that's acceptable, go ahead.
Since "%%" doesn't clash with anything (in the standard library), you're free to use it here at whatever level is convenient. You may want to make y parse at a lower level (y at next level is pretty standard), but it's not necessary.
[1] Right associativity is the default. Apparently Coq's parser doesn't have a "no associativity" option, so even if you explicitly say "no associativity", it's still registered as right associative. This doesn't often cause trouble in practice.
[2] This is why the notation is reserved with "y at the next level". By default, variables in the middle of the notation are parsed at level 200, which can be seen by reserving a similar notation like Reserved Notation "x ^ y ^ z" (at level 70). and using Print Grammar constr. to see the parsing levels. As we'll see, this is what happens with x == y mod z.
[3] What happens if more than one notation starts with "5 <="? The one with the lower level will obviously be taken, but if they have the same level, it tries both and backtracks if it doesn't parse. However, if one notation finishes, it doesn't backtrack even if that choice causes trouble later. I'm not sure of the exact rules, but I suspect it depends on which notation is declared first.

Use condition in a ssreflect finset comprehension

I have a function f that takes an x of fintype A and a proof of P x, and returns an element of fintype B. I want to return the finset of f x for all x satisfying P, which could be written like this:
From mathcomp
Require Import ssreflect ssrbool fintype finset.
Variable A B : finType.
Variable P : pred A.
Variable f : forall x : A, P x -> B.
Definition myset := [set (@f x _) | x : A & P x].
However, this fails as Coq does not fill the placeholder with the information on the right, and I do not know how to provide it explicitly.
I did not find an indication of how to do that, either in the ssreflect code or in the book. I realize that I could probably do it by using a sigma type {x : A ; P x} and modifying f a bit, but it feels more convoluted than it should. Is there a simple / readable way to do that?
Actually the easiest way to make things work in the way you propose is to use a sigma type:
Definition myset := [set f (tagged x) | x : { x : A | P x }].
But indeed, f is a bit of a strange function; I guess we'll need to know more details about your use case to understand where you are going.

Finding a well founded relation to prove termination of a function that stops decreasing at some point

Suppose we have:
Require Import ZArith Program.
Program Fixpoint range (from to : Z) {measure f R} : list Z :=
  if from <? to
  then from :: range (from + 1) to
  else [].
I'd like to convince Coq that this terminates - I tried measuring the size of the range as abs (to - from). However, this doesn't quite work, because once the range is empty (that is, from >= to), the measure simply starts increasing again.
I've also tried measuring with:
Definition get_range (from to : Z) : option nat :=
  let range := (to - from) in
  if (range <? 0)
  then None
  else Some (Z_to_nat (Z.abs range) (Z.abs_nonneg range)).
using my custom:
Definition preceeds_eq (l r : option nat) : Prop :=
  match l, r with
  | None, None => False
  | None, (Some _) => True
  | (Some _), None => False
  | (Some x), (Some y) => x < y
  end.
and the cast:
Definition Z_to_nat (z : Z) (p : 0 <= z) : nat.
Proof.
  dependent destruction z.
  - exact (0%nat).
  - exact (Pos.to_nat p).
  - assert (Z.neg p < 0) by apply Zlt_neg_0.
    contradiction.
Defined.
But this runs into the issue that I cannot show that None < None, and making preceeds_eq reflexive makes the relation not well founded, which brings me back to the same problem.
Is there a way to convince Coq that range terminates? Is my approach completely broken?
If you map the length of your interval to nat using the Z.abs_nat or Z.to_nat functions, and use a function that decides whether the range is non-empty with a more informative result type (Z_lt_dec), then the solution becomes very simple:
Require Import ZArith Program.
Program Fixpoint range (from to : Z) {measure (Z.abs_nat (to - from))} : list Z :=
  if Z_lt_dec from to
  then from :: range (from + 1) to
  else [].
Next Obligation. apply Zabs_nat_lt; auto with zarith. Qed.
Using Z_lt_dec instead of its boolean counterpart gives you the benefit of propagating the proof of from < to into the context, which lets you deal with the proof obligation easily.
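To make that concrete, the single generated obligation boils down to the arithmetic fact below; this is a sketch, proved standalone (the lemma name is mine, and I use lia where the answer above uses auto with zarith):

Require Import ZArith Lia.
Open Scope Z_scope.

(* With from < to in context (which Z_lt_dec provides), the measure
   strictly decreases on the recursive call. *)
Lemma measure_decreases (from to : Z) :
  from < to -> (Z.abs_nat (to - (from + 1)) < Z.abs_nat (to - from))%nat.
Proof.
  intros H.
  apply Zabs_nat_lt. (* reduces the goal to 0 <= to - (from + 1) < to - from *)
  lia.
Qed.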

Extracting integer from int_Ring type in mathcomp's ssralg

A bit of setup for the question: the notation `_i is defined to be the i-th component of a sequence, but is also meant to be the i-th coefficient of a polynomial. The following code outputs Negz 2 : int_ZmodType:
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Eval compute in my_seq`_1.
The type of my_seq is seq int. The type int has constructors Posz and Negz.
The header of
https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/poly.v
informs us that Poly s is a polynomial with coefficients from the sequence s. It also says that p`_i is the i-th coefficient of a polynomial p. I expected the following code to output Negz 2:
Definition my_polynomial := Poly my_seq.
Eval compute in my_polynomial`_1.
The resulting term is not Negz 2, though it does have type int_Ring. There is a sequence constructor polyseq for polynomials. Indeed, the type of polyseq my_polynomial is seq int_Ring. However, doing Eval compute in (polyseq my_polynomial)`_1. gives the same mess.
In transitioning from the concrete type int to int_Ring, has the value of the integer been lost? Or, is there a way to recover the value of an int from an int_Ring? The way int_Ring is packaged, it doesn't look like it's possible, because the constructors don't reference elements. However, the same can be said of int_ZmodType. For reference, those types are defined in
https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/ssralg.v
This is not completely answering the question, but I managed to prove that the coefficient is indeed Negz 2. I give the proof here for the record. Note that I am not familiar at all with ssreflect, so there may be better and more natural ways to do this.
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Eval compute in my_seq`_1.
Definition my_polynomial := Poly my_seq.
Example test : my_polynomial `_1 = Negz 2.
Proof.
cbn.
rewrite 2!polyseq_cons. cbn.
rewrite 2!size_polyC. cbn.
rewrite polyseqC. cbn. reflexivity.
Qed.
EDIT: As explained in the comments below, there exist simpler proofs of this fact.
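For example, one shorter proof (a sketch, relying on mathcomp's coef_Poly lemma, which rewrites (Poly s)`_i to s`_i) is:

Example test' : my_polynomial`_1 = Negz 2.
Proof. by rewrite coef_Poly. Qed.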
Here's a version of your code that works:
From mathcomp Require Import all_ssreflect.
From mathcomp Require Import all_algebra.
Open Scope ring_scope.
Definition my_seq := [:: Posz 4; Negz 2].
Definition my_poly := @Polynomial _ my_seq erefl.
Compute my_poly`_1.
Instead of using the simpler Poly wrapper function defined in the library, this version calls the constructor of polynomial directly. If you look at the definition of this type, you'll see that a polynomial is simply a record containing the sequence of coefficients of the polynomial, plus a proof of a boolean equality asserting that the last element of this sequence (the leading coefficient) is not zero. (The last argument in the expression above, erefl, is a proof of true = true, which Coq accepts as a proof of (last 1 polyseq != 0) = true by the rules of computation.) You can check manually that there is nothing preventing the expression we're computing from reducing, so Coq is able to figure out the answer.
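For reference, the record in poly.v looks essentially like the sketch below; I restate it here with fresh names as a standalone illustration (the real definition lives in a section with more infrastructure attached):

From mathcomp Require Import all_ssreflect all_algebra.
Open Scope ring_scope.

Section PolyRecordSketch.
Variable R : ringType.

(* A sequence of coefficients together with a proof (a boolean equation)
   that the last one, i.e. the leading coefficient, is nonzero. *)
Record poly_sketch : Type :=
  PolySketch { sketch_seq :> seq R; _ : last 1 sketch_seq != 0 }.

End PolyRecordSketch.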
To see what is wrong with your original attempt, we have to unfold it a little bit. I have included the relevant definitions here in order, expanding some notations:
Poly s := foldr cons_poly (polyC 0) s
polyC c := insubd poly_nil [:: c]
(* from eqtype.v *)
insubd {T : Type} {P : pred T} {sT : subType T P} u0 (x : T) : sT :=
  odflt u0 (insub x)
insub {T : Type} {P : pred T} {sT : subType T P} (x : T) : option sT :=
  if @idP (P x) is ReflectT Px then @Some sT (Sub x Px) else None
And here we find the culprit: Poly is defined in terms of insub, which in turn is defined by case analysis on idP, which is an opaque lemma! And Coq's reduction gets stuck when an opaque term gets in the way. (In case you are curious, what is going on here is that insub is testing, using idP, whether the leading coefficient of the polynomial is indeed different from zero, and, if so, using that fact to build the polynomial.)
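To see the general phenomenon in isolation, here is a minimal illustration that is independent of ssreflect (the names my_le_dec and clamp are made up for the example):

Require Import Arith.

(* An opaque decision procedure, playing the role of the opaque idP. *)
Lemma my_le_dec (n m : nat) : {n <= m} + {~ n <= m}.
Proof. exact (le_dec n m). Qed.

Definition clamp (n m : nat) : nat :=
  if my_le_dec n m then n else m.

(* Stuck: compute cannot reduce the opaque my_le_dec, so no numeral comes out. *)
Eval compute in clamp 1 2.

(* The transparent le_dec from the standard library reduces fine: the result is 1. *)
Eval compute in (if le_dec 1 2 then 1 else 2).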
The problem is that many definitions in ssreflect were not made to compute fully inside the logic. There are two reasons for this. One is performance: allowing everything to fully reduce can make type checking much slower. The other is that ssreflect is tailored for convenience of reasoning, so many definitions are not the most efficient ones. The CoqEAL library was developed to connect definitions with better computational behavior to ones that are easier to reason about, like those in ssreflect; unfortunately, I don't know whether the project is still being maintained.