I have seen a Coq notation definition for "evaluates to" as follows:
Notation "e '||' n" := (aevalR e n) : type_scope.
I am trying to change the symbol '||' to something else, since || is often used for logical or. However, I always get an error:
A left-recursive notation must have an explicit level
For example, this happens when I change '||' to:
'\|/', '\||/', '|_|', '|.|', '|v|', or '|_'.
Is there something special about || here? And how should I fix it to make these other notations work (if possible)?
If I am right, when you overload a notation, Coq reuses the properties of the first definition. The notation _ '||' _ already has a level, so Coq uses this level for your definition.
But with new symbols, Coq cannot do that, and you have to specify the level:
Notation "e '|.|' n" := (aevalR e n) (at level 50) : type_scope.
For already defined notations, this is even stronger than what I wrote above. You cannot redefine the level of a notation. Try for example:
Notation "e '||' n" := (aevalR e n) (at level 20) : type_scope.
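To make this concrete, here is a minimal sketch with a hypothetical toy evaluation relation standing in for aevalR, showing both the failure with a fresh symbol and the fix with an explicit level:

```coq
(* Hypothetical setup: a toy expression type and evaluation relation. *)
Inductive aexp : Type := ANum (n : nat).
Inductive aevalR : aexp -> nat -> Prop :=
  | E_ANum n : aevalR (ANum n) n.

(* Fails: Coq has never seen '|.|' and cannot infer a level for it. *)
Fail Notation "e '|.|' n" := (aevalR e n) : type_scope.

(* Works once an explicit level is given. *)
Notation "e '|.|' n" := (aevalR e n) (at level 50) : type_scope.
Check ((ANum 3 |.| 3)%type).
```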
Related
I'm trying to overload the notation for +. The following doesn't work:
Definition my_add (n m :nat):= n + m.
Fail Notation "x + y":= (my_add x y) (at level 50, y at next level).
Fails with the message Notation "_ + _" is already defined at level 50 with arguments constr at level 50, constr at next level while it is now required to be at level 50 with arguments constr at next level, constr at next level.
Which suggests that I need to bind x at level 50.
Definition my_add (n m :nat):= n + m.
Fail Notation "x + y":= (my_add x y) (at level 50, x at level 50, y at next level).
Fails with The level of the leftmost non-terminal cannot be changed..
I'm sure I have defined this notation before, so perhaps this is a recent change to Coq, or am I forgetting something obvious?
Symbols in notations must have unique precedences and associativity. So for a preexisting notation, no annotation is necessary since it's already set:
Notation "x + y":= (my_add x y).
For some reason you can set associativity alone. You can also set both level and associativity. In any case, they have to match the preexisting values, if any.
Notation "x + y":= (my_add x y) (left associativity).
Notation "x + y":= (my_add x y) (at level 50, left associativity).
You can also use notation scopes in order to use the same symbol with different meanings, with various bells and whistles to control how they are set. See the manual.
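A minimal sketch of the scope approach (scope and delimiting-key names are made up here): the same + symbol means my_add inside the new scope and the standard addition elsewhere.

```coq
Definition my_add (n m : nat) := n + m.

Declare Scope my_scope.
Delimit Scope my_scope with my.
(* No level annotation needed: "+" already has one, and it is reused. *)
Notation "x + y" := (my_add x y) : my_scope.

Check (1 + 2)%my.   (* interpreted as my_add 1 2 *)
Check (1 + 2)%nat.  (* the standard nat addition *)
```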
I would like to create infix notations for (overloaded) operations. I apply the - to my knowledge - standard two-step approach in Coq:
Typeclass used for actual operator overloading (e.g. overload add
operation)
Notation scope used for definition of operator symbol
(e.g. infix notation '+')
This approach seems to work fine. E.g. overloading of the '-' symbol (which is used at least once in the predefined notation scope nat_scope) poses no problems. However, the '+' and '*' symbols (also used in the predefined notation scope nat_scope) lead to errors in certain circumstances. Is there anything special about the '+' and '*' symbols in Coq that has to be considered when adding them to a new notation scope?
Example
Let's model classical sets as predicates, i.e. as functions of type A->Prop:
Definition pred1 (a:nat) := a=1. (* pred1 corresponds to {1} *)
Definition pred2 (a:nat) := a=1 \/ a=2. (* pred2 corresponds to {1,2} *)
Union (addition) is modelled as the overloaded operation pred_add via the typeclass PredAdd and corresponding typeclass instances. For the sake of simplicity only the typeclass instance add_elem (addition of elements to sets) is provided:
Class PredAdd (X Y Z: Type) := {pred_add: X->Y->Z->Prop}.
Instance add_elem {A:Type}: PredAdd (A->Prop) (A) (A) := {
pred_add (P:A->Prop) (a:A) (x:A) := P x \/ x=a
}.
The infix notation '+' is used for the operation pred_add:
Declare Scope mypred_scope.
Notation "x + y" := (pred_add x y) (at level 50, left associativity): mypred_scope.
Open Scope mypred_scope.
Now, let's use the test case {1} + 2 = {1,2} to see what works and what does not work.
Operator overloading and infix notation '+' seem to work fine (at first glance):
Example ex1: (pred_add pred1 2) 1. unfold pred_add, pred1; simpl; auto. Qed.
Example ex2: forall (p1 p2:nat->Prop) (x:nat),
p1 = pred1 + 2 -> p2 = pred2 -> p1 x=p2 x.
unfold pred_add, pred1, pred2; intros; subst; auto.
Qed.
However, the application of notation '+' fails in certain cases.
E.g. in the following case:
Fail Example ex2: (pred1 + 2) 2.
we get the error message:
The command has indeed failed with message: The term "pred1" has type
"nat -> Prop" while it is expected to have type "Type".
Interestingly, the outcome seems to depend on the particular choice of the notation symbol.
The same example works fine with e.g. the symbol '-':
Notation "x - y" := (pred_add x y) (at level 50, left associativity): mypred_scope.
Example ex3: (pred1 - 2) 2. unfold pred_add, pred1; simpl; auto. Qed.
but fails with the notation symbol '*':
Notation "x * y" := (pred_add x y) (at level 40, left associativity): mypred_scope.
Fail Example ex4: (pred1 * 2) 2.
The command has indeed failed with message: The term "pred1" has type
"nat -> Prop" while it is expected to have type "Type".
(The symbols '-' and '*' may be a confusing choice for an add operation. But this lets us keep the example short and simple.)
Platform
Tested on Coq version: 8.15.2
+ and * are also overloaded as sum and product types in type_scope, which is automatically open on the right of :, taking precedence over the open mypred_scope. This is why the error messages say that a Type is expected. Hence on the right of : you have to reopen mypred_scope.
Delimit Scope mypred_scope with mypred.
Example ex2: (pred1 + 2)%mypred 2.
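Putting it together, a sketch of the earlier failing example with the delimiting key in place (using a fresh name to avoid clashing with the ex2 already proved above), closed with the same proof script the other examples use:

```coq
Delimit Scope mypred_scope with mypred.

Example ex2': (pred1 + 2)%mypred 2.
Proof. unfold pred_add, pred1; simpl; auto. Qed.
```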
I have trouble understanding the (point of the) gauntlet one has to pass to bypass the uniform inheritance condition (UIC). Per the instruction
Let /.../ f: forall (x₁:T₁)..(xₖ:Tₖ)(y:C u₁..uₙ), D v₁..vₘ be a
function which does not verify the uniform inheritance condition. To
declare f as coercion, one has first to declare a subclass C' of C
/.../
In the code below, f is such a function:
Parameter C: nat -> Type.
Parameter D: nat -> Prop.
Parameter f: forall {x y}(z:C x), D y.
Parameter f':> forall {x y}(z:C x), D y. (*violates uic*)
Print Coercions. (* #f' *)
Yet I do not have to do anything except putting :> to declare it as a coercion. Maybe the gauntlet will somehow help to avoid breaking UIC? Not so:
Definition C' := fun x => C x.
Fail Definition Id_C_f := fun x d (y: C' x) => (y: C d). (*attempt to define Id_C_f as in the manual*)
Identity Coercion Id_C_f: C' >-> C.
Fail Coercion f: C' >-> D. (*Cannot recognize C' as a source class of f*)
Coercion f'' {x y}(z:C' x): D y := f z. (*violates uic*)
Print Coercions. (* #f' #f'' Id_C_f *)
The question: What am I missing here?
I have trouble understanding the (point of the) gauntlet one has to pass to bypass the uniform inheritance condition (UIC).
Intuitively, the uniform inheritance condition says (roughly) "it's syntactically possible to determine every argument to the coercion function just from the type of the source argument".
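To make the condition concrete, here is a sketch (with fresh, hypothetical names so as not to clash with the C, D, f of the question) of a function that does satisfy the UIC and is accepted as a coercion without any warning:

```coq
Parameter T : nat -> Type.
Parameter U : nat -> Prop.

(* The only extra argument, x, appears as an argument of the source
   class T, so it is inferable from the type of z: the UIC holds. *)
Parameter g : forall x, T x -> U x.
Coercion g : T >-> U.
```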
The developer that added coercions found it easier (I presume) to write the code implementing coercions if the uniform inheritance condition is assumed. I'm sure that a pull request relaxing this constraint and correctly implementing more general coercions would be welcomed!
That said, note that declaring a coercion that violates the UIC produces a warning message, not an error: the coercion is still added to the table of coercions. Depending on your version of Coq, the coercion might simply never trigger; or you might get an error at type-inference time, when the code applying the coercion builds an ill-typed term because it assumes the UIC holds when it actually doesn't; or, in older versions of Coq, you can get anomalies (see, e.g., bug reports #4114, #4507, #4635, #3373, and #2828).
That said, here is an example where Identity Coercions are useful:
Require Import Coq.PArith.PArith. (* positive *)
Require Import Coq.FSets.FMapPositive.
Definition lookup {A} (map : PositiveMap.t A) (idx : positive) : option A
:= PositiveMap.find idx map.
(* allows us to apply maps as if they were functions *)
Coercion lookup : PositiveMap.t >-> Funclass.
Definition nat_tree := PositiveMap.t nat.
Axiom mymap1 : PositiveMap.t nat.
Axiom mymap2 : nat_tree.
Local Open Scope positive_scope. (* let 1 mean 1:positive *)
Check mymap1 1. (* mymap1 1 : option nat *)
Fail Check mymap2 1.
(* The command has indeed failed with message:
Illegal application (Non-functional construction):
The expression "mymap2" of type "nat_tree"
cannot be applied to the term
"1" : "positive" *)
Identity Coercion Id_nat_tree : nat_tree >-> PositiveMap.t.
Check mymap2 1. (* mymap2 1 : option nat *)
Basically, in the extremely limited case where you have an identifier which would be recognized as the source of an existing coercion if you unfolded its type a bit, you can use Identity Coercion to do that. (You can also do it by defining a copy of your existing coercion with a different type signature, and declaring that a coercion too. But then if you have some lemmas that mention one coercion, and some lemmas that mention the other, rewrite will have issues.)
There is one other use case for Identity Coercions, which is that, when your source is not an inductive type, you can use them for folding and not just unfolding identifiers, by playing tricks with Modules and Module Types; see this comment on #3115 for an example.
In general, though, there isn't a way that I know of to bypass the uniform inheritance condition.
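For comparison, the "copy of the coercion" workaround mentioned above would look something like this sketch, used instead of the Identity Coercion (the name lookup_nat_tree is made up):

```coq
(* A second, monomorphic copy of lookup whose source class is
   syntactically nat_tree, declared as a coercion in its own right. *)
Definition lookup_nat_tree (map : nat_tree) (idx : positive) : option nat
  := PositiveMap.find idx map.
Coercion lookup_nat_tree : nat_tree >-> Funclass.

Check mymap2 1.  (* now elaborates via lookup_nat_tree *)
```

As the answer notes, the price of this approach is that lemmas stated about lookup and lemmas stated about lookup_nat_tree mention different constants, which gets in the way of rewrite.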
In the following snippet of Coq code (cut down from a real example), I'm trying to declare the first argument to exponent_valid as implicit:
Require Import ZArith.
Open Scope Z.
Record float_format : Set := mk_float_format {
minimum_exponent : Z
}.
Record float (fmt : float_format) : Set := mk_float {
exponent : Z;
exponent_valid : minimum_exponent fmt <= exponent
}.
Arguments exponent_valid {fmt} _.
As I understand it, the exponent_valid function takes two arguments: one of type float_format and one of type float, and the first can be inferred. However, compiling the above snippet fails with the following error message:
File "/Users/mdickinson/Desktop/Coq/floats/bug.v", line 13, characters 0-33:
Error: The following arguments are not declared: _.
And indeed, changing the Arguments declaration to:
Arguments exponent_valid {fmt} _ _.
makes the error message go away.
That's okay; I'm a relative newcomer to Coq and I can well believe that I've overlooked something. But now for the bit that's really confusing me: if I replace the <= in the definition of exponent_valid with a <, the code compiles without error!
I have two questions:
Why do I need an extra _ in the first case ?
Why does replacing <= with < make a difference to the number of parameters expected by exponent_valid ?
In case it's relevant, I'm working with Coq 8.4pl5.
exponent_valid has type
forall (fmt : float_format) (f : float fmt), minimum_exponent fmt <= exponent fmt f.
Without notations it's
forall (fmt : float_format) (f : float fmt), Z.le (minimum_exponent fmt) (exponent fmt f).
Z.le is defined as
= fun x y : Z => not (@eq comparison (Z.compare x y) Gt).
not is defined as
= fun A : Prop => A -> False.
And so exponent_valid's type is convertible to
forall (fmt : float_format) (f : float fmt),
(minimum_exponent fmt ?= exponent fmt f) = Gt -> False,
which means the function can take up to three arguments.
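This convertibility claim can be checked directly: since a <= b unfolds to (a ?= b) = Gt -> False, a proof of it can be applied to one more argument. A sketch:

```coq
Require Import ZArith.
Open Scope Z.

(* H : a <= b is definitionally (a ?= b) = Gt -> False, so H E : False. *)
Check (fun (a b : Z) (H : a <= b) (E : (a ?= b) = Gt) => H E).
```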
But, I guess it's debatable whether the Arguments command should take convertibility into account or even if it needs to be supplied information on all arguments to a function. Maybe users should just be allowed to drop any trailing underscores.
Your understanding is correct, and this looks like a (pretty weird) bug to me. I just filed a report on the bug tracker.
Edit: Ah, flockshade's observation below went by completely unnoticed while I was looking at this stuff. It does make sense for it to have three arguments after all!
Given I have a very simple definition for code generation. It is only defined for certain cases and throws a runtime exception otherwise.
definition "blubb a = (if P a then True else undefined)"
Now I want to show blubb correct. The case in which the exception is thrown should be ignored (from my point of view, not the mathematical point of view). However, I end up with a subgoal that assumes that some arbitrary value X is undefined. The following lemma is more or less equivalent to the subgoal. I want to show False as I want to ignore the case in which the exception is thrown (i.e., undefined is returned).
lemma "X = undefined ⟹ False"
This is not provable.
try
Nitpick found a counterexample for card 'a = 1:
Free variable:
X = a1
What is the best way to show correctness of functions that might throw exceptions or deal with undefined?
This relates to this question.
undefined is a constant in Isabelle about which you know nothing. In particular, you cannot in general prove that X ≠ undefined.
If you want to write functions that are only valid for certain inputs, you might consider using the 'a option type, as follows:
definition "blubb a ≡ (if P a then Some True else None)"
and then in your proofs assume that blubb a is defined as follows:
lemma "∃x. blubb a = Some x ⟹ Q (blubb a)"
...
or simply:
lemma "a ∈ dom blubb ⟹ Q (blubb a)"
...
The value of blubb a can then be extracted with the (blubb a), where the is the function that maps Some x to x.
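A minimal sketch of this style, with a concrete placeholder predicate standing in for P (the one-line proof is the obvious unfolding; the exact method invocation may need adjustment in your theory):

```isabelle
definition blubb :: "nat ⇒ bool option" where
  "blubb a ≡ (if a > 0 then Some True else None)"

(* On the domain of blubb, its value can be recovered with 'the'. *)
lemma "a ∈ dom blubb ⟹ the (blubb a) = True"
  by (auto simp: blubb_def dom_def)
```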