Exact value of the derivative in Coq

I want to represent the exact value of the derivative.
I can calculate an approximation like this.
Require Import Coq.Reals.Reals.
Open Scope R_scope.
Definition QuadraticFunction (x:R) := x^2.
Definition differentiation (x:R)(I:R -> R):=
let h := 0.000000001 in
((I (x+h)) - (I x)) / h.
But we cannot compute the exact value of the derivative on a computer.
For that reason, I want to represent the exact value with an inductive type or something similar.
I know about D_in from Reals.Rderiv, which returns a Prop.
I need your help. Thank you.

I need to make four remarks:
You should look at Coquelicot, an extension library on top of Reals that has a nicer treatment of derivatives.
There is no inductive type involved in the presentation of real numbers. In fact, it is part of theoretical folklore that real numbers cannot be represented as an inductive type, in the sense that we usually mean. In an inductive type, you can usually compare two elements by a finite computation. In real numbers, such a comparison faces the difficulty that some numbers are defined by a process of infinite refinement. One of the foundations of the real numbers is that the set is complete, meaning that every Cauchy sequence has a limit. This is often used as a way to define new real numbers.
What does it mean to calculate a number? How do you calculate PI (the ratio of a circle's circumference to its diameter)? You cannot return 3.14, because that is not the exact value. So you need to keep PI as the result. But why would PI be better than 4 * atan(1), or lim(4 - 4/3 + 4/5 - 4/7 ...)? So you do not calculate real numbers the way you would with a pocket calculator, because you need to keep the precision. The best you can do is to return an exact representation as a rational value when the real number is rational, an "understandable symbolic expression", or an interval approximation. But interval approximations are not exact, and "understandable symbolic expression" is an ambiguous specification: how do you choose which expression is most understandable?
There is no function that takes an arbitrary function and returns its derivative at a point as a real number, because we have to take into account that some functions are not derivable everywhere. The Reals library does have a function that makes it possible to talk about the value of the derivative of a derivable function. It is called derive.
Here is a script that does the whole process.
Require Import Coq.Reals.Reals.
Open Scope R_scope.
Definition QuadraticFunction (x:R) := x^2.
Lemma derivable_qf : derivable QuadraticFunction.
Proof.
now repeat apply derivable_mult;
(apply derivable_id || apply derivable_const).
Qed.
Definition QuadraticFunctionDerivative :=
derive QuadraticFunction derivable_qf.
Now you have a name for the derivative function, and you can even show that it is equal to another simple function. But whether this other simple function is the result of calculating the derivative is subjective. Here is an example using just the Reals library; using Coquelicot would give a much more concise script, because derivative computation can be automated (interested readers should also look at the answer by @larsr to the same question).
Lemma QuadraticFunctionDerivativeSimple (x : R) :
QuadraticFunctionDerivative x = 2 * x.
Proof.
unfold QuadraticFunctionDerivative, derive, QuadraticFunction; simpl.
rewrite derive_pt_eq.
replace (2 * x) with (1 * (x * 1) + x * (1 * 1 + x * 0)) by ring.
apply (derivable_pt_lim_mult (fun x => x) (fun x => x * 1)).
apply derivable_pt_lim_id.
apply (derivable_pt_lim_mult (fun x => x) (fun x => 1)).
apply derivable_pt_lim_id.
apply derivable_pt_lim_const.
Qed.
This is probably not the best way to solve the problem, but it is the one I came up with after thinking about the problem for a few minutes.
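As a small sanity check (my addition, not part of the original answer), the lemma just proved can be used to evaluate the derivative at a concrete point; the name QFD_at_3 is made up:
Example QFD_at_3 : QuadraticFunctionDerivative 3 = 6.
Proof.
rewrite QuadraticFunctionDerivativeSimple.
ring.
Qed.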

I recommend @Yves' thoughtful answer, and also want to recommend Coquelicot because of its very readable formalisation of Real Analysis.
Coquelicot has a theorem for the derivative of (f x) ^ n, and in your case f = id (the identity function) and n = 2, so using Coquelicot's theorem, you could prove your lemma like this:
From Coquelicot Require Import Coquelicot.
Require Import Reals.
Open Scope R.
Goal forall x, is_derive (fun x => x^2) x (2*x).
intros x.
evar (e:R). replace (2*x) with e.
apply is_derive_pow.
apply is_derive_id.
unfold e, one. simpl. ring.
Qed.
Coquelicot separates the proof that the derivative exists (is_derive) from a function (Derive) that "computes" the derivative, and has a theorem showing that Derive gives the right answer if the derivative exists.
is_derive_unique: is_derive f x l -> Derive f x = l
This makes it much easier to work with derivatives in expressions using rewrite than with the formulation in the standard library. Just do the rewrites, and the proofs that the derivative really exists end up as side conditions.
(Note that I used evars above. This is useful if you want to apply a theorem when the expressions are not "obviously" (i.e. computationally) equal as far as Coq is concerned. For similar reasons, I find it useful to use eapply is_derive_ext to rewrite inside the function being worked on. Just a hint...)
Also, Coquelicot has some useful tactics that can automate some of the reasoning. For example:
Lemma Derive_x3_plus_cos x: Derive (fun x => x^3 + cos x) x = 3*(x^2) - sin x.
apply is_derive_unique.
auto_derive; auto; ring.
Qed.
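To illustrate the earlier point about rewriting with Derive: once the value of Derive has been established through is_derive_unique, it can be rewritten inside larger goals, and the existence proof is discharged exactly as above. A small sketch in the same context (the names Derive_sq and the final Goal are mine, not part of Coquelicot):
Lemma Derive_sq x : Derive (fun x => x^2) x = 2*x.
apply is_derive_unique.
auto_derive; auto; ring.
Qed.
Goal forall x, Derive (fun x => x^2) x + 1 = 2*x + 1.
intros x. rewrite Derive_sq. reflexivity.
Qed.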

Related

Is there any way to rewrite the function in "is_lim"?

I'm using Coq and the Coquelicot library, and I'd like to know a better way to handle limits easily.
When I want to prove \lim_{x \to 1} (x^2-1)/(x-1) = 2, I write the following.
Require Import Reals Lra.
From mathcomp Require Import all_ssreflect.
From Coquelicot Require Import Coquelicot.
Lemma lim_1_2 : is_lim (fun x:R => (x^2 - 1)/(x - 1)) 1 2.
Proof.
apply (is_lim_ext_loc (fun x:R => x + 1)).
- rewrite /Rbar_locally' /locally' /within /locally.
exists (mkposreal 1 Rlt_0_1).
move => y Hyball Hyneq1.
field; lra.
- apply is_lim_plus'; [apply is_lim_id | apply is_lim_const].
Qed.
In this example, I explicitly write the goal term (fun x:R => x + 1). Is there any way to transform (fun x:R => (x^2 - 1)/(x - 1)) into (fun x:R => x + 1), like the rewrite tactic does? In other words, I'm looking for a tactic similar to under for eq_big_nat.
Coquelicot is optimized for ease of use and uses total functions rather than dependent restrictions wherever possible. For example, you can write down an integral without having to prove that it exists, but as far as I know this does not extend to division by zero. To make your equation work, one would need a definition of division that can somehow handle the 0/0 you get at x = 1. One can define a division on functions (polynomials) which handles this in a reasonable way, and that is what you are implicitly using when you state that the transformation makes sense, but one cannot define a division on individual real numbers which handles 0/0 the way you would like. The division operator you use above is a division on individual numbers, not on polynomials. In informal mathematics one is sometimes a bit sloppy about such things.
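For comparison, here is a minimal sketch (my own, using only the standard library) of the version of the pointwise equality that does go through, namely away from the singular point; it relies on the same field; lra pattern as your is_lim proof:
Require Import Reals Lra.
Open Scope R.
Lemma quotient_away_from_1 (x : R) :
x <> 1 -> (x^2 - 1) / (x - 1) = x + 1.
Proof.
intros Hx.
field.
lra.
Qed.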
Besides the 0/0 issue, you would also have to use the axiom of functional extensionality, which states that two functions are equal if they are equal at every point.
Here is a snippet of Coq which shows what can be done and where the issues are:
Require Import Reals.
Require Import Lra.
Require Import FunctionalExtensionality.
Open Scope R.
Definition dom := {x : R | x<>1}.
Definition dom2R (x : dom) : R := proj1_sig x.
Coercion dom2R : dom >-> R.
Example Example:
(fun x : dom => (x^2 - 1)/(x - 1))
= (fun x : dom => x + 1).
Proof.
apply functional_extensionality.
intros [x xH].
cbv.
field.
lra.
Qed.
All in all it is not that bad with the implicit coercion from dom to R, although the function is in reality more complicated than it looks, since each occurrence of x goes through the implicit coercion projecting from dom to R.
One could also imagine an axiom of functional extensionality which works when the domain of one function is a subset of the domain of the other. I am not sure whether this would be consistent, though, and it would also require a non-standard definition of equality, because with the usual equality only things of the same type can be equal. Such an axiom would allow you to equate the polynomial fraction with the polynomial on all of R.
I hope this explains why things are as they are. Coquelicot relies on the division operator from the standard library, about which you cannot prove anything when the denominator is zero. This is sometimes inconvenient, but to my knowledge (which is not very extensive; I am a physicist, not a mathematician) nobody has so far come up with a definition of division that lets you easily do what you want.

Extensionally equal predicates and equality of universally quantified applications

I am trying to define a recursive predicate using well-founded fixpoints, with the obligation to show F_ext when rewriting with Fix_eq. CPDT says that most such obligations are dischargeable with straightforward proof automation, but unhappily this does not appear to be the case for my predicate.
I have reduced the problem to the following lemma (it comes from Proper (pointwise_relation A eq ==> eq) (@all A)). Is it provable in Coq without additional axioms?
Lemma ext_fa:
forall (A : Type) (f g : A -> Prop),
(forall x, f x = g x) ->
(forall x, f x) = (forall x, g x).
It can be shown with extensionality of predicates or functions, but since the conclusion is weaker than the usual one (f = g) I naively thought it would be possible to produce a proof without using additional axioms. After all, both sides of the equality only involve applications of f and g; how could any intensional differences be discerned?
Have I missed a simple proof or is the lemma unprovable?
You might be interested in this code I wrote a while ago, which includes variants of Fix_eq for various numbers of arguments that do not depend on function extensionality. Note that you do not need to change Fix_F; you can instead just prove variants of Fix_eq.
To answer the question you asked, rather than to solve the problem in your context: the lemma you state is called "forall extensionality".
It is present in Coq.Logic.FunctionalExtensionality, where the axiom of function extensionality is used to prove it. The fact that the standard library version uses an axiom to prove this lemma is, at the very least, strong evidence that it is not provable without axioms in Coq.
Here is a proof sketch of that fact. Since Coq is strongly normalizing*, every proof of x = y in the empty context is judgmentally equal to eq_refl. That is, if you can prove x = y in the empty context, then x and y are convertible. Let f x := inhabited (Vector.t nat (x + 1)) and let g x := inhabited (Vector.t nat (1 + x)). It is straightforward to prove forall x, f x = g x by induction on x. Therefore, if your lemma were provable without axioms, we could get a proof of
(forall x, inhabited (Vector.t nat (x + 1))) = (forall x, inhabited (Vector.t nat (1 + x)))
in the empty context, and hence eq_refl ought to prove this statement. We can easily check and see that eq_refl does not prove this statement. So your lemma ext_fa is not provable without axioms.
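Here is a small sketch of the pointwise equality used in this argument, with the element type of Vector.t instantiated at nat; the proof goes through Nat.add_comm, which is itself proved by induction:
Require Vector.
Require Import Arith.
Definition f (x : nat) := inhabited (Vector.t nat (x + 1)).
Definition g (x : nat) := inhabited (Vector.t nat (1 + x)).
Lemma fg_pointwise : forall x, f x = g x.
Proof.
intros x; unfold f, g.
now rewrite Nat.add_comm.
Qed.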
Note that equality for functions and equality for types are severely under-specified in Coq. Essentially, the only types (or functions) that you can prove equal in Coq are the ones that are judgmentally equal (or, more precisely, the ones that are expressible as two judgmentally equal lambdas applied to provably-equal closed terms). The only types that you can prove not equal are the ones which are provably not isomorphic. The only functions that you can prove not equal are the ones which provably differ on some concrete element of the domain that you provide. There's a lot of space between the equalities that you can prove, and the inequalities you can prove, and you don't get to say anything about things in this space without axioms.
*Coq isn't actually strongly normalizing because there are some issues with coinductives. But modulo that, it's strongly normalizing.

How to perform induction over two inductive predicates?

I am starting with Coq and trying to formalize Automated Amortised Analysis. I am at lemma 4.13:
Lemma preservation : forall (Sigma : prog_sig) (Gamma : context) (p : program)
(s : stack) (h h' : heap) (e : expr) (c : type0) (v : val),
(* 4.1 *) has_type Sigma Gamma e c ->
(* 4.2 *) eval p s h e v h' ->
(* 4.3 *) mem_consistant_stack h s Gamma ->
(* 4.a *) mem_consistant h' v c /\ (* 4.b *) mem_consistant_stack h' s Gamma.
Proof.
intros Sigma Gamma p s h h' e c v WELL_TYPED EVAL.
The manual proof uses a double induction:
Proof. Note that claim (4.b) follows directly by Lemma 4.8 and Lemma
4.12. Each location l ∈ dom( H ) is either left unaltered or is overwritten by the value Bad and hence does not invalidate memory
consistency.
However, the first claim (4.a) requires a proof by
induction on the lengths of the derivations of (4.2) and (4.1) ordered
lexicographically with the derivation of the evaluation taking
priority over the typing derivation. This is required since an
induction on the length of the typing derivation alone would fail for
the case of function application: in order to allow recursive
functions, the type rule for application is a terminal rule relying on
the type given for the function in the program’s signature. However,
proving this case requires induction on the statement that the body of
the function is well-typed, which is most certainly a type derivation
of a longer length (i.e. longer than one step), prohibiting us from
using the induction hypothesis. Note in this particular case that the
length of the derivation for the evaluation statement does decrease.
An induction over the length of the derivation for premise (4.2) alone
fails similarly. Consider the last step in the derivation of premise
(4.1) being derived by the application of a structural rule, then the
length of the derivation for (4.2) remains exactly the same, while the
length of the derivation for premise (4.1) does decrease by one step.
Using induction on the lexicographically ordered lengths of the type
and evaluation derivations allows us to use the induction hypothesis
if either the length of the derivation for premise (4.2) is
shortened or if the length of the derivation for premise (4.2) remains
unchanged while the length of the typing derivation is reduced. We
first treat the cases where the last step in the typing derivation was
obtained by application of a structural rule, which are all the cases
which leave the length of the derivation for the evaluation unchanged.
We then continue to consider the remaining cases based
on the evaluation rule that had been applied last to derive premise (4.2), since the remaining type rules
are all syntax directed and thus unambiguously determined by the
applied evaluation rule.
How can such a "double induction" be performed in Coq?
I tried induction EVAL; induction WELL_TYPED, but got 418 subgoals, most of which are unprovable.
I also tried to start with induction EVAL and use induction WELL_TYPED later, but got stuck in a similar situation.
I agree with @jbapple that a minimal example is better. That said, it may be that you are simply missing a concept of length of derivation. Note that the usual concept of proof by induction over a predicate actually implements something that is close to induction over the height of derivations, but not quite.
I propose that you exhibit two new predicates eval_n and has_type_n that each express the same thing as eval and has_type, but with an extra argument meaning "... and the derivation has size n". There are several ways in which size might be defined, but I suspect that height will be enough for you.
Then you can prove
eval a1 .. ak <-> exists n, eval_n a1 .. ak n
and
has_type a1 .. ak <-> exists n, has_type_n a1 .. ak n
You should then be able to prove
forall p : nat * nat, forall a1 ... ak, eval_n a1 .. ak (fst p) ->
has_type_n a1 .. ak (snd p) -> YOUR GOAL
by well-founded induction on pairs of natural numbers, using the lexicographic order construction from library Wellfounded (I suggest library Lexicographic_Product.v; it is a bit of an overkill for just pairs of natural numbers, but you only need to find the right instantiation).
This will be unwieldy, because the induction hypotheses will only refer to pairs of numbers that are smaller in the lexicographic order, and you will have to perform inversions on the hypotheses concerning eval_n and has_type_n, but that should go through.
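To make the first step concrete, here is a minimal sketch of such an indexed predicate on a toy relation; the names double_rel and double_rel_n are made up for illustration, and your eval_n and has_type_n would follow the same pattern, with the extra nat argument recording the height of the derivation:
Inductive double_rel : nat -> nat -> Prop :=
| dr_zero : double_rel 0 0
| dr_succ : forall n m, double_rel n m -> double_rel (S n) (S (S m)).
Inductive double_rel_n : nat -> nat -> nat -> Prop :=
| drn_zero : double_rel_n 0 0 0
| drn_succ : forall n m k, double_rel_n n m k -> double_rel_n (S n) (S (S m)) (S k).
Lemma double_rel_iff n m : double_rel n m <-> exists k, double_rel_n n m k.
Proof.
split.
- intros H; induction H as [ | a b Hab IH].
+ exists 0; constructor.
+ destruct IH as [k Hk]; exists (S k); constructor; assumption.
- intros [k Hk]; induction Hk; constructor; assumption.
Qed.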
There probably exists a simpler solution, but for lack of more information from your side, I can only propose the big gun.

How to define unspecified constants in Coq

My question is how to define unspecified constants in Coq.
To make clear what I mean, assume the following toy system:
I want to define a function
f:nat->nat, which has the value 0 at all but one place w, where it has the value 1.
The place w shall be a parameter of the system.
All proofs of the system can assume that w is fixed but arbitrary.
My idea was to introduce
Parameter w:nat.
But I get stuck when defining f(x), because I don't have a clue how to match x against w.
What would be the right way to handle this?
Or is using w as a Parameter the wrong way to go?
(This is NOT a homework question)
This is how I'd do it:
Require Import Arith.
Parameter w : nat.
Definition f (n : nat) := if beq_nat n w then 1 else 0.
When proving properties about f, you can then use lemmas stating that beq_nat n w indeed decides whether n = w. You can find them by using e.g.
SearchAbout beq_nat.
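For example, one such lemma is beq_nat_refl (stated as true = beq_nat n n, hence the rewrite <- below). The following little proof is my own illustration, not part of the answer:
Lemma f_at_w : f w = 1.
Proof.
unfold f.
now rewrite <- beq_nat_refl.
Qed.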

Definition by property in coq

I am having trouble with formalizing definitions of the following form: define an integer such that some property holds.
Let's say that I formalized the definition of the property:
Definition IsGood (x : Z) : Prop := ...
Now I need a definition of the form:
Definition Good : Z := ...
assuming that I proved that an integer with the property exists and is unique:
Lemma Lemma_GoodExistsUnique : exists! (x : Z), IsGood x.
Is there an easy way of defining Good using IsGood and Lemma_GoodExistsUnique?
Since the property is defined on integers, it seems that no additional axioms should be necessary. In any event, I don't see how adding something like the axiom of choice would help with the definition.
Also, I am having trouble formalizing definitions of the following form (I suspect this is related to the problem I described above, but please indicate if that is not the case): for every x there exists a y, and these ys are different for different xs. For example, how would one define that there are N distinct good integers using IsGood:
Definition ThereAreNGoodIntegers (N : Z) (IsGood : Z -> Prop) := ...?
In real-world mathematics, definitions like that occur every now and again, so this should not be difficult to formalize if Coq is intended to be suitable for practical mathematics.
The short answer to your first question is: in general, it is not possible, but in your particular case, yes.
In Coq's theory, propositions (i.e., Props) and their proofs have a very special status. In particular, it is in general not possible to write a choice operator that extracts the witness of an existence proof. This is done to make the theory compatible with certain axioms and principles, such as proof irrelevance, which says that all proofs of a given proposition are equal to each other. If you want to be able to do this, you need to add this choice operator as an additional axiom to your theory, as in the standard library.
However, in certain particular cases, it is possible to extract a witness out of an abstract existence proof without recurring to any additional axioms. In particular, it is possible to do this for countable types (such as Z) when the property in question is decidable. You can for instance use the choiceType interface in the Ssreflect library to get exactly what you want (look for the xchoose function).
That being said, I would usually advise against doing things in this style, because it leads to unnecessary complexity. It is probably easier to define Good directly, without resorting to the existence proof, and then prove separately that Good has the sought property.
Definition Good : Z := (* ... *)
Definition IsGood (z : Z) : Prop := (* ... *)
Lemma GoodIsGood : IsGood Good.
Proof. (* ... *) Qed.
Lemma GoodUnique : forall z : Z, IsGood z -> z = Good.
If you absolutely want to define Good with an existence proof, you can also change the statement of Lemma_GoodExistsUnique to use a connective in Type instead of Prop, which allows you to extract the witness directly using the proj1_sig function:
Lemma Lemma_GoodExistsUnique : {z : Z | IsGood z /\ forall z', IsGood z' -> z' = z}.
Proof. (* ... *) Qed.
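Here is a sketch (mine) of what the extraction then looks like; GoodW is a made-up name so that it does not clash with the Good defined above:
Definition GoodW : Z := proj1_sig Lemma_GoodExistsUnique.
Lemma GoodWIsGood : IsGood GoodW.
Proof.
unfold GoodW; destruct Lemma_GoodExistsUnique as [z [Hz _]]; exact Hz.
Qed.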
As for your second question, yes, it is a bit related to the first point. Once again, I would recommend that you write down a function y_from_x with type Z -> Z that will compute y given x, and then prove separately that this function relates inputs and outputs in a particular way. Then, you can say that the ys are different for different xs by proving that y_from_x is injective.
On the other hand, I'm not sure how your last example relates to this second question. If I understand what you want to do correctly, you can write something like
Definition ThereAreNGoodIntegers (N : Z) (IsGood : Z -> Prop) :=
exists zs : list Z,
Z.of_nat (length zs) = N
/\ NoDup zs
/\ Forall IsGood zs.
Here, Z.of_nat : nat -> Z is the canonical injection from naturals to integers, NoDup is a predicate asserting that a list doesn't contain repeated elements, and Forall is a higher-order predicate asserting that a given predicate (in this case, IsGood) holds of all elements of a list.
As a final note, I would advise against using Z for things that can only involve natural numbers. In your example, you are using an integer to talk about the cardinality of a set, and that number is always a natural number.