There are 4 exercises in the Poly module related to Church numerals:
Definition cnat := forall X : Type, (X -> X) -> X -> X.
As far as I understand, cnat is a function that takes a function f, its argument x, and returns its value for this argument: f(x).
Then there are 4 examples for 0, 1, 2 and 3 represented in Church notation.
But how do I solve this? I understand that we must apply the function one more time; the value returned by cnat will be the argument. But how do I code that? Should I use recursion?
Definition succ (n : cnat) : cnat
(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
Update
I tried this:
Definition succ (n : cnat) : cnat :=
match n with
| zero => one
| X f x => X f f(x) <- ?
Remember that a Church numeral is a function of two arguments (or three if you also count the type). The arguments are a function f and a start value x0. The Church numeral applies f to x0 some number of times. Four f x0 would correspond to f (f (f (f x0))) and Zero f x0 would ignore f and just be x0.
For the successor of n, remember that n will apply any function f for you n times, so if your task is to create a function applies some f on some x0 n+1 times, just leave the bulk of the work to the church numeral n, by giving it your f and x0, and then finish off with one more application of f to the result returned by n.
You won't be needing any match, because functions are not inductive data types that can be case-analysed.
You can write the Definition for succ in the following way:
Definition succ (n : cnat) : cnat :=
fun (X : Type) (f : X -> X) (x : X) => f (n X f x).
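As a quick sanity check, this definition behaves as expected when instantiated at nat. A self-contained sketch (two and three are hypothetical names following the Poly conventions):

```coq
Definition cnat := forall X : Type, (X -> X) -> X -> X.

Definition two : cnat := fun X f x => f (f x).

Definition succ (n : cnat) : cnat :=
  fun (X : Type) (f : X -> X) (x : X) => f (n X f x).

Definition three : cnat := succ two.

(* Instantiating with nat, S and 0 recovers the ordinary numeral. *)
Compute three nat S O.  (* = 3 : nat *)
```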
As far as I understand, cnat is a function that takes a function f, its argument x, and returns its value for this argument: f(x).
Note that cnat itself isn't a function. Instead, cnat is the type of all such functions. Also note that elements of cnat take X as an argument as well. It'll help to keep the definition of cnat in mind.
Definition succ (n: cnat): cnat.
Proof.
unfold cnat in *. (* This changes `cnat` with its definition everywhere *)
intros X f x.
After this, our goal is just X, and we have n : forall X : Type, (X -> X) -> X -> X, X, f and x as premises.
If we applied n to X, f and x (as n X f x), we would get an element of X, but this isn't quite what we want, since the end result would just be n again. Instead, we need to apply f an extra time somewhere. Can you see where? There are two possibilities.
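For reference, either placement type-checks; a hedged completion of the script above, with the alternative shown in a comment:

```coq
Definition cnat := forall X : Type, (X -> X) -> X -> X.

Definition succ (n : cnat) : cnat.
Proof.
  unfold cnat in *.
  intros X f x.
  (* apply f once more, after n has applied it n times: *)
  exact (f (n X f x)).
  (* the other possibility: exact (n X f (f x)). *)
Defined.
```

Ending with Defined rather than Qed keeps the term transparent, so the resulting numeral still computes.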
Related
I'm trying to write a simple function like this:
Fixpoint find_0 (n : BinNums.Z) :=
match n with
Z0 => n
| Zpos p => find_0 p
| Zneg q => find_0 q
end.
But p is a positive, not a Z, so this is ill-typed.
If I try
Fixpoint find_0 (n : BinNums.Z) :=
match n with
Z0 => n
| Zpos p => find_0 (n - 1)
| Zneg q => find_0 (n + 1)
end.
then Coq can't verify that it is strongly normalizing; the error is:
Recursive definition of find_0 is ill-formed.
In environment
find_0 : BinNums.Z -> BinNums.Z
n : BinNums.Z
p : positive
Recursive call to find_0 has principal argument equal to
"n - 1" instead of a subterm of "n".
Recursive definition is:
"fun n : BinNums.Z =>
match n with
| 0 => n
| Z.pos _ => find_0 (n - 1)
| BinInt.Z.neg _ => find_0 (n + 1)%Z
end".
What to do in this situation?
Since the definition of BinNums.Z:
Inductive Z : Set :=
Z0 : Z
| Zpos : positive -> Z
| Zneg : positive -> Z.
is not recursive, you cannot write recursive functions over it. Instead, you write a simple non-recursive Definition that handles the constructors of BinNums.Z, and from there you call recursive functions defined on positive. It is also good style to define the functions you need on positive separately.
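A hedged sketch of that pattern (pos_size and z_size are made-up names):

```coq
Require Import BinNums.

(* Recursive helper on positive, where structural recursion works. *)
Fixpoint pos_size (p : positive) : nat :=
  match p with
  | xH => 1
  | xO q => S (pos_size q)
  | xI q => S (pos_size q)
  end.

(* Non-recursive dispatcher on Z that delegates to the helper. *)
Definition z_size (n : Z) : nat :=
  match n with
  | Z0 => 0
  | Zpos p => pos_size p
  | Zneg p => pos_size p
  end.
```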
Every recursive function in Coq must clearly have a decreasing aspect. For many inductive types, this decreasing aspect is provided naturally by the recursive structure of this inductive type. If you look at the definition of positive, you see this:
Inductive positive : Set :=
xI : positive -> positive | xO : positive -> positive | xH : positive.
When an object p of type positive fits the pattern xO q, q is visibly smaller than p, if only as a piece of data (and here because the agreed meaning of xO is multiplication by 2, q is also numerically smaller than p).
When looking at the data type for Z, you see that there is no recursion and thus no visible decreasing pattern, where the smaller object would also be of type Z, so you cannot write a recursive function using the Fixpoint approach.
However, there exists an extension of Coq, called Equations, that will make it possible to write the function you want. The trick is that you still need to explain that something is decreasing during the recursive call. This calls for an extra object, a relation that is known to have no infinite path. Such a relation is called well-founded. Here the relation we will use is called Zwf.
From Equations Require Import Equations.
Require Import ZArith Zwf Lia.
#[export]Instance Zwf_wf (base : Z) : WellFounded (Zwf base).
Proof.
constructor; apply Zwf_well_founded.
Qed.
Equations find0 (x : Z) : Z by wf (Z.abs x) (Zwf 0) :=
find0 Z0 := Z0; find0 (Zpos p) := find0 (Zpos p - 1);
find0 (Zneg p) := find0 (Zneg p + 1).
Next Obligation.
set (x := Z.pos p); change (Zwf 0 (Z.abs (x - 1)) (Z.abs x)).
unfold Zwf. lia.
Qed.
Next Obligation.
set (x := Z.neg p); change (Zwf 0 (Z.abs (x + 1)) (Z.abs x)).
unfold Zwf. lia.
Qed.
There is a little more work than for a direct recursive function using Fixpoint, because we need to explain that we are using a well-founded relation and make sure the Equations extension will find that information (this is the purpose of the Instance part of the script). Then we also need to show that each recursive call satisfies the decrease property: here, what decreases is the absolute value of the integer.
The Equations tool gives you a collection of theorems to help reason about the find0 function. In particular, theorems find0_equation1, find0_equation2, and find0_equation3 express that we defined a function that follows the algorithm you intended.
I want to partially differentiate functions which expect n arguments, for an arbitrary natural number n. I want to differentiate with respect to one arbitrary argument only, and not the others.
Require Import Reals.
Open Scope R_scope.
Definition myFunc (x y z:R) :R:=
x^2 + y^3 + z^4.
I expect the function 3*(y^2) when I differentiate myFunc with respect to y.
I know about partial_derive in Coquelicot:
Definition partial_derive (m k : nat) (f : R → R → R) : R → R → R :=
fun x y ⇒ Derive_n (fun t ⇒ Derive_n (fun z ⇒ f t z) k y) m x.
partial_derive can partially differentiate f : R → R → R, but it does not work for an arbitrary number of arguments.
I thought about using dependent type listR.
Inductive listR : nat -> Type :=
|RO : listR 0
|Rn : forall {n}, R -> listR n -> listR (S n).
Notation "[ ]" := RO.
Notation "[ r1 , .. , r2 ]" := (Rn r1 .. ( Rn r2 RO ) .. ).
Infix ":::" := Rn (at level 60, right associativity).
Fixpoint partial_derive_nth {n} (k:nat) (f : listR n -> R) (e:listR n): listR n -> R:=
k specifies the index of the argument to differentiate.
We cannot define partial_derive_nth the way partial_derive is defined, because in the recursion we cannot name the individual arguments of the fun.
Please tell me how to partially differentiate functions which have an arbitrary number of arguments.
For your function myFunc, you can write the partial derivative like so:
Definition pdiv2_myFunc (x y z : R) :=
Derive (fun y => myFunc x y z) y.
You can then prove that it has the value you expect for any choice of x, y, and z. Most of the proof can be done automatically, thanks to the tactics provided in Coquelicot.
Lemma pdiv2_myFunc_value (x y z : R) :
pdiv2_myFunc x y z = 3 * y ^ 2.
Proof.
unfold pdiv2_myFunc, myFunc.
apply is_derive_unique.
auto_derive; auto; ring.
Qed.
I am a bit surprised that the automatic tactic auto_derive does not handle a goal of the form Derive _ _ = _, so I have to apply the theorem is_derive_unique myself.
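For completeness, the same recipe should work for the other arguments. An untested sketch for the z-derivative of myFunc (pdiv3_myFunc is a made-up name; the expected value is 4 * z ^ 3, and the proof script is the same):

```coq
Require Import Reals Coquelicot.Coquelicot.
Open Scope R_scope.

Definition myFunc (x y z : R) : R := x^2 + y^3 + z^4.

(* Partial derivative with respect to the third argument. *)
Definition pdiv3_myFunc (x y z : R) :=
  Derive (fun z => myFunc x y z) z.

Lemma pdiv3_myFunc_value (x y z : R) :
  pdiv3_myFunc x y z = 4 * z ^ 3.
Proof.
  unfold pdiv3_myFunc, myFunc.
  apply is_derive_unique.
  auto_derive; auto; ring.
Qed.
```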
I would like to avoid copying and pasting the parameters and return type of functions of the same type that I am trying to define, since, in my opinion, that would be bad programming practice.
For example, I am defining the following functions:
Definition metric_non_negative {X : Type} (d : X -> X -> R) :=
forall x y : X, (d x y) >= 0.
Definition metric_identical_arguments {X : Type} (d : X -> X -> R) :=
forall x y : X, (d x y) = 0 <-> x = y.
I would like to be able to define both functions without repeatedly typing the redundancy:
{X : Type} (d : X -> X -> R)
I would also like to potentially define a third function, in which case the solution should generalize to the case where more than two functions of the same type are being defined. Is this possible, and how so?
As Anton Trunov mentioned in his comment, it sounds exactly like you want to use a section:
Require Import Reals.
Open Scope R_scope.

Section Metric.
Context {X: Type}.
Variable (d: X -> X -> R).
Definition metric_non_negative :=
forall x y : X, (d x y) >= 0.
Definition metric_identical_arguments :=
forall x y : X, (d x y) = 0 <-> x = y.
End Metric.
Note that I've used Context to make X an implicit argument; you can also use Set Implicit Arguments. and make it a Variable to let Coq set its implicitness automatically.
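Continuing the snippet above: after End Metric, the section variables are abstracted into parameters of every definition that used them, which you can inspect directly.

```coq
(* After End Metric: *)
About metric_non_negative.
(* X is now an implicit argument of the definition,
   and d an explicit one. *)
```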
I have a term rewriting system (A, →), where A is a set and → an infix binary relation on A. Given x and y in A, x → y means that x reduces to y.
To implement some properties I simply use the definitions from Coq.Relations.Relation_Definitions and Coq.Relations.Relation_Operators.
Now I want to formalize the following property :
→ is terminating, that is: there is no infinite descending chain a0 → a1 → ...
How can I achieve that in Coq?
Showing that a rewriting relation terminates is the same thing as showing that it is well-founded. This can be encoded with an inductive predicate in Coq:
Inductive Acc {A} (R : A -> A -> Prop) (x: A) : Prop :=
Acc_intro : (forall y:A, R x y -> Acc R y) -> Acc R x.
Definition well_founded {A} (R : A -> A -> Prop) :=
forall a:A, Acc R a.
(This definition is essentially the same one of the Acc and well_founded predicates in the standard library, but I've changed the order of the relation to match the conventions used in rewriting systems.)
Given a type A and a relation R on A, Acc R x means that every sequence of R reductions starting from x : A is terminating; thus, well_founded R means that every sequence starting at any point is terminating. (Acc stands for "accessible".)
It might not be very clear why this definition works; first, how can we even show that Acc R x holds for any x at all? Notice that if x is an element that does not reduce (that is, such that R x y never holds for any y), then the premise of Acc_intro trivially holds, and we are able to conclude Acc R x. For instance, this would allow us to show Acc gt 0. If R is indeed well-founded, then we can work backwards from such base cases and conclude that other elements of A are accessible. A formal proof of well-foundedness is more complicated than that, because it has to work generically for every x, but this at least shows how we could show that each element is accessible separately.
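As a small illustration of such a base case and the backward reasoning, here is a hedged sketch (step is a made-up relation, using the flipped Acc/well_founded definitions above) showing that the one-step "decrement" relation on nat is terminating:

```coq
(* step x y means x reduces to y; here S n reduces to n. *)
Inductive step : nat -> nat -> Prop :=
| step_S : forall n, step (S n) n.

(* 0 does not reduce, so Acc step 0 holds vacuously;
   Acc step (S n) then follows from Acc step n. *)
Lemma step_terminating : well_founded step.
Proof.
  intros a; induction a as [| a IH];
    constructor; intros y Hy; inversion Hy; subst; assumption.
Qed.
```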
OK, so maybe we can show that Acc R x holds. How do we use it, then?
With the induction and recursion principles that Coq generates for Acc; for instance:
Acc_ind : forall A (R : A -> A -> Prop) (P : A -> Prop),
(forall x : A, (forall y : A, R x y -> P y) -> P x) ->
forall x : A, Acc R x -> P x
When R is well-founded, this is simply the principle of well-founded induction. We can paraphrase it as follows. Suppose that we can show that P x holds for any x : A while making use of an induction hypothesis that says that P y holds whenever R x y. (Depending on the meaning of R, this could mean that x steps to y, or that y is strictly smaller than x, etc.) Then, P x holds for any x such that Acc R x. Well-founded recursion works similarly, and intuitively expresses that a recursive definition is valid if every recursive call is performed on "smaller" elements.
Adam Chlipala's CPDT has a chapter on general recursion that has a more comprehensive coverage of this material.
I'm confused by the way Coq deals with existential quantification.
I have a predicate P and an assumption H
P : nat -> Prop
H : exists n, P n
while the current goal is (whatever)
(Some goal)
If I want to instantiate n in H, I will do
elim H.
However after the elimination, the current goal becomes
forall n, P n -> (Some goal)
It looks like Coq converts an existential quantifier into a universal one. I know that (forall a, P a -> Q) -> ((exists a, P a) -> Q) from my limited knowledge of first-order logic, but the reverse proposition seems incorrect. If the 'forall' one and the 'exists' one are not equivalent, why would Coq do such a conversion?
Does 'elim' in Coq replace the goal with a harder-to-prove one? Or could anyone please show why ((exists a, P a) -> Q) -> (forall a, P a -> Q) holds in first-order logic?
Maybe the missing key is that the goal:
forall n, P n -> (Some goal)
is to be read as:
forall n, (P n -> (Some goal))
and not as:
(forall n, P n) -> (Some goal)
That is, the goal you are given just gives you an arbitrary n and a proof of P n, which is indeed the proper way to eliminate an existential (you don't get to know the value of the witness, since it could be any value that makes P true; you just get to know that there is an n and that P n holds).
On the contrary, the latter would provide you with a function that can build P n for any n you pass it, which is indeed a stronger statement than the one you have.
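In practice, instead of elim you can use destruct, which introduces the witness and its proof in one step. A minimal sketch (use_exists is a made-up name):

```coq
Lemma use_exists (P : nat -> Prop) (Q : Prop)
    (H : exists n, P n) (HQ : forall n, P n -> Q) : Q.
Proof.
  destruct H as [n Hn].  (* n is the witness, Hn : P n *)
  exact (HQ n Hn).
Qed.
```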
I realize this question is old but I would like to add the following important clarification:
In Coq (and, more generally, in intuitionistic logic), the existential quantifier can be characterized (see here) as follows:
(exists x, (P x)) := forall (P0 : Prop), ((forall x, (P x -> P0)) -> P0)
Intuitively this can be read as
(exists x, P x) is the smallest proposition which holds whenever P x0 holds for some x0
In fact one can easily prove the following two theorems in Coq:
forall x0, (P x0 -> (exists x, P x)) (* the introduction rule -- proved from ex_intro *)
and (provided A : Prop)
(exists x : A, P x) -> {x : A | P x} (* the elimination rule -- proved from ex_ind *)
So a Coq goal of the form
H1...Hn, w : (exists x, P x) |- G
is transformed (using elim) to a Coq goal of the form
H1...Hn, w : (exists x, P x) |- forall x0, (P x0 -> G)
because whenever h : forall x0, (P x0 -> G), then G is precisely justified by the proof term
(ex_ind A P G h w) : G
which works whenever G : Prop.
Note: the elimination rule above is only valid when A : Prop, and cannot be proved when A : Type. In Coq, this means that we do not have the ex_rect eliminator.
From my understanding (see here for more details), this is a design choice to preserve good program extraction properties.