Is there any way to rewrite the function in "is_lim"? - coq

I'm using Coq and the Coquelicot library, and I'd like to know a better way to handle limits easily.
When I want to prove \lim_{x \to 1} (x^2-1)/(x-1) = 2, I write the following.
Require Import Reals Lra.
From mathcomp Require Import all_ssreflect.
From Coquelicot Require Import Coquelicot.
Lemma lim_1_2 : is_lim (fun x:R => (x^2 - 1)/(x - 1)) 1 2.
Proof.
apply (is_lim_ext_loc (fun x:R => x + 1)).
- rewrite /Rbar_locally' /locally' /within /locally.
exists (mkposreal 1 Rlt_0_1).
move => y Hyball Hyneq1.
field; lra.
- apply is_lim_plus'; [apply is_lim_id | apply is_lim_const].
Qed.
In this example, I explicitly write the goal term (fun x:R => x + 1). Is there any way to transform (fun x:R => (x^2 - 1)/(x - 1)) into (fun x:R => x + 1), the way the rewrite tactic does? In other words, I'm looking for a tactic similar to under for eq_big_nat.

Coquelicot is optimized for ease of use and uses total functions rather than dependently restricted ones wherever possible - e.g. you can write down an integral without having to prove that it exists - but as far as I know this does not extend to division by zero. To make your equation work as stated, one would need a definition of division which can somehow handle the 0/0 you get at x = 1. One can define a division on functions (polynomials) which handles this in a reasonable way - and this is what you are implicitly using when you state that the limit makes sense - but one cannot define a division on individual real numbers which handles 0/0 the way you would like. The division operator you use above is a division on individual numbers, not on polynomials. In informal mathematics one is sometimes a bit sloppy about such things.
Besides the 0/0 issue, you would also have to use the axiom of functional extensionality, which states that two functions are equal if they agree at every point.
Here is a Coq snippet which shows what can be done and where the issues are:
Require Import Reals.
Require Import Lra.
Require Import FunctionalExtensionality.
Open Scope R.
Definition dom := {x : R | x<>1}.
Definition dom2R (x : dom) : R := proj1_sig x.
Coercion dom2R : dom >-> R.
Example restricted_div_eq :
(fun x : dom => (x^2 - 1)/(x - 1))
= (fun x : dom => x + 1).
Proof.
apply functional_extensionality.
intros [x xH].
cbv.
field.
lra.
Qed.
All in all it is not that bad with the implicit coercion from dom to R, although the function is in reality more complicated than it looks, since each occurrence of x goes through an implicit coercion projecting from dom to R.
Also, one could imagine an axiom of functional extensionality which works if the domain of one function is a subset of the domain of the other. I am not sure if this would be consistent, though, and it would also require a non-standard definition of equality, because with the usual equality only things of the same type can be equal. This would allow you to equate the polynomial fraction with the polynomial on all of R.
I hope this explains why things are as they are. Coquelicot relies on the division operator from the standard library, for which you can't prove anything in case the denominator is zero. This is sometimes inconvenient, but to my knowledge (which is not very extensive - I am a physicist, not a mathematician) nobody has yet come up with a definition of division which lets you easily do what you want.

Related

How does one access the dependent type unification algorithm from Coq's internals -- especially the one from apply and the substitution solution?

TLDR: I want to be able to compare two terms -- one with a hole and the other without the hole -- and extract the actual lambda term that completes the term. Either in Coq or in OCaml or a Coq plugin or in any way, really.
For example, say I have the toy theorem:
Theorem add_easy_0'':
forall n:nat,
0 + n = n.
Proof.
The (lambda term) proof for this is:
fun n : nat => eq_refl : 0 + n = n
If you had a partial proof script, say:
Theorem add_easy_0'':
forall n:nat,
0 + n = n.
Proof.
Show Proof.
intros.
Show Proof.
Inspecting the proof, you would get as your partial lambda term:
(fun n : nat => ?Goal)
but in fact you can close the proof, and therefore implicitly complete the term with the dependent-type (ddt) unification algorithm, by using apply:
Theorem add_easy_0'':
forall n:nat,
0 + n = n.
Proof.
Show Proof.
intros.
Show Proof.
apply (fun n : nat => eq_refl : 0 + n = n).
Show Proof.
Qed.
This closes the proof but does not give you the solution for ?Goal -- though obviously Coq must have solved the CIC/ddt/Coq unification problem implicitly and closed the goal. I want to get the substitution solution from apply.
How does one do this from Coq's internals? Ideally while remaining in Coq, but I am happy with OCaml internals, a Coq plugin, or in fact any solution.
Appendix 1: how did I realize apply must use some sort of "Coq unification"
I knew that apply must be doing this because the description of the apply tactic says:
This tactic applies to any goal. The argument term is a term well-formed in the local context. The tactic apply tries to match the current goal against the conclusion of the type of term. If it succeeds, then the tactic returns as many subgoals as the number of non-dependent premises of the type of term.
This is very similar to what I once saw in a lecture on unification in Isabelle, with some notes on what that means:
- You have/know rule [[A1; … ;An]] => A (*)
- that says: given the facts A1; …; An, you can conclude A
- or in backwards reasoning, if you want to conclude A, then you must give a proof of A1; …;An (or know Ai's are true)
- you want to close the proof of [[B1; …; Bm]] => C (**) (since that's your subgoal)
- so you already have the assumptions B1; …; Bm lying around for you, but you wish to be able to conclude C
- Say you want to transform subgoal (**) using rule (*). Then this is what’s going on:
- first you need to see if your subgoal (**) is a "special case" of your rule (*). You commence by checking whether the conclusions (targets) of the rule and the subgoal are "equivalent". If the conclusions match, then instead of showing C you can now show A instead. But for you to have (or show) A, you now need to show A1; …; An using the substitution that made C and A match. The reason you need to show A1; …; An is that if you show them, you get A automatically according to rule (*) -- which, by the "match" (unification), shows the original goal you were after. The main catch is that you need to do this using the substitution that made A and C match. So:
- first see if you can “match” A and C. The conclusions from both sides must match. This matching is called unification and returns a substitution sig that makes the terms equal
- sig = Unify(A,C) s.t. sig(A) = sig(C)
- then, because we transformed the subgoal (**) using rule (*), we must proceed to prove the obligations (premises) of the rule (*) whose conclusion we matched against the conclusion of the subgoal (**), from the assumptions of the original subgoal (**) (since those are still true), but under the substitution sig that makes the two match.
- so the new subgoals, if we match the current subgoal (**) with rule (*), are:
- [[sig(B1); … ; sig(Bm) ]] => sig(A1)
- ...
- [[sig(B1); … ; sig(Bm) ]] => sig(An)
- Completing/closing the proof above (i.e. proving it) shows/proves:
- [[sig(B1); …;sig(Bm) ]] => sig(C)
- Command: apply (rule <(*)>) where (*) stands for the rule name
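A small Coq analogue of this backward step (my own toy example, not from the lecture notes): apply unifies the conclusion of the rule with the goal and leaves the rule's premise, under the computed substitution, as the new subgoal.
(* le_S : forall n m : nat, n <= m -> n <= S m *)
Goal 1 <= 3.
Proof.
(* Unifying the conclusion  ?n <= S ?m  with  1 <= 3  (i.e. 1 <= S 2) gives
   ?n := 1 and ?m := 2, so the new subgoal is the premise  1 <= 2. *)
apply le_S.
apply le_S.
apply le_n.
Qed.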
Appendix 2: why not exact?
Note that initially I thought exact was the Coq tactic I wanted to intercept, but I believe I was wrong. My notes on exact:
- exact p. (assuming p has type U).
- closes a proof if the goal term T (i.e. a Type) matches the type of the given term p.
- succeeds iff T and U are convertible (basically, intuitively, if they unify -- see https://coq.inria.fr/refman/language/core/conversion.html#conversion-rules -- since we are saying T is convertible to U)
Conversion seems to be an equality check, not really unification, i.e. it doesn't try to solve a system of symbolic equations.
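A tiny illustration of that last point (my own example): exact succeeds because the type of the given term is convertible to the goal by computation, with no equation solving involved.
Goal 2 + 2 = 4.
Proof.
(* eq_refl 4 has type 4 = 4, and the goal 2 + 2 = 4 is convertible to it. *)
exact (eq_refl 4).
Qed.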
Appendix 3: Recall unification
brief notes:
- unification https://en.wikipedia.org/wiki/Unification_(computer_science)
- an algorithm that solves a system of equations between symbolic expressions/terms
- i.e. you want
- cons2( cons1( x, y, ...,) ..., cons3(a, b, c), ... ) = cons1(x, nil)
- x = y
- basically a bunch of LHS = RHS term equations, and you want to know whether you can make them all equal given the terms/values and variables in them...
- term1 = term2, term3 = term4 ? with some variables perhaps.
- the solution is the substitution of the variables that satisfies all the equations
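As a tiny Coq illustration (my own example, not part of the question), closing a goal that contains an existential variable forces exactly such a substitution to be computed:
Goal exists n : nat, 1 + n = 3.
Proof.
eexists.
(* The goal is now  1 + ?n = 3.  reflexivity makes unification solve
   S ?n = S 2, instantiating ?n := 2. *)
reflexivity.
Qed.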
Bounty
I'm genuinely curious about intercepting the apply tactic or calling its unification algorithm.
apply indeed solves a unification problem, according to the documentation:
The tactic apply relies on first-order unification with dependent types unless the conclusion of the type of term is of the form P (t1 ... tn) with P to be instantiated.
Note that, generally, apply will turn one "hole" into several "holes", each corresponding to a subgoal it generates.
I have no idea how to access the internal progress of apply and get the substitution it uses.
However, you can call unify t u to do unification manually; you can refer to the official documentation. As far as I know, the Unicoq plugin provides another unification algorithm, and you can use munify t u to unify two terms; see the official Unicoq repo.
An example of using unify and munify:
From Unicoq Require Import Unicoq.
Theorem add_easy_0'':
forall n:nat,
0 + n = n.
Proof.
Show Proof.
intros.
Show Proof.
refine ?[my_goal].
Show my_goal.
munify (fun t : nat => eq_refl : 0 + t = t) (fun n : nat => ?my_goal).
(* unify (fun t : nat => eq_refl : 0 + t = t) (fun n : nat => ?my_goal). *)
Qed.
However, I wonder whether I have understood your question correctly.
Do you want to name the goal?
If you want to "extract the actual lambda term that complete the (parial) term". The so-called "lamda term" is the goal at that time, isn't it? If so, why to you want to "extract" it? It is just over there! Do you want to store the current subgoal and name it? If so, the abstract tactic perhaps helps, as mentioned in How to save the current goal / subgoal as an `assert` lemma
For example:
Theorem add_easy_0'':
forall n:nat,
0 + n = n.
Proof.
Show Proof.
intros.
Show Proof.
abstract apply eq_refl using my_name.
Check my_name.
(*my_name : forall n : nat, 0 + n = n*)
Show Proof.
(*(fun n : nat => my_name n)*)
Qed.
Do you want to get the substitution?
Are you asking for a substitution that makes the goal term and the conclusion of the applied theorem match? For example:
Require Import Arith.
Lemma example4 : 3 <= 3.
Proof.
Show Proof.
Check le_n.
(* le_n : forall n : nat, n <= n *)
apply le_n.
Are you expecting to get something like n = 3? If you want such a "substitution", I am afraid the two tactics mentioned above will not help; writing OCaml code would be needed.
Do you want to store the proof of the current goal?
Or perhaps you want to store the proof of the current goal? Then you can try assert, as mentioned in Using a proven subgoal in another subgoal in Coq.
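For example, a minimal sketch of that pattern (my own example, not taken from the linked question):
Goal 0 + 1 = 1 /\ 0 + 2 = 2.
Proof.
(* assert proves and names an auxiliary fact, which stays available
   in all remaining subgoals. *)
assert (H : forall n : nat, 0 + n = n) by (intros n; reflexivity).
split.
- apply H.
- apply H.
Qed.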

exact value of the derivative on Coq

I want to represent the exact value of the derivative.
I can calculate an approximation like this.
Require Import Coq.Reals.Reals.
Open Scope R_scope.
Definition QuadraticFunction (x:R) := x^2.
Definition differentiation (x:R)(I:R -> R):=
let h := 0.000000001 in
((I (x+h)) - (I x)) / h.
But we cannot calculate the exact value of the derivative on a computer.
For that reason, I want to represent the exact value with an inductive type or something similar.
I know D_in of Reals.Rderiv, which returns Prop.
I need your help. Thank you.
I need to make four remarks.
You should look at coquelicot, this is an extension library on top of Reals that has a nicer treatment of derivatives.
There is no inductive type involved in the presentation of real numbers. In fact, it is part of theoretical folklore that real numbers cannot be represented as an inductive type, in the sense that we usually mean. In an inductive type, you can usually compare two elements by a finite computation. In real numbers, such a comparison faces the difficulty that some numbers are defined by a process of infinite refinement. One of the foundations of the real numbers is that the set is complete, meaning that every Cauchy sequence has a limit. This is often used as a way to define new real numbers.
What does it mean to calculate a number? How do you calculate PI (the circle ratio)? You cannot return 3.14, because it is not the exact value, so you need to keep PI as the result. But why would PI be better than 4 * atan(1), or
lim(4 - 4/3 + 4/5 - 4/7 ...)? So you do not calculate real numbers as you would with a pocket calculator, because you need to keep the precision. The best you can do is to return an exact representation as a rational value when the real number is rational, an "understandable symbolic expression", or an interval approximation. But interval approximations are not exact, and "understandable symbolic expression" is an ambiguous specification: how do you choose which expression is most understandable?
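For instance (a small aside of my own, not part of the original answer), the standard library keeps PI as an abstract constant of type R and only provides facts about it, rather than any decimal value:
Require Import Reals.
Open Scope R_scope.
Check PI. (* PI : R, an abstract real number *)
Check sin_PI. (* a proved fact about PI: sin PI = 0 *)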
There is no function that takes an arbitrary function and returns its derivative at a point as a real number, because we have to take into account that some functions are not derivable everywhere. The Reals library does have a function that makes it possible to talk about the value of the derivative of a derivable function. It is called derive.
Here is a script that does the whole process.
Require Import Coq.Reals.Reals.
Open Scope R_scope.
Definition QuadraticFunction (x:R) := x^2.
Lemma derivable_qf : derivable QuadraticFunction.
Proof.
now repeat apply derivable_mult;
(apply derivable_id || apply derivable_const).
Qed.
Definition QuadraticFunctionDerivative :=
derive QuadraticFunction derivable_qf.
Now you have a name for the derivative function, and you can even show that it is equal to another simple function. But whether this other simple function is the result of calculating the derivative is subjective. Here is an example using just the Reals library; using Coquelicot would give a much more concise script, because derivative computation can be automated (interested readers should also look at the answer by @larsr to the same question).
Lemma QuadraticFunctionDerivativeSimple (x : R) :
QuadraticFunctionDerivative x = 2 * x.
Proof.
unfold QuadraticFunctionDerivative, derive, QuadraticFunction; simpl.
rewrite derive_pt_eq.
replace (2 * x) with (1 * (x * 1) + x * (1 * 1 + x * 0)) by ring.
apply (derivable_pt_lim_mult (fun x => x) (fun x => x * 1)).
apply derivable_pt_lim_id.
apply (derivable_pt_lim_mult (fun x => x) (fun x => 1)).
apply derivable_pt_lim_id.
apply derivable_pt_lim_const.
Qed.
This is probably not the best way to solve the problem, but it is the one I came up with after thinking about the problem for a few minutes.
I recommend @Yves' thoughtful answer, and also want to recommend Coquelicot because of its very readable formalisation of real analysis.
Coquelicot has a theorem for the derivative of (f x) ^ n, and in your case f = id (the identity function) and n = 2, so using Coquelicot's theorem, you could prove your lemma like this:
From Coquelicot Require Import Coquelicot.
Require Import Reals.
Open Scope R.
Goal forall x, is_derive (fun x => x^2) x (2*x).
intros x.
evar (e:R). replace (2*x) with e.
apply is_derive_pow.
apply is_derive_id.
unfold e, one. simpl. ring.
Qed.
Coquelicot separates the proof that the derivative exists (is_derive) from a function (Derive) that "computes" the derivative, and has a theorem showing that Derive gives the right answer if the derivative exists.
is_derive_unique: is_derive f x l -> Derive f x = l
This makes it much easier to work with derivatives in expressions using rewrite than with the formulation in the standard library. Just do the rewrites, and the proofs that the derivative really exists end up as side conditions.
(Note that I used evars above. That is useful if you want to be able to apply a theorem but the expressions are not "obviously" (i.e. computationally) equal to Coq. For similar reasons I find it useful to do eapply is_derive_ext to do rewrites inside the function that is being worked on. Just a hint...)
Also, Coquelicot has some useful tactics that can automate some of the reasoning. For example:
Lemma Derive_x3_plus_cos x: Derive (fun x => x^3 + cos x) x = 3*(x^2) - sin x.
apply is_derive_unique.
auto_derive; auto; ring.
Qed.
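Building on that lemma, here is a small follow-up sketch of my own, in the same context as the script above: once Derive has been rewritten away, the remaining goal is ordinary real arithmetic.
Lemma Derive_x3_plus_cos_use x :
Derive (fun x => x^3 + cos x) x + sin x = 3*(x^2).
Proof.
(* Rewrite with the derivative lemma proved above, then finish by ring. *)
rewrite Derive_x3_plus_cos. ring.
Qed.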

Equality of finite maps in coq (defined using map2)

Suppose I want to define a type of Monomials in Coq. These would be finite maps from some ordered set of variables to nat where, say, x²y³ is represented by the map that sends x to 2, y to 3 and where everything else gets the default value, zero.
The basic definitions don't seem so hard:
Require Import
Coq.FSets.FMapFacts
Coq.FSets.FMapList
Coq.Structures.OrderedType.
Module Monomial (K : OrderedType).
Module M := FMapList.Make(K).
Module P := WProperties_fun K M.
Module F := P.F.
Definition Var : Type := M.key.
Definition Monomial : Type := M.t nat.
Definition mon_one : Monomial := M.empty _.
Definition add_at (a : option nat) (b : option nat) : option nat :=
match a, b with
| Some aa, Some bb => Some (aa + bb)
| Some aa, None => Some aa
| None, Some bb => Some bb
| None, None => None
end.
Definition mon_times (M : Monomial) (M' : Monomial) : Monomial :=
M.map2 add_at M M'.
End Monomial.
At this point, I'd like to prove something like:
Lemma mon_times_comm : forall M M', mon_times M M' = mon_times M' M.
I can see how to prove that the two maps are Equal using the lemma Equal_mapsto_iff, but I'd really like to say that my type really represents monomials and that multiplication is genuinely commutative (and the maps are eq).
I'm pretty new to Coq: is this a reasonable thing to try to prove?
Also, I realise that this might depend on the finite map implementation: if FMapList was the wrong choice and another implementation makes this easier, please point me at that!
Indeed, you are on the right track. The set type you are using doesn't have the property that two sets with the same elements are definitionally equal in Coq. As such sets are implemented as binary trees you may have Node(A, Node(B,C)) <> Node(Node(A,B),C).
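As a tiny illustration of the underlying issue (using plain lists instead of FMapList, my own example), two containers can hold the same elements without being equal as terms, which is why one reasons up to an equivalence such as Equal:
Require Import List.
Import ListNotations.
(* Same elements, different terms: Leibniz equality does not hold. *)
Goal [1; 2] <> [2; 1].
Proof. discriminate. Qed.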
In particular, having a good "set type" is an extremely challenging task in Coq due to several issues; see the answer How to define set in coq without defining set as a list of elements for a bit more discussion.
Doing proper algebra does indeed require a lot of complex infrastructure. @ErikMD's pointer is the right one: you should have a look at math-comp and related papers to get an understanding of the state of the art. Of course, keep experimenting!
Regarding the formalization of monomials and multivariate polynomials in Coq, you could consider using the multinomials library. It is available on OPAM:
$ opam install coq-mathcomp-multinomials
and it naturally proves a similar result to your mon_times_comm lemma:
From mathcomp Require Import ssreflect ssrfun ssrbool eqtype ssrnat seq.
From mathcomp Require Import choice finfun tuple fintype ssralg bigop.
From SsrMultinomials Require Import freeg mpoly.
Lemma test1 (n : nat) (m1 m2 : 'X_{1..n}) : (m1 + m2 = m2 + m1)%MM.
Proof.
move=> *.
by rewrite addmC.
Qed.
Lemma test2 (n : nat) (R : comRingType) (p q : {mpoly R[n]}) :
(p * q = q * p)%R.
Proof.
move=> *.
by rewrite mpoly_mulC.
Qed.
Note that the multinomials library is built upon the MathComp library that is strongly related to the SSReflect extension of the Coq proof language.
Finally, note that this library is very convenient to develop Coq proofs involving multinomial polynomials, but doesn't directly allow computing with these Coq datatypes (Eval vm_compute in ...). If you are also interested in that aspect, you may also want to take a look at the CoqEAL library (and in particular its multipoly.v theory that relies on FMaps).

What does the "functional induction" tactic do in Coq?

I have used functional induction in this proof that I have been trying. As far as I understand, it essentially allows one to perform induction on all parameters of a recursive function "at the same time".
The tactics page states that:
The tactic functional induction performs case analysis and induction following the definition of a function. It makes use of a principle generated by Function
I assume that principle is something technical whose definition I do not know. What does it mean?
In the future, how do I find out what this tactic is doing? (Is there some way to access the Ltac?)
Is there a more canonical way of solving the theorem which I pose below?
Require Import FunInd.
Require Import Coq.Lists.List.
Require Import Coq.FSets.FMapInterface.
Require Import FMapFacts.
Require Import FunInd FMapInterface.
Require Import
Coq.FSets.FMapList
Coq.Structures.OrderedTypeEx.
Module Import MNat := FMapList.Make(Nat_as_OT).
Module Import MNatFacts := WFacts(MNat).
Module Import OTF_Nat := OrderedTypeFacts Nat_as_OT.
Module Import KOT_Nat := KeyOrderedType Nat_as_OT.
(* Consider using https://coq.inria.fr/library/Coq.FSets.FMapFacts.html *)
Definition NatToNat := MNat.t nat.
Definition NatToNatEmpty : NatToNat := MNat.empty nat.
(* We wish to show that map will have only positive values *)
Function insertNats (n: nat) (mm: NatToNat) {struct n}: NatToNat :=
match n with
| O => mm
| S (next) => insertNats next (MNat.add n n mm)
end.
Theorem insertNatsDoesNotDeleteKeys:
forall (n: nat) (k: nat) (mm: NatToNat),
MNat.In k mm -> MNat.In k (insertNats n mm).
intros n.
intros k mm.
intros kinmm.
functional induction insertNats n mm.
exact kinmm.
rewrite add_in_iff in IHn0.
assert(S next = k \/ MNat.In k mm).
auto.
apply IHn0.
exact H.
Qed.
The "principle" just means "an induction principle" - a complete set of cases that must be proved in order to prove some motive "inductively".
The difference between Function and Fixpoint in Coq is that the former creates an induction principle and a recursion principle based on the given definition, and then each return value is passed in (as a lambda, if there are variables bound by case analysis or a recursive call's value is involved). This generally computes more slowly. The principles generated are with respect to a generated Inductive type, each variant of which is a case of the function's call scheme. The latter, Fixpoint, uses Coq's limited termination analysis to justify the well-foundedness of a function's recursion*. Fixpoint is faster, because it uses OCaml's own pattern matching and direct recursion in computation.
How is the induction scheme created? First, all the function parameters are abstracted into a forall. Then, each branch of a match expression creates a new case to prove for the scheme (the number of cases multiplies for each nested match). All the "return" positions in a function are in scope of some number of match expressions' bindings; each binding is an argument to an induction case that must produce the motive on the reconstructed arguments (e.g., in the case of a list A's cons, we have an a : A and a list_a : list A binding, so we must produce a Motive (cons a list_a) result). If there is a recursive call on the list_a argument, then we get a further binding: an induction hypothesis of type Motive list_a.
An actual Coq implementer will probably correct me on the specifics of the above, but it's more or less how induction schemes are inferred from well-founded recursive functions.
This is all fairly rough, and better explained in the documentation on Function and Functional Scheme.
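As a concrete illustration with the question's insertNats (the exact statement printed may vary with the Coq version), the Function command generates the induction principle that functional induction then applies:
(* Generated automatically by the Function command above. *)
Check insertNats_ind.
(* One case per match branch of insertNats, with an induction hypothesis
   for the recursive call in the S branch. *)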
* The termination analysis was maybe too smart for its own good. It needed to be revised (by, IIRC, Maxime Dénès - good job) following a proof of False derived from (a consequence of) the univalence axiom.

Definition by property in coq

I am having trouble with formalizing definitions of the following form: define an integer such that some property holds.
Let's say that I formalized the definition of the property:
Definition IsGood (x : Z) : Prop := ...
Now I need a definition of the form:
Definition Good : Z := ...
assuming that I proved that an integer with the property exists and is unique:
Lemma Lemma_GoodExistsUnique : exists! (x : Z), IsGood x.
Is there an easy way of defining Good using IsGood and Lemma_GoodExistsUnique?
Since the property is defined on integers, it seems that no additional axioms should be necessary. In any event, I don't see how adding something like the axiom of choice can help with the definition.
Also, I am having trouble with formalizing definitions of the following form (I suspect this is related to the problem I described above, but please indicate if that is not the case): for every x, there exists a y, and these y's are different for different x's. For example, how does one define that there are N distinct good integers using IsGood:
Definition ThereAreNGoodIntegers (N : Z) (IsGood : Z -> Prop) := ...?
In real-world mathematics, definitions like that occur every now and again, so this should not be difficult to formalize if Coq is intended to be suitable for practical mathematics.
The short answer to your first question is: in general, it is not possible, but in your particular case, yes.
In Coq's theory, propositions (i.e., Props) and their proofs have a very special status. In particular, it is in general not possible to write a choice operator that extracts the witness of an existence proof. This is done to make the theory compatible with certain axioms and principles, such as proof irrelevance, which says that all proofs of a given proposition are equal to each other. If you want to be able to do this, you need to add this choice operator as an additional axiom to your theory, as in the standard library.
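For reference, here is a sketch of that axiomatic route (IsGood and the existence lemma below are stand-ins for the question's; Coq.Logic.Description provides such a choice operator as an axiom):
Require Import ZArith.
Require Import Coq.Logic.Description.
(* Stand-ins for the question's property and its unique-existence proof. *)
Parameter IsGood : Z -> Prop.
Axiom GoodExistsUnique : exists! x : Z, IsGood x.
(* The axiom turns the Prop-level proof into a sig type, whose witness
   can then be projected out. *)
Definition Good : Z :=
proj1_sig (constructive_definite_description _ _ GoodExistsUnique).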
However, in certain particular cases, it is possible to extract a witness out of an abstract existence proof without resorting to any additional axioms. In particular, it is possible to do this for countable types (such as Z) when the property in question is decidable. You can for instance use the choiceType interface in the Ssreflect library to get exactly what you want (look for the xchoose function).
That being said, I would usually advice against doing things in this style, because it leads to unnecessary complexity. It is probably easier to define Good directly, without resorting to the existence proof, and then prove separately that Good has the sought property.
Definition Good : Z := (* ... *)
Definition IsGood (z : Z) : Prop := (* ... *)
Lemma GoodIsGood : IsGood Good.
Proof. (* ... *) Qed.
Lemma GoodUnique : forall z : Z, IsGood z -> z = Good.
If you absolutely want to define Good with an existence proof, you can also change the statement of Lemma_GoodExistsUnique to use a connective in Type instead of Prop, since that allows you to extract the witness directly using the proj1_sig function:
Lemma Lemma_GoodExistsUnique : {z : Z | IsGood z /\ forall z', IsGood z' -> z' = z}.
Proof. (* ... *) Qed.
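With that formulation, the definition is immediate (a one-line sketch, assuming the lemma above has been proved):
Definition Good : Z := proj1_sig Lemma_GoodExistsUnique.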
As for your second question, yes, it is a bit related to the first point. Once again, I would recommend that you write down a function y_from_x with type Z -> Z that will compute y given x, and then prove separately that this function relates inputs and outputs in a particular way. Then, you can say that the ys are different for different xs by proving that y_from_x is injective.
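For instance, a minimal sketch of this style (y_from_x here is a made-up function, purely for illustration):
Require Import ZArith Lia.
Open Scope Z_scope.
(* A made-up example of computing y directly from x. *)
Definition y_from_x (x : Z) : Z := 2 * x + 1.
(* Separately, prove the property of interest, e.g. injectivity. *)
Lemma y_from_x_inj : forall x1 x2, y_from_x x1 = y_from_x x2 -> x1 = x2.
Proof.
intros x1 x2 H. unfold y_from_x in H. lia.
Qed.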
On the other hand, I'm not sure how your last example relates to this second question. If I understand what you want to do correctly, you can write something like
Definition ThereAreNGoodIntegers (N : Z) (IsGood : Z -> Prop) :=
exists zs : list Z,
Z.of_nat (length zs) = N
/\ NoDup zs
/\ Forall IsGood zs.
Here, Z.of_nat : nat -> Z is the canonical injection from naturals to integers, NoDup is a predicate asserting that a list doesn't contain repeated elements, and Forall is a higher-order predicate asserting that a given predicate (in this case, IsGood) holds of all elements of a list.
As a final note, I would advise against using Z for things that can only involve natural numbers. In your example, you're using an integer to talk about the cardinality of a set, and this number is always a natural number.