I want to use this definition to assume that certain equalities on the members of set R hold:
Definition wiring : Prop :=
  (globalHasVoltage -> (voltageOf voltageIn) = vcc)
  /\
  (globalHasGround ->
    (voltageOf control) = zero
    /\
    (voltageOf ground) = zero
  ).
It seems Coq distinguishes between Prop and bool. What are the differences, and how may I solve this issue?
Also, if this definition implies some other definition (say, let's call it toBeEvaluated), and assuming that conversion between bool and Prop can be done, could this
Definition toBeEvaluated: Prop := (voltageOf voltageIn) = vcc.
be proven using unwraps and tauto? (In particular, will it work with functions which have exact definitions?)
The difference between Prop and bool is that definitions in Prop might be undecidable, while definitions in bool can always be computed (unless you use axioms). Many number types have both bool and Prop equality operators, but R doesn't, because equality in R is in principle undecidable, so one can't write an equality function for R that results in a bool. Imagine, e.g., the equality of different infinite series which sum up to pi: one can't design a general algorithm that decides whether two series result in pi or not. Electronics uses functions like sin, which rely on such infinite series.
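To make the contrast concrete, here is a minimal illustration (expected outputs in comments):

Require Import Reals.

(* Equality on nat can be decided by a boolean function that computes: *)
Compute Nat.eqb 2 2.  (* = true : bool *)

(* There is no such function for R; an equality on R is a Prop and must be
   proven with a tactic rather than computed: *)
Goal (2 + 2 = 4)%R.
Proof. ring. Qed.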
A few options / thoughts:
R is not a very appropriate type for signal levels; e.g., real-world voltage levels like GND or VCC are not mathematically equal everywhere. You could instead work with ranges in Q to express signal levels.
Another appropriate type might be floating-point numbers, which Coq supports (by now also natively). Have a look at the coq-flocq package. For floating-point numbers equality is decidable, but they won't be able to represent a voltage like 1.8V exactly.
Another option is to have an inductive type which has a few well-known signal levels (GND, VCC, ...) but also a constructor for arbitrary R (either classic or constructive); see the sketch below. At least for the well-known levels equality would then be decidable, but not for the arbitrary levels.
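A minimal sketch of that last option; the names SignalLevel and eqb_known are illustrative, not from any library:

Require Import Reals.

Inductive SignalLevel : Type :=
  | GND
  | VCC
  | Analog (level : R).  (* arbitrary real-valued level *)

(* Comparison is decidable for the well-known levels only; any comparison
   involving an Analog level is left undecided (None): *)
Definition eqb_known (a b : SignalLevel) : option bool :=
  match a, b with
  | GND, GND | VCC, VCC => Some true
  | Analog _, _ | _, Analog _ => None
  | _, _ => Some false
  end.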
Even though = is not decidable in R, you can usually prove equality of R expressions, e.g. using the ring or field tactic. But you can't automatically prove that, say, sin(pi/4) = cos(pi/4). Of course one can automate this as well, but such automation will always have limits. This means that your equalities always need to be proven with tactics and can't just be computed.
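For example, ring closes equalities that follow from the ring axioms of R:

Require Import Reals.
Open Scope R_scope.

(* Provable by normalizing both sides to the same polynomial: *)
Goal forall x y : R, (x + y) * (x - y) = x * x - y * y.
Proof. intros; ring. Qed.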
I have a type BoundedNat n, representing natural numbers smaller than n. My current implementation is as follows:
Definition BoundedNat n := {x : nat | x < n}.
Manipulating elements of type BoundedNat n is relatively heavyweight. I constantly need to wrap (using exist n ltac:(lia)) and unwrap (using proj1_sig) elements. How can I best piggyback off the underlying type's notations, equality, ordering, etc.?
Though you can definitely roll up your own implementation of bounded natural numbers, I strongly encourage you to reuse an existing one. My favorite library for that is ssreflect. It contains an ordinal n type family that corresponds to your BoundedNat, defined in fintype.v (doc here). There is a coercion from ordinal to nat so that you can readily reuse most operators on natural numbers transparently -- e.g. you can write i < j directly when i j : ordinal n.
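For instance, a minimal sketch, assuming the mathcomp package is installed ('I_n is the standard notation for ordinal n):

From mathcomp Require Import all_ssreflect.

(* The coercion from 'I_n to nat lets us reuse nat's boolean comparison: *)
Definition ord_lt (n : nat) (i j : 'I_n) : bool := i < j.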
Building terms of ordinal is more complicated, since it requires the proof argument. There is no best way of finding this proof, so the way to proceed depends on the application. For instance, adding a constant to a bounded nat is common enough to deserve a specialized operation in ssreflect:
rshift : forall m n, ordinal n -> ordinal (m + n)
One of the advantages of using ssreflect is that it comes with generic support for subset types like ordinal. For instance, there is an insub : nat -> option (ordinal n) function that succeeds if and only if its argument is bounded by n. This function works not only for nat and ordinal, but for any pair of types connected by the subtype interface: sT is a subtype of T if it is of the form {x : T | P x} for some boolean predicate P. Thus, you can manipulate subtypes with a consistent interface rather than rolling up your own each time.
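A small sketch of how insub can be used (assuming mathcomp; insubd is its variant that takes a default value to return on failure):

From mathcomp Require Import all_ssreflect.

(* insub tests the bound at runtime, returning None when the test fails: *)
Check (insub 2 : option 'I_5).

(* insubd returns the default element instead of None on failure: *)
Check (insubd (ord0 : 'I_5) 7).  (* an 'I_5; 7 is out of bounds here *)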
How can I convert nat to Q (Rational) in Coq?
I want to be able to write things like this:
Require Import Coq.QArith.QArith.
Open Scope Q_scope.
Definition a := 2/3.
When I try to do this, Coq tells me:
Error: The term "2" has type "nat" while it is expected to have type "Q".
You can write something like:
Definition a := Z.of_nat 2 # Pos.of_nat 3.
The # operator is just notation for the Qmake constructor of the Q type. That constructor takes elements of Z and positive as arguments, so you need the casts to be able to put nats in there.
If you're using literal number syntax, you can also use Z and positive directly:
Definition a := 2 # 3.
The difference is that this definition won't mention the conversions from nat; the numbers will already be in the right type, because Coq interprets the number notation as a Z and a positive directly.
I personally don't like the standard Coq rational number library very much, because it uses equivalence rather than Leibniz equality; that is, the Q elements 1 # 1 and 2 # 2 are equivalent as rational numbers, but are not equal according to Coq's equality:
Goal (1 # 1 <> 2 # 2).
congruence.
Qed.
There's a feature called setoid rewriting that allows you to pretend that they are equal. It works by only allowing you to rewrite under functions that you have proved to be compatible with the notion of equivalence on Q. However, there are still cases where it is harder to use than Leibniz equality.
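Concretely (a small sketch; == is Q's equivalence, and Qplus is one of the operations registered as compatible with it):

Require Import Coq.QArith.QArith.
Open Scope Q_scope.

(* Equivalent under ==, even though not Leibniz-equal: *)
Goal 1 # 1 == 2 # 2.
Proof. reflexivity. Qed.

(* Setoid rewriting with == under Qplus: *)
Goal forall p q : Q, p == q -> p + 1 == q + 1.
Proof. intros p q H. rewrite H. reflexivity. Qed.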
You can also try the rat library of the Ssreflect and MathComp packages (see the documentation here). It has a definition of rationals that works with Leibniz equality, and it is more comprehensive than Coq's.
I noticed that in Coq's definition of the rationals the inverse of zero is defined to be zero. (Usually, division by zero is not well-defined/legal/allowed.)
Require Import QArith.
Lemma inv_zero_is_zero: (/ 0) == 0.
Proof. unfold Qeq. reflexivity. Qed.
Why is it so?
Could it cause problems in calculations with rationals, or is it safe?
The short answer is: yes, it is absolutely safe.
When we say that division by zero is not well-defined, what we actually mean is that zero doesn't have a multiplicative inverse. In particular, we can't have a function that computes a multiplicative inverse for zero. However, it is possible to write a function that computes the multiplicative inverse for all other elements, and returns some arbitrary value when such an inverse doesn't exist (e.g. for zero). This is exactly what this function is doing.
Having this inverse operator be defined everywhere means that we'll be able to define other functions that compute with it without having to argue explicitly that its argument is different from zero, making it more convenient to use. Indeed, imagine what a pain it would be if we made this function return an option instead, failing when we pass it zero: we would have to make our entire code monadic, making it harder to understand and reason about. We would have a similar problem when writing a function that requires a proof that its argument is non-zero.
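Concretely, Qinv is total, so applying it to zero needs no side condition at all:

Require Import QArith.
Open Scope Q_scope.

(* No precondition needed; the inverse of 0 simply computes to 0: *)
Eval compute in (/ 0).  (* evaluates to 0, i.e. 0 # 1 *)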
So, what's the catch? Well, when trying to prove anything about a function that uses the inverse operator, we will have to add explicit hypotheses saying that we're passing it an argument that is different from zero, or argue that its argument can never be zero. The lemmas about this function then get additional preconditions, e.g.
forall q, q <> 0 -> q * (/ q) = 1
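In Coq's rational library this shows up, for instance, as Qmult_inv_r, stated with the == equivalence and a non-zero precondition:

Require Import QArith.

Check Qmult_inv_r.
(* Qmult_inv_r : forall x : Q, ~ x == 0 -> x * / x == 1 *)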
Many other libraries are structured like that, cf. for instance the definition of the field axioms in the algebra library of MathComp.
There are some cases where we want to internalize the additional preconditions required by certain functions as type-level constraints. This is what we do for instance when we use length-indexed vectors and a safe get function that can only be called on numbers that are in bounds. So how do we decide which one to go for when designing a library, i.e. whether to use a rich type with a lot of extra information and prevent bogus calls to certain functions (as in the length-indexed case) or to leave this information out and require it as explicit lemmas (as in the multiplicative inverse case)? Well, there's no definite answer here, and one really needs to analyze each case individually and decide which alternative will be better for that particular case.
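The length-indexed vector example can be seen in Coq's standard library: Vector.nth indexes a vector with Fin.t m (the type of naturals below m), so an out-of-bounds access is a type error rather than a proof obligation:

Require Import Coq.Vectors.Vector.

(* The bound m appears in the type of the index: *)
Check @nth.
(* @nth : forall (A : Type) (m : nat), t A m -> Fin.t m -> A *)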
When I prove some theorem, my goal evolves as I apply more and more tactics. Generally speaking, the goal tends to split into subgoals, which are simpler. At some final point, Coq decides that the goal is proven. What may this "proven" goal look like? These goals seem to be fine:
a = a. (* Any object is identical to itself (?) *)
myFunc x y = myFunc x y. (* Result of the same function with the same params
is always the same (?) *)
What else can appear here, or could it be that these examples are fundamentally wrong?
In other words, when I finally apply reflexivity, Coq just says **Got it** without any explanation. Is there any way to get more details about what it actually did or why it decided that the goal is proven?
You're actually facing a very general notion that seems not so general because Coq has some user-friendly facility for reasoning with equality in particular.
In general, Coq accepts a goal as solved as soon as it receives a term whose type is the type of the goal: it has been convinced the proposition is true because it has been convinced the type that this proposition describes is inhabited, and what convinced it is the actual witness you helped build along your proof.
For the particular case of inductive datatypes, the two ways you are going to be able to prove the proposition P a b c are:
by constructing a term of type P a b c, using the constructors of the inductive type P, and providing all the necessary arguments.
or by reusing an existing proof or an axiom in the environment whose type you can get to match P a b c.
For the even more particular case of equality proofs (equality is just an inductive datatype in Coq), the same two ways I list above degenerate to this:
the only constructor of equality is eq_refl, and to apply it you need to show that the two sides are judgementally equal. For most purposes, this corresponds to goals that look like T a b c = T a b c, but it is actually a slightly broader notion of equality (see below). For these, all you have to do is apply the eq_refl constructor. In a nutshell, that is what reflexivity does!
the second case consists in proving that the equality holds because you have other equalities in your context, nothing special here.
Now one part of your question was: when does Coq accept that two sides of an equality are equal by reflexivity?
If I am not mistaken, the answer is when the two sides of the equality are αβδιζ-convertible.
What this roughly means is that there is a way to make them syntactically equal by repeated applications of:
α : consistent renaming of bound variables
β : computing reducible expressions
δ : unfolding definitions
ι : simplifying matches
ζ : expanding let-bound expressions
[someone please correct me if more rules apply or if I got one wrong]
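For instance, both of the following goals are closed by reflexivity, because the two sides become syntactically equal after such reductions:

Goal (fun x : nat => 0 + x) 2 = 2.
Proof. reflexivity. Qed.  (* β exposes 0 + 2, then δι on plus yields 2 *)

Goal (let y := 1 in y + 1) = 2.
Proof. reflexivity. Qed.  (* ζ yields 1 + 1, which reduces to 2 *)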
By contrast, some of the things that are not captured by these rules are:
equality of functions that do more or less the same thing in different ways:
(fun x => 0 + x) = (fun x => x + 0)
quicksort = mergesort
equality of terms that are stuck reducing but would be equal:
forall n, 0 + n = n + 0
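That last goal is stuck because plus recurses on its first argument, and n + 0 has a variable there, so reflexivity fails; a proof by induction works instead:

Goal forall n, 0 + n = n + 0.
Proof.
  induction n as [|n IH].
  - reflexivity.                        (* both sides reduce to 0 *)
  - simpl. rewrite <- IH. reflexivity.  (* S n = S (n + 0), then use IH *)
Qed.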
I'm interested in multi-level data integrity checking and correcting, where multiple error-correcting codes are used (they can be 2 of the same type of code). I'm under the impression that a system using 2 codes would achieve maximum effectiveness if the 2 hash codes being used were orthogonal to each other.
Is there a list of which codes are orthogonal to what? Or do you need to use the same hashing function but with different parameters or usage?
I expect that the first-level ECC will be a Reed-Solomon code, though I do not actually have control over this first function, hence I cannot use a single code with improved capabilities.
Note that I'm not concerned with encryption security.
Edit: This is not a duplicate of "When are hash functions orthogonal to each other?", since that question essentially asks what the definition of orthogonal hash functions is. I want examples of hash functions that are orthogonal.
I'm not certain it is even possible to enumerate all orthogonal hash functions. However, you only asked for some examples, so I will endeavour to provide some as well as some intuition as to what properties seem to lead to orthogonal hash functions.
From a related question, these two functions are orthogonal to each other:
Domain: Reals --> Codomain: Reals
f(x) = x + 1
g(x) = x + 2
This is a pretty obvious case. It is easier to determine orthogonality if the hash functions are, like these two, both perfect hash functions. Please note that the term "perfect" is meant in the mathematical sense, not in the sense that these should ever be used as hash functions.
Perfect hash functions satisfy the orthogonality requirement more or less trivially: whenever the functions are injective they are perfect hash functions and are thus orthogonal. Similar examples:
Domain: Integers --> Codomain: Integers
f(x) = 2x
g(x) = 3x
In the previous case, the functions are injective but not bijective: each element of the domain maps to exactly one element of the codomain, but many elements of the codomain are not mapped to at all. These are still adequate for both perfect hashing and orthogonality. (Note that if the Domain/Codomain were Reals, these would be bijections.)
Functions that are not injective are trickier to analyze. Recall the definition from the linked question: f and g are orthogonal iff f(x) = f(y) and g(x) = g(y) together imply x = y. An immediate consequence is that a pair is already orthogonal whenever at least one of the functions is injective, no matter what the other does: a simultaneous collision in both functions is in particular a collision in the injective one, which forces x = y. For example, this pair is orthogonal even though g is not injective:
Domain: Reals --> Codomain: Reals
f(x) = e^x // Injective -- every x produces a unique value
g(x) = x^2 // Not injective -- every number other than 0 can be produced by two different x's
So one trick is thus to check whether at least one of the functions is injective. But what if neither is injective? I do not presently know of an algorithm for the general case that will determine this other than brute force; a brute-force check over a finite slice of the domain is sketched below.
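Here is what such a brute-force check might look like in Coq (orthogonal_upto and the bound n are illustrative names, not from any library; the definition checked is the one recalled above):

Require Import List Arith.

(* Checks, for all x y in [0, n), that a simultaneous collision of f and g
   only happens when x = y: *)
Definition orthogonal_upto (n : nat) (f g : nat -> nat) : bool :=
  forallb (fun x =>
    forallb (fun y =>
      negb ((f x =? f y) && (g x =? g y)) || (x =? y))
      (seq 0 n))
    (seq 0 n).

(* Both functions below are injective, so every check passes: *)
Compute orthogonal_upto 10 (fun x => 2 * x) (fun x => 3 * x).  (* = true *)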
Domain: Naturals --> Codomain: Naturals
j(x) = ceil(sqrt(x))
k(x) = ceil(x / 2)
Neither function is injective, in this case because ceil composed with sqrt (respectively with division by 2) maps several inputs to the same output, on top of the restricted domain. (In practice most hash functions will not have a domain more permissive than integers.) Testing out small values shows that k can have non-unique results where j does not:
j(1) = ceil(sqrt(1)) = ceil(1) = 1
j(2) = ceil(sqrt(2)) = ceil(~1.41) = 2
k(1) = ceil(1 / 2) = ceil(0.5) = 1
k(2) = ceil(2 / 2) = ceil(1) = 1
Note, however, that for larger inputs the two functions do share collision pairs, e.g. j(5) = j(6) = 3 and k(5) = k(6) = 3, so j and k are in fact not orthogonal.
But what about these functions?
Domain: Integers --> Codomain: Reals
m(x) = cos(x^3) % 117
n(x) = ceil(e^x)
In these cases, neither of the functions is injective (due to the modulus and the ceil), but when do they have a collision? More importantly, for what pairs of values of x do they both have a collision? These questions are hard to answer. I would suspect they are not orthogonal, but without a specific counterexample, I'm not sure I could prove that.
These are not the only hash functions you could encounter, of course. So the trick to determining orthogonality is: first, see whether at least one of the functions is injective; if so, they are orthogonal. If neither is injective, see if you can identify the pieces of each function that cause it to not be injective, see if you can determine their periods or special cases (such as x = 0), and try to come up with counterexamples. Failing that, visit Mathematics Stack Exchange and hope someone can tell you where they break orthogonality, or prove that they don't.