I am struggling with the plus operator between two propositions (or maybe types) in Coq. I have figured out that it is something like "or" (maybe "xor"), and I think it says that something is decidable, but I cannot understand its complete meaning, or where this sign comes from in classical mathematics.
P.S. Of course I have already googled and researched this, but I couldn't find the complete, sophisticated answer I want.
That's the sum datatype, where A + B is basically A or B. The main difference from A \/ B is that it lives in Type, so it has computational content. That is to say, given A \/ B you cannot produce a boolean that is true in the A case and false in the B case.
Another way to see it is that for A B : Prop, A + B -> A \/ B holds, but not the converse.
Prop is a special, impredicative sort in Coq; I recommend reading the manual about it.
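For instance, here is a minimal sketch of both points (the names to_bool and to_or are mine):
(* From a sum we can compute a boolean, because sum lives in Type... *)
Definition to_bool (A B : Prop) (x : A + B) : bool :=
  match x with
  | inl _ => true
  | inr _ => false
  end.
(* ...and every sum can be turned into a disjunction. *)
Definition to_or (A B : Prop) (x : A + B) : A \/ B :=
  match x with
  | inl a => or_introl a
  | inr b => or_intror b
  end.
The converse direction is rejected: matching on a proof of A \/ B to build a bool falls foul of Prop's elimination restriction.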
I understand the advantage of the IO monad and the List monad over Functor; however, I don't understand the advantage of the Option/Maybe monad over Functor.
Is it simply a matter of integration with the language's type system?
or
What is the advantage of the Option/Maybe monad over Functor in a specific use?
P.S. Asking about the advantage in a specific use is not opinion-based: if there is one, it can be pointed out without any subjective aspect.
P.S.P.S. Some members here are eager to insist, repeatedly, that
Option is both a functor and a monad?
should be the answer, or that this question is a duplicate of it, but it actually is not.
I already know the basics, such as
Every monad is an applicative functor and every applicative functor is a functor
from the accepted answer there, and that is not what I'm asking here.
There is an excellent answer here that is not included in the previous Q&A.
The aspect, detail, and resolution of each question are quite different, so please avoid "bundling" different things together in a rough manner here.
Let's look at the types.
fmap :: Functor f => (a -> b) -> (f a -> f b)
(<*>) :: Applicative f => f (a -> b) -> (f a -> f b)
flip (>>=) :: Monad f => (a -> f b) -> (f a -> f b)
For a functor, we can apply an ordinary function a -> b to a value of type f a to get a value of type f b. The function never gets a say in what happens to the f part, only the inside. Thinking of functors as sort of box-like things, the function in fmap never sees the box itself, just the inside, and the value gets taken out and put back into the exact same box it started in.
An applicative functor is slightly more powerful. Now, we have a function which is also in a box. So the function gets a say in the f part, in a limited sense. The function and the input each have f parts (which are independent of each other) which get combined into the result.
A monad is even more powerful. Now, the function does not, a priori, have an f part. The function takes a value and produces a new box. Whereas in the Applicative case, our function's box and our value's box were independent, in the Monad case, the function's box can depend on the input value.
Now, what does all this mean? You've asked me to focus on Maybe, so let's talk about Maybe in concrete terms.
fmap :: (a -> b) -> (Maybe a -> Maybe b)
(<*>) :: Maybe (a -> b) -> (Maybe a -> Maybe b)
flip (>>=) :: (a -> Maybe b) -> (Maybe a -> Maybe b)
As a reminder, Maybe looks like this.
data Maybe a = Nothing | Just a
A Maybe a is a value which may or may not exist. From a functor perspective, we'll generally think of Nothing as some form of failure and Just a as a successful result of type a.
Starting with fmap, the Functor instance for Maybe allows us to apply a function to the inside of the Maybe, if one exists. The function gets no say over the success or failure of the operation: A failed Maybe (i.e. a Nothing) must remain failed, and a successful one must remain successful (obviously, we're glossing over undefined and other denotational semantic issues here; I'm assuming that the only way a function can fail is with Nothing).
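For instance, a couple of quick illustrations:
fmap (+1) (Just 2)  -- Just 3
fmap (+1) Nothing   -- Nothing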
Now (<*>), the applicative operator, takes a Maybe (a -> b) and a Maybe a. Either of those two might have failed. If either of them did, then the result is Nothing, and only if the two both succeeded do we get a Just as our result. This allows us to curry operations. Concretely, if I have a function of the form g :: a -> b -> c and I have values ma :: Maybe a and mb :: Maybe b, then we might want to apply g to ma and mb. But when we start to do that, we have a problem.
fmap g ma :: Maybe (b -> c)
Now we've got a function that may or may not exist. We can't fmap that over mb, because a nonexistent function (a Nothing) can't be an argument to fmap. The problem is that we have two independent Maybe values (ma and mb in our example) which are fighting, in some sense, for control. The result should only exist if both are Just. Otherwise, the result should be Nothing. It's sort of a Boolean "and" operation, in that if any of the intermediates fail, then the whole calculation fails. (Note: If you're looking for a Boolean "or", where any individual success can recover from prior failure, then you're looking for Alternative)
So we write
(fmap g ma) <*> mb :: Maybe c
or, using the more convenient synonym Haskell provides for this purpose,
g <$> ma <*> mb :: Maybe c
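Concretely, with g = (+):
(+) <$> Just 2 <*> Just 3   -- Just 5
(+) <$> Nothing <*> Just 3  -- Nothing
(+) <$> Just 2 <*> Nothing  -- Nothing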
Now, the key word in the above situation is independent. ma and mb have no say over the other's success or failure. This is good in many cases, because code like this can often be parallelized (there are very efficient command line argument parsing libraries that exploit just this property of Applicative). But, obviously, it's not always what we want.
Enter Monad. In the Maybe monad, the provided function produces a value of type Maybe b based on the input a. The Maybe part of the a and of the b are no longer independent: the latter can depend directly on the former.
For example, take the classic example of Maybe: a square root function. We can't take a square root of a negative number (let's assume we're not working with complex numbers here), so our hypothetical square root looks like
sqrt :: Double -> Maybe Double
sqrt x | x < 0     = Nothing
       | otherwise = Just (Prelude.sqrt x)
Now, suppose we've got some number r. But r isn't just a number. It came from earlier in our computation, and our computation might have failed. Maybe it did a square root earlier, or tried to divide by zero, or something else entirely, but it did something that has some chance of producing a Nothing. So r is Maybe Double, and we want to take its square root.
Obviously, if r is already Nothing, then its square root is Nothing; we can't possibly take a square root if we've already failed to compute everything else. On the other hand, if r is a negative number, then sqrt is going to fail and produce Nothing despite the fact that r is itself Just. So what we really want is
case r of
  Nothing -> Nothing
  Just r' -> sqrt r'
And this is exactly what the Monad instance for Maybe does. That code is equivalent to
r >>= sqrt
The result of this entire computation (and, namely, whether or not it is Nothing or Just) depends not just on whether or not r is Nothing but also on r's actual value. Two different Just values of r can produce success or failure depending on what sqrt does. We can't do that with just a Functor, we can't even do that with Applicative. It takes a Monad.
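For instance, chaining two square roots with the sqrt defined above:
Just 16   >>= sqrt >>= sqrt  -- Just 2.0
Just (-4) >>= sqrt           -- Nothing
Nothing   >>= sqrt           -- Nothing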
By subtyping, here I mean implicit coercion between types, not sig.
In programming languages, sum types carry associated data, and it matters which variant is being used, so e.g. A cannot be a subtype of Either<A,B> in Haskell. The same is true for decidable sums in Coq. That is, A cannot be a subtype of A + B in general, since A + A carries one bit more data than A.
However, Props carry no data at runtime, so why doesn't Coq consider A a subtype of A \/ B and allow using each member of A as a member of A \/ B without an explicit or_introl? I think it would make proofs shorter and more generic. Is there a fundamental limit or an unsoundness problem that makes this impossible, or is it just an unneeded feature?
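For concreteness, the explicit step I would like to avoid looks like this (a small sketch):
Goal forall A B : Prop, A -> A \/ B.
Proof. intros A B a. exact (or_introl a). Qed.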
I think the main issue is indeed one of utility:
If you're trying to prove A \/ B, you probably aren't building a proof of A or B by hand, but rather applying a bunch of powerful techniques with no regard for efficiency or conciseness, for exactly the reasons you gave.
This means you can apply powerful tactics such as auto or intuition, or even firstorder if you're feeling lucky.
These tactics prove much more than A -> A \/ B, and often perform many steps, including that one, which makes the coercion somewhat useless (and perhaps even confusing!).
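For example, the single step the coercion would save is already handled automatically:
Goal forall A B : Prop, A -> A \/ B.
Proof. tauto. Qed.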
How can I get Coq to let me prove syntactic type inequality?
Negating univalence
I've read the answer to this question, which suggests that if you assume univalence, then the only way to prove type inequality is through cardinality arguments.
My understanding is: if Coq's logic is consistent with univalence, it should also be consistent with the negation of univalence. While I get that negating univalence really only says that some isomorphic types are non-equal, I believe it should be possible to express that no isomorphic types (that aren't identical) are equal.
Inequality for type constructors
In effect, I would like Coq to treat types and type constructors as inductive definitions, and do a typical inversion-style argument to say that my two very clearly different types are not equal.
Can it be done? This would need to be:
Usable for concrete types, i.e. no type variables, and so
Not necessarily decidable
That strikes me as being weak enough to be consistent.
Context
I have a polymorphic judgement (effectively an inductive type with parameters, of type forall X : Type, X -> Prop) for which the choice of X is decided by the constructors of the judgement.
I want to prove that, for all judgements for a certain choice of X (say X = nat) some property holds, but if I try to use inversion, some constructors give me hypotheses like nat = string (for example). These type equality hypotheses appear even for types with the same cardinality, so I can't (and don't want to) make cardinality arguments to produce a contradiction.
The unthinkable...
Should I produce an Inductive closed-world encoding of the types I care about, and let that be the polymorphic variable of the above judgement?
If you want to use type inequality, I think that the best you can do is to assume axioms for every pair of types you care about:
Axiom nat_not_string : nat <> string.
Axiom nat_not_pair : forall A B, nat <> A * B.
(* ... *)
In Coq, there is no first-class way of talking about the name of an inductively defined type, so there shouldn't be a way of stating this family of axioms as a single assumption. Naturally, you might be able to write a Coq plugin in OCaml to generate those axioms automatically every time an inductive type is defined. But the number of axioms you need grows quadratically with the number of types, so I think it would quickly get out of hand.
Your "unthinkable" approach is probably the most convenient in this case, actually.
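For illustration, a minimal sketch of such a closed-world encoding (the names ty and denote are mine):
Require Import Coq.Strings.String.
Inductive ty : Type :=
| TNat : ty
| TString : ty
| TPair : ty -> ty -> ty.
Fixpoint denote (t : ty) : Type :=
  match t with
  | TNat      => nat
  | TString   => string
  | TPair a b => (denote a * denote b)%type
  end.
(* Inequality of codes is now an ordinary inductive-type fact: *)
Lemma TNat_not_TString : TNat <> TString.
Proof. discriminate. Qed.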
(Nit: "if Coq's logic is consistent with univalence, it should also be consistent with the negation of univalence". Yes, but only because Coq cannot prove univalence.)
I am formalizing a grammar which is essentially one over boolean expressions. In Coq, you can express boolean-like things in Prop, or more explicitly in bool.
So for example, I could write:
true && true
Or
True /\ True
The problem is that in proofs (which are what I really care about) I can do case analysis in the bool domain, but in Prop this is not possible (since its members are not enumerable, I suppose). Giving up this tactic, and similar rewriting tactics, seems like a huge drawback even for very simple proofs.
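For example, here is the kind of brute-force case analysis I mean (a small sketch):
Goal forall b c : bool, b && c = c && b.
Proof. destruct b; destruct c; reflexivity. Qed.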
In general, what situations would one choose Prop over bool for formalizing? I realize this is a broad question, but I feel like this is not addressed in the Coq manual sufficiently. I am interested in real world experience people have had going down both routes.
There are lots of different opinions on this. My personal take is that you are often better off not making this choice: it makes sense to have two versions of a property, one in Prop, the other one in bool.
Why would you want this? As you pointed out, booleans support case analysis in proofs and functions, which general propositions do not. However, Prop is more convenient to use in certain cases. Suppose you have a type T with finitely many values. We can write a procedure
all : (T -> bool) -> bool
that decides whether a boolean property P : T -> bool holds of all elements of T. Imagine that we know that all P = true, for some property P. We might want to use this fact to conclude that P x = true for some value x. To do this, we need to prove a lemma about all:
allP : forall P : T -> bool,
  all P = true <-> (forall x : T, P x = true)
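For instance, a minimal sketch of all and allP when T is bool (assuming the standard Bool library):
Require Import Bool.
Definition all (P : bool -> bool) : bool := P true && P false.
Lemma allP : forall P : bool -> bool,
  all P = true <-> (forall x : bool, P x = true).
Proof.
  intros P; unfold all; split.
  - intros H x. apply andb_true_iff in H. destruct H as [Ht Hf].
    destruct x; assumption.
  - intros H. rewrite !H. reflexivity.
Qed.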
This lemma connects two different formulations of the same property: a boolean one and a propositional one. When reasoning about all in a proof, we can invoke allP to convert freely between the two. We can also have different conversion lemmas:
allPn : forall P,
  all P = false <-> (exists x, P x = false)
In fact, we are free to choose any Coq proposition whatsoever to relate to a boolean computation (as long, of course, as we can prove that the two are logically equivalent). For instance, if we would like to have a custom induction principle associated with a boolean property, we can look for an equivalent formulation as an inductively defined proposition.
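For instance, a sketch of an inductive counterpart to the boolean test Nat.even (I show one direction of the conversion lemma; the converse goes by strong induction on n):
Inductive evenP : nat -> Prop :=
| evenP0  : evenP 0
| evenPSS : forall n, evenP n -> evenP (S (S n)).
(* evenP comes with its own induction principle, evenP_ind. *)
Lemma evenP_even : forall n, evenP n -> Nat.even n = true.
Proof. intros n H; induction H; simpl; auto. Qed.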
The Mathematical Components library is a good example of a development that follows this style. Indeed, because this style is so pervasive there, the library provides a special view mechanism for writing conversion lemmas like the one above and applying them. In plain Coq, we can also use the rewrite tactic to apply logical equivalences more conveniently.
Of course, there are many situations where it does not make sense to have two formulations of the same property. Sometimes, you are forced to use Prop, because the property you care about is undecidable. Sometimes, you might feel that you wouldn't gain anything by writing your property in Prop, and may keep it only as a boolean.
In addition to the Software Foundations chapter linked above, this answer discusses the difference between bool and Prop in more depth.
Why doesn't Coq accept this lemma as a hint?
Lemma contr : forall p1 : Prop, False -> p1.
Proof. tauto. Qed.
Hint Resolve contr : Hints.
Or other lemmas that end with a Prop variable?
Looking at the documentation for Hint Resolve (http://coq.inria.fr/distrib/V8.4/refman/Reference-Manual011.html#command232):
term cannot be used as a hint
The type of term contains products over variables which do not appear in the conclusion. A typical example is a transitivity axiom. In that case the apply tactic fails, and thus is useless.
However, it does not seem (to me) to be the case here, as the only product is over p1 which does appear in the conclusion.
The problem here seems to be that your conclusion has no shape at all. auto seems to work by matching the shape of your goal against the shape of the conclusions of the hints in the database. Here, it might be upset by the fact that your goal is just a quantified variable. I am not sure whether this is a reasonable thing to trip over, but this particular instance might be the only case where you can have such a shapeless conclusion (with, obviously, the same situation for Set and Type), so it's not a big deal.
So, you probably don't need this as a hint... just use tactics such as tauto or intuition, or perform any kind of elimination/destruction/inversion on the value of type False in your environment. Not very satisfactory, but eh :\
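If you really do want automation here, one possible workaround (a sketch, not the only option) is an Extern hint that eliminates any False hypothesis:
Hint Extern 1 =>
  match goal with
  | H : False |- _ => destruct H
  end : Hints.
Goal forall p1 : Prop, False -> p1.
Proof. auto with Hints. Qed.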