Polymorphic equality in Coq

I cannot find a standard library == function which is overloaded and returns a boolean (or a sumbool, or something I can compute with). I would like to be able to do
3==5
and
"hello" == "hello"
without having to specify the type of the arguments. I would be surprised if Coq does not have this feature for equality types; could someone tell me where to find it? I have a feeling it has something to do with ssreflect but I cannot figure it out.
Thanks.

Ssreflect has the eqType class, which has exactly what you need:
From mathcomp Require Import ssreflect ssrfun ssrbool eqtype.
Check (3 == 5).
Most standard types have an equality operator defined in ssreflect. Unfortunately, strings are not one of them, though it is not too hard to roll your own. (The Deriving library ships with an instance, but it is not marked as stable yet.)
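For example, here is a minimal sketch of a string instance, assuming a pre-Hierarchy-Builder MathComp (1.x), where instances are built with EqMixin and Canonical; the standard library's String.eqb_spec is exactly the reflection lemma that EqMixin expects:
From mathcomp Require Import ssreflect ssrfun ssrbool eqtype.
Require Import String.

(* String.eqb_spec : forall s1 s2, reflect (s1 = s2) (String.eqb s1 s2) *)
Definition string_eqMixin := EqMixin String.eqb_spec.
Canonical string_eqType := EqType string string_eqMixin.

Check ("hello"%string == "hello"%string).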

Related

Use of built-in function in Coq

I want to use the lemma count_occ_In, which is about the count_occ function from the standard library. I have imported the library in my Coq script, but I am still unable to use it. If I copy count_occ and eq_dec from the library into the script, then it works. My question is: why should I have to redefine the function when I have included the library module? Please guide me on how to state the lemma by adding the library module only (not defining the functions again).
With the additional information this should work for you:
Require Import List Arith.
Import ListNotations.
Check count_occ.
Search nat ({?x = ?y} + {?x <> ?y}).
Definition count_occ_nat := count_occ Nat.eq_dec.
Definition count_occ_In_nat := count_occ_In Nat.eq_dec.
Check count_occ_nat.
Check count_occ_In_nat.
See how I used Check to find out which arguments count_occ takes, and how I used Search to find a suitable decidability-of-equality function.
count_occ is written like this so that it works for lists over any type with decidable equality; but then, to use it, you need a proof that equality is decidable for your type, and you have to give this proof explicitly.
In modern definitions one makes such arguments implicit and uses type classes to fill in information like decidable equality automatically, but count_occ takes an explicit argument for it, so you have to supply the proof yourself.
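For example, with the definitions above:
Compute count_occ_nat [1; 2; 1; 3] 1.
(* = 2 : nat *)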

Type inequality without cardinality arguments

How can I get Coq to let me prove syntactic type inequality?
Negating univalence
I've read the answer to this question, which suggests that if you assume univalence, then the only way to prove type inequality is through cardinality arguments.
My understanding is that if Coq's logic is consistent with univalence, it should also be consistent with the negation of univalence. While I get that the negation of univalence really only says that some isomorphic types are non-equal, I believe it should be possible to express that no isomorphic types (that aren't identical) are equal.
Inequality for type constructors
In effect, I would like Coq to treat types and type constructors as inductive definitions, and do a typical inversion-style argument to say that my two very clearly different types are not equal.
Can it be done? This would need to be:
Usable for concrete types, i.e. no type variables, and so
Not necessarily decidable
That strikes me as being weak enough to be consistent.
Context
I have a polymorphic judgement (effectively an inductive type with parameters, of type forall X : Type, X -> Prop) for which the choice of X is decided by the constructors of the judgement.
I want to prove that, for all judgements for a certain choice of X (say X = nat) some property holds, but if I try to use inversion, some constructors give me hypotheses like nat = string (for example). These type equality hypotheses appear even for types with the same cardinality, so I can't (and don't want to) make cardinality arguments to produce a contradiction.
The unthinkable...
Should I produce an Inductive closed-world encoding of the types I care about, and let that be the polymorphic variable of the above judgement?
If you want to use type inequality, I think that the best you can do is to assume axioms for every pair of types you care about:
Axiom nat_not_string : nat <> string.
Axiom nat_not_pair : forall A B, nat <> A * B.
(* ... *)
In Coq, there is no first-class way of talking about the name of an inductively defined type, so there shouldn't be a way of stating this family of axioms with a single assumption. Naturally, you might be able to write a Coq plugin in OCaml that generates those axioms automatically every time an inductive type is defined. But the number of axioms you need grows quadratically in the number of types, so I think it would quickly get out of hand.
Your "unthinkable" approach is probably the most convenient in this case, actually.
(Nit: "if Coq's logic is consistent with univalence, it should also be consistent with the negation of univalence". Yes, but only because Coq cannot prove univalence.)

refining types with modules and typeclasses

I'm not very familiar with modules or typeclasses in Coq, but based on my basic understanding of them, I believe I have a problem where I should use them.
I want to define a sum function that adds all the elements of a polymorphic list t. It should only work when the type of elements of the list (t) has some definition for plus_function and plus_id_element. The definition of sum I'd like to write would be something like this:
Fixpoint sum {t : Type} (l : list t) :=
  match l with
  | Nil => plus_id_element t
  | Cons x xs => ((plus_function t) x (sum xs))
  end.
I don't know what's the usual way to achieve something like this in Coq. I believe in Idris, for example, one would replace t by an interface/typeclass that defines plus_function and plus_id_element. Although typeclasses exist in Coq, I haven't seen them used very often and I believe people usually use modules instead to achieve something similar. I'm not sure if I'm mixing unrelated concepts. Are modules and typeclasses useful for this problem? What would be the recommended approach?
Indeed typeclasses were designed for this exact task, but then you face the hard problem of designing the particular class hierarchy that will fit your problem well.
Coq doesn't provide a standard hierarchy of mathematical operators, and there are many delicate trade-offs in building one with regard to the choice of classes, operators, and axioms.
I thus recommend starting from a mature development such as the MathComp library. MathComp is based on "canonical structures", a notion similar to type classes, and provides quite a few ready-to-use classes. The "Packaging Mathematical Structures" paper contains more details, but the basic idea is that types pack their operators. For example, if you want to reason about abelian groups you can use the zmodType structure:
From mathcomp Require Import all_ssreflect all_algebra.
Open Scope ring_scope.
Definition sum (A: zmodType) (s : seq A) := foldr +%R 0 s.
to define the sum over a list of elements of the abelian group A. Even better, don't define your own sum operator and just use the one provided by the library: \sum_(x <- s) x:
Lemma eq_sum (A: zmodType) (s : seq A) : sum s = \sum_(x <- s) x.
Proof. by rewrite unlock. Qed.
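For comparison, here is a minimal typeclass sketch in case you would rather not depend on MathComp (the class name PlusMonoid is made up, and no algebraic laws are stated):
Class PlusMonoid (A : Type) := {
  plus_id_element : A;
  plus_function : A -> A -> A
}.

Instance nat_PlusMonoid : PlusMonoid nat :=
  { plus_id_element := 0; plus_function := Nat.add }.

Fixpoint sum' {A : Type} `{PlusMonoid A} (l : list A) : A :=
  match l with
  | nil => plus_id_element
  | cons x xs => plus_function x (sum' xs)
  end.

Compute sum' (1 :: 2 :: 3 :: nil). (* = 6 *)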

Coinductive principles corresponding to the *_rect family of functions

Defining a new type foo gives me a recursion principle foo_rect, which elegantly abstracts over fix. Could a coinductive equivalent (abstracting over cofix) be defined by "flipping the arrows" somehow?
This isn't possible due to the non-modular way that Coq checks the guardedness condition for cofixpoints. Fortunately, the Paco library solves this problem by doing exactly what you want as long as you phrase your definitions in a particular way.
There is a good tutorial for the Paco library here: http://plv.mpi-sws.org/paco/tutorial.html
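For a taste of the style Paco expects, here is a sketch (assuming the paco opam package): the inductive sim_gen is the "one step of observation" functor, and paco2 closes it coinductively, so no explicit cofix appears:
From Paco Require Import paco.

CoInductive stream := SCons : nat -> stream -> stream.

Inductive sim_gen (sim : stream -> stream -> Prop) : stream -> stream -> Prop :=
| sim_gen_cons n s1 s2 : sim s1 s2 -> sim_gen sim (SCons n s1) (SCons n s2).

(* Bisimilarity as the greatest fixed point of sim_gen: *)
Definition sim (s1 s2 : stream) : Prop := paco2 sim_gen bot2 s1 s2.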

What are practical uses of applicative style?

I am a Scala programmer, learning Haskell now. It's easy to find practical use cases and real-world examples for OO concepts such as decorators, the strategy pattern, etc. Books and the interwebs are filled with them.
I came to the realization that this somehow is not the case for functional concepts. Case in point: applicatives.
I am struggling to find practical use cases for applicatives. Almost all of the tutorials and books I have come across so far provide the examples of [] and Maybe. I expected applicatives to be more applicable than that, seeing all the attention they get in the FP community.
I think I understand the conceptual basis for applicatives (maybe I am wrong), and I have waited long for my moment of enlightenment. But it doesn't seem to be happening. Never while programming, have I had a moment when I would shout with a joy, "Eureka! I can use applicative here!" (except again, for [] and Maybe).
Can someone please guide me how applicatives can be used in a day-to-day programming? How do I start spotting the pattern? Thanks!
Applicatives are great when you've got a plain old function of several variables, and you have the arguments but they're wrapped up in some kind of context. For instance, you have the plain old concatenate function (++) but you want to apply it to 2 strings which were acquired through I/O. Then the fact that IO is an applicative functor comes to the rescue:
Prelude Control.Applicative> (++) <$> getLine <*> getLine
hi
there
"hithere"
Even though you explicitly asked for non-Maybe examples, it seems like a great use case to me, so I'll give an example. You have a regular function of several variables, but you don't know if you have all the values you need (some of them may have failed to compute, yielding Nothing). So essentially because you have "partial values", you want to turn your function into a partial function, which is undefined if any of its inputs is undefined. Then
Prelude Control.Applicative> (+) <$> Just 3 <*> Just 5
Just 8
but
Prelude Control.Applicative> (+) <$> Just 3 <*> Nothing
Nothing
which is exactly what you want.
The basic idea is that you're "lifting" a regular function into a context where it can be applied to as many arguments as you like. The extra power of Applicative over just a basic Functor is that it can lift functions of arbitrary arity, whereas fmap can only lift a unary function.
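For instance, a three-argument constructor lifts just as easily as a binary function (Person is made up for illustration):
data Person = Person String Int String deriving Show

maybePerson :: Maybe Person
maybePerson = Person <$> Just "Ada" <*> Just 36 <*> Just "London"
-- Just (Person "Ada" 36 "London")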
Since many applicatives are also monads, I feel there are really two sides to this question.
Why would I want to use the applicative interface instead of the monadic one when both are available?
This is mostly a matter of style. Although monads have the syntactic sugar of do-notation, using applicative style frequently leads to more compact code.
In this example, we have a type Foo and we want to construct random values of this type. Using the monad instance for IO, we might write
import System.Random (randomIO)

data Foo = Foo Int Double

randomFoo :: IO Foo
randomFoo = do
  x <- randomIO
  y <- randomIO
  return $ Foo x y
The applicative variant is quite a bit shorter.
randomFoo = Foo <$> randomIO <*> randomIO
Of course, we could use liftM2 to get similar brevity, however the applicative style is neater than having to rely on arity-specific lifting functions.
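For reference, the liftM2 spelling of the same definition:
import Control.Monad (liftM2)

randomFoo :: IO Foo
randomFoo = liftM2 Foo randomIO randomIO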
In practice, I mostly find myself using applicatives much the same way I use point-free style: to avoid naming intermediate values when an operation is more clearly expressed as a composition of other operations.
Why would I want to use an applicative that is not a monad?
Since applicatives are more restricted than monads, this means that you can extract more useful static information about them.
An example of this is applicative parsers. Whereas monadic parsers support sequential composition using (>>=) :: Monad m => m a -> (a -> m b) -> m b, applicative parsers only use (<*>) :: Applicative f => f (a -> b) -> f a -> f b. The types make the difference obvious: In monadic parsers the grammar can change depending on the input, whereas in an applicative parser the grammar is fixed.
By limiting the interface in this way, we can for example determine whether a parser will accept the empty string without running it. We can also determine the first and follow sets, which can be used for optimization, or, as I've been playing with recently, constructing parsers that support better error recovery.
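Here is a minimal sketch of how such static information can be carried around (the parser type P and its nullable field are hypothetical):
-- nullable is computed when the parser is built, before any input is seen.
data P a = P { nullable :: Bool, runP :: String -> [(a, String)] }

instance Functor P where
  fmap f (P n run) = P n (\s -> [ (f a, rest) | (a, rest) <- run s ])

instance Applicative P where
  pure x = P True (\s -> [(x, s)])
  P nf runF <*> P nx runX =
    P (nf && nx) -- a sequence accepts the empty string iff both halves do
      (\s -> [ (f a, s'') | (f, s') <- runF s, (a, s'') <- runX s' ])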
I think of Functor, Applicative and Monad as design patterns.
Imagine you want to write a Future[T] class. That is, a class that holds values that are to be calculated.
In a Java mindset, you might create it like this:
trait Future[T] {
  def get: T
}
Where 'get' blocks until the value is available.
You might realize this, and rewrite it to take a callback:
trait Future[T] {
  def foreach(f: T => Unit): Unit
}
But then what happens if there are two uses for the future? It means you need to keep a list of callbacks. Also, what happens if a method receives a Future[Int] and needs to return a calculation based on the Int inside? Or what do you do if you have two futures and you need to calculate something based on the values they will provide?
But if you know of FP concepts, you know that instead of working directly on T, you can manipulate the Future instance.
trait Future[T] {
  def map[U](f: T => U): Future[U]
}
Now your application changes so that each time you need to work on the contained value, you just return a new Future.
Once you start down this path, you can't stop there. You realize that in order to manipulate two futures, you need to model Future as an applicative; in order to create futures, you need a monad definition for Future; and so on.
UPDATE: As suggested by #Eric, I've written a blog post: http://www.tikalk.com/incubator/blog/functional-programming-scala-rest-us
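Back in Haskell, the same shape exists off the shelf: the async package's Concurrently newtype has an Applicative instance that runs the wrapped IO actions concurrently and combines their results:
import Control.Concurrent.Async (Concurrently (..))

main :: IO ()
main = do
  -- both files are read concurrently; the results are paired up
  pair <- runConcurrently $
    (,) <$> Concurrently (readFile "a.txt")
        <*> Concurrently (readFile "b.txt")
  print pair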
I finally understood how applicatives can help in day-to-day programming with that presentation:
https://web.archive.org/web/20100818221025/http://applicative-errors-scala.googlecode.com/svn/artifacts/0.6/chunk-html/index.html
The author shows how applicatives can help with combining validations and handling failures.
The presentation is in Scala, but the author also provides the full code example for Haskell, Java and C#.
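The core of that validation idea fits in a few lines of Haskell (the type V below is a sketch; libraries such as validation package this up properly): an Either-like type whose Applicative keeps going and accumulates every error instead of stopping at the first one.
data V e a = Err e | Ok a deriving Show

instance Functor (V e) where
  fmap _ (Err e) = Err e
  fmap f (Ok a)  = Ok (f a)

instance Semigroup e => Applicative (V e) where
  pure = Ok
  Err e1 <*> Err e2 = Err (e1 <> e2) -- both failures are collected
  Err e  <*> Ok _   = Err e
  Ok _   <*> Err e  = Err e
  Ok f   <*> Ok a   = Ok (f a)

-- (,) <$> Err ["bad name"] <*> Err ["bad age"] == Err ["bad name","bad age"]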
Warning: my answer is rather preachy/apologetic. So sue me.
Well, how often in your day-to-day Haskell programming do you create new data types? Sounds like you want to know when to make your own Applicative instance, and in all honesty, unless you are rolling your own parser, you probably won't need to do that very much. Using applicative instances, on the other hand, is something you should learn to do frequently.
Applicative is not a "design pattern" like decorators or strategies. It is an abstraction, which makes it much more pervasive and generally useful, but much less tangible. The reason you have a hard time finding "practical uses" is because the example uses for it are almost too simple. You use decorators to put scrollbars on windows. You use strategies to unify the interface for both aggressive and defensive moves for your chess bot. But what are applicatives for? Well, they're a lot more generalized, so it's hard to say what they are for, and that's OK. Applicatives are handy as parsing combinators; the Yesod web framework uses Applicative to help set up and extract information from forms. If you look, you'll find a million and one uses for Applicative; it's all over the place. But since it's so abstract, you just need to get the feel for it in order to recognize the many places where it can help make your life easier.
I think Applicatives ease the general usage of monadic code. How many times have you had the situation that you wanted to apply a function but the function was not monadic and the value you want to apply it to is monadic? For me: quite a lot of times!
Here is an example that I just wrote yesterday:
ghci> import Data.Time.Clock
ghci> import Data.Time.Calendar
ghci> getCurrentTime >>= return . toGregorian . utctDay
in comparison to this using Applicative:
ghci> import Control.Applicative
ghci> toGregorian . utctDay <$> getCurrentTime
This form looks "more natural" (at least to my eyes :)
Coming at Applicative from "Functor" it generalizes "fmap" to easily express acting on several arguments (liftA2) or a sequence of arguments (using <*>).
Coming at Applicative from "Monad" it does not let the computation depend on the value that is computed. Specifically you cannot pattern match and branch on a returned value, typically all you can do is pass it to another constructor or function.
Thus I see Applicative as sandwiched in between Functor and Monad. Recognizing when you are not branching on the values from a monadic computation is one way to see when to switch to Applicative.
Here is an example taken from the aeson package:
data Coord = Coord { x :: Double, y :: Double }

instance FromJSON Coord where
  parseJSON (Object v) =
    Coord <$>
      v .: "x" <*>
      v .: "y"
There are some ADTs like ZipList that can have applicative instances, but not monadic instances. This was a very helpful example for me when understanding the difference between applicatives and monads. Since so many applicatives are also monads, it's easy to not see the difference between the two without a concrete example like ZipList.
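Concretely, ZipList applies functions and arguments pointwise instead of taking the cartesian product:
ghci> import Control.Applicative
ghci> getZipList $ (+) <$> ZipList [1,2,3] <*> ZipList [10,20,30]
[11,22,33]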
I think it might be worthwhile to browse the sources of packages on Hackage and see first-hand how applicative functors and the like are used in existing Haskell code.
I described an example of practical use of the applicative functor in a discussion, which I quote below.
Note that the code examples are pseudo-code for my hypothetical language, which would hide the type classes in a conceptual form of subtyping, so if you see a method call for apply, just translate it into your type class model, e.g. <*> in Scalaz or Haskell.
If we mark elements of an array or hashmap with null or none to indicate that their index or key is valid yet valueless, the Applicative lets us skip the valueless elements, without any boilerplate, while applying operations to the elements that have a value. More importantly, it can automatically handle any Wrapped semantics that are unknown a priori, i.e. operations on T over Hashmap[Wrapped[T]] (and over any level of composition, e.g. Hashmap[Wrapped[Wrapped2[T]]], because applicative is composable but monad is not).
I can already picture how it will make my code easier to understand. I can focus on the semantics, not on all the cruft to get me there, and my semantics will be open under extension of Wrapped, whereas all your example code isn't.
Significantly, I forgot to point out before that your prior examples do not emulate the return value of the Applicative, which will be a List, not a Nullable, Option, or Maybe. So even my attempts to repair your examples were not emulating Applicative.apply.
Remember the functionToApply is the input to Applicative.apply, so the container maintains control.
list1.apply( list2.apply( ... listN.apply( List.lift(functionToApply) ) ... ) )
Equivalently.
list1.apply( list2.apply( ... listN.map(functionToApply) ... ) )
And my proposed syntactical sugar, which the compiler would translate to the above:
funcToApply(list1, list2, ... list N)
It is useful to read that interactive discussion, because I can't copy it all here. I expect that URL not to break, given who the owner of that blog is. For example, I quote from further down the discussion:
the conflation of out-of-statement control flow with assignment is probably not desired by most programmers
Applicative.apply is for generalizing the partial application of functions to parameterized types (a.k.a. generics) at any level of nesting (composition) of the type parameter. This is all about making more generalized composition possible. The generality can't be accomplished by pulling it outside the completed evaluation (i.e. the return value) of the function, analogous to how an onion can't be peeled from the inside out.
Thus it isn't conflation; it is a new degree of freedom that is not currently available to you. Per our discussion upthread, this is why you must throw exceptions or store them in a global variable: your language doesn't have this degree of freedom. And that is not the only application of these category-theory functors (expounded in my comment in the moderator queue).
I provided a link to an example abstracting validation in Scala, F#, and C#, which is currently stuck in the moderator queue. Compare the obnoxious C# version of the code; the reason it is obnoxious is that the C# is not generalized. I intuitively expect that this case-specific C# boilerplate will explode geometrically as the program grows.
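The composability claim quoted above is easy to check in Haskell: Data.Functor.Compose builds an Applicative out of any two Applicatives, whereas there is no analogous general construction for monads:
ghci> import Data.Functor.Compose
ghci> getCompose $ (+) <$> Compose (Just [1,2]) <*> Compose (Just [10])
Just [11,12]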