How to properly load type Int from Coq.ZArith.Int?

I'm new to Coq and I am trying to use the "int" type from ZArith.Int, but Coq cannot find it. I have:
Require Export ZArith Int.
Open Scope Int_scope.
When I use "int" in my definitions, such as (... -> int -> ...), Coq cannot find it. How can I properly load it along with the library's operations?

That library actually formalizes an abstract module type of integers, which can later be instantiated with a concrete implementation. In Coq, the implementation of integers from the standard library is called Z. There is an instance of the Int module type in terms of Z defined in that library, called Z_as_Int; to use the definitions available there with Z, you just need to refer to them prefixed by the module name, e.g. Z_as_Int._0. However, given that most theorems are proved directly over Z, without relying on the interface defined in Int, it is probably better to just use Z directly.
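For example, a minimal sketch (Z_as_Int is the instance mentioned above):
Require Import ZArith Int.
Open Scope Z_scope.
Check Z_as_Int._0.  (* the interface's zero, instantiated at Z *)
(* or simply work with Z directly: *)
Definition double (x : Z) : Z := x + x.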

Use of built-in function in Coq

I want to use the lemma count_occ_In, related to the count_occ function, from the library. I have imported the library in my Coq script, but I am still unable to use it. If I copy count_occ and eq_dec from the library into the script, then it works. My question is: why should I redefine the functions when I have included the library module? Please guide me on how to use the lemma by adding the library module only (not defining the functions again).
With the additional information this should work for you:
Require Import List Arith.
Import ListNotations.
Check count_occ.
Search nat ({?x = ?y} + {?x <> ?y}).
Definition count_occ_nat := count_occ Nat.eq_dec.
Definition count_occ_In_nat := count_occ_In Nat.eq_dec.
Check count_occ_nat.
Check count_occ_In_nat.
See how I used Check to find which arguments count_occ takes, and how I used Search to find a suitable decidable-equality function.
count_occ is written like this because it should work for lists of any type with decidable equality; to use it, you need a proof that equality is decidable for your type, and you have to give this proof explicitly.
In modern definitions one makes such arguments implicit and uses type classes to automatically fill in information like decidable equality, but count_occ has an explicit argument for that, so you have to supply it.
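For instance, once the decidable-equality argument is supplied, count_occ computes directly:
Require Import List Arith.
Import ListNotations.
(* count the occurrences of 2 in [1; 2; 2; 3] *)
Compute count_occ Nat.eq_dec [1; 2; 2; 3] 2.
(* = 2 : nat *)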

How do you use the class method for Data.Function.Memoize?

I am working on an NP search problem and was told I can speed up the search process by using said package. Since memoisation is a new concept to me, I find it hard to wrap my head around anything other than the 'standard' memoised Fibonacci sequence.
In order to instantiate a data type a as Memoizable, I need to define a function memoize :: (a -> v) -> a -> v on it.
I have a datatype data Formula which derives (Eq, Ord, Show). I will have to define my own instance declaration, but I don't know what function is expected.
What exactly is this function supposed to do for memoisation to work? The package description doesn't elaborate on this, and I doubt plain function application (which fits the type signature) will speed anything up.
You should read about typeclasses. Here is how I understand the package.
The following definition is given:
class Memoizable a where
    memoize ∷ (a → v) → a → v
You should think of the memoize function as something like:
memoize :: Memoizable a => (a → v) → a → v
That is, you can apply it to a function from a to v iff an instance of Memoizable a is declared. The package declares instances for some basic types like Int.
So if you wish to memoize compute :: Int -> WidgetData, you should use memoize compute, which has the same type, without doing anything else.
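In code (compute and WidgetData are the hypothetical names from above; memoize comes from Data.Function.Memoize):
computeMemo :: Int -> WidgetData
computeMemo = memoize compute  -- same type as compute; repeated calls reuse the cached result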
If you wish to memoize a function which takes as input a type without a Memoizable instance, you will have to declare one yourself. Most likely, you should rely on the Template Haskell functions like deriveMemoizable to do that for you:
{-# LANGUAGE TemplateHaskell #-} -- put this at the top
deriveMemoizable ''T
I doubt function application (which fits the type signature) will speed anything up.
It depends on the problem at hand. If compute is expensive and you call it twice with the same input, the memoized version will store the result and avoid computing it twice. If that is not the case, you will only increase the memory usage of your program without any gain.
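Putting it together, a minimal self-contained sketch (the Formula constructors here are invented for illustration; any field types must themselves be Memoizable):
{-# LANGUAGE TemplateHaskell #-}
import Data.Function.Memoize (deriveMemoizable, memoize)

data Formula = Var Int | Not Formula | And Formula Formula
  deriving (Eq, Ord, Show)

deriveMemoizable ''Formula

-- an (assumed expensive) function over Formula, memoized once at the top level
size :: Formula -> Int
size = memoize go
  where
    go (Var _)   = 1
    go (Not f)   = 1 + size f
    go (And f g) = 1 + size f + size g
Recursive calls go through size, so already-seen subformulas are looked up rather than recomputed.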

How do Frege classes work?

It seems that Frege's ideas about type-classes differ significantly from Haskell. In particular:
The syntax appears to be different, for no obvious reason.
Function types cannot have class instances. (Seems a rather odd rule...)
The language spec says something about implementing superclasses in a subclass instance declaration. (But not if you have diamond inheritance... it won't be an error, but it's not guaranteed to work somehow?)
Frege is less fussy about what an instance looks like. (Type aliases are allowed, type variables are not required to be distinct, etc.)
Methods can be declared as native, though it is not completely clear what the meaning of this is.
It appears that you can write type.method to access a method. Again, no indication as to what this means or why it's useful.
Subclass declarations can provide default implementations for superclass methods. (?)
In short, it would be useful if somebody who knows about this stuff could write an explanation of how this stuff works. It's listed in the language spec, but the descriptions are a little bit terse.
(Regarding the syntax: I think Haskell's instance syntax is more logical. "If X is an instance of Y and Z, then it is also an instance of Q in the following way..." Haskell's class syntax has always seemed a bit strange to me. If X implements Eq, that does not imply that it implements Ord, it implies that it could implement Ord if it wants to. I'm not sure what a better symbol would be though...)
Per Ingo's answer:
I'm assuming that providing a default implementation for a superclass method only works if you declare your instances "all at once"?
For example, suppose Foo is a superclass of Bar. Suppose each class has three methods (foo1, foo2, foo3, bar1, bar2, bar3), and Bar provides a default implementation for foo1. That should mean that
instance Bar FB where
    foo2 = ...
    foo3 = ...
    bar1 = ...
    bar2 = ...
    bar3 = ...
should work. But would this work:
instance Foo FB where
    foo2 = ...
    foo3 = ...
instance Bar FB where
    bar1 = ...
    bar2 = ...
    bar3 = ...
So if I declare a method as native in a class declaration, that just sets the default implementation for that method?
So if I do something like
class Foobar f where
    foo :: f -> Int
    native foo
    bar :: f -> String
    native bar
then that just means that if I write an empty instance declaration for some Java native class, then foo maps to object.foo() in Java?
In particular, if a class method is declared as native, I can still provide some other implementation for it if I choose to?
Every type [constructor] is a namespace. I get how that would be helpful for the infamous named fields problem. I'm not sure why you'd want to declare other things in the scope of this namespace...
You seem to have read the language spec very carefully. Great. But, no, type classes/instances do not differ substantially from Haskell 2010. Just a bit, and that bit is notational.
Your points:
ad 1. Yes. The rule is that the constraints, if any, are attached to the type and the class name follows the keyword. But this will change soon in favor of the Haskell syntax when multi param type classes are added to the language.
ad 2. Meanwhile, function types are fully supported. This will be included in the next release. The current release has only support for (a->b), though.
ad 3. Yes. Consider our categorical class hierarchy Functor -> Applicative -> Monad. You can just write the following instead of 3 separate instances:
instance Monad Foo where
    -- implementations of all the methods due for Monad, Applicative, and Functor
ad 4. Yes, currently. There will be changes with multi-param type classes, however. The lang spec recommends staying with the Haskell 2010 rules.
ad 5. You'd need that if you model Java class hierarchies with type classes. native function declarations are nothing special for type classes/instances. Because you can have an annotation and a default implementation in a class (just like in Haskell 2010), you can have this in the form of a native declaration, which gives a) the type and b) the implementation (by referring to a Java method).
ad 6. It's orthogonality. Just as you can write M.foo where M is a module, you can write T.foo when T is a type (constructor), because both are namespaces. In addition, if you have a "record", you may need to write T.f x when Frege cannot infer the type of x.
foo x = x.a + x.b -- this doesn't work, type of x is unknown
-- remedy 1: provide a type signature
foo :: Record -> Int -- Record being some data type
-- remedy 2: access the field getter functions directly
foo x = Record.a x + Record.b x
ad 7. Yes, for example, Ord has a default implementation for (==) in terms of compare. Hence you can make an Ord instance of something without implementing (==).
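A sketch of ad 7 (Frege syntax; the type is made up):
data Version = V Int Int
instance Ord Version where
    compare (V a b) (V c d) = compare (a, b) (c, d)
-- no (==) given: it falls back to Ord's default in terms of compare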
Hope this helps. Generally, it must be said, the lang spec needs a) completion and b) updates. If only the day had 36 hours .....
The syntactic issue is also discussed here: https://groups.google.com/forum/?fromgroups#!topic/frege-programming-language/2mCNWMVg5eY
---- Second part ------------
Your example would not work, because if you define instance Foo FB, then that instance must stand on its own (be complete), irrespective of other instances and subclasses. The default foo1 method in Bar will be used only if no separate Foo instance exists.
then that just means that if I write an empty instance declaration for some Java native class, then foo maps to object.foo() in Java?
Yes, but it depends on the native declaration: it doesn't have to be a Java instance method of that Java class, it could also be a static method, a method of another class, or just a member access, etc.
In particular, if a class method is declared as native, I can still provide some other implementation for it if I choose to?
Sure, just like with any other default class method. Say a default class method is implemented using pattern guards; that does not mean that you must use pattern guards for your implementation.
Look,
native [pure] foo "javaspec" :: a -> b -> c
just means: please make me a Frege function foo with type a -> b -> c that happens to use javaspec for its implementation. (How exactly this works is supposed to be described in Chapter 6 of the language reference. It's not done yet. Sorry.)
For example:
native pure int2long "(long)" :: Int -> Long
The compiler will see that this is syntactically a cast operation, and when it sees:
... int2long val ...
it will generate java code like:
((long)(unbox(val)))
Apart from that, it will also make a wrapper, so that you can, for example:
map int2long [1,2,4]
The point is that, if I tell you: there is a function X.Y.z, you're not able to tell whether this is a native or a regular one without looking at the source code. Hence, native is the way to lift Java methods, operators and so forth to the Frege realm. Practically everything that is known as "primOp" in Haskell is just a native function in Frege. For example,
pure native + :: Int -> Int -> Int
(It's not always that easy, of course.)
Every type [constructor] is a namespace. I get how that would be helpful for the infamous named fields problem. I'm not sure why you'd want to declare other things in the scope of this namespace...
It gives you somewhat more control regarding the top namespace. Apart from that, you don't have to define other things there. I just did not see a reason to forbid it once I committed to this simple approach to tackle the record field problem.

Interface with multiple implementations in OCaml

What is the conventional way to create an interface in OCaml? It's possible to have an interface with a single implementation by creating an interface file foo.mli and an implementation file foo.ml, but how can you create multiple implementations for the same interface?
You must use modules and signatures. A .ml file implicitly defines a module, and a .mli file its signature. With explicit modules and signatures, you can apply one signature to several different modules.
See this chapter of the online book "Developing Applications with OCaml".
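For example, a minimal sketch of one signature with two implementations (all names made up):
(* the shared interface *)
module type STACK = sig
  type 'a t
  val empty : 'a t
  val push : 'a -> 'a t -> 'a t
end

(* first implementation: plain lists *)
module ListStack : STACK = struct
  type 'a t = 'a list
  let empty = []
  let push x s = x :: s
end

(* second implementation: lists that track their length *)
module CountedStack : STACK = struct
  type 'a t = int * 'a list
  let empty = (0, [])
  let push x (n, s) = (n + 1, x :: s)
end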
If you're going to have multiple implementations for the same signature, define your signature inside a compilation unit, rather than as a compilation unit, and (if needed) similarly for the modules. There's an example of that in the standard library: the OrderedType signature, that describes modules with a type and a comparison function on that type:
module type OrderedType = sig
  type t
  val compare : t -> t -> int
end
This signature is defined in both set.mli and map.mli (you can refer to it as either Set.OrderedType or Map.OrderedType, or even write it out yourself: signatures are structural). There are several compilation units in the standard library that have this signature (String, Nativeint, etc.). You can also define your own module, and you don't need to do anything special when defining the module: as long as it has a type called t and a value called compare of type t -> t -> int, the module has that signature. There's a slightly elaborate example of that in the standard library: the Set.Make functor builds a module which has the signature OrderedType, so you can build sets of sets that way.
(* All four modules passed as arguments to Set.Make have the signature Set.OrderedType *)
module IntSet = Set.Make(struct type t = int let compare = Pervasives.compare end)
module StringSet = Set.Make(String)
module StringSetSet = Set.Make(StringSet)
module IntSetSet = Set.Make(IntSet)
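A quick usage check of the modules built above:
let ints = IntSet.add 3 (IntSet.singleton 1)   (* a set of ints *)
let sets = IntSetSet.singleton ints            (* a set of sets of ints *)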

Do you know of any examples of elegant solutions in dynamically typed languages?

Imagine two languages which (apart from the type information) have exactly the same syntax, but one is statically typed while the other uses dynamic typing. Then, for every program written in the statically typed language, one can derive an equivalent dynamically typed program by removing all type information. As this is not necessarily possible the other way around, the class of dynamically typed programs is strictly larger than the class of statically typed programs. Let's call those dynamically typed programs for which there is no mapping of variables to types that makes them statically typed "real dynamically typed programs".
As both language families are definitely Turing-complete, we can be sure that for every such real dynamically typed program there exists a statically typed program doing exactly the same thing. But I often read that "experienced programmers are able to write very elegant code in dynamically typed languages". I thus ask myself: are there any good examples of real dynamically typed programs for which any equivalent statically typed program is clearly much more complex / much less "elegant" (whatever that may mean)?
Do you know of any such examples?
I am sure that for many "elegance" problems of static languages, static type checking itself isn't to blame, but rather the lack of expressiveness of the static type system implemented in the language and the limited capabilities of the compiler. If this is done "righter" (as in Haskell, for example), then suddenly the programs turn out to be terse, elegant... and safer than their dynamic counterparts.
Here's an illustration (C++ specific, sorry): C++ is so powerful that it implements a metalanguage with its template system. But still, a very simple function is hard to declare:
template<class X,class Y>
? max(X x, Y y)
There is an astounding number of possible solutions, like ? = boost::variant<X,Y>, or computing ? = is_convertible(X,Y) ? X : (is_convertible(Y,X) ? Y : error), none of them really satisfying.
But now imagine a preprocessor that could transform an input program into its equivalent continuation-passing-style (CPS) form, where each continuation is a callable object which accepts all possible argument types. A CPS version of max would look like this:
template<class X, class Y, class C>
void cps_max(X x, Y y, C cont) // cont is an object which can be called with an X or a Y
{
    if (x > y) cont(x); else cont(y);
}
The problem is gone: max calls a continuation which accepts an X or a Y. So there is a solution for max with static type checking, but we can't express max in its non-CPS form; untransform(cps_max) is undefined, so to speak. So we have some argument that max can be done right, but we don't have the means to do so. This is lack of expressiveness.
Update for 2501:
Assume there are some unrelated types X and Y and there is a bool operator<(X,Y). What should max(x,y) return? Let us further assume that X and Y both have a member function foo(). How could we make it possible to write:
void f(X x, Y y) {
    max(x, y).foo();
}
Returning either X or Y and invoking foo() on the result is no problem for a dynamic language, but close to impossible for most static languages. However, we can get the intended functionality by rewriting f() to use cps_max:
struct call_foo { template<class T> void operator()(const T &t) const { t.foo(); } };
void f(X x, Y y) {
    cps_max(x, y, call_foo());
}
So this is not a problem for static type checking per se, but the result looks very ugly and does not scale well beyond simple examples. Something is missing from this static language, such that we cannot provide a static and readable solution.
Yes, check out Eric Raymond's Python Success Story. Basically, it is about how much easier reflection-type tasks are with dynamically typed programming languages.
Groovy and XML
Groovy and Domain-specific language