Controlling export of constructors in code extracted from Coq

I'm looking at writing code in Coq and extracting this code for use in a large Haskell project. I want to build a single module in Coq, prove properties, then use Haskell's module system to prevent violation of these properties (via smart constructors).
I can't find any indication that it's possible to extract Coq code into a Haskell module with an explicit export list. It seems I must hand-modify the extracted code, which isn't a big deal, but I want to know if I have this right. Does anyone have an alternate proposal?

I just looked at the latest Coq source (r14456). There doesn't seem to be any code to generate an export list.
It seems you'll have to do this yourself.
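For what it's worth, the hand modification is small: extraction emits a module header with no export list, so you add one that exports the type abstractly and hides its data constructors. Below is a minimal sketch of what the edited file might look like; the module name Balanced, the constructor MkBalanced, and the smart constructor mkBalanced are all hypothetical stand-ins, and the function bodies stand in for what extraction would generate.

module Balanced
  ( Balanced     -- type exported abstractly; MkBalanced stays hidden
  , mkBalanced   -- smart constructor whose invariant was proved in Coq
  , size
  ) where

data Balanced = MkBalanced [Int]

-- Clients can only obtain a Balanced value through this check.
mkBalanced :: [Int] -> Maybe Balanced
mkBalanced xs
  | and (zipWith (<=) xs (drop 1 xs)) = Just (MkBalanced xs)
  | otherwise                         = Nothing

size :: Balanced -> Int
size (MkBalanced xs) = length xs

Client modules can then call mkBalanced but can never forge a Balanced directly, which is exactly the smart-constructor discipline the question is after.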

Related

Rascal: Containing Imported Symbols

I work with multiple grammars in the REPL. The grammars use the same names for some of their rules.
One of the documentation recipes mentions full qualification to disambiguate type annotations in function pattern matching (it's in a note on the load function, but not in the code on that page; the .jar has it right). But that can become tedious, so is there aliasing for imports (like Python's import regex as r)? And using full qualification in the first argument of the parse function, as in parse(#lang::java::\syntax::Java18::CompilationUnit, src), doesn't seem to disambiguate parse rules that are invoked recursively; at least it produces weird errors if I also import lang::java::\syntax::Java15.
In general, what is a safe way to handle symbols from different modules with the same names?
Alternatively, is there a way to "unload" a module in the repl?
Some background information:
Rascal modules are open for reasons of extensibility: in particular, data, syntax definitions, and overloaded functions can be extended by importing another module. In this way you can extend a language and its processing functions by importing another module and adding rules and function alternatives at leisure.
There is a semantic difference between importing and extending a module. In particular, import is not transitive and fuses only the uses of a name inside the importing module, while extend is transitive and also fuses recursive uses of a name in the module that is extended. So for extending a language you'd default to using extend, while for using a library of functions you'd use import.
We are planning to remove the fusing behavior from import completely in one of the releases of 2020. After this, all conflictingly imported non-terminal names must be disambiguated by prefixing them with the module name, and prefixing will no longer have the side effect of fusing recursively used non-terminals from different modules. Not so for extend, which will still fuse non-terminals and functions all the way.
All the definitions in a REPL instance simulate the semantics of the members of a single anonymous module.
So to answer your questions:
It's not particularly safe to handle symbols from different imported modules with the same name, at least until we fix the semantics of import.
The module prefix trick only works at the top level; below that, the types are fused anyway, because the code which reifies a non-terminal as a grammar does not propagate the prefix. It wouldn't know how.
Unimporting a module:
rascal>import IO;
ok
rascal>println("x");
x
ok
rascal>:un
undeclare unimport
rascal>:unimport IO
ok
rascal>println("x");
|prompt:///|(0,7,<1,0>,<1,7>): Undeclared variable: println
This is probably one of the least used features in the environment; caveat emptor!
To work around these issues, one way is to write the functions for each separate language/language version inside a different module, and to create a top module which imports these if you want to bundle the functionality in a single interface. This way, because import is not transitive, the namespaces stay separate and clean. Of course this does not solve the REPL issue; the only thing I can offer there is to start a fresh REPL for each language version you are playing with.

Invoke the Coq typechecker from external programs

What would be the best way to interact with Coq from an external program? For example, let's say I want to programmatically generate programs / proofs in some language other than Coq and I just want to call Coq to typecheck them. Is there a standard way to do something like that?
You have a few options.
1. Construct .v files, invoke coqc, check the return code, and parse the output of coqc.
This is, in some sense, the most stable way to interact with Coq, since it has the most inter-version stability. It's also the most inflexible: you create a .v file and check it all in one go.
For an example of this method, see my Coq bug minimizer (specifically get_coq_output in diagnose_error.py), which repeatedly makes small alterations to a .v file and checks that the alterations don't change the error message given by coqc.
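For a minimal sketch of this option, here is roughly what the invoke-and-check step looks like from Haskell (Haskell only because it is the language of the surrounding discussion; the scratch file name and the example sentence are placeholders):

import System.Exit (ExitCode (..))
import System.Process (readProcessWithExitCode)

-- Write the candidate development to a .v file, run coqc on it,
-- and use the exit code to decide whether it typechecked.
-- Assumes coqc is on the PATH.
checkWithCoqc :: String -> IO Bool
checkWithCoqc source = do
  let file = "Candidate.v"   -- hypothetical scratch file name
  writeFile file source
  (code, _out, err) <- readProcessWithExitCode "coqc" [file] ""
  case code of
    ExitSuccess   -> return True
    ExitFailure _ -> putStrLn ("coqc rejected the file:\n" ++ err) >> return False

main :: IO ()
main = checkWithCoqc "Definition two : nat := 2." >>= print

Parsing coqc's stderr for a specific error message, as the bug minimizer does, would go where the putStrLn is.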
2. Use the XML protocol to communicate with coqtop
This is the method used by CoqIDE and by upcoming versions of Proof General. Logitext invokes, from Haskell, a custom patched version of coqtop using the PGIP protocol, which was an earlier attempt at a more standardized way of communicating with the prover (see this issue for more details).
This is becoming more stable, and gives more fine-grained control over what you want checked. For example, it allows you to check multiple proofs within a single session, which is important if you depend on a large library that takes time to load, and need to check many small proofs.
3. Write a custom OCaml toplevel wrapper for the interface to Coq that you want
The main example of this that I'm aware of is PIDEtop, which is used in the Coqoon Eclipse plugin. I suspect that some of the other entries in the GUI section of Related Tools use this method.
Note that coqtop is itself a toplevel wrapper in this style; the files in the toplevel/ folder of the Coq project are likely to be informative.
This gives you the most flexibility and reusability, at the cost of having to design your own protocol, or implement an existing protocol.
4. Write your external program in OCaml and link it with Coq
Much like (3), this method gives you as much flexibility as you want. In fact, the only difference between this and (3) is that in (3), you separate out the communication with Coq into its own binary, whereas here, you fuse communication with Coq with the other functionality of your program. I'm not aware of programs in this style, though I believe coqchk may qualify, as I think it shares a couple of files with the Coq kernel (see the checker/ folder in the Coq codebase).
Regardless of which way you choose, I think that modelling off of existing projects will be more fruitful than relying on the (as-yet incomplete) documentation of the various APIs and protocols. The API has been undergoing a lot of revision recently, in an attempt to get it into a reasonable and stable state, and the XML protocol has also seen recent improvements; @ejgallego has been the driving force behind many of these improvements.

Scala source code definition of "def" and other built-ins

I was doing some research on how to solve this question. However, I am wondering whether I can start learning how functions work, or how arguments are passed into the local scope, by reading the source code of Scala.
I know the source code of Scala is hosted on GitHub; my question is how to locate the definition of def.
Or, more generally, how do I locate the source code of certain built-in functions and operators?
The source code for everything in the Scala standard library is under https://github.com/scala/scala/tree/2.11.x/src/library/scala.
Also, the Scaladoc for the standard library includes links to the source code. So, e.g., if you're interested in scala.Option and you're looking at http://www.scala-lang.org/api/2.11.7/#scala.Option, notice that the page has "Source: Option.scala", where "Option.scala" is hyperlinked to the source code.
For something like def, which is not part of the standard library, but part of the language, well... there is no single place where def itself is defined. The compiler has 25 phases (you can list them by running scalac -Xshow-phases) and basically every phase participates in the job of making def mean what it means.
If you want to understand def, you'd probably be better off reading the Scala Language Specification; it's highly technical, but still much more approachable than the source code for the compiler.
The part of the spec that addresses your question about named and default arguments is SLS 6.6.1.

PureScript plugin system

Does PureScript have something like Haskell's System.Plugins?
I need to create some 'generic interface' (sorry for the term; I've been programming in object-oriented languages for almost 15 years) that other developers will be able to use just by putting a module file in a plugins directory.
I wonder if this is possible, since as far as I know PureScript does not carry any metadata with types at runtime.
From a cursory glance, Haskell's plugins package is about dynamic loading of Haskell code. The similar concept in JavaScript is eval or adding a script element to the DOM.
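For reference, dynamic loading with that package looks roughly like the following; this is a sketch of the hs-plugins API as I recall it, and the object file Plugin.o and the symbol name plugin are hypothetical:

import System.Plugins (LoadStatus (..), load)

-- Load the symbol "plugin" from a compiled object file. The result is
-- coerced to whatever type we claim it has (Int here); nothing checks
-- that claim at load time, much like eval in JavaScript.
main :: IO ()
main = do
  status <- load "Plugin.o" ["."] [] "plugin"
  case status of
    LoadFailure errs   -> mapM_ putStrLn errs
    LoadSuccess _mod v -> print (v :: Int)

The same kind of unchecked type claim is what you would be making in PureScript.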
You can make any type assumption for eval'd code using a foreign import or unsafeCoerce. However, you must take care to ensure that the assumption is correct.
I am not aware of a PureScript package oriented around these sorts of plugins. In my estimation there would be too much variability in what a plugin could be for a single package to cover it.

Using Inline::CPP vs SWIG - when?

In this question I saw two different answers on how to directly call functions written in C++:
Inline::CPP (and here are more, like Inline::C, Inline::Lua, etc..)
SWIG
Handmade (as daxim said, the majority of modules are handwritten)
I have browsed nearly all the SO questions tagged [perl][swig], looking for answers to the following questions:
What are the main differences when choosing between SWIG, Inline::CPP, and handwritten bindings?
When is it good practice to use Inline::CPP (or Inline::C), and when is SWIG or a handwritten binding recommended?
As I think about it, SWIG seems more universal for other uses, as asked in this question, while Inline::CPP is Perl-specific. But from Perl's point of view, is there any significant difference?
I haven't used SWIG, so I cannot speak directly to it. But I'm pretty familiar with Inline::CPP.
If you would like to compose C++ code that gets compiled and becomes callable from within Perl, Inline::CPP facilitates this. So long as the C++ code doesn't change, it should only compile once. If you base a module on Inline::CPP, the code will be compiled at module install time, so another user never really sees the first time compilation lag; it happens at install time, just before the testing phase.
Inline::CPP is not 100% free of portability issues. The target user must have a C++ compiler of a similar flavor to the C compiler used to build Perl, and the C++ standard libraries should be of versions that produce binary-compatible code with Perl. Inline::CPP has about a 94% success rate with the CPAN testers, and the remaining 6% almost always boil down to the installation process not correctly deciphering which C++ compiler and libraries to use; of those, it usually comes down to the libraries.
Let's assume that you, as a module author, find yourself in the 94% who have no problem getting Inline::CPP installed. If you know that your target audience will fall into the same category, then producing a module based on Inline::CPP is simple. You basically have to add a couple of directives (VERSION and NAME), and swap out your Makefile.PL's ExtUtils::MakeMaker call for Inline::MakeMaker (it will invoke ExtUtils::MakeMaker). You might also want a CONFIGURE_REQUIRES directive specifying a current version of ExtUtils::MakeMaker when you create your distribution; this ensures that your users have a cleaner install experience.
Now if you're creating the module for general consumption and have no idea whether your target user will fit that 94% majority who can use Inline::CPP, you might be better off removing the Inline::CPP dependency. You might want to do this just to minimize the dependency chain anyway; it's nicer for your users. In that case, compose your code to work with Inline::CPP, and then use InlineX::CPP2XS to convert it to a plain old XS module. Your user will now be able to install without the process pulling Inline::CPP in first.
C++ is a large language, and Inline::CPP handles a large subset of it. Pay attention to the typemap file to determine what sorts of parameters can be passed (and converted) automatically, and what sorts are better dealt with using "guts and API" calls. One feature I wouldn't recommend using is automatic string conversion, as it would produce Unicode-unfriendly conversions. Better to handle strings explicitly through API calls.
The portion of C++ that isn't handled gracefully by Inline::CPP is template metaprogramming. You're free to use templates in your code, and free to use the STL. However, you cannot simply pass STL-typed parameters and hope that Inline::CPP will know how to convert them; it deals with POD (basic data types), not STL types. Furthermore, if you write a template-based function or object method, the C++ compiler won't know what context Perl plans to call the function in, so it won't know which type to apply to the template at compile time. Consequently, the functions and object methods exposed directly to Inline::CPP need to be plain functions or methods, not template functions or classes.
These limitations in practice aren't hard to deal with as long as you know what to expect. If you want to expose a template class directly to Inline::CPP, just write a wrapper class that either inherits or composes itself of the template class, but gives it a concrete type for Inline::CPP to work with.
Inline::CPP is also useful in automatically generating function wrappers for existing C++ libraries. The documentation explains how to do that.
One of the advantages of Inline::CPP over SWIG is that if you already have some experience with perlguts, perlapi, and perlcall, you will feel right at home. With SWIG, you'll have to learn the SWIG way of doing things first, then figure out how to apply it to Perl, and possibly how to do it in a way that is CPAN-distributable.
Another advantage of Inline::CPP is that it is a somewhat familiar tool in the Perl community. You are going to find a lot more people who understand Perl XS, Inline::C, and to some extent Inline::CPP than you will find people who have used SWIG with Perl. Although XS can be messy, it's a road more heavily travelled than using Perl with SWIG.
Inline::CPP is also a common topic on the inline@perl.org mailing list. In addition to myself, the maintainer of Inline::C and several other Inline-family maintainers frequent the list, and we do our best to assist people who need a hand getting going with the Inline family of modules.
You might also find my Perl Mongers talk on Inline::CPP useful in exploring how it might work for you. Additionally, Math::Prime::FastSieve stands as a proof of concept for basing a module on Inline::CPP (with an Inline::CPP dependency). Furthermore, Rob (sisyphus), the current Inline maintainer and author of InlineX::CPP2XS, has included an example in the InlineX::CPP2XS distribution that takes my Math::Prime::FastSieve and converts it to plain XS code.
You should probably also give ExtUtils::XSpp a look. I think it requires you to declare a bit more stuff than Inline::CPP or SWIG, but it's rather powerful.