Combinatory logic library for proof assistants?

I'm working through some intro-level combinatory logic exercises using Coq. I've written a crude library for it, but it isn't very efficient. Is there a combinatory logic library for Coq or other proof assistants? Definitions of combinators and terms, their reduction relations, and proofs of some important theorems would be quite helpful.
There is a discussion about Agda and combinatory logic that looks interesting, but I can't tell whether it is relevant to my question.
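(For reference, here is a minimal sketch of the kind of definitions such a library would contain; the names and the exact constructors are illustrative, not taken from any existing library.)

```coq
From Coq Require Import Relations.

(* Terms of SK combinatory logic. The combinators are named Sc/Kc to
   avoid clashing with the nat constructor S. *)
Inductive term : Type :=
| Sc : term
| Kc : term
| Var : nat -> term
| App : term -> term -> term.

(* One-step weak reduction as an inductive relation. *)
Inductive step : term -> term -> Prop :=
| step_K : forall x y, step (App (App Kc x) y) x
| step_S : forall x y z,
    step (App (App (App Sc x) y) z) (App (App x z) (App y z))
| step_app_l : forall x x' y, step x x' -> step (App x y) (App x' y)
| step_app_r : forall x y y', step y y' -> step (App x y) (App x y').

(* Many-step reduction, for stating theorems such as confluence. *)
Definition steps : term -> term -> Prop := clos_refl_trans term step.
```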

What is the name of the programming style enabled by dependent types (think Coq or Agda)?

There is a programming "style" (or maybe paradigm; I'm not sure what to call it) which goes as follows:
First, you write a specification: a formal description of what your program (in whole or in part) is to do. This is done within the programming system; it is not a separate artifact.
Then you write the program, but (and this is the key distinction between this programming style and others) every step of this writing task is guided in some way by the specification you wrote in the previous step. How exactly this guidance happens varies wildly; in Coq you have a metaprogramming language (Ltac) which lets you "refine" the specification while building the actual program behind the scenes, whereas in Agda you compose a program by filling "holes" (I'm not actually sure how it goes in Agda, as I'm mostly used to Coq).
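(To make this concrete, here is a tiny illustrative Coq development in this style: the specification is written first as a type, and `refine` supplies the program while leaving the proof part as a hole.)

```coq
From Coq Require Import Lia.

(* Specification: for every n, produce an m together with a proof
   that m = n + n. *)
Definition double_spec : forall n : nat, {m : nat | m = n + n}.
Proof.
  intro n.
  refine (exist _ (2 * n) _). (* supply the program, leave the proof as a hole *)
  lia.                        (* discharge the remaining goal: 2 * n = n + n *)
Defined.
```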
This isn't exactly everyone's favorite style of programming, but I'd like to try practicing it in general-purpose, popular programming languages. At least in Coq I've found it to be fairly addictive!
...but how would I even search for ways to do it outside proof assistants? Which leads to the question: I'm looking for a name for this programming style, so that I can try looking up tools that let me program like that in other programming languages.
Mind you, a more proper question would directly ask for examples of such tools, but AFAIK questions asking for lists of answers aren't appropriate for Stack Exchange sites.
And to be clear, I'm not all that hopeful that I'm really going to find much; these are mostly academic pastimes, and your typical programming language isn't really amenable to this style of programming (for example, the specification language might end up being impossibly complex). But it's worth a shot!
It is called proof-driven development (or type-driven development). However, there is very little information about it.
The process you mention, of slowly creating your program by means of Ltac (in the case of Coq) or holes (in the case of Agda and Idris), is called refinement. So you will also find this style referred to in the literature as proof by refinement or programming by refinement.
Now, the most important thing to realize is that this style of programming is intrinsic to richer type systems that allow you to extract as much information as possible from the current environment. So it is natural to find it associated with dependent types, although that is not necessarily the case.
As mentioned in another answer, you will also find references to it as type-driven development; there is an Idris book about it.
You may be interested in looking into some other projects such as Lean, Isabelle, Idris, Agda, Cedille, and maybe Liquid Haskell, TLA+ and SAW.
As pointed out by the two previous answers, a likely name for the programming style you mention is certainly type-driven development.
From the Coq viewpoint, you might be interested in the following two references:
Certified Programming with Dependent Types (CPDT, by Adam Chlipala): a Coq textbook that teaches advanced techniques to develop dependently-typed Coq theories and automate related proofs.
Experience Report: Type-Driven Development of Certified Tree Algorithms in Coq (by Reynald Affeldt, Jacques Garrigue, Xuanrui Qi, Kazunari Tanaka), published at the Coq Workshop 2019 (slides, extended abstract).
The authors also use the acronym TDD which, interestingly enough, has another accepted meaning in the software engineering community: test-driven development (a widely used methodology that naturally leads to high-quality test suites).
Actually, both readings of TDD share a common idea: one systematically starts by writing the specification of the unit under consideration, only then writes code that fulfills the spec (makes the unit tests pass), and then loops, incrementally specifying and implementing (and refactoring) further code units.
Last but not least, there are some extra pointers in this discussion from the Discourse OCaml forum.

The foundations of Coq

I'm assuming Coq at some point moved to an LCF-style approach. In the past I wondered about the foundations of the kernel in Isabelle, and I found a nice description of Isabelle/Pure in a master's thesis that summarizes the existing literature.
I was wondering if there is a description of Coq's kernel covering the logical and implementation aspects of it.
I think your question is similar to How does one implement Coq?
At least I'm tempted to give a similar answer.
I think MetaCoq is the state-of-the-art effort to specify and (partially) verify the Coq kernel: https://github.com/MetaCoq/metacoq.
It started as a library for metaprogramming in Coq and, as such, implements a representation of the kernel's term language inside Coq. It has evolved a lot and now contains the typing rules of (a subset of) Coq, as well as formalisations of several meta-theoretical properties, a type checker, and an erasure mechanism.
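To give a flavour of the metaprogramming side, here is a minimal sketch; it assumes a working MetaCoq installation, and the command names follow the Template-Coq part of the project (see the repository for the authoritative API).

```coq
From MetaCoq.Template Require Import All.

(* Reify a Coq term into its MetaCoq syntax-tree representation... *)
MetaCoq Quote Definition id_nat_reified := (fun x : nat => x).

(* ...and reflect a syntax tree back into an ordinary definition. *)
MetaCoq Unquote Definition id_nat := id_nat_reified.
```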
Now, to address your question more directly:
The Coq reference manual already offers some sort of specification of the Calculus of Inductive Constructions, which should always be up to date with the latest version of Coq.
The MetaCoq Project paper also attempts a specification of the predicative calculus of cumulative inductive constructions (PCUIC).
You seem to think that a specification done in the proof assistant itself might somehow have less value than a paper specification; obviously I don't quite agree (though I'm one of the authors, so I'm biased). It is a fair concern, but as far as the specification goes, mechanisation only makes it much more precise than what can be done on paper; the Coq reference manual can be imprecise at times. Our work also forces us to make explicit invariants of the representations that are not enforced in the OCaml code. We also separate the implementation from the specification (the Coq reference manual is fairly implementation-oriented), though arguably more work needs to be done on this separation.
Otherwise, people usually treat subsets of these calculi, especially regarding inductive types, which are rather painful to spell out entirely.

How are proof assistants implemented?

What are the main blocks of a proof assistant?
I am just interested in knowing the internal logic of proof checking. For example, topics about graphical user interfaces of such assistants do not interest me.
A similar question to mine has been asked for compilers:
https://softwareengineering.stackexchange.com/questions/165543/how-to-write-a-very-basic-compiler
My concern is the same but for proof checking systems.
I'm hardly an expert on the matter (I'm only a user of these systems; I don't worry too much about their internals) and this will probably only be a vague partial answer, but the two main approaches that I know of are:
Dependently-typed systems (e.g. Coq, Lean, Agda) that use the Curry–Howard isomorphism. Statements are just types, and proofs are terms that have that type, so checking the validity of a proof is essentially just a special case of type-checking a term (the first sketch after these two points illustrates this). I don't want to say too much about this approach, because I don't know too much about it and am afraid I'll get something wrong. Théo Winterhalter linked something in the comments above that may provide more context on this approach.
LCF-style theorem provers (e.g. Isabelle, HOL Light, HOL4). Here a theorem is (roughly speaking) an opaque value of type thm in the implementation language. Only the comparatively small ‘proof kernel’ can create these thm values, and all other parts of the system interact with this proof kernel. The kernel offers an interface consisting of various small functions that implement small inference steps such as modus ponens (if you have a theorem A ⟹ B and a theorem A, you can get the theorem B) or ∀-introduction (if you have the theorem P x for a fixed variable x, you can get the theorem ∀x. P x), etc. The kernel also offers an interface for defining new constants. In principle, as long as you can trust that these functions faithfully implement the basic inference steps of the underlying logic, you can trust that any thm value you can produce really corresponds to a theorem in your logic. For LCF-style provers, the question of what the actual proof is is a bit more difficult to answer, because they usually don't build proof terms (e.g. Isabelle has them, but they are disabled by default and not widely used). I think one could say that the history of how the kernel primitives are called constitutes the proof, and if one were to record it, it could in principle be replayed and checked in another system. (A sketch of such a kernel interface is given after these two points.)
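To make the first point concrete, here is a tiny Coq example: the statement of modus ponens is a type, and a proof is any term inhabiting that type, which the kernel merely type-checks.

```coq
(* The statement is a type; a proof is a term of that type. *)
Definition modus_ponens : forall A B : Prop, (A -> B) -> A -> B :=
  fun A B f a => f a.

(* The same theorem built interactively; `Qed` hands the elaborated
   proof term to the kernel, which type-checks it. *)
Lemma modus_ponens' : forall A B : Prop, (A -> B) -> A -> B.
Proof. intros A B f a. exact (f a). Qed.
```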
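And to give a shape to the second point, here is a minimal sketch of an LCF-style kernel interface, transliterated into Coq's module syntax purely for illustration (real LCF kernels are written in ML, and all the names here are made up).

```coq
(* The type `thm` is abstract: outside the kernel, the only way to
   obtain a thm is through the inference rules the kernel exports. *)
Module Type LCF_KERNEL.
  Parameter form : Type.                  (* formulas of the object logic *)
  Parameter Imp : form -> form -> form.   (* implication on formulas *)
  Parameter thm : Type.                   (* opaque type of theorems *)
  Parameter concl : thm -> form.          (* what a theorem states *)
  (* Modus ponens: from a theorem of A ==> B and a theorem of A,
     derive a theorem of B (None if the formulas do not match). *)
  Parameter MP : thm -> thm -> option thm.
End LCF_KERNEL.
```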
In both cases the idea is that you have a kernel (the type checker in the former case and the inference kernel in the latter) that you have to trust, and then you have a large ecosystem of additional procedures around this to provide more convenience layers. Since they have to interact with the kernel in order to actually produce theorems, however, you do not have to trust that code.
All these different systems have various trade-offs about what parts of the system are in the kernel and what parts are not. In general, I think it is fair to say that the dependently-typed systems tend to have considerably larger kernels than the LCF-based ones (e.g. HOL Light has a particularly small and simple kernel).
There are also other systems that I believe do not fit into these two categories (e.g. Mizar, ACL2, PVS, Metamath, NuPRL) but I don't know anything about how these are implemented.
In the case of LCF, HOL and Isabelle, you'll find an extensive answer to your question in the journal article "From LCF to Isabelle/HOL". (It's open access.)
Most dependently typed systems, such as Coq, are also LCF-style theorem provers, as described in the article and in Eberl's answer. One significant difference is that such calculi incorporate full proof objects, so that one of the objectives of the LCF approach — to save space by not storing proofs — is abandoned. However, the objective of soundness is still met.

Confusing obligations generated by `Program` tactic

I'm quite new to the Coq proof assistant and am still finding my feet.
I've encountered a case which I don't know how to deal with: I tried to use the Program Fixpoint command to weaken the requirements on my code and to prove the needed properties afterwards, as so-called Obligations. While most of them were easy, two obligations generated goals of the form [a-quite-simple-expr] = [my-function-name]_obligation_3; generally speaking, the goals referred to other obligations which had been proved before. I tried unfolding and substitutions, but it didn't really help.
If there's no general solution for such problems, I can send the proof script and a screenshot of the obligation to add some context.
Thank you in advance.
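(For context, here is a minimal example of the kind of definition in play; the function and its measure are hypothetical, not the actual code from the question.)

```coq
From Coq Require Import Program.

(* Program Fixpoint lets the body postpone side conditions; Coq then
   emits them as numbered obligations such as div2_obligation_1. *)
Program Fixpoint div2 (n : nat) {measure n} : nat :=
  match n with
  | S (S m) => S (div2 m)
  | _ => 0
  end.
(* Any obligation not solved by the default tactic is proved with
   `Next Obligation.` followed by an ordinary proof script. *)
```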
One thing that might be happening is that you have types which contain both "data" and "proofs" (typically if you're trying to make refinement types with sig, or a custom inductive Type which contains proof terms), and that your goals require propositional equality between such values, which is generally too strong for dependent types.
Proof terms should be irrelevant: the simplest way out is to resolve such a goal using the proof_irrelevance axiom from the ProofIrrelevance module in the stdlib.
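For instance, a goal equating two proofs of the same proposition is immediate with that axiom (a minimal sketch):

```coq
From Coq Require Import ProofIrrelevance.

(* Under proof irrelevance, any two proofs of one Prop are equal;
   this dispatches goals that equate opaque obligation proof terms. *)
Goal forall (P : Prop) (p q : P), p = q.
Proof. intros P p q. apply proof_irrelevance. Qed.
```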
There are axiom-free ways, but I believe they require much more work/expertise.

First-class modules in Coq

Could anyone give me more information about first-class modules in Coq? I know that modules in Coq are not first-class, and I would like to know why. Is it possible that Coq modules will become first-class in the future?
Thank you very much
I’m not certain, but as I understand it, it comes down to essentially two points:
Coq is conservative. Because some of its main applications are in verification, Coq mostly restricts itself to constructs whose semantics are reasonably well-understood.
In the dependent-types setting, first-class modules are rather subtle and not fully understood. In particular, how much of the computational/reduction behaviour of definitions do you want visible outside a module? If none at all, then this is already available, as record types (see the sketch at the end of this answer). But if some or all of the reduction behaviour is visible, then it’s hard to quantify exactly how much is, so it’s quite hard to analyse the semantics of the resulting modules.
I’m not an expert on the relevant literature, so I may well be wrong about point 2, but I’ve gotten the impression that this is the basic situation.
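To illustrate the record-type alternative mentioned in point 2, here is a small sketch (the names are made up for the example):

```coq
(* A record packaging a carrier with its operations behaves like a
   first-class module value: it can be passed around and computed with. *)
Record Monoid := {
  carrier : Type;
  neutral : carrier;
  op : carrier -> carrier -> carrier
}.

Definition nat_add_monoid : Monoid :=
  {| carrier := nat; neutral := 0; op := Nat.add |}.

(* Records are ordinary values, so functions may take and return them. *)
Definition squared (M : Monoid) (x : carrier M) : carrier M :=
  op M x x.
```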