Tracing tactics in Isabelle

Section 9.4, "The Classical Reasoner", of the Isar Reference Manual says:
The tactics can be traced, and their components can be called
directly; in this manner, any proof can be viewed interactively.
I have found sections in this manual about tracing the simplifier and higher-order unification.
How can I trace e.g. the classical reasoner?

The foundations of Coq

I'm assuming Coq at some point moved to an LCF approach. In the past, I wondered about the foundations of the kernel in Isabelle, and I found a nice description of Isabelle/Pure in a master's thesis that summarizes the existing literature.
I was wondering if there is a description of Coq's kernel covering the logical and implementation aspects of it.
I think your question is similar to How does one implement Coq?
At least I'm tempted to give a similar answer.
I think MetaCoq is the state-of-the-art effort to specify and (partially) verify the Coq kernel: https://github.com/MetaCoq/metacoq.
It started out as a library for meta-programming in Coq, and as such implements a representation of the kernel inside Coq. It has evolved a lot and now contains the typing rules of (a subset of) Coq, as well as formalisations of several meta-theoretical properties, a type-checker, and an erasure mechanism.
Now, to address your question:
The Coq reference manual already offers some sort of specification of the Calculus of Inductive Constructions, which should always be up to date with the latest version of Coq.
The MetaCoq Project paper also attempts a specification of the predicative calculus of cumulative inductive constructions (PCUIC).
You seem to think that a specification done in the proof assistant itself might somehow have less value than a paper specification; obviously I do not exactly think so (but I'm one of the authors, so I'm biased). This is a fair concern, but at least as far as the specification goes, mechanization makes it much more precise than could be done on paper; the Coq reference manual can be imprecise at times. Our work also forces us to make explicit the invariants of representations that are not enforced in OCaml. We also separate implementation from specification (the Coq reference manual is pretty implementation-oriented), although arguably more work needs to be done on this separation.
Otherwise, people usually treat subsets of these calculi, especially regarding inductive types, which are rather painful to lay out entirely.

How are proof assistants implemented?

What are the main blocks of a proof assistant?
I am just interested in knowing the internal logic of proof checking. For example, topics about graphical user interfaces of such assistants do not interest me.
A similar question to mine has been asked for compilers:
https://softwareengineering.stackexchange.com/questions/165543/how-to-write-a-very-basic-compiler
My concern is the same but for proof checking systems.
I'm hardly an expert on the matter (I'm only a user of these systems; I don't worry too much about their internals) and this will probably only be a vague partial answer, but the two main approaches that I know of are:
Dependently-typed systems (e.g. Coq, Lean, Agda) that use the Curry–Howard isomorphism. Statements are just types, and proofs are terms that have that type, so checking the validity of a proof is essentially just a special case of type checking a term. I don't want to say too much about this approach because I don't know too much about it and am afraid I'll get something wrong. Théo Winterhalter linked something in the comments above that may provide more context on this approach.
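Still, the core idea can be shown in miniature. Here is a tiny sketch of the propositional fragment in OCaml (OCaml is not dependently typed, and the conj/disj type names are mine; in Coq or Agda the analogous terms would count as actual proofs):

    (* Statements as types: a value of a type is a proof of that statement. *)
    type ('a, 'b) conj = Conj of 'a * 'b           (* A /\ B *)
    type ('a, 'b) disj = Left of 'a | Right of 'b  (* A \/ B *)

    (* A proof of A /\ B -> B /\ A; type checking it is proof checking. *)
    let conj_comm (Conj (a, b)) = Conj (b, a)

    (* A proof of A \/ B -> B \/ A. *)
    let disj_comm = function Left a -> Right a | Right b -> Left b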
LCF-style theorem provers (e.g. Isabelle, HOL Light, HOL 4). Here a theorem is (roughly speaking) an opaque value of type thm in the implementation language. Only the comparatively small 'proof kernel' can create these thm values, and all other parts of the system interact with this proof kernel. The kernel offers an interface consisting of various small functions that implement small inference steps such as modus ponens (if you have a theorem A ⟹ B and a theorem A, you can get the theorem B) or ∀-introduction (if you have the theorem P x for a fixed variable x, you can get the theorem ∀x. P x), etc. The kernel also offers an interface for defining new constants. In principle, as long as you can trust that these functions faithfully implement the basic inference steps of the underlying logic, you can trust that any thm value you can produce really corresponds to a theorem in your logic. For LCF-style provers, the question of what the actual proof is is a bit more difficult to answer, because they usually don't build proof terms (e.g. Isabelle has them, but they are disabled by default and not widely used). I think one could say that the history of how the kernel primitives are called constitutes the proof, and if one were to record it, it could, in principle, be replayed and checked in another system.
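To make the LCF idea concrete, here is a minimal sketch of such a kernel in OCaml (the formula language and the tiny rule set are invented for illustration; real kernels such as HOL Light's or Isabelle/Pure's have richer terms and many more rules):

    (* A minimal sketch of an LCF-style proof kernel in OCaml. *)
    module Kernel : sig
      type formula =
        | Var of string                  (* atomic proposition *)
        | Imp of formula * formula       (* A ==> B *)
        | Forall of string * formula     (* !x. A *)
      type thm                           (* abstract: only Kernel builds thms *)
      val assume : formula -> thm        (* A |- A *)
      val mp : thm -> thm -> thm         (* from A ==> B and A, derive B *)
      val gen : string -> thm -> thm     (* from P, derive !x. P *)
      val dest : thm -> formula list * formula
    end = struct
      type formula =
        | Var of string
        | Imp of formula * formula
        | Forall of string * formula

      (* a theorem is a sequent: hypotheses |- conclusion *)
      type thm = Thm of formula list * formula

      let assume a = Thm ([a], a)

      let mp (Thm (h1, c1)) (Thm (h2, c2)) =
        match c1 with
        | Imp (a, b) when a = c2 -> Thm (h1 @ h2, b)
        | _ -> failwith "mp: theorems do not match"

      let rec free x = function
        | Var y -> x = y
        | Imp (a, b) -> free x a || free x b
        | Forall (y, a) -> x <> y && free x a

      let gen x (Thm (hyps, c)) =
        (* usual side condition: x must not occur free in the hypotheses *)
        if List.exists (free x) hyps then failwith "gen: x free in hypotheses"
        else Thm (hyps, Forall (x, c))

      let dest (Thm (h, c)) = (h, c)
    end

Since the constructor of thm is hidden by the signature, code outside Kernel can only build theorems by composing assume, mp and gen; it can never forge a thm, which is exactly the guarantee described above.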
In both cases the idea is that you have a kernel (the type checker in the former case and the inference kernel in the latter) that you have to trust, and then you have a large ecosystem of additional procedures around this to provide more convenience layers. Since they have to interact with the kernel in order to actually produce theorems, however, you do not have to trust that code.
All these different systems have various trade-offs about what parts of the system are in the kernel and what parts are not. In general, I think it is fair to say that the dependently-typed systems tend to have considerably larger kernels than the LCF-based ones (e.g. HOL Light has a particularly small and simple kernel).
There are also other systems that I believe do not fit into these two categories (e.g. Mizar, ACL2, PVS, Metamath, NuPRL) but I don't know anything about how these are implemented.
In the case of LCF, HOL and Isabelle, you'll find an extensive answer to your question in the journal article "From LCF to Isabelle/HOL". (It's open access.)
Most dependently typed systems, such as Coq, are also LCF-style theorem provers, as described in the article and in Eberl's answer. One significant difference is that such calculi incorporate full proof objects, so that one of the objectives of the LCF approach — to save space by not storing proofs — is abandoned. However, the objective of soundness is still met.

Typed Racket Optimizer

I am learning some Typed Racket at the moment, and I have a somewhat philosophical dilemma:
Racket claims to be a language development framework, and Typed Racket is one such language implemented on top of it. The documentation mentions that, because types are used, the compiler can now do more/better optimizations.
The concrete question:
Where do these optimizations happen?
1) In the compile/expand part (which is "programmable" as part of the language building framework)
-or-
2) further down the line in the (bytecode) optimizer (which is written in C and not directly modifiable via the framework)?
If 2) is true, does that mean the type information is lost after the compile/expand stage and later "rebuilt/guessed" by the optimizer, or has the intermediate representation been altered to accommodate the type information and inform later stages about it?
The reason I am asking this specific question is that I want to get a feeling for how general the Racket language framework really is, i.e. whether it is also viable for statically typed languages without any modifications to the backend, versus the type system being only a front-end thing while the code at runtime is still dynamically typed (but statically checked, of course).
Thank you.
Typed Racket's optimizations occur during macro expansion. To see for yourself, you can change #lang typed/racket to #lang typed/racket #:no-optimize, which shows Typed Racket is in complete control of what optimizations are applied.
The optimizations consist of using type information to replace various uses of certain procedures with their unsafe equivalents. The unsafe procedures perform no runtime checks on the types of their arguments and cause undefined behavior (read: segfaults) if used incorrectly. You can find out more in the documentation section entitled Optimization in Typed Racket.
The exposure of the unsafe variants of procedures is what really makes it possible for user-defined languages to implement these optimizations. For example, if you wrote your own language with a type system that could prove vectors were never accessed with out-of-bounds indices, you could replace uses of vector-ref with unsafe-vector-ref.
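The same safe-to-unsafe rewriting exists in other languages. As a rough illustration in OCaml terms (Array.unsafe_get plays the role of unsafe-vector-ref here, and the sum function is my own example, not Racket code):

    (* Sum over an int array: the loop bounds guarantee i is in range, so the
       bounds check of the safe Array.get is redundant and can be skipped.
       Typed Racket performs the analogous vector-ref -> unsafe-vector-ref
       rewrite automatically, justified by type information. *)
    let sum (a : int array) =
      let total = ref 0 in
      for i = 0 to Array.length a - 1 do
        total := !total + Array.unsafe_get a i
      done;
      !total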
There are similar optimizations that occur at the bytecode level, but these mostly apply when the JIT can infer type information that's not visible at macro expansion time. These are not user-controlled, but you don't have to rely on them.

Definition of a certified program

I see a couple of different research groups, and at least one book, that talk about using Coq for designing certified programs. Is there a consensus on the definition of a certified program? From what I can tell, all it really means is that the program was proved total and type-correct. Now, the program's type may be something really exotic, such as a list with a proof that it's nonempty, sorted, with all elements >= 5, etc. However, ultimately, is a certified program just one that Coq shows is total and type-safe, where all the interesting questions boil down to what was included in the final type?
Edit 1
Based on wjedynak's answer, I had a look at Xavier Leroy's paper "Formal Verification of a Realistic Compiler", which is linked in the answers below. I think this contains some good information, but the more informative material in this line of research can be found in the paper Mechanized Semantics for the Clight Subset of the C Language by Sandrine Blazy and Xavier Leroy. This is the language to which the "Formal Verification" paper adds optimizations. In it, Blazy and Leroy present the syntax and semantics of the Clight language and then discuss the validation of these semantics in section 5. There, they list the different strategies used for validating the compiler, which in some sense provides an overview of strategies for creating a certified program. These are:
Manual reviews
Proving properties of the semantics
Verified translations
Testing executable semantics
Equivalence with alternate semantics
In any case, there are probably points that could be added and I'd certainly like to hear about more.
Going back to my original question of what the definition of a certified program is, it's still a little unclear to me. Wjedynak sort of provides an answer, but really the work by Leroy involved creating a compiler in Coq and then, in some sense, certifying it. In theory, this makes it possible to prove things about C programs, since we can now go C -> Coq -> proof. In that sense, it seems like there's a workflow where we could:
Write a program in X language
Form a model of the program from step 1 in Coq or some other proof assistant. This could involve creating a model of the programming language in Coq, or it could involve creating a model of the program directly (i.e. rewriting the program itself in Coq).
Prove some property about the model. Maybe it's a proof about the values. Maybe it's a proof of the equivalence of statements (stuff like 3 = 1 + 2 or f(x,y) = f(y,x), whatever).
Then, based on these proofs, call the original program certified.
Alternatively, we could create a specification of a program in a proof assistant tool and then prove properties about the specification, but not the program itself.
In any case, I'm still interested in hearing alternative definitions if anyone has them.
I agree that the notion seems vague, but in my understanding a certified program is a program equipped with (or accompanied by) a proof of correctness. Now, by using rich and expressive type signatures you can arrange things so that there is no need for a separate proof, but this is often only a matter of convenience. The real issue is what we mean by correctness: this is a matter of specification. You can take a look at, e.g., Xavier Leroy, Formal verification of a realistic compiler.
First note that the phrase "certified" has a slightly French bias: elsewhere the expression "verified" or "proven" is often used.
In any case it is important to ask what that actually means. Xavier Leroy's CompCert is a very good starting point: it is a big project on C compiler verification, and Leroy is always keen to explain to his audience why verification matters, especially when talking to people from "certification agencies" who usually mean testing, not proving.
Another big verification project is L4.verified, which uses Isabelle/HOL. This part of the exposition explains a bit of what is actually stated and proven, and what the consequences are. Unfortunately, the actual proof is top secret, so it cannot be checked publicly.
A certified program is a program that is paired with a proof that the program satisfies its specification, i.e., a certificate. The key is that there exists a proof object that can be checked independently of the tool that produced the proof.
A verified program has undergone verification, which in this context may typically mean that its specification has been formalized and proven correct in a system like Coq, but the proof is not necessarily certified by an external tool.
This distinction is well attested in the scientific literature and is not specific to Francophones. Xavier Leroy describes it very clearly in Section 2.2 of A formally verified compiler back-end.
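One way to make the "independently checkable certificate" idea concrete is a certifying computation: the program returns a result together with evidence that a small, trusted checker can verify. Here is a toy OCaml sketch (all names are mine, and in a real certified program the certificate would be a proof object checked by a proof kernel, not a runtime check):

    (* The sorter is untrusted; correctness of each run is established by an
       independent check of the result, mirroring how a proof object can be
       checked independently of the tool that produced it. *)
    let untrusted_sort (xs : int list) = List.sort compare xs

    let rec is_sorted = function
      | [] | [_] -> true
      | x :: (y :: _ as rest) -> x <= y && is_sorted rest

    let count x = List.fold_left (fun n y -> if y = x then n + 1 else n) 0

    (* permutation check: every element occurs equally often in both lists *)
    let is_permutation xs ys =
      List.length xs = List.length ys
      && List.for_all (fun x -> count x xs = count x ys) xs

    let certified_sort xs =
      let ys = untrusted_sort xs in
      if is_sorted ys && is_permutation xs ys then ys
      else failwith "certificate check failed"

Only is_sorted and is_permutation need to be trusted here, just as only the proof checker, not the proof-producing tool, needs to be trusted.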
My understanding is that "certified" in this sense is, as Makarius pointed out, an English word chosen by Francophones where native speakers might instead have used "formally verified". Coq was developed in France, and has many Francophone developers and users.
As to what "formal verification" means, Wikipedia notes (license: CC BY-SA 3.0) that it:
is the act of proving ... the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
(I realise you would like a much more precise definition than this. I hope to update this answer in future, if I find one.)
Wikipedia especially notes the difference between verification and validation:
Validation: "Are we trying to make the right thing?", i.e., is the product specified to the user's actual needs?
Verification: "Have we made what we were trying to make?", i.e., does the product conform to the specifications?
The landmark paper seL4: Formal Verification of an OS Kernel (Klein, et al., 2009) corroborates this interpretation:
A cynic might say that an implementation proof only shows that the implementation has precisely the same bugs that the specification contains. This is true: the proof does not guarantee that the specification describes the behaviour the user expects. The difference [in a verified approach compared to a non-verified one] is the degree of abstraction and the absence of whole classes of bugs.
Which classes of bugs are those? The Agda tutorial gives some idea:
no runtime errors (inevitable errors like I/O errors are handled; others are excluded by design).
no non-productive infinite loops.
It may mean free of runtime errors (numeric overflow, invalid references, …), which is already good compared to most software as developed, while still weak. The other meaning is: proved correct according to a domain formalization; that is, the program not only has to be formally free of runtime errors, it also has to be proved to do what it's expected to do (and what that is must have been precisely defined).

Type systems of functional object-oriented languages

I would like to know how exactly modern typed functional object-oriented languages, such as Scala and OCaml, combine parametric polymorphism, subtyping, and their other features.
Are they based on System F<:, or something stronger, or weaker?
Is there a well-studied formal type system, like System FC for Haskell, which could serve as a "core" for these languages?
OCaml
The "core" of OCaml type theory consists of extensions of System F,
but the module system corresponds to a mix of F<:
(modules can be coerced into stricter signature by subtyping) and
Fω.
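As a small illustration of that module-level coercion (the signature and module names here are mine):

    (* COUNTER is a stricter signature than what M actually provides: the
       ascription hides the concrete type and the extra debug function. *)
    module type COUNTER = sig
      type t
      val zero : t
      val incr : t -> t
    end

    module M = struct
      type t = int
      let zero = 0
      let incr n = n + 1
      let debug n = string_of_int n   (* not visible through COUNTER *)
    end

    module Counter : COUNTER = M      (* coercion to the stricter signature *)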
In the core language (without considering subtyping at the module level), subtyping is very restricted in OCaml, as subtyping relations cannot be abstracted over (there is no bounded quantification). The language emphasizes parametric polymorphism instead; in particular, even the "extensible" types it supports use row polymorphism at their core (with a convenience layer of subtyping between closed such types).
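A small example of row polymorphism on OCaml objects (the function and object names are mine); note that greet involves no subtyping at all, it is polymorphic in the "rest of the row":

    (* greet : < name : string; .. > -> string
       The .. is a row variable: any object with at least a name method fits. *)
    let greet o = "hello " ^ o#name

    let alice = object method name = "alice" method age = 42 end
    let bob   = object method name = "bob" end

    let () =
      print_endline (greet alice);   (* extra methods are fine *)
      print_endline (greet bob)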
For an introduction to type-theoretic presentations of OCaml, see the online book by Didier Remy, Using, Understanding, and Unraveling the OCaml Language (From Practice to Theory and vice versa). Its further-reading chapter will give you more references, in particular about the treatment of object orientation.
There has been a lot of work on formalizations of the module system; arguably, the ML module systems do not naturally fit Fω or F<:ω as a core formalism (for one thing, type parameters are named in a module system, instead of being passed by position as in lambda-calculi). One of the best explanations of the correspondence is F-ing modules, first published in 2010 by Andreas Rossberg, Claudio Russo and Derek Dreyer.
Jacques Garrigue has also done a lot of work on the more advanced features of the language (those that cannot be summarized as "just syntactic sugar over System F"), namely polymorphic variants (equi-recursive structural types), labelled arguments, and GADTs; both polymorphic variants and GADTs are sketched below. Various descriptions of these aspects can be found on his webpage, including mechanized proofs (in Coq) of polymorphic variants and of the relaxed value restriction.
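To give a flavour of the two features (both snippets are standard textbook-style examples, not taken from Garrigue's papers):

    (* Polymorphic variants: structural, extensible sum types.  No prior
       type declaration is needed; the type is inferred from the uses. *)
    let rgb = function
      | `Red   -> (255, 0, 0)
      | `Green -> (0, 255, 0)
      | `Blue  -> (0, 0, 255)

    (* A GADT indexing expressions by the type they evaluate to, so the
       evaluator needs no runtime tags and no partial matches. *)
    type _ expr =
      | Int  : int -> int expr
      | Bool : bool -> bool expr
      | Add  : int expr * int expr -> int expr
      | If   : bool expr * 'a expr * 'a expr -> 'a expr

    let rec eval : type a. a expr -> a = function
      | Int n -> n
      | Bool b -> b
      | Add (x, y) -> eval x + eval y
      | If (c, t, e) -> if eval c then eval t else eval e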
You should also look at the webpage A few papers on Caml, which points to some of the research articles around the OCaml language.
Scala
The similar page for Scala is this one. Particularly relevant to your question are:
A Core Calculus for Scala Type Checking, by Vincent Cremet, François Garillot, Sergueï Lenglet and Martin Odersky, 2006
Generics of a Higher Kind, by Adriaan Moors, Frank Piessens, and Martin Odersky, 2008