on iPhone/ARM, which CPU registers are functions supposed to preserve, if any?
Old, but incorrect answer. Wikipedia is often inaccurate (sometimes downright incorrect), and this is an example of the former case. There is a generic calling convention (which is what Wikipedia documents), but OSes can deviate from it - both Android and iOS do (and likely Win 8 will, but we'll know that when the binaries start surfacing).
http://developer.apple.com/library/ios/#documentation/Xcode/Conceptual/iPhoneOSABIReference/Introduction/Introduction.html
provides the correct spec for iOS, so there's no sense in repeating it here. Most notably, note the use of r7 and r12. Also note that ARMv6 and ARMv7 differ. By now, you want the ARMv7 architecture (A4, A5, A6, ...).
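To make the register rules concrete, here is a minimal sketch, assuming a GCC/Clang-style toolchain targeting ARMv7 (sum3 is a made-up example function). Under the iOS ABI, r0-r3 carry arguments and are scratch, r4-r11 are callee-saved (with r7 doubling as the frame pointer on iOS), and r12 is an intra-call scratch register:

    // A leaf function that uses a callee-saved register must save and
    // restore it; the argument/scratch registers need no preservation.
    extern "C" __attribute__((naked)) int sum3(int a, int b, int c)
    {
        __asm__ volatile(
            "push {r4, lr}   \n" // r4 is callee-saved: preserve before use
            "add  r4, r0, r1 \n" // a + b (first four args arrive in r0-r3)
            "add  r0, r4, r2 \n" // the result is returned in r0
            "pop  {r4, pc}   \n" // restore r4 and return
        );
    }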
Wikipedia's article on Calling Convention has a good summary of the conventions for ARM.
I see a couple of different research groups, and at least one book, that talk about using Coq for designing certified programs. Is there a consensus on what the definition of a certified program is? From what I can tell, all it really means is that the program was proved total and type correct. Now, the program's type may be something really exotic, such as a list with a proof that it's nonempty, sorted, with all elements >= 5, etc. However, ultimately, is a certified program just one that Coq shows is total and type safe, where all the interesting questions boil down to what was included in the final type?
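To make "really exotic" concrete, here's a toy sketch of such a type in Lean 4 syntax (the idea is the same in Coq; GoodList and sample are names I made up):

    -- A "certified" value is the value packaged with a proof about it.
    -- Here: a list of naturals carrying evidence that it is nonempty
    -- and that every element is at least 5. A sortedness predicate
    -- could be bundled in exactly the same way.
    def GoodList : Type := { l : List Nat // l ≠ [] ∧ ∀ x ∈ l, 5 ≤ x }

    -- Both properties are decidable for a concrete list, so `decide`
    -- discharges the proof obligation automatically.
    def sample : GoodList := ⟨[5, 7, 9], by decide⟩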
Edit 1
Based on wjedynak's answer, I had a look at Xavier Leroy's paper "Formal Verification of a Realistic Compiler", which is linked in the answers below. I think it contains some good information, but the more informative material in this line of research can be found in the paper Mechanized Semantics for the Clight Subset of the C Language by Sandrine Blazy and Xavier Leroy. This is the language that the "Formal Verification" paper adds optimizations to. In it, Blazy and Leroy present the syntax and semantics of the Clight language and then discuss the validation of these semantics in section 5. There, they list the different strategies used for validating the compiler, which in some sense provides an overview of different strategies for creating a certified program. These are:
Manual reviews
Proving properties of the semantics
Verified translations
Testing executable semantics
Equivalence with alternate semantics
In any case, there are probably points that could be added and I'd certainly like to hear about more.
Going back to my original question of what the definition of a certified program is, it's still a little unclear to me. Wjedynak sort of provides an answer, but really the work by Leroy involved creating a compiler in Coq and then, in some sense, certifying it. In theory, this makes it possible to prove things about C programs, since we can now go C -> Coq -> proof. In that sense, it seems like there's a workflow where we could:
Write a program in X language
Form a model of the program from step 1 in Coq or some other proof assistant. This could involve creating a model of the programming language in Coq, or it could involve creating a model of the program directly (i.e. rewriting the program itself in Coq).
Prove some property about the model. Maybe it's a proof about the values. Maybe it's a proof of the equivalence of statements (stuff like 3=1+2 or f(x,y)=f(y,x), whatever).
Then, based on these proofs, call the original program certified.
Alternatively, we could create a specification of a program in a proof assistant tool and then prove properties about the specification, but not the program itself.
In any case, I'm still interested in hearing alternative definitions if anyone has them.
I agree that the notion seems vague, but in my understanding a certified program is a program equipped with a proof of correctness. Now, by using rich and expressive type signatures you can make it so there is no need for a separate proof, but this is often only a matter of convenience. The real issue is what we mean by correctness: this is a matter of specification. You can take a look at e.g. Xavier Leroy, Formal verification of a realistic compiler.
First note that the phrase "certified" has a slightly French bias: elsewhere the expression "verified" or "proven" is often used.
In any case it is important to ask what that actually means. X. Leroy and CompCert is a very good starting point: it is a big project about C compiler verification, and Leroy is always keen to explain to his audience why verification matters. Especially when talking to people from "certification agencies" who usually mean testing, not proving.
Another big verification project is L4.verified, which uses Isabelle/HOL. This part of the exposition explains a bit about what is actually stated and proven, and what the consequences are. Unfortunately, the actual proof is top secret, so it cannot be checked publicly.
A certified program is a program that is paired with a proof that the program satisfies its specification, i.e., a certificate. The key is that there exists a proof object that can be checked independently of the tool that produced the proof.
A verified program has undergone verification, which in this context may typically mean that its specification has been formalized and proven correct in a system like Coq, but the proof is not necessarily certified by an external tool.
This distinction is well attested in the scientific literature and is not specific to Francophones. Xavier Leroy describes it very clearly in Section 2.2 of A formally verified compiler back-end.
My understanding is that "certified" in this sense is, as Makarius pointed out, an English word chosen by Francophones where native speakers might instead have used "formally verified". Coq was developed in France, and has many Francophone developers and users.
As to what "formal verification" means, Wikipedia notes (license: CC BY-SA 3.0) that it:
is the act of proving ... the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
(I realise you would like a much more precise definition than this. I hope to update this answer in future, if I find one.)
Wikipedia especially notes the difference between verification and validation:
Validation: "Are we trying to make the right thing?", i.e., is the product specified to the user's actual needs?
Verification: "Have we made what we were trying to make?", i.e., does the product conform to the specifications?
The landmark paper seL4: Formal Verification of an OS Kernel (Klein, et al., 2009) corroborates this interpretation:
A cynic might say that an implementation proof only shows that the
implementation has precisely the same bugs that the specification
contains. This is true: the proof does not guarantee that the
specification describes the behaviour the user expects. The
difference [in a verified approach compared to a non-verified one]
is the degree of abstraction and the absence of whole classes of bugs.
Which classes of bugs are those? The Agda tutorial gives some idea:
no runtime errors (inevitable errors like I/O errors are handled; others are excluded by design).
no non-productive infinite loops.
It may mean free of runtime errors (numeric overflow, invalid references, ...), which is already good compared to most software being developed, while still weak. The other meaning is: proved correct with respect to a domain formalization; that is, it not only has to be formally free of runtime errors, it also has to be proved to do what it's expected to do (and that expectation must have been precisely defined).
In the History of Lisp, McCarthy writes:
The unexpected appearance of an interpreter tended to freeze the form of the language, and some of the decisions made rather lightheartedly for the "Recursive functions ..." paper later proved unfortunate. These included the COND notation for conditional expressions which leads to an unnecessary depth of parentheses, and the use of the number zero to denote the empty list NIL and the truth value false. Besides encouraging pornographic programming, giving a special interpretation to the address 0 has caused difficulties in all subsequent implementations.
What's he talking about?
... zero to denote the empty list ...
because 0==() has been the emoticon for pornography since 1958.
Now you know.
The fact that too many implementation details were leaking through to a higher level, i.e. showing up where they shouldn't.
The original Fortran III spec document, a technical paper disseminated in the Winter of 1958 describes some very explicit additions to the Fortran II language, including ... inline assembly.
The PDF is here
A tantalizing description of the "additions", along with a sample of the taboo code itself, can be found in the paper.
Mysteriously, Fortran III was never released to the public (see section 5), but was disseminated in limited fashion before quietly fading away.
I think it is about mixing numerical and logical values, which can still be seen in popular constructs like while (1), probably originating in Fortran. There are a lot of "clever" C algorithms that rely on the fact that 0 is false and every other value isn't.
The same applies at large to API calls, as in POSIX or the Linux kernel, some of which return 0 on failure while others return -1 (there's a rule of thumb for when to apply which, but it is just folklore, so it is often broken). Considering that at McCarthy's time those things hadn't been developed yet, you can see his "prophetic" power even here.
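For instance (a plain C++ illustration of the conflation, not code from any particular project):

    #include <cstring>
    #include <cstdio>

    int main()
    {
        // strcmp returns 0 when the strings are EQUAL, so the common
        // "clever" idiom negates it: success is signalled by the very
        // value that also means false - the 0/false conflation
        // McCarthy regretted, alive and well.
        if (!std::strcmp("lisp", "lisp"))
            std::puts("equal");

        // And the classic Fortran-flavoured infinite loop: any
        // nonzero number counts as truth.
        while (1) {
            break; // immediately, for the sake of the example
        }
    }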
Perhaps it was his way of talking about null references: the billion dollar mistake (T. Hoare).
Obviously, that will depend on what you want to do: numerical analysis, threading, databases, etc. I've seen the benchmarks; Larceny and Bigloo seem to come out ahead. Is there any implementation of Scheme that performs well across several different benchmarks? Are there any that can produce code that runs faster than SBCL's? I don't see why SBCL should be so fast - Scheme is a far simpler language than Common Lisp!
http://community.schemewiki.org/?Stalin
http://en.wikipedia.org/wiki/Stalin_(Scheme_implementation)
From Wikipedia:
Stalin (STAtic Language ImplementatioN) is an aggressive optimizing
batch whole-program Scheme compiler written by Jeffrey Mark Siskind.
It uses advanced flow analysis and type inference and a variety of
other optimization techniques to produce code. Stalin is intended for
production use in generating an optimized executable.
The compiler itself runs slowly, and there is little or no support for
debugging or other niceties. Full R4RS Scheme is supported, with a few
minor and rarely encountered omissions. Interfacing to external C
libraries is straightforward. The compiler itself does lifetime
analysis and hence does not generate as much garbage as might be
expected, but global reclamation of storage is done using the Boehm
garbage collector.
It seems that Stalin is no longer being developed.
Among the Schemes that are fully standards compliant (at least with R5RS) and ready for prime-time use, Chez Scheme must be the fastest.
Based on these benchmarks, it looks like Chez Scheme, Gambit, and Racket are roughly tied for the title of Fastest Scheme.
I am taking a compilers course at college and we must generate code for our invented language, targeting any platform we want. I think the simplest case is generating code for the Java JVM or the .NET CLR. Any suggestion which one to choose, and which APIs out there can help me with this task? I already have all the semantic analysis done; I just need to generate code for a given program.
Thank you
From what I know, at a high level the two VMs are actually quite similar: both are classic stack-based machines with largely high-level operations (e.g. virtual method dispatch is an opcode). That said, the CLR lets you get down to the metal if you want, as it has raw data pointers with arithmetic, raw function pointers, unions, etc. It also has proper tail calls. So, if the implementation of your language needs any of the above (e.g. the Scheme spec mandates tail calls), or if it is significantly advantaged by having those features, then you would probably want to go the CLR way.
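To illustrate the "classic stack-based machine" point, here is a rough C++ sketch (the opcode names are invented, not either VM's actual instruction set) of how little work codegen needs on such a target: emitting code for an expression is just a post-order walk of the AST, with no register allocation.

    #include <vector>
    #include <cstdio>

    enum Op { PUSH, ADD, MUL };
    struct Instr { Op op; int imm; };

    // A toy interpreter for the stack machine: each operator pops its
    // operands and pushes the result, exactly like JVM/CLR arithmetic.
    int run(const std::vector<Instr>& code)
    {
        std::vector<int> stack;
        for (const Instr& i : code) {
            switch (i.op) {
                case PUSH: stack.push_back(i.imm); break;
                case ADD: { int b = stack.back(); stack.pop_back();
                            stack.back() += b; break; }
                case MUL: { int b = stack.back(); stack.pop_back();
                            stack.back() *= b; break; }
            }
        }
        return stack.back();
    }

    int main()
    {
        // (2 + 3) * 4, in the order a post-order AST walk emits it
        std::vector<Instr> code = {
            {PUSH, 2}, {PUSH, 3}, {ADD, 0}, {PUSH, 4}, {MUL, 0}
        };
        std::printf("%d\n", run(code)); // prints 20
    }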
The other advantage there is that you get a stock API to emit bytecode - System.Reflection.Emit - and even though it is somewhat limited for full-fledged compiler scenarios, it is still generally enough for a simple compiler.
With the JVM, the two main advantages you get are better portability and the fact that the bytecode itself is arguably simpler (because it has fewer features).
Another option that I came across is a library called RunSharp, which can generate MSIL code at runtime using Emit, but in a nicer, more user-friendly way that reads more like C#. The latest version of the library can be found here:
http://code.google.com/p/runsharp/
In .NET you can use the Reflection.Emit namespace to generate MSIL code.
See the MSDN link: http://msdn.microsoft.com/en-us/library/3y322t50.aspx
I have a range of Win32 VCL applications developed with C++Builder from BCB5 onwards, and want to port them to ECB2009 or whatever it's now called.
Some of my applications use the old TNT/TMS unicode components, so I have a good mix of AnsiStrings and WideStrings throughout the code. The new version introduces UnicodeString, and a bunch of #defines that change the way functions like c_str behave.
I want to modify my code in a way that is as backwards-compatible as possible, so that the same code base can still be compiled and run (in a non-unicode fashion) on BCB2007 if necessary.
Particular areas of concern are:
Passing strings to/from Win32 API
functions
Interop with TXMLDocument
'Raw' strings used for RS232 comms, etc.
Rather than knife-and-fork the changes, I'm looking for guidelines that I can apply to ease the migration, while keeping backwards compatibility wherever possible.
If no such guidelines already exist, maybe we can formulate some here?
The biggest issue is compatibility between C++Builder 2009 and previous versions; the Unicode differences are part of it, but the project configuration files have changed as well. From the discussions I've been following on the CodeGear forums, there are not a whole lot of choices in the matter.
I think the first place to start, if you have not done so, is the C++Builder 2009 release notes.
The biggest change seen has been the TCHAR mapping (to wchar_t or char); using the STL string varieties may be a help, since they shouldn't differ between the two versions. The mapping existed in C++Builder 2007 as well (via the tchar header).
For any code that does not need to be explicitly Ansi or explicitly Unicode, you should consider using the System::String, System::Char, and System::PChar typedefs as much as possible. That will ease a lot of the migration, and they work in previous versions.
When passing a System::String to an API function, you have to take the new "TCHAR maps to" setting in the Project options into account. If you try to pass AnsiString::c_str() when "TCHAR maps to" is set to "wchar_t", or UnicodeString::c_str() when "TCHAR maps to" is set to "char", you will have to perform appropriate typecasts; if "TCHAR maps to" is set to "wchar_t", UnicodeString::c_str() can be passed straight through. Technically, UnicodeString::t_str() returns exactly what TCHAR is in the API, however t_str() can be very dangerous if you misuse it (when "TCHAR maps to" is set to "char", t_str() transforms the UnicodeString's internal data to Ansi).
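As a sketch of that pattern (this assumes the C++Builder 2009 headers, and MessageBox stands in for any TCHAR-based Win32 API; ShowCaption is a made-up example):

    #include <vcl.h>
    #include <tchar.h>
    #include <windows.h>

    void ShowCaption(String caption) // String follows the RTL's typedef
    {
        // t_str() yields whatever TCHAR currently maps to, so this line
        // compiles under both "TCHAR maps to" settings - but remember
        // the warning above: with "char", t_str() converts the
        // UnicodeString's internal data to Ansi in place.
        MessageBox(NULL, caption.t_str(), _T("Caption"), MB_OK);
    }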
For "raw" strings, you can use the new RawByteString type (though I do not recommend it), or TBytes instead (which is an array of bytes - recommended). You should not be using Ansi/Wide/UnicodeString for non-character data to begin with. Most people used AnsiString as makeshift data buffers in past versions. Do not do that anymore. This is particularly important because AnsiString is now codepage-aware, and thus your data might get converted to other codepages when you least expect it.