History of EAFNOSUPPORT semantics?

I’m seeing three different “error strings” for the EAFNOSUPPORT / WSAEAFNOSUPPORT errno:
POSIX:
The implementation does not support the specified address family.
BSD (errno.h, _sys_errlist[]):
Address family not supported by protocol family
Windows®/Winsock2:
An address incompatible with the requested protocol was used
While the semantics of the latter two are pretty much identical, the former differs quite a bit (it does not reference a protocol family; rather, it states that the given address family is not supported in a particular place).
I’m assuming both interpretations are valid, especially given that EPFNOSUPPORT (“Protocol family not supported”) is marked as non-POSIX in the BSD headers, but where does this difference come from? Incidentally, my informal historical understanding of this errno code matches the POSIX semantics more than the BSD/Winsock semantics…
I can imagine that the POSIX semantics come from older BSD sockets, and that EPFNOSUPPORT was added later, so EAFNOSUPPORT was redesignated in BSD sockets (with Winsock simply adopting that); alternatively, POSIX may have deliberately been worded differently.
Can anyone shed light on this, perhaps explain the history (code heritage, etc.)?
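(Not an answer to the history question, but for concreteness: a small Python sketch that prints whichever of these strings your platform ships. The family number 12345 is an assumption; any value no platform assigns will do.)

```python
import errno
import socket

try:
    # 12345 is (presumably) not a valid address family on any platform,
    # so socket() should fail with EAFNOSUPPORT (WSAEAFNOSUPPORT on Windows).
    socket.socket(12345, socket.SOCK_STREAM)
except OSError as e:
    # On Linux/BSD, e.errno equals errno.EAFNOSUPPORT and e.strerror is
    # the platform-specific string quoted above.
    print(e.errno == errno.EAFNOSUPPORT, e.strerror)
```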

Related

Python3.8 typing Protocol: Anything standard for adapter registries?

The zope.interface package has (among many other things) run-time adapter registries, which allow you to find suitable implementations of an interface at run-time.
Now, Python 3.8 has structural subtyping support, but the question is: are there any standard-library mechanisms to achieve at least some primitive run-time adaptation out of the box? In other words, given some instance animal and an interface IFlying, is it possible to look up an adapter via IFlying(animal)? Or is typing.Protocol purely for type checking?
The motivation for this question: does it make sense to keep using zope.interface in new code, or will typing.Protocol make it obsolete soon (at least for simple adapter cases)?
I can see opinions like this, which hint that some standard interface support is there, but I can't find concrete examples of how to replace an adapter registry with the Python 3.8 (or more recent) standard library, short of writing a library on top of it.
Note: I am aware of What to use in replacement of an interface/protocol in python, but my question is specifically about how to make adaptation (and even multi-adapters) possible.
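For what it's worth, my understanding is that typing.Protocol by itself gives you only a structural isinstance() check (via @runtime_checkable); any adapter lookup has to be hand-rolled. A minimal sketch of what that might look like (all names, such as IFlying, register_adapter, and adapt, are invented for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class IFlying(Protocol):
    def fly(self) -> str: ...

# A hand-rolled registry keyed by (interface, concrete class).
_registry = {}

def register_adapter(iface, cls, adapter):
    _registry[(iface, cls)] = adapter

def adapt(iface, obj):
    # Structural check first: @runtime_checkable enables isinstance(),
    # but it only verifies method *names*, not signatures.
    if isinstance(obj, iface):
        return obj
    adapter = _registry.get((iface, type(obj)))
    if adapter is None:
        raise TypeError(f"cannot adapt {obj!r} to {iface.__name__}")
    return adapter(obj)

class Penguin:
    pass  # does not fly by itself

class GliderHarness:
    def __init__(self, animal):
        self.animal = animal
    def fly(self) -> str:
        return "gliding, somehow"

register_adapter(IFlying, Penguin, GliderHarness)
print(adapt(IFlying, Penguin()).fly())  # -> gliding, somehow
```

As far as I know, nothing in the standard library performs this lookup for you, so for registries and multi-adapters zope.interface still does work the stdlib does not.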

Invoke Coq typechecker from external programs

What would be the best way to interact with Coq from an external program? For example, let's say I want to programmatically generate programs / proofs in some language other than Coq and I just want to call Coq to typecheck them. Is there a standard way to do something like that?
You have a couple of options.
Construct .v files, invoke coqc, check the return code and parse the output of coqc.
This is, in some sense, the most stable way to interact with Coq. It has the most inter-version stability. It's also the most inflexible; you create a .v file, and check it all in one go.
For an example of this method, see my Coq bug minimizer (specifically get_coq_output in diagnose_error.py), which repeatedly makes small alterations to a .v file and checks that the alterations don't change the error message given by coqc.
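As a sketch of this workflow (this is not the actual get_coq_output, just a minimal illustration assuming coqc is on the PATH):

```python
import subprocess
import tempfile

def coq_typechecks(vernacular):
    """Return (ok, combined coqc output) for a generated Coq source."""
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".v", delete=False
    ) as f:
        f.write(vernacular)
        path = f.name
    proc = subprocess.run(
        ["coqc", path],  # assumes coqc is on the PATH
        capture_output=True,
        text=True,
    )
    return proc.returncode == 0, proc.stdout + proc.stderr

ok, output = coq_typechecks("Definition two : nat := 2.")
print(ok, output)
```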
Use the XML protocol to communicate with coqtop
This is the method used by CoqIDE and by upcoming versions of Proof General. Logitext invokes, from Haskell, a custom patched version of coqtop using the PGIP protocol, an earlier attempt at a more standardized way of communicating with the prover (see this issue for more details).
This is becoming more stable, and gives more fine-grained control over what you want checked. For example, it allows you to check multiple proofs within a single session, which is important if you depend on a large library that takes time to load, and need to check many small proofs.
Write a custom OCaml toplevel wrapper for the interface to Coq that you want
The main example of this that I'm aware of is PIDEtop, which is used in the Coqoon Eclipse plugin. I suspect that some of the other entries in the GUI section of Related Tools use this method.
Note that coqtop is itself a toplevel wrapper in this style; the files in the toplevel/ folder of the Coq project are likely to be informative.
This gives you the most flexibility and reusability, at the cost of having to design your own protocol, or implement an existing protocol.
Write your external program in OCaml and link with Coq
Much like (3), this method gives you as much flexibility as you want. In fact, the only difference between this and (3) is that in (3), you separate out the communication with Coq into its own binary, whereas here, you fuse communication with Coq with the other functionality of your program. I'm not aware of programs in this style, though I believe coqchk may qualify, as I think it shares a couple of files with the Coq kernel (see the checker/ folder in the Coq codebase).
Regardless of which way you choose, I think that modelling on existing projects will be more fruitful than relying on the (as-yet incomplete) documentation for the various APIs and protocols. The API has been undergoing a lot of revision recently, in an attempt to get it into a reasonable and stable state, and the XML protocol has also seen recent improvements; @ejgallego has been the driving force behind many of these improvements.

Definition of a certified program

I see a couple of different research groups, and at least one book, that talk about using Coq for designing certified programs. Is there a consensus on the definition of a certified program? From what I can tell, all it really means is that the program was proved total and type-correct. Now, the program's type may be something really exotic, such as a list with a proof that it's nonempty, sorted, with all elements >= 5, etc. However, ultimately, is a certified program just one that Coq shows is total and type-safe, where all the interesting questions boil down to what was included in the final type?
Edit 1
Based on wjedynak's answer, I had a look at Xavier Leroy's paper "Formal Verification of a Realistic Compiler", which is linked in the answers below. I think this contains some good information, but I think the more informative material in this line of research is the paper Mechanized Semantics for the Clight Subset of the C Language by Sandrine Blazy and Xavier Leroy. Clight is the language to which the "Formal Verification" paper adds optimizations. In it, Blazy and Leroy present the syntax and semantics of the Clight language and then discuss the validation of these semantics in section 5, which lists the different strategies used for validating the compiler and, in some sense, provides an overview of strategies for creating a certified program. These are:
Manual reviews
Proving properties of the semantics
Verified translations
Testing executable semantics
Equivalence with alternate semantics
In any case, there are probably points that could be added and I'd certainly like to hear about more.
Going back to my original question of what the definition of a certified program is, it's still a little unclear to me. Wjedynak sort of provides an answer, but really the work by Leroy involved creating a compiler in Coq and then, in some sense, certifying it. In theory, this makes it possible to prove things about C programs, since we can now go C -> Coq -> proof. In that sense, it seems like there's a workflow where we could:
Write a program in X language
Form a model of the program from step 1 in Coq or some other proof-assistant tool. This could involve creating a model of the programming language in Coq, or it could involve creating a model of the program directly (i.e., rewriting the program itself in Coq).
Prove some property about the model. Maybe it's a proof about the values. Maybe it's the proof of the equivalence of statements (stuff like 3=1+2 or f(x,y)=f(y,x), whatever.)
Then, based on these proofs, call the original program certified.
Alternatively, we could create a specification of a program in a proof assistant tool and then prove properties about the specification, but not the program itself.
In any case, I'm still interested in hearing alternative definitions if anyone has them.
I agree that the notion seems vague, but in my understanding a certified program is a program equipped with a proof of its correctness. Now, by using rich and expressive type signatures you can arrange things so that there is no need for a separate proof, but this is often only a matter of convenience. The real issue is what we mean by correctness: that is a matter of specification. You can take a look at, e.g., Xavier Leroy, Formal Verification of a Realistic Compiler.
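To make the "program together with a proof" reading concrete, here is a toy illustration (written in Lean 4 purely for brevity; nothing about it is Coq-specific, and both styles carry over directly):

```lean
-- The program itself:
def double (n : Nat) : Nat := n + n

-- Style 1: the specification proved as a separate theorem.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double; omega

-- Style 2: a rich type signature that carries the specification,
-- so callers get the proof bundled with the result.
def double' (n : Nat) : { m : Nat // m = 2 * n } :=
  ⟨n + n, by omega⟩
```

In the second style, the interesting content has indeed moved entirely into the type, which is exactly the trade-off the question asks about.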
First note that the phrase "certified" has a slightly French bias: elsewhere the expression "verified" or "proven" is often used.
In any case it is important to ask what that actually means. X. Leroy and CompCert is a very good starting point: it is a big project about C compiler verification, and Leroy is always keen to explain to his audience why verification matters. Especially when talking to people from "certification agencies" who usually mean testing, not proving.
Another big verification project is L4.verified, which uses Isabelle/HOL. This part of the exposition explains a bit of what is actually stated and proven, and what the consequences are. Unfortunately, the actual proof is top secret, so it cannot be checked publicly.
A certified program is a program that is paired with a proof that the program satisfies its specification, i.e., a certificate. The key is that there exists a proof object that can be checked independently of the tool that produced the proof.
A verified program has undergone verification, which in this context may typically mean that its specification has been formalized and proven correct in a system like Coq, but the proof is not necessarily certified by an external tool.
This distinction is well attested in the scientific literature and is not specific to Francophones. Xavier Leroy describes it very clearly in Section 2.2 of A formally verified compiler back-end.
My understanding is that "certified" in this sense is, as Makarius pointed out, an English word chosen by Francophones where native speakers might instead have used "formally verified". Coq was developed in France, and has many Francophone developers and users.
As to what "formal verification" means, Wikipedia notes (license: CC BY-SA 3.0) that it:
is the act of proving ... the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
(I realise you would like a much more precise definition than this. I hope to update this answer in future, if I find one.)
Wikipedia especially notes the difference between verification and validation:
Validation: "Are we trying to make the right thing?", i.e., is the product specified to the user's actual needs?
Verification: "Have we made what we were trying to make?", i.e., does the product conform to the specifications?
The landmark paper seL4: Formal Verification of an OS Kernel (Klein, et al., 2009) corroborates this interpretation:
A cynic might say that an implementation proof only shows that the implementation has precisely the same bugs that the specification contains. This is true: the proof does not guarantee that the specification describes the behaviour the user expects. The difference [in a verified approach compared to a non-verified one] is the degree of abstraction and the absence of whole classes of bugs.
Which classes of bugs are those? The Agda tutorial gives some idea:
no runtime errors (inevitable errors like I/O errors are handled; others are excluded by design).
no non-productive infinite loops.
It may mean free of runtime errors (numeric overflow, invalid references, …), which is already good compared to most software as developed, while still weak. The other meaning is: proved to be correct according to a domain formalization; that is, the program not only has to be formally free of runtime errors, it also has to be proved to do what it's expected to do (and what is expected must itself have been precisely defined).

Dynamic typing and programming distributed systems

Coming from Scala (and Akka), I recently began looking at other languages that were designed with distributed computing in mind, namely Erlang (and a tiny bit of Oz and Bloom). Both Erlang and Oz are dynamically typed, and if I remember correctly (I will try to find the link), people have tried to add types to Erlang and managed to type a good portion of it, but could not successfully coerce the system to fit the last bit.
Oz, while a research language, is certainly interesting to me, but that is dynamically typed as well.
Bloom's current implementation is in Ruby, and is consequently dynamically typed.
To my knowledge, Scala (and I suppose Haskell, though I believe that was built initially more as an exploration of pure lazy functional languages than of distributed systems) is the only language that is statically typed and offers language-level abstractions (for lack of a better term) for distributed computing.
I am just wondering if there are inherent advantages of dynamic typing over static typing, specifically in the context of providing language level abstractions for programming distributed systems.
Not really. For example, the same group that invented Oz later did some work on Alice ML, a project whose mission statement was to rethink Oz as a typed, functional language. And although it remained a research project, I'd argue that it was enough proof of concept to demonstrate that the same basic functionality can be supported in such a setting.
(Full disclosure: I was a PhD student in that group at the time, and the type system of Alice ML was my thesis.)
Edit: The problem with adding types to Erlang isn't distribution, it simply is an instance of the general problem that adding types to a language after the fact never works out well. On the other hand, there still is Dialyzer for Erlang.
Edit 2: I should mention that there were other interesting research projects for typed distributed languages, e.g. Acute, which had a scope similar to Alice ML, or ML5, which used modal types to enable stronger checking of mobility characteristics. But they have only survived in the form of papers.
There are no inherent advantages of dynamic typing over static typing for distributed systems. Both have their own advantages and disadvantages in general.
Erlang (whose actor model inspired Akka's) is dynamically typed. Dynamic typing in Erlang was historically chosen for simple reasons: those who first implemented Erlang mostly came from dynamically typed languages, particularly Prolog, and as such, making Erlang dynamic was the most natural option to them. Erlang was also built with failure in mind.
Static typing helps catch many errors at compilation time rather than at runtime, as with dynamic typing. Static typing was tried in Erlang, and it was a failure. But dynamic typing helps with faster prototyping. Check this link for reference; it says a lot about the difference.
Subjectively, I would rather think about the solution/algorithm for a problem than about the type of each of the variables that I use in the algorithm. It also helps with quick development.
Here are a few links which might help:
BenefitsOfDynamicTyping
static-typing-vs-dynamic-typing
BizarroStaticTypingDebate
Cloud Haskell is maturing quickly, statically-typed, and awesome. The only thing it doesn't feature is Erlang-style hot code swapping - that's the real "killer feature" of dynamically-typed distributed systems (the "last bit" that made Erlang difficult to statically type).

Writing programs in dynamic languages that go beyond what the specification allows

With the growth of dynamically typed languages, which give us more flexibility, it is very likely that people will write programs that go beyond what the specification allows.
My thinking was influenced by this question, when I read the answer by bobince:
A question about JavaScript's slice and splice methods
The basic thought is that splice, in JavaScript, is specified to be used only in certain situations, but it can be used in others, and there is nothing that the language can do to stop it, as the language is designed to be extremely flexible.
Unless someone reads through the specification and decides to adhere to it, I am fairly certain that many such violations are occurring.
Is this a problem, or a natural consequence of writing such flexible languages? Or should we expect tools like JSLint to act as the specification police?
I liked one answer to this question, that the implementation of Python is the specification. I am curious whether that is actually closer to the truth for these types of languages: basically, if the language allows you to do something, then it is in the specification.
Is there a Python language specification?
UPDATE:
After reading a couple of comments, I thought I would check the splice method in the spec, and this is what I found at the bottom of pg. 104 of http://www.mozilla.org/js/language/E262-3.pdf; so it appears that I can use splice on the array of children without violating the spec. I don't want people to get bogged down in my example, though; hopefully they will consider the broader question.
The splice function is intentionally generic; it does not require that its this value be an Array object. Therefore it can be transferred to other kinds of objects for use as a method. Whether the splice function can be applied successfully to a host object is implementation-dependent.
UPDATE 2:
I am not interested in this being about JavaScript, but about language flexibility and specs. For example, I expect that the Java spec specifies that you can't put code into an interface, but using AspectJ I do that frequently. This is probably a violation, but the writers didn't predict AOP, and the tool was flexible enough to be bent to this use, just as the JVM is also flexible enough to host Scala and Clojure.
Whether a language is statically or dynamically typed is really a tiny part of the issue here: a statically typed one may make it marginally easier for code to enforce its specs, but marginally is the key word.

Only "design by contract" -- a language letting you explicitly state preconditions, postconditions and invariants, and enforcing them -- can help guard you against users of your libraries empirically discovering what exactly the library will let them get away with, and taking advantage of those discoveries to go beyond your design intentions (possibly constraining your future freedom to change the design or its implementation). And "design by contract" is not supported in mainstream languages -- Eiffel is the closest to that, and few would call it "mainstream" nowadays -- presumably because its costs (mostly, inevitably, at runtime) don't appear to be justified by its advantages.

"Argument x must be a prime number", "method A must have been called before method B can be called", "method C cannot be called any more once method D has been called", and so on -- the typical kinds of constraints you'd like to state (and have enforced implicitly, without having to spend substantial programming time and energy checking for them yourself) -- just don't lend themselves well to being framed within the little that a statically typed language's compiler can enforce.
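For a flavour of what bolting contracts on by hand looks like in a language without native support (unlike Eiffel), here is a hypothetical Python sketch; the requires decorator and every name in it are invented for illustration:

```python
import functools

def requires(predicate, message):
    """Attach a precondition to a function, enforced at call time."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError("precondition violated: " + message)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# "Argument x must be a prime number", enforced at every call:
@requires(lambda x: is_prime(x), "x must be a prime number")
def multiplicative_group_order(x):
    return x - 1  # order of (Z/xZ)* when x is prime

print(multiplicative_group_order(7))  # 6
# multiplicative_group_order(8)       # would raise ValueError
```

Note that the check runs, inevitably, at call time, which is exactly the runtime cost mentioned above.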
I think that this sort of flexibility is an advantage as long as your methods are designed around well-defined interfaces rather than some artificial external "type" metadata. Most of the array functions only expect an object with a length property. The fact that they can all be applied generically to lots of different kinds of objects is a boon for code reuse.
The goal of any high-level language design should be to reduce the amount of code that needs to be written in order to get stuff done, without harming readability too much. The more code that has to be written, the more bugs get introduced. Restrictive type systems can be (if not well designed) a premature optimisation at best and a pervasive lie at worst. I don't think overly restrictive type systems aid in writing correct programs: a type is merely an assertion, not necessarily based on evidence.
By contrast, the array methods examine their input values to determine whether they have what they need to perform their function. This is duck typing, and I believe that this is more scientific and "correct", and it results in more reusable code, which is what you want. You don't want a method rejecting your inputs because they don't have their papers in order. That's communism.
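Translated into a Python sketch (function and class names invented for illustration), the pattern looks like this: the function asks only for the capabilities it needs, much like the generic array methods above.

```python
def last_element(seq):
    # Ask only for the capabilities we need: len() and indexing.
    if len(seq) == 0:
        raise ValueError("empty sequence")
    return seq[len(seq) - 1]

class RingBuffer:
    """Not a list, but 'array-like' enough: supports len() and indexing."""
    def __init__(self, items):
        self._items = tuple(items)
    def __len__(self):
        return len(self._items)
    def __getitem__(self, i):
        return self._items[i % len(self._items)]

print(last_element([1, 2, 3]))          # 3: works on a real list
print(last_element(RingBuffer("abc")))  # 'c': works on anything duck-shaped
```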
I do not think your question really has much to do with dynamic vs. static typing. Really, I can see two cases: on one hand, there are things like Duff's device that martin clayton mentioned; that usage is extremely surprising the first time you see it, but it is explicitly allowed by the semantics of the language. If there is a standard, that kind of idiom may appear in later editions of the standard as a specific example. There is nothing wrong with these; in fact, they can (unless overused) be a great productivity boost.
The other case is that of programming to the implementation. Such a case would be an actual abuse, coming from either ignorance of a standard, or lack of a standard, or having a single implementation, or multiple implementations that have varying semantics. The problem is that code written in this way is at best non-portable between implementations and at worst limits the future development of the language, for fear that adding an optimization or feature would break a major application.
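A classic concrete instance of programming to the implementation, sketched for CPython (none of the behaviour below is guaranteed by the language specification):

```python
# CPython caches small integers (-5..256), so `is` happens to agree with
# `==` for them. Code relying on this is bound to the implementation.
a = int("256")
b = int("256")
print(a is b)  # True in CPython: both names point at the cached object

c = int("257")
d = int("257")
print(c is d)  # False in CPython: 257 is outside the small-int cache
```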
It seems to me that the original question is a bit of a non sequitur. If the specification explicitly allows a particular behavior (as MUST, MAY, SHALL or SHOULD), then any compiler/interpreter that allows/implements the behavior is, by definition, compliant with the language. This would seem to be the situation proposed by the OP in the comments section: the JavaScript specification supposedly* says that the function in question MAY be used in different situations, and thus it is explicitly allowed.
If, on the other hand, a compiler/interpreter implements or allows behavior that is expressly forbidden by a specification, then the compiler/interpreter is, by definition, operating outside the specification.
There is yet a third scenario, and an associated, well-defined term for those situations where the specification does not define a behavior: undefined. If the specification does not actually specify a behavior in a given situation, then the behavior is undefined, and may be handled either intentionally or unintentionally by the compiler/interpreter. It is then the responsibility of the developer to realize that the behavior is not part of the specification and, should s/he choose to leverage the behavior, that the application thereby becomes dependent upon the particular implementation.

The interpreter/compiler providing that implementation is under no obligation to maintain the officially undefined behavior beyond backwards compatibility and whatever commitments the producer may make. Furthermore, a later iteration of the language specification may define the previously undefined behavior, leaving the compiler/interpreter either (a) non-compliant with the new iteration, or (b) forced to release a new patch/version to become compliant, thereby breaking older versions.
* "supposedly" because I have not seen the spec, myself. I go by the statements made, above.