Python 3.8 typing.Protocol: Anything standard for adapter registries?

The zope.interface package provides (among many other things) run-time adapter registries, which make it possible to find a suitable implementation of some interface at run-time.
Python 3.8 now has structural subtyping support, but the question is: are there any standard-library mechanisms to achieve at least some primitive run-time adaptation out of the box? In other words, given some instance animal and an interface IFlying, is it possible to look up an adapter via IFlying(animal)? Or is typing.Protocol purely for type checking?
The motivation for this question is: does it make sense to continue using zope.interface in new code, or will typing.Protocol make it obsolete soon (at least for simple adapter cases)?
I can see opinions like this, which hint that some standard interface support is there, but I can't find concrete examples of how to replace an adapter registry with the Python 3.8 (or more recent) standard library, short of writing some library on top of it.
Note: I am aware of "What to use in replacement of an interface/protocol in python", but my question is specifically about how to make adaptation (and even multi-adapters) possible.
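To make the question concrete, below is a minimal hand-rolled sketch of the kind of lookup I mean, written against typing.Protocol. The register/adapt helpers and the IFlying/Penguin names are hypothetical illustrations of what zope.interface's registry gives me, not anything the standard library provides:

    from typing import Callable, Dict, Protocol, Tuple, Type, TypeVar, runtime_checkable

    T = TypeVar("T")

    # Hypothetical registry mapping (protocol, concrete class) -> adapter factory.
    _registry: Dict[Tuple[type, type], Callable] = {}

    def register(protocol: type, cls: type, factory: Callable) -> None:
        _registry[(protocol, cls)] = factory

    def adapt(protocol: Type[T], obj: object) -> T:
        # If obj already satisfies the protocol structurally, use it as-is.
        if isinstance(obj, protocol):
            return obj  # type: ignore[return-value]
        # Otherwise walk the MRO looking for a registered adapter.
        for base in type(obj).__mro__:
            factory = _registry.get((protocol, base))
            if factory is not None:
                return factory(obj)
        raise TypeError(f"no adapter from {type(obj).__name__} to {protocol.__name__}")

    @runtime_checkable
    class IFlying(Protocol):
        def fly(self) -> str: ...

    class Penguin:  # has no fly() of its own
        pass

    class CatapultedPenguin:  # adapts a Penguin to IFlying
        def __init__(self, penguin: Penguin) -> None:
            self.penguin = penguin
        def fly(self) -> str:
            return "wheee"

    register(IFlying, Penguin, CatapultedPenguin)
    print(adapt(IFlying, Penguin()).fly())  # prints: wheee

Note that isinstance only works here because of @runtime_checkable, and even then it checks method presence rather than signatures; everything beyond that (the registry, the MRO walk, multi-adapters) has to be hand-written.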

Related

invoke coq typechecker from external programs

What would be the best way to interact with Coq from an external program? For example, let's say I want to programmatically generate programs / proofs in some language other than Coq and I just want to call Coq to typecheck them. Is there a standard way to do something like that?
You have a few options.
1. Construct .v files, invoke coqc, check the return code, and parse the output of coqc
This is, in some sense, the most stable way to interact with Coq: it has the best stability across versions. It's also the most inflexible; you create a .v file and check it all in one go.
For an example of this method, see my Coq bug minimizer (specifically get_coq_output in diagnose_error.py), which repeatedly makes small alterations to a .v file and checks that the alterations don't change the error message given by coqc.
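Since the bug minimizer linked above is written in Python, here is a minimal Python sketch of the same pattern; the Example.v file name is made up, and real uses will want version-appropriate coqc flags:

    import subprocess
    from typing import Tuple

    def check_with_coqc(v_file: str) -> Tuple[bool, str]:
        # Run coqc on a single .v file and capture everything it prints.
        # Assumes coqc is on the PATH.
        proc = subprocess.run(["coqc", v_file], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    ok, output = check_with_coqc("Example.v")
    if not ok:
        print("coqc rejected the file:")
        print(output)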
2. Use the XML protocol to communicate with coqtop
This is the method used by CoqIDE and by upcoming versions of ProofGeneral. Logitext invokes, from Haskell, a custom patched version of coqtop speaking the PGIP protocol, an earlier attempt at a more standardized way of communicating with the prover (see this issue for more details).
This is becoming more stable, and gives more fine-grained control over what you want checked. For example, it allows you to check multiple proofs within a single session, which is important if you depend on a large library that takes time to load, and need to check many small proofs.
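As a rough, heavily version-dependent sketch of the shape of this approach (the flags and the XML message below are assumptions that vary between Coq releases; newer Coq ships the slave as a separate coqidetop binary):

    import os
    import subprocess

    # Launch coqtop in ide-slave mode, talking XML over stdin/stdout.
    coq = subprocess.Popen(
        ["coqtop", "-ideslave", "-main-channel", "stdfds"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    # Illustrative message only; consult the xml-protocol documentation in
    # the Coq source tree for what your version actually accepts.
    coq.stdin.write(b'<call val="Init"><option/></call>')
    coq.stdin.flush()
    print(os.read(coq.stdout.fileno(), 65536))  # raw XML <value ...> reply
    coq.terminate()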
3. Write a custom OCaml toplevel wrapper for the interface to Coq that you want
The main example of this that I'm aware of is PIDEtop, which is used in the Coqoon Eclipse plugin. I suspect that some of the other entries in the GUI section of Related Tools use this method.
Note that coqtop is itself a toplevel wrapper in this style; the files in the toplevel/ folder of the Coq project are likely to be informative.
This gives you the most flexibility and reusability, at the cost of having to design your own protocol or implement an existing one.
4. Write your external program in OCaml and link with Coq
Much like (3), this method gives you as much flexibility as you want. In fact, the only difference between this and (3) is that in (3), you separate out the communication with Coq into its own binary, whereas here, you fuse communication with Coq with the other functionality of your program. I'm not aware of programs in this style, though I believe coqchk may qualify, as I think it shares a couple of files with the Coq kernel (see the checker/ folder in the Coq codebase).
Regardless of which way you choose, I think that modelling your approach on existing projects will be more fruitful than relying on the (as-yet incomplete) documentation of the various APIs and protocols. The API has been undergoing a lot of revision recently, in an attempt to get it into a reasonable and stable state, and the XML protocol has also been subject to recent improvements; @ejgallego has been the driving force behind many of these improvements.

What is the best way to implement optional library dependencies in Rust?

I am writing a toy software library in Rust that needs to be able to load images of almost any type into an internal data structure for the image. It is early days for the Rust ecosystem, and there is no one library/set of bindings that I would trust for this task.
I would ideally like:
Support multiple redundant external libraries that may or may not be available at run time.
Support multiple redundant external libraries that may or may not be available at compile time.
Include at least one fallback implementation that ships with my code.
Fully encapsulate all of the file loading stuff behind a function that does path -> InternalImage loading.
Is there a best practice way to implement optional dependencies like this in Rust? Some of the libraries will be Rust, and some of them will probably be C libraries with Rust bindings.
Cargo, the Rust package manager, can help with that. It lets you declare optional compile-time dependencies. See the [features] section of Cargo's documentation.
For runtime dependencies I'm not sure. I think std::dynamic_lib could be helpful. See an example of using DynamicLibrary in a previous SO question.

Avoid namespace conflicts in Java MPI-Bindings

I am using the MPJ API for my current project. The two implementations I am using are MPJ-express and Fast-MPJ. However, since they both implement the same API, namely the MPJ API, I cannot simultaneously support both implementations due to namespace collisions.
Is there any way to wrap two different libraries with the same package and class names such that both can be supported at the same time in Java or Scala?
So far, the only way I can think of is to move the module into separate projects, but I am not sure this would be the way to go.
If your code uses only a subset of the MPI functions (like most of the MPI code I've reviewed), you can write an abstraction layer (traits, or even the cake pattern) which defines the operations you are actually using. You can then implement a concrete adapter for each implementation.
This approach will also work with non-MPI communication layers (think Akka, JGroups, etc.).
As a bonus, you could use the SLF4J approach: the correct implementation is chosen at runtime according to what's actually on the classpath, as sketched below.
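The selection mechanism itself is language-agnostic; here is a hedged Python sketch of the SLF4J-style idea (the backend module names and the make_comm factory are made up for illustration): try each known backend in order and bind the first one that is importable.

    import importlib

    class Comm:
        # The abstraction layer: the subset of operations actually used.
        def send(self, dest: int, payload: bytes) -> None:
            raise NotImplementedError
        def recv(self, source: int) -> bytes:
            raise NotImplementedError

    # Hypothetical backend module names, tried in order of preference;
    # each is assumed to expose a make_comm() factory returning a Comm.
    _BACKENDS = ["mpj_express_bindings", "fast_mpj_bindings", "local_fallback"]

    def load_comm() -> Comm:
        for name in _BACKENDS:
            try:
                module = importlib.import_module(name)
            except ImportError:
                continue  # backend not installed; try the next one
            return module.make_comm()
        raise RuntimeError("no communication backend available")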

Why do web development frameworks tend to work around the static features of languages?

When I started using Lift, I was a little surprised at how heavily it uses reflection (or appears to); that was unexpected in a statically typed functional language. My experience with JSP was similar.
I'm pretty new to web development, so I don't really know how these tools work, but I'm wondering,
What aspects of web development encourage using reflection?
Are there any tools (in statically typed languages) that handle (1) referring to code from a template page and (2) object-relational mapping, in a way that does not use reflection?
Please see the Lift source. It doesn't use reflection for most of the code that I have studied; almost everything is statically typed. If you are referring to Lift views, they are processed as XML nodes, and that too is not reflection.
Specifically referring to the <lift:Foo.bar/> issue:
When <lift:Foo.bar/> is encountered in the code, Lift makes a few guesses at what the original name might be (to account for different naming conventions) and then calls java.lang.Class.forName to get the class. (Relevant code in LiftSession.scala and ClassHelpers.scala.) It will only find classes registered with addToPackages during boot.
Note that it is also possible (and common) to register classes and methods manually. The convention is still that all transformations must be of the form NodeSeq => NodeSeq, because that is the only thing which makes sense for untyped HTML/XHTML output.
So what you have is Lift's internal registry of node transformations on one side, and the implicit registry of the module on the other. Both use a simple string lookup to execute a method. I guess it is arguable whether one is more reflection-based than the other.

Code generation for Java JVM / .NET CLR

I am taking a compilers course at college, and we must generate code for our invented language, targeting any platform we want. I think the simplest cases are generating code for the Java JVM or the .NET CLR. Any suggestions on which one to choose, and which APIs out there can help me with this task? I already have all the semantic analysis done; I just need to generate code for a given program.
Thank you
From what I know, at a high level the two VMs are actually quite similar: both are classic stack-based machines with largely high-level operations (e.g. virtual method dispatch is an opcode). That said, the CLR lets you get down to the metal if you want, as it has raw data pointers with arithmetic, raw function pointers, unions, etc. It also has proper tail calls. So, if the implementation of the language needs any of the above (e.g. the Scheme spec mandates tail calls), or if it is significantly advantaged by having those features, then you would probably want to go the CLR way.
The other advantage there is that you get a stock API to emit bytecode, System.Reflection.Emit; even though it is somewhat limited for full-fledged compiler scenarios, it is still generally enough for a simple compiler.
With the JVM, the two main advantages you get are better portability and the fact that the bytecode itself is arguably simpler (because it has fewer features).
Another option I came across is a library called RunSharp, which can generate MSIL code at runtime using Emit, but in a nicer, more user-friendly way that reads more like C#. The latest version of the library can be found here:
http://code.google.com/p/runsharp/
In .NET you can use the Reflection.Emit namespace to generate MSIL code. See the MSDN documentation: http://msdn.microsoft.com/en-us/library/3y322t50.aspx