Dynamic binding of services in Thrift or gRPC / passing service as argument - callback

I work with an existing system that uses a lot of dynamic service registration, using Android HIDL/AIDL. For example:
Multiple objects implement:
IHandler { Response handleRequest(Id subset, Request r); }
One object implements:
class IHandlerCoordinator {
    Response handleRequest(Id subset, Request r);
    void RegisterHandler(std::vector<Id> subsets, IHandler handler_for_subset_ids);
}
Multiple objects, on startup or dynamically, register with the IHandlerCoordinator (passing the expected subset of what they can handle), and then the IHandlerCoordinator dispatches incoming requests to the registered handlers.
In xIDL this requires passing services as arguments; how can it be emulated in Thrift / gRPC?

With regard to Thrift: there is no such thing as callbacks yet. There have been some discussions around that topic (see the mailing list archives and/or JIRA) but there's no implementation. The biggest challenge is doing it in a transport-agnostic way, so the current consensus is that you have to implement it manually if you need it.
Technically, there are two general ways to do it:
implement a server instance on the client side as well, which receives the callbacks
integrate long-running calls or a polling mechanism to actively retrieve "callback" data from the server by means of client calls
With gRPC it's easier, because gRPC is built on HTTP/2, whereas Thrift has been open from the beginning to any kind of transport you can imagine.
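Whichever option you pick, the coordinator side boils down to a registry that maps each Id to its handler (in the remote case, a client stub pointing back at the service that registered itself) and dispatches on lookup. Below is a minimal in-process Java sketch of that pattern; all type and method names are hypothetical stand-ins for the xIDL ones above:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-ins for the xIDL types from the question.
interface Handler {
    String handleRequest(long subsetId, String request);
}

class HandlerCoordinator implements Handler {
    // Registry of which handler serves which Ids; over Thrift/gRPC the
    // values would be client stubs for the services that registered.
    private final Map<Long, Handler> registry = new ConcurrentHashMap<>();

    void registerHandler(List<Long> subsetIds, Handler handler) {
        for (long id : subsetIds) {
            registry.put(id, handler);
        }
    }

    @Override
    public String handleRequest(long subsetId, String request) {
        Handler h = registry.get(subsetId);
        if (h == null) {
            throw new IllegalArgumentException("no handler for subset " + subsetId);
        }
        return h.handleRequest(subsetId, request); // dispatch to the registered handler
    }
}

public class CoordinatorDemo {
    public static void main(String[] args) {
        HandlerCoordinator coordinator = new HandlerCoordinator();
        coordinator.registerHandler(List.of(1L, 2L),
                (id, req) -> "handled " + req + " for subset " + id);
        System.out.println(coordinator.handleRequest(1L, "ping"));
    }
}

With gRPC, registration is often modeled as a long-lived bidirectional stream opened by the handler toward the coordinator, over which the coordinator pushes requests and receives responses for that handler's subset.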

What is the difference between a protocol and an interface in general?

I understand an interface is a set of publicly exposed things that one system can use to interact with other systems. I'm reading about the WebRTC protocol, and to understand what a protocol is I went to the Wikipedia definition. It says, more or less, that a protocol is a system of rules that allows two systems to communicate. Isn't that the same as an interface? Maybe I'm not understanding one or both.
An interface defines how two entities may communicate. A protocol defines how they should communicate and what that communication means.
Here is an interface:
public interface ICommunicate
{
    string SendMessageAndGetResponse(string message);
}
A protocol then might be:
Send "Hello", if you get back "Hi" then send "How are you?" and the response will be a status. If you get back anything other than "Hi" from the initial message then the system is not functioning properly and you must send the message "Reboot" which you'll then get back "Rebooted!" if successful and anything else for failure.
In general, interface means "The point of interconnection or contact between entities." Transferred to software, it means "The connection between parts of software." and also "In object-oriented programming, a piece of code defining a set of operations that other code must implement." (Source)
In general, protocol means "The official formulas which appeared at the beginning or end of certain official documents such as charters, papal bulls etc." Transferred to computers, it means "A set of formal rules describing how to transmit or exchange data, especially across a network." (Source)
So protocol focuses more on the data exchange, whereas interface focuses more on software interaction independent of any data exchange.
Of course, in the end, software interaction is most of the time a data exchange. Passing arguments to a function is a data exchange. Calling a method/function is not directly a data exchange, but you can think of it like this: instead of calling different functions:
c = add(a, b);
c = sub(a, b);
you could just as well always call the same function and pass the desired functionality as an argument:
c = func("add", a, b);
c = func("sub", a, b);
and that way the functionality becomes data as well.
The terms are somewhat interchangeable. E.g., some programming languages call it an interface to focus on the pure interaction of components (classes, objects, etc.) and some call it a protocol to focus on the data exchange between the components.
On a network, a protocol is how data is exchanged; think of the IP or TCP protocols. But if you have communication endpoints that you talk to over a network to trigger functionality, e.g. a REST API, then the sum of all these endpoints and their parameters can be called an interface, while triggering one of the interface functions would be done via an HTTP request, and HTTP is a protocol that defines how data is transferred.
I think in some ways the word 'interface' could also be used for this (especially as the I in API), but generally, when talking about what we're sending over the wire, the common word is protocol.
When you dive deep enough into exact definitions of words, the meaning and difference can sometimes break down a bit.
But avoiding super exact semantics, API/Interface tends to be a bit higher level than protocol.

Right REST method for logic execution

I know this is debatable, but what is the right HTTP method for an endpoint that just takes an input, executes some logic, and returns the response?
For example: a REST endpoint which takes an integer and returns some number series.
As described in the RFC for the HTTP protocol (https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html), HTTP methods can be idempotent or not:
Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.
So if your logic noticeably changes the state of the system, you should use a non-idempotent method: POST. If the only change in the system caused by calling the service method is a record in a log file, use a safe HTTP method, for instance GET.
For me, since you are not creating/altering/removing any resources, it should be GET, but I would like to hear other opinions on the point.
What you are talking about doesn't really sound like REST; it sounds more like an RPC call. POST is usually the right HTTP method for "anything that doesn't fit well in another method", and it is commonly used for RPC calls.
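For illustration, a minimal sketch using the JDK's built-in com.sun.net.httpserver; the /fib endpoint and the Fibonacci series are hypothetical stand-ins for "some number series". Since the computation reads and writes no server state, GET is safe here:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class SeriesServer {
    static String fibonacci(int n) {
        StringBuilder sb = new StringBuilder();
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            sb.append(a).append(i + 1 < n ? "," : "");
            long next = a + b; a = b; b = next;
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // GET /fib?n=10 -> "0,1,1,2,3,5,8,13,21,34" (assumes a query like ?n=10)
        server.createContext("/fib", exchange -> {
            int n = Integer.parseInt(exchange.getRequestURI().getQuery().split("=")[1]);
            byte[] body = fibonacci(n).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
    }
}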

hunchentoot session- v. thread-localized values (ccl)

I'm using Hunchentoot session values to make my server code re-entrant. The problem is that session values are, by definition, retained during the session, i.e., from one call from the same browser to the next, whereas what I am really looking for amounts to thread-specific re-entrancy, so that all the values disappear between calls -- I want to treat each click as a separate "from scratch" event, even if the clicks are from the same session. It is easy enough to have the driver either set my session values to nil or delete them, but I'm wondering if there's a "correct" way to do this? I don't see any thread-based analogue of hunchentoot:session-value in the documentation.
Thanks in advance for any guidance you can offer.
If you want a value to be "thread-specific" and at the same time "from scratch" on every request, that requires that every request be dispatched in a brand-new thread. This is not the case according to the Hunchentoot documentation, which says that two models are supported: a single-threaded taskmaster and a thread-per-connection taskmaster.
If your configuration is multi-threaded, a thread-specific variable bound in a request handler can therefore be expected to be per-connection. In a single-threaded Hunchentoot setup, it will effectively be global, tied to the request-servicing thread.
A thread-based analogue of hunchentoot:session-value probably doesn't exist because it would only introduce behaviors into the web app which surprisingly change if the threading model is reconfigured, or if the request pattern from the browser changes. A browser can make multiple requests using the same connection, or close the connection between requests.
To extend the request objects with custom per-request data, I would look into, perhaps, subclassing the acceptor (how to do this is described in the docs). My custom acceptor would have a custom method on the process-connection generic function which would create extended/subclassed request objects carrying the extra data I wanted to put into a request.
Another way would be to have some global weak hash table which maps request objects, as keys, to the additional information.
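For what it's worth, here is the weak-hash idea illustrated in Java rather than Lisp (purely illustrative; Hunchentoot itself would use a Lisp weak hash table keyed on its request objects):

import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class PerRequestData {
    // Weak keys: once a request object becomes unreachable, its entry
    // disappears too, so nothing is retained across requests.
    static final Map<Object, String> extras =
            Collections.synchronizedMap(new WeakHashMap<>());

    public static void main(String[] args) {
        Object request = new Object(); // stand-in for a request object
        extras.put(request, "from-scratch value");
        System.out.println(extras.get(request)); // per-request lookup
    }
}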

Does vert.x have centralized filtering?

I am new to Vert.X.
Does Vert.x have a built-in facility for centralized filters? What I mean is the kind of filter that you would use in a J2EE application.
For instance, all pages have to go through the auth filter, or something like that.
Is there a standardized way to achieve this in Vert.x?
I know this question is quite old, but for those still looking for a filter in Vert.x 3, the solution is to use a subrouter as a filter:
// Your regular routes
router.route("/").handler((ctx) -> {
    ctx.response().end("...");
});
// Have more routes here...

Router filterRouter = Router.router(vertx);
filterRouter.get().handler((ctx) -> {
    // Do something smart

    // Forward to your actual router
    ctx.next();
});
filterRouter.mountSubRouter("/", router);
Filtering is an implementation of the chain-of-responsibility pattern in the servlet container. Vert.x does not have this kind of concept built in, but with Yoke (or Apex in the new release) you can easily reproduce this behavior.
Have a look at the routing section: https://github.com/vert-x3/vertx-web/blob/master/vertx-web/src/main/asciidoc/index.adoc
HTH,
Carlo
Vert.x is unopinionated about how such things should be handled. But generally speaking, these types of features are typically implemented as "bus mods" (i.e. modules/verticles which receive input and produce output over the event bus) in Vert.x 2. In fact, the auth manager module may help you get a better understanding of how this is done:
https://github.com/vert-x/mod-auth-mgr
In Vert.x 3 the module system will be/is gone, but the pattern remains the same. It's possible that some higher-level framework built on Vert.x could support these types of filters, but Vert.x core will not.
I'd also recommend you poke around in Vert.x Apex if you're getting started building web applications on Vert.x:
https://github.com/vert-x3/vertx-apex
Vert.x is more similar to node.js than to any other Java-based framework.
Vert.x relies on middleware: you can define handlers and attach them to routes, and depending on the order in which they are defined they will get called.
For example, let's say you have a user application where you would like to run logging and request verification before the controller is called.
You can do something like the following:
Router router = Router.router(vertx);
router.route("/*").handler(BodyHandler.create()); // Makes the body available in POST calls
router.route().handler(new Handler<RoutingContext>() {
    @Override
    public void handle(RoutingContext event) {
        // Handle logs
        event.next(); // pass the request on to the next matching handler
    }
});
router.route("/user").handler(new Handler<RoutingContext>() {
    @Override
    public void handle(RoutingContext event) {
        // Handle verification for all APIs starting with /user
        event.next();
    }
});
Here, depending on the route, the corresponding middleware will get called.
From my POV, this is exactly the opposite of what Vert.x tries to achieve. A verticle, being the core building block of the framework, is supposed to keep functions distributed rather than centralized.
For a multithreaded (clustered) async environment that makes sense, because as soon as you start introducing something "centralized" (and usually synchronous), you lose the async ability.
One option for implementing auth in your case would be to exchange messages with the respective auth verticle over the event bus. In this case you would have to handle the async aspect of such a request.
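A hedged sketch of that idea in Vert.x 3 (the "auth.check" address and the header handling are hypothetical, and eventBus().request requires Vert.x 3.8+):

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

public class AuthFilterVerticle extends AbstractVerticle {
    @Override
    public void start() {
        Router router = Router.router(vertx);
        router.route().handler(ctx -> {
            String token = ctx.request().getHeader("Authorization");
            // Delegate the check to whatever verticle listens on "auth.check".
            vertx.eventBus().request("auth.check", token, reply -> {
                if (reply.succeeded()) {
                    ctx.next();    // authenticated: continue down the chain
                } else {
                    ctx.fail(401); // reject, asynchronously
                }
            });
        });
        vertx.createHttpServer().requestHandler(router).listen(8080);
    }
}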

Why no Stub in REST?

EDIT: My original title was "Use of Stub in RPC"; I edited the title just to let others know it is about more than that question.
I have started developing some SOAP-based services and I cannot understand the role of stubs. To quote Wikipedia:
The client and server use different address spaces, so conversion of parameters used in a function call have to be performed, otherwise the values of those parameters could not be used, because of pointers to the computer's memory pointing to different data on each machine. The client and server may also use different data representations even for simple parameters (e.g., big-endian versus little-endian for integers.) Stubs are used to perform the conversion of the parameters, so a Remote Function Call looks like a local function call for the remote computer.
This may be dumb, but I don't understand this "practically". I have done some socket programming in Java, but I don't remember any step for "conversion of parameters" when my TCP/UDP clients interacted with my server. (I assume raw server-client communication using TCP/UDP sockets does come under RPC.)
I have had some experience with RESTful service development, but I can't recognize the stub analogue in REST either. Can someone please help me?
Stubs for calls over the network (be they SOAP, REST, CORBA, DCOM, JSON-RPC, or whatever) are just helper classes that give you a wrapper function which takes care of all the underlying details (see the sketch after this list), such as:
Initializing your TCP/UDP/whatever transport layer
Finding the right address to call and doing DNS lookups if needed
Connecting to the network endpoint where the server should be
Handling errors if the server isn't listening
Checking that the server is what we're expecting it to be (security checks, versioning, etc)
Negotiating the encoding format
Encoding (or "marshalling") your request parameters in a format suitable for transmission on the network (CDR, NDR, JSON, XML, etc.)
Transmitting your encoded request parameters over the network, taking care of chunking or flow control as necessary
Receiving the response(s) from the server
Decoding (or "unmarshalling") the response details
Returning the responses to your original calling code (or throwing an error if something went wrong)
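As a sketch of how little magic is involved, here is a hand-rolled Java stub over the JDK HTTP client; the endpoint URL, the add(a, b) operation, and the comma-separated wire format are all hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CalculatorStub {
    private final HttpClient client = HttpClient.newHttpClient(); // transport setup
    private final URI endpoint = URI.create("http://localhost:8080/add");

    // Looks like a local call to the caller; actually a remote one.
    public int add(int a, int b) throws Exception {
        String body = a + "," + b; // marshal the parameters
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = // connect, transmit, receive
                client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) { // surface server-side errors
            throw new RuntimeException("remote call failed: " + response.statusCode());
        }
        return Integer.parseInt(response.body()); // unmarshal the result
    }
}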
There's no such thing as "raw" TCP communication. If you are using TCP in a request/response model and infer any kind of meaning from the data sent across the connection, then you've encoded some form of "parameters" in there; you just happened to build yourself what stubs would normally have provided.
Stubs try to make your remote calls look just like local in-process calls, but honestly, that's a really bad thing to do. They're not the same at all, and they should be treated differently by your application.