What is the difference between a protocol and an interface in general?

I understand an interface is a set of publicly exposed things that one system can use to interact with other systems. I'm reading about the WebRTC protocol, and to understand what a protocol is I went to the Wikipedia definition. It says, more or less, that a protocol is a system of rules that allows two systems to communicate. Isn't that the same as an interface? Maybe I'm not understanding one or both.

An interface defines how two entities may communicate. A protocol defines how they should communicate and what that communication means.
Here is an interface:
public interface ICommunicate
{
    string SendMessageAndGetResponse(string message);
}
A protocol then might be:
Send "Hello", if you get back "Hi" then send "How are you?" and the response will be a status. If you get back anything other than "Hi" from the initial message then the system is not functioning properly and you must send the message "Reboot" which you'll then get back "Rebooted!" if successful and anything else for failure.

In general, interface means "The point of interconnection or contact between entities." Transferred to software, it means "The connection between parts of software." and also "In object-oriented programming, a piece of code defining a set of operations that other code must implement." (Source)
In general, protocol means "The official formulas which appeared at the beginning or end of certain official documents such as charters, papal bulls etc." Transferred to computers, it means "A set of formal rules describing how to transmit or exchange data, especially across a network." (Source)
So protocol focuses more on the data exchange, whereas interface focuses more on software interaction independent of any data exchange.
Of course, in the end, software interaction is a data exchange most of the time. Passing arguments to a function is a data exchange. Calling a method/function is not directly a data exchange, but you can think of it like this: instead of calling different functions:
c = add(a, b);
c = sub(a, b);
you could as well always call the same function and pass the desired functionality as argument:
c = func("add", a, b);
c = func("sub", a, b);
and that way the functionality becomes data as well.
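Here is a hedged Java sketch of that idea (the names are invented for illustration): the operation name is just data that selects the behaviour.
import java.util.Map;
import java.util.function.IntBinaryOperator;

public class FuncAsData {
    // The desired functionality is passed as data (a string key).
    private static final Map<String, IntBinaryOperator> OPS = Map.of(
            "add", (a, b) -> a + b,
            "sub", (a, b) -> a - b);

    static int func(String op, int a, int b) {
        return OPS.get(op).applyAsInt(a, b);
    }

    public static void main(String[] args) {
        System.out.println(func("add", 2, 3)); // 5
        System.out.println(func("sub", 2, 3)); // -1
    }
}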
The terms are somewhat interchangeable. For example, some programming languages call it an interface to focus on the pure interaction of components (classes, objects, etc.), and some call it a protocol to focus on the data exchange between the components.
On a network, a protocol is how data is exchanged; think of the IP or TCP protocols. But if you have communication endpoints that you talk to over a network to trigger functionality, e.g. a REST API, then the sum of all these endpoints and their parameters can be called an interface, while triggering one of the interface functions would be done via an HTTP request, and HTTP is a protocol that defines how data is transferred.
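To make that concrete, here is a small, hedged Java sketch (the URL is invented for illustration): the endpoint and its parameter form part of the interface, while HTTP is the protocol that actually carries the call.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCallExample {
    public static void main(String[] args) throws Exception {
        // Interface: the endpoint /users/{id} and its parameters.
        // Protocol: HTTP, which defines how the request and response are transferred.
        URI uri = URI.create("https://api.example.com/users/42"); // hypothetical endpoint
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}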

I think in some ways the word 'interface' could also be used for this (especially as the I in API), but generally when talking about what we're sending over the wire, the common word is protocol.
When you dive deep enough into exact definitions of words, the meaning and difference can sometimes break down a bit.
But avoiding super exact semantics, API/Interface tends to be a bit higher level than protocol.

Related

Dynamic binding of services in Thrift or gRPC / passing service as argument

I work with an existing system that uses a lot of dynamic service registrations, using Android HIDL/AIDL. For example:
Multiple objects implement:
IHandler { Response handleRequest(Id subset, Request r); }
One object implements:
class IHandlerCoordinator {
    Response handleRequest(Id subset, Request r);
    void RegisterHandler(std::vector<Id> subsets, IHandler handler_for_subset_ids);
}
Multiple objects register themselves into the IHandlerCoordinator on startup or dynamically (passing the expected subset of what they can handle), and then the IHandlerCoordinator dispatches incoming requests to the clients.
In xIDL this requires passing services as arguments; how can this be emulated in Thrift / gRPC?
With regard to Thrift: there is no such thing as callbacks yet. There have been some discussions around that topic (see the mailing list archives and/or JIRA) but there's no implementation. The biggest challenge is to do it in a transport-agnostic way, so the current consensus is that you have to implement it manually if you need it.
Technically, there are two general ways to do it:
implement a server instance also on the client side which receives the callbacks
integrate long-running calls or a polling mechanism to actively retrieve "callback" data from the server by means of client calls (a rough sketch of this option follows below)
With gRPC it's easier, because gRPC focuses on HTTP. Thrift has been open from the beginning for any kind of transport you can imagine.
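To illustrate the second option, here is a minimal Java sketch of a polling loop; CoordinatorClient and its methods are hypothetical stand-ins for whatever client stub Thrift or gRPC would generate for you:
import java.util.List;

// Hypothetical generated client stub; the names are made up for this sketch.
interface CoordinatorClient {
    List<String> getPendingRequests(String handlerId); // work queued for this handler
    void submitResponse(String handlerId, String response);
}

class PollingHandler {
    private final CoordinatorClient client;
    private final String handlerId;

    PollingHandler(CoordinatorClient client, String handlerId) {
        this.client = client;
        this.handlerId = handlerId;
    }

    // Instead of the coordinator calling us back, we periodically ask it for work.
    void runLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            for (String request : client.getPendingRequests(handlerId)) {
                client.submitResponse(handlerId, handle(request));
            }
            Thread.sleep(500); // polling interval
        }
    }

    private String handle(String request) {
        return "handled:" + request; // placeholder for real handling logic
    }
}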

Which file to extend for customized messages in Veins? What is the purpose of AirFrame11p.msg?

I'm new to SUMO, Veins, OMNeT++ and simulations, with a bit of background in networks. I have successfully set up the environment and run the Veins 4.6 demo application. On Google I found that, unlike RSUs, car modules are added on the fly.
In the demo example, car nodes send an AirFrame11p message. I don't see where this message is populated, because in the TraCIDemo11p.cc methods (onWSA, onWSM, handleSelfMsg, handlePositionUpdate) we deal with WSM message types, and the BaseWaveApplLayer::checkAndTrackPacket method ensures that the message being sent is either a BSM, WSM or WSA.
The file AirFrame11p.msg exists in veins\src\veins\modules\messages, but when searching for references to "AirFrame11p" in the project, matches are found only in AirFrame11p_m.h and AirFrame11p_m.cc. If the demo is not using these files, for what purpose were they added, and where does the simulation get the notion of AirFrame11p from?
I'm trying to simulate a car accident scenario without an RSU, using V2V communication. I have replaced the demo map with my own, generated random routes, and am now trying to remove the RSU from the demo application and exploring how to send customized messages (including geolocation, speed, direction, time, etc.) to nearby vehicles within a specified range, e.g. 100 meters, using WiFi Direct.
If I'm confusing something, please guide me. Thanks.
The short answer: The AirFrame11p message is a lower level message that encapsulates the upper layer messages. Just use the application message type that is appropriate for your application. If you want to replace the physical layer with WiFi direct instead of 11p, and you're starting from scratch, you're probably in for quite a bit of work, since the VEINS PHY implementation is very intricate. If you have an existing implementation of WiFi direct, it may be worth investigating the integration of VEINS' TraCI implementation with that code.
Encapsulation in VEINS
You are correct that the message types at the application layer are more diverse -- these message types (BSM and WSM) are used to encapsulate "application" behavior; it's just not very well visualized in the simulation execution. You can pause the simulation and look (for example) under scheduled events, where the queued packets can be examined visually.
Unlike regular networks, where such messages would be packaged in IP, MAC and PHY encapsulations, VEINS uses the following encapsulation process: BSMs are packaged in MAC frames (80211Pkt), which in turn are encapsulated by AirFrame11p signals. So basically, you should choose the correct message type for your application.
Footnote regarding application behavior:
Technically speaking, these messages would be more correctly placed at the Facilities layer (see e.g. ETSI's spec), since the periodic exchange of messages provides data stored in the facilities layer, which is then used by cITS/VANET applications that run on top. If you need this, look at Artery (as Ventu suggested in the comments).

Proper way to communicate with socket

Is there any design pattern or something else for network communication using sockets?
I mean, what I always do is:
I receive a message from my client or my server
I extract the type of this message (e.g. LOGIN or LOGOUT or CHECK_TICKET etc.)
I test this type in a switch-case statement
Then I execute the suitable method for this type
This way is a little bit boring when you have a lot of message types.
Each time I have to add a type, I have to add it to the switch-case.
Plus, it takes more machine operations when you have hundreds or thousands of message types in your protocol (due to the switch-case).
Thanks.
You could use a loop over a set of handler classes (i.e. one for each type of message supported). This is essentially the Composite pattern. The Component and each Composite then become independently testable. Once written, the Component need never change again, and support for a new message becomes isolated to a single new class (or perhaps a lambda or function pointer, depending on the language). You can also add/remove/reorder Composites in the Component at runtime, if that is something you want from your design (alternatively, if you wanted to prevent this, depending on your language you could use variadic templates). You could also look at Chain of Responsibility.
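As a rough Java sketch of that idea (the message types and handler names here are invented for illustration), the dispatcher loops over registered handlers instead of switching on a type code:
import java.util.ArrayList;
import java.util.List;

// One handler class per supported message type.
interface MessageHandler {
    boolean canHandle(String type);
    void handle(String payload);
}

class LoginHandler implements MessageHandler {
    public boolean canHandle(String type) { return "LOGIN".equals(type); }
    public void handle(String payload) { System.out.println("login: " + payload); }
}

// The "Component": it never changes when new message types are added.
class MessageDispatcher {
    private final List<MessageHandler> handlers = new ArrayList<>();

    void register(MessageHandler handler) { handlers.add(handler); }

    void dispatch(String type, String payload) {
        for (MessageHandler handler : handlers) {
            if (handler.canHandle(type)) {
                handler.handle(payload);
                return;
            }
        }
        System.out.println("unknown message type: " + type);
    }
}
Supporting a new message type then means writing one new handler class and registering it, e.g. dispatcher.register(new LoginHandler()), without touching the dispatcher itself.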
However, if you thought that adding a case to a switch is a bit laborious, I suspect that writing a new class would be too.
P.S. I don't see a good way of avoiding steps 1 and 2.

Why no Stub in REST?

EDIT: My original title was "Use of Stub in RPC"; I edited the title just to let others know it is more than that question.
I have started developing some SOAP-based services and I cannot understand the role of stubs. To quote Wikipedia:
The client and server use different address spaces, so conversion of parameters used in a function call have to be performed, otherwise the values of those parameters could not be used, because of pointers to the computer's memory pointing to different data on each machine. The client and server may also use different data representations even for simple parameters (e.g., big-endian versus little-endian for integers.) Stubs are used to perform the conversion of the parameters, so a Remote Function Call looks like a local function call for the remote computer.
This may be a dumb question, but I don't understand this "practically". I have done some socket programming in Java, but I don't remember any step for "conversion of parameters" when my TCP/UDP clients interacted with my server. (I assume raw server-client communication using TCP/UDP sockets does come under RPC.)
I have had some experience with RESTful service development, but I can't recognize the Stub analogue in REST either. Can someone please help me?
Stubs for calls over the network (be they SOAP, REST, CORBA, DCOM, JSON-RPC, or whatever) are just helper classes that give you a wrapper function that takes care of all the underlying details, such as:
Initializing your TCP/UDP/whatever transport layer
Finding the right address to call and doing DNS lookups if needed
Connecting to the network endpoint where the server should be
Handling errors if the server isn't listening
Checking that the server is what we're expecting it to be (security checks, versioning, etc)
Negotiating the encoding format
Encoding (or "marshalling") your request parameters in a format suitable for transmission on the network (CDR, NDR, JSON, XML, etc.)
Transmitting your encoded request parameters over the network, taking care of chunking or flow control as necessary
Receiving the response(s) from the server
Decoding (or "unmarshalling") the response details
Returning the responses to your original calling code (or throwing an error if something went wrong)
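As a rough illustration of those steps, here is a minimal hand-written "stub" sketch in Java; the endpoint URL and wire format are hypothetical, but the point is that the caller sees an ordinary local method while the stub hides the transport, encoding and decoding:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CalculatorStub {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI endpoint;

    public CalculatorStub(URI endpoint) {
        this.endpoint = endpoint;
    }

    // Looks like a local call: int sum = stub.add(2, 3);
    public int add(int a, int b) throws Exception {
        // "Marshal" the parameters into a wire format (here: a tiny JSON body).
        String body = String.format("{\"op\":\"add\",\"a\":%d,\"b\":%d}", a, b);
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        // Transmit the request and receive the response over the network.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            // Translate transport-level failures into an error for the caller.
            throw new IllegalStateException("Remote call failed: " + response.statusCode());
        }
        // "Unmarshal" the response body (assumed to be a bare integer here).
        return Integer.parseInt(response.body().trim());
    }
}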
There's no such thing as "raw" TCP communication. If you are using it in a request/response model and infer any kind of meaning from the data sent across the TCP connection then you've encoded some form of "parameters" in there. You just happened to build yourself what stubs would normally have provided.
Stubs try to make your remote calls look just like local in-process calls, but honestly that's a really bad thing to do. They're not the same at all, and they should be considered differently by your application.

Should Events be externally mutable?

I am playing around with FRP and was wondering about how the act of an Event 'occurring' should be handled publicly. By this, I mean should a programmer be able to do the following within an FRP context:
event.occur(now, 5)
I have never seen examples of this in any FRP papers and it doesn't feel right to me. I feel that FRP frameworks should really hide this type of action and that occurrences of Events should happen behind the scenes only. Am I correct in thinking this?
To clarify, my approach would be to have 'occur' only accessible to the Event class itself. If an abstraction for some external source was needed (such as a mouse) this could be built by extending the Event class. In this way all the logic dealing with occurrence creation is abstracted.
Well, if the FRP library exposes a way to bind to external events — e.g. an existing event-based framework — then it must provide functionality equivalent to this, or it couldn't interact with the outside world.
However, the question is really: what do you mean by "external"? The FRP system itself is usually taken to be pure, so the idea of executing side-effectful code like event.occur(now, 5) from inside the FRP system isn't even meaningful. Generally, of course, a facility to execute such code in response to FRP events is provided, but this is usually taken not as part of the pure programming model, but as a facility to interface the network as a whole with the outside world.
So, in my opinion, there are two possible ways to interpret this question:
Should it be possible to trigger an event from outside of the FRP system? — definitely yes, as it's required for interfacing with the outside world, but this does not affect the programming model of FRP itself.
Should it be possible to trigger an event from "inside" of the FRP system, assuming some facility for executing side-effectful code in reaction to an event? — also yes, because allowing normal side-effectful code to cause events but forbidding it inside the code executed in response to events seems like a very strange (and circumventable) restriction, given that the intention of the facility is to interface with the outside world.
Indeed, it's possible to cause something just like #2 even if you explicitly forbid it: consider setting things up so that switchToWindow 3 is executed when the event buttonClicked triggers, e.g. (using reactive-banana notation):
reactimate (switchToWindow 3 <$ buttonClicked)
And say that we have an event
newWindowFocused :: Event Int
The reaction we've set up causes the newWindowFocused event to fire, even if firing events from inside code executed due to an event is prevented.
Now, everything I've said so far concerns only "external" events: those not expressed with pure FRP, but explicitly created to represent events that occur in the outside world, beyond the FRP system. If you're asking whether there should be a facility to cause special occurrences in purely-defined events, then my response is: absolutely not! This destroys the meaning of the system, because suddenly fmap f (union e1 e2) doesn't mean "occurs with value f x when either e1 or e2 occurs with value x", but instead "occurs with value f x when either e1 or e2 occurs with value x... or when some external code randomly decides to fire it".
Not only would such a facility make reasoning about the behaviour of an FRP system essentially meaningless,[1] it'd also violate referential transparency: if you construct two events equivalent to fmap f (union e1 e2), then you can distinguish them by firing one and noticing that the other doesn't occur. You simply can't prevent this in all cases: imagine fmap g (union e1 e2), where f computes the same function as g; equality on functions is not decidable :)
Of course, it's entirely possible to implement FRP in an impure language, but I think providing a way to violate the referential transparency of the FRP system itself is a very bad thing, as it is, after all, a pure model.
If I understand it correctly, your solution to this flaw in the API (namely, exposing occur publicly, which breaks referential transparency of equivalent events, etc. as I talked about above) would be to make occur internal to your Event class, so that it cannot be used from outside. I agree that, if you need occur internally, this is the correct solution. I also agree that it's reasonable to expose it to subclasses if your implementation of external events is done by subclassing Event. That falls under "outside world glue", which is outside the purview of the FRP model itself, so it's perfectly OK to give it the ability to "break the rules" in this way — after all, that's essentially what it's for: disturbing the system with side-effects :)
So, in conclusion:
No, events should not expose this interface.
Yes, you are correct in thinking this :)
[1] Of course, you could argue that external events do this full stop, as the whole behaviour of the system ultimately depends on the "edges" hooked up to the outside world, but this isn't really true: yes, you can't really assume anything about the external events themselves, but you can still rely on everything you build out of them to obey the laws of their constructions. Offering an "external firing" facility to every event means that no construction has any laws.