Does Crate work only with the HTTP protocol?

I couldn't find this information on tech specs...
Although REST is very convenient for a variety of scenarios, a native protocol sounds much faster. Does the current implementation support a native binary protocol?

Yes, it's possible to use the native transport protocol.
Nodes connecting via this protocol know the cluster state and routing information, which makes them faster. Right now, only the Java client supports this protocol.
Generally speaking, we found that the HTTP protocol isn't much slower if you connect with high parallelism: just connect asynchronously with many connections.
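To illustrate the "many parallel connections" point, here is a minimal sketch in Go that fires a batch of concurrent queries at Crate's HTTP interface. The endpoint and payload shape (POST to /_sql on port 4200 with a {"stmt": ...} body) are assumptions based on Crate's documented HTTP API; adjust them for your cluster.

```go
// Minimal sketch: many concurrent HTTP queries against a Crate node.
// The /_sql endpoint, port 4200 and {"stmt": ...} payload are assumptions; adjust as needed.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
)

func main() {
	const url = "http://localhost:4200/_sql" // assumed Crate HTTP endpoint
	stmt := []byte(`{"stmt": "SELECT name FROM sys.cluster"}`)

	var wg sync.WaitGroup
	for i := 0; i < 50; i++ { // 50 concurrent requests over keep-alive connections
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Post(url, "application/json", bytes.NewReader(stmt))
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()
}
```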

Related

Does comparing gRPC vs NATS or Kafka make any sense?

For a long time, when it comes to microservice architecture, NATS and Kafka have been the first options that come to my mind. But recently I found the gRPC template in .NET Core and it caught my attention. I read a lot about it and watched a lot of videos, but I don't think any of those address gRPC properly, as they usually contrast gRPC with message brokers, or with protocols such as REST, which I think is inappropriate (although a comparison with SOAP would be relevant).
My assumption is that gRPC is a modern version of SOAP with better performance and less implementation hassle thanks to Protocol Buffers, and that gRPC can by no means be compared against Kafka or NATS. I also assume it cannot replace RESTful services, just as SOAP could not.
Now, the question: to what extent are my assumptions true? For example, when it comes to selecting a communication bridge between nodes in a cluster, should I now put gRPC among my options (NATS, Kafka, Rabbit, etc.), or should I consider it only when creating a web proxy to bridge external requests to my microservices?
Finally, how about real-time communication: can gRPC replace WebSocket/Socket.IO/SignalR completely? What does it replace?
I often see people misclassify these technologies because they overlook one crucial aspect: public authentication.
For instance, take the benchmark of Inverted Json (https://github.com/lega911/ijson), which compares tools such as iJson, RabbitMQ, NATS, ZeroMQ, etc.
Notice that NATS, ZeroMQ, and iJson are not meant to be used as public endpoints. For instance, NATS supports user/password, token, and key authentication, but that is useless in an open environment such as a web browser, because there is no way to keep the key private.
On the other hand, gRPC works just fine with JWT and OAuth2, making it safe for public endpoints (as safe as any other HTTP endpoint), because those tokens are server-signed: even though they are visible to the client, they can't be forged or tampered with.
So, what I'm trying to say is: there are technologies meant to face the public and technologies meant to glue together servers, and processes within servers, over private connections.
gRPC can face the public; ZeroMQ and iJson are strictly private (iJson, for instance, doesn't have any kind of authentication). NATS works with keys or passwords, so although it is "safer" than iJson and ZeroMQ, it is still not meant to be public.
When you say REST (I'm assuming HTTP here, because REST is just an architecture) or WebSocket/Socket.IO/SignalR, you are describing public interfaces. gRPC covers you here: it is comparable to REST as request/response, and to WebSocket/Socket.IO/SignalR because it supports half- and full-duplex streaming (similar to sockets).
NATS, iJson, and ZeroMQ, on the other hand, are not meant to do that. They are meant for communication between services.
So, basically: REST/WebSocket/Socket.IO/SignalR = gRPC.
Internal communication between services (on the same or different servers) = NATS, iJson, ZeroMQ.
(Notice that I'm not even considering the other technologies in the benchmark, such as RabbitMQ and nginx, because in my opinion they are products, not simple libraries you can use to achieve an end. The others I'm not familiar enough with to have an opinion, though I'm surprised by uvloop's showing there.)
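As a concrete illustration of the JWT/OAuth2 point above, here is a hedged sketch of attaching a server-signed bearer token to every call with the Go gRPC client. The token value and server address are placeholders; the dial options and the PerRPCCredentials interface come from google.golang.org/grpc.

```go
// Hedged sketch: attaching a server-signed JWT to every gRPC call from Go,
// which is what makes gRPC usable on public endpoints.
package main

import (
	"context"
	"crypto/tls"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// bearerToken implements credentials.PerRPCCredentials and adds an
// "authorization: Bearer <jwt>" header to each RPC.
type bearerToken struct{ jwt string }

func (b bearerToken) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
	return map[string]string{"authorization": "Bearer " + b.jwt}, nil
}

func (b bearerToken) RequireTransportSecurity() bool { return true }

func main() {
	conn, err := grpc.Dial(
		"api.example.com:443", // hypothetical public endpoint
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
		grpc.WithPerRPCCredentials(bearerToken{jwt: "<token issued by your auth server>"}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// use conn with your generated client stubs here
}
```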
Your intuition is correct that gRPC is not comparable to an asynchronous queueing system like Kafka, RabbitMQ, etc.
It is, however, a replacement for synchronous server-to-server communication technologies often implemented over SOAP, RPC, REST, etc., where you expect a response from the other server rather than firing a message into a queue and effectively forgetting about it.
gRPC is definitely an option for real-time communication. It can replace socket communication if you are not streaming to the browser (which has no native gRPC support); have a look at its bidirectional streaming support.
As for replacing Kafka/RabbitMQ: gRPC could be used as a pub/sub system, since it supports bidirectional streaming, but I would not recommend it.
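To make the bidirectional-streaming point concrete, here is a sketch of the server-side handler pattern that grpc-go generates for a hypothetical service `service Chat { rpc Talk(stream Msg) returns (stream Msg); }`. The pb package, Msg message, and Chat_TalkServer interface are assumed to come from protoc-generated code, so this won't compile on its own.

```go
// Hedged sketch of a gRPC bidirectional-streaming handler in Go, assuming
// protoc-generated code for a hypothetical Chat service.
package chat

import (
	"io"

	pb "example.com/chat/pb" // hypothetical generated package
)

type chatServer struct {
	pb.UnimplementedChatServer
}

// Talk echoes every message it receives back to the client, demonstrating
// full-duplex streaming: Recv and Send can be interleaved freely.
func (s *chatServer) Talk(stream pb.Chat_TalkServer) error {
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return nil // client finished sending
		}
		if err != nil {
			return err
		}
		if err := stream.Send(in); err != nil {
			return err
		}
	}
}
```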

Can a TCP socket client and protobuf talk to a gRPC server?

I have two machines talking to each other over TCP, with Machine A using Machine B's service. Ideally I'd love to use an RPC framework for this; gRPC comes to mind since we already use protobuf.
However, Machine A already uses an external library that wraps TCP sockets for communication, so the client app code can only send raw bytes (encoded with protobuf) to Machine B. So I wonder whether I could adapt that code to work with a gRPC server on Machine B.
gRPC is built on top of HTTP/2, and you can theoretically use any kind of connection (e.g. domain sockets, named pipes) to carry it, so it is certainly possible. In fact, at least in Go, gRPC uses TCP by default.
If the requirement is only that Machine A communicates using TCP sockets, but you can use external libraries, then it should be fairly easy to use the gRPC library of the language of your choice to implement the client/server interaction on top of the raw TCP sockets.
On the other hand, if you cannot use the gRPC library, it is still possible, but you would have to implement the gRPC protocol yourself, and possibly HTTP/2 as well if you can't use any external library. That would likely be a lot of work, and you may be better off creating a simpler, ad-hoc RPC protocol for your use case.
EDIT
The updated question makes the requirements clearer. If you need to use a specific TCP library, but you can use gRPC on top of that, then I think the difficulty of the task really depends on the programming language.
For instance, in Go it would be as easy as creating a wrapper struct, implementing the net.Conn interface for it, and using it with the custom-dialer option (WithContextDialer, formerly WithDialer) when creating the client. In C++ it looks like it would be more difficult and would entail implementing a custom transport (although I'm not that familiar with the C++ gRPC API, so there may be an easier way).
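For the Go case, here is a hedged sketch of what that looks like with grpc-go's WithContextDialer option. The external TCP-wrapping library's connect call is simulated here with net.Dial, and the server address is hypothetical.

```go
// Hedged sketch of pointing the grpc-go client at a custom transport.
// Replace net.Dial with the external library's connect call; the rest
// (grpc.WithContextDialer, net.Conn embedding) is standard grpc-go.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// wrappedConn adapts the library's connection to net.Conn. Embedding net.Conn
// keeps the sketch short; if the library exposes only Read/Write/Close you
// would implement the remaining net.Conn methods (deadlines, addresses) yourself.
type wrappedConn struct {
	net.Conn
}

func main() {
	dialer := func(ctx context.Context, addr string) (net.Conn, error) {
		// Stand-in for the external TCP-wrapping library's connect call.
		raw, err := net.Dial("tcp", addr)
		if err != nil {
			return nil, err
		}
		return wrappedConn{Conn: raw}, nil
	}

	conn, err := grpc.Dial(
		"machine-b:50051", // hypothetical address of Machine B
		grpc.WithContextDialer(dialer),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// use conn with the generated client stubs for Machine B's service
}
```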

Is it reliable to use RPC over Internet?

Is it reliable to use RPC over the Internet? My boss mentioned that ISPs will filter traffic if they detect a binary protocol being transferred over the wire. I still don't understand why an ISP would block binary protocols; an ISP's purpose is to route packets as quickly as possible.
What are the general opinions on using RPC over the Internet? I could alternatively use a well-known protocol like HTTP or HTTPS and transfer my content over that. But what are the issues if I just use a plain binary protocol on a not-so-popular port?
I would say you answered your own question: if some ISPs filter binary protocols, then it isn't reliable.
As for why ISPs would block the protocol, I don't know.
If you're worried about reliability, then use HTTP, since you have that available.

Should P2P-based solutions prefer SIP or XMPP (Jingle) for signaling?

I'm just starting to explore the P2P side, and I'm looking for reasons, in terms of scalability or anything else, to choose SIP or XMPP (Jingle) for the following use case:
A P2P client application capable of performing file transfer across all network traversal scenarios.
For signaling (e.g., to connect/locate/disconnect peers), both XMPP (Jingle) and SIP are available.
What are the possible reasons to use one or the other, and why? Is there any practical consideration, e.g. scalability or anything else, that really makes a difference for the above use case?
Jingle is an XMPP extension to handle multimedia sessions. In effect Jingle is the XMPP equivalent of SIP.
As far as a P2P file application goes:
Jingle and SIP are roughly equivalent as far as scalability goes. Both separate the signalling and media paths, which provides more flexibility (and, consequently, more complications) in how server-side components can be deployed.
XMPP/Jingle has a better security design, making it much more practical to require clients to use an SSL/TLS signalling layer. SIP does support SSL/TLS, but it's more convoluted and doesn't enjoy widespread support in the real world.
As far as NAT goes, you're going to have the same problems with both. The scalability you get from having separate signalling and media paths comes back to bite you when NAT is involved. There are a few different mechanisms to deal with NAT; the latest attempt is ICE, which is a collection of techniques for resolving different NAT configurations. It's worth bearing in mind that not all configurations can be resolved, and the fallback is to use a media-proxying server such as TURN.
If I were you, I'd use XMPP, but before starting I'd work out exactly which NAT configurations need to be supported. If you need to support arbitrary clients from anywhere on the Internet, then you will not be able to rely on always being able to establish direct P2P communication between your clients, and that's where you will face your biggest challenge.

Difference between socket and websocket?

I'm building a web app that needs to communicate with another application using socket connections. This is new territory for me, so I want to be sure that sockets are different from websockets. It seems like they're only conceptually similar.
I'm asking because I had initially planned on using Django as the foundation for my project, but in the SO post I linked to above it's made very clear that websockets aren't possible (or at least not reliable, even with something like django-websockets) using the preferred Django setup (Apache with mod_wsgi). Yet I've found other posts that casually import Python's socket module for something as simple as grabbing the server's hostname.
So:
Are they really different?
Is there any reason not to use Django for a project that relies on establishing socket connections with an outside server?
To answer your questions:
Even though they achieve (in general) similar things, yes, they are really different. WebSockets typically run from a browser connecting to an application server over a protocol similar to HTTP that runs over TCP/IP. So they are primarily for web applications that require a persistent connection to their server. Plain sockets, on the other hand, are more powerful and generic. They run over TCP/IP but are not restricted to browsers or to the HTTP protocol; they can be used to implement any kind of communication.
No, there is no reason not to use Django for that.
WebSockets use sockets in their implementation. They are based on a standard protocol (now in final call, but not yet final) that defines a connection "handshake" and a message "frame". The two sides go through the handshake procedure to mutually accept a connection and then use the standard frame format to pass messages back and forth.
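Here is a small Go sketch that makes the layering visible: the plain TCP socket is just a byte pipe, and the WebSocket handshake is an HTTP Upgrade request written over it. The host and /chat path are placeholders; a real client would also validate the Sec-WebSocket-Accept response header and then speak the frame format.

```go
// Minimal sketch of the layering: a plain TCP socket plus the RFC 6455
// WebSocket handshake, written by hand. Host and path are placeholders.
package main

import (
	"bufio"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"log"
	"net"
)

func main() {
	// A plain socket: nothing here is specific to HTTP or browsers.
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The WebSocket "handshake" is an HTTP/1.1 Upgrade request.
	key := make([]byte, 16)
	rand.Read(key)
	fmt.Fprintf(conn, "GET /chat HTTP/1.1\r\n"+
		"Host: example.com\r\n"+
		"Upgrade: websocket\r\n"+
		"Connection: Upgrade\r\n"+
		"Sec-WebSocket-Key: %s\r\n"+
		"Sec-WebSocket-Version: 13\r\n\r\n",
		base64.StdEncoding.EncodeToString(key))

	// Read the status line of the reply (expect "101 Switching Protocols"
	// if the endpoint actually speaks WebSocket).
	status, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Print(status)
}
```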
I'm developing a framework that will allow you to communicate directly machine to machine with installed software. It might suit your purpose. You can follow my blog if you wish: http://highlevellogic.blogspot.com/2011/09/websocket-server-demonstration_26.html
WebSocket is just another application-level protocol on top of TCP, just like HTTP.
Some snippets from Spring in Action, 4th Edition are quoted below; I hope they help you understand WebSocket better:
In its simplest form, a WebSocket is just a communication channel between two applications (not necessarily a browser is involved)... WebSocket communication can be used between any kinds of applications, but the most common use of WebSocket is to facilitate communication between a server application and a browser-based application.
You'd have to use WebSockets (or a similar protocol, e.g. one supported by the Flash plugin) because a normal browser application simply can't open a pure TCP socket.
The Socket.IO module available for node.js can help a lot, but note that it is not a pure WebSocket module in its own right.
It's actually a more generic communications module that can run on top of various other network protocols, including WebSockets, and Flash sockets.
Hence if you want to use Socket.IO on the server end you must also use their client code and objects. You can't easily make raw WebSocket connections to a socket.io server as you'd have to emulate their message protocol.
WebSocket is a communications protocol (like TCP, HTTP 1.0, HTTP 1.1, HTTP 2.0, QUIC, WebRTC, etc.).
A socket is an endpoint for sending and receiving data across the network (identified by a protocol, an address, and a port).
Example of Socket:
(TCP, 8.8.8.4, 8080, 8.8.8.8, 8070)
where:
(protocol, local address, local port, remote address, remote port)
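A tiny Go sketch of the same idea: dial a TCP connection and print the local and remote endpoints that, together with the protocol, make up the tuple above (the peer address is a placeholder).

```go
// Small sketch: inspect the two endpoints of a live TCP connection.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80") // placeholder peer
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("local :", conn.LocalAddr())  // e.g. 192.168.1.10:54321
	fmt.Println("remote:", conn.RemoteAddr()) // e.g. 93.184.216.34:80
}
```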
Regarding your second question, be aware that the WebSocket specification hasn't been finalised. According to the W3C:
Implementors should be aware that this specification is not stable.
Personally, I regard WebSockets as way too bleeding-edge to use at present, though I'll probably find them useful in a year or so.