I want to use gRPC bidirectional streaming in a web browser. Is bidirectional streaming supported by grpc-web?
rpc TypingStream (stream OutgoingTyping) returns (stream IncomingTyping);
}
Currently, there is no support for bidirectional streaming in gRPC-Web.
You can follow this issue if you're interested:
https://github.com/grpc/grpc-web/issues/24
Thanks :)
Related
How is gRPC client streaming/bidirectional streaming implemented with HTTP/2?
Server streaming makes sense, in that it could utilize server push to send multiple responses to a request, but it's not clear to me how it does bidirectional message passing over HTTP/2 the way one would over a websocket.
gRPC encodes streams as HTTP bodies. There is a five-byte header before each message, consisting of a one-byte compressed flag followed by a four-byte message length. It does not use SERVER_PUSH or other HTTP/2-specific features for streaming.
At its core, gRPC is streaming. Unary (single request, single response) and server streaming (single request) are simply special cases for producing cleaner APIs or more optimized I/O behavior. But on-the-wire, everything looks the same as streaming.
The specification of HTTP/1 allows but does not require streaming and bidirectional connections, but some implementations don't support them. But with the nature of HTTP/2, it is generally more work to not support them. Also, there aren't decade-old HTTP/2 proxies to cause compatibility problems; gRPC is able to work with the HTTP/2 ecosystem to encourage streaming to be supported.
For more information on the gRPC encoding, see gRPC's PROTOCOL-HTTP2.md, especially Length-Prefixed-Message.
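The Length-Prefixed-Message framing described above can be sketched in Python. This is a minimal illustration of the wire layout (one flag byte plus a four-byte big-endian length before each message), not a real gRPC client:

```python
import struct

def frame_message(payload: bytes, compressed: bool = False) -> bytes:
    """Wrap a serialized message in gRPC's 5-byte prefix:
    1 compressed-flag byte + 4-byte big-endian message length."""
    flag = 1 if compressed else 0
    return struct.pack(">BI", flag, len(payload)) + payload

def read_messages(body: bytes):
    """Split an HTTP body back into the individual messages of a stream."""
    offset = 0
    while offset < len(body):
        flag, length = struct.unpack_from(">BI", body, offset)
        offset += 5
        yield body[offset:offset + length]
        offset += length

# Three messages concatenated into one HTTP/2 body: on the wire,
# unary and streaming calls look the same.
body = b"".join(frame_message(m) for m in (b"hello", b"", b"world"))
print(list(read_messages(body)))  # [b'hello', b'', b'world']
```

This also shows why unary calls are just a special case: a unary body is simply a stream body that happens to contain exactly one framed message.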
I'm trying to connect a JavaScript client to an Elixir Phoenix socket using socket.io. Right now this is what I'm doing:
var socket = io.connect('ws/ip.adress.of.server/ws');
However, I'm not getting a socket object with connected: true.
Can anyone guide me on the correct way to connect to the Phoenix socket? Is there any place in my server code where I can find the URL I need to connect to?
Thanks
socket.io is a dedicated protocol which runs on top of WebSockets or long polling. It offers concepts similar to those of Phoenix sockets, but apart from that, the two are distinct things which are not interoperable. As Justin Wood mentioned, use phoenix.js when you want to use Phoenix channels.
I have a use case for real-time streaming: we will be using Kafka (0.9) as the message buffer and Spark Streaming (1.6) for stream processing (HDP 2.4). We will receive ~80-90K events/sec over HTTP. Can you please suggest a recommended architecture for ingesting this data into Kafka topics, which will then be consumed by Spark Streaming?
We are considering the Flafka architecture.
Is Flume listening on HTTP and forwarding to Kafka (Flafka) a good option for real-time streaming?
Please share other possible approaches, if any.
One approach could be Kafka Connect. Look for a source connector that fits your needs, or develop a custom one.
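As a sketch, a Kafka Connect source is configured with a small properties (or JSON) payload. The connector class and http.* settings below are hypothetical placeholders for whatever HTTP source connector you choose or implement; only name, connector.class, tasks.max, and topic are standard Connect keys:

```properties
# connect-http-source.properties -- sketch only; com.example.HttpSourceConnector
# and the http.* keys are hypothetical stand-ins for your chosen connector.
name=http-events-source
connector.class=com.example.HttpSourceConnector
tasks.max=4
http.listen.port=8080
topic=raw-events
```

Spark Streaming would then consume from the raw-events topic directly, keeping Kafka as the buffer between HTTP ingestion and processing.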
I'm currently designing a REST interface for a distributed system. It is a client/server architecture, but with two message exchange patterns:
req/resp: the most RESTful approach; it would be a CRUD interface to access/create/modify/delete objects on the server.
pub/sub: this is my main doubt. I need the server to send asynchronous notifications to the client as soon as possible.
Searching in the web I found that one solution could be to implement REST-servers in the server and client: Publish/subscribe REST-HTTP Simple Protocol web services architecture?
Another alternative would be to implement blocking-REST and so the client doesn't need to listen in a specific port: Using blocking REST requests to implement publish/subscribe
I would like to know which options you would consider to implement an interface like this one. Thanks!
WebSockets can provide a channel for the service to push live updates to web clients. There are also techniques like HTTP long polling, where the client makes a "blocking" request (as you referred to it): the service holds the request open for a period shorter than the timeout (say 50 seconds) and writes a response when it has data. The web client then immediately issues another request. This loop creates a continuous channel through which messages can be "sent" from the server to the client, but since every exchange is initiated by the client, it plays well with firewalls, proxies, etc.
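The long-polling loop above can be sketched in Python. The queue stands in for the service's message source, the timeout is shortened for the demo, and all names are illustrative:

```python
import queue
import threading

def long_poll(q: queue.Queue, timeout: float = 50.0):
    """Server side: block until a message arrives or the timeout expires.
    Returns the message, or None so the client can immediately re-poll."""
    try:
        return q.get(timeout=timeout)
    except queue.Empty:
        return None

# Simulated channel: the "server" publishes an event after a short delay,
# while the "client" loops, re-issuing the request whenever it gets None.
events = queue.Queue()
threading.Timer(0.1, events.put, args=("order-shipped",)).start()

received = []
while not received:
    msg = long_poll(events, timeout=0.05)  # short timeout for the demo
    if msg is not None:
        received.append(msg)

print(received)  # ['order-shipped']
```

In a real deployment each `long_poll` call would be an HTTP request handler, and the empty-timeout response is what keeps intermediaries from killing an idle connection.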
There are libraries such as socket.io, SignalR and many others that wrap this logic and even fall back gracefully from WebSockets to long polling for you, abstracting away the details.
I would recommend writing some sample WebSocket and long-polling examples just to understand the mechanics, but then relying on libraries like those mentioned above to get it right.
I need to stream between a couple hundred KB and many MB of data between Akka cluster nodes. The simplest approach would be to split it into chunked messages, but that appears to be inadvisable because it might interfere with the housekeeping chatter of the cluster.
Alternatively, I could use messages to communicate one-time URLs and transfer the data over HTTP.
However, I'd prefer a persistent-connection approach, so I was thinking of using ZeroMQ with chunked messages.
But rather than rolling my own approach, I'd like to use an existing way of accomplishing this, and I have not found one.
One more requirement: most of the time the consumption of that stream goes straight out via Play, so an approach that creates an Iteratee which could be used to proxy the stream to HTTP would be preferable.
Akka Streams 2.5.12 has StreamRefs for what I believe to be your use case.
Iteratees can't communicate across machine boundaries, so iteratees alone are probably not the tool you are looking for.
I would pursue one of the following approaches:
Using remote Akka actors to send chunks of your data across the wire. Actors can be used to create iteratees and enumerators on either side of the wire (Enumerator.unicast and Iteratee.foreach), so that the fact that you are using actors is just an implementation detail, not visible in the interface of these streams.
Use Akka Streams. This library has support for TCP connections, and while it is a different streaming library from iteratees, I have found it more robust in the stream operations it supports. It also looks like Play is moving towards tighter integration with Akka Streams, as they are considering replacing their Netty HTTP backend with Akka HTTP.