Observer Daemon on Pusher Channel [closed] - daemon

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
Currently I have a server-side user list that User A's browser pulls down; the browser then tracks the state of the system locally via Pusher as users log on or off.
As User A's status changes, the browser sends AJAX updates to the server to report its status.
I am having de-sync issues between the user statuses pulled down from the database and the local state the browser tracks from the channel.
I would like to create a server-side observer that constantly monitors the Pusher channels and acts as a redundant mechanism to keep clients' browsers in sync with the database.
Can anyone point me in the right direction of a good solution to use for the following necessary functions:
- Needs to integrate with Pusher and be able to listen and respond to events, not just send JSON messages over the channel
- Needs to receive all events that are published on a channel
I am unsure what libraries or solutions exist that can listen to Pusher channel events on the server.
Any suggestions would be much appreciated.

The best solution for this is to use Pusher's WebHooks. The benefit is that you can receive a number of events related to user activity, and delivery is reliable: failed WebHook requests are queued and resent.
There are no language requirements to consuming WebHooks as it's just an HTTP request made from Pusher to an endpoint that you define.
Right now you can receive channel vacated and occupied events (if a channel has any subscribers or none) and presence events (users joining and leaving a channel). It's likely that Pusher will expose additional events as WebHooks in the future.
If you were to run a daemon process that connects as a client, you could miss events whenever the client isn't connected, e.g. during network downtime or reconnection phases.
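For illustration, here is a minimal Python sketch of a WebHook consumer (the `APP_SECRET` value is made up; substitute your real Pusher app secret). Pusher signs the raw POST body with HMAC SHA-256 using the app secret and sends the hex digest in the `X-Pusher-Signature` header, so the endpoint verifies that before trusting the events:

```python
import hashlib
import hmac
import json

APP_SECRET = b"my-app-secret"  # hypothetical; use your real Pusher app secret

def verify_signature(raw_body: bytes, signature: str) -> bool:
    """Pusher signs the raw request body with HMAC SHA-256 using the app secret."""
    expected = hmac.new(APP_SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_webhook(raw_body: bytes, signature: str) -> list:
    """Return the status updates to apply to the server-side user list."""
    if not verify_signature(raw_body, signature):
        return []  # reject unsigned or tampered requests
    payload = json.loads(raw_body)
    updates = []
    for event in payload.get("events", []):
        if event["name"] == "channel_occupied":
            updates.append(("online", event["channel"]))
        elif event["name"] == "channel_vacated":
            updates.append(("offline", event["channel"]))
    return updates
```

In a real deployment this function would sit behind whatever HTTP endpoint you point the WebHook at; because the source of truth is the signed request from Pusher rather than the client's own AJAX reports, it serves exactly the redundant sync role the question asks about.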

Related

How to communicate in real-time between multiple instances of microservices [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 3 years ago.
I want to set up a 3-microservice architecture: one would be a frontend, the second a backend, and the third a pod responsible for running some commands. The frontend should let a user run a sequence of commands and show their outputs in real time; these commands get passed to the backend, which then creates a pod to actually run them. So basically, as the commands run in a pod, the frontend should display the output from these commands in real time.
I have researched solutions and came across Pusher, but I want to build something myself instead of using a 3rd-party service. I also know there are many technologies available, like WebSockets; which would be the best technology to use in this case?
(This answer is assuming you're interested in using Kubernetes since this question is tagged with kubernetes and you mentioned Pods).
It sounds like you have the basic building blocks assembled already, and you just need a way to stream the logs through the backend, and expose them in a way the frontend can subscribe to.
You're on the right track with WebSockets, that tends to be the easiest way to stream data from an API into your frontend. One way to connect these pieces is to have the backend use the Kubernetes API to create a Job pod whose logs can be streamed. The workflow could go as follows:
frontend makes a request to the backend to run a command via WebSocket
backend waits for the frontend to send the command over the WebSocket
once received, the backend uses the Kubernetes Job API to create a Job pod
if the Job was successfully created, the backend opens a streaming connection via the Watch/GetLogs API and pipes anything written to that pod's logs back over the WebSocket to the frontend.
It's up to you to decide the format of data returned over the WebSocket (e.g., plaintext, JSON, etc.).
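The piping step in the workflow above can be sketched independently of Kubernetes. In this toy version the "pod" is simulated by a local subprocess and the WebSocket send is a plain callback, but the shape is the same when the source is the GetLogs stream:

```python
import subprocess
import sys
from typing import Callable, List

def stream_command(argv: List[str], send: Callable[[str], None]) -> int:
    """Run a command and forward each stdout line to `send` as it is produced."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE, text=True)
    assert proc.stdout is not None
    for line in proc.stdout:      # yields lines as the process writes them
        send(line.rstrip("\n"))   # in the real backend: websocket.send(line)
    return proc.wait()
```

Usage: `stream_command([sys.executable, "-c", "print('hi')"], ws_send)` pushes each output line to the frontend as it appears, then returns the command's exit code, which the backend can report as the final WebSocket frame.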

MQTT as an addition to the REST API [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I have a REST API which receives some information (let's say events) from clients. Now I need to send some information from the server to clients, so I'm adding MQTT as an additional way for clients to communicate with the server. Unlike HTTP, MQTT allows me to do both, sending and receiving, so it's possible to make MQTT analogs for all existing REST API methods.
Receive events from clients: HTTP, MQTT
Send commands to clients: MQTT
My idea was to make a "listener" that subscribes to all "event" MQTT topics and translates them into HTTP requests to the REST API (to keep components decoupled). But there is a problem: this listener is an ordinary client. It doesn't have any special permissions and can't obtain a publisher's credentials to act on their behalf when talking to the REST API. MQTT doesn't even let a subscriber find out who published a particular message.
One solution is to use MQTT only for sending information from the server to clients and keep using REST API for all incoming requests. But that looks strange :)
Another way is to use custom broker hooks, but not all brokers support them, and they're not part of the MQTT specification, so this isn't very reliable.
Any ideas how to organize it in a proper way?
Given that most (if not all) MQTT brokers support wildcard topics in ACLs, you can encode the user in the topic and then grant the agent access to the wildcard topic that matches all users.
e.g.
publish to events/<user>
and then grant the agent access to the topic events/+
You can then make sure each user's ACL allows only that user to publish to events/<user>, thus ensuring that users cannot impersonate each other.
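A small sketch of this scheme in Python (the `+` single-level wildcard match is written out by hand here; in practice the broker's ACL engine evaluates it for you):

```python
def user_publish_topic(user: str) -> str:
    """The only topic a given user's ACL allows them to publish to."""
    return f"events/{user}"

def topic_matches(pattern: str, topic: str) -> bool:
    """Minimal MQTT topic matching: '+' matches exactly one topic level."""
    p_levels, t_levels = pattern.split("/"), topic.split("/")
    if len(p_levels) != len(t_levels):
        return False
    return all(p == "+" or p == t for p, t in zip(p_levels, t_levels))

def sender_of(topic: str) -> str:
    """The listener recovers the publisher's identity from the topic itself."""
    assert topic_matches("events/+", topic)
    return topic.split("/", 1)[1]
```

The key point is the last function: even though MQTT gives the subscriber no publisher identity, the agent subscribed to `events/+` can trust the user segment of the topic, because the broker's ACLs prevented anyone else from publishing there.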

How to read data from socket, until client stopped send? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 7 years ago.
I have a problem.
I have a client and a server. The client connects to the server over TCP.
The client then sends some data, split into chunks, and I don't know the total length of the data (it is a TLS handshake). But I know the client sends a fixed-length burst of data and then stops until it receives a response, then sends another fixed-length burst.
I need to read all the chunks until the client stops sending (because there are many chunks). How do I do that?
My only idea is a timeout: read data in a loop with a timeout between iterations; when the timeout expires, the data has been completely collected.
Perhaps there is a more elegant solution?
Based on the information in your comments, you're doing this wrong. The correct way to write an HTTPS proxy is to read the CONNECT line, make the upstream connection, send the appropriate response back to the client, and then, if successful, start copying bytes in both directions simultaneously. You're not in the least concerned with packets or read sizes, and you should certainly not make any attempt to 'collect' packets before retransmission, as that just adds latency to the system.
You can accomplish this either by starting two threads per connection, one in each direction, or via non-blocking sockets and select()/poll()/epoll(), or whatever that looks like in Go.
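A minimal sketch of the two-thread variant in Python (a production proxy would add timeouts and error handling, but the core really is just blind byte copying, with a half-close when one side hits EOF):

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src reaches EOF, then half-close dst."""
    while True:
        data = src.recv(4096)
        if not data:          # EOF: the peer finished sending
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate the EOF downstream
    except OSError:
        pass                          # peer may already be gone

def relay(client: socket.socket, upstream: socket.socket) -> None:
    """Shuttle bytes both ways at once; no framing, no collecting."""
    t = threading.Thread(target=pump, args=(upstream, client))
    t.start()
    pump(client, upstream)
    t.join()
```

Note there is no notion of "a complete message" anywhere: each direction forwards whatever `recv` returns, which is exactly what TLS traffic needs.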
BUT I have no idea why you're doing this at all. There are plenty of open-source HTTP proxies already in existence, and as you're dealing with HTTPS there is no value you can possibly add to them. Your claim about 'business logic' is meaningless, or at least unimplementable.

How to make a realtime notification like facebook? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 9 years ago.
I am trying to build realtime notifications just like Facebook's. After a lot of learning and searching I am still confused, so please explain what is right and what is wrong.
Please assume the site could have the same number of users as Facebook.
Can we build realtime notifications with long polling or not? If yes, what are the advantages, disadvantages, and limitations?
Can we build realtime notifications with WebSockets or not? Again, keep in mind the number of users could match Facebook's. If yes, what are the advantages, disadvantages, and limitations?
If there is another method, please explain it.
Confusion
In my reading so far I found that WebSockets are good, but that there is a limit (max 5K) on the number of open connections, which would mean at most 5K concurrent users; this is far fewer than Facebook's user count. If I'm wrong, please explain.
You're wrong: a WebSocket-based solution is not limited to 5K concurrent connections.
According to the Facebook Newsroom they had about 727 million daily active users on average in September 2013, or about 504k unique users hitting the Facebook page every minute. Given an average visit time of 18 minutes (researched by statisticbrain.com), their notification infrastructure must be able to serve about 9 million (18 * 504k) concurrent TCP connections 24/7. Although this number is a very rough approximation, it gives a fair idea of what they are dealing with, and what you will have to deal with if you build such a system.
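That estimate is an application of Little's law: average concurrent connections = arrival rate * average session length. Reproducing it with the numbers from the answer:

```python
# Little's law: concurrent connections = arrival rate * average session length
users_per_minute = 504_000   # unique users hitting the page each minute
avg_visit_minutes = 18       # average visit length from the cited stats
concurrent = users_per_minute * avg_visit_minutes
print(concurrent)            # 9072000, i.e. roughly 9 million open connections
```

The same two inputs are all you need to size your own system: measure arrival rate and session length, multiply, and that is the steady-state connection count your servers must hold open.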
You can use long polling as well as websockets to build your realtime notification system. In both cases you face similar problems which are related to your OS (Explanations are for a Unix based system):
port limits: each TCP connection is identified by the (source IP, source port, destination IP, destination port) tuple, so a single listening IP/port pair can hold at most about 2^16 connections from any one client IP (not 2^16 in total); even so, at millions of connections you may need to listen on multiple ports and/or multiple IP addresses.
file descriptors and memory: every open connection consumes at least one file descriptor plus kernel buffer memory
Read more about the limitations in What is the theoretical maximum number of open TCP connections that a modern Linux box can have
Long-polling vs. Websockets:
Every poll in a long-poll solution requires a new HTTP request, which costs more bandwidth than keeping a WebSocket connection alive. Moreover, the notification is returned as an HTTP response, which triggers a new poll request. Although the WebSocket solution can be more efficient in bandwidth and system-resource consumption, it has a major drawback: incomplete browser support.
Depending on the stats at hand, a WebSocket-only solution ignores about 20-40% of your visitors (stats from statscounter.com). For this reason, various server libraries were developed that abstract the concept of a connection away from the 'physical' underlying transport. As a result, modern browsers create the connection over WebSockets while older browsers fall back to an alternative transport such as HTTP long polling, JSONP polling, or Flash. Prominent examples of such libraries are SockJS and Socket.io.

Open Source Queuing Solutions for peek, mark as done and then remove [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 7 years ago.
I am looking at open source queuing platforms that allow me do the following:
I have multiple producers and multiple consumers putting data into a queue in a multithreaded environment, with this specific use case:
I want consumers to be able to do the following:
Peek at a message from the queue (which should mark the message as invisible on the queue so that other consumers cannot consume it)
The consumer works on the message and, if it does the work successfully, marks the message as consumed, which permanently deletes it from the queue.
If the consumer dies abruptly after receiving the message, or fails to acknowledge successful consumption within a certain timeout, the message is made visible on the queue again so that another consumer can pick it up.
I've been looking at RabbitMQ, hornetQ, ActiveMQ but I'm not sure I can get this functionality out of the box, any recommendations on a system that gives me this functionality?
RabbitMQ does this out of the box, except for the timeout-based redelivery. If the connection is dropped while a message is unacknowledged, the message will be requeued for delivery to some other consumer of the queue. You can either use pull-mode ("Basic.Get") or push-mode/subscribe-mode ("Basic.Consume") to get the server to feed you messages.
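To make the requested semantics concrete, here is a toy single-process model of them in Python: peek hides a message for a visibility timeout, ack deletes it permanently, and an expired timeout makes the message deliverable again. A real broker such as RabbitMQ enforces this across connections and processes rather than inside one, as the answers here describe:

```python
import time
import uuid

class VisibilityQueue:
    """Toy queue with peek/ack/visibility-timeout semantics (not thread-safe)."""

    def __init__(self, visibility_timeout: float) -> None:
        self.visibility_timeout = visibility_timeout
        self._messages = {}  # id -> (payload, invisible_until)

    def put(self, payload) -> str:
        msg_id = uuid.uuid4().hex
        self._messages[msg_id] = (payload, 0.0)  # immediately visible
        return msg_id

    def peek(self):
        """Return (id, payload) of a visible message and hide it; None if empty."""
        now = time.monotonic()
        for msg_id, (payload, invisible_until) in self._messages.items():
            if invisible_until <= now:
                self._messages[msg_id] = (payload, now + self.visibility_timeout)
                return msg_id, payload
        return None

    def ack(self, msg_id: str) -> None:
        """Permanently delete a successfully processed message."""
        self._messages.pop(msg_id, None)
```

A consumer that crashes after `peek` simply never calls `ack`, so the timeout expires and the next `peek` (from any consumer) sees the message again, which is exactly the redelivery behaviour the question asks for.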
This is how HornetQ works in auto-acknowledge mode. It's not really "peeking", but a message is delivered to a listener and is not visible to any other listener. If the listener fails to complete the transaction (because it dies, throws an exception, etc.), the message reappears on the queue and is redelivered to another listener. If the listener completes successfully, the message is removed from the queue for good.
Sorry, just realized this thread is over a year old. Well, maybe this will help someone...
What you're asking for is standard JMS behaviour - which would be implemented out of the box by any compliant JMS implementation.
By way of introduction I can say that I've built and designed many message based systems from the ground up, using many technologies including CORBA, COM and native sockets.
In many of these it is the design that sits on the technology that is important.
Bearing this in mind I would probably choose to start with RabbitMQ and maybe enhance it if needed.
In many ways AMQP takes some effort to get your head around, but it is worth the time, and I believe it will allow you to make this work.
Even if you can't get the exact functionality out of the box, the important question is whether you can make it do this, which I believe you could. It's open source, after all.