How to communicate in real-time between multiple instances of microservices [closed] - kubernetes

I want to set up a three-microservice architecture: the first would be a frontend, the second a backend, and the third a pod responsible for running some commands. The frontend should let a user run a sequence of commands and show their output in real time. These commands would be passed to the backend, which would then create a pod to actually run them. So basically, as the commands run in the pod, the frontend should be able to display their output in real time.
I have tried researching a solution and came across Pusher, but I want to build something myself instead of using a third-party service. I also know there are many technologies available, such as WebSockets. Which would be the best technology to use in this case?

(This answer assumes you're interested in using Kubernetes, since the question is tagged kubernetes and you mentioned Pods.)
It sounds like you have the basic building blocks assembled already, and you just need a way to stream the logs through the backend, and expose them in a way the frontend can subscribe to.
You're on the right track with WebSockets; they tend to be the easiest way to stream data from an API into your frontend. One way to connect these pieces is to have the backend use the Kubernetes API to create a Job pod whose logs can be streamed. The workflow could go as follows:
frontend makes a request to the backend to run a command via WebSocket
backend waits for the frontend to send the command over the WebSocket
once received, the backend uses the Kubernetes Job API to create a Job pod
if the Job was created successfully, the backend streams that pod's logs via the Kubernetes Watch/GetLogs API and pipes anything written to the logs back over the WebSocket to the frontend (see the sketch below)
It's up to you to decide the format of the data returned over the WebSocket (e.g., plain text, JSON, etc.).
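For concreteness, here is a minimal sketch of what that backend flow could look like, assuming the official kubernetes Python client and an aiohttp WebSocket server. The route path, namespace, container image, and helper names are illustrative placeholders; error handling and clean-up of finished Jobs are left out.

```python
import asyncio
import uuid

from aiohttp import web
from kubernetes import client, config


def create_command_job(command: list[str]) -> str:
    """Create a Job that runs `command` in a throwaway pod; return the Job name."""
    config.load_incluster_config()  # use load_kube_config() when running outside the cluster
    job_name = f"cmd-{uuid.uuid4().hex[:8]}"
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=job_name),
        spec=client.V1JobSpec(
            backoff_limit=0,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="runner",
                        image="busybox",  # placeholder image for running shell commands
                        command=command,
                    )],
                ),
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
    return job_name


async def run_command(request: web.Request) -> web.WebSocketResponse:
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    # 1. Wait for the frontend to send the command over the WebSocket.
    command = (await ws.receive_str()).split()
    job_name = create_command_job(command)

    # 2. Find the pod created by the Job (the Job controller labels it job-name=<job>).
    core = client.CoreV1Api()
    pod_name = None
    while pod_name is None:
        pods = core.list_namespaced_pod(
            namespace="default", label_selector=f"job-name={job_name}")
        if pods.items:
            pod_name = pods.items[0].metadata.name
        else:
            await asyncio.sleep(1)

    # 3. Follow the pod's logs and pipe each line back to the frontend.
    #    (Blocking client calls are kept inline for brevity, and this assumes the
    #    pod has started; a real backend would retry and use a thread executor.)
    log_stream = core.read_namespaced_pod_log(
        name=pod_name, namespace="default", follow=True, _preload_content=False)
    for line in log_stream:
        await ws.send_str(line.decode(errors="replace"))

    await ws.close()
    return ws


app = web.Application()
app.add_routes([web.get("/run", run_command)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```

The same shape works with any Kubernetes client library; the key pieces are creating the Job from the received command and following the pod's logs with follow=True so each line can be forwarded over the WebSocket as it appears.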

Related

MQTT as an addition to the REST API [closed]

I have a REST API which receives some information (let's say events) from clients. Now I need to send some information from the server to clients. I'm trying to add MQTT as an additional way for clients to communicate with the server. Unlike HTTP, MQTT lets me do both sending and receiving, so it's possible to make MQTT analogues of all existing REST API methods.
Receive events from clients - HTTP, MQTT
Send commands to clients - MQTT
My idea was to make a "listener" which subscribes to all "event" MQTT topics and translates them into HTTP requests to the REST API (to keep the components decoupled). But there is a problem: this listener is a simple client. It doesn't have any special permissions and can't get the publisher's credentials to act on their behalf when talking to the REST API. MQTT doesn't even let a subscriber see who published a particular message.
One solution is to use MQTT only for sending information from the server to clients and keep using the REST API for all incoming requests. But that looks strange :)
Another way is to use custom broker hooks, but not all brokers support them, and they're not part of the MQTT specification, so that's not very reliable.
Any ideas on how to organize this properly?
Given that most (if not all) MQTT brokers support wildcard topics in ACLs, you can encode the user in the topic and then grant the agent access to the wildcard topic that matches all users.
e.g.
publish to events/<user>
and then grant the agent access to the topic events/+
You can then make sure each user's ACL only allows them to publish to their own events/<user> topic, ensuring that users cannot impersonate each other.
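As an illustration, here is a minimal sketch of that listener/agent, assuming paho-mqtt 2.x and the requests library; the broker address, topic layout, and REST endpoint are placeholders.

```python
import json

import paho.mqtt.client as mqtt
import requests

REST_API = "http://localhost:8000/events"   # hypothetical REST endpoint


def on_connect(client, userdata, flags, reason_code, properties):
    # Subscribe to the wildcard topic that matches every user's event topic.
    client.subscribe("events/+")


def on_message(client, userdata, msg):
    # The publishing user is encoded in the topic: events/<user>.
    user = msg.topic.split("/", 1)[1]
    payload = json.loads(msg.payload)
    # Forward the event to the REST API on behalf of that user.
    requests.post(REST_API, json={"user": user, "event": payload}, timeout=5)


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()
```

The broker's ACLs restrict each user to publishing only on their own events/<user> topic, while this agent alone is granted subscribe access to events/+.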

Rolling Over Streaming Connections During Upgrades

I am working on an application that uses Amazon Kinesis, and one of the things I was wondering about is how you can roll over an application during an upgrade without losing data on the streams. I have heard about things like blue/green deployments, but I was wondering what the best practice is for upgrading a data-streaming service so you don't lose data from your streams.
For example, my application has an HTTP endpoint that ingests data as a series of POST operations. If I want to replace the service with a newer version, how do I manage existing application streaming to my endpoint?
One common method is to have a software load balancer (LB) with a virtual IP; behind this LB there would be at least two HTTP ingestion endpoints during normal operation. During an upgrade, each endpoint is announced out (taken out of rotation) and upgraded in turn. The LB ensures that no traffic is forwarded to an announced-out endpoint.
(The endpoints themselves can be on separate VMs, Docker containers, or physical nodes.)
Of course, the stream needs to be finite; the TCP socket/HTTP stream is owned by one of the endpoints. However, as long as the stream can be stopped gracefully, the following flow works, assuming endpoint A owns the current ingestion:
Tell endpoint A not to accept new streams. All new streams will be redirected only to endpoint B by the LB.
Gracefully stop existing streams on endpoint A.
Upgrade A.
Announce A back in.
Rinse and repeat with endpoint B.
As a side point, you would need two endpoints in a load-balanced (or master/slave) setup anyway if you require any reasonable uptime and reliability guarantees.
There are also more bespoke methods that allow hot code swapping on the same endpoint, but they rely on specific internal design (e.g., a separate process between the networking and processing stacks, connected by IPC). A sketch of the drain step from the flow above follows below.
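As a rough illustration of the "announce out" / drain step, here is a sketch of a single ingestion endpoint that takes itself out of rotation by failing its health check, assuming Flask and an LB that polls /healthz; the paths and drain trigger are illustrative and not tied to any particular load balancer.

```python
import threading

from flask import Flask, request

app = Flask(__name__)
draining = threading.Event()


@app.route("/healthz")
def healthz():
    # While draining, fail the health check so the LB stops sending traffic here.
    if draining.is_set():
        return "draining", 503
    return "ok", 200


@app.route("/ingest", methods=["POST"])
def ingest():
    # Refuse new streams while draining; in-flight requests finish normally.
    if draining.is_set():
        return "endpoint draining, retry via LB", 503
    # ... hand the payload off to the stream (e.g. Kinesis) here ...
    _ = request.get_data()
    return "accepted", 202


@app.route("/admin/drain", methods=["POST"])
def drain():
    # Deploy tooling flips the endpoint into draining mode before upgrading it,
    # then announces it back in afterwards (e.g. by restarting the process).
    draining.set()
    return "draining started", 200


if __name__ == "__main__":
    app.run(port=8080)
```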

Observer Daemon on Pusher Channel [closed]

Currently I have a server-side user list that is pulled down by User A's browser, which then tracks the state of the system locally via Pusher as users log on or off.
As User A's status changes, the browser sends AJAX updates to the server to report its status.
I am having de-sync issues between the status of users pulled down from the database and the local tracking of state in the browser as it keeps track of users on the channel.
I would like to create a server-side observer that constantly monitors the Pusher channels and acts as a redundant mechanism to keep clients' browsers in sync with the database.
Can anyone point me in the right direction of a good solution to use for the following necessary functions:
- Needs to integrate with Pusher and be able to listen and respond to events, not just send JSON messages over the channel
- Needs to receive all events published on a channel
I am unsure what libraries or solutions exist that can listen to Pusher channel events on the server.
Any suggestions would be much appreciated.
The best solution for this is to use Pusher's WebHooks. The benefit is that you can receive a number of events related to user activity, and all events will be delivered, i.e. failures are queued and resent.
There are no language requirements for consuming WebHooks, as each one is just an HTTP request made from Pusher to an endpoint that you define.
Right now you can receive channel vacated and occupied events (if a channel has any subscribers or none) and presence events (users joining and leaving a channel). It's likely that Pusher will expose additional events as WebHooks in the future.
If you were to run a daemon process which connects as a client there is the possibility of missing events during times where the client isn't connected e.g. network downtime or reconnection phases.
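As a sketch of what consuming those WebHooks could look like, here is a minimal Flask receiver for the presence and channel-existence events. The signature check follows Pusher's published HMAC-SHA256 scheme, but verify the header names and payload fields against the current Pusher docs before relying on this; the mark_user_* helpers are hypothetical stand-ins for your database reconciliation logic.

```python
import hashlib
import hmac

from flask import Flask, abort, request

APP_KEY = "your-app-key"        # placeholders
APP_SECRET = b"your-app-secret"

app = Flask(__name__)


@app.route("/pusher/webhooks", methods=["POST"])
def pusher_webhook():
    body = request.get_data()
    signature = request.headers.get("X-Pusher-Signature", "")
    expected = hmac.new(APP_SECRET, body, hashlib.sha256).hexdigest()
    if request.headers.get("X-Pusher-Key") != APP_KEY or not hmac.compare_digest(signature, expected):
        abort(401)   # reject anything not signed with our app secret

    for event in request.get_json().get("events", []):
        name = event.get("name")
        if name == "member_added":
            mark_user_online(event["channel"], event["user_id"])
        elif name == "member_removed":
            mark_user_offline(event["channel"], event["user_id"])
        # channel_occupied / channel_vacated can be handled similarly.

    return "", 200


def mark_user_online(channel, user_id):
    # Hypothetical helper: reconcile the database with the observed presence event.
    print(f"{user_id} online on {channel}")


def mark_user_offline(channel, user_id):
    print(f"{user_id} offline on {channel}")


if __name__ == "__main__":
    app.run(port=5000)
```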

Simulating Virtual Users for Smartphone App based Service

Apologies if something similar has been asked before, but my search didn't return anything I would consider directly related.
I am trying to implement a service with its backend in AWS EC2/S3 and its frontend on the iPhone; the service is more or less a to-do list. This is not a novel idea, but it will help me in a class I teach about IT infrastructure.
Unfortunately, I have access only to my own iPhone, so I cannot demonstrate scalability over AWS, etc.
Is there a way/software tool/framework to simulate virtual users for this app that can send requests to the AWS servers pretending to be from different accounts/apps?
The simulator should send requests just like my actual iPhone app would if I were to add an item to the list, or delete or edit one.
I understand stress testing is a well established topic but here I want to just simulate multiple users and demonstrate scalability instead of trying to push the Web service to its limits. Neither am I sure if this completely overlaps with traffic simulation.
Any help will be deeply appreciated.
You might be able to do it using Apache JMeter. That depends on what you have going on in the backend, but it supports the following server types:
Web - HTTP, HTTPS
SOAP
Database via JDBC
LDAP
JMS
Mail - SMTP(S), POP3(S) and IMAP(S)
Native commands or shell scripts
You should be able to wire something together with that.
http://jmeter.apache.org/
http://www.opensourcetesting.org/performance.php
I've used it at various points to simulate VERY heavy loads for my services running in AWS/EC2.
ApacheBench (ab) is a very convenient tool for HTTP load testing; you can have it make concurrent requests to simulate multiple users. Its main advantage over other tools is that it's simple and easy to get started with. If your backend listens on HTTP, it might be worth trying ab before investing any time in something more complex.
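If you'd rather script the simulation yourself than configure JMeter or ab, a minimal sketch in Python with the requests library can already demonstrate the point. The base URL, routes, and the X-User-Id header standing in for real per-user authentication are all placeholders for whatever your to-do API actually exposes.

```python
import random
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://your-backend.example.com"   # placeholder
NUM_USERS = 50
ACTIONS_PER_USER = 20


def simulate_user(user_id: int) -> int:
    """Pretend to be one app user issuing a series of to-do operations."""
    session = requests.Session()
    session.headers["X-User-Id"] = str(user_id)   # stand-in for real auth
    ok = 0
    for i in range(ACTIONS_PER_USER):
        if random.choice(["add", "list"]) == "add":
            resp = session.post(f"{BASE_URL}/items", json={"text": f"item {i}"})
        else:
            resp = session.get(f"{BASE_URL}/items")
        ok += resp.status_code < 400
    return ok


if __name__ == "__main__":
    # Run all virtual users concurrently, each on its own thread/session.
    with ThreadPoolExecutor(max_workers=NUM_USERS) as pool:
        results = list(pool.map(simulate_user, range(NUM_USERS)))
    print(f"{sum(results)} successful requests out of {NUM_USERS * ACTIONS_PER_USER}")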

COMET (server push to client) on iPhone [closed]

I'm looking to establish some kind of socket/COMET-type functionality from my server(s) to my iPhone application. Essentially, any time a user manages to mark an arbitrary object 'dirty' on the server, say by updating their address, the change should be pushed from the server to any clients keeping a live poll open to the server. The buzzword for this is COMET, I suppose. I know there is DWR out there for web-browser applications, so I'm thinking maybe it's best to set up a hidden UIWebView in each of my controllers just so I can get out-of-the-box COMET from their JavaScript framework? Is there a more elegant approach?
There are a couple of solutions available to use a STOMP client.
STOMP is incredibly simple and lightweight, perfect for the iPhone.
I used this one as my starting point, and found it very good. It has a few object allocation/memory leak problems, but once I got the hang of iPhone programming, these were easy to iron out.
Hope that helps!
Can you use an ordinary TCP/IP socket in your application?
A) If yes, then a raw TCP/IP socket is definitely the more elegant solution. From your iPhone app you just wait for notification events. The socket stays open as long as your application is open. If you want, you can even use the HTTP protocol/headers.
On the server side you can use a framework designed for writing servers that efficiently handle thousands of open TCP/IP connections, e.g. Twisted, EventMachine, or libevent. Then just bind the server's main socket to the HTTP port (80).
The idea is to use a server that keeps just a single data structure per client, receives update events from some DB application, and then pushes them to the right client.
B) If no, you have to use Apache and an HTTP client on the iPhone side. Then you should know that the whole COMET approach is in fact a workaround for limitations of the HTTP protocol and of Apache/PHP.
Apache was designed to handle many short-lived connections. As far as I know, only the newest versions of Apache (the worker MPM) can efficiently handle a large number of open connections; previously, Apache kept one process per connection.
Web browsers also limit the number of concurrent open connections to one web server (per hostname, e.g. www.foo.com, not per IP address of www.foo.com), and that limit is 2 connections. Additionally, a browser will only allow AJAX connections to the same server from which the main HTML page was downloaded.
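To make option A concrete, here is a minimal sketch of such a push server using Python's asyncio (standing in for Twisted/EventMachine/libevent). The event source is simulated with a timer, and the newline-delimited plain-text protocol is just an illustration.

```python
import asyncio

connected_clients: set[asyncio.StreamWriter] = set()


async def handle_client(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # One long-lived connection per client; nothing is sent until an event arrives.
    connected_clients.add(writer)
    try:
        await reader.read()          # returns when the client disconnects
    finally:
        connected_clients.discard(writer)
        writer.close()


async def push_events():
    # Stand-in for "receive an update event from some DB application".
    n = 0
    while True:
        await asyncio.sleep(5)
        n += 1
        for writer in list(connected_clients):
            writer.write(f"event {n}: address updated\n".encode())
            await writer.drain()


async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
    async with server:
        await asyncio.gather(server.serve_forever(), push_events())


if __name__ == "__main__":
    asyncio.run(main())
```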
I wrote a web server for doing exactly this kind of thing. I'm pushing real-time updates through the server with long polling and, as an example, I had Safari on the iPhone displaying that data.
A given instance of the server should be able to handle a few thousand concurrent clients without trying too hard. I've got a plan to put them in a hierarchy to allow for more horizontal scaling (should be quite trivial, but doesn't affect my current application).
WebSync has a JavaScript client that works on the iPhone, if that's what you're after.
Would long polling work for what you want to achieve? You can implement the client side in a few lines of regular JavaScript, which will be lighter than any framework could possibly be.
It would also be trivial to implement in Objective-C (connect, wait for a response or timeout, repeat).
The answers to my question Simple "Long Polling" example code? hopefully explain how extremely simple long polling is.
Basically, you just request a URL as usual; the web server accepts the connection but doesn't send any data until some is available. When you receive data, or the connection times out, you reconnect (and repeat).
The most complicated bit is the server side, as you cannot use a regular threaded web server like Apache, although this is also the case with COMET.
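As an illustration of what that server side can look like, here is a minimal sketch using aiohttp (an async server, since a thread-per-connection server would struggle with many parked requests). The /poll and /publish routes and the single shared message slot are purely illustrative.

```python
import asyncio

from aiohttp import web


async def poll(request: web.Request) -> web.Response:
    state = request.app["state"]
    try:
        # Park the request until data is available, or time out so the client
        # simply reconnects (connect, wait for response or timeout, repeat).
        await asyncio.wait_for(state["event"].wait(), timeout=30)
    except asyncio.TimeoutError:
        return web.Response(status=204)      # no content; client just re-polls
    return web.json_response({"message": state["message"]})


async def publish(request: web.Request) -> web.Response:
    state = request.app["state"]
    state["message"] = (await request.json())["message"]
    state["event"].set()                     # release every parked poller
    state["event"] = asyncio.Event()         # fresh event for the next message
    return web.Response(status=202)


async def init_state(app: web.Application) -> None:
    # Created on startup so the Event is bound to the running loop.
    app["state"] = {"event": asyncio.Event(), "message": ""}


app = web.Application()
app.on_startup.append(init_state)
app.add_routes([web.get("/poll", poll), web.post("/publish", publish)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```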
StreamHub Comet Server works with the iPhone out of the box; no plugins or anything required. I just browsed to their website on my iPhone and all the examples worked; I didn't need to install Flash or anything.
Do you want to, or have to, do the communication for your app over HTTP? If not, you can use the CFNetwork framework to use sockets (TCP/UDP) to let your app and server communicate. From what I have seen of the CFNetwork stack, it is pretty cool and makes it fairly straightforward to read and write streams, and it allows for both synchronous and asynchronous communication. It also lets you define callbacks on your socket so you get notified of events like data received, connection made, etc. So, in your example, you could send the information over the socket to your server, and then define a callback that listens for incoming data on the stream and updates your app accordingly.
EDIT: Did a little more research; if you go the socket route, you may also want to look at the NSStream classes. They are Cocoa abstractions built on top of the CFSocket stuff.
You didn't mention what server-side tech you're using, but in case it's Microsoft .NET (or for any other Googlers who come across this), there is a simple COMET option: http://www.codeplex.com/ncomet.
COMET, LightStreamer, AJAX, all that stuff is broken. It is a basic fact of TCP that no keep-alive is ever guaranteed without pinging traffic, so you can forget long polling if any decent reliability or timely delivery is to be guaranteed.
It's just hype everyone saw through back in 2003 when the mania kicked off.