I need some advice/insight on how to best implement certain functionality. The idea of my task is a live system monitoring dashboard.
Let's say I have the following setup, based on two physical servers:
Server1 is running a Play application which monitors certain files, services, etc. for changes. As soon as a change occurs, it alerts another Play application running on Server2.
Server2 is running a Play application that serves a web front end displaying live dashboard data sent to it from the Play application sitting on Server1.
I am only familiar with the Play framework serving data in response to HTTP requests, but the way I need it to run in this particular situation is a bit different.
My question is: how do I keep these two Play applications in constant connection in the way I've described above? The requirement is that the Server1 application pushes data to the Server2 application on an as-needed basis, as opposed to the Server2 application running in an endless loop and asking the Server1 application every 5 seconds whether there is any new data.
I'm using Play Framework 2.2.1 with Scala.
Actually, Akka (introduced in Play 2.0) fits your requirements perfectly (as Venkat pointed out).
Combining its remoting, scheduler and futures capabilities, you will be able to build any monitor you need.
A scenario might be:
S1 (let's name it the Doctor) uses Akka's scheduler to monitor resources every few seconds.
If the Doctor detects changes, it sends an Akka message to S2's actor (FrontEnd); otherwise it does nothing.
The FrontEnd actor can add the event to some queue, or push it directly, e.g. to a WebSocket, which will push it to the browser. Another option is setting up another scheduler in FrontEnd which checks whether the queue contains new events.
Check the included sample applications to see how FrontEnd can communicate with the browser (e.g. comet-live-monitoring or eventsource-clock).
For communication between the Doctor and FrontEnd apps, akka-remote is a promising feature.
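For illustration, here is a minimal sketch of the Doctor side, assuming akka-remote is enabled on both apps; the system/actor names, the address and the ResourceChanged message are assumptions, not part of the original answer:

import akka.actor.{Actor, ActorSystem, Props}
import scala.concurrent.duration._

// message shared by both apps (it must be on the classpath of Doctor and FrontEnd)
case class ResourceChanged(description: String)

class Doctor extends Actor {
  // remote FrontEnd actor inside the Play app on Server2 (address is illustrative)
  val frontEnd = context.actorSelection("akka.tcp://FrontEndSystem@server2:2552/user/frontEnd")

  def receive = {
    case "check" =>
      if (somethingChanged()) frontEnd ! ResourceChanged("file X was modified")
  }

  // stand-in for the real file/service check
  private def somethingChanged(): Boolean = scala.util.Random.nextBoolean()
}

object DoctorApp extends App {
  val system = ActorSystem("DoctorSystem")
  import system.dispatcher
  val doctor = system.actorOf(Props[Doctor], "doctor")
  // poll the monitored resources every 5 seconds
  system.scheduler.schedule(0.seconds, 5.seconds, doctor, "check")
}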
I think Server-Sent Events (SSE: http://dev.w3.org/html5/eventsource/) are what you are looking for. Since it's supposed to be one-directional push only (Server1 pushes data to Server2), SSE is probably a better choice than WebSockets, which are full-duplex bidirectional connections. Since your Server2 has a web front end, the browser can automatically reconnect to Server1 if you are using SSE. Most modern browsers support SSE (IE doesn't).
Since you are using Play Framework: You can use Play WS API for Service to Service communication and also you can take advantage of the powerful abstractions for handling data asynchronously like Enumerator and Iteratee. As Play! integrates seamlessly with Akka, you can manage/supervise the HTTP connection using Actors.
Edit:
Answering "How exactly one service can push data to another on a need basis" in steps:
Manage the HTTP connection: Server1 needs to have a WebService client to manage the HTTP connection with Server2. By "manage HTTP connection" I mean: reconnect/reset/disconnect the HTTP connection. Akka Actors are a great fit for solving this problem. Basically, this actor receives messages like CONNECT, CHECK_CONN_STATUS, DISCONNECT, RESET etc. Have a scheduler for your HttpSupervisor actor to check the connection status, so that you can reconnect if the connection is dead.
import akka.actor.{ActorSystem, Props}
import scala.concurrent.duration._
val system = ActorSystem("Monitor")
import system.dispatcher // ExecutionContext needed by the scheduler
val supervisorRef = system.actorOf(Props(new HttpSupervisor(system.eventStream)), "MonitorSupervisor")
system.scheduler.schedule(60.seconds, 60.seconds, supervisorRef, CHECK_CONN_STATUS)
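A rough sketch of what such an HttpSupervisor could look like with Play 2.2's WS client; the message objects come from the description above, while the ping URL and the recovery logic are assumptions:

import akka.actor.Actor
import akka.event.EventStream
import play.api.libs.ws.WS

case object CONNECT
case object CHECK_CONN_STATUS
case object DISCONNECT

class HttpSupervisor(eventStream: EventStream) extends Actor {
  import context.dispatcher // ExecutionContext for the Future callbacks

  def receive = {
    case CHECK_CONN_STATUS =>
      // hypothetical health-check endpoint exposed by the Play app on Server2
      WS.url("http://server2:9000/ping").get()
        .map(_.status)
        .recover { case _ => -1 }
        .foreach { status =>
          if (status != 200) self ! CONNECT // reconnect if the connection looks dead
        }

    case CONNECT =>
      // (re)establish whatever session/handshake Server2 expects

    case DISCONNECT =>
      // tear the session down
  }
}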
Listen to the changes and PUSH when needed:
Create an Enumerator which produces the changes. Create an Iteratee for consuming the changes asynchronously. Again, some code that may be of help:
val monitorIteratee = play.api.libs.iteratee.Iteratee.foreach[Array[Byte]] { bytes =>
  WS.url(postActionURLOnServer2).post(new String(bytes, "UTF-8"))
}
Attach the iteratee to the enumerator.
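For example, a small sketch using Concurrent.broadcast as a stand-in for whatever actually produces the change events (the channel and the pushed payload are illustrative):

import play.api.libs.iteratee.Concurrent

// hypothetical channel that the monitoring code pushes change payloads into
val (changesEnumerator, changesChannel) = Concurrent.broadcast[Array[Byte]]

// attach the iteratee to the enumerator: every pushed change is POSTed to Server2
changesEnumerator |>> monitorIteratee

// elsewhere, when a change is detected:
changesChannel.push("disk usage above threshold".getBytes("UTF-8"))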
Related
I would like to maintain an SSE pipeline in the front end of my Play 2.7.x application, which would listen indefinitely for irregularly spaced events from the server (possibly triggered by other users). I send the events via a simple Akka flow, like this:
Ok.chunked(mySource via EventSource.flow).as(ContentTypes.EVENT_STREAM)
However, the connection is automatically closed by the Play/Akka server. What would be the best course of action here:
- set play.server.http.idleTimeout to infinite (but the documentation does not recommend it; also it would affect other non-SSE endpoints)?
- rely on the browser to automatically re-establish the connection (but as far as I know not all browsers do it)?
- explicitly implement some reconnection logic in JavaScript on the client?
- perhaps idleTimeout can be overridden locally for a specific action (I have not found a way though)?
Periodically send an empty Event to keep the connection alive:
import play.api.http.ContentTypes
import play.api.libs.EventSource
import play.api.libs.EventSource.Event
import scala.concurrent.duration._

val heartbeat = Event("", None, None)

val sseSource =
  mySource
    .via(EventSource.flow)
    .keepAlive(1.second, () => heartbeat)

Ok.chunked(sseSource).as(ContentTypes.EVENT_STREAM)
Akka HTTP's support for server-sent events demonstrates the same approach (Play internally uses Akka HTTP).
I have a scenario where I have a bunch of Akka Actors running with each Actor representing an IoT device. I have a web application based on Play inside which these Actors are running and are connected to these IoT devices.
Now I want to expose the signals from these Actors to the outside world by means of a WebSocket endpoint. Each of the Actors has some sort of mechanism with which I can ask for the latest signal status.
My idea is to do the following (a rough code sketch follows the list):
Add a WebSocket endpoint in my controller which expects the id of the IoT device for which it needs the signals. In this controller, I will do an actor selection to get the Actor instance that corresponds to the id of the IoT device that is passed in.
Use the ActorRef obtained in step 1 and instantiate the WebSocketActor
In this WebSocketActor, I will instantiate a Monix Observable that will at regular intervals use the actorRef and ask it for the signals.
As soon as I get these signals, I will pass it on to the WebSocket endpoint
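In code, steps 1 and 2 might look roughly like the sketch below; all names and paths are illustrative, and for brevity the periodic ask is shown with a plain Akka scheduler tick instead of a Monix Observable:

import akka.actor.{Actor, ActorRef, ActorSelection, ActorSystem, Props}
import akka.stream.Materializer
import javax.inject.Inject
import play.api.libs.streams.ActorFlow
import play.api.mvc.{AbstractController, ControllerComponents, WebSocket}
import scala.concurrent.duration._

// Per-connection actor: periodically asks the device actor for its latest signals
// and forwards whatever comes back to the WebSocket client.
object WebSocketActor {
  case object Tick
  def props(out: ActorRef, device: ActorSelection): Props = Props(new WebSocketActor(out, device))
}

class WebSocketActor(out: ActorRef, device: ActorSelection) extends Actor {
  import WebSocketActor.Tick
  import context.dispatcher
  private val tick = context.system.scheduler.schedule(1.second, 5.seconds, self, Tick)

  def receive = {
    case Tick           => device ! "latest-signals" // hypothetical request message
    case signal: String => out ! signal              // push the device's reply to the browser
  }

  override def postStop(): Unit = tick.cancel()
}

class SignalsController @Inject()(cc: ControllerComponents)
                                 (implicit system: ActorSystem, mat: Materializer)
    extends AbstractController(cc) {

  // step 1: WebSocket endpoint keyed by the id of the IoT device
  def signals(deviceId: String) = WebSocket.accept[String, String] { _ =>
    // step 2: resolve the device's actor and hand it to the per-connection actor
    val device = system.actorSelection(s"/user/device-$deviceId")
    ActorFlow.actorRef(out => WebSocketActor.props(out, device))
  }
}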
Now my question is:
What happens, say, if a client has opened a WebSocket stream and after some time the Actor representing the IoT device is dead? I probably should handle this situation in my WebSocketActor, but what would this look like?
If the Actor representing the IoT device comes back alive (assuming that I have some supervision set up), can I continue serving the client that opened the socket connection before the Actor died? I mean, will the client need to somehow close and open the connection again?
Any suggestions?
If you'd like to see an Akka actors + Monix integration example, communicating over WebSocket, look no further than the monix-sample project.
The code handles network failure. If you load that sample in the browser and disconnect the network, you'll see it recover once connectivity is back.
I've been reading the Akka documentation but I cannot figure out how to accomplish what I have in mind.
I want to create a small Akka application (App A) that is meant to be "always running". This App is NOT meant to be deployed on a cloud architecture but on a single machine.
I'd also like to add some "human interaction" features to this app, so I was thinking about creating a console application (App B) to enable somebody to send messages to a Master Actor in App A, including for example "Shut down" (instead of Ctrl-C) or "Force execution of task X right now".
Both apps will run on the same machine; I plan to connect a terminal to that machine and start the console application.
So what I haven't figured out so far is:
1) should I use Remote Actors on App A in order to make it visible from App B?
2) Is it possible, and also good practice, to communicate between the two apps using actor messages, or are there other advisable approaches in this specific scenario (console -> application)? Note that I have no need for security standards on this kind of communication.
3) If I can send actor messages to Local Actors, is the addressing scheme described for Remote Actors ("schema://domain:port/path") also valid for Local Actors?
Finally, as a general guideline, consider I want to keep it simple...
1) Why not. You may also consider using Spray, which will give you access over HTTP. Or even use the Typesafe Console - http://resources.typesafe.com/docs/console/manual/getting-started.html
2) That's fine. The only thing you should keep in mind is that there is no guaranteed delivery in Akka Remote. If you have no connection problems, it should be fine.
3) Yes, but the process will connect to its own port.
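As a minimal sketch of the App B side, assuming App A runs an ActorSystem named "AppA" with classic akka-remote enabled on port 2552 and a top-level actor called "master" (all of these names, and the string commands, are illustrative):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ConsoleApp extends App {
  // App B: its own ActorSystem, with akka-remote enabled in its application.conf
  val system = ActorSystem("ConsoleApp", ConfigFactory.load())

  // look up App A's master actor by its remote path and forward console commands to it
  val master = system.actorSelection("akka.tcp://AppA@127.0.0.1:2552/user/master")

  Iterator.continually(scala.io.StdIn.readLine("command> "))
    .takeWhile(line => line != null && line != "exit")
    .foreach {
      case "shutdown" => master ! "Shutdown"    // App A interprets this and stops itself
      case "taskX"    => master ! "ForceTaskX"
      case other      => println(s"unknown command: $other")
    }

  system.shutdown() // Akka 2.2/2.3 API of the question's era; newer Akka uses terminate()
}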
We have an existing Play server app to which mobile clients talk via WebSockets (two-way communication). Now, as part of load testing, we need to simulate hundreds of client requests to the server.
I was thinking of writing a separate, headless Play client app and somehow, in a loop, making hundreds of requests to the server app. Given that I am new to WebSockets, does this approach sound reasonable?
Also, what is the best way to write a headless WebSocket client that makes WebSocket requests to a WebSocket server?
If you want to properly validate the performance of your application, it is very important to:
- simulate the behavior of real users by simulating real WebSocket connections
- reproduce a realistic end-user journey on the application utilizing the WebSocket channel
It's important to generate the proper user workflow (the actions done by a user when receiving a WebSocket message). For example, in a betting application users interact with the application depending on the messages received by the browser.
To be able to generate a realistic load test, I would recommend using real load-testing software that supports WebSocket. It will allow you to generate different kinds of users, with different kinds of networks, different kinds of browsers, etc.
What framework is used by your application? Depending on the framework, I could recommend the proper tool for your need.
You have to distinguish between hundreds of clients and hundreds of requests from the same client.
When you have hundreds of clients, the requests can come in at the same time.
When you only have one client, requests will mostly come in sequentially (depending on whether you use one or multiple threads).
When you only have one client, you can perfectly send requests using a loop. What you will actually measure here is the processing latency of the server.
When you want to simulate multiple clients, this is a bit more difficult. If you simulate them from one machine, the requests are pipelined through the network card and hence are not really sent in parallel. Also, you are limited by the bandwidth of the machine. Suppose the server has a 1Gb connection and your test machine has a 1Gb connection; then you can never overload the bandwidth of the server. If your clients are supposed to have a limited bandwidth like 50Mb, then you can run 20 clients (not taking into account the serialisation that happens through the network card).
In theory, you should use as many machines as the number of clients you want to test. In reality, you would use a number of machines each running a limited number of clients.
Regarding a headless test application, you could use a headless browser testing framework like PhantomJS.
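If you would rather stay on the JVM than drive a headless browser, here is a rough sketch of opening many concurrent WebSocket connections with Akka HTTP's client (it assumes Akka 2.6+, and the URL, message and client count are illustrative; each "client" here is just a connection, not a full browser):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage, WebSocketRequest}
import akka.stream.scaladsl.{Flow, Sink, Source}

object WsLoadTest extends App {
  implicit val system: ActorSystem = ActorSystem("ws-load-test")

  val clients = 100 // number of simulated connections
  (1 to clients).foreach { i =>
    val flow = Flow.fromSinkAndSource(
      Sink.foreach[Message](msg => println(s"client $i received: $msg")),
      // send one message, then keep the connection open
      Source.single[Message](TextMessage(s"hello from client $i")).concat(Source.maybe[Message])
    )
    Http().singleWebSocketRequest(WebSocketRequest("ws://localhost:9000/ws"), flow)
  }
}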
I have written a simple WebSocket client using Node.js.
If the server is up and ready to accept requests, you can fire the requests as written below:
const WebSocket = require('ws')

const url = 'ws://localhost:9000/ws'
const connection = new WebSocket(url)

connection.onopen = () => {
  // fire 100 messages as soon as the connection is open
  for (let i = 0; i < 100; i++) {
    connection.send('hello')
  }
}

connection.onmessage = (event) => {
  console.log(event.data)
}

connection.onerror = (error) => {
  console.log(`WebSocket error: ${error}`)
}
I've never implemented a notification service on a web client and I'd just like to know what the most common pattern is.
Like whether the server has to push to the client, or whether it's the client that needs to ask the server for new info every minute, for example.
Or if there is another pattern.
There are multiple ways to implement push notifications:
HTTP Long Polling: the client initiates a request. The server checks if it has any new notifications. Irrespective of whether or not it has new notifications, an appropriate response is sent and the connection is closed. After time X the client initiates another request. (+ very easy to implement; - notifications are not real time, they depend on X since data retrieval is client-initiated, and as X decreases the overhead on the server increases.) A small Play sketch of this variant follows this list.
HTTP Streaming: this is very similar to HTTP long polling, however the connection is not closed; the server sends a chunked response. So as soon as the server receives a new notification that it wants to push, it can simply write to the socket. (+ lower latency than long polling and almost real-time behaviour, and the overhead of closing and reopening the connection is reduced; - client-side memory usage keeps piling up, ugly hacks, etc.)
WebSocket: a TCP-based protocol that provides true two-way communication. The server can push data to the client at any time. (+ true real time; - some older browsers don't support it.) Read more about it at WebSocket.org | About WebSocket.
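To make the polling variant concrete, here is a minimal sketch of such an endpoint in a Play controller; the in-memory store and the `since` parameter are assumptions used only for illustration:

import javax.inject.{Inject, Singleton}
import play.api.libs.json.Json
import play.api.mvc.{AbstractController, ControllerComponents}

@Singleton
class NotificationController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  // stand-in for wherever notifications really live, keyed by creation timestamp;
  // server-side code would add entries, e.g. notifications.put(System.currentTimeMillis(), "shipped")
  private val notifications = scala.collection.concurrent.TrieMap.empty[Long, String]

  // the client asks for anything newer than `since`, gets an immediate answer,
  // and re-issues the request after X seconds
  def poll(since: Long) = Action {
    val newer = notifications.collect { case (ts, msg) if ts > since => msg }.toSeq
    Ok(Json.toJson(newer))
  }
}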
Now based on the technology stack there are various solutions available:
(A) Node.js: the cross-browser WebSocket for realtime apps (does the heavy lifting for you; gracefully falls back in case WebSocket is not supported).
(B) Django: as mentioned previously, you can use signals for notifications. You can also try django-websocket 0.3.0 for WebSocket support.
(C) Jetty / Netty and Grizzly (Java based): all have WebSocket support.
This depends on what web framework you use. With a modern framework like Meteor, it's very easy for the server to push notifications to clients, and many kinds of display updates can happen automatically, without having to construct a notification mechanism to take care of them.
Have a look at the two Meteor screencasts listed at http://meteor.com.