Can I avoid having a timed event in the client app that pings the server for event updates?
I am using AngularJS and Node.js/Express to build my web app.
The other alternative I can think of is maybe Socket.IO.
Can I do something like:
app.post('/abc', function (req, res) {
  if (event) {
    res.send('event data');
  }
});
The idea is that the above app.post would not return until the event happens.
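For reference, holding the response open like that (long polling) can be sketched in Express roughly as below; the events emitter, the event name and the timeout are illustrative assumptions, not part of the original question:

// Minimal long-polling sketch: the response is held open until an
// application event fires or a timeout elapses.
const express = require('express');
const EventEmitter = require('events');

const app = express();
const events = new EventEmitter(); // hypothetical app-wide event source

app.post('/abc', function (req, res) {
  const onEvent = (data) => {
    clearTimeout(timer);
    res.json(data); // reply as soon as the event happens
  };

  // Give up after 30s so the connection is not held forever; the client
  // simply re-issues the request (classic long polling).
  const timer = setTimeout(() => {
    events.removeListener('update', onEvent);
    res.status(204).end();
  }, 30000);

  events.once('update', onEvent);
});

// Elsewhere in the app, something publishes the event:
// events.emit('update', { msg: 'event data' });

app.listen(3000);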
There are multiple ways to implement push notifications:
HTTP Long Polling: The client initiates a request. The server checks whether it has any new notifications; either way it sends an appropriate response and closes the connection. After time X the client initiates another request. (+ very easy to implement; - notifications are not real time, since data retrieval is client-initiated and depends on X, and as X decreases the overhead on the server increases)
HTTP Streaming: This is very similar to HTTP long polling, except that the connection is not closed. The server sends a chunked response, so as soon as it receives a new notification it wants to push, it can simply write to the socket. (+ lower latency than long polling and almost real-time behaviour, and the overhead of closing and re-opening connections is reduced; - client-side memory usage keeps piling up, ugly hacks, etc.)
WebSocket: A TCP-based protocol that provides true two-way communication; the server can push data to the client at any time. (+ true real time; - some older browsers don't support it). Read more about it at WebSocket.org | About WebSocket
Now, based on the technology stack, there are various solutions available (a minimal Node sketch follows this answer):
(A) Node.js: Socket.IO, the cross-browser WebSocket for realtime apps (does the heavy lifting for you and gracefully falls back when WebSocket is not supported)
(B) Django: As mentioned previously, you can use signals for notifications. You can also try django-websocket 0.3.0 for WebSocket support
(C) Jetty / Netty and Grizzly (Java based): all have WebSocket support
from link
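To make the WebSocket option concrete for the Node/Express stack in the question, here is a rough sketch using the ws package (an assumption; Socket.IO, which the question mentions, would serve the same purpose and adds fallbacks):

const http = require('http');
const express = require('express');
const WebSocket = require('ws');

const app = express();
const server = http.createServer(app);
const wss = new WebSocket.Server({ server });

// Whenever the application has news, push it to every connected client
// instead of waiting for them to poll.
function pushEvent(data) {
  const payload = JSON.stringify(data);
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload);
    }
  });
}

// Example (hypothetical trigger): pushEvent({ type: 'update', msg: 'event data' });

server.listen(3000);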
Related
I am currently working on a project that requires the client to request a big job and send it to the server. The server then divides up the job and responds with an array of URLs for the client to make GET calls on and stream back the data. I am the greenhorn on the project and I am currently using Spring websockets to improve efficiency. Instead of the clients constantly pinging the server to see if it has results ready to stream back, the websocket will now just directly contact the client. Hooray!
Would it be a bad idea to have websockets manage the whole process from end to end? I am using STOMP with Spring websockets, will there still be major issues with ditching REST?
With RESTful HTTP you have a stateless request/response system where the client sends request and server returns the response.
With webSockets you have a stateful (or potentially stateful) message passing system where messages can be sent either way and sending a message has a lower overhead than with a RESTful HTTP request/response.
The two are fairly different structures with different strengths.
The primary advantages of a connected webSocket are:
Two way communication. So, the server can notify the client of anything at any time. Instead of polling a server on some regular interval to see if there is something new, a client can establish a webSocket and just listen for any messages coming from the server (a minimal browser-side sketch follows this list). From the server's point of view, when an event of interest for a client occurs, the server simply sends a message to the client. The server cannot do this with plain HTTP.
Lower overhead per message. If you anticipate a lot of traffic flowing between client and server, then there's a lower overhead per message with a webSocket. This is because the TCP connection is already established and you just have to send a message on an already open socket. With an HTTP REST request, you have to first establish a TCP connection, which is several back and forths between client and server. Then you send the HTTP request, receive the response and close the TCP connection. The HTTP request will necessarily include some overhead, such as all the cookies associated with that server, even if those are not relevant to the particular request. HTTP/2 (the newest HTTP spec) allows for some additional efficiency in this regard if it is being used by both client and server, because a single TCP connection can be used for more than just a single request/response. If you charted all the requests/responses going on at the TCP level just to make an https REST request/response, you'd be surprised how much is going on compared to just sending a message over an already established webSocket.
Higher Scale in some circumstances. With lower overhead per message and no client polling to find out if something is new, this can lead to added scalability (higher number of clients a given server can serve). There are downsides to the webSocket scalability too (see below).
Stateful connections. Without resorting to cookies and session IDs, you can directly store state in your program for a given connection. While a lot of development has been done with stateless connections to solve most problems, sometimes it's just simpler with stateful connections.
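To illustrate the two-way communication point above, a browser client only opens the socket once and then reacts to whatever the server pushes; the URL and message format below are placeholders:

// Browser-side sketch: open one WebSocket and listen, instead of polling.
const socket = new WebSocket('wss://example.com/updates'); // placeholder URL

socket.addEventListener('message', (event) => {
  const notification = JSON.parse(event.data); // assumes JSON payloads
  console.log('server pushed:', notification);
});

socket.addEventListener('close', () => {
  // A real client would typically reconnect here, with some backoff.
});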
The primary advantages of a RESTful HTTP request/response are:
Universal support. It's hard to get more universally supported than HTTP. While webSockets enjoy relatively good support now, there are still some circumstances where webSocket support isn't regularly available.
Compatible with more server environments. There are server environments that don't allow long running server processes (some shared hosting situations). These environments can support HTTP request, but can't support long running webSocket connections.
Higher Scale in some circumstances. The webSocket requirement for a continuously connected TCP socket adds some new scale requirements to the server infrastructure that HTTP requests don't demand. So, this ends up being a tradeoff space. If the advantages of webSockets aren't really needed or being used in a significant way, then HTTP requests might actually scale better. It definitely depends upon the specific usage profile.
For a one-off request/response, a single HTTP request is more efficient than establishing a webSocket, using it and then closing it. This is because opening a webSocket starts with an HTTP request/response and then after both sides have agreed to upgrade to a webSocket connection, the actual webSocket message can be sent.
Stateless. If your job is not made more complicated by having a stateless infrastructure, then a stateless world can make scaling or fail-over much easier (just add or remove server processes behind a load balancer).
Automatically Cacheable. With the right server settings, http responses can be cached by browser or by proxies. There is no such built-in mechanism for requests sent via webSockets.
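As a small illustration of the caching point, an Express handler (Express is only an example stack here) can opt a response into browser/proxy caching with a single header; there is no equivalent switch for messages sent over a webSocket:

const express = require('express');
const app = express();

// Sketch: making a REST response cacheable for five minutes.
app.get('/api/config', (req, res) => {
  res.set('Cache-Control', 'public, max-age=300');
  res.json({ theme: 'dark' }); // placeholder payload
});

app.listen(3000);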
So, to address the way you asked the question:
What are the pitfalls of using websockets in place of RESTful HTTP?
At large scale (hundreds of thousands of clients), you may have to do some special server work in order to support large numbers of simultaneously connected webSockets.
Not all clients or toolsets support webSockets, or support requests made over them to the same level they support HTTP requests.
Some of the less expensive server environments don't support the long running server processes required to support webSockets.
If it's important to your application to get progress notifications back to the client, you could either use a long running http connection with continuing progress being sent down or you can use a webSocket. The webSocket is likely easier. If you really only need the webSocket for the relatively short duration of this particular activity, then you may find the best overall set of tradeoffs comes by using a webSocket only for the duration of time when you need the ability to push data to the client and then using http requests for the normal request/response activities.
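One way to picture that hybrid approach on the browser side (the endpoint names and message shape are placeholders, not something from the original question): a webSocket carries the progress pushes while ordinary HTTP requests do the rest of the work.

// Sketch: webSocket only for progress pushes, plain HTTP for the data itself.
const progress = new WebSocket('wss://example.com/progress'); // placeholder URL

progress.onmessage = async (event) => {
  const update = JSON.parse(event.data); // assumed shape: { percent, done, resultUrl }
  console.log(`job ${update.percent}% complete`);

  if (update.done) {
    progress.close();                          // the push channel is no longer needed
    const res = await fetch(update.resultUrl); // back to ordinary request/response
    console.log('result:', await res.json());
  }
};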
It really depends on your requirements. REST services can be much more transparent and easier for developers to pick up than Websockets.
Using Websockets, you lose most of the advantages that RESTful webservices offer, such as the ability to reference a resource via a URI. What you should really do is figure out what the advantages of REST and hypermedia are, and based on that decide whether those advantages are important to you.
It's of course entirely possible to create a RESTful webservice and augment it with a websocket-based API for real-time responses.
But if you are creating a service that only you are going to consume in a controlled environment, the only disadvantage might be that not every client supports websockets, while pretty much any type of environment can do a simple http call.
We have an existing play server app to which mobile clients talk via web sockets (two way communication). Now as part of load testing we need to simulate hundreds of client requests to the server.
I was thinking of writing a separate, faceless Play client app and somehow making hundreds of requests to the server app in a loop. Given that I am new to web sockets, does this approach sound reasonable?
Also what is the best way to write a faceless web socket client that makes web socket requests to a web socket server?
If you want to properly validate the performance of your application, it is very important to:
- simulate the behavior of real users by simulating real websocket connections
- reproduce a realistic end-user journey on the application using the websocket channel
It's important to generate the proper user workflow (the actions performed by a user when receiving a websocket message). For example, in a betting application users interact with the application depending on the messages received by the browser.
To be able to generate a realistic load test, I would recommend using real load-testing software that supports WebSocket. It will allow you to generate different kinds of users, with different kinds of networks, different kinds of browsers, etc.
What framework does your application use? Depending on the framework I could recommend the proper tool for your need.
You have to make a difference between hundreds of clients and hundreds of requests from the same client.
When you have hundreds of clients, the requests can come in at the same time.
When you only have one client, requests will mostly come in sequentially (depending on using one or multiple threads).
When you only have one client, you can perfectly send requests using a loop. What you will actually measure here is the processing latency of the server.
When you want to simulate multiple clients, this is a bit more difficult. If you simulate them from one machine, the requests are pipelined through the network card, so they are not really sent in parallel. You are also limited by the bandwidth of the machine: if the server has a 1Gb connection and your test machine has a 1Gb connection, you can never overload the server's bandwidth. If your clients are supposed to have a limited bandwidth like 50Mb, then you can run 20 clients (not taking into account the serialisation that happens through the network card).
In theory, you should use as many machines as the number of clients you want to test. In reality, you would use a number of machines each running a limited number of clients.
Regarding a headless test application, you could use a headless browser testing framework like PhantomJS.
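A rough Node sketch of the "many simulated clients from one machine" idea discussed above (the URL and the client count are placeholders, and the bandwidth caveat still applies):

// Sketch: open N WebSocket connections from one process to simulate clients.
const WebSocket = require('ws');

const URL = 'ws://localhost:9000/ws'; // placeholder server address
const CLIENTS = 100;

for (let i = 0; i < CLIENTS; i++) {
  const ws = new WebSocket(URL);

  ws.on('open', () => {
    const sentAt = Date.now();
    ws.send(JSON.stringify({ client: i, msg: 'hello' }));

    // Very rough latency figure: time from first send to first response.
    ws.once('message', () => {
      console.log(`client ${i} first response after ${Date.now() - sentAt} ms`);
    });
  });

  ws.on('error', (err) => console.error(`client ${i} error:`, err.message));
}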
I have written a simple websocket client using Node.js.
If the server is up and ready to accept connections, you can fire the requests as written below:
const WebSocket = require('ws')

const url = 'ws://localhost:9000/ws' // note: the URL must be a string
const connection = new WebSocket(url)

connection.onopen = () => {
  // fire 100 messages as soon as the connection is established
  for (let i = 0; i < 100; i++) {
    connection.send('hello')
  }
}

connection.onmessage = (event) => {
  console.log(event.data)
}

connection.onerror = (error) => {
  console.log(`WebSocket error: ${error}`)
}
I've never done a notification service on a web client, and I would just like to know what the most common pattern is.
For example, does the server have to push to the client, or does the client need to fetch the server info every minute or so?
Or if there is another pattern.
There are multiple ways to implement push notifications:
HTTP Long Polling: The client initiates a request. The server checks whether it has any new notifications; either way it sends an appropriate response and closes the connection. After time X the client initiates another request (a browser-side polling sketch follows this answer). (+ very easy to implement; - notifications are not real time, since data retrieval is client-initiated and depends on X, and as X decreases the overhead on the server increases)
HTTP Streaming: This is very similar to HTTP long polling, except that the connection is not closed. The server sends a chunked response, so as soon as it receives a new notification it wants to push, it can simply write to the socket. (+ lower latency than long polling and almost real-time behaviour, and the overhead of closing and re-opening connections is reduced; - client-side memory usage keeps piling up, ugly hacks, etc.)
WebSocket: A TCP-based protocol that provides true two-way communication; the server can push data to the client at any time. (+ true real time; - some older browsers don't support it). Read more about it at WebSocket.org | About WebSocket
Now, based on the technology stack, there are various solutions available:
(A) Node.js: Socket.IO, the cross-browser WebSocket for realtime apps (does the heavy lifting for you and gracefully falls back when WebSocket is not supported)
(B) Django: As mentioned previously, you can use signals for notifications. You can also try django-websocket 0.3.0 for WebSocket support
(C) Jetty / Netty and Grizzly (Java based): all have WebSocket support
from link
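To illustrate the long-polling option from the client side: the browser keeps one request in flight at a time and re-issues it as soon as it completes. The endpoint and response shape below are placeholders:

// Browser-side long-polling sketch: one request at a time, re-issued on return.
async function poll() {
  try {
    const res = await fetch('/notifications'); // placeholder endpoint
    if (res.status === 200) {
      const notifications = await res.json();
      notifications.forEach((n) => console.log('notification:', n));
    }
    // A 204 (no content) would simply mean the server had nothing new.
  } catch (err) {
    // On network errors, wait a bit before retrying (the interval "X" above).
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
  poll(); // immediately start the next request
}

poll();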
This depends on what web framework you use. With a modern framework like meteor, it's very easy for the server to push notifications to clients, and many kinds of display updates can happen automatically, without having to construct a notification mechanism to take care of them.
Have a look at the two Meteor screencasts listed at http://meteor.com.
I'm supposed to implement a web application where the user logs in and thereby registers for some sort of events (in this case, alarms). When an alarm happens, the server needs to push the alarm to all of the clients.
At the moment I'm using
GWT on the Client side
Jetty on the Server side
Is implementing the server push by using Jetty Continuations a good idea? My requirements are:
the number of clients will be quite small (<20) but could increase in the future
alarms must not get lost (i.e. if the client will be down, it must not miss any alarms)
if a client goes down, other clients need to be informed about it (or at least the admin should receive some sort of notification, e.g. by Mail).
The main reason for using Comet (e.g. Jetty Continuations) is that it allows you to reduce the polling frequency. In other words: you can achieve the same thing without Comet by using frequent polling from the client side. Which alternative to choose depends on the characteristics of your application; depending on those, each alternative can be more or less efficient than the other!
In your case, since you need notifications when a client goes down, it makes sense to use frequent polling. Comet (long polling) is not suited very well to this task: because of its principle, it can take a long time until a client sends a new request, and receiving a new request is the only way a server can know that a client is still alive (remember that a web server, Comet or not, can never send a request to the client).
Your requirement that alarms must not get lost implies a more complicated solution than plain long polling or frequent polling.
Your client should send an acknowledgement message to the server, because the user could close the application just after the alarm message arrives and thereby lose that alarm.
Also, the user should click an alarm message to acknowledge it to the server. You can put a time limit on the acknowledgement; if the client does not send an ack message, you can assume the alarm has been lost.
Long polling with an acknowledgement algorithm would be my choice to solve your problem.
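The question's stack is GWT/Jetty, but the acknowledgement bookkeeping itself is language-neutral; a hedched JavaScript-style sketch of the server side might look like this (all names, the timeout and the lost-alarm handling are illustrative):

// Sketch: the server keeps each alarm until the client explicitly acknowledges it.
const pendingAlarms = new Map(); // alarmId -> { alarm, client, timer }
const ACK_TIMEOUT_MS = 30000;

function deliverAlarm(client, alarm) {
  client.send(JSON.stringify({ type: 'alarm', alarm }));

  // If no ack arrives in time, treat the alarm as lost and react.
  const timer = setTimeout(() => {
    pendingAlarms.delete(alarm.id);
    handleLostAlarm(client, alarm); // e.g. redeliver, or notify the admin by mail
  }, ACK_TIMEOUT_MS);

  pendingAlarms.set(alarm.id, { alarm, client, timer });
}

function onAckReceived(alarmId) {
  const entry = pendingAlarms.get(alarmId);
  if (entry) {
    clearTimeout(entry.timer); // alarm confirmed, stop the watchdog
    pendingAlarms.delete(alarmId);
  }
}

function handleLostAlarm(client, alarm) {
  console.warn(`alarm ${alarm.id} was not acknowledged`); // placeholder action
}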
I'm new to node.js and I want to ask a simple question about how it works.
I have used FMs in the past for client-to-client communication and real-time applications, for example for creating a collaborative application where you need to see what other users are doing. I want to explore that using Node.js.
I have couple of questions:
1) How does Node.js handle server-to-client communication? Is there any way to push information to the client, or does the client need to constantly make requests to the server to see if anything has changed?
2) Is there such a thing as permanent connections between the server and the clients?
3) How can client-to-client communication be handled (through the server, of course)?
Thanks in advance.
3) How can client-to-client communication be handled (through the server, of course)?
A simple solution is to open a websocket between the server and each client:
[Client A] <==websocket==> [Server] <==websocket==> [Client B]
If you go with Socket.IO for example, it is very easy to do client-to-client communication this way.
When the server receives a message from one client, you just broadcast it to all clients or send it to one specific client depending on your use case.
Some example code using Socket.IO:
var io = require('socket.io');             // legacy 0.x API
var socket = io.listen(server);            // `server` is an existing http server

socket.on('connection', function (client) {
  client.on('message', function (msg) {
    socket.broadcast(msg);                 // broadcast message to all clients
    // OR
    socket.clients[session_id].send(msg);  // send message to one client by session id
  });

  client.on('disconnect', function () {
    console.log('Client Disconnected.');
  });
});
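The snippet above uses the legacy 0.x Socket.IO API. With current Socket.IO releases the same idea looks roughly like this (a sketch, assuming an existing Node http server):

const http = require('http');
const { Server } = require('socket.io');

const server = http.createServer();
const io = new Server(server);

io.on('connection', (socket) => {
  socket.on('message', (msg) => {
    io.emit('message', msg);                    // broadcast to every connected client
    // OR
    // io.to(someSocketId).emit('message', msg); // send to one specific client
  });

  socket.on('disconnect', () => {
    console.log('Client disconnected.');
  });
});

server.listen(3000);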
Quite a lot of Node.js questions from you recently ;)
As Toby already said, Node can do HTTP, TCP/UDP and Unix Sockets. When you establish a permanent connection, you can of course push data to the clients.
Since you are talking about Browser based clients, there a numerous ways to achieve this. You could for example use WebSockets with a Flash fallback. In case you are not interested in the low level details and want a complete package, take a look at Socket.IO.
WebSockets can't do direct client-to-client communication, and as far as I know Flash can't do it either. So unless you want to enter Java/Silverlight land, you'll need to route the requests through your server.