I want to notify the browser side via JavaScript about an approaching session timeout.
My current implementation polls a URL every so often to find out whether X seconds remain until the session times out.
For purely academic scaling reasons, what is an alternative to polling for session timeout in a TorqueBox 2 environment?
For example, if I use a WebSocket server, how do I add session timeout information for the user, and can the client side react immediately after the information is pushed?
Is there a simple gem or other alternative that does the bulk of the work for me?
WebSockets would surely be better than polling.
There is a good example of WebSocket usage in TorqueBox here: https://github.com/torquebox/stomp-chat-demo
In this example, session information is set and read both in the Sinatra application and in the stomplets (analogous to HTTP controllers, but for WebSockets). The TorqueBox documentation shows how to use the session in both your application controller and your stomplet: http://torquebox.org/documentation/current/stomp.html#d0e3602
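As a sketch of the client side, and assuming a hypothetical message format in which the server pushes JSON like {"type": "session", "secondsRemaining": 120} to the subscription, the browser can react the instant the frame arrives instead of polling:

```javascript
// Hypothetical message format: the server pushes
//   {"type": "session", "secondsRemaining": 120}
// whenever the remaining session lifetime changes.
function handleSessionMessage(rawBody, warnThresholdSeconds, onWarn) {
  const msg = JSON.parse(rawBody);
  if (msg.type !== 'session') return false;   // not a session-timeout frame
  if (msg.secondsRemaining <= warnThresholdSeconds) {
    onWarn(msg.secondsRemaining);             // e.g. show a "session expiring" dialog
    return true;
  }
  return false;
}

// Wiring it into a (hypothetical) STOMP subscription callback:
// client.subscribe('/session-events', (frame) =>
//   handleSessionMessage(frame.body, 60, showTimeoutWarning));
```

The subscription destination and frame shape above are assumptions for illustration; the stomp-chat-demo linked above shows the actual subscribe/send API.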
My Flutter mobile app communicates with my back-end server. The docs say it's better to use the Client class (IOClient) than the plain get, put, etc. methods, to maintain persistent connections across multiple requests to the same server.
Docs also say that:
It's important to close each client when it's done being used; failing to do so can cause the Dart process to hang.
I don't understand when I need to close the client, because almost all app screens require an HTTP connection to the same server. What's the best practice here?
Update:
Is it OK to close the Client only before the app is terminated, or should I close it every time the app is hidden (goes into the paused state)?
I personally think that closing the client after each user action is the best practice.
What I call a "user action" can consist of multiple API requests.
So I think the best approach is something like this:
// Needs: import 'dart:convert'; and import 'package:http/http.dart' as http;
var client = http.Client();
try {
  var response = await client.post(
      Uri.https('my-api-site.com', 'users/add'),
      body: {'firstname': 'Alain', 'Lastname': 'Deseine'});
  var decoded = jsonDecode(utf8.decode(response.bodyBytes)) as Map;
  ...
  // Add here every API request that you need to complete the user's action
} finally {
  // Then finally destroy the client.
  client.close();
}
Don't close the HTTP Client
For some of you it may sound odd, but the solution is as simple as not doing that.
Why
In most cases, the HTTP Client should be available for the whole app run time. Also, app resources are disposed automatically when the app is closed by the user. For that reason, in most cases, we don't need to handle the disposal of the HTTP Client.
When to dispose an HTTP Client?
Only if you want to run a limited, one-time, predictable series of HTTP requests. In that case, you can dispose of the Client in many different ways (depending on your state management, or on the lifecycle event you want the disposal triggered from).
The dispose() function is common to packages that handle caches and local resources. The documentation mentions that option, but it does not suggest you use it in every scenario; it should be reserved for very specific ones.
So for most of you, just don't dispose of the HTTP Client.
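The "one client for the whole app run time" idea can be sketched as a lazily created singleton. This JavaScript sketch (the getClient helper and createClient factory are hypothetical names, not part of any package) mirrors keeping a single http.Client alive in Dart:

```javascript
// Lazily create one shared client and hand the same instance to every caller.
// `createClient` is a hypothetical factory, e.g. () => new SomeHttpClient().
let sharedClient = null;

function getClient(createClient) {
  if (sharedClient === null) {
    sharedClient = createClient(); // created exactly once per app run
  }
  return sharedClient;
}
```

Every screen then calls getClient() instead of constructing its own client, and nothing ever closes it; the OS reclaims the resources when the app exits.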
Keep connections atomic per server interaction.
almost all app screens require HTTP connection to the same server
It's one thing for all screens to make HTTP calls; it's quite another to have constant, high-frequency interactions with the server.
Let's say we have a multiplayer app that needs to communicate with the server every second. In that case, leaving the client open would be critical, and you would run into the architectural consequence that the Dart process can hang. That could mean Dart is not the best fit for an app with very high server traffic.
To my understanding, your app is not that case. You don't need to keep the connection open constantly, so you can simply open and close it each time you need it, without paying a high performance price.
Opening a connection each time you consume your API should be seamless to the user.
Having said this, here are some other insights on this topic:
A large number of clients connected to the server, even when inactive, may consume memory or other resources (for example, if there is one thread per connection). On the other hand, keeping the connection open will allow the client to detect a problem connecting to the server much faster (if that even matters). Extracted from this other thread
Hopefully this will help you, given your use case, take a better decision.
In terms of network traffic, it's better to use the same client throughout the app's lifecycle. Establishing a new connection for each API call is very expensive. However, as per the documentation,
It's important to close each client when it's done being used; failing to do so can cause the Dart process to hang.
Also, even if client.close() isn't called, the server will not keep the connection open forever: it will close the connection if it stays idle longer than the HTTP keep-alive timeout. In that case, if the client sends a new request over a connection the server has already closed, it will get a 408 Request Timeout.
So, if you decide to use the same client throughout the app's lifecycle, keep in mind the two possible issues you may run into.
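One way to live with that second issue is to retry a request once when the pooled connection turns out to have been closed by the server. A minimal sketch, where sendRequest is a placeholder for your actual request function:

```javascript
// Retry once: the first attempt may fail (or come back 408) because the
// server silently closed an idle keep-alive connection; a fresh attempt
// goes out over a new connection.
function sendWithRetry(sendRequest) {
  try {
    return sendRequest();
  } catch (err) {
    return sendRequest(); // second attempt uses a new connection
  }
}
```

A real implementation would check that the error is actually a stale-connection error before retrying, so that application-level failures are not silently repeated.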
I am working on an automation project using a Raspberry Pi and Windows IoT. Is it possible to broadcast to a web page, similar to Server-Sent Events? I am monitoring certain events, and instead of calling the server every few seconds for updates, I would like the server to send the alert to the web page directly. Any help would be greatly appreciated.
I think you can use WebSockets. WebSockets are an advanced technology that makes it possible to open an interactive communication session between the user's browser and a server. You can refer to this sample. Or you can use IoTWeb to embed a simple HTTP and WebSocket server into your application.
Update:
WebSockets are a great addition to the HTTP protocol suite, but there are numerous situations where they cannot be used:
Some companies have firewalls that will prevent WebSockets from working.
If you are deploying software in a shared hosting environment, you may not be permitted to use WebSockets.
If you are behind a reverse proxy that isn't configured for, or whose software doesn't support, pass-through of the WebSocket protocol, WebSockets won't work.
Another option is long polling: the browser makes an XHR request and the server simply doesn't respond until it has something to send. But this way, if you want two-way communication with the server, you are effectively using two sockets: one tied up hanging/waiting for the long-poll response, and another used by the client to send new information to the server. Long polling is also problematic because the client has to handle XHR errors, some of which are tricky or even impossible to handle. You can find more differences and disadvantages on the internet.
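The long-polling loop itself is simple to express. In this sketch the request function is injected (a placeholder for your XHR/fetch wrapper; it should invoke its callback only when the server finally responds), which also makes the error handling mentioned above explicit:

```javascript
// Long-polling skeleton: issue a request, wait for the (possibly long-
// delayed) response, hand it to `handle`, then immediately poll again.
// `rounds` bounds the loop for demonstration; a real client loops forever.
function longPoll(request, handle, rounds) {
  if (rounds <= 0) return;
  request((err, data) => {
    if (err) {
      // XHR errors must be dealt with here; some are tricky to handle
    } else {
      handle(data);
    }
    longPoll(request, handle, rounds - 1); // re-poll immediately
  });
}
```

Because the server holds each request open until an event occurs, the client gets near-real-time delivery while still using plain HTTP.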
I have a question about why web applications keep setting cookies, given that persistent HTTP connections use sockets, e.g. WebSockets.
HTTP/1.1 and HTTP/2 use persistent HTTP connections, with sockets on the client and server. These sockets stay active for the time needed to load a complete web page (HTML, CSS, images, etc.), and then the sockets are closed by the server. That is logical, because the server does not know what the client is doing. So, in this scenario, the use of cookies is justified.
But with WebSockets I think the scenario is different, because only one socket is used, which means that after the connection is established, the server and the client use that socket for sending data.
So the question is: why are cookies necessary if the server knows who the client is?
This question is impossibly broad, since many different web applications work in many different ways.
In general, cookies are used to store data that needs to persist beyond the momentary connection between the client and the server.
More specifically, the connection between the client and the server can be very transient. The server receives a request, sends a page, and moves on to the next request. It doesn't maintain a constant connection to every browser that contacts it.
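That persistence is exactly what a session cookie provides: each otherwise-anonymous request carries an identifier back to the server. As an illustration (a hand-rolled parser, not any library's API), reading a session id out of a raw Cookie header could look like this:

```javascript
// Extract one cookie value from a raw "Cookie" request header.
// Example header: "theme=dark; JSESSIONID=abc123"
function getCookieValue(cookieHeader, name) {
  for (const pair of cookieHeader.split(';')) {
    const [key, ...rest] = pair.trim().split('=');
    if (key === name) return rest.join('='); // value may itself contain '='
  }
  return null; // cookie not present
}
```

The server looks the returned id up in its session store, which is how two requests arriving over completely different TCP connections are recognized as the same user.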
I understand that a REST web service is stateless. And we are expecting pretty high traffic. Is it a good idea to set the session timeout (we are using Tomcat) really low, like one minute? Pros and cons?
If you are expecting high traffic, session management will add overhead to your application, and with a one-minute timeout your server will spend time invalidating lots of sessions.
If your application is indeed stateless, then don't use sessions. You can't fully disable them either, but if you never call getSession() you should be fine.
If you (absolutely) want to be sure no code is creating sessions, you could have a look at Tomcat's session manager component and maybe create your own implementation that tells you what's happening when you subject your server to some stress tests.
I have a Perl web application (CGI::Application with ModPerl::Registry) that connects to an authenticated custom server over a socket and exchanges data (commands/responses) with it. Currently the web application connects to the server, authenticates, and disconnects on every page request, even for the same user.
Is there some way I can use the same socket over multiple page requests which share a common session id? Creating a separate daemon that proxies connections and makes them persistent is an option I am exploring, but would like to know if there are any simpler solutions.
I have no control over the design of the custom server unfortunately.
Looks like the same question was asked on PerlMonks. The responses there point in the right direction, but the issue seems to be that you want one cached connection per session, not one cached connection per session per httpd thread/process. You might have to resort to a separate proxy process to get the behaviour you want.