How to include a cookie in the initial WebSocket client request using akka Client-Side WebSocket Support? - scala

I'm using the akka Client-Side WebSocket package to consume, from my Scala application, a WebSocket endpoint that requires a cookie header in the initial client request.
Looking through the akka client API docs and Googling around hasn't gotten me very far. Any ideas on how one might go about doing this?
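A minimal sketch of one approach, assuming Akka HTTP's Scala WebSocket client API: WebSocketRequest accepts extra headers, so a Cookie header can be passed with the initial upgrade request. The endpoint URI, cookie name/value and the pass-through handler flow below are all made up for illustration.

    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.model.Uri
    import akka.http.scaladsl.model.headers.{Cookie, HttpCookiePair}
    import akka.http.scaladsl.model.ws.{Message, WebSocketRequest}
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.Flow

    implicit val system: ActorSystem = ActorSystem()
    implicit val materializer: ActorMaterializer = ActorMaterializer()

    // Hypothetical endpoint and cookie values -- replace with your own.
    val request = WebSocketRequest(
      uri = Uri("ws://example.com/socket"),
      extraHeaders = List(Cookie(HttpCookiePair("session-id", "abc123")))
    )

    // Placeholder pass-through flow; plug in your own Source/Sink for real traffic.
    val handler = Flow[Message]

    // The extra headers (including the Cookie) go out with the initial client request.
    val (upgradeResponse, _) = Http().singleWebSocketRequest(request, handler)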

Related

URL design for a proxy API

We have multiple APIs running for an enterprise. Due to a client limitation, only one static IP is allowed to receive all inbound/outbound requests.
So, we need to expose a single API as a bridge between the client system and the APIs running behind it.
How to approach this design?
How to design the URL for this proxy API?
What edge functions does this API need to provide?
Any help would be highly appreciated. Thanks!
You do not need to use the Web Service Consumer, but you will need to build a POC.
Define a RAML with the required paths; scaffolding it should give you an APIkit project. Then connect it to the backend using an HTTP Request.
Examples:
Headers --- attributes.headers.id, etc.
Query params --- attributes.queryParams.date
If you are sending a JSON payload across from, for example, Postman, change the MIME type to application/json.
Sample HTTP properties for the request:
http.host=myHost
http.port=8872
http.base.path=/myproxy/services

What is the difference between Async Response and Server-Sent Events in Jersey?

What is the difference between Async Response and Server-Sent Events in Jersey and when to use them?
They serve different purposes: one lets you wait for a slow resource (long-polling), the other lets the server send a stream of data over the same TCP connection.
Here's more detail:
AsyncResponse was introduced in JAX-RS 2 in order to perform long-polling requests.
The client opens a connection
The client sends the request payload
The server receives the payload, suspends the connection and looks up the resource
Then, either:
If the timeout is reached, the server can end the connection
Or the resource is ready, so the server resumes the connection and sends the resource payload
The connection is closed
As this is part of the JAX-RS specification, you can use it with the default Jersey dependencies. Note that on a connection that stays open too long with no data transmitted, network equipment such as firewalls may close the TCP connection.
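A rough sketch of that flow, written here in Scala against the JAX-RS 2 API (the resource path, the 30-second timeout and the lookUpSlowResource helper are made up for illustration):

    import java.util.concurrent.{Executors, TimeUnit}
    import javax.ws.rs.container.{AsyncResponse, Suspended, TimeoutHandler}
    import javax.ws.rs.core.{MediaType, Response}
    import javax.ws.rs.{GET, Path, Produces}

    @Path("/updates")
    class UpdatesResource {
      private val executor = Executors.newSingleThreadExecutor()

      @GET
      @Produces(Array(MediaType.APPLICATION_JSON))
      def poll(@Suspended asyncResponse: AsyncResponse): Unit = {
        // Suspend the request; end it if nothing shows up in time.
        asyncResponse.setTimeout(30, TimeUnit.SECONDS)
        asyncResponse.setTimeoutHandler(new TimeoutHandler {
          override def handleTimeout(r: AsyncResponse): Unit =
            r.resume(Response.noContent().build()) // timeout reached, end the connection
        })
        // Look up the slow resource off the request thread, then resume and send it.
        executor.submit(new Runnable {
          override def run(): Unit = asyncResponse.resume(lookUpSlowResource())
        })
      }

      // Hypothetical blocking lookup of the slow resource.
      private def lookUpSlowResource(): String = """{"status":"ready"}"""
    }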
Server-Sent Events is a specification that allows the server to send several messages over the same TCP connection.
The client uses the JavaScript EventSource API to request a resource
Then the server can, at some point in time, send a payload (a message)
Then another
And so on
The connection can be closed programmatically at any time by either the client or the server.
SSE is not part of JAX-RS, so you need to have the Jersey SSE module on your classpath (additionally, in earlier versions of Jersey 2 you had to programmatically enable the SseFeature).
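And, for comparison, a minimal sketch of a Jersey 2 SSE resource (again in Scala; the path, the payloads and the use of a bare Thread are purely illustrative):

    import javax.ws.rs.{GET, Path, Produces}
    import org.glassfish.jersey.media.sse.{EventOutput, OutboundEvent, SseFeature}

    @Path("/events")
    class EventsResource {
      @GET
      @Produces(Array(SseFeature.SERVER_SENT_EVENTS))
      def stream(): EventOutput = {
        val output = new EventOutput()
        new Thread(new Runnable {
          override def run(): Unit =
            try {
              // Send a few messages over the same connection, one per second.
              for (i <- 1 to 3) {
                output.write(new OutboundEvent.Builder()
                  .name("message")
                  .data(classOf[String], s"""{"tick": $i}""")
                  .build())
                Thread.sleep(1000)
              }
            } finally output.close() // the server closes the connection programmatically
        }).start()
        output // returning the EventOutput keeps the connection open
      }
    }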
Other things to consider:
SSE does not let you pass custom headers, so no Authorization header. It's possible to use the URL query string instead, but if you're not on HTTPS this is a security issue.
SSE does not let you POST data either, so that too might have to go in the URL query string
The connection can close due to the network (equipment failing, firewalls, a phone out of coverage, etc.)
In my opinion WebSockets are more flexible than SSE, and they even allow the client to send multiple messages. But Jersey does not implement the Java EE specification that supports WebSockets (JSR 356).
But you should really read the documentation of Jersey's SSE implementation; there's additional information there, such as what polling is and what WebSockets are.
AsyncResponse is like AJAX polling with a long waiting time. The client initiates a single AJAX request to check for updates; it does not return until data is received or a timeout occurs, and the client then triggers another request. This creates an unnecessary checking loop (on the server side), and the load is proportional to the number of clients connected. More clients, more loops initiated = more resources needed.
Server-Sent Events is somewhat similar to long-polling on the server side; both use a loop to check for updates and trigger a response. The difference is that long-polling keeps sending requests (either after a timeout or after receiving data), whereas SSE only needs to initiate once. This makes SSE more suitable for mobile applications when you consider battery usage.
WebSockets use a loop as well, but not only to check for updates; it also listens for new connections and upgrades them to WS/WSS after the handshake. Unlike long-polling and SSE, where the load increases with the number of clients, a WebSocket server runs its loop constantly, like a daemon. On top of that constant loop, the load grows as more clients connect to the socket.
For example, if you are designing a web service for administrative purposes, a server running on long-polling or SSE gets to rest after office hours when no one is around, whereas a WebSocket server will keep running, waiting for connections. And did I mention that without proper authentication, anyone can create a client and connect to your WebSocket? Most of the time, authenticating and refusing connections is not done during the handshake, but after the connection has been made.
And should I continue on how to implement WebSockets across multiple tabs?

How to use a REST client with connection pooling and basic auth?

I currently have:
a REST API (Jersey) that runs as a separate application
a GUI application (JSF) that is a client of the REST API
I'm wondering what the best way is to talk to the REST API from the GUI application. The REST API is stateless, but the GUI application is stateful and has to pass authentication info (basic auth) with every rest request. Because we have to support hundreds of simultaneous users, we want to configure our Jersey client for connection pooling.
We can handle connection pooling by configuring the Jersey client with Apache's HTTP client. Authentication can be handled by using the HTTPBasicAuthFilter, which will automatically send the same credentials with every request.
However, I'm not sure if it is best to configure 1 client for the entire GUI application, or to create a new client per session.
With 1 client for the application, connection pooling makes sense, but then I have to find a way to set the correct authentication info on every request. The HTTPBasicAuthFilter assumes that the credentials never change, which is not the case in our app.
If I create a client with a new HTTPBasicAuthFilter per session, then authentication is trivial, but I don't get any benefit from connection pooling, since every client will have its own pool.
I doubt I'm the first one to run into this, so I am curious how other people have solved this.
Kind regards,
Glenn
You can attach client filters at the WebResource level. So you can have a single shared client and per-session WebResource objects that you attach the HTTPBasicAuthFilter to.
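For example (a sketch in Scala against the Jersey 1.x client API; the base URI is hypothetical, and the shared client would be the one you already configured with the Apache HTTP client for pooling):

    import com.sun.jersey.api.client.{Client, WebResource}
    import com.sun.jersey.api.client.filter.HTTPBasicAuthFilter

    // One shared, pooled client for the whole application.
    val sharedClient: Client = Client.create()

    // Per session: a WebResource carrying that session's credentials.
    def resourceFor(user: String, password: String): WebResource = {
      val resource = sharedClient.resource("https://api.example.com/rest") // hypothetical base URI
      resource.addFilter(new HTTPBasicAuthFilter(user, password))
      resource
    }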

Is an API RESTful if it allows permanent requests (server push)

I am writing a REST API providing CRUD operations on resources.
I'd like the users to be able to register for changes to some resources and get the updates via server push. For the server push I will provide support for reverse Ajax, hidden iframes and WebSockets. In order to be as RESTful as possible I created a Streaming resource which handles the registrations and the connection to the client:
Streaming resource:
URI uri : A GET against this URI refreshes the client representation of the resources accessible to this user.
bool WebSocket : Indicates whether WebSocket is available on this server
bool ReverseXHR : Indicates whether reverse XHR is available on this server
bool HiddenIframe : Indicates whether hidden iframe is available on this server
Registration[] Registrations : The set of registration tasks.
OpenChannel : Opens a streaming channel from the web server to the client. GET parameter type=(websocket|xhr|hiddeniframe)
CloseChannel : Closes the streaming channel from the web server to the client. GET parameter type=(websocket|xhr|hiddeniframe)
A call to OpenChannel?type=websocket would open the WebSocket and start streaming the data for the registered values.
I've read many articles but I am still a bit confused. Can I still call my API RESTful after doing this? And if not (or yes), why?
Thank you for your help!
Firstly, always implement what makes sense to solve the problem you face. Conforming to a given architectural style provides specific benefits but this should not exclude pragmatic solutions to a given problem.
But having said that, it seems like you're using streaming of resource data as a way to "tunnel" information back and forth between the client and the server. I'm pretty new to this, but it seems to me that tunneling data goes against the uniform interface constraint of the REST architectural style. Tunneling over HTTP is one of the criticisms leveled against SOAP-based services.

How to do HTTP Server Push -- aka do I NEED STOMP, AMQP, etc.?

I am writing a collection of web services, one of which needs to implement server push.
The client will be native Objective-C. I want this to be as simple, fast, and lightweight as possible. The data transmitted will be JSON. Is it possible to do this without using a message broker?
There's an HTTP technique called COMET in which the client spins up a thread that makes a potentially very long-lived request to the HTTP server. Whenever the server wants to send something to the client, it sends a response to this request. The client processes this response and immediately makes another long-lived request to the server. In this way the server can send information while other things happen in the client's main execution thread(s). The information sent by the server can be in any format you like. (In fact, for clients in a web browser doing COMET with a JavaScript library, JSON is perfect.)
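As a rough illustration of that loop (a sketch in Scala rather than Objective-C; the endpoint URL is made up, and real code would add timeouts, error handling and backoff):

    import scala.io.Source

    // Each request blocks until the server has something to push; the response is
    // handled and a new long-lived request is opened immediately.
    def cometLoop(url: String)(handle: String => Unit): Unit =
      while (true) {
        val src = Source.fromURL(url) // blocks until the server responds
        try handle(src.mkString)      // e.g. parse the JSON payload and dispatch it
        finally src.close()
      }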
#DevDevDev: It's true that COMET is most often associated with a Javascript-enabled browser, but I don't think it has to be. You might check out iStreamLight, which is an Objective-C client for the iPhone that connects to COMET servers. It's also discussed in this interview with the author.