Programmatically Call REST URL - jboss

Summary
Is there a way to programmatically call REST URLs set up in JBoss via RESTEasy, so that the programmatic call actually drills down through the REST processor to find and execute the correct endpoint?
Background
We have an application with ~20 different REST endpoints, and we have set it up to receive data from other federated peers. To cut down on cross-network HTTP requests, a peer site sends a batch of requests to the server, and the receiving server needs to act upon each URL it receives. Example data flow:
Server B --> [batch of requests sent via HTTP POST] --> Server A breaks the list down into individual URLs --> [begin processing]
The individual URLs are REST URLs that the receiving server is familiar with.
Possible Solutions
1. Have the receiving server read through the URLs it receives and call the management beans directly.
The downside is that we have to write additional processing code to decode the URL strings we receive.
The upside is that there is no ambiguity about what happens.
2. Have the receiving server execute each URL against itself.
The receiving server could rewrite each URL as http://127.0.0.1:8080/rest/... and make an HTTP request to itself.
The downside is that the receiving server could end up making a lot of HTTP requests against itself (it is already somewhat busy processing "real" requests from the outside world).
3. Preferred: have the receiving server access the main RESTEasy bean somehow and feed it the request.
A combination of 1 and 2, without the manual URL decoding of 1 or the HTTP requests of 2.
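Option 3 looks feasible with RESTEasy's mock dispatcher (the classes in `org.jboss.resteasy.mock`), which runs the full routing and marshalling pipeline in-process, with no socket involved. A minimal sketch, where the `ItemResource` class and the `/items/{id}` path are made up for illustration:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

import org.jboss.resteasy.core.Dispatcher;
import org.jboss.resteasy.mock.MockDispatcherFactory;
import org.jboss.resteasy.mock.MockHttpRequest;
import org.jboss.resteasy.mock.MockHttpResponse;

// Hypothetical resource; your real ~20 endpoints register the same way.
@Path("/items")
class ItemResource {
    @GET
    @Path("{id}")
    public String get(@PathParam("id") String id) {
        return "item " + id;
    }
}

public class LocalDispatch {
    public static void main(String[] args) throws Exception {
        // Build an in-memory dispatcher and register the resource classes
        // (the servlet deployment performs the same registration at startup).
        Dispatcher dispatcher = MockDispatcherFactory.createDispatcher();
        dispatcher.getRegistry().addSingletonResource(new ItemResource());

        // Each URL pulled out of the bulk payload becomes an in-memory request;
        // RESTEasy routes it to the matching endpoint without any HTTP socket.
        MockHttpRequest request = MockHttpRequest.get("/items/42");
        MockHttpResponse response = new MockHttpResponse();
        dispatcher.invoke(request, response);

        System.out.println(response.getStatus() + " " + response.getContentAsString());
    }
}
```

Each URL extracted from the bulk payload can be turned into a `MockHttpRequest`, so the normal endpoint code runs without a loopback HTTP round trip.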
Technology Stack
JBoss 6.0.0 AS (2010 release) / Java 6
RESTEasy

Related

Handle multiple Guzzle requests in a proxy for a REST API (local server crashes)

I have the following case: I have a REST API that can only be accessed with credentials. I need the frontend to make requests directly to the API to get the data. Because I don't want to embed the credentials somewhere in the frontend, I set up a proxy server, which forwards my requests with Guzzle (http://docs.guzzlephp.org/en/stable/index.html) but adds the necessary authentication.
Now, that worked neatly for some time, but then I added a new view where I need to fetch from one more endpoint (so far it was 3 requests, locally under MAMP).
Whenever I add a fourth API request (all of them are executed right on page load), my local server crashes.
I assume it is linked to this topic:
Guzzle async requests not really async?, specifically because I make a new request for every fetch.
First: do you think that could be the case? Could my local server really crash because I have only 3 (probably simultaneous) requests?
Second: how could I approach this problem?
I don't really see a way to group the requests, because they just come in to the proxy URL, and every call of the proxy URL creates a new Guzzle client with its own request...
(I mean, how many things can a simple PHP server execute at the same time? And why would it not just queue the requests and execute them in order?)
Thanks for any help on this issue.

What is the difference between Async Response and Server-Sent Events in Jersey?

What is the difference between Async Response and Server-Sent Events in Jersey and when to use them?
They serve different purposes: one lets you wait for a slow resource (long polling); the other lets the server send a stream of data over the same TCP connection.
Here's more detail:
AsyncResponse was introduced in JAX-RS 2 to support long-polling requests:
The client opens a connection.
The client sends the request payload.
The server receives the payload, suspends the connection, and looks up the resource.
Then:
If the timeout is reached, the server can end the connection.
When the resource is ready, the server resumes the connection and sends the resource payload.
The connection is closed.
As this is part of the JAX-RS specification, you can use it with the default Jersey dependencies. Note that on a long-lived connection where no data is transmitted, network equipment such as firewalls may close the TCP connection.
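The suspend/resume flow above can be sketched as a JAX-RS 2 resource; the `/updates` path, the 30-second timeout, and the `fetchSlowResource()` helper are all illustrative:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.Response;

@Path("/updates")
public class UpdatesResource {

    @GET
    public void poll(@Suspended final AsyncResponse async) {
        // Suspend the connection: the container's I/O thread is released at once.
        async.setTimeout(30, TimeUnit.SECONDS);
        // If the timeout is reached first, end the connection with 503.
        async.setTimeoutHandler(resp -> resp.resume(
                Response.status(Response.Status.SERVICE_UNAVAILABLE).build()));

        // Look for the slow resource off the request thread; resume when ready.
        Executors.newSingleThreadExecutor().submit(() -> {
            String payload = fetchSlowResource();
            async.resume(payload); // connection resumes, payload is sent, connection closes
        });
    }

    // Stand-in for the slow lookup; sleeps to simulate the wait.
    String fetchSlowResource() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "{\"status\":\"ready\"}";
    }
}
```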
Server-Sent Events (SSE) is a specification that allows the server to send multiple messages over the same TCP connection:
The client uses the JavaScript EventSource API to request a resource.
The server can then, at some point in time, send a payload (a message).
Then another.
And so on.
The connection can be closed programmatically at any time, by either the client or the server.
SSE is not part of JAX-RS, so you need the Jersey SSE module on your classpath (additionally, in earlier versions of Jersey 2 you had to programmatically enable the SseFeature).
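A minimal Jersey 2 SSE resource might look like this; the `/events` path and the three "tick" messages are illustrative, and the classes come from the `jersey-media-sse` module:

```java
import java.io.IOException;
import java.util.concurrent.Executors;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

import org.glassfish.jersey.media.sse.EventOutput;
import org.glassfish.jersey.media.sse.OutboundEvent;
import org.glassfish.jersey.media.sse.SseFeature;

@Path("/events")
public class EventsResource {

    @GET
    @Produces(SseFeature.SERVER_SENT_EVENTS) // "text/event-stream"
    public EventOutput stream() {
        final EventOutput output = new EventOutput();
        Executors.newSingleThreadExecutor().submit(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    // Each write is one message pushed over the same TCP connection.
                    output.write(new OutboundEvent.Builder()
                            .name("tick")
                            .data(String.class, "message " + i)
                            .build());
                    Thread.sleep(1000);
                }
            } catch (IOException | InterruptedException e) {
                // Client disconnected, or we were interrupted; fall through to close.
            } finally {
                try {
                    output.close(); // either side may close at any time
                } catch (IOException ignored) {
                }
            }
        });
        return output; // the connection stays open while messages are written
    }
}
```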
Other things to consider:
SSE does not allow custom headers, so no Authorization header. You can use the URL query string instead, but if you're not on HTTPS this is a security issue.
SSE does not allow you to POST data, so request data also has to go in the URL query string.
The connection can close due to network issues (equipment failing, firewalls, a phone out of coverage, etc.).
In my opinion WebSockets are more flexible than SSE, since they also allow the client to send multiple messages. But Jersey does not implement the Java EE specification that supports WebSocket (JSR 356).
But you should really read the documentation of Jersey's SSE implementation; there's additional information there, such as what polling and WebSockets are.
AsyncResponse is like AJAX polling with a long wait time. The client initiates a single AJAX request to check for updates; it does not return until data is received or a timeout occurs, which triggers another request. It creates an unnecessary checking loop (on the server side), and the load is proportional to the number of connected clients. More clients, more loops = more resources needed.
Server-Sent Events is somewhat similar to long polling on the server side; both use a loop to check for updates and trigger a response. The difference is that long polling continuously sends requests (after each timeout or response), whereas SSE only needs to initiate once. This makes SSE more suitable for mobile applications when you consider battery usage.
WebSocket uses a loop as well, but not only to check for updates: it also listens for new connections and upgrades them to WS/WSS after the handshake. Unlike long polling and SSE, where the load grows with the number of clients, a WebSocket server runs its loop constantly, like a daemon. On top of that constant loop, the load grows as more clients connect to the socket.
For example, if you are designing a web service for administrative purposes, a server based on long polling or SSE can rest after office hours when no one is around, whereas a WebSocket server keeps running, waiting for connections. And did I mention that, without proper authentication, anyone can create a client and connect to your WebSocket? Most of the time, authentication and refusing connections are not done during the handshake, but after the connection is made.
And should I continue on how to implement WebSocket across multiple tabs?

Forbidden message while executing a REST request through JMeter

We have come across a similar problem and need your help to resolve it.
Could you please either let us know your contact number so that we can reach out to you, or provide your script if possible so that we can refer to it?
Here is the problem we are stuck with:
I am trying to test a REST service through an HTTP sampler in JMeter. I am not sure how to capture the token that one sampler generates and use that token for authorization in the Header Manager of another HTTP request.
LoadRunner is not displaying the web address when I try to enter it in the TruClient browser. The problem is that this web address automatically redirects to another web address, which is the authentication server.
Can you please suggest another solution for the issue below?
Here is the exact scenario we are trying to achieve:
We want to load test the portal, but due to the redirect and the different authentication method being used, we are unable to do so using the TruClient protocol in LoadRunner. We also tried multiple protocols, selecting LDAP, SMTP, HTTP/HTML, etc., but had no luck.
Thank You,
Sonny
JMeter is architecturally the HTTP-protocol-layer equivalent of LoadRunner, with the exception of the number of threads per browser emulation.
In contrast to the code request, I want to visualize the problem architecturally. You mention a redirect: is this an HTTP 301/302 redirect, or one that is handled with information passed back to the client, processed there, and then redirected to another host? You mention dynamic authentication via a header token: have you examined web_add_header() and web_add_auto_header() in LoadRunner web virtual users for passing extra header messages, including ones correlated from previous requests, such as the token being passed back as you note?
What is this authentication mechanism based upon? LDAP? Kerberos? Windows Integrated Authentication? Simple authentication based on a username/password in a header? Can you be architecturally more specific about when it comes into play: from the first request, to gain access to the test environment through the firewall, or from an nth request within a business process?
You mention RESTful services. These can be transport-independent, such as being passed over SMTP using a mailbox to broker the data between client and server, or over HTTP like SOAP messages. Do you have architectural clarity on this? Could it be that you need to provide mailbox authentication over SMTP and POP3 to send and receive?

http interface for long operation

I have a running system that processes short- and long-running operations through a request-response interface based on Agatha-RRSL.
Now we want to change it a little so we can send requests via a website in JSON format, so I'm trying out several REST server implementations that support JSON.
The REST server will be one module, or "shelf", handled by Topshelf; another module will do the processing, and the last will run the NoSQL database.
For communication between the REST module and the processing module I'm thinking about a service bus, but we have two types of request: short requests that finish in 1-2 seconds, and long requests that take about a minute.
Is a service bus the right choice here? For long-running operations I'm thinking of returning a "response" containing a token that can be used to query the operation's status and results with a new request. The problem is that a large share of the requests must behave like synchronous requests in order to complete the HTTP response.
I think I will also have problems with response size (on the MSMQ message transport) when I have to return huge lists of objects.
Any hint?
NServiceBus is not really suitable for request-response messaging patterns; it's more suited to asynchronous publish-subscribe.
Edit: in order to implement a kind of request-response, you need to send messages in both directions, consisting of three logical steps:
So your client sends a message requesting the data.
The server would receive the message, process it, construct a return message with the data, and send it to the client.
The client can then process the data.
Because each of these steps takes place in isolation and asynchronously, there can be no meaningful SLA or timeout enforced between when a client sends a request and when it receives a response. But this works nicely for large processing jobs that may take several minutes to complete.
Additionally, a common value must be present in both messages to tie the request to the response. Otherwise a client could send more than one request, receive multiple responses, and not know which response belongs to which request.
So you can do this with NServiceBus, but it takes a little more thought.
Also, NServiceBus uses MSMQ as the underlying transport, not HTTP.
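The three steps plus the correlation value can be sketched transport-agnostically. Since NServiceBus itself is a .NET library, this Java sketch with in-memory queues shows only the pattern, not its API; the message bodies and queue names are made up:

```java
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CorrelationDemo {

    // A message carries a correlation id so replies can be matched to requests.
    record Message(String correlationId, String body) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> requestQueue = new LinkedBlockingQueue<>();
        BlockingQueue<Message> replyQueue = new LinkedBlockingQueue<>();

        // Step 2: the server receives, processes, and replies with the SAME id.
        Thread server = new Thread(() -> {
            try {
                Message req = requestQueue.take();
                replyQueue.put(new Message(req.correlationId(), "processed: " + req.body()));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        server.start();

        // Step 1: the client sends a request tagged with a fresh id.
        String id = UUID.randomUUID().toString();
        requestQueue.put(new Message(id, "report-42"));

        // Step 3: the client accepts only the reply carrying that id.
        Message reply = replyQueue.take();
        if (reply.correlationId().equals(id)) {
            System.out.println(reply.body()); // prints "processed: report-42"
        }
        server.join();
    }
}
```

Without the shared id, a client with two requests in flight could not tell which of two replies answers which request.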

How to do HTTP Server Push -- aka do I NEED STOMP, AMQP, etc.?

I am writing a collection of web services, one of which needs to implement server push.
The client will be native Objective-C. I want this to be as simple, fast, and lightweight as possible. The data transmitted will be JSON. Is it possible to do this without using a message broker?
There's an HTTP technique called COMET, in which the client spins up a thread that makes a potentially very long-lived request to the HTTP server. Whenever the server wants to send something to the client, it sends a response to this request. The client processes the response and immediately makes another long-lived request to the server. In this way the server can push information while other things happen in the client's main execution thread(s). The information sent by the server can be in any format you like. (In fact, for clients in a web browser doing COMET with a JavaScript library, JSON is perfect.)
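The COMET loop can be sketched end-to-end with only the JDK. The `/poll` path, the 30-second timeout, and the pushed JSON payload are illustrative, and a real client would re-poll in a loop instead of exiting after one message:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.sun.net.httpserver.HttpServer;

public class CometDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> pending = new LinkedBlockingQueue<>();

        // Server: hold each /poll request open until an event arrives (or 30 s pass).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/poll", exchange -> {
            try {
                String event = pending.poll(30, TimeUnit.SECONDS);
                byte[] body = (event == null ? "{}" : event).getBytes("UTF-8");
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                exchange.close();
            }
        });
        server.start();

        // Simulate the server deciding to push something one second later.
        ScheduledExecutorService pusher = Executors.newSingleThreadScheduledExecutor();
        pusher.schedule(() -> pending.add("{\"msg\":\"hello\"}"), 1, TimeUnit.SECONDS);

        // Client: one long-lived request; reading blocks until the server responds.
        URL url = new URL("http://127.0.0.1:" + server.getAddress().getPort() + "/poll");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            System.out.println("pushed: " + in.readLine()); // then immediately re-poll
        }

        pusher.shutdown();
        server.stop(0);
    }
}
```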
@DevDevDev: It's true that COMET is most often associated with a JavaScript-enabled browser, but I don't think it has to be. You might check out iStreamLight, which is an Objective-C client for the iPhone that connects to COMET servers. It's also discussed in this interview with the author.