Long Polling with Java and JBoss

I'm looking for an example of how to implement a long-polling mechanism in Java. I would love to use a stateless EJB.
I know that something like this would work:
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService(serviceName = "mywebservice")
@Stateless
public class MyWebService {

    private volatile boolean someCondition; // set elsewhere once the value is available

    @WebMethod
    public String longPoll() {
        short ct = 0;
        // hold the request open for up to 60 seconds, checking once per second
        while (!someCondition && ct < 60) {
            try {
                Thread.sleep(1000); // 1 sec
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            ct++;
        }
        if (someCondition) {
            return "got value";
        } else {
            return "";
        }
    }
}
Unfortunately I know that this doesn't scale. Can I return from the web method without finishing the response, and finish it somewhere else?

JAX-WS provides support for invoking web services asynchronously from the client, with both a callback and a polling model. Have a look at:
Asynchronous Web Service Invocation with JAX-WS 2.0
Using the JAX-WS asynchronous programming model
In particular, the Polling Example
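For illustration, here is a minimal sketch of the polling model on the client side. It assumes the artifacts (MyWebService, MyWebServicePort, LongPollResponse) and the longPollAsync method were generated by wsimport with async mapping enabled, so the names are placeholders for whatever your WSDL produces:

import javax.xml.ws.Response;

public class PollingClient {

    public static void main(String[] args) throws Exception {
        MyWebService service = new MyWebService();              // generated service class (name assumed)
        MyWebServicePort port = service.getMyWebServicePort();  // generated port (name assumed)

        // The async variant returns immediately with a Response, which is a java.util.concurrent.Future.
        Response<LongPollResponse> response = port.longPollAsync();

        while (!response.isDone()) {
            // The client thread is free to do other work while the server holds the request open.
            Thread.sleep(500);
        }

        System.out.println(response.get().getReturn());
    }
}

The callback model works the same way, except that you pass an AsyncHandler to the async method and are notified when the response arrives instead of polling isDone().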

The thing you're trying to implement is called server push.
Each web server/app server has a pool of threads for processing web requests, say 10. If all of those threads go to 'sleep', no other web request will be serviced until one of those sleeps exits. One workaround is to increase the number of threads, but then you consume more memory and more operating system resources (each thread has a cost). So yes, your implementation of server push isn't scalable.
Solutions:
- Your web application can send an HTTP request every (say) 5 seconds to check whether 'someCondition' has changed, and then fetch the data.
- AFAIK, Tomcat (and therefore JBoss too) already has a 'connector' for supporting such requests, so Thread.sleep() or semaphores won't be needed.
- Use a recent web server implementing Servlet API 3, which also supports such long-running HTTP requests (see the sketch after this list).
Read more: online tutorials for implementing Comet (server push).
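As an illustration of the Servlet API 3 option, here is a minimal sketch of asynchronous processing. SomeConditionWatcher is a hypothetical component that invokes the given callback once the condition becomes true; the rest is standard Servlet 3.0 API:

import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();   // releases the container thread immediately
        ctx.setTimeout(60_000);                // give up after 60 seconds

        // SomeConditionWatcher is a made-up helper that calls back once the value is ready.
        SomeConditionWatcher.register(value -> {
            try {
                ctx.getResponse().getWriter().write(value);
            } catch (IOException e) {
                // the client may already be gone; nothing useful to do here
            }
            ctx.complete();                    // the response is finished outside the original thread
        });
    }
}

The doGet method returns right away, and the response is completed later from whichever thread observes the condition, which is exactly the "return from the web method without finishing the response" behaviour asked about above.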

Related

What is best way to run periodic consumer for redis or kafka in Ktor backend?

I'm trying to use Ktor for my new backend, and I have a question about how I can launch a periodic consumer for Kafka or Redis inside a Ktor backend application.
Put simply, I thought I would make a single backend application configurable for both routing and consuming, so that some instances support both and some instances support only routing, to ensure availability.
But I'm not sure how I can trigger the consumer from Ktor. This is the code I tested with:
fun main() {
    embeddedServer(Netty, port = 8080, host = "127.0.0.1") {
        configureDependencyInjection()
        configureRouting()
        configureSecurity()
        configureSerialization()
        configureExceptionHandling()
        configureMonitoring()
        consumerStarting() // this starts the consumer
    }.start(wait = true)
}

@OptIn(ExperimentalTime::class)
fun Application.consumerStarting() {
    CoroutineScope(Dispatchers.IO).launch {
        // TODO: turn this into the rule activator initializer
        println("TEST")
        delay(3.seconds)
        consumerStarting() // reschedules itself, so this runs every 3 seconds
    }
}
When I tested this, I could see that "TEST" is printed every 3 seconds.
In a similar way, I thought I could start the consuming work from Application.consumerStarting() (e.g. periodically fetch a stream from Redis, consume it with a coroutine, etc.).
But I'm not sure this is the right way, because I cannot find any references for this situation with Ktor.
Any comment is welcome.

Project Reactor and Server Side Events

I'm looking for a solution where the backend publishes an event to the frontend as soon as a modification is made on the server side. To be more concrete, I want to emit a new List of objects as soon as one item is modified.
I've tried implementing this in a Spring Boot project that uses reactive web and MongoDB, whose @Tailable cursor publishes an event as soon as a capped collection is modified. The problem is that capped collections have some limitations and are not really compatible with what I want to do: I cannot update an existing element if the new one has a different size (as I understand it, this is disallowed because a rollback would not be possible).
I honestly don't even know if it's doable, but maybe I'm lucky and I'll run into a rocket scientist right here who will prove otherwise.
Thanks in advance!!
*** EDIT:
Sorry for the vague question. Yes I'm more focused on the HOW, using the Spring Reactive framework.
When I had a similar need - to inform the frontend that something is done on the backend side - I used a message queue.
The backend published a message to the queue and the frontend consumed it.
But I am not sure if that is what you're looking for.
If you are using WebFlux with Spring Reactor, I think you can simply have the client request the endpoint with an Accept header of 'text/event-stream' or 'application/stream+json', and expose an API that can produce those content types. This gives you the SSE model without too much effort.
@GetMapping(value = "/stream", produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE, MediaType.APPLICATION_JSON_UTF8_VALUE})
public Flux<Message> get(HttpServletRequest request) {
Just as an idea - maybe you need to use WebSocket technology here:
The frontend (I assume it's a client-side application that runs in a browser, written in React, Angular or something like that) can establish a WebSocket connection with the backend server.
When the process on the backend finishes, a message can be sent from the backend to the frontend.
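If you go that route, here is a minimal sketch using plain Spring WebSocket (the servlet stack rather than WebFlux, chosen only to keep the example short); the '/updates' path and the PushHandler/broadcast names are made up for this example:

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.config.annotation.EnableWebSocket;
import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;
import org.springframework.web.socket.handler.TextWebSocketHandler;

@Configuration
@EnableWebSocket
public class PushConfig implements WebSocketConfigurer {

    private final PushHandler pushHandler = new PushHandler();

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(pushHandler, "/updates");
    }

    // Keeps track of the open sessions; call broadcast(...) whenever an item changes on the server.
    public static class PushHandler extends TextWebSocketHandler {

        private final Set<WebSocketSession> sessions = new CopyOnWriteArraySet<>();

        @Override
        public void afterConnectionEstablished(WebSocketSession session) {
            sessions.add(session);
        }

        public void broadcast(String payload) throws IOException {
            for (WebSocketSession session : sessions) {
                if (session.isOpen()) {
                    session.sendMessage(new TextMessage(payload));
                }
            }
        }
    }
}

For a fully reactive stack the same idea exists via org.springframework.web.reactive.socket.WebSocketHandler, but when the traffic is strictly server-to-client the SSE approach shown below is usually simpler.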
You can emit the changes by hand. For example, the endpoint:
public final Sinks.Many<SimpleInfoEvent> infoEventSink = Sinks.many().multicast().onBackpressureBuffer();
private final AtomicLong counter = new AtomicLong(); // sequence for the SSE ids (declaration added for completeness)

@RequestMapping(path = "/sseApproach", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<SimpleInfoEvent>> sse() {
    return infoEventSink.asFlux()
            .map(e -> ServerSentEvent.builder(e)
                    .id(counter.incrementAndGet() + "")
                    .event(e.getClass().getName())
                    .build());
}
Code anywhere for emitting data:
infoEventSink.tryEmitNext(new SimpleInfoEvent("any custom event"));
Watch out for threads and things like subscribeOn and publishOn, but basically (when not using any third-party code) this should work well enough.

Vert.x Worker Thread Blocking

I have one Vert.x standard verticle. Basically, it parses the HttpRequest, prepares a JsonObject and sends the JsonObject through the event bus. In another worker verticle that event gets consumed, which kicks off an execution (including a call to the Pentaho Data Integration Java API); it is a blocking API and takes around 30 minutes to finish executing the ".kjb" file. But Vert.x continuously warns that the worker thread is blocked, so my question is: what would be the best practice in Vert.x to tackle this scenario?
Any help would be highly appreciated.
According to the Vert.x documentation, all blocking operations need to be performed in code like this:
vertx.executeBlocking(future -> {
    // Call some blocking API that takes a significant amount of time to return
    String result = someAPI.blockingMethod("hello");
    future.complete(result);
}, res -> {
    System.out.println("The result is: " + res.result());
});
So it's the best practice for all blocking tasks.
You could also deploy your verticle as a worker.
This way:
vertx.deployVerticle(yourVerticleInstance, new DeploymentOptions().setWorker(true));
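If you go the worker route, here is a minimal sketch (assuming a reasonably recent Vert.x 3.x; the pool name and limits are only illustrative) that also raises the blocked-thread threshold, since a ~30 minute .kjb run is expected to keep its worker thread busy:

import java.util.concurrent.TimeUnit;

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Verticle;
import io.vertx.core.Vertx;

public class WorkerDeployment {

    public static void deployJobWorker(Vertx vertx, Verticle jobWorkerVerticle) {
        DeploymentOptions options = new DeploymentOptions()
                .setWorker(true)
                .setWorkerPoolName("kettle-jobs")               // dedicated pool so the long jobs don't starve other workers
                .setWorkerPoolSize(2)                           // keep it small; each job runs for a long time
                .setMaxWorkerExecuteTime(45)
                .setMaxWorkerExecuteTimeUnit(TimeUnit.MINUTES); // only warn after 45 minutes on this pool

        // jobWorkerVerticle stands for the verticle that consumes the event bus message
        // and calls the blocking Pentaho Data Integration API.
        vertx.deployVerticle(jobWorkerVerticle, options);
    }
}

With a dedicated, appropriately sized worker pool, the rest of the application keeps its event loop and default worker threads free while the job runs.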

JAX-WS SoapHandler with large messages: OutOfMemoryError

Using JAX-WS 2, I see an issue that others have spoken about as well. The issue is that if a SOAP message is received inside a handler, and that SOAP message is large - whether due to inline SOAP body elements that happen to have lots of content, or due to MTOM attachments - then it is dangerously easy to get an OutOfMemoryError.
The reason is that the call to getMessage() seems to set off a chain of events that involve reading the entire SOAP message on the wire, and creating an object (or objects) representing what was on the wire.
For example:
...
public boolean handleMessage(SOAPMessageContext context)
{
    // for a large message, this will cause an OutOfMemoryError
    System.out.println(context.getMessage().countAttachments());
    ...
My question is: is there a known mechanism/workaround for dealing with this? Specifically, it would be nice to access the SOAP part in a SOAP message without forcing the attachments (if MTOM for example) to also be vacuumed up.
For those who run their app on JBoss 6 & 7 (with Apache CXF)... I was able to work around the problem by implementing my handler with the LogicalHandler interface instead of SOAPHandler.
In this case your handleMessage() method gets a LogicalMessageContext (instead of a SOAPMessageContext) in its arguments, and that context has no issues with the context.getMessage() call.
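For reference, a minimal sketch of what such a handler can look like (the class name is made up; register it in your handler chain the same way you would a SOAPHandler):

import javax.xml.ws.handler.LogicalHandler;
import javax.xml.ws.handler.LogicalMessageContext;
import javax.xml.ws.handler.MessageContext;

public class LargeMessageSafeHandler implements LogicalHandler<LogicalMessageContext> {

    @Override
    public boolean handleMessage(LogicalMessageContext context) {
        // The logical handler only sees the payload view of the message; here we
        // just read a context property rather than pulling the whole message into memory.
        Boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        System.out.println("Handling " + (Boolean.TRUE.equals(outbound) ? "outbound" : "inbound") + " message");
        return true; // continue with the rest of the handler chain
    }

    @Override
    public boolean handleFault(LogicalMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
        // nothing to clean up in this sketch
    }
}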
There's actually a JAX-WS RI (aka Metro) specific solution for this which is very effective.
See https://javaee.github.io/metro/doc/user-guide/ch02.html#efficient-handlers-in-jax-ws-ri. Unfortunately that link is now broken, but you can find it on the Wayback Machine. I'll give the highlights below:
The Metro folks back in 2007 introduced an additional handler type, MessageHandler<MessageHandlerContext>, which is proprietary to Metro. It is far more efficient than SOAPHandler<SOAPMessageContext> because it doesn't build an in-memory DOM representation.
Here's the crucial text from the original blog article:
MessageHandler:
Utilizing the extensible Handler framework provided by JAX-WS
Specification and the better Message abstraction in RI, we introduced
a new handler called MessageHandler to extend your Web Service
applications. MessageHandler is similar to SOAPHandler, except that
implementations of it gets access to MessageHandlerContext (an
extension of MessageContext). Through MessageHandlerContext one can
access the Message and process it using the Message API. As I put in
the title of the blog, this handler lets you work on Message, which
provides efficient ways to access/process the message not just a DOM
based message. The programming model of the handlers is same and the
Message handlers can be mixed with standard Logical and SOAP handlers.
I have added a sample in JAX-WS RI 2.1.3 showing the use of
MessageHandler to log messages and here is a snippet from the sample:
public class LoggingHandler implements MessageHandler<MessageHandlerContext> {

    public boolean handleMessage(MessageHandlerContext mhc) {
        Message m = mhc.getMessage().copy();
        XMLStreamWriter writer = XMLStreamWriterFactory.create(System.out);
        try {
            m.writeTo(writer);
        } catch (XMLStreamException e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }

    public boolean handleFault(MessageHandlerContext mhc) {
        .....
        return true;
    }

    public void close(MessageContext messageContext) { }

    public Set getHeaders() {
        return null;
    }
}
(end quote from 2007 blog post)
You can find a full example in the Metro GitHub repo.
What JAX-WS implementation/runtime are you using? If there's a way to do this using the runtime built into WebSphere, I'm certain there's a way to do it cleanly in other runtimes like Axis2 (proper), Apache CXF, and Metro/RI.
I am using another way to reduce the memory cost, which is the message accessor.
Instead of using context.getMessage(), I changed it to this:
Object accessor = context.get("jaxws.message.accessor");
String baosInString = null;
if (accessor != null) {
    baosInString = accessor.toString();
}
Based on advice from the IBM website: http://www-01.ibm.com/support/docview.wss?uid=swg1PM21151

ASP.NET Web Api: Delegate after Request

I have a problem with streams and the Web API.
I return a stream which is consumed by the Web API. Currently, I put the socket back into a pool right after getting the stream, but this causes some errors.
Instead, I must put the socket into the pool AFTER the request has ended (the stream has been consumed and is now closed).
Is there a delegate for this, or some other best practice?
Example code:
public HttpResponseMessage Get(int fileId)
{
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    Stream s = GetFile(fileId);
    response.Content = new StreamContent(s);
    return response;
}

Stream GetFile(int id)
{
    FSClient fs = GetFSClient();
    Stream s = fs.GetFileStream(id);
    AddFSToPool(fs);
    return s;
}
GetFile uses a self-programmed file-server client.
It has an option to reuse file-server connections; these connections are stored in a pool (the pool contains only unused file-server connections). When the next request calls GetFSClient(), it gets a connected one from the pool (and removes it from the pool).
But if another request comes in and uses a file-server connection from the pool (because it is marked unused), there is still the problem that the stream may be in use.
Now I want to put the FSClient into the pool only after the request has ended and the stream has been fully consumed.
Is there an entry point for that?
A Stream is seen as a volatile/temporary resource - no wonder it implements IDisposable.
Also, a Stream is not thread-safe, since it has a Position; if it has been read to the end it needs to be reset back to the start, and if two threads read the stream they will most likely read different chunks.
As such, I would not even attempt to solve this problem. Reusing streams on a web site (inherently multi-user/multi-threaded) is not recommended.
UPDATE
As I said, I still think the best option is to rethink the solution, but if you need to register something that runs after the request finishes, use RegisterForDispose on the request:
public HttpResponseMessage Get(HttpRequestMessage req, int fileId)
{
    ....
    req.RegisterForDispose(myStream);
}