Apache Camel - Getting a list of files from FTP as a result of a GET request - rest

As the title suggests I'm trying to get a list of files from an FTP directory to send as a response of a GET request.
I currently have the following REST route implementation:
rest().get("/files")
    .produces(MediaType.APPLICATION_JSON_VALUE)
    .route()
    .routeId("restRouteId")
    .to("direct:getAllFiles");
On the other side of the direct route I have the following routes:
from("direct:getAllFiles")
    .routeId("filesDirectId")
    .to("controlbus:route" +
        "?action=start" +
        "&routeId=ftpRoute");

from([ftpurl])
    .noAutoStartup()
    .routeId("ftpRoute")
    .aggregate(constant(true), new FileAggregationStrategy())
    .completionFromBatchConsumer()
    .process(filesProcessor)
    .to("controlbus:route" +
        "?action=stop" +
        "&routeId=" + BESTANDEN_ROUTE_ID);
The issue is that with this approach the request does not wait for the whole process to finish; it almost instantly returns an empty response with status code 200.
I've tried multiple solutions, but they all fail in one of two ways: either the request gets a response even though the route hasn't finished yet, or the route gets stuck waiting for in-flight exchanges at some point and waits for the five-minute timeout before continuing.
Thanks in advance for your advice and/or help!
Note: I'm working in a Spring Boot application (2.0.5) and Apache Camel (2.22.1).

I think the problem here is that your two routes are not connected. You are using the control bus to start the second route but it doesn't return the value back to the first route - it just completes, as you've noted.
What I think you need (I've not tested it) is something like:
from("direct:getAllFiles")
    .routeId("filesDirectId")
    .pollEnrich([ftpurl], new FileAggregationStrategy())
    .process(filesProcessor);
as this will consume from your FTP endpoint synchronously, do the post-processing, and return the values to your REST route.
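To make the wiring explicit, here is an untested sketch of how both pieces could sit in one RouteBuilder. The FTP URL is a placeholder, FileAggregationStrategy and filesProcessor are the classes from the question (assumed to be available and injected), and the 10-second pollEnrich timeout is an arbitrary example so the HTTP thread cannot block forever:
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.http.MediaType;

public class FilesRoute extends RouteBuilder {

    // assumed to be provided elsewhere (e.g. injected by Spring), as in the question
    private final Processor filesProcessor;

    public FilesRoute(Processor filesProcessor) {
        this.filesProcessor = filesProcessor;
    }

    @Override
    public void configure() throws Exception {
        rest().get("/files")
            .produces(MediaType.APPLICATION_JSON_VALUE)
            .route()
            .routeId("restRouteId")
            .to("direct:getAllFiles");

        from("direct:getAllFiles")
            .routeId("filesDirectId")
            // blocks this exchange until the FTP consumer has polled (or the timeout
            // expires), so the REST request only returns once aggregation is done
            .pollEnrich("ftp://user@host/inbox?password=secret", 10000, new FileAggregationStrategy())
            .process(filesProcessor);
    }
}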

With the help of @Screwtape's answer I managed to get it working for my specific issue. A few adjustments were needed; here is what you need:
Add the option "sendEmptyMessageWhenIdle=true" to the ftp url
In the AggregationStrategy add an if (exchange == null) clause
In the clause set a property "finished" to true
Wrap the pollEnrich with a loopDoWhile that checks the finished property
In its entirety it looks something like:
from("direct:ftp")
    .routeId("ftpRoute")
    .loopDoWhile(exchangeProperty("finished").isNotEqualTo(true))
        .pollEnrich("ftpurl...&sendEmptyMessageWhenIdle=true", new FileAggregationStrategy())
        .choice()
            .when(exchangeProperty("finished").isEqualTo(true))
                .process(filesProcessor)
        .end()
    .end();
In the AggregationStrategy the aggregate method looks something like:
@Override
public Exchange aggregate(Exchange currentExchange, Exchange newExchange) {
    if (currentExchange == null) {
        return init(newExchange);
    } else {
        if (newExchange == null) {
            currentExchange.setProperty("finished", true);
            return currentExchange;
        }
        return update(currentExchange, newExchange);
    }
}
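For completeness, the init and update helpers referenced above are not shown in the thread; a minimal, purely illustrative version that collects the polled file names into a list (assuming the usual org.apache.camel.Exchange and java.util imports) could look like:
private Exchange init(Exchange newExchange) {
    List<String> fileNames = new ArrayList<>();
    fileNames.add(newExchange.getIn().getHeader(Exchange.FILE_NAME, String.class));
    newExchange.getIn().setBody(fileNames);
    return newExchange;
}

private Exchange update(Exchange currentExchange, Exchange newExchange) {
    @SuppressWarnings("unchecked")
    List<String> fileNames = currentExchange.getIn().getBody(List.class);
    fileNames.add(newExchange.getIn().getHeader(Exchange.FILE_NAME, String.class));
    return currentExchange;
}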

Related

Spring Boot REST endpoint connection not being released

I have created a Spring Boot (2.1.4.RELEASE) REST endpoint to GET some data from the server. When I call this endpoint from the browser, I see the JSON in the browser window, but I notice that the spinner in the favicon keeps going for 60 seconds. When I look at the network tab, I never see the response section for the request. After 60 seconds, it says that it failed. When I walk through the code in the debugger, I see that data is being returned from the controller, and when I 'play' the rest of the stack (the thread assigned to serve the request), everything completes. I am puzzled as to what's causing this behavior.
@GetMapping(path = "/recipes")
public ResponseEntity<Collection<HpManifest>> getRecipes() {
    ResponseEntity<Collection<HpManifest>> response = hpService.getRecipes();
    return response;
}
public ResponseEntity<Collection<HpManifest>> getRecipes() {
    logger.info("Retrieving recipes from");
    UriComponentsBuilder builder =
            UriComponentsBuilder.fromHttpUrl(endpointManifests)
                    .queryParam("type", HpManifestType.RECIPE.getType());
    logger.info("REST endpoint: " + builder.toUriString());
    ResponseEntity<Collection<HpManifest>> recipes = restTemplate.exchange(
            builder.toUriString(),
            HttpMethod.GET, null, new ParameterizedTypeReference<Collection<HpManifest>>() {});
    logger.info("recipes are:");
    recipes.getBody().forEach(r -> logger.info(r.toString()));
    return recipes;
}
I ran into a similar issue just the other day. In my case it turned out that recipes (returned from the restTemplate.exchange method) contained a Transfer-Encoding: chunked header, and when you return recipes, Spring is probably also adding a Content-Length header. The combination of these two headers in a response to a browser can cause issues, because the browser thinks it's getting chunked data back when in reality it is not. I suggest making a new ResponseEntity from your recipes variable, along the lines of:
return ResponseEntity.status(recipes.getStatusCode()).body(recipes.getBody());
Alternatively, you could try to force Spring to return chunked data, but I think that is not the right way to go.
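For reference, an untested sketch of the full service method with that change applied, reusing the names from the question, would be:
public ResponseEntity<Collection<HpManifest>> getRecipes() {
    UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(endpointManifests)
            .queryParam("type", HpManifestType.RECIPE.getType());

    ResponseEntity<Collection<HpManifest>> recipes = restTemplate.exchange(
            builder.toUriString(),
            HttpMethod.GET, null,
            new ParameterizedTypeReference<Collection<HpManifest>>() {});

    // Rebuild the entity with only the status and body, so headers such as
    // Transfer-Encoding: chunked from the downstream service are not copied
    // into the response that Spring writes back to the browser.
    return ResponseEntity.status(recipes.getStatusCode()).body(recipes.getBody());
}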

Stream a response in Spring mvc

This is the situation: let's say I have an endpoint that receives a request to retrieve data between a range of time (or whatever), and the result of that request is a big list that I get from a database, say a list of "Person" objects. For each of these person objects I have to call another method that may be a little slow, and it would delay the response a lot if I had to wait until it is executed for all the elements of this big list.
What I would like to accomplish is to stream the response through a REST endpoint so that my front end does not have to wait until this whole list is processed to start displaying it on the screen.
So I have some confusion here. I know that an asynchronous method using Spring's @Async allows the consumer to get a response even if the task is still not finished, but as far as I understand, this is helpful in the case of sending emails, or any other task or series of tasks whose result you are not going to display on the screen.
But in the case of a response that is meant to be displayed on the screen, I guess I should stream a chunk of data as soon as I have a whole "person" object ready.
What is the right way to accomplish this? Is the @Async method of any help in this situation, or should I just find a way to detect when a person object is formed so I can stream it? Or am I terribly wrong and not understanding the concepts of async and streaming?
A little example would help.
Thanks.
I have been trying to understand the same concept for the last 3 days, and here is my understanding, which may help you.
Asynchronous REST endpoint:
If your REST endpoint is doing some complex business logic or calling some external service and may take some time to respond, it's better to respond from the API as soon as possible, moving the time-consuming logic to the background (a separate thread). This is where asynchronous processing helps.
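As a rough illustration of this point (not part of the answer above, and using hypothetical ReportService/ReportController names), a Spring controller could hand the slow work to an @Async service method and return 202 Accepted immediately; this requires @EnableAsync on a configuration class:
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@Service
class ReportService {

    @Async
    public void generateReport(String id) {
        // long-running work happens here, off the request thread
    }
}

@RestController
class ReportController {

    private final ReportService reportService;

    ReportController(ReportService reportService) {
        this.reportService = reportService;
    }

    @PostMapping("/reports/{id}")
    public ResponseEntity<Void> startReport(@PathVariable String id) {
        reportService.generateReport(id); // returns immediately, work continues in background
        return ResponseEntity.accepted().build(); // 202 Accepted
    }
}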
Chunked output:
If your endpoint is expected to send a large amount of data, and you want to improve the user experience by starting to render the output (in the UI) as soon as it becomes available, chunked output from the REST endpoint is the better approach.
Using Jersey we can achieve both asynchronous processing and chunked output, as shown in the sample below.
public ChunkedOutput<String> getChunkedResponse() {
    final ChunkedOutput<String> output = new ChunkedOutput<String>(String.class);

    new Thread() {
        public void run() {
            try {
                String chunk;
                int index = 0;
                while ((chunk = getWordAtIndex(index)) != null) {
                    output.write(chunk);
                    index++;
                }
            } catch (IOException e) {
                // Add code to handle the IOException during this operation
            } finally {
                try {
                    output.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }.start();

    return output; // This output object may be returned way before output is created
}
I have tried out a sample to test this out with jersey and spring-boot combination. You can check it out in my git repository here.
Hope it helps.
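Since the question is specifically about Spring MVC, a rough (untested) equivalent using StreamingResponseBody is sketched below; getWordAtIndex is a stand-in for whatever produces each chunk, just as in the Jersey sample:
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;

@RestController
public class StreamingController {

    @GetMapping("/words")
    public StreamingResponseBody streamWords() {
        return outputStream -> {
            int index = 0;
            String chunk;
            // each chunk is flushed to the client as soon as it is written,
            // so the browser can start rendering before the whole list is ready
            while ((chunk = getWordAtIndex(index)) != null) {
                outputStream.write(chunk.getBytes());
                outputStream.flush();
                index++;
            }
        };
    }

    // placeholder for the same hypothetical lookup used in the Jersey sample
    private String getWordAtIndex(int index) {
        String[] words = {"alpha", "beta", "gamma"};
        return index < words.length ? words[index] : null;
    }
}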

Empty response on long running query SailsJS

I'm currently running SailsJS on a Raspberry Pi and all is working well; however, when I execute a sails.models.nameofmodel.count() and attempt to respond with the result, I end up getting an empty response.
getListCount: function(req, res)
{
    var mainsource = req.param("source");
    if (mainsource)
    {
        sails.models.gatherer.find({source: mainsource}).exec(
            function(error, found)
            {
                if (error)
                {
                    return res.serverError("Error in call");
                }
                else
                {
                    sails.log("Number found " + found.length);
                    return res.ok({count: found.length});
                }
            }
        );
    }
    else
    {
        return res.ok("Error in parameter");
    }
},
I am able to see in the logs the number that was found (73689). However, when responding I still get an empty response. I am using the default stock ok.js file; however, I did add additional logging to try to debug and make sure it is going through the correct paths. I was able to confirm that ok.js was going through this path:
if (req.wantsJSON) {
    return res.jsonx(data);
}
I also tried adding .populate() to the call before the .exec(), and setting res.status(200) and using res.send() instead of res.ok(). I've also updated Sails to 11.5 and am still getting the same empty response. I've also used a sails.models.gatherer.count() call, with the same result.
You can try to add some logging to the beginning of your method to capture the value of mainsource. I do not believe you need to use an explicit return for any response object calls.
If all looks normal there, try to eliminate the model's find method and just evaluate the request parameter and return a simple response:
getListCount: function(req, res) {
    var mainsource = req.param("source");
    sails.log("Value of mainsource:" + mainsource);
    if (mainsource) {
        res.send("Hello!");
    } else {
        res.badRequest("Sorry, missing source.");
    }
}
If that does not work, then your model data may not actually be matching on the criteria that you are providing and the problem may lie there; in which case, your response would be null. You mentioned that you do see the resulting count of the query within the log statement. If the res.badRequest is also null, then you may have a problem with the version of express that is installed within sailsjs. You mention that you have 11.5 of sailsjs. I will assume you mean 0.11.5.
This is what is found in package.json of 0.11.5
"express": "^3.21.0",
Check for any possible bugs within the GitHub issues for sailsjs regarding express and response object handling and the above version of express.
It may be worthwhile to perform a clean install using the latest sailsjs version (0.12.0) and see if that fixes your issue.
Another issue may be in how you are handling the response. In this case .exec should execute the query immediately (i.e. a synchronous call) and return the response when complete. So there should be no asynchronous processing there.
If you can show the code that is consuming the response, that would be helpful. I am assuming that there is a view that is showing the response via AJAX or some kind of form POST that is being performed. If that is where you are seeing the null response, then perhaps the problem lies in the view layer rather than the controller/model.
If you are experiencing a true timeout error via HTTP even though your query returns a result just in time, then you may need to consider using async processing with sailsjs. Take a look at this post on using a Promise instead.

Restangular - how to cancel/implement my own request

I found a few examples of using fullRequestInterceptor and httpConfig.timeout to allow canceling requests in restangular.
example 1 | example 2
this is how I'm adding the interceptor:
app.run(function (Restangular, $q) {
Restangular.addFullRequestInterceptor(function (element, operation, what, url, headers, params, httpConfig) {
I managed to abort the request by putting a resolved promise in timeout (results in an error being logged and the request goes out but is canceled), which is not what I want.
What I'm trying to do: I want to make the AJAX request myself, with my own requests, and pass the result back to whatever component used Restangular. Is this possible?
I've been looking for a Restangular way to solve it, but I should have been looking for an Angular way :)
Overriding dependency at runtime in AngularJS
Looks like you can extend $http before it ever gets to Restangular. I haven't tried it yet, but it looks like it would fit my needs 100%.
I'm using requestInterceptor a lot, but only to change parameters and headers of my request.
Basically, addFullRequestInterceptor helps you make changes to your request before sending it. So why not change the URL you want to call?
There is the httpConfig object that you can modify and return, and if it's close to the config of $http (and I bet it is), you can change the url and even the method, and so change the original request into another, entirely new one.
After that you don't need a timeout, only to return an httpConfig customised to your needs.
RestangularConfigurer.addFullRequestInterceptor(function (element, operation, route, url, headers, params, httpConfig) {
    httpConfig.url = "http://google.com";
    httpConfig.method = "GET";
    httpConfig.params = "";

    return {
        httpConfig: httpConfig
    };
});
It will be passed on, and your service or controller won't know that something changed; that's the principle of an interceptor: it allows you to change things and return them to be used by the next step, a bit like middleware. So it will be transparent to the one making the call, but the call will be made to what you want.

HttpListener prevent Timeout

I implemented an HttpListener to process SOAP requests. This works fine, but I can't find a solution for the problem that some SOAP requests take too much time, resulting in timeouts on the client side.
How do I let the requesting client know that its request has not timed out?
I thought about sending "dummy" information while the request gets processed, but the HttpListener only seems to send the data when you close the response object, and this can be done only once, so this is not the right thing to do I suppose.
Solution:
Thread keepAliveWorker = new Thread(() =>
{
    try
    {
        while (context.Response.OutputStream.CanWrite)
        {
            context.Response.OutputStream.WriteByte((byte) ' ');
            context.Response.OutputStream.Flush();
            Thread.Sleep(5000);
        }
    }
    catch (ThreadInterruptedException)
    {
        // expected: the main thread interrupts this worker once the real work is done
    }
});
keepAliveWorker.Start();
doWork();
keepAliveWorker.Interrupt();
createTheRealResponse();
Sending dummy information is not a bad idea.
I think you need to call the Flush() method on the HttpListenerResponse's OutputStream property after writing the dummy data. You must also enable the SendChunked property.
Try sending a dummy space at regular intervals:
response.SendChunked = true;
response.OutputStream.WriteByte((byte)' ');
response.OutputStream.Flush();
I see two options: increase the timeouts on the client side, or extend the protocol with operation-status requests from the client for long-running operations.
If you are using .NET 4.5, take a look at the HttpListenerTimeoutManager class; you can use this class as a base to implement custom timeout behaviour.