I'm currently running SailsJS on a Raspberry Pi, and all is working well. However, when I execute a sails.models.nameofmodel.count() and attempt to respond with the result, I end up getting an empty response.
getListCount: function(req, res)
{
    var mainsource = req.param("source");
    if (mainsource)
    {
        sails.models.gatherer.find({source: mainsource}).exec(
            function(error, found)
            {
                if (error)
                {
                    return res.serverError("Error in call");
                }
                else
                {
                    sails.log("Number found " + found.length);
                    return res.ok({count: found.length});
                }
            }
        );
    }
    else
    {
        return res.ok("Error in parameter");
    }
},
I am able to see in the logs the number that was found (73689). However, when responding, I still get an empty response. I am using the default stock ok.js file, but I did stick in additional logging to debug and make sure it is going through the correct paths. I was able to confirm that ok.js was going through this path:
if (req.wantsJSON) {
return res.jsonx(data);
}
I also tried adding .populate() to the call before the .exec(), and calling res.status(200) before sending a res.send() instead of res.ok(). I've also updated Sails to 11.5 and am still getting the same empty response. I've also used a sails.models.gatherer.count() call, with the same result.
You can try adding some logging to the beginning of your method to capture the value of mainsource. I do not believe you need an explicit return for any of the response object calls.
If all looks normal there, try eliminating the model's find method and just evaluate the request parameter and return a simple response:
getListCount: function(req, res) {
var mainsource = req.param("source");
sails.log("Value of mainsource:" + mainsource);
if (mainsource) {
res.send("Hello!");
} else {
res.badRequest("Sorry, missing source.");
}
}
If that does not work, then your model data may not actually be matching the criteria you are providing, and the problem may lie there; in that case, your response would be null. You mentioned that you do see the resulting count of the query within the log statement. If the res.badRequest is also null, then you may have a problem with the version of Express that is installed within Sails. You mention that you have 11.5 of Sails; I will assume you mean 0.11.5.
This is what is found in the package.json of 0.11.5:
"express": "^3.21.0",
Check the GitHub issues for Sails for any possible bugs regarding Express and response object handling with the above version of Express.
It may be worthwhile to perform a clean install using the latest sailsjs version (0.12.0) and see if that fixes your issue.
Another issue may be in how you are handling the response. In this case, .exec executes the query and invokes your callback when it completes, and you send the response from inside that callback, so the asynchronous processing itself should not be a problem there.
If you can show the code that is consuming the response, that would be helpful. I am assuming there is a view that shows the response via AJAX, or some kind of form POST being performed. If that is where you are seeing the null response, then perhaps the problem lies in the view layer rather than in the controller/model.
If you are experiencing a true timeout error via HTTP even though your query returns a result just in time, then you may need to consider async processing with Sails. Take a look at this post on using a Promise instead.
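For illustration, here is a minimal sketch of that promise-based approach, assuming the Waterline promise interface available in Sails 0.11 (the criteria and response helpers mirror the original action; this is not the asker's actual code):
// Hedged sketch: Waterline queries expose .then()/.catch(),
// so the count can be returned without a nested callback.
getListCount: function (req, res) {
    var mainsource = req.param("source");
    if (!mainsource) {
        return res.badRequest("Missing source parameter.");
    }
    sails.models.gatherer.count({source: mainsource})
        .then(function (total) {
            return res.ok({count: total});
        })
        .catch(function (err) {
            return res.serverError(err);
        });
}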
I need to publish my Karma test results to a custom REST API. To handle this automatically, I've written a custom Karma reporter. I'm trying to use the run_complete event so that the POST happens after all browsers finish. However, no HTTP call is being made.
I'm using Axios 0.19.2 to do the actual HTTP call, but the same thing happens with node-fetch. The tests are being run by the Angular CLI via ng test. My Karma config is lengthy, but other than having a million different reporters and possible browser configs, it is pretty much standard.
This is my onRunComplete method:
self.onRunComplete = function () {
    var report = ... ; // logic to generate a JSON object, not relevant
    var url = '...'; // the endpoint for the request
    try {
        console.log('Sending report to ' + url);
        axios.post(url, report, {headers: {'Content-Type': 'application/json'}})
            .then(function(response) {
                console.log('Success!');
                console.log(response);
            })
            .catch(function(error) {
                console.log('Failure!');
                console.log(error);
            });
    } catch (err) {
        console.log('Error!');
        console.log(err);
    }
}
At the end of the test run, it writes the 'Sending report to...' message to the console and then immediately ends. The server does not receive the request at all.
I also tried adding explicit blocking using an 'inProgress' boolean flag and a while-loop, but that pretty much just leaves the entire test run hanging, since it never completes. (Busy-waiting blocks Node's single-threaded event loop, so the request is never made, the 'inProgress' flag stays true, and we never hit the then/catch promise handlers or the catch block.)
I have verified that the Axios POST request works by taking the entire contents of onRunComplete as shown here, putting it in its own JS file, and calling it directly. The report logs as expected. It's only when I call it from inside Karma that it's somehow blocked.
Since Karma's documentation pretty much boils down to "go read how other people did similar things!" I'm having trouble figuring out how to get this to work. Is there a trick to getting an HTTP request to happen inside of a custom reporter? Why does my implementation not work?
Looks like the POST request is made asynchronously: the request is initiated and control returns to the method almost immediately, so the reporter completes before the request is actually sent. Try awaiting the call instead (note the enclosing function must be declared async for await to be valid):
self.onRunComplete = async function () {
    var report = ... ; // logic to generate a JSON object, not relevant
    var url = '...'; // the endpoint for the request
    try {
        console.log('Sending report to ' + url);
        // await suspends here until the POST settles
        await axios.post(url, report, {headers: {'Content-Type': 'application/json'}})
        ...
    }
}
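If the Karma process still exits before the request settles, another option is the reporter's onExit hook, which receives a done callback that Karma waits on before shutting down. A hedged sketch (url and report are assumed to be captured from the surrounding reporter scope):
// Sketch: Karma delays shutdown until done() is called,
// giving the HTTP request time to complete.
self.onExit = function (done) {
    axios.post(url, report, {headers: {'Content-Type': 'application/json'}})
        .then(function () { done(); })
        .catch(function (error) {
            console.log(error);
            done();
        });
};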
As the title suggests, I'm trying to get a list of files from an FTP directory to send as the response of a GET request.
This is my current REST route implementation:
rest().get("/files")
    .produces(MediaType.APPLICATION_JSON_VALUE)
    .route()
    .routeId("restRouteId")
    .to("direct:getAllFiles");
On the other side of the direct route I have the following routes:
from("direct:getAllFiles")
.routeId("filesDirectId")
.to("controlbus:route" +
"?action=start" +
"&routeId=ftpRoute");
from([ftpurl])
.noAutoStartup()
.routeId("ftpRoute")
.aggregate(constant(true), new FileAggregationStrategy())
.completionFromBatchConsumer()
.process(filesProcessor)
.to("controlbus:route" +
"?action=stop" +
"&routeId=" + BESTANDEN_ROUTE_ID);
The issue at hand is that with this method the request does not wait for the complete process to finish; it almost instantly returns an empty response with status code 200.
I've tried multiple solutions, but they all fail in one of two ways: either the request gets a response even though the route hasn't finished yet, or the route gets stuck waiting for in-flight exchanges at some point and waits for the 5-minute timeout to continue.
Thanks in advance for your advice and/or help!
Note: I'm working in a Spring Boot application (2.0.5) and Apache Camel (2.22.1).
I think the problem here is that your two routes are not connected. You are using the control bus to start the second route, but it doesn't return the value back to the first route; it just completes, as you've noted.
What I think you need (I've not tested it) is something like:
from("direct:getAllFiles")
.routeId("filesDirectId")
.pollEnrich( [ftpurl], new FileAggregationStrategy() )
.process( filesProcessor );
as this will synchronously consume your FTP consumer, do the post-processing, and return the values to your REST route.
With the help of @Screwtape's answer I managed to get it working for my specific issue. A few adjustments were needed; here is a list of what you need:
- Add the option sendEmptyMessageWhenIdle=true to the FTP url
- In the AggregationStrategy, add an if (newExchange == null) clause
- In that clause, set a property "finished" to true
- Wrap the pollEnrich with a loopDoWhile that checks the finished property
In its entirety it looks something like:
from("direct:ftp")
.routeId("ftpRoute")
.loopDoWhile(!finished)
.pollEnrich("ftpurl...&sendEmptyMessageWhenIdle=true", new FileAggregationStrategy())
.choice()
.when(finished)
.process(filesProcessor)
.end()
.end();
In the AggregationStrategy the aggregate method looks something like:
@Override
public Exchange aggregate(Exchange currentExchange, Exchange newExchange) {
    if (currentExchange == null) {
        return init(newExchange);
    } else {
        if (newExchange == null) {
            currentExchange.setProperty("finished", true);
            return currentExchange;
        }
        return update(currentExchange, newExchange);
    }
}
My middleware need is to add an extra query param to requests made by a REST API client derived from GuzzleHttp\Command\Guzzle\GuzzleClient.
I cannot do this directly when invoking APIs through the client because GuzzleClient uses an API specification and it only passes on "legal" query parameters. Therefore I must install a middleware to intercept HTTP requests after the API client prepares them.
The track I am currently on:
$apiClient->getHandlerStack()->push($myMiddleware);
The problem:
I cannot figure out the RIGHT way to assemble the functional Russian doll that $myMiddleware must be. This is an insane gazilliardth-order function scenario, and the exact right way to write the function seems to differ from the extensively documented way of doing things when working with GuzzleHttp\Client directly. No matter what I try, I end up either passing the wrong things to some layer of the matryoshka, causing an argument type error, or returning something wrong from a layer, causing a type error in Guzzle code.
I made a carefully weighted decision to give up trying to understand. Please just give me a boilerplate solution for GuzzleHttp\Command\Guzzle\GuzzleClient, as opposed to GuzzleHttp\Client.
The HandlerStack used to handle middleware in GuzzleHttp\Command\Guzzle\GuzzleClient can either transform/validate a command before it is serialized or handle the result after it comes back. If you want to modify the command after it has been turned into a request, but before it is actually sent, then you use the same middleware mechanism as if you weren't using GuzzleClient: create and attach middleware to the GuzzleHttp\Client instance that is passed as the first argument to GuzzleClient.
use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Command\Guzzle\GuzzleClient;
use GuzzleHttp\Command\Guzzle\Description;
use Psr\Http\Message\RequestInterface;

class MyCustomMiddleware
{
    public function __invoke(callable $handler)
    {
        return function (RequestInterface $request, array $options) use ($handler) {
            // ... do something with the request
            return $handler($request, $options);
        };
    }
}

$handlerStack = HandlerStack::create();
$handlerStack->push(new MyCustomMiddleware);
$config['handler'] = $handlerStack;
$apiClient = new GuzzleClient(new Client($config), new Description(...));
The boilerplate solution for GuzzleClient is the same as for GuzzleHttp\Client because regardless of using Guzzle Services or not, your request-modifying middleware needs to go on GuzzleHttp\Client.
You can also use Middleware::mapRequest to manipulate the request:
$handler->push(Middleware::mapRequest(function (RequestInterface $request) { ... }));
I'm not 100% certain this is the thing you're looking for, but I assume you can add your extra parameter to the request in there.
private function createAuthStack()
{
    $stack = HandlerStack::create();
    $stack->push(Middleware::mapRequest(function (RequestInterface $request) {
        return $request->withHeader('Authorization', "Bearer " . $this->accessToken);
    }));
    return $stack;
}
More Examples here: https://hotexamples.com/examples/guzzlehttp/Middleware/mapRequest/php-middleware-maprequest-method-examples.html
I am encountering a weird issue here...
After I seemingly insert some data successfully into my db collection, I can't seem to get it to reflect using db.collection.find().fetch().
Below is the code I enter into my Chrome console:
merchantReviews.insert({merchantScore: "5.5"}, function() {
console.log("Review value successfully inserted");
});
This yields:
"9sd5787kj7dsd98ycnd"
Review value successfully inserted
I think the returned value "9sd5787kj7dsd98ycnd" is an indication of a successful insert into the collection. Then, when I run:
merchantReviews.find().fetch()
I get:
[]
Can anyone tell me what is going on here?
Looking forward to your help.
There are two possibilities here: either the insert fails on the server even though it passes on the client, or you haven't subscribed to your collection.
In case the insert fails on the server (most likely due to insufficient permissions, if you have removed the insecure package but have not declared any collection.allow rules), the client code still returns the intended insert ID (in your case, "9sd5787kj7dsd98ycnd"). The callback is called once the server has confirmed that the insert has either failed or succeeded; if it has failed, the callback is called with a single error argument. To catch this, you can instead insert the document like this:
merchantReviews.insert({merchantScore: "5.5"}, function(error) {
    if (error) {
        console.error(error);
    } else {
        console.log("Review value successfully inserted");
    }
});
If this still logs a successful insert, then you haven't subscribed to the collection, and you have removed the autopublish package. You can read about Meteor's publish-subscribe system here. Basically, you have to publish the collection in server-side code:
Meteor.publish('reviews', function () {
    return merchantReviews.find();
});
Then in client code (or your browser's JS console) you need to subscribe to the collection with Meteor.subscribe('reviews'). Now calling merchantReviews.find().fetch() should return all documents in the collection.
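For illustration, a minimal client-side sketch (the 'reviews' name matches the publication above; the ready callback is just one way to wait for the data to arrive):
// Subscribe, then log the documents once the subscription is ready.
Meteor.subscribe('reviews', function () {
    console.log(merchantReviews.find().fetch());
});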
I am having this phantom problem in my application where one in every five requests on a specific page (in an ASP.NET MVC application) throws this error:
Npgsql.NpgsqlException: ERROR: 57014: canceling statement due to user request
at Npgsql.NpgsqlState.<ProcessBackendResponses>d__0.MoveNext()
at Npgsql.ForwardsOnlyDataReader.GetNextResponseObject(Boolean cleanup)
at Npgsql.ForwardsOnlyDataReader.GetNextRow(Boolean clearPending)
at Npgsql.ForwardsOnlyDataReader.Read()
at Npgsql.NpgsqlCommand.GetReader(CommandBehavior cb)
...
On the Npgsql GitHub page I found the following bug report: 615. It says there:
Regardless of what exactly is happening with Dapper, there's
definitely a race condition when cancelling commands. Part of this is
by design, because of PostgreSQL: cancel requests are totally
"asynchronous" (they're delivered via an unrelated socket, not as part
of the connection to be cancelled), and you can't restrict the
cancellation to take effect only on a specific command. In other
words, if you want to cancel command A, by the time your cancellation
is delivered command B may already be in progress and it will be
cancelled instead.
Although they have made "changes to hopefully make cancellations much safer" in Npgsql 3.0.2, my current code is incompatible with that version because of the migration steps described here.
My current workaround (stupid): I have commented out the code in Dapper that calls command.Cancel(), and the problem seems to be gone.
if (reader != null)
{
    if (!reader.IsClosed && command != null)
    {
        //command.Cancel();
    }
    reader.Dispose();
    reader = null;
}
Is there a better solution to the problem? And secondly, what am I losing with the current fix (except that I have to remember to reapply the change every time I update Dapper)?
Configuration:
.NET 4.5,
Npgsql 2.2.5,
PostgreSQL 9.3
I found out why my code didn't dispose the reader, resulting in command.Cancel() being called. This only happens with the QueryMultiple method, when not every refcursor is read.
Changing the code from:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    if (client != null)
    {
        client.Address = multipleResults.Read<Address>().Single();
    }
    return client;
}
To:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    var address = multipleResults.Read<Address>().SingleOrDefault();
    if (client != null)
    {
        client.Address = address;
    }
    return client;
}
This fixed the issue and now the reader is properly disposed and command.Cancel() is not invoked.
Hope this helps anyone else!
UPDATE
The Npgsql docs for version 2.2 state:
Npgsql is able to ask the server to cancel commands in progress. To do
this, call the NpgsqlCommand’s Cancel method. Note that another thread
must handle the request as the main thread will be blocked waiting for
command to finish. Also, the main thread will raise an exception as a
result of user cancellation. (The error code is 57014.)
I have also posted an issue on the Dapper GitHub page.