Mojolicious websocket request query string - perl

I'm experiencing unexpected behaviour while trying to access query string parameters in a Mojolicious WebSocket request. Say my request looks like this:
ws://127.0.0.1:3000/websock_action?item_id=1234
Then in my Mojolicious controller code I try to get the value of item_id in any of the following ways:
#in mojo controller
my $item_id = $self->param('item_id');
my $item_id = scalar $self->param('item_id');
my $item_id = scalar $self->tx->req->url->query->param('item_id');
The issue is that the item_id I get is often from a previous request, whichever of these techniques I use. My app is currently being served with hypnotoad.
Are query string parameters supported on WebSocket requests in Mojolicious? Is there a more reliable way to access them? Essentially I'd like to know whether I'm trying to do something that isn't supported, so I can tell whether the problem is something specific to my app.
Thanks in advance for any help

I suspect that what is happening is that the parameters are passed in the initial HTTP request, which is then upgraded to a WebSocket connection, at which point they are no longer available.
As Daren said, pass the data in the WebSocket message itself. Something like...
var ws = $.websocket("ws://127.0.0.1:3000/websock_action", {
    events: { message: function(e) {} }
});
ws.send('message', 1234);
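On the server side, a minimal Mojolicious::Lite sketch for picking that value up might look like this (untested; it assumes the jQuery plugin above wraps messages as JSON with a data field, so check your client's wire format and adapt the handler to your full app's controller):

use Mojolicious::Lite;
use Mojo::JSON 'decode_json';

websocket '/websock_action' => sub {
    my $c = shift;

    $c->on(message => sub {
        my ($c, $msg) = @_;

        # Assumption: the client sends JSON such as {"type":"message","data":1234},
        # so decode it and pull the value out instead of reading the query string.
        my $data    = decode_json($msg);
        my $item_id = $data->{data};

        $c->send("got item $item_id");
    });
};

app->start;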

Is there a way to save a ParseObject without making an HTTP request to the REST API?

I didn't find much about this topic, so I wonder whether it is an easy task to achieve or actually not possible. My problem is that I see a lot of HTTP requests on my server even when a Cloud function is called only once, so I suppose that all the object saves and queries are made using the REST API. I have so many HTTP requests that several hundred are timing out, presumably because of the huge traffic being generated.
Is there a way to save a ParseObject by executing the query directly against MongoDB? If it's not possible at the moment, can you give me some hints as to whether there are already helper functions to convert a ParseQuery and a ParseObject to their MongoDB equivalents, so that I can use the MongoDB driver directly?
It's really important for my application to reduce HTTP request traffic at the moment.
Any idea? Thanks!
EDIT:
Here is an example to reproduce the concept:
Make a cloud function:
Parse.Cloud.define('hello', async (req, res) => {
    let testClassObject = new Parse.Object('TestClass');
    await testClassObject.save(null, {useMasterKey: true});
    let query = new Parse.Query('TestClass');
    let testClassRecords = await query.find({useMasterKey: true});
    return testClassRecords;
});
Make a POST request:
POST http://localhost:1337/parse/functions/hello
Capture HTTP traffic on port 1337 using Wireshark:
You can see that for one POST request, two more are made because of the save / query code. My goal would be to avoid these two HTTP calls and instead make a database call directly, so that less traffic goes through the whole webserver stack.
Link to the GitHub issue: https://github.com/parse-community/parse-server/issues/6549
The Parse Server directAccess option should do the magic for you. Please make sure you are initializing Parse Server like this:
const api = new ParseServer({
    ...
    directAccess: true
});
...

{guzzle-services} How to use middleware with GuzzleClient, as opposed to directly with raw GuzzleHttp\Client?

My middleware need is to:
add an extra query param to requests made by a REST API client derived from GuzzleHttp\Command\Guzzle\GuzzleClient
I cannot do this directly when invoking APIs through the client because GuzzleClient uses an API specification and it only passes on "legal" query parameters. Therefore I must install a middleware to intercept HTTP requests after the API client prepares them.
The track I am currently on:
$apiClient->getHandlerStack()->push($myMiddleware)
The problem:
I cannot figure out the RIGHT way to assemble the functional Russian doll that $myMiddleware must be. This is an insane gazilliardth-order function scenario, and the exact right way the function should be written seems to be different from the extensively documented way of doing things when working with GuzzleHttp\Client directly. No matter what I try, I end up having wrong things passed to some layer of the matryoshka, causing an argument type error, or I end up returning something wrong from a layer, causing a type error in Guzzle code.
I made a carefully weighted decision to give up trying to understand. Please just give me a boilerplate solution for GuzzleHttp\Command\Guzzle\GuzzleClient, as opposed to GuzzleHttp\Client.
The HandlerStack that is used to handle middleware in GuzzleHttp\Command\Guzzle\GuzzleClient can either transform/validate a command before it is serialized or handle the result after it comes back. If you want to modify the command after it has been turned into a request, but before it is actually sent, then you'd use the same method of Middleware as if you weren't using GuzzleClient - create and attach middleware to the GuzzleHttp\Client instance that is passed as the first argument to GuzzleClient.
use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Command\Guzzle\GuzzleClient;
use GuzzleHttp\Command\Guzzle\Description;
use Psr\Http\Message\RequestInterface;

class MyCustomMiddleware
{
    public function __invoke(callable $handler)
    {
        return function (RequestInterface $request, array $options) use ($handler) {
            // ... do something with the request
            return $handler($request, $options);
        };
    }
}
$handlerStack = HandlerStack::create();
$handlerStack->push(new MyCustomMiddleware);
$config['handler'] = $handlerStack;
$apiClient = new GuzzleClient(new Client($config), new Description(...));
The boilerplate solution for GuzzleClient is the same as for GuzzleHttp\Client because regardless of using Guzzle Services or not, your request-modifying middleware needs to go on GuzzleHttp\Client.
You can also use
$handler->push(Middleware::mapRequest(function () { ... }));
or something of that sort to manipulate the request. I'm not 100% certain this is what you're looking for, but I assume you can add your extra parameter to the request in there.
private function createAuthStack()
{
    $stack = HandlerStack::create();
    $stack->push(Middleware::mapRequest(function (RequestInterface $request) {
        return $request->withHeader('Authorization', "Bearer " . $this->accessToken);
    }));
    return $stack;
}
More Examples here: https://hotexamples.com/examples/guzzlehttp/Middleware/mapRequest/php-middleware-maprequest-method-examples.html
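Applied to the original goal of adding an extra query parameter, a mapRequest sketch might look like the following (the parameter name is hypothetical, and it assumes a guzzlehttp/psr7 version that ships the Query helper):

use GuzzleHttp\Middleware;
use GuzzleHttp\Psr7\Query;
use Psr\Http\Message\RequestInterface;

$stack->push(Middleware::mapRequest(function (RequestInterface $request) {
    // Parse the existing query string, add the extra parameter, rebuild the URI.
    $params = Query::parse($request->getUri()->getQuery());
    $params['extra_param'] = 'some-value'; // hypothetical parameter name and value

    return $request->withUri(
        $request->getUri()->withQuery(Query::build($params))
    );
}));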

Sending an unbuffered response in Plack

I'm working on a section of a Perl module that creates a large CSV response. The server runs on Plack, on which I'm far from an expert.
Currently I'm using something like this to send the response:
$res->content_type('text/csv');

my $body = '';
query_data(
    parameters => \%query_parameters,
    callback   => sub {
        my $row_object = shift;
        $body .= $row_object->to_csv;
    },
);

$res->body($body);
return $res->finalize;
However, that query_data function is not a fast one and retrieves a lot of records. In there, I'm just concatenating each row into $body and, after all rows are processed, sending the whole response.
I don't like this for two obvious reasons: First, it takes a lot of RAM until $body is destroyed. Second, the user sees no response activity until that method has finished working and actually sends the response with $res->body($body).
I looked for an answer to this in the documentation, but couldn't find what I need.
I also tried calling $res->body($row_object->to_csv) in my callback, but it seems that only the content from the last call to $res->body gets sent, overriding all previous ones.
Is there a way to send a Plack response that flushes the content on each row, so the user starts receiving content in real time as the data is gathered, without having to accumulate all the data in a variable first?
Thanks in advance for any comments!
You can't use Plack::Response because that class is intended for representing a complete response, and you'll never have a complete response in memory at one time. What you're trying to do is called streaming, and PSGI supports it even if Plack::Response doesn't.
Here's how you might go about implementing it (adapted from your sample code):
my $env = shift;

if (!$env->{'psgi.streaming'}) {
    # do something else...
}

# Immediately start the response and stream the content.
return sub {
    my $responder = shift;
    my $writer = $responder->([200, ['Content-Type' => 'text/csv']]);

    query_data(
        parameters => \%query_parameters,
        callback   => sub {
            my $row_object = shift;
            $writer->write($row_object->to_csv);
            # TODO: Need to call $writer->close() when there is no more data.
        },
    );
};
Some interesting things about this code:
Instead of returning a Plack::Response object, you can return a sub. This subroutine will be called some time later to get the actual response. PSGI supports this to allow for so-called "delayed" responses.
The subroutine we return gets an argument that is a coderef (in this case, $responder) that should be called and passed the real response. If the real response does not include the "body" (i.e. what is normally the 3rd element of the arrayref), then $responder will return an object that we can write the body to. PSGI supports this to allow for streaming responses.
The $writer object has two methods, write and close, which do exactly what their names suggest. Don't forget to call the close method to complete the response; the code above doesn't show this because where it should be called depends on how query_data and the rest of your code work.
Most servers support streaming like this. You can check $env->{'psgi.streaming'} to be sure that yours does.
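If query_data only returns after the last row has been handed to the callback, one place to put the close call is right after it returns; a hedged sketch (untested, and assuming query_data blocks until it is finished):

return sub {
    my $responder = shift;
    my $writer = $responder->([200, ['Content-Type' => 'text/csv']]);

    query_data(
        parameters => \%query_parameters,
        callback   => sub { $writer->write(shift->to_csv) },
    );

    # Safe here only because query_data does not return until every
    # row has been passed to the callback above.
    $writer->close;
};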
Plack is middleware. Are you using a web application framework on top of it, like Mojolicious or Dancer2, or a server like Apache or Starman below it? That would affect how the buffering works.
This example by Plack's author shows one way to do it:
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/echo-stream-sync.psgi
Or you can do it easily by using Dancer2 on top of Plack and Starman or Apache:
https://metacpan.org/pod/distribution/Dancer2/lib/Dancer2/Manual.pod#Delayed-responses-Async-Streaming
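For the Dancer2 route itself, a rough sketch using the delayed / flush / content / done keywords from that manual page (untested, and reusing the question's query_data for illustration):

use Dancer2;

get '/report.csv' => sub {
    content_type 'text/csv';

    delayed {
        flush;    # send the status and headers, then start streaming
        query_data(
            parameters => \%query_parameters,
            callback   => sub { content shift->to_csv },
        );
        done;     # close the stream once query_data has finished
    };
};

start;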
Regards, Peter
Some reading material for you :)
https://metacpan.org/pod/PSGI#Delayed-Response-and-Streaming-Body
https://metacpan.org/pod/Plack::Middleware::BufferedStreaming
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/echo-stream.psgi
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/nonblock-hello.psgi
So copy/paste/adapt and report back please

Restangular - how to cancel/implement my own request

I found a few examples of using fullRequestInterceptor and httpConfig.timeout to allow canceling requests in Restangular.
example 1 | example 2
This is how I'm adding the interceptor:
app.run(function (Restangular, $q) {
Restangular.addFullRequestInterceptor(function (element, operation, what, url, headers, params, httpConfig) {
I managed to abort the request by putting a resolved promise in timeout (results in an error being logged and the request goes out but is canceled), which is not what I want.
What I'm trying to do: I want to make the AJAX request myself, with my own requests, and pass the result back to whatever component used Restangular. Is this possible?
I've been looking for a Restangular way to solve it, but I should have been looking for an Angular way :)
Overriding dependency at runtime in AngularJS
Looks like you can extend $http before it ever gets to Restangular. I haven't tried it yet, but it looks like it would fit my needs 100%.
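For reference, a rough, untested sketch of the $provide.decorator approach described in that question (the wrapper body is where you would issue or rewrite requests yourself):

app.config(function ($provide) {
    $provide.decorator('$http', function ($delegate) {
        // Wrap the callable form of $http so requests made as $http(config)
        // pass through here before anything (including Restangular) sends them.
        var wrapped = function (config) {
            // ...inspect, modify, or replace the request config here...
            return $delegate(config);
        };
        // Copy $http.get, $http.post, defaults, etc. onto the wrapper.
        // Note: those shortcut methods still call the original $delegate internally.
        angular.extend(wrapped, $delegate);
        return wrapped;
    });
});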
I'm using requestInterceptor a lot, but only to change parameters and headers of my request.
Basically, addFullRequestInterceptor helps you change your request before it is sent. So why not change the URL you want to call?
There is the httpConfig object that you can modify and return, and if it's close to the config of $http (and I bet it is), you can change the URL and even the method, turning the original request into an entirely new one.
After that you don't need the timeout, only to return an httpConfig customised to your needs.
RestangularConfigurer.addFullRequestInterceptor(function (element, operation, route, url, headers, params, httpConfig) {
    httpConfig.url = "http://google.com";
    httpConfig.method = "GET";
    httpConfig.params = "";
    return {
        httpConfig: httpConfig
    };
});
It will be passed on, and your service or controller won't know that anything changed. That's the principle of an interceptor: it lets you change things and return them to be used by the next step, a bit like a middleware. So it is transparent to the caller, but the call is made to whatever you want.

How to query the roster using JSJAC XMPP client

How can I query the full roster using the JSJaC XMPP client? I have tried the following function, but it does not work:
function getRoster(con) {
    var roster = new JSJaCIQ();
    roster.setIQ(null, 'get', 'roster_1');
    roster.setQuery(NS_ROSTER);
    con.send(roster);
}
Instead of con.send, try:
con.sendIQ(roster, {result_handler: function (aIq, arg) {
    var node = aIq.getQuery();
    // do something with the roster
}});
You need a callback that fires when the roster is returned. To be complete, set an error_handler as well, in case an IQ error is returned or the request times out.
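For completeness, a sketch wiring up both handlers (the item handling is illustrative only; getQuery() returns the DOM node of the query element):

con.sendIQ(roster, {
    result_handler: function (aIq, arg) {
        // The roster entries are <item/> children of the query element.
        var items = aIq.getQuery().getElementsByTagName('item');
        for (var i = 0; i < items.length; i++) {
            console.log(items[i].getAttribute('jid'));
        }
    },
    error_handler: function (aIq) {
        // Fired on an IQ error or timeout.
        console.log('roster query failed');
    }
});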
Sorry for commenting on such an old question, however this pops up as the #1 result on Google for 'JSJaC roster' and the above answers didn't work for me. I don't know whether something changed in the JSJaC API, but I was receiving 'service-unavailable' IQ errors. I had to use this code instead:
var rosterRequest = new JSJaCIQ();
rosterRequest.setType('get');
rosterRequest.setQuery(NS_ROSTER);
connection.send(rosterRequest);
(so no domain setting and no id setting - just the type, and namespace).