(Perl/POE) In POE::Component::IRC, How do you return/get data from a package_state in an external subroutine? - perl

I am trying to get the output data from a package_state in my IRC bot, which uses POE::Component::IRC as a base. But I just cannot seem to do it.
Basically, in a subroutine outside of the POE session, I wish to get the data from an event subroutine fired by POE when it receives the data from the server.
I've tried saving the data in a global array and even an external file, but the outer subroutine reads the old data before the new data has been written.
More specifically, I am trying to get this bot to check if someone is 'ison' and, if they are, return true (or get all the data (@_) from irc_303).
Something like this:
sub check_ison {
    my $who = "someguy";
    $irc->yield(ison => $who);
    my $data = (somehow retrieve data from irc_303);
    return $data;    # or true if $data
}

It sounds like you want a synchronous solution to an asynchronous problem. Due to the asynchronous nature of IRC (and POE, for that matter ...), you'll need to issue your ISON query and handle the numeric response as it comes in.
As far as I know, most client NOTIFY implementations issue an ISON periodically (POE::Component::IRC provides timer sugar via POE::Component::Syndicator), update their state, and tell the user if something changes.
You have options...
You could issue ISONs on a timer, save state appropriately in your numeric response handler, and provide a method to query that state. If your application looks more like a client (that is, a user or something else needs to be notified when something changes), your numeric response handler could do a basic list comparison and issue appropriate events for users appearing/disappearing.
Otherwise, you could simply have a 'check_ison' that issues the ISON and yields some sort of 'response received' event from the numeric response handler, letting you know fresh data is available.
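For illustration, here's a rough, untested sketch of the timer-and-cache approach; the server, nicks, and 30-second interval are all placeholder assumptions:

use strict;
use warnings;
use POE;
use POE::Component::IRC;

my %online;    # nick => 1 for everyone the last ISON reported as online

my $irc = POE::Component::IRC->spawn(
    nick   => 'mybot',
    server => 'irc.example.net',    # placeholder server
);

POE::Session->create(
    package_states => [ main => [qw(_start irc_001 poll_ison irc_303)] ],
);

sub _start {
    $irc->yield(register => 'all');
    $irc->yield(connect  => {});
}

sub irc_001 {    # welcome numeric: we're connected, start polling
    $_[KERNEL]->delay(poll_ison => 0);
}

sub poll_ison {
    $irc->yield(ison => 'someguy');
    $_[KERNEL]->delay(poll_ison => 30);    # ask again in 30 seconds
}

sub irc_303 {    # RPL_ISON: ARG1 holds the list of nicks that are online
    ( my $text = $_[ARG1] ) =~ s/^://;
    %online = map { $_ => 1 } split ' ', $text;
}

# A plain subroutine can now answer from the cache without blocking:
sub check_ison { return exists $online{ $_[0] } }

POE::Kernel->run();

The point is that check_ison never waits on the network; it only reports whatever the most recent irc_303 reply said.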

Related

(React/redux) Using Sockets with redux to only send changed pieces of state

I'm trying to figure out how to use socket.io alongside of my react/redux application, and seeing how much I can optimize when doing so. So, on the server I have a pretty basic setup with the sockets you can see here :
store.subscribe(() => {
  io.emit('state', store.getState().toJS());
});

io.on('connection', (socket) => {
  socket.emit('state', store.getState().toJS());
  socket.on('action', function (action) {
    action.headers = socket.request.headers;
    store.dispatch(action);
  });
});
So the only thing out of the ordinary is that I was sticking the request headers onto the actions for use later. What I would like to achieve is something like this -
io.connections[i].emit('state')
and add
socket.on('state', (action) => {
  socket.emit('state', getUserSpecificState(store));
});
The idea is that this would let me loop through all the connections and use each connection's socket to request state for that specific user. I don't know if something like this is possible, and I'm looking for feedback on how to send back only user-specific information. It would also be nice to send back only the part of the state changed by the action (rather than the whole state) and have the front-end store assemble it. Any/all input is more than welcome, thanks!
It doesn't look like Redux's job to handle this logic; put it in a separate module and connect it to the state using middleware, or just use the state object "from the outside". Speaking of state management, what are you trying to implement? It looks like a case for a CRDT library such as swarm.js or diffsync.

Sending an unbuffered response in Plack

I'm working on a section of a Perl module that creates a large CSV response. The server runs on Plack, on which I'm far from an expert.
Currently I'm using something like this to send the response:
$res->content_type('text/csv');

my $body = '';
query_data(
    parameters => \%query_parameters,
    callback   => sub {
        my $row_object = shift;
        $body .= $row_object->to_csv;
    },
);

$res->body($body);
return $res->finalize;
However, that query_data function is not a fast one and retrieves a lot of records. In there, I'm just concatenating each row into $body and, after all rows are processed, sending the whole response.
I don't like this for two obvious reasons: First, it takes a lot of RAM until $body is destroyed. Second, the user sees no response activity until that method has finished working and actually sends the response with $res->body($body).
I tried to find an answer to this in the documentation without finding what I need.
I also tried calling $res->body($row_object->to_csv) in my callback, but it seems that ends up sending only the last call I made to $res->body, overriding all the previous ones.
Is there a way to send a Plack response that flushes the content on each row, so the user starts receiving content in real time as the data is gathered, without having to accumulate all the data in a variable first?
Thanks in advance for any comments!
You can't use Plack::Response because that class is intended for representing a complete response, and you'll never have a complete response in memory at one time. What you're trying to do is called streaming, and PSGI supports it even if Plack::Response doesn't.
Here's how you might go about implementing it (adapted from your sample code):
my $env = shift;

if ( !$env->{'psgi.streaming'} ) {
    # do something else...
}

# Immediately start the response and stream the content.
return sub {
    my $responder = shift;
    my $writer = $responder->( [ 200, [ 'Content-Type' => 'text/csv' ] ] );
    query_data(
        parameters => \%query_parameters,
        callback   => sub {
            my $row_object = shift;
            $writer->write( $row_object->to_csv );
            # TODO: Need to call $writer->close() when there is no more data.
        },
    );
};
Some interesting things about this code:
Instead of returning a Plack::Response object, you can return a sub. This subroutine will be called some time later to get the actual response. PSGI supports this to allow for so-called "delayed" responses.
The subroutine we return gets an argument that is a coderef (in this case, $responder) that should be called and passed the real response. If the real response does not include the "body" (i.e. what is normally the 3rd element of the arrayref), then $responder will return an object that we can write the body to. PSGI supports this to allow for streaming responses.
The $writer object has two methods, write and close which both do exactly as their names suggest. Don't forget to call the close method to complete the response; the above code doesn't show this because how it should be called is dependent on how query_data and your other code works.
Most servers support streaming like this. You can check $env->{'psgi.streaming'} to be sure that yours does.
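To make this concrete, here is a minimal, self-contained .psgi sketch of the same streaming pattern; the @rows array is a hypothetical stand-in for what query_data would produce:

use strict;
use warnings;

# Hypothetical rows standing in for the real query results.
my @rows = ( "id,name\n", "1,alice\n", "2,bob\n" );

my $app = sub {
    my $env = shift;
    die "this server does not support streaming\n"
        unless $env->{'psgi.streaming'};

    return sub {
        my $responder = shift;
        my $writer = $responder->( [ 200, [ 'Content-Type' => 'text/csv' ] ] );
        $writer->write($_) for @rows;    # each chunk goes out as it's written
        $writer->close;                  # ends the response
    };
};

$app;

Save it as app.psgi and run it with plackup app.psgi; the default server sets psgi.streaming, as do Starman and Twiggy.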
Plack is the glue between your application and the web server. Are you using a web application framework on top of it, like Mojolicious or Dancer2, or a server below it, like Apache or Starman? That affects how the buffering works.
Here is an example by Plack's author:
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/echo-stream-sync.psgi
Or you can do it easily by using Dancer2 on top of Plack and Starman or Apache:
https://metacpan.org/pod/distribution/Dancer2/lib/Dancer2/Manual.pod#Delayed-responses-Async-Streaming
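Roughly, the Dancer2 version from that manual page looks like this (a sketch, assuming an async-capable PSGI server such as Twiggy; the route and payload are made up):

use Dancer2;

get '/csv' => sub {
    delayed {
        response_header 'Content-Type' => 'text/csv';
        flush;                  # send the headers and start streaming
        content "id,name\n";    # each content call pushes one chunk
        content "1,alice\n";
        done;                   # close the stream
    };
};

start;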
Some reading material for you :)
https://metacpan.org/pod/PSGI#Delayed-Response-and-Streaming-Body
https://metacpan.org/pod/Plack::Middleware::BufferedStreaming
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/echo-stream.psgi
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/nonblock-hello.psgi
So copy/paste/adapt and report back please

Catalyst event loops only reaching a single client at a time

I'm working on a Catalyst/PSGI application that would make great use of asynchronous streaming; however, beyond a simple timer (like here: http://www.catalystframework.org/calendar/2013/13), I'm a little stumped on how to implement more "global" events.
By global events, I mean things like:
a periodic timer that is the same for all clients
the visit to a given page by a single client (but updates all clients)
a file stat watcher that will update all clients when a file changes.
Correct me if I'm wrong, but to me these all seem very different from the example linked above, which will give each client a different counter. I would like to have events that happen "across the board."
An example of what I've tried (using #2 from my list above):
has 'write_fh' => ( is => 'rw', predicate => 'has_write_fh' );

sub events : Path('/stream') Args(0) {
    my ( $self, $c ) = @_;
    $c->res->body("");
    $c->res->content_type('text/event-stream');
    $self->write_fh( $c->res->write_fh() );
}

sub trigger : Path('/trigger') Args(0) {
    my ( $self, $c ) = @_;
    $self->write_fh->write( *the event string* );
}
When I run this, it actually gets further than I would expect - the event does get triggered, but unreliably. With two browsers open, sometimes the event is sent to one, and sometimes to the other.
Now, I think I understand why this would never work: the client who hits /trigger has no knowledge of all the other clients watching /stream, so the write_fh I'm trying to use is not useful.
But if each client's request is in its own contained bubble, how am I to access their stream from some other request?
Or am I completely on the wrong track...?
Your problem with write_fh is that this event is singlecast: once it has been received by one connection, it won't be received by any other. So one of the connections catches it, and the other simply doesn't.
You need to broadcast your events. Take a look at AnyEvent::IRC to see how it can be done.
(Note that it was written for an old version of AnyEvent, but it should still work.)
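One way to broadcast within a single server process is to keep every client's write_fh instead of a single one. A rough, untested sketch against the controller from the question (note that under a preforking server like Starman each worker holds its own list, so this only truly broadcasts on a single-process async server such as Twiggy):

has 'clients' => ( is => 'ro', default => sub { [] } );

sub events : Path('/stream') Args(0) {
    my ( $self, $c ) = @_;
    $c->res->content_type('text/event-stream');
    # Remember this client's handle instead of overwriting a single slot.
    push @{ $self->clients }, $c->res->write_fh();
}

sub trigger : Path('/trigger') Args(0) {
    my ( $self, $c ) = @_;
    my @alive;
    for my $fh ( @{ $self->clients } ) {
        # A client may have disconnected; drop handles that fail to write.
        push @alive, $fh if eval { $fh->write("data: ping\n\n"); 1 };
    }
    @{ $self->clients } = @alive;
    $c->res->body('sent');
}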

store and setRequest

I have a job-queue mechanism in ZF.
The job queue simply stores the function call (class, method, and params) and later executes it as a CLI daemon. The daemon works; however, in places the application looks for information from the request object, and when called from the CLI these places fail or get no info.
I would like to store the original request object together with the job and, when the job is processed, set the request object back as if the job were being done by the original request, something along the lines of the following pseudo code:
$ser_request = serialize(Zend_Controller_Front::getInstance()->getRequest());
// --> save to db
// --> retrieve from db
Zend_Controller_Front::getInstance()->setRequest(unserialize($ser_request));
The aim is to store and replay the jobs later without having to change the rest of the application.
Any suggestions on how to do that?
I am not sure if this works, but here's an idea: try implementing the __sleep() and __wakeup() magic methods for the request object. I haven't tried it out, but maybe it's at least a starting point.

How to write/update session data before a request end in Perl Catalyst MVC Framework

How can I write or update session data before a request ends in the Perl Catalyst MVC framework?
I am using Session::State::Cookie and Session::Store::FastMmap.
I need to ensure that the data is available before the long-running request completes.
This is what worked for me.
To ensure the information is updated at the moment it is set in the long-running request, I call $c->finalize_session right after updating some important information related to the session:
$c->session->{important_info} = "new value";
$c->finalize_session;
I verified that other requests get the right value after that.
I have not observed any side effects from calling $c->finalize_session multiple times during a request just to keep the session data up to date, but I am not certain about this.
One piece of information I set this way is a counter that updates a progress bar to give the user feedback (because the task takes a long time). I do not know if it is the best way to do that; I would appreciate any suggestions.
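For what it's worth, the progress-counter pattern looks roughly like this (process_item and @items are hypothetical stand-ins for the real work):

my $total = scalar @items;
for my $i ( 1 .. $total ) {
    process_item( $items[ $i - 1 ] );    # hypothetical unit of work
    $c->session->{progress} = int( 100 * $i / $total );
    $c->finalize_session;    # persist now so polling requests can see it
}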
You can do some last-second processing just before a request is completed and the response sent to the client by overriding the handle_request method in your application's main module or a plugin.
sub handle_request {
    my ( $c, @args ) = @_;

    my $status = $c->next::method(@args);

    # Do some last-minute processing before the request is completed.

    return $status;
}
I've overridden this method before to collect stats about a request or restart a worker process if it uses too much memory. Let me know if this is helpful or if you have more questions about it.
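For example, a minimal sketch of the stats idea, just timing each request (the log format is arbitrary):

use Time::HiRes qw(time);

sub handle_request {
    my ( $c, @args ) = @_;
    my $start  = time;
    my $status = $c->next::method(@args);
    # Log how long the whole request took once the response is ready.
    $c->log->info( sprintf 'request finished in %.3fs', time - $start );
    return $status;
}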