Please accept my apology if this is something terribly obvious that I am simply missing. I have a Perl script that hits the Asana REST API at a burst rate of a bit more than 100 calls per minute. When I stress test it, I do occasionally hit their rate limit and see error 429. I know from reading Asana's documentation that it will return a "retry-after" response header, but I can't figure out for the life of me how to retrieve / open / read this header. Any advice you can provide would be appreciated.
Edit:
My code is attached below. Of course I've erased sensitive information such as my API key and project number, but the core code is here. If I run this just once, it doesn't produce enough calls per minute to trigger an error. I have to run it about 3-4 times simultaneously to produce the error. One might say "well don't do that." While correct, the point of this exercise is to produce the error, so running it four times simultaneously is good.
When you do so with a valid API key and project number, occasionally you will get this error:
{"errors":[{"message":"You have made too many requests recently. Please, be chill."}]}
My question is how to retrieve the header that apparently includes a retry-after field along with a number of seconds. I may just resort to building in a delay of 20 seconds every time an error is returned, but I'd prefer to handle the error more elegantly.
#!/usr/local/bin/perl
use strict;
use warnings;

my $counter = 0;
my $AsanaAPIcode   = "...";   # API key redacted
my $AsanaProjectID = "...";   # project ID redacted

# -u sends the API key as the basic-auth user name (with an empty password).
my $AsanaFullString = 'curl -u ' . $AsanaAPIcode . ': https://app.asana.com/api/1.0/projects/' . $AsanaProjectID . '?opt_fields=archived';

my $APIoutput = `$AsanaFullString`;
print $APIoutput;

my $startTime  = time;
my $totalCount = 200;

# Hammer the API in a tight loop to provoke the 429 response.
while ($counter <= $totalCount) {
    print $counter . "\n";
    $APIoutput = `$AsanaFullString`;
    print $APIoutput . "\n";
    $counter++;
}

my $endTime   = time;
my $totalTime = $endTime - $startTime;
print "Total time = " . $totalTime . " seconds.\n";
print $totalCount / ($totalTime / 60) . " API calls per minute.\n";
print "end";
The 'Retry-After' value is in the HTTP response headers, which are lost by the backtick call to curl (by default curl prints only the body).
A clunky solution is to use curl -D - (--dump-header), which writes all the HTTP response headers into the output ahead of the body; you would then have to parse them out and strip them yourself.
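For what it's worth, a rough sketch of that approach (untested, reusing the $AsanaFullString command from the question, and assuming a single header block with no redirects) might look like this:
# -s silences the progress meter; -D - dumps the response headers to stdout ahead of the body
my $raw = `$AsanaFullString -s -D -`;

# Headers and body are separated by a blank line.
my ($headers, $body) = split /\r?\n\r?\n/, $raw, 2;

if ($headers =~ m{^HTTP/\S+\s+429} && $headers =~ /^Retry-After:\s*(\d+)/mi) {
    my $wait = $1;
    print "Rate limited; sleeping $wait seconds before retrying...\n";
    sleep $wait;
}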
A better solution would be to use the LWP library (perldoc LWP::UserAgent).
Obviously I can't test this code without an API key...
I think you can inject these into the URL (https://name:password@app.asana.com/api...)
use LWP::UserAgent;
use LWP::Protocol::https;
use HTTP::Request;

my $agent = LWP::UserAgent->new();   # check the LWP docs for extra params

my $request  = HTTP::Request->new( GET => 'https://app.asana.com/api/1.0/projects/' . $AsanaProjectID . '?opt_fields=archived' );
my $response = $agent->request($request);

if ($response->code == 429) {
    my $retry = $response->header('Retry-After');
    ...
}
Or...you could try CPAN for something like WWW::Asana.
http://search.cpan.org/dist/WWW-Asana/
We ran into the issue with our Asana Connector for Klok. We contacted the folks at Asana and they were willing to add the "retry after" amount into the body of the response. So, you can now grab it from the "retry_after" property:
{
  "errors": [{"message": "You have made too many requests recently. Please, be chill."}],
  "retry_after": 30
}
This was a big help for us since we are using Adobe AIR 2.x which doesn't give us access to the response headers of an error response.
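If you are calling the API from Perl (as in the question), that body field can be pulled straight out of the curl output, which sidesteps the header problem entirely. A minimal, untested sketch, assuming the same $AsanaFullString command as above:
use JSON::PP;   # core module since Perl 5.14

my $APIoutput = `$AsanaFullString`;
my $data = eval { decode_json($APIoutput) };

# On a 429 the body carries both the errors array and retry_after (in seconds).
if ($data && $data->{errors} && defined $data->{retry_after}) {
    print "Rate limited; waiting $data->{retry_after} seconds...\n";
    sleep $data->{retry_after};
}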
There is a CPAN module for that, WWW::Asana, and it handles the rate limit response properly.
I'm working on a section of a Perl module that creates a large CSV response. The server runs on Plack, with which I'm far from expert.
Currently I'm using something like this to send the response:
$res->content_type('text/csv');
my $body = '';
query_data(
    parameters => \%query_parameters,
    callback   => sub {
        my $row_object = shift;
        $body .= $row_object->to_csv;
    },
);
$res->body($body);
return $res->finalize;
However, that query_data function is not a fast one and retrieves a lot of records. In there, I'm just concatenating each row into $body and, after all rows are processed, sending the whole response.
I don't like this for two obvious reasons: First, it takes a lot of RAM until $body is destroyed. Second, the user sees no response activity until that method has finished working and actually sends the response with $res->body($body).
I tried to find an answer to this in the documentation without finding what I need.
I also tried calling $res->body($row_object->to_csv) in my callback, but it seems that only the content of the last call to $res->body gets sent, overriding all previous ones.
Is there a way to send a Plack response that flushes the content on each row, so the user starts receiving content in real time as the data is gathered and without having to accumulate all the data in a variable first?
Thanks in advance for any comments!
You can't use Plack::Response because that class is intended for representing a complete response, and you'll never have a complete response in memory at one time. What you're trying to do is called streaming, and PSGI supports it even if Plack::Response doesn't.
Here's how you might go about implementing it (adapted from your sample code):
my $env = shift;

if (!$env->{'psgi.streaming'}) {
    # do something else...
}

# Immediately start the response and stream the content.
return sub {
    my $responder = shift;
    my $writer = $responder->([200, ['Content-Type' => 'text/csv']]);
    query_data(
        parameters => \%query_parameters,
        callback   => sub {
            my $row_object = shift;
            $writer->write($row_object->to_csv);
            # TODO: Need to call $writer->close() when there is no more data.
        },
    );
};
Some interesting things about this code:
Instead of returning a Plack::Response object, you can return a sub. This subroutine will be called some time later to get the actual response. PSGI supports this to allow for so-called "delayed" responses.
The subroutine we return gets an argument that is a coderef (in this case, $responder) that should be called and passed the real response. If the real response does not include the "body" (i.e. what is normally the 3rd element of the arrayref), then $responder will return an object that we can write the body to. PSGI supports this to allow for streaming responses.
The $writer object has two methods, write and close, which do exactly what their names suggest. Don't forget to call the close method to complete the response; the code above doesn't show this because where it should be called depends on how query_data and the rest of your code work.
Most servers support streaming like this. You can check $env->{'psgi.streaming'} to be sure that yours does.
Plack is middleware. Are you using a web application framework on top of it, like Mojolicious or Dancer2, or something like Apache or Starman server below it? That would affect how the buffering works.
The link below shows an example by Plack's author:
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/echo-stream-sync.psgi
Or you can do it easily by using Dancer2 on top of Plack and Starman or Apache:
https://metacpan.org/pod/distribution/Dancer2/lib/Dancer2/Manual.pod#Delayed-responses-Async-Streaming
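For example, a route along these lines (a rough sketch, assuming a reasonably recent Dancer2 and a streaming-capable server such as Starman; query_data and %query_parameters are the ones from the question):
use Dancer2;

get '/report.csv' => sub {
    # 'delayed' switches to an asynchronous response; 'content' pushes a chunk
    # to the client and 'done' closes the stream.
    delayed {
        response_header 'Content-Type' => 'text/csv';
        flush;
        query_data(
            parameters => \%query_parameters,
            callback   => sub { content( shift->to_csv ) },
        );
        done;
    };
};

dance;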
Regards, Peter
Some reading material for you :)
https://metacpan.org/pod/PSGI#Delayed-Response-and-Streaming-Body
https://metacpan.org/pod/Plack::Middleware::BufferedStreaming
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/echo-stream.psgi
https://metacpan.org/source/MIYAGAWA/Plack-1.0037/eg/dot-psgi/nonblock-hello.psgi
So copy/paste/adapt and report back please
I am using CGI.pm to write out cookies. Now, during the course of the user using my site, other cookies are added to the "test.com" cookie set (as shown in the browser history).
But now I want to log the user out, and "clean" the PC. Since I don't know what scripts the user has used, I can't foresee what cookies would be on the PC.
In short, is there a way to read all the cookies for "test.com" back into a script so I can then print them out again with a 1s duration (effectively 'deleting' them)? I know you can read a cookie back in with $xyz=cookie($name), but how can I create the array holding the $name values so I can loop through it? The script will also run on "test.com", so the cross-site policy is not an issue.
Edit:
brian d foy added a partial answer below. So this is how I envisage the code might be strung together.
use CGI::Cookie;
%cookies = CGI::Cookie->fetch;
for (keys %cookies) {
    push @del_cookie, cookie(-NAME => $_, -PATH => '/', -EXPIRES => '+1s');
}
print header(-cookie => [@del_cookie]);
I wondered how the script would recognise the domain. It appears the script is intelligent enough to load only the cookies for the domain on which the script is being executed. (Now I've just got to find out why Firefox doesn't delete expired cookies!! I just found some listed in my test domain that expired 29th - 31st Jan, and at first wondered why they didn't appear in my cookie list!)
If you are trying to do this from your CGI script, you'll only have access to the cookies for that domain. You can get that list and reset them by giving them a time in the past.
It sounds like you aren't asking a cookie question at all. You're asking how to make an array. CGI::Cookie (which comes with CGI.pm) has an example of dealing with all the cookies you have access to under that domain:
%cookies = CGI::Cookie->fetch;
for (keys %cookies) {
    do_something($cookies{$_});
}
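Putting that together, a sketch (untested) of expiring everything fetch returns, using CGI.pm's header, might look like this:
use CGI qw(:standard);
use CGI::Cookie;

my %cookies = CGI::Cookie->fetch;
my @expired;
for my $name (keys %cookies) {
    # Re-issue each cookie with an empty value and an expiry in the past.
    push @expired, CGI::Cookie->new(
        -name    => $name,
        -value   => '',
        -path    => '/',
        -expires => '-1d',
    );
}
print header(-cookie => \@expired);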
This is what I ended up with:
use CGI::Cookie;
%cookies = CGI::Cookie->fetch;
@cookie = keys %cookies;
for ($x = 0; $x < @cookie; $x++) {
    my $c = CGI::Cookie->new(-name => $cookie[$x], -value => '-', -expires => '+1s');
    print "Set-Cookie: $c\n";
}
print "content-type: text/html\n\n";
Firefox still leaves the cookies intact (apparently that's a "design issue" and not a bug!!), but they are reset to a void value and set to expire / become redundant in 1 second. Plus, quite why the "print" statements sent before the "content-type" header don't cause a server error, I don't know. OK, so purists will probably find a simpler system and use "foreach" rather than a for/next loop ... but I understand how the latter works!
I am getting the error "Internal Server Error. The server encountered an internal error or misconfiguration and was unable to complete your request."
I am submitting an HTML form and reading its values.
HTML Code (index.cgi)
#!c:/perl/bin/perl.exe
print "Content-type: text/html; charset=iso-8859-1\n\n";
print "<html>";
print "<body>";
print "<form name = 'login' method = 'get' action = '/cgi-bin/login.pl'> <input type = 'text' name = 'uid'><br /><input type = 'text' name = 'pass'><br /><input type = 'submit'>";
print "</body>";
print "</html>";
Perl Code to fetch data (login.pl)
#!c:/perl/bin/perl.exe
use CGI::Carp qw(fatalsToBrowser);
my(%frmfields);
getdata(\%frmfields);
sub getdata {
    my ($buffer) = "";
    if (($ENV{'REQUEST_METHOD'} eq 'GET')) {
        my (%hashref) = shift;
        $buffer = $ENV{'QUERY_STRING'};
        foreach (split(/&/, $buffer)) {
            my ($key, $value) = split(/=/, $_);
            $key   = decodeURL($key);
            $value = decodeURL($value);
            $hashref{$key} = $value;
        }
    }
    else {
        read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'})
    }
}

sub decodeURL {
    $_ = shift;
    tr/+/ /;
    s/%(..)/pack('c', hex($1))/eg;
    return($_);
}
The HTML page opens correctly, but when I submit the form, I get an internal server error.
Please help.
What does the web server's error log say?
Independent of what it says, you must stop parsing the form data yourself. There are modules for that, specifically CGI.pm. Using that, you can do this instead:
use CGI;
my $CGI = CGI->new();
my $uid = $CGI->param( 'uid' );
my $pass = $CGI->param( 'pass' );
# rest of your script
Much cleaner and much safer.
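For context, a complete minimal login.pl along those lines might look something like this (an untested sketch; the field names match the form in the question):
#!c:/perl/bin/perl.exe
use strict;
use warnings;
use CGI;

my $cgi  = CGI->new;
my $uid  = $cgi->param('uid');
my $pass = $cgi->param('pass');

# Always print a valid header before any other output;
# a missing or malformed header is a common cause of a 500 error.
print $cgi->header(-type => 'text/html', -charset => 'iso-8859-1');
print "<html><body>Form received for uid: ", $cgi->escapeHTML($uid), "</body></html>\n";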
I agree with Tore that you must not parse this yourself. Your code has multiple errors. You don't allow multiple parameter values, you don't allow the ; alternate separator, you don't handle POST with a query string in the URL, and so on.
I don't know how long it will be online for free, but chapter 15 of my new "Beginning Perl" book covers Web programming. That should get you started on some decent basics. Note that the online version is an early, rough draft. The actual book also includes Chapter 19 which has a complete Web app example.
Could it be this line that's the problem?
my (%hashref) = shift;
You're initialising a proper hash, but shift will give you a hash reference, since you did getdata(\%frmfields);. You probably want this, instead:
my $hashref = shift;
"500 Internal Server Error" just means that something didn't work the way the web server expected. Maybe you don't have CGI enabled. Maybe the script isn't executable. Maybe it's in a directory the web server isn't allowed to access. It's even possible that maybe the web server ran the script successfully and it worked perfectly, but didn't start its output with a valid set of HTTP headers. You need to look in the web server's error log to find out what it didn't like, which may or may not be a Perl issue.
Like everyone else has said, though, don't try to parse query strings and grovel though %ENV yourself. Use one of the many fine modules or frameworks which are available and already known to work correctly. CGI.pm is the granddaddy of them all and works well for smaller projects, but I'd recommend looking into a proper web application framework such as Dancer, Mojolicious, or Catalyst (there are many others, but those are the big three) if you're planning to build anything with more than a handful of relatively simple pages and forms.
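If you do go the framework route, the equivalent handler in, say, Mojolicious::Lite is only a few lines (a hedged sketch, not a drop-in replacement for the script above):
use Mojolicious::Lite;

# Handles GET or POST submissions of the login form.
any '/login' => sub {
    my $c    = shift;
    my $uid  = $c->param('uid');
    my $pass = $c->param('pass');
    # ... validate the credentials here ...
    $c->render(text => "Hello, $uid");
};

app->start;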
I'm doing some testing with HTTP::Daemon:
use HTTP::Daemon;
use HTTP::Status;

my $d = HTTP::Daemon->new || die;
print "Please contact me at: <URL:", $d->url, ">\n";

while (my $c = $d->accept) {
    while (my $r = $c->get_request) {
        if ($r->method eq 'GET') {
            # do some action (about 10s)
        }
        else {
            $c->send_error(RC_FORBIDDEN);
        }
    }
    $c->close;
    undef($c);
}
It works fine, but if I make more requests within those 10 s, they get queued (I still receive every request through $d->accept).
What I want is the following: while one client's request is being handled, no other requests should be queued.
I tried with the Listen option, but without success.
Any suggestions?
HTTP::Daemon doesn't fork for you, and explicitly tells you so in its documentation.
This HTTP daemon does not fork(2) for you. Your application, i.e. the
user of the "HTTP::Daemon" is responsible for forking if that is
desirable. Also note that the user is responsible for generating
responses that conform to the HTTP/1.1 protocol.
If answering takes too long, fork to handle the request. Or use another module.
You have only one thread of execution here; it can either handle the current request or accept the next one to come in. You can't deal with new requests until control goes back to accept().
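A fork-per-connection version of the loop might look like this (an untested sketch; the slow GET branch stands in for the original "about 10s" action):
use HTTP::Daemon;
use HTTP::Status;
use HTTP::Response;

# Let the OS reap exited children so they don't linger as zombies.
$SIG{CHLD} = 'IGNORE';

my $d = HTTP::Daemon->new || die;
print "Please contact me at: <URL:", $d->url, ">\n";

while (my $c = $d->accept) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {                       # child: handle this connection
        while (my $r = $c->get_request) {
            if ($r->method eq 'GET') {
                # do some action (about 10s) without blocking accept()
                $c->send_response(HTTP::Response->new(RC_OK, undef, undef, "done\n"));
            }
            else {
                $c->send_error(RC_FORBIDDEN);
            }
        }
        $c->close;
        exit 0;
    }

    $c->close;                             # parent: drop its copy and go back to accept()
}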
I wrote a quick script to download files using the LWP::Simple library and its getstore() function. It works rather well, but occasionally the downloaded file is incomplete. I do not know what is causing this, but when I download the same file afterwards manually using wget on the command line, the file is fine.
I would guess the corrupted files are caused by a connection drop or something similar; although I run my script on a dedicated line in a datacenter, the connection might still drop somewhere between my server and the remote server.
This is my code:
sub download {
    my $status = getstore($_[0], $_[1]);
    if (is_success($status)) { return 1; } else { return 0; }
}
What are the possible solutions for this problem? How can I check whether the transfer went all right and the file is complete and not corrupted?
Thank you for your valuable replies.
The is_success() sub returns true for any 2XX HTTP code, so if you are, for example, getting "206 Partial Content", that will count as success.
You can just check whether the status is 200 or not, and act accordingly.
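Applied to the download() sub from the question, that check might look like this (a small sketch):
use LWP::Simple qw(getstore);

sub download {
    my ($url, $file) = @_;
    my $status = getstore($url, $file);
    # Require exactly 200 OK; a 206 Partial Content (or any other 2XX)
    # is treated as a failed download.
    return $status == 200 ? 1 : 0;
}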
We can do it like so:
use LWP;
use HTTP::Request::Common;

my $ua = LWP::UserAgent->new;
$ua->timeout(3);

my $res = $ua->request(HEAD $url);   # just to get the headers of the file
my $length_full = $res->header('Content-Length');
...
$res = $ua->request(GET $url);
my $length_got = $res->content_length;

if ($length_got != $length_full) {
    print "File has not been downloaded completely!\n";
}
...
The $status values you can get are listed in the LWP::Simple documentation. If the servers return an error status every time you get a partial or corrupted download, just checking the return value would be enough.
Otherwise, you would need a more sophisticated strategy. If there are MD5 or SHA checksums for the files, you can check those after download. If not, you need to inspect the headers, find out how much the server was planning to send and how much you received.
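A sketch of that size comparison using LWP::Simple's own head() (hedged: it only helps when the server sends a Content-Length, i.e. not with chunked responses, and download_checked is just an illustrative name):
use LWP::Simple qw(head getstore);

sub download_checked {
    my ($url, $file) = @_;

    # In list context head() returns (content_type, document_length, mtime, expires, server).
    my (undef, $expected_length) = head($url);

    my $status = getstore($url, $file);
    return 0 unless $status == 200;

    # If the server advertised a length, make sure the file on disk has that many bytes.
    if (defined $expected_length && -s $file != $expected_length) {
        return 0;
    }
    return 1;
}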