File download via LWP returns error 500 (probably timeout due to long waiting time) - perl

I'm pretty new to Perl and just tried to use a simple and small script to download a file. It works on most websites, but it seems like it is not working on the one particular website I need to download a file from.
My code is:
use LWP::Simple;
my $status = getstore("http://www.regelleistung.net/download/ABGERUFENE_MRL_BETR_SOLL-WERTE.CSV", "file.csv");
if ( is_success($status) )
{
    print "file downloaded correctly\n";
}
else
{
    print "error downloading file: $status\n";
}
I keep getting error status 500. The file is directly linked on
https://www.regelleistung.net/ext/data/ where you can click on "MRL", "SRL" and "RZ_SALDO".
Also, if I try to download the file by clicking the link in my browser, it takes a very long time before the actual download starts.
I feel like I need getstore() to wait until either it times out (say ~60 seconds) or the file has finished loading.
Do you have any hint that could help me solve this problem? Using some other library or method? Even keywords might be helpful, since I don't really know what to search for on Google.

Your code ran successfully the first time I tried it. I suspect that the site may have been busy when you first tested.
To make the kind of changes that you are asking about you need the full LWP::UserAgent module, but I think your code should work for you if you keep trying a few times.
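For example, with LWP::UserAgent you can set an explicit timeout and retry a few times. A minimal sketch (the 60-second timeout, the retry count, the sleep and the User-Agent string below are my own illustrative choices, not anything the site requires):

use strict;
use warnings;
use LWP::UserAgent;

my $url  = 'http://www.regelleistung.net/download/ABGERUFENE_MRL_BETR_SOLL-WERTE.CSV';
my $file = 'file.csv';

# 60-second timeout and a browser-like User-Agent; both are guesses, adjust as needed.
my $ua = LWP::UserAgent->new(
    timeout => 60,
    agent   => 'Mozilla/5.0',
);

# Retry a few times in case the site is temporarily busy.
for my $attempt (1 .. 3) {
    my $response = $ua->get($url, ':content_file' => $file);
    if ($response->is_success) {
        print "file downloaded correctly\n";
        exit;
    }
    warn "attempt $attempt failed: ", $response->status_line, "\n";
    sleep 10;
}
die "giving up after 3 attempts\n";

The ':content_file' option makes get() write the body straight to disk, and status_line shows you the exact error (500, timeout, etc.) for each failed attempt.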

Related

jQuery File Upload fails on large files

I am trying to use the Blueimp jQuery File Upload plugin and it works great with small files. If I try to upload anything greater than 50 MB it fails and I get the error 'Empty file upload result'.
I have seen lots of responses to questions from people getting the same error, but they either get it even though the file uploads correctly, or the suggested code corrections don't seem to apply to the code that now ships with the plugin.
The FAQ suggests that there is a server-side restriction on file size, but I have asked my host to increase it to 1GB and they have confirmed they have done this. I do not have permission to overwrite the php.ini as suggested in the FAQ; I just get a server error.
Has anyone else had this problem and if so how was it resolved?
I am using PHP.
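(For reference, the PHP settings that most commonly cap uploads are upload_max_filesize, post_max_size and max_execution_time. On hosts that run PHP as an Apache module they can sometimes be raised per directory in .htaccess instead of php.ini; whether such overrides are honoured depends entirely on the host, and the values below are purely illustrative:

php_value upload_max_filesize 1024M
php_value post_max_size 1024M
php_value max_execution_time 300

If the host uses CGI/FastCGI, .htaccess php_value lines will themselves cause a server error, which may explain the behaviour described above.)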

Call ahk script from url (github gist)

Is there a way to call/load an ahk script from https. Like this one. I'd like to keep this file for public editing on github and it would be called from multiple users without the need to download it first..
How about downloading it:
UrlDownloadToFile, https://gist.github.com/gimoya/5821469/raw/034b2766bbcbe70e2a8e93b72d1ec8723351a8f8/Veg%C3%96K-Abk%C3%BCrzungen, hotstrings.ahk
if (ErrorLevel || !FileExist("hotstrings.ahk")) {
    msgbox, Download failed!
    ExitApp
}
Run, hotstrings.ahk
You may want to add some checks on the downloaded file to make sure that it is syntactically correct and, above all, not harmful; you can't trust anything coming from the internet. A more secure way to approach this would be to retrieve your hotstrings as XML or JSON and build them dynamically yourself; blindly executing downloaded code is very risky, especially when multiple users are able to edit it arbitrarily.

Downloaded ppt file seems to be corrupted

Recently I've integrated Google Drive with my iOS application. Everything works fine but .ppt files. Normally if a file is a Drive file I use downloadURL to download it. If the file belongs to Google Docs I use one of the exportLinks (exactly the same as Alain described it here).
However, all .ppt files (with "mimeType": "application/vnd.google-apps.presentation") that come from Google Docs are corrupted after being downloaded (I use an export link with exportFormat=pptx). The same file downloaded via a web browser works fine.
I use the ASIHTTPRequest lib for downloading files (could that also be the reason for the corrupted .ppt?).
Any ideas why only ppt files cause problems?
I can already tell you that the lib you're using isn't the cause: I'm not using it and I have the same problem. It seems that the response code received isn't 200 (if ($httpRequest->getResponseHttpCode() == 200)), because it shows me the specific error message I return in that case. Also, when I try to download a presentation as PDF or txt, I get the same error.
It's not really an answer, but I'm also trying to understand why only presentations are causing problems.
EDIT: the code received is 302, if that helps...
EDIT 2: After some more testing, I noticed that in the export link the first parameter is the file id and the second is the export format:
https://docs.google.com/feeds/download/presentations/Export?docId=fileid&exportFormat=pptx
But the Location header of the 302 response looks like this:
https://docs.google.com/feeds/download/presentations/Export?exportFormat=pptx&id=fileid
Not only are the two parameters in a different order, the name is id and not docId.
When I take this URL, use it as the export link and then try to copy the file, it works: I get a 200 response and the content of the file.
I hope it helps.
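To illustrate that workaround in a language-neutral way (sketched here with Perl's LWP rather than the iOS libraries above; FILE_ID is a placeholder and the OAuth Authorization header a real request needs is omitted), you can disable automatic redirect handling, read the Location header of the 302 and request that URL directly:

use strict;
use warnings;
use LWP::UserAgent;

# Hypothetical export link; a real request also needs your Authorization header.
my $export = 'https://docs.google.com/feeds/download/presentations/Export?docId=FILE_ID&exportFormat=pptx';

my $ua    = LWP::UserAgent->new(max_redirect => 0);   # do not follow redirects automatically
my $first = $ua->get($export);

if ($first->is_redirect) {
    # This is the ...?exportFormat=pptx&id=FILE_ID form described above.
    my $location = $first->header('Location');
    my $second   = LWP::UserAgent->new->get($location, ':content_file' => 'export.pptx');
    print $second->status_line, "\n";
}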

Identify the upload status

I am uploading a folder from my local machine to FTP using the Perl Net::FTP::Recursive module. I have written the sample code below. I need to know the status of the upload, i.e. whether it has been uploaded or not.
use strict;
use Net::FTP::Recursive;

my $ftp_con = Net::FTP::Recursive->new('host.com', Debug => 0);
$ftp_con->login('username', 'password');
$ftp_con->rput('d:\my_test', '\root\my_test');
$ftp_con->quit;
In the above code I am unable to find the status of the upload. Can anyone suggest how to get the upload status of the folder, i.e. whether the folder has been uploaded or not?
Thanks...
Subclass Net::FTP::Recursive and override _rput. Add a callback hook at the end of the foreach block and pass in the current file $file and the list of files @files as arguments.
In the main part of the code, count up each time the callback is called and calculate the progress from the counter and the number of elements in @files.
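If copying and patching the private _rput method feels too fragile, a plainly different approach is to walk the tree yourself with the core Net::FTP and File::Find modules and check every put() as you go. A sketch, using the host, credentials and paths from the question as placeholders:

use strict;
use warnings;
use Net::FTP;
use File::Find;
use File::Basename;

# Placeholders taken from the question -- replace with real values.
my $local_root  = 'd:/my_test';
my $remote_root = '/root/my_test';

my $ftp = Net::FTP->new('host.com', Debug => 0)
    or die "Cannot connect: $@";
$ftp->login('username', 'password')
    or die "Login failed: ", $ftp->message;
$ftp->binary;

my ($total, $ok) = (0, 0);

find({ no_chdir => 1, wanted => sub {
    return if -d $_;                                  # upload plain files only
    (my $rel = $_) =~ s{^\Q$local_root\E/?}{};        # path relative to the source root
    my $remote = "$remote_root/$rel";

    $ftp->mkdir(dirname($remote), 1);                 # create remote directories as needed
    $total++;
    if ($ftp->put($_, $remote)) {
        $ok++;
        print "uploaded $rel\n";
    } else {
        warn "FAILED   $rel: ", $ftp->message;
    }
}}, $local_root);

print "$ok of $total files uploaded\n";
$ftp->quit;

Each put() returns false on failure, so the final counts tell you whether the whole folder made it across.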
First, remember the name of the folder that you transferred via FTP. If the transfer is so fast that you are unable to monitor whether it has already reached the server, you can use another method to verify that it was loaded successfully:
1. Log in to the cPanel of your website via your hosting provider.
2. Locate the legacy File Manager folder, then click it.
3. Choose the document root, click Go, then look for the name of the folder that you transferred via FTP.

Counting eclipse plugin installations/downloads

I'm currently hosting an Eclipse plugin update site on sourceforge.net. SF.net does not allow access to server logs, but I'd still like to know how many downloads the plugin gets.
Is there an alternative way of gathering them?
I'm not going to have any sort of 'call home' feature in the plugin, so please don't suggest that.
I wrote a blog post about how to track downloads of an Eclipse plug-in update site. What you can do is specify a URL to your server, and every time a download is initiated the update site will send an HTTP HEAD request to that URL, which you can then use to count the number of times the plug-in was downloaded. If you want to track some information about who is downloading the plug-in, you can pass along details like the package name, version and OS, and store them in a database.
http://programmingfortherestofus.blogspot.com/2014/08/tracking-downloads-to-your-eclipse.html
I hope it helps!
It is possible to host the plugin jars in the file release service, and then get your site.xml file to point to them. You need to point at a specific mirror to make it work.
This will tell you how many times people download each file as with a normal file release.
Unfortunately, in practice this is a lot of work to maintain, and tends to be unreliable (I kept getting bug reports saying the update site wasn't working).
You could write a very simple PHP script which just serves up the relevant file and logs the download to a file or DB. Make sure it double-checks that the URL is a valid one to serve to the user, of course :)
Once that's in place, you can update the site.xml to point to the correct thing, or you could probably use URL rewriting to intercept requests to your jar file and pass them through the script. I've never tried that on the SF servers, but it might work.
EDIT:
Even better, just have a PHP script which sends a redirect like this:
<?php
// Read the requested file name -- validate it (e.g. against a whitelist) before use!
$file = $_GET['file'];
// Now log the access to $file (append to a log file or insert into a DB)
header('Location: ' . $file);
?>
Just a thought: AFAIK, SourceForge does tell you how much data you served. You know the size of your plugin JARs. Divide the data served by the size of your plugin and you get a rough estimate of how many downloads you had.
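(For example, if the stats showed about 2 GB served in a month and the plugin jars total roughly 4 MB, that would be on the order of 500 downloads; the numbers here are purely illustrative.)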