Perl WWW::Mechanize find_all_links array loop issue

I am currently attempting to create a Perl web spider using WWW::Mechanize.
What I am trying to do is create a web spider that will crawl the whole site at the URL (entered by the user) and extract all of the links from every page on the site.
But I have a problem with how to spider the whole site to get every link without duplicates.
What I have done so far (the part I'm having trouble with, anyway):
foreach (@nonduplicates) {    # array contains URLs like www.tree.com/contact-us, www.tree.com/varieties...
    $mech->get($_);
    my @list = $mech->find_all_links(url_abs_regex => qr/^\Q$urlToSpider\E/);    # find all links on this page that start with http://www.tree.com

    # NOW THIS IS WHAT I WANT IT TO DO AFTER THE ABOVE (IN PSEUDOCODE), BUT CAN'T GET WORKING
    # foreach (@list) {
    #     if $_ is already in @nonduplicates
    #         then do nothing because that link has already been found
    #     else
    #         append the link to the end of @nonduplicates so that if it has not been crawled for links already, it will be
    # }
}
How would I be able to do the above?
I am doing this to try and spider the whole site to get a comprehensive list of every URL on the site, without duplicates.
If you think this is not the best/easiest method of achieving the same result I'm open to ideas.
Your help is much appreciated, thanks.

Create a hash to track which links you've seen before and put any unseen ones onto @nonduplicates for processing:
$| = 1;

my $scanned = 0;
my @nonduplicates = ( $urlToSpider );                 # Add the first link to the queue.
my %link_tracker = map { $_ => 1 } @nonduplicates;    # Keep track of what links we've found already.

while (my $queued_link = pop @nonduplicates) {
    $mech->get($queued_link);
    my @list = $mech->find_all_links(url_abs_regex => qr/^\Q$urlToSpider\E/);

    for my $new_link (@list) {
        # Add the link to the queue unless we already encountered it.
        # Increment so we don't add it again.
        push @nonduplicates, $new_link->url_abs() unless $link_tracker{$new_link->url_abs()}++;
    }

    printf "\rPages scanned: [%d] Unique Links: [%s] Queued: [%s]", ++$scanned, scalar keys %link_tracker, scalar @nonduplicates;
}

use Data::Dumper;
print Dumper(\%link_tracker);
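Note that pop takes links from the end of @nonduplicates, so the crawl proceeds roughly depth-first; swapping pop for shift would turn the array into a FIFO queue and crawl breadth-first instead. Either way the %link_tracker hash guarantees each URL is fetched only once.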

use List::MoreUtils qw/uniq/;
...
my @list = $mech->find_all_links(...);
my @unique_urls = uniq( map { $_->url } @list );
Now @unique_urls contains the unique urls from @list.
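Bear in mind that uniq only removes duplicates within the list you pass it, i.e. within a single page's links; to avoid re-crawling URLs you have already visited on earlier pages you would still need something like the %link_tracker hash from the answer above.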

Related

Perl PDF::API2: checking if a PDF file is encrypted

I have a website with a Perl script that lets customers upload a PDF file for me to print and post the printed pages to them.
I am using PDF::API2 to detect the page size and number of pages in order to calculate the printing costs.
However, if the PDF file is password protected this does not work and I get this error:
Software error:
Objind 9 does not exist at index 0 at /home5/smckayws/public_html/hookincrochet.com/lib//PDF/API2/Basic/PDF/File.pm line 758.
I am trying to use the isEncrypted feature in the PDF::API2 module to catch that the file is encrypted in order to direct the customer to a different page so they can enter the page size and page number manually, but it is not working for me.
I just get the same error message as above.
I have tried the following code snippets found elsewhere.
my $pdf = PDF::API2->open( "$customer_directory/$filename" );

if ( defined $pdf && $pdf->isEncrypted )
{
    print "$pdf is encrypted.\n";
    exit;
}

while (glob "*.pdf") {
    $pdf = PDF::API2->open($_);
    print "$_ is encrypted.\n" if $pdf->isEncrypted();
}
Any help would be greatly appreciated.
My guess is that the PDFs might use a feature that your version of PDF::API2 doesn't support. This is a workaround for the problem.
Wrap the call to isEncrypted in an eval, catch the error and handle it.
This will only work if the error does not occur on unencrypted files.
my $pdf = PDF::API2->open( "$customer_directory/$filename" );

if ( defined $pdf ) {
    eval { $pdf->isEncrypted };
    if ($@) {
        # there was some kind of error opening the file
        # could abort now, or look more specific, like this:
        if ($@ =~ m/Objind 9 does not exist at index 0/) {
            print "$pdf is encrypted.\n";
            exit;
        }
    }
    # file is not encrypted, opening worked, continue reading it
}
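If the exception is actually thrown by open() itself rather than by isEncrypted (the question suggests the same error appears either way), the same eval technique can be applied one step earlier. A minimal sketch, with the caveat that generalising the Objind error pattern to any object index is an assumption about how these encrypted files fail:

my $pdf = eval { PDF::API2->open("$customer_directory/$filename") };

if ( !defined $pdf ) {
    # open() died; decide whether it looks like an encrypted file
    if ( $@ =~ m/Objind \d+ does not exist at index \d+/ ) {
        print "$filename appears to be encrypted.\n";
        exit;
    }
    die "Could not open $filename: $@";
}
# open() succeeded, continue reading the file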

Perl: should the TreeBuilder function be adapted when it is inside a foreach loop?

My code takes an actor's name and, via the given actor's filmography on IMDB, lists in a hash table all the cinematic genres of the movies he has acted in, as well as their frequency. However, I have a problem: when I type a name like "brad pitt" or "bruce willis" at the prompt after running the program, execution takes forever. How can I find out what the problem is?
Another problem: when I type "nicolas bedos" (an actor name that I tried from the beginning), it works, but it seems that the index is only built for a single movie from the @url_links list. Should the look_down function of the TreeBuilder module be adapted when it is inside a foreach loop? I thought the @genres list was being overwritten on each iteration, so I added a push(), but the result remains the same.
use LWP::Simple;
use PerlIO::locale;
use HTML::TreeBuilder;
use WWW::Mechanize;
binmode STDOUT, ':locale';
use strict;
use warnings;

print "Enter the actor's name: ";
my $acteur1 = <STDIN>;    # the user enters the name of the actor
print "We will analyze the filmography of the actor $acteur1 by genre\n";

# we put the link with the given actor in a Mechanize object in order to browse the internet links
my $lien1 = "https://www.imdb.com/find?s=nm&q=$acteur1";
my $mech = WWW::Mechanize->new();
$mech->get($lien1);                             # we access the search page with the get function
$mech->follow_link( url_regex => qr/nm0/i );    # we access the first result using the follow_link function and the regular expression nm0 which is in the URL
my @url_links = $mech->find_all_links( url_regex => qr/title\/tt/i );    # we insert into an array all the links having "title" in their URL
my $nb_links = @url_links;                      # we record the number of links in the list in this variable

my $tree = HTML::TreeBuilder->new();    # we create the TreeBuilder object to access a specific text on the page via the tags
my %index;                              # we create a hash table
my @genres = ();                        # we create the genre list to insert all the genres encountered

foreach (@url_links) {                  # we make a loop to browse all the saved links
    my $mech2 = WWW::Mechanize->new();
    my $html = $_->url();               # we take the url of the link
    if ($html =~ m=^/title=) {          # if the url starts with "/title"
        $mech2->get("https://www.imdb.com$html");    # we complete the link
        my $content = $mech2->content;  # we take the content of the page
        $tree->parse($content);         # we access the url and we use the tree to find the strings that interest us
        @genres = $tree->look_down('class', 'see-more inline canwrap',    # our criterion is to access the class = "see-more .."
            sub {
                my $link = $_[0]->look_down('_tag', 'a');    # new condition: <a> tags
                $link->attr('href') =~ m{genres=};           # other condition: "genres" must be in the URL
            }
        );
    }
}

my @genres1 = ();             # we create a new list to insert the words found (the genres of films)
foreach my $e (@genres) {     # we create a loop to browse the list
    my $genre = $e->as_text;  # the text of the list element is inserted into the variable
    @genres1 = split(/[à| ]/, $genre);    # we split on the unnecessary characters (spaces, "à" and "|") so that only the genre terms remain
}

foreach my $e (@genres1) {    # another loop to filter listing errors (Genres: etc.) and add the correct words to the hash table
    if ($e ne ("Genres:" or "")) {
        $index{$e}++;
    }
}

$tree->delete;    # we delete the tree as we no longer need it

foreach my $cle (sort { $index{$b} <=> $index{$a} } keys %index) {
    print "$cle : $index{$cle}\n";    # we display the hash table with the genres and the number of times they appear in the filmography of the given actor
}
Thank you in advance for your help,
wobot
 
The IMDB Conditions of Use say this:
Robots and Screen Scraping: You may not use data mining, robots, screen scraping, or similar data gathering and extraction tools on this site, except with our express written consent as noted below.
So you might want to reconsider what you're doing. Perhaps you could look at the OMDB API instead.
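For example, a minimal sketch of querying OMDb with LWP::Simple and the JSON module; the endpoint and parameters follow the public OMDb documentation, the API key and movie title below are placeholders, and you would need to register for your own free key:

use strict;
use warnings;
use LWP::Simple;
use JSON qw(decode_json);
use URI::Escape qw(uri_escape);

my $api_key = 'YOUR_OMDB_KEY';    # placeholder: register at omdbapi.com
my $title   = 'Fight Club';
my $url     = 'http://www.omdbapi.com/?apikey=' . $api_key
            . '&t=' . uri_escape($title);

my $json = get($url) or die "Could not fetch $url";
my $data = decode_json($json);

print "$data->{Title} ($data->{Year}): $data->{Genre}\n";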

ReCaptcha Implementation in Perl

I want to implement reCAPTCHA on my website.
One option is the Google API, but for that I need to sign up with a domain name to get an API key.
Is there any other way to do it?
You don't necessarily need a domain name to sign up, per se.
They have a concept of a "global key" where one single domain key would be used on several domains. When signing up, select the "Enable this key on all domains (global key)" option, and use a unique identifier (domainkey.abhilasha.com) and this will be fine, you can use the key from any domain in the end.
One way: add this code to your Perl file that is called by an HTML form.
Simplified, of course:
use CGI qw(:standard);    # provides param() and remote_host()
use LWP::Simple;

my @field_names = qw(name branch email g-recaptcha-response);
my %FIELD;
foreach my $field_name (@field_names)
{
    if (defined param($field_name))
    {
        $FIELD{$field_name} = param($field_name);
    }
}
my $captcha = $FIELD{'g-recaptcha-response'};

my $secretKey = "put your key here";
my $ip = remote_host;

# Remove the leading # to check that the submitted variables are present
#print "secret= $secretKey";
#print " and response= $captcha";
#print " and remoteip= $ip";

my $URL = "https://www.google.com/recaptcha/api/siteverify?secret=" . $secretKey . "&response=" . $captcha . "&remoteip=" . $ip;
my $contents = get $URL or die;

use Data::Dumper qw(Dumper);    # handy for inspecting $contents while debugging

# The contents variable takes the form of: "success": true, "challenge_ts": "2016-11-21T16:02:41Z", "hostname": "www.mydomain.org.uk"
# Split contents variable by comma:
my ($success, $challenge_time, $hostname) = split /,/, $contents;
# Split success variable by colon:
my ($success_title, $success_value) = split /:/, $success;
# Strip whitespace:
$success_value =~ s/^\s+//;

if ($success_value eq "true")
{
    print "it worked";
} else {
    print "it did not";
}
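As a side note, the siteverify response is JSON, so splitting it on commas and colons is fragile; parsing it with the JSON module is more robust. A minimal sketch of the same check, reusing the $contents variable fetched above and assuming the response fields shown in the comment:

use JSON qw(decode_json);

my $response = decode_json($contents);    # $contents as fetched above

if ($response->{success}) {
    print "it worked";
} else {
    print "it did not";
}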
If you are just trying to block spam, I prefer the honeypot captcha approach: http://haacked.com/archive/2007/09/10/honeypot-captcha.aspx
Put an input field on your form that should be left blank, then hide it with CSS (preferably in an external CSS file). A robot will find it and will put spam in it, but humans won't see it.
In your form validation script, check the length of the field; if it contains any characters, do not process the form submission.
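A minimal sketch of that idea; the field name website_url and the hp-field class are made up for illustration:

<!-- in the form: a field humans never see (hidden via an external stylesheet) -->
<input type="text" name="website_url" class="hp-field" value="" autocomplete="off">

# in the Perl validation script:
use CGI qw(param);

my $honeypot = param('website_url') // '';
if ( length $honeypot ) {
    # a bot filled in the hidden field: do not process the submission
    exit;
}
# otherwise continue processing the real form fields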

Removing top-directory-only URLs from a list of URLs?

I have a question that I'm having trouble researching, as I don't know how to ask it correctly on a search engine.
I have a list of URLs. I would like to have some automated way (Perl for preference) to go through the list and remove all URLs that are top directory only.
So for example I might have this list:
http://www.example.com/hello.html
http://www.foo.com/this/thingrighthere.html
In this case I would want to remove example.com from my list, as it either is the top directory only or references a file in the top directory.
I'm trying to figure out how to do that. My first thought was, count forward slashes and if there's more than two, eliminate the URL from the list. But then you have trailing forward slashes, so that wouldn't work.
Any ideas or thoughts would be much appreciated.
Something like this:
use URI::Split qw( uri_split );

my $url = "http://www.foo.com/this/thingrighthere.html";
my ($scheme, $auth, $path, $query, $frag) = uri_split( $url );

if (($path =~ tr/\///) > 1 ) {
    print "I care about this $url";
}
http://metacpan.org/pod/URI::Split
You could do this with regexes, but it's much less work to let the URI library do it for you. You won't get caught out by funny schemes, escapes, and extra stuff before and after the path (query, anchor, authorization...). There's some trickiness around how paths are represented by path_segments(). See the comments below and the URI docs for details.
I have assumed that http://www.example.com/foo/ is considered a top directory. Adjust as necessary, but it's something you have to think about.
#!/usr/bin/env perl

use URI;
use File::Spec;

use strict;
use warnings;

use Test::More 'no_plan';

sub is_top_level_uri {
    my $uri = shift;

    # turn it into a URI object if it isn't already
    $uri = URI->new($uri) unless eval { $uri->isa("URI") };

    # normalize it
    $uri = $uri->canonical;

    # split the path part into pieces
    my @path_segments = $uri->path_segments;

    # for an absolute path, which most are, the absoluteness will be
    # represented by an empty string. Also /foo/ will come out as two elements.
    # Strip that all out, it gets in our way for this purpose.
    @path_segments = grep { $_ ne '' } @path_segments;

    return @path_segments <= 1;
}

my @filtered_uris = (
    "http://www.example.com/hello.html",
    "http://www.example.com/",
    "http://www.example.com",
    "https://www.example.com/",
    "https://www.example.com/foo/#extra",
    "ftp://www.example.com/foo",
    "ftp://www.example.com/foo/",
    "https://www.example.com/foo/#extra",
    "https://www.example.com/foo/?extra",
    "http://www.example.com/hello.html#extra",
    "http://www.example.com/hello.html?extra",
    "file:///foo",
    "file:///foo/",
    "file:///foo.txt",
);

my @unfiltered_uris = (
    "http://www.foo.com/this/thingrighthere.html",
    "https://www.example.com/foo/bar",
    "ftp://www.example.com/foo/bar/",
    "file:///foo/bar",
    "file:///foo/bar.txt",
);

for my $uri (@filtered_uris) {
    ok is_top_level_uri($uri), $uri;
}

for my $uri (@unfiltered_uris) {
    ok !is_top_level_uri($uri), $uri;
}
Use the URI module from CPAN. http://search.cpan.org/dist/URI
This is a solved problem. People have already written, tested and debugged code that handles this already. Whenever you have a programming problem that others have probably had to deal with, then look for existing code that does it for you.

How do I use Perl's Google::Search to look for specific URLs?

The Google::Search module, which is based on the AJAX Search API, doesn't seem to work very well, or is it just me?
For example, I use firefox to search google for: http://bloggingheads.tv/forum/member.php?u=12129
It brings results.
But when I use the module this way:
$google_search = Google::Search->Web ( q => "http://bloggingheads.tv/forum/member.php?u=12129" );
@result = $google_search->all;
I get nothing in the array.
Any idea?
It seems like this API doesn't bring the same results as when searching manually; am I missing something?
I had a similar problem with Cyrillic queries. Both Google::Search and REST::Google from CPAN didn't work for me - they were giving back fewer or no results compared to a manual test.
Eventually I wrote a scraping module using WWW::Mechanize and HTML::TreeBuilder.
Here's a sample to get result stats:
my $tree = HTML::TreeBuilder->new_from_content($content);
if (my $div = $tree->look_down(_tag => 'div', id => 'resultStats')) {
    my $stats = $div->as_text();
}
else { warn "no stats" }
Looking at the POD for Google::Search, it looks like it expects you to pass search terms to Web, instead of a URL. I downloaded a test script from CPAN, ran it, and it seems to produce expected results:
use strict;
use warnings;
use Google::Search;
my $search = Google::Search->Web(q => "rock");
my $result = $search->first;
while ($result) {
    print $result->number, " ", $result->uri, "\n";
    $result = $result->next;
}
print $search->error->reason, "\n" if $search->error;
__END__
0 http://www.rock.com/
1 http://en.wikipedia.org/wiki/Rock_music
2 http://en.wikipedia.org/wiki/Rock_(geology)
3 http://rockyourphone.com/
4 http://rockhall.com/
5 http://www.co.rock.mn.us/
6 http://www.co.rock.wi.us/
7 http://www.rockride.org/
etc...
I realize this does not specifically answer your question, but perhaps it steers you in the right direction.