Empty session created on every refresh - Perl

Every time I refresh my page an empty session gets created.
Index.cgi
#!perl.exe
use strict;
use warnings;
use DBI;
use CGI;
use database;
use Digest::MD5 qw(md5 md5_hex md5_base64);
use Data::Dumper;
use CGI::Session;
my $q = new CGI;
my $session = new CGI::Session();
my $db=database->new();
print $q->header;
$session = CGI::Session->load();
my $first_name = $session->param("userid");
my $login_fail=$q->param("attempt");
print <<END_HTML;
<div class='login_form' style=''>
<form method='post' action='Session.cgi'>
<table>
END_HTML
if ($session->is_empty)
{
print <<END_HTML;
<tr class='l_form_input'>
<td><input type='text' name='userid' placeholder='Email or Phone'/></td>
<td><input type='password' name='password' placeholder='Password'/></td>
<td><input type='submit' name='submit' value='Log In'/></td>
</tr>
<tr>
<td><span><a href='reg.cgi'>Not A Member Yet?</a></span></td>
<td><span><a href='forgot.cgi'>Forget your password?</a></span></td>
</tr>
END_HTML
if (defined($login_fail)) {
print "<tr><td colspan='2'>Incorrect Login<td></tr>";
}
}else {
print <<END_HTML;
<tr class='l_form_input'>
<td><span>Logged in as $first_name</span></td>
<td><span><a href='logout.cgi'>logout</a></span></td>
</tr>
END_HTML
}
print <<END_HTML;
</table>
</form>
</div>
Session.cgi
#!perl.exe
use strict;
use warnings;
use database;
use DBI;
use CGI;
use Digest::MD5 qw(md5 md5_hex md5_base64);
my $q = new CGI;
my $db=database->new();
#print $q->header;
my $email=$q->param("userid");
my $password=$q->param("password");
$password = md5_hex($password);
my $flag=$db->login_flag($email,$password);
if($flag == 1) {
require CGI::Session;
my $session = CGI::Session->new();
print $session->header();
$session->param("userid", $email);
$session->flush();
print "<META HTTP-EQUIV=refresh CONTENT=\"1;URL='http://localhost/website/index.cgi\">\n";
} else {
print $q->header;
print "<META HTTP-EQUIV=refresh CONTENT=\"1;URL='http://localhost/website/index.cgi?attempt=login_fail\">\n";
}

Pass the CGI object when creating the new CGI::Session:
my $session = CGI::Session->new($q) or die CGI::Session->errstr;
Otherwise, you're just creating a new anonymous session on every request, one that isn't tied to the user's existing session cookie.
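For context, here is a minimal sketch of the relevant lines with the query object passed in; apart from the fix itself, everything is just the question's own code rearranged, and the comments mark the assumptions:
#!perl.exe
# Index.cgi (sketch)
use strict;
use warnings;
use CGI;
use CGI::Session;
my $q = CGI->new;
# Reuse the session identified by the browser's CGISESSID cookie, or create one tied to $q
my $session = CGI::Session->new($q) or die CGI::Session->errstr;
print $session->header;                      # re-sends the session cookie with the response
my $first_name = $session->param("userid");  # stays undef until Session.cgi stores it
In Session.cgi the same change applies: build the session with CGI::Session->new($q), call $session->param("userid", $email) and $session->flush(), and print $session->header() before the redirect so the cookie reaches the browser.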

Related

Downloading an image via web scraping with a Perl script

I'm a noob in Perl, trying to download an IMDB movie poster image via a Perl script with the help of the Mechanize framework. I'm not getting the 'id' attribute for the 'td' tags, so I can't find the specific place for the image. This is how the HTML of the image portion of the IMDB page looks:
<table id="title-overview-widget-layout" cellspacing="0" cellpadding="0" border="0">
<tbody>
<tr>
<td id="img_primary" rowspan="2">
<div class="image">
<a href="/media/rm419297536/tt2338151?ref_=tt_ov_i">
<img width="214" height="317" itemprop="image" src="http://ia.media-imdb.com/images/M/MV5BMTYzOTE2NjkxN15BMl5BanBnXkFtZTgwMDgzMTg0MzE#._V1_SY317_CR2,0,214,317_AL_.jpg" title="PK (2014) Poster" alt="PK (2014) Poster">
</a>
</div>
<div class="pro-title-link text-center">
</td>
<td id="overview-top">
</tr>
<tr>
</tbody>
</table>
And here is the Perl script I'm trying to download it with:
use strict;
use warnings;
use WWW::Mechanize;
use HTML::TokeParser;
#create a new instance of mechanize
my $agent = WWW::Mechanize->new();
#get the page we want.
$agent->get("http://www.imdb.com/title/tt2338151/");
#supply a reference to that page to TokeParser
my $stream = HTML::TokeParser->new(\$agent->{content});
my $c = 0;#to store the count of images and give the images names
#loop through all the td's
while (my $tag1 = $stream->get_tag("td")) {
$tag1->[1]->{id} ||= 'none';
my $asd = $tag1->[1]->{id};
print "$asd\n"; #shows none for all of the td's
if ($asd && $asd eq 'img_primary') {
while(my $tag = $stream->get_tag("div"))
{
# $tag will contain this array => [$tag, $attr, $attrseq, $text]
#get the class of the div tag from attr
my $cls = $tag->[1]{class};
#we're looking for div's with the class gallery-img2
if($cls && $cls eq "image") {
#get the content of the src tag
my $image = $stream->get_tag('img')->[1]{src};
#create a new mechanize to download the image
my $imgDown = WWW::Mechanize->new();
#give the image url and the local path to mechanize
$imgDown->get($image, ":content_file" => ".//image".$c.".jpg");
#update the count
$c++;
}
}
}
}
print "Total images scraped $c\n";
Any help will be much appreciated.
When JavaScript is involved, it's best to use a real browser to visit pages and query their contents.
You can do this with Selenium::Remote::Driver.
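As a rough sketch only (it assumes a Selenium server is already running on the default localhost:4444 and that the rendered page really does contain the td with id="img_primary" from the question's markup):
#!/usr/bin/perl
use strict;
use warnings;
use Selenium::Remote::Driver;
use LWP::Simple qw(getstore);
my $driver = Selenium::Remote::Driver->new;    # defaults to a Selenium server on localhost:4444
$driver->get("http://www.imdb.com/title/tt2338151/");
# Find the poster <img> inside td#img_primary in the fully rendered DOM
my $img = $driver->find_element('//td[@id="img_primary"]//img', 'xpath');
my $src = $img->get_attribute('src');
getstore($src, 'image0.jpg');                  # save the poster locally
$driver->quit;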

Scrape a table#id columns with Web::Scraper

I have an HTML page with this structure:
a table with an id="searchResult"
many rows
each row contains 3 td elements, without any class
Each table cell contains one URL, and I need the URL from the second cell (column).
I tried different XPath scrapers like:
my $links = scraper {
process '//table[id="searchResult"]', "lines[]" => scraper {
process "//tr/td[2]/a", text => 'TEXT';
process "//tr/td[2]/a", link => '#href';
};
};
my $res = $links->scrape($html);
But it doesn't work and $res is an empty {}.
If someone needs, here is the full test code:
use 5.014;
use warnings;
use Web::Scraper;
use Data::Dumper;
my $links = scraper {
process '//table[id="searchResult"]', "lines[]" => scraper {
process "//tr/td[2]/a", text => 'TEXT';
process "//tr/td[2]/a", link => '#href';
};
};
my $html = do {local $/;<DATA>};
#say $html;
my $res = $links->scrape($html);
say Dumper $res;
__DATA__
<html>
<body>
<p>...</p>
<table id="searchResult">
<thead><th>x</th><th>x</th><th>x</th><th>x</th><th>x</th></thead>
<tr>
<td>cell11</td>
<td>cell12</td>
<td>cell13</td>
</tr>
<tr>
<td>cell21</td>
<td>cell22</td>
<td>cell23</td>
</tr>
<tr>
<td>cell31</td>
<td>cell32</td>
<td>cell33</td>
</tr>
</table>
</body>
</html>
My preferred scraper for this type of project is Mojo::DOM. For a helpful 8 minute introductory video, check out Mojocast Episode 5.
A pointer to a CSS selector reference could also come in handy.
The following performs the parsing you're trying to do with this module:
use strict;
use warnings;
use Mojo::DOM;
my $dom = Mojo::DOM->new(do {local $/; <DATA>});
for my $link ($dom->find('table[id=searchResult] > tr > td:nth-child(2) > a')->each) {
print $link->{href}, " - ", $link->text, "\n";
}
__DATA__
<html>
<body>
<p>...</p>
<table id="searchResult">
<thead><th>x</th><th>x</th><th>x</th><th>x</th><th>x</th></thead>
<tr>
<td>cell11</td>
<td><a href="#12">cell12</a></td>
<td>cell13</td>
</tr>
<tr>
<td>cell21</td>
<td><a href="#22">cell22</a></td>
<td>cell23</td>
</tr>
<tr>
<td>cell31</td>
<td><a href="#32">cell32</a></td>
<td>cell33</td>
</tr>
</table>
</body>
</html>
Outputs:
#12 - cell12
#22 - cell22
#32 - cell32
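For what it's worth, the original Web::Scraper attempt is probably failing because the XPath predicate is missing the @ sigil: //table[id="searchResult"] looks for a child element named id rather than the id attribute. A flattened, untested sketch of the corrected scraper (run against markup whose second column actually contains links):
use Web::Scraper;
my $links = scraper {
    # one flat rule instead of the nested scraper; '@href' pulls the attribute
    process '//table[@id="searchResult"]//tr/td[2]/a',
        'lines[]' => { text => 'TEXT', link => '@href' };
};
my $res = $links->scrape($html);   # $res->{lines} is an arrayref of { text, link } hashes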

parsing the html with HTML::TreeBuilder

I want to parse the HTML page and extract the badge name, description, and badge type from markup like this:
<div class="row">
<div class="span8">
<table id="badge-list">
<tr>
<td style="width: 25px;"></td>
<td style="width: 200px;" class="badge-cell">
<a class="badge-name" href="/badge/show/3/">
<span class="badge-icon bronze">•</span>
Editor
</a>
<span class="multiplier">x 3892</span></td>
<td class="earned False"> </td>
<td>First edit</td>
</tr>
My Perl code follows;
I am trying to extract the a element with class="badge-name" and the other details using the code below:
my $tree = HTML::TreeBuilder->new();
$tree->parse($content);
my ($h1) = $tree->look_down('_tag', 'table', 'id', 'badge-list');
my @tr = $h1->look_down('_tag', 'tr') ;
foreach my $tr (@tr) {
my @tdList = $tr->look_down('_tag','td');
foreach my $td ( @tdList) {
if (my $a = $td->look_down('_tag','a')) {
print $a->as_text , "\n";
my $span = $a->look_down('_tag','span', 'class');
print $span->attr('class');
}
else {
my $text = $td->as_text , "\n";
print "$text\n";
}
}
}
This code is throwing the warning Wide character in print at ..
look_down requires pairs of attribute/value parameters.
$a->look_down('_tag','span', 'class')
should be just
$a->look_down('_tag','span')
I would suggest adding "use utf8;" at the start of the script to support non-ASCII symbols in the print. The symbol • definitely is a wide character.
use utf8;
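Putting both fixes together, here is a minimal sketch of the corrected loop; the binmode line is an extra step beyond the original answer that declares the output encoding so wide characters such as • print without warnings, and reading $content from stdin is just an assumption for the sake of a runnable example:
use utf8;
use strict;
use warnings;
use HTML::TreeBuilder;
binmode STDOUT, ':encoding(UTF-8)';        # declare the output encoding (optional extra)
my $content = do { local $/; <> };         # assumption: badge-list HTML arrives on stdin or as a file argument
my $tree = HTML::TreeBuilder->new();
$tree->parse($content);
my ($h1) = $tree->look_down('_tag', 'table', 'id', 'badge-list');
foreach my $tr ($h1->look_down('_tag', 'tr')) {
    foreach my $td ($tr->look_down('_tag', 'td')) {
        if (my $a = $td->look_down('_tag', 'a')) {
            print $a->as_text, "\n";
            my $span = $a->look_down('_tag', 'span');   # pairs only - no dangling 'class'
            print $span->attr('class'), "\n" if $span;
        }
        else {
            print $td->as_text, "\n";
        }
    }
}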

How to extract a column of a table from html page using perl modules?

I have the following HTML code from part of a webpage.
<h2 id="failed_process">Failed Process</h2>
<table border="1">
<thead>
<tr>
<th>
<b>pid</b>
</th>
<th>
<b>Priority</b>
</th>
</tr>
</thead>
<tbody>
<tr>
<td id="90">p_201211162334</td>
<td id="priority_90">NORMAL</td>
</tr>
<tr>
<td id="91">p_201211163423</td>
<td id="priority_91">NORMAL</td>
</tr>
<tr>
<td id="98">p_201211166543</td>
<td id="priority_98">NORMAL</td>
</tr>
</tbody>
</table>
<hr>
I need to extract the pid column. The output should look like this:
pid
p_201211162334
p_201211163423
p_201211166543
The "Failed Process" table is the fourth table on the page. The problem is that if I select it by table count (4) and there happen to be no failed processes on the page, the count lands on the next table instead and I end up fetching that table's pids, which is wrong.
I am using the below code to get the result.
#!/usr/bin/perl
use strict;
use warnings;
use lib qw(..);
use HTML::TableExtract;
use LWP::Simple;
my $content = get("URL");
my $te = HTML::TableExtract->new(
headers => [qw(pid)], attribs => { id => 'failed_process' },
);
$te->parse($content);
foreach my $col ($te->rows) {
print ("\t", #$col), "\n";
}
But I am getting the following error:
Can't call method "rows" on an undefined value
With my favourite DOM parser Mojo::DOM from the Mojolicious suite it would look like this:
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';
use Mojo::DOM;
# instantiate with all DATA lines
my $dom = Mojo::DOM->new(do { local $/; <DATA> });
# extract all first column cells
$dom->find('table tr')->each(sub {
my $cell = shift->children->[0];
say $cell->all_text;
});
__DATA__
<h2 id="failed_process">Failed Process</h2>
<table border="1">
...
Output:
pid
p_201211162334
p_201211163423
p_201211166543
After $te->parse($html) you can add something like foreach my $table ($te->tables) { ... } and then get each table's rows with $table->rows. You can also use Data::Dumper to inspect $te.
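A rough sketch of that suggestion, matching the table by its header row only (note that in the question's HTML the failed_process id sits on the <h2>, not on the <table>, so the attribs filter never matches; the URL below is the question's placeholder):
use strict;
use warnings;
use LWP::Simple qw(get);
use HTML::TableExtract;
my $content = get("URL") or die "could not fetch page";
my $te = HTML::TableExtract->new(headers => ['pid']);   # match any table with a "pid" header
$te->parse($content);
for my $table ($te->tables) {
    print "pid\n";
    for my $row ($table->rows) {
        print $row->[0], "\n";                          # only the pid column was extracted
    }
}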

How to rearrange html content with HTML::Treebuilder

I'm writing a script to rearrange HTML content and I'm stuck on two problems. I have this HTML structure: movie titles and release years with thumbnails, grouped in 5 columns. I want to generate new HTML files with the movies grouped by decade, from 2011 back to 1911, e.g. present-2011, 2010-2001, 2000-1991, etc.
<table>
<tr>
<td class="basic" valign="top">
<a href="details/267226.html" title="" id="thumbimage">
<img src="images/267226f.jpg"/>
</a>
<br/>Cowboys & Aliens  (2011)
</td>
<td class="basic" valign="top">
<a href="details/267185.html" title="" id="thumbimage">
<img src="images/267185f.jpg"/>
</a>
<br/>The Hangover Part II  (2011)
</td>
<td class="basic" valign="top">
<a href="details/267138.html" title="" id="thumbimage">
<img src="images/267138f.jpg"/>
</a>
<br/>Friends With Benefits  (2011)
</td>
<td class="basic" valign="top">
<a href="details/266870.html" title="" id="thumbimage">
<img src="images/266870f.jpg"/>
</a>
<br/>Beauty And The Beast  (1991)
</td>
<td class="basic" valign="top">
<a href="details/266846.html" title="" id="thumbimage">
<img src="images/266846f.jpg"/>
</a>
<br/>The Fox And The Hound  (1981)
</td>
</tr>
......
</table>
The one problem I have no idea how to solve is that after removing the movies that don't match the decade I'm left with empty 'tr' tags and thumbnail positions, and I don't know how to rearrange the remaining cells back into rows of 5 titles each. The other problem is how to process every decade with one call of the script. Thanks.
use autodie;
use strict;
use warnings;
use File::Slurp;
use HTML::TreeBuilder;
my $tree = HTML::TreeBuilder->new_from_file( 'test.html' );
for my $h ( $tree->look_down( class => 'basic' ) ) {
edit_links( $h );
my ($year) = ($h->as_text =~ /.*?\((\d+)\).*/);
if ($year > 2010 or $year < 2001) {
$h->detach;
write_file( "decades/2010-2001.html", \$tree->as_HTML('<>&',' ',{}), "\n" );
}
}
sub edit_links {
my $h = shift;
for my $link ( $h->find_by_tag_name( 'a' ) ) {
my $href = '../'.$link->attr( 'href' );
$link->attr( 'href', $href );
}
for my $link ( $h->find_by_tag_name( 'img' ) ) {
my $src = '../'.$link->attr( 'src' );
$link->attr( 'src', $src );
}
}
The approach below should do what you wanted in the question. While processing the HTML file, the hash %decade is set up, each key being the ending year of a decade and each value an arrayref of the matching cells.
A second loop traverses the hash and writes a file for each decade, wrapping every group of 5 cells in a <tr> tag.
use strict;
use HTML::TreeBuilder;
use File::Slurp;
use List::MoreUtils qw(part);
my $tree = HTML::TreeBuilder->new_from_file('test.html');
my %decade = ();
for my $h ( $tree->look_down( class => 'basic' ) ) {
edit_links( $h );
my ($year) = ($h->as_text =~ /.*?\((\d+)\).*/);
my $dec = (int($year/10) + 1) * 10;
$decade{$dec} ||= [];
push @{$decade{$dec}}, $h;
}
for my $dec (sort { $b <=> $a } keys %decade) {
my $filename = "decades/" . $dec . "-" . ($dec - 9) . ".html";
my $idx = 0;
my @items = map { $_->as_HTML('<>&',' ',{}) } @{ $decade{$dec} };
my $contents = join('',
'<table>',
(map { "<tr>#$_</tr>" } part { int($idx++ / 5) } #items),
'</table>');
write_file( $filename, $contents);
}
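For clarity, part from List::MoreUtils splits the list according to the value returned by the block, so int($idx++ / 5) sends items 0-4 into the first partition, 5-9 into the second, and so on; each partition then becomes one <tr> of up to 5 cells.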
...