Progress indicator for Perl LWP POST upload - perl

I'm working on a Perl script which uploads big files with a POST request. My question is whether it's possible to show some kind of progress output, because uploading big files can take some time on my internet connection.
I mean something like a status bar:
$| = 1;
print "\r|----------> | 33%";
print "\r|--------------------> | 66%";
print "\r|------------------------------| 100%\n";
Here's my upload code:
my $ua = LWP::UserAgent->new();
$file = "my_big_holyday_vid.mp4";
$user = "username";
$pass = "password";

print "starting Upload...\n";

$res = $ua->post(
    "http://$server",
    Content_Type => 'form-data',
    Content      => [
        fn       => [ "$file" => $file ],
        username => $user,
        password => $pass,
    ],
);

print "Upload complete!\n";

If you look at the documentation for HTTP::Request::Common you will see that, if you set $HTTP::Request::Common::DYNAMIC_FILE_UPLOAD to a true value, then the request object's content method will provide a callback that is used to fetch the data in chunks.
Normally this is called each time more data is needed for upload, but you can wrap it in your own subroutine to monitor the progress of the upload.
The program below gives an example. As you can see, the HTTP::Request object is created (I have assumed that the fn field should be just [$file]) and the content method is used to fetch the callback subroutine.
The subroutine wrapper just calls $callback in the first line to fetch the next data chunk, and returns it in the last line, just as $callback itself would do. Between these two lines you can add what you like, as long as it doesn't interfere with passing the chunk back to LWP. In this case I have printed the size of each chunk together with the percentage upload so far on each call.
For the purpose of percentage calculations, the full size of the file is accessible as $req->header('content-length'), which is more correct than using -s on the file.
Also, the final iteration can be detected if necessary, as the callback will return a zero-length chunk.
Note that this is untested except as far as it compiles and does roughly the right thing, as I have no internet service available that expects a file upload.
use strict;
use warnings;

use LWP;
use HTTP::Request::Common;

$HTTP::Request::Common::DYNAMIC_FILE_UPLOAD = 1;

my $ua = LWP::UserAgent->new;

my $server = 'example.com';
my $file   = 'my_big_holyday_vid.mp4';
my ($user, $pass) = qw/ username password /;

print "Starting Upload...\n";

my $req = POST "http://$server",
    Content_Type => 'form-data',
    Content      => [
        fn       => [$file],
        username => $user,
        password => $pass,
    ];

my $total;
my $callback = $req->content;
my $size     = $req->header('content-length');
$req->content(\&wrapper);

my $resp = $ua->request($req);

sub wrapper {
    my $chunk = $callback->();
    if ($chunk) {
        my $length = length $chunk;
        $total += $length;
        printf "%+5d = %5.1f%%\n", $length, $total / $size * 100;
    }
    else {
        print "Completed\n";
    }
    $chunk;
}
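To get the \r-style progress bar from the question instead of one line per chunk, the wrapper above could redraw the bar on each call. A minimal sketch (the 50-character bar width is an arbitrary choice):

sub wrapper {
    my $chunk = $callback->();
    if ($chunk) {
        $total += length $chunk;
        my $width = 50;                                 # arbitrary bar width
        my $done  = int($width * $total / $size);
        local $| = 1;                                   # flush STDOUT so the bar updates in place
        printf "\r|%s%s| %3.0f%%", '-' x $done, ' ' x ($width - $done), $total / $size * 100;
    }
    else {
        print "\n";
    }
    $chunk;
}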

Related

Web collection from Google data page

I am trying to collect the data from this page:
https://datastudio.google.com/reporting/c07bc5cf-5c09-4156-a903-3e7acd02721a/page/ql6IC
I usually use Perl/LWP to GET the page and then parse it, but this page does not return the visible elements, just the initial Google goo.
Looking to grab the Number of Confirmed Cases and the date updated at the bottom of the page.
Thanks in advance!
Here is an example of how to access the data after JavaScript has modified the DOM, using Selenium::Chrome:
use strict;
use warnings;
use Selenium::Chrome;
# Enter your driver path here. See https://sites.google.com/a/chromium.org/chromedriver/
# for download instructions
my $driver_path = '/home/hakon/chromedriver/chromedriver';
my $driver = Selenium::Chrome->new( binary => $driver_path );
$driver->get("https://datastudio.google.com/reporting/c07bc5cf-5c09-4156-a903-3e7acd02721a/page/ql6IC");
sleep 5; # modify this sleep period such that the page is fully loaded before continuing
my $elem = $driver->find_element_by_class_name('tableBody');
# Do something with the table..
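From there, the visible text of the element can be pulled out with get_text() from Selenium::Remote::WebElement, for example as below. How useful the raw text is depends on how Data Studio renders the table, so treat this as a sketch:

if ($elem) {
    my $text = $elem->get_text();    # visible text of the tableBody element
    print "$text\n";
}
else {
    warn "tableBody element not found\n";
}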
Update
To avoid hard-coding the sleep period above, you can use the wait_until() function from Selenium::Waiter, for example:
use feature qw(say);
use strict;
use warnings;
use Selenium::Chrome;
use Selenium::Waiter;
# Enter your driver path here. See https://sites.google.com/a/chromium.org/chromedriver/
# for download instructions
my $driver_path = '/home/hakon/chromedriver/chromedriver';
my $driver = Selenium::Chrome->new(
    binary => $driver_path,

    # Avoid printing an error from find_element_by_class_name() when the class
    # is not found; see the wait_until() call below. The error message is of
    # the form:
    #
    #   "Error while executing command: no such element: no such element:
    #    Unable to locate element"
    #
    error_handler => sub { my $msg = $_[1]; die $msg if $msg !~ /\Qno such element\E/ },
);
$driver->get("https://datastudio.google.com/reporting/c07bc5cf-5c09-4156-a903-3e7acd02721a/page/ql6IC");
my $timeouts = $driver->get_timeouts();
say "Current implicit timeout = ", $timeouts->{implicit};
$driver->set_implicit_wait_timeout(0);
say "Updated implicit wait timeout to 0 ms";
my $timeout = 30;
my $start_time = time;
my $elem = wait_until {
$driver->find_element_by_class_name('tableBody')
} timeout => $timeout, interval => 1;
if ( $elem ) {
my $elapsed = time - $start_time;
say "Found element after $elapsed seconds";
}
else {
say "Could not find tableBody element after $timeout seconds";
}

Web-crawler optimization

I am building a basic search engine using the vector-space model, and this is the crawler, which returns 500 URLs and removes the SGML tags from the content. However, it is very slow (it takes more than 30 minutes just to retrieve the URLs). How can I optimize the code? I have used wikipedia.org as an example starting URL.
use warnings;
use LWP::Simple;
use LWP::UserAgent;
use HTTP::Request;
use HTTP::Response;
use HTML::LinkExtor;

my $starting_url = 'http://en.wikipedia.org/wiki/Main_Page';
my @urls = $starting_url;
my %alreadyvisited;
my $browser = LWP::UserAgent->new();
$browser->timeout(5);

my $url_count = 0;

while (@urls)
{
    my $url = shift @urls;
    next if $alreadyvisited{$url};    ## check if already visited

    my $request = HTTP::Request->new(GET => $url);
    my $response = $browser->request($request);

    if ($response->is_error())
    {
        print $response->status_line, "\n";    ## check for bad URL
    }
    my $contents = $response->content();       ## get contents from URL
    push @c, $contents;
    my @text = &RemoveSGMLtags(\@c);
    #print "@text\n";

    $alreadyvisited{$url} = 1;    ## store URL in hash for future reference
    $url_count++;
    print "$url\n";

    if ($url_count == 500)        ## exit if number of crawled pages exceed limit
    {
        exit 0;
    }

    my ($page_parser) = HTML::LinkExtor->new(undef, $url);
    $page_parser->parse($contents)->eof;    ## parse page contents
    my @links = $page_parser->links;

    foreach my $link (@links)
    {
        $test = $$link[2];
        $test =~ s!^https?://(?:www\.)?!!i;
        $test =~ s!/.*!!;
        $test =~ s/[\?\#\:].*//;
        if ($test eq "en.wikipedia.org")    ## check if URL belongs to unt domain
        {
            next if ($$link[2] =~ m/^mailto/);
            next if ($$link[2] =~ m/s?html?|xml|asp|pl|css|jpg|gif|pdf|png|jpeg/);
            push @urls, $$link[2];
        }
    }
    sleep 1;
}

sub RemoveSGMLtags
{
    my ($input) = @_;
    my @INPUTFILEcontent = @$input;
    my $j; my @raw_text;

    for ($j=0; $j<$#INPUTFILEcontent; $j++)
    {
        my $INPUTFILEvalue = $INPUTFILEcontent[$j];
        use HTML::Parse;
        use HTML::FormatText;

        my $plain_text = HTML::FormatText->new->format(parse_html($INPUTFILEvalue));
        push @raw_text, ($plain_text);
    }
    return @raw_text;
}
Always use strict.
Never use the ampersand & on subroutine calls.
Use URI to manipulate URLs (there is a short sketch of this after the Mechanize example below).
You have a sleep 1 in there, which I assume is to avoid hammering the site too much, and that is good. But the bottleneck in almost any web-based application is the internet itself, and you won't be able to make your program any faster without requesting more from the site. That means removing your sleep and perhaps making parallel requests to the server using, for instance, LWP::Parallel::RobotUA. Is that a route you want to take?
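As a rough illustration of the parallel-fetch idea, here is a sketch using Parallel::ForkManager rather than the LWP::Parallel::RobotUA mentioned above (a sketch only; shared crawl state such as %alreadyvisited would still have to live in the parent process):

use strict;
use warnings;
use LWP::UserAgent;
use Parallel::ForkManager;

my @batch = qw(
    http://en.wikipedia.org/wiki/Main_Page
    http://en.wikipedia.org/wiki/Perl
);

my $pm = Parallel::ForkManager->new(5);    # at most 5 concurrent fetches

# Collect each child's result back in the parent
$pm->run_on_finish(sub {
    my ($pid, $exit, $ident, $signal, $core, $data) = @_;
    printf "%s returned %d bytes\n", $ident, length($data->{content} // '');
});

for my $url (@batch) {
    $pm->start($url) and next;             # parent: spawn a child and move on
    my $res = LWP::UserAgent->new(timeout => 5)->get($url);
    $pm->finish(0, { content => $res->is_success ? $res->decoded_content : '' });
}
$pm->wait_all_children;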
Use WWW::Mechanize, which handles all the URL parsing and extraction for you. It is much easier than all the link parsing you're dealing with, and it was created specifically for the sort of thing you're doing. It's a subclass of LWP::UserAgent, so you should be able to change your LWP::UserAgent to WWW::Mechanize without changing any other code except for the link extraction, which becomes simply:
my $mech = WWW::Mechanize->new();
$mech->get( 'someurl.com' );
my @links = $mech->links;
and then @links is an array of WWW::Mechanize::Link objects.
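Putting the URI advice together with WWW::Mechanize, the domain check and link filtering from the original script could look roughly like this (a sketch, not a drop-in replacement for the whole crawler):

use strict;
use warnings;
use URI;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new( autocheck => 0 );
$mech->get('http://en.wikipedia.org/wiki/Main_Page');

for my $link ( $mech->links ) {
    my $uri = $link->url_abs;                        # WWW::Mechanize::Link gives back a URI object
    next unless $uri->scheme =~ /^https?$/;          # skip mailto:, ftp:, ...
    next unless $uri->host eq 'en.wikipedia.org';    # stay on the same domain
    print $uri->canonical, "\n";                     # or push onto the work queue
}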

using Perl to scrape a website

I am interested in writing a Perl script that goes to the following link and extracts the number 1975: https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3ACalifornia%20%2Bevent_place_level_2%3A%22San%20Diego%22%20%2Bbirth_year%3A1923-1923~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219
That page shows the number of white men born in 1923 who lived in San Diego County, California in 1940. I am trying to do this in a loop structure to generalize over multiple counties and birth years.
In the file locations.txt, I put the list of counties, such as San Diego County.
The current code runs, but instead of the number 1975 it displays "unknown". The number 1975 should end up in $val.
I would very much appreciate any help!
#!/usr/bin/perl
use strict;
use LWP::Simple;

open(L, "locations26.txt");

my $url = 'https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3A%22California%22%20%2Bevent_place_level_2%3A%22%LOCATION%%22%20%2Bbirth_year%3A%YEAR%-%YEAR%~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219';

open(O, ">out26.txt");
my $oldh = select(O);
$| = 1;
select($oldh);

while (my $location = <L>) {
    chomp($location);
    $location =~ s/ /+/g;
    foreach my $year (1923..1923) {
        my $u = $url;
        $u =~ s/%LOCATION%/$location/;
        $u =~ s/%YEAR%/$year/;
        #print "$u\n";
        my $content = get($u);
        my $val = 'unknown';
        if ($content =~ / of .strong.([0-9,]+)..strong. /) {
            $val = $1;
        }
        $val =~ s/,//g;
        $location =~ s/\+/ /g;
        print "'$location',$year,$val\n";
        print O "'$location',$year,$val\n";
    }
}
Update: The API is not a viable solution. I have been in contact with the site developer. The API does not apply to that part of the webpage. Hence, any solution pertaining to JSON will not be applicable.
It would appear that your data is generated by Javascript and thus LWP cannot help you. That said, it seems that the site you are interested in has a developer API: https://familysearch.org/developers/
I recommend using Mojo::URL to construct your query and either Mojo::DOM or Mojo::JSON to parse XML or JSON results respectively. Of course other modules will work too, but these tools are very nicely integrated and let you get started quickly.
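For example, a hedged sketch of that approach with Mojo::UserAgent and Mojo::URL (the endpoint and its query parameters here are placeholders for illustration, not the real FamilySearch API):

use strict;
use warnings;
use Mojo::UserAgent;
use Mojo::URL;

my $ua = Mojo::UserAgent->new;

# Hypothetical endpoint and parameters, purely for illustration
my $url = Mojo::URL->new('https://api.example.org/search');
$url->query->param(place => 'San Diego');
$url->query->param(year  => 1923);

my $res  = $ua->get($url)->res;    # HTTP response
my $data = $res->json;             # decoded JSON as a Perl structure (undef if the body is not JSON)
print "$data->{totalHits}\n" if $data && exists $data->{totalHits};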
You could use WWW::Mechanize::Firefox to process any site that can be loaded by Firefox.
http://metacpan.org/pod/WWW::Mechanize::Firefox::Examples
You have to install the MozRepl plugin, and then you will be able to process the web page content via this module. Basically you will "remotely control" the browser.
Here is an example (possibly working):
use strict;
use warnings;
use WWW::Mechanize::Firefox;

my $mech = WWW::Mechanize::Firefox->new(
    activate => 1,    # bring the tab to the foreground
);
$mech->get('https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3ACalifornia%20%2Bevent_place_level_2%3A%22San%20Diego%22%20%2Bbirth_year%3A1923-1923~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219', ':content_file' => 'main.html');

my $retries = 10;
while ($retries-- and ! $mech->is_visible( xpath => '//*[@class="form-submit"]' )) {
    print "Sleep until we find the thing\n";
    sleep 2;
};
die "Timeout" if 0 > $retries;

# fill out the search form
my @forms = $mech->forms();
#<input id="census_bp" name="birth_place" type="text" tabindex="0"/>
# A selector prefixed with '#' must match the id attribute of the input.
# A selector prefixed with '.' matches the class attribute.
# A selector prefixed with '^' or with no prefix matches the name attribute.
$mech->field( birth_place => 'value_for_birth_place' );

# Click on the submit
$mech->click({xpath => '//*[@class="form-submit"]'});
If you use your browser's development tools, you can clearly see the JSON request that the page you link to uses to get the data you're looking for.
This program should do what you want. I've added a bunch of comments for readability and explanation, as well as made a few other changes.
use warnings;
use strict;

use LWP::UserAgent;
use JSON;
use CGI qw/escape/;

# Create an LWP User-Agent object for sending HTTP requests.
my $ua = LWP::UserAgent->new;

# Open data files
open(L, 'locations26.txt') or die "Can't open locations: $!";
open(O, '>', 'out26.txt') or die "Can't open output file: $!";

# Enable autoflush on the output file handle
my $oldh = select(O);
$| = 1;
select($oldh);

while (my $location = <L>) {
    # This regular expression is like chomp, but removes both Windows and
    # *nix line-endings, regardless of the system the script is running on.
    $location =~ s/[\r\n]//g;

    foreach my $year (1923..1923) {
        # If you need to add quotes around the location, use "\"$location\"".
        my %args = (LOCATION => $location, YEAR => $year);

        my $url = 'https://familysearch.org/proxy?uri=https%3A%2F%2Ffamilysearch.org%2Fsearch%2Frecords%3Fcount%3D20%26query%3D%252Bevent_place_level_1%253ACalifornia%2520%252Bevent_place_level_2%253A^LOCATION^%2520%252Bbirth_year%253A^YEAR^-^YEAR^~%2520%252Bgender%253AM%2520%252Brace%253AWhite%26collection_id%3D2000219';

        # Note that values need to be doubly-escaped because of the
        # weird way their website is set up (the "/proxy" URL we're
        # requesting is subsequently loading some *other* URL which
        # is provided to "/proxy" as a URL-encoded URL).
        #
        # This regular expression replaces any ^WHATEVER^ in the URL
        # with the double-URL-encoded value of WHATEVER in %args.
        # The /e flag causes the replacement to be evaluated as Perl
        # code. This way I can look data up in a hash and do URL-encoding
        # as part of the regular expression without an extra step.
        $url =~ s/\^([A-Z]+)\^/escape(escape($args{$1}))/ge;
        #print "$url\n";

        # Create an HTTP request object for this URL.
        my $request = HTTP::Request->new(GET => $url);

        # This HTTP header is required. The server outputs garbage if
        # it's not present.
        $request->push_header('Content-Type' => 'application/json');

        # Send the request and check for an error from the server.
        my $response = $ua->request($request);
        die "Error " . $response->code if !$response->is_success;

        # The response should be JSON.
        my $obj = from_json($response->content);

        my $str = "$args{LOCATION},$args{YEAR},$obj->{totalHits}\n";
        print O $str;
        print $str;
    }
}
What about this simple script without Firefox? I investigated the site a bit to understand how it works, and I saw some JSON requests with the Firebug Firefox addon, so I knew which URL to query to get the relevant data. Here is the code:
use strict;
use warnings;

use JSON::XS;
use LWP::UserAgent;
use HTTP::Request;

my $ua = LWP::UserAgent->new();

open my $fh, '<', 'locations2.txt' or die $!;
open my $fh2, '>>', 'out2.txt' or die $!;

# iterate over locations from locations2.txt file
while (my $place = <$fh>) {
    # remove line ending
    chomp $place;

    # iterate over years
    foreach my $year (1923..1925) {
        # build the URL with the variables
        my $url = "https://familysearch.org/proxy?uri=https%3A%2F%2Ffamilysearch.org%2Fsearch%2Frecords%3Fcount%3D20%26query%3D%252Bevent_place_level_1%253ACalifornia%2520%252Bevent_place_level_2%253A%2522$place%2522%2520%252Bbirth_year%253A$year-$year~%2520%252Bgender%253AM%2520%252Brace%253AWhite%26collection_id%3D2000219";
        my $request = HTTP::Request->new(GET => $url);

        # fake the referer (where we come from)
        $request->header('Referer', 'https://familysearch.org/search/collection/results');

        # set the expected format header for the response as JSON
        $request->header('content_type', 'application/json');

        my $response = $ua->request($request);
        if ($response->code == 200) {
            # this line converts the JSON to a Perl hash
            my $hash = decode_json $response->content;
            my $val = $hash->{totalHits};
            print $fh2 "year $year, place $place : $val\n";
        }
        else {
            die $response->status_line;
        }
    }
}

END { close $fh; close $fh2; }
This seems to do what you need. Instead of waiting for the disappearance of the hourglass it waits - more obviously I think - for the appearance of the text node you're interested in.
use 5.010;
use warnings;

use WWW::Mechanize::Firefox;

STDOUT->autoflush;

my $url = 'https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3ACalifornia%20%2Bevent_place_level_2%3A%22San%20Diego%22%20%2Bbirth_year%3A1923-1923~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219';

my $mech = WWW::Mechanize::Firefox->new(tab => qr/FamilySearch\.org/, create => 1, activate => 1);
$mech->autoclose_tab(0);
$mech->get('about:blank');
$mech->get($url);

my $text;
while () {
    sleep 1;
    $text = $mech->xpath('//p[@class="num-search-results"]/text()', maybe => 1);
    last if defined $text;
}

my $results = $text->{nodeValue};
say $results;

if ($results =~ /([\d,]+)\s+results/) {
    (my $n = $1) =~ tr/,//d;
    say $n;
}
output
1-20 of 1,975 results
1975
Update
This update is with special thanks to @nandhp, who inspired me to look at the underlying data server that produces the data in JSON format.
Rather than making a request via the superfluous https://familysearch.org/proxy, this code accesses the server directly at https://familysearch.org/search/records, decodes the JSON response, and pulls the required data out of the resulting structure. This has the advantage of both speed (the requests are served about once a second, more than ten times faster than the equivalent request through the basic web site) and stability (as you note, the site is very flaky; in contrast, I have never seen an error using this method).
use strict;
use warnings;

use LWP::UserAgent;
use URI;
use JSON;

use autodie;
STDOUT->autoflush;

open my $fh, '<', 'locations26.txt';
my @locations = <$fh>;
chomp @locations;

open my $outfh, '>', 'out26.txt';

my $ua = LWP::UserAgent->new;

for my $county (@locations[36, 0..2]) {
    for my $year (1923 .. 1926) {
        my $total = familysearch_info($county, $year);
        print STDOUT "$county,$year,$total\n";
        print $outfh "$county,$year,$total\n";
    }
    print "\n";
}

sub familysearch_info {
    my ($county, $year) = @_;

    my $query = join ' ', (
        '+event_place_level_1:California',
        sprintf('+event_place_level_2:"%s"', $county),
        sprintf('+birth_year:%1$d-%1$d~', $year),
        '+gender:M',
        '+race:White',
    );

    my $url = URI->new('https://familysearch.org/search/records');
    $url->query_form(
        collection_id => 2000219,
        count         => 20,
        query         => $query);

    my $resp = $ua->get($url, 'Content-Type' => 'application/json');
    my $data = decode_json($resp->decoded_content);

    return $data->{totalHits};
}
output
San Diego,1923,1975
San Diego,1924,2004
San Diego,1925,1871
San Diego,1926,1908
Alameda,1923,3577
Alameda,1924,3617
Alameda,1925,3567
Alameda,1926,3464
Alpine,1923,1
Alpine,1924,2
Alpine,1925,0
Alpine,1926,1
Amador,1923,222
Amador,1924,248
Amador,1925,134
Amador,1926,67
I do not know how to post revised code from the solution above.
This code does not (yet) compile correctly, but I have made some essential updates that definitely head in that direction.
I would very much appreciate help with this updated code. I do not know how to post this code and this follow-up in a way that appeases the lords who run this site.
It gets stuck at the sleep line. Any advice on how to proceed past it would be much appreciated!
use strict;
use warnings;
use WWW::Mechanize::Firefox;

my $mech = WWW::Mechanize::Firefox->new(
    activate => 1,    # bring the tab to the foreground
);
$mech->get('https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3ACalifornia%20%2Bevent_place_level_2%3A%22San%20Diego%22%20%2Bbirth_year%3A1923-1923~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219', ':content_file' => 'main.html', synchronize => 0);

my $retries = 10;
while ($retries-- and $mech->is_visible( xpath => '//*[@id="hourglass"]' )) {
    print "Sleep until we find the thing\n";
    sleep 2;
};
die "Timeout while waiting for application" if 0 > $retries;

# Now the hourglass is not visible anymore

# fill out the search form
my @forms = $mech->forms();
#<input id="census_bp" name="birth_place" type="text" tabindex="0"/>
# A selector prefixed with '#' must match the id attribute of the input.
# A selector prefixed with '.' matches the class attribute.
# A selector prefixed with '^' or with no prefix matches the name attribute.
$mech->field( birth_place => 'value_for_birth_place' );

# Click on the submit
$mech->click({xpath => '//*[@class="form-submit"]'});
You should set the current form before accessing a field:
"Given the name of a field, set its value to the value specified. This applies to the current form (as set by the "form_name()" or "form_number()" method or defaulting to the first form on the page)."
$mech->form_name( 'census-search' );
$mech->field( birth_place => 'value_for_birth_place' );
Sorry, I am not able to try this code out, and thanks for opening a new question for this follow-up.

How do I upload a file from the local machine to Sharepoint using Perl SOAP::Lite?

#use SOAP::Lite ( +trace => all, maptype => {} );
use SOAP::Lite maptype => {};
use LWP::UserAgent;
use HTTP::Request::Common;

#credentials' file
require "c:\\test\\pass.pl";

my $userAgent = LWP::UserAgent->new(keep_alive => 1);

sub SOAP::Transport::HTTP::Client::get_basic_credentials {
    return $username => $password;
}

my $soap
    = SOAP::Lite
    ->uri('<mysite>/_vti_bin/lists.asmx')
    ->on_action(sub {join '/', 'http://schemas.microsoft.com/sharepoint/soap/CopyIntoItemsLocal', $_[1]})
    ->proxy('<mysite>/_layouts/viewlsts.aspx?BaseType=0', keep_alive => 1);

# my $method = SOAP::Data->name('CopyIntoItemsLocal')
#     ->attr({xmlns => 'http://schemas.microsoft.com/sharepoint/soap/'});
# my @params = (SOAP::Data->name(SourceUrl => $source),
#     SOAP::Data->name(DestinationUrl => $destination) );
# print $soap->call($method => @params)->result;

my $fileName = 'c:\test\abc.txt';
my $destDir = "<mysite>/Lists/sharepoint1/";

#load and encode Data
my $data;
open(FILE, $fileName) or die "$!";
#read in chunks of 57 bytes to ensure no padding in the middle (Padding means extra space for large files)
while (read(FILE, $buf, 60 * 57)) {
    $data .= encode_base64($buf);
}
close(FILE);

#make the call
print "uploading $fileName...";
$lists = $soap->GetList();

my $method = SOAP::Data->name('CopyIntoItemsLocal')->attr({xmlns => 'http://schemas.microsoft.com/sharepoint/soap/'});
my @params = (
    SOAP::Data->name('SourceUrl')->value($fileName)->type(''),
    SOAP::Data->name('DestinationUrls')->type('')->value(
        \SOAP::Data->name('string')->type('')->value($destDir . $fileName)
    ),
    SOAP::Data->name('Fields')->type('')->value(
        \SOAP::Data->name('FieldInformation')->type('')->attr({Type => 'File'})->value('')
    ),
    SOAP::Data->name('Stream')->value("$data")->type('')
);

#print Results
print $soap->call($method => @params)->result;
#print $response->headerof('//CopyResult')->attr->{ErrorCode};
#use SOAP::Lite ( +trace => all, maptype => {} );
use SOAP::Lite maptype => {};
use LWP::UserAgent;
use HTTP::Request::Common;
use MIME::Base64 qw(encode_base64);

require "c:\\test\\pass.pl";

my $userAgent = LWP::UserAgent->new(keep_alive => 1);

#setup connection
sub SOAP::Transport::HTTP::Client::get_basic_credentials {
    return $username => $password;
}

my $soap = SOAP::Lite
    -> uri('http://<mysite>')
    -> on_action( sub{ join '/', 'http://schemas.microsoft.com/sharepoint/soap', $_[1] })
    -> proxy('http://<mysite>/_vti_bin/lists.asmx', keep_alive => 1);

$lists = $soap->GetListCollection();
quit(1, $lists->faultstring()) if defined $lists->fault();

my @result = $lists->dataof('//GetListCollectionResult/Lists/List');
foreach my $data (@result) {
    my $attr = $data->attr;
    foreach my $a (qw'Title Description DefaultViewUrl Name ID WebId ItemCount') {
        printf "%-16s %s\n", $a, $attr->{$a};
    }
    print "\n";
}
The authentication seems to be working. At first I thought that the GetListCollection web service was working, since when I made a call using that web service it returned a page, but I think the call is just returning the page I specified in the proxy argument.
I am able to get the collection of lists on the particular SharePoint site.
I have used GetListCollection. However, I did not really understand the code which prints the list; I just copied it from squish.net. Now I am trying to use the CopyIntoItemsLocal web service.
We have a repository of files on one server (SVN) and I have to write a Perl script which, when executed, will copy the files and directories from SVN to SharePoint along with the directory structure.
I will appreciate any help or tips. Since it is a big task, I am doing it in modules.
I would start by using soapUI (formerly by Eviware, now by SmartBear), an open-source SOAP testing tool. This will allow you to send SOAP transactions back and forth without any other user interface. Once you are sure your transactions work and you can parse the data to get what you want, then I would move on to using Perl to automate those transactions.
This helps you eliminate errors in your requests early on, figure out how to parse responses, and familiarize yourself with the API.

How to use Net::Twitter::Stream to read stream from API?

I'm trying to use the Net::Twitter::Stream Perl module from CPAN to read the stream from sample.json. I believe this is the correct module, though the way they crafted it allows one to process the filter stream. I've modified it as such, but I must be missing something, as I don't get any data in return. I establish a connection but nothing comes back. I'm guessing this should be an easy fix, but I'm a touch new to this part of Perl.
package Net::Twitter::Stream;

use strict;
use warnings;

use IO::Socket;
use MIME::Base64;
use JSON;
use IO::Socket::SSL;
use LibNewsStand qw(%cf);
use utf8;

our $VERSION = '0.27';
1;

=head1 NAME

Using Twitter streaming api.

=head1 SYNOPSIS

  use Net::Twitter::Stream;

  Net::Twitter::Stream->new ( user => $username, pass => $password,
                              callback => \&got_tweet,
                              track => 'perl,tinychat,emacs',
                              follow => '27712481,14252288,972651' );

  sub got_tweet {
      my ( $tweet, $json ) = @_;   # a hash containing the tweet
                                   # and the original json
      print "By: $tweet->{user}{screen_name}\n";
      print "Message: $tweet->{text}\n";
  }

=head1 DESCRIPTION

The streaming version of the Twitter API allows near-realtime access to
various subsets of Twitter public statuses.

The /1/status/filter.json api call can be used to track up to 200 keywords
and to follow 200 users.

HTTP Basic authentication is supported (no OAuth yet) so you will need
a twitter account to connect.

Only the JSON format is supported. Twitter may deprecate XML.

More details at: http://dev.twitter.com/pages/streaming_api

Options

  user, pass: required, twitter account user/password
  callback: required, a subroutine called on each received tweet

perl@redmond5.com
@martinredmond

=head1 UPDATES

https fix: iwan standley <iwan@slebog.net>

=cut

sub new {
    my $class = shift;
    my %args = @_;
    die "Usage: Net::Twitter::Stream->new ( user => 'user', pass => 'pass', callback => \&got_tweet_cb )"
        unless $args{user} && $args{pass} && $args{callback};

    my $self = bless {};
    $self->{user} = $args{user};
    $self->{pass} = $args{pass};
    $self->{got_tweet} = $args{callback};
    $self->{connection_closed} = $args{connection_closed_cb}
        if $args{connection_closed_cb};

    my $content = "follow=$args{follow}" if $args{follow};
    $content = "track=$args{track}" if $args{track};
    $content = "follow=$args{follow}&track=$args{track}\r\n" if $args{track} && $args{follow};

    my $auth = encode_base64 ( "$args{user}:$args{pass}" );
    chomp $auth;

    my $cl = length $content;
    my $req = <<EOF;
GET /1/statuses/sample.json HTTP/1.1\r
Authorization: Basic $auth\r
Host: stream.twitter.com\r
User-Agent: net-twitter-stream/0.1\r
Content-Type: application/x-www-form-urlencoded\r
Content-Length: $cl\r
\r
EOF

    my $sock = IO::Socket::INET->new ( PeerAddr => 'stream.twitter.com:https' );
    #$sock->print ( "$req$content" );
    while ( my $l = $sock->getline ) {
        last if $l =~ /^\s*$/;
    }
    while ( my $l = $sock->getline ) {
        next if $l =~ /^\s*$/;      # skip empty lines
        $l =~ s/[^a-fA-F0-9]//g;    # stop hex from complaining about \r
        my $jsonlen = hex ( $l );
        last if $jsonlen == 0;
        eval {
            my $json;
            my $len = $sock->read ( $json, $jsonlen );
            my $o = from_json ( $json );
            $self->{got_tweet} ( $o, $json );
        };
    }
    $self->{connection_closed} ( $sock ) if $self->{connection_closed};
}
You don't need to post the source; we can pretty much figure it out. You should try one of the examples, but my advice is to use AnyEvent::Twitter::Stream, which comes with a good example that you only have to modify a bit to get it running.
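A minimal sketch of the sample-stream case with AnyEvent::Twitter::Stream might look like the following (it assumes the HTTP Basic authentication of that era, and the credentials are placeholders):

use strict;
use warnings;
use AnyEvent;
use AnyEvent::Twitter::Stream;

my $done = AE::cv;

my $listener = AnyEvent::Twitter::Stream->new(
    username => 'XXX',          # placeholder credentials
    password => 'YYY',
    method   => 'sample',       # the sample stream; use 'filter' with track => '...' instead
    on_tweet => sub {
        my $tweet = shift;      # decoded JSON as a hash reference
        print "$tweet->{user}{screen_name}: $tweet->{text}\n";
    },
    on_error => sub {
        warn "error: $_[0]\n";
        $done->send;
    },
    on_eof   => sub { $done->send },
);

$done->recv;    # run the event loop until the stream ends or errors out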
sub parse_from_twitter_stream {
    my $user = 'XXX';
    my $password = 'YYYY';
    my $stream = Net::Twitter::Stream->new( user => $user, pass => $password,
                                            callback => \&got_tweet,
                                            connection_closed_cb => \&connection_closed,
                                            track => SEARCH_TERM );

    sub connection_closed {
        sleep 1;
        warn "Connection to Twitter closed";
        parse_from_twitter_stream();    # This isn't working for me -- can't get connection to reopen after disconnect
    }

    sub got_tweet {
        my ( $tweet, $json ) = @_;    # a hash containing the tweet
        # Do stuff here
    }
}