How can I write a Perl script to automatically take screenshots?

I want a platform independent utility to take screenshots (not just within the browser).
The utility would be able to take screenshots at fixed intervals of time and be easily configurable by the user in terms of:
the time between successive shots,
the format in which the shots are stored,
until when (time, event) the script should run, etc.
Since I need platform independence, I think Perl is a good choice.
a. Before I start out, I want to know whether a similar thing already exists, so I can start from there.
Searching CPAN gives me these two relevant results:
Imager-Screenshot-0.009
Imager-Search-1.00
From those pages, the first one looks easier.
b. Which one of these Perl modules should I use?

Taking a look at the sources of both, Imager::Search isn't much more than a wrapper to Imager::Screenshot.
Here's the constructor:
sub new {
    my $class = shift;

    my @params = ();
    @params = @{ shift() } if _ARRAY0( $_[0] );

    my $image = Imager::Screenshot::screenshot( @params );
    unless ( _INSTANCE( $image, 'Imager' ) ) {
        Carp::croak('Failed to capture screenshot');
    }

    # Hand off to the parent class
    return $class->SUPER::new( image => $image, @_ );
}
Given that Imager::Search does not really extend Imager::Screenshot much more, I'd say you're looking at two modules that are essentially the same.
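For what it's worth, the interval-capture part of the original question is a small loop on top of that. A minimal sketch, assuming Imager::Screenshot's default whole-screen capture works on your platform, with placeholder values for the interval, format, and shot count:

use strict;
use warnings;
use Imager::Screenshot 'screenshot';

my $interval = 10;      # seconds between successive shots
my $format   = 'png';   # any format Imager can write
my $count    = 5;       # stop condition; could also be a timestamp or event

for my $n ( 1 .. $count ) {
    my $img = screenshot()    # captures the whole screen by default
        or die Imager->errstr;
    $img->write( file => "shot_$n.$format" )
        or die $img->errstr;
    sleep $interval;
}

Each knob the question asks for (interval, format, duration) is just a variable here, so exposing them via Getopt::Long or a config file is straightforward.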

Related

Perl Unit Testing -- is the subroutine testable?

I have been reading up on and exploring the concepts of unit testing and test-driven development in Perl. I'm looking into how I can incorporate the testing concepts into my development. Say I have a Perl subroutine here:
sub perforce_filelist {
    my ($date) = @_;

    my $path  = "//depot/project/design/...module.sv";
    my $p4cmd = "p4 files -e $path\@$date,\@now";

    my @filelist = `$p4cmd`;

    if (@filelist) {
        chomp @filelist;
        return @filelist;
    }
    else {
        print "No new files!";
        exit 1;
    }
}
The subroutine executes a Perforce command and stores the output of that command (which is a list of files) into the @filelist array. Is this subroutine testable? Would testing whether the returned @filelist is empty be useful? I am trying to teach myself how to think like a unit test developer.
There are a couple of things that make testing that perforce_filelist subroutine more difficult than it needs to be:
The p4 path is hard-coded
The p4 command is constructed inside the subroutine
The p4 command is fixed (so, it's always the first p4 in the path)
You output directly from the subroutine
You exit from inside the subroutine
But, your subroutine's responsibility is to get a filelist and return it. Anything you do outside of that makes it harder to test. If you can't change this because you don't have control of that, you can write stuff like this in the future:
#!perl -T

# Now perforce_filelist doesn't have responsibility for
# application logic unrelated to the file list
my @new_files = perforce_filelist( $path, $date );
unless( @new_files ) {
    print "No new files!"; # but also maybe "Illegal command", etc
    exit 1;
}

# Now it's much simpler to see if it's doing its job, and
# people can make their own decisions about what to do with
# no new files.
sub perforce_filelist {
    my ($path, $date) = @_;
    my @filelist = get_p4_files( $path, $date );
}

# Inside testing, you can mock this part to simulate
# both returning a list and returning nothing. You
# get to do this without actually running perforce.
#
# You can also test this part separately from everything
# else (so, not printing or exiting)
sub get_p4_files {
    my ($path, $date) = @_;

    my $command = make_p4_files_command( $path, $date );
    return unless defined $command; # perhaps with some logging

    my @files = `$command`;
    chomp @files;

    return @files;
}

# This is where you can scrub input data to untaint values that might
# not be right. You don't want to pass just anything to the shell.
sub make_p4_files_command {
    my ($path, $date) = @_;

    return unless ...; # validate $path and $date, perhaps with logging

    p4() . " files -e $path\@$date,\@now";
}

# Inside testing, you can set a different command to fake
# output. If you are confident the p4 is working correctly,
# you can assume it is and simulate output with your own
# command. That way you don't hit a production resource.
sub p4 { $ENV{"PERFORCE_COMMAND"} // "p4" }
But, you also have to judge if this level of decomposition is worth it to you. For a personal tool that you use infrequently, it's probably too much work. For something you have to support and lots of people use, it might be worth it. In that case, you might want the official P4Perl API. Those value judgements are up to you. But, having decomposed the problem, making bigger changes (such as using P4Perl) shouldn't be as seismic.
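If you do end up reaching for the official API, the same query looks roughly like this (a sketch based on the P4Perl documentation, not tested against a real depot):

use P4;

my $p4 = P4->new;
$p4->Connect or die "Failed to connect to Perforce";

# Run returns structured results, so there is nothing to chomp
my $files = $p4->Run( 'files', '-e', "$path\@$date,\@now" );

$p4->Disconnect;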
As a side note and not something I'm recommending for this problem, this is the use case for the & and no argument list. In this "crypto context", the argument list to the subroutine is the @_ of the subroutine calling it.
These calls keep passing on the same arguments down the chain, which is annoying to type out and maintain:
my @new_files = perforce_filelist( $path, $date );
my @filelist  = get_p4_files( $path, $date );
my $command   = make_p4_files_command( $path, $date );
With the & and no argument list (not even ()), it passes on the @_ to the next level:
my @new_files = perforce_filelist( $path, $date );
my @filelist  = &get_p4_files;
my $command   = &make_p4_files_command;
Whether it's testable depends a lot on your environment. You need to ask yourself the following questions:
Does the code depend on a production Perforce installation?
Does running the code with random values interfere with production?
Does running the code with the same values over and over again always yield the same results?
Can the external dependency be unavailable sometimes?
Is the external dependency outside of the control of the test?
Some of those things make it very hard (yet not impossible) to run tests for it. Some can be overcome by refactoring the code a little.
It's also important to define what exactly you want to test. A unit test for the function would make sure that it returns the right thing depending on what you put in, but you control the external dependency. An integration test on the other hand would run the external dependency.
Building an integration test for this is easy, but all the questions I have mentioned above apply. And since you have an exit in your code, you cannot really trap that. You would have to put that function in a script and run that and check the exit codes, or use a module like Test::Exit.
You also need to have your Perforce set up in a way that you always get the same results. That might mean to have dates and files there that you control. I don't know how Perforce works, so I cannot tell you how to do that, but in general these things are called fixtures. It's data that you control. For a database your test program would install them before running the tests, so you have a reproducible result.
You also have output to STDOUT, so you need a tool to grab that, too. Test::Output can do that.
use Test::More;
use Test::Output;
use Test::Exit;

# do something to get your function into the test file...
# possibly install fixtures...

# we will fake the whole function for this demonstration
sub perforce_filelist {
    my ($date) = @_;

    if ( $date eq 'today' ) {
        return qw/foo bar baz/;
    }
    else {
        print "No new files!";
        exit 1;
    }
}

stdout_is(
    sub {
        is exit_code( sub { perforce_filelist('yesterday') } ),
            1, "exits with 1 when there are no files";
    },
    "No new files!",
    "... and it prints a message to the screen"
);

my @return_values;
stdout_is(
    sub {
        never_exits_ok(
            sub {
                @return_values = perforce_filelist('today');
            },
            "does not exit when there are files"
        );
    },
    q{},
    "... and there is no output to the screen"
);

is_deeply( \@return_values, [qw/foo bar baz/],
    "... and returns a list of filenames without newlines" );

done_testing;
As you can see, this takes care of all of the things the function does with relative ease. We cover all the code, but we are depending on something external. So this is not a real unit test.
Writing a unit test can be done similarly. There is Test::Mock::Cmd to replace the backticks or qx{} with another function. This could be done manually without that module too. Look at the module's code if you want to know how.
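The manual version is essentially a compile-time override of readpipe, the hook Perl routes backticks and qx{} through. A bare-bones sketch (this is roughly the mechanism, not Test::Mock::Cmd's actual code):

our $fake_qx = sub { return };   # swap this out per test

BEGIN {
    # must be installed before the code under test is compiled
    *CORE::GLOBAL::readpipe = sub { $fake_qx->(@_) };
}

Test::Mock::Cmd packages that up more carefully, as used below.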
use Test::More;
use Test::Output;
use Test::Exit;

# from doc, could be just 'return';
our $current_qx = sub { diag( explain( \@_ ) ); return; };
use Test::Mock::Cmd 'qx' => sub { $current_qx->(@_) };

# get the function in, I used yours verbatim ...

my $qx; # this will store the arguments and fake an empty result
stdout_is(
    sub {
        is(
            exit_code(
                sub {
                    local $current_qx = sub { $qx = \@_; return; };
                    perforce_filelist('yesterday');
                }
            ),
            1,
            "exits with 1 when there are no files"
        );
    },
    "No new files!",
    "... and it prints a message to the screen"
);

is $qx->[0], 'p4 files -e //depot/project/design/...module.sv@yesterday,@now',
    "... and calls p4 with the correct arguments";

my @return_values;
stdout_is(
    sub {
        never_exits_ok(
            sub {
                # we already tested the args to `` above,
                # so no need to capture them now
                local $current_qx = sub { return "foo\n", "bar\n", "baz\n"; };
                @return_values = perforce_filelist('today');
            },
            "does not exit when there are files"
        );
    },
    q{},
    "... and there is no output to the screen"
);

is_deeply( \@return_values, [qw/foo bar baz/],
    "... and returns a list of filenames without newlines" );

done_testing;
We can now verify directly that the correct command line was called, but we don't have to set up Perforce to actually have any files, which makes the tests run faster and makes you independent of a working Perforce installation. You can run this test on a machine that does not have Perforce installed, which is useful if this function is only a small part of your overall application and you still want to run the full test suite while working on a different part of the app.
Let's take a quick look at the second example's output.
ok 1 - exits with 1 when there are no files
ok 2 - ... and it prints a message to the screen
ok 3 - ... and calls p4 with the correct arguments
ok 4 - does not exit when there are files
ok 5 - ... and there is no output to the screen
ok 6 - ... and returns a list of filenames without newlines
1..6
As you can see it's almost the same as from the first example. I also hardly had to change the tests. Just the mocking strategy was added.
It's important to remember that tests are also code, and the same level of quality should apply to them. They act as documentation of your business logic and as a safety net for you and your fellow developers (including future-you). Clear descriptions of the business cases you are testing are essential for that.
If you want to learn more about the strategy of testing with Perl, and what not to do, I recommend watching the talk Testing Lies by Curtis Poe.
You ask:
Is this subroutine testable?
Yes, it definitely is. However, a question instantly arises: are you doing Development Driven Testing or Test Driven Development? Let me illustrate the difference.
In your current case, you have written the method before the test, whereas the test should drive the development of this function.
If you are trying to follow the basic guidance of TDD, you should write your test case first. At this stage the outcome of your unit test will be red, since the pieces it tests for are still missing.
Then you write the method with just enough bits and pieces to make it compile. Now complete the first test case with an assertion about the method you are testing. If you did it right, your test case is now green, indicating that you can check whether there is anything to refactor.
This will give you the basic cycle of TDD: red, green, refactor.
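A tiny illustration of the cycle (a hypothetical test file; only the subroutine name comes from the question):

use Test::More;

# Red: this fails as long as perforce_filelist doesn't exist yet.
ok defined &perforce_filelist, 'perforce_filelist is defined';

# Green: write the minimal sub, then assert real behavior here,
# e.g. that a date with no changes yields an empty list (mocked,
# as shown in the other answers).

# Refactor: clean the code up while the tests stay green.

done_testing;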
Summarized, you can test and assert at least two things in your method:
Asserting that @filelist is returned and is not empty
Asserting the failure case, where the subroutine prints a message and exits with 1
Also make sure that you are unit testing without external dependencies, like a file system, because involving those would be integration testing, which includes other moving parts of the system in your test.
As a final note, as with everything, experience comes through trying and learning. Always ask, at least yourself, then your business peers, to see if you are testing the right thing and if it brings any business value to test that part of the system.

Extract and Format Information from TAP Archive

What I'd like to do:
I am using Rex to remotely run tests on servers, executing them with a call to the local prove. I want to gather all the information about the test runs on the different servers in one place. To achieve this I run the tests with prove -a (and maybe also with --merge to capture STDERR) to create an archive (.tgz). I then download this archive back to the controlling server with Rex. I think this is quite a good plan so far...
My problem now is that I find a lot of hints on creating such a TAP archive, but none about how to actually read it. Sure, I could open and process it somehow with Archive::Tar, or parse it manually with TAP::Parser as suggested by Schwern. But knowing that there are formatters like TAP::Formatter::HTML or TAP::Formatter::JUnit (e.g. for Jenkins), I think there must be a way to use those tools directly on a TAP archive. When I look up the docs, I only find hints on how to use this stuff with prove to format tests while running them. But I need to use these formatters on the archive; I have already run prove remotely...
So much for the context. My question in short is: how can I use the Perl TAP tools to format TAP coming from a TAP archive produced by prove?
I am thankful for any little hints. Also if you see a problem in my approach in general.
Renée provided a working solution here: http://www.perl-community.de/bat/poard/thread/18420 (German)
use strict;
use warnings;

use TAP::Harness::Archive;
use TAP::Harness;
use TAP::Formatter::HTML;

my $formatter = TAP::Formatter::HTML->new;
my $harness   = TAP::Harness->new({ formatter => $formatter });

$formatter->really_quiet(1);
$formatter->prepare;

my $session;
my $aggregator = TAP::Harness::Archive->aggregator_from_archive({
    archive          => '/must/be/the/complete/path/to/test.tar.gz',
    parser_callbacks => {
        ALL => sub {
            $session->result( $_[0] );
        },
    },
    made_parser_callback => sub {
        $session = $formatter->open_test( $_[1], $_[0] );
    },
});

$aggregator->start;
$aggregator->stop;

$formatter->summary($aggregator);
Thanks a lot! I hope this will help some others too. It seems this knowledge is not very widespread yet.
I have made a module that wraps this solution in a nice interface: https://metacpan.org/module/Convert::TAP::Archive
So from now on you can just type this:
use Convert::TAP::Archive qw(convert_from_taparchive);

my $html = convert_from_taparchive(
    '/must/be/the/complete/path/to/test.tar.gz',
    'TAP::Formatter::HTML',
);
The problem with the output is mentioned in the docs. Please provide patches or comments if you know how to fix this (minor) issue. E.g. here: https://github.com/borisdaeppen/Convert-TAP-Archive
Renee pointed me to how Tapper does it: https://metacpan.org/source/TAPPER/Tapper-TAP-Harness-4.1.1/lib/Tapper/TAP/Harness.pm#L273
Quite some effort to read an archive though...
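If you only need the raw numbers rather than formatted output, the aggregator alone is enough. A short sketch (the archive path is a placeholder):

use TAP::Harness::Archive;

my $aggregator = TAP::Harness::Archive->aggregator_from_archive({
    archive => '/path/to/test.tar.gz',
});

# TAP::Parser::Aggregator can report totals without any formatter
printf "passed %d of %d tests\n",
    scalar $aggregator->passed, scalar $aggregator->total;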

Perl module loading - Safeguarding against: perhaps you forgot to load "Bla"?

When you run perl -e "Bla->new", you get this well-known error:
Can't locate object method "new" via package "Bla"
(perhaps you forgot to load "Bla"?)
It happened in a Perl server process the other day due to an oversight of mine. There are multiple scripts, and most of them have the proper use statements in place. But there was one script that was doing Bla->new in sub blub at line 123 while missing a use Bla at the top, and when a click hit it before any other script using Bla had been loaded by the server process: bang!
Testing the script in isolation would be the obvious way to safeguard against this particular mistake, but alas, the code depends upon a humongous environment. Do you know of another way to safeguard against this oversight?
Here's one example how PPI (despite its merits) is limited in its view on Perl:
use strict;
use HTTP::Request::Common;
my $req = GET 'http://www.example.com';
$req->headers->push_header( Bla => time );
my $au = Auweia->new;
__END__
PPI::Token::Symbol '$req'
PPI::Token::Operator '->'
PPI::Token::Word 'headers'
PPI::Token::Operator '->'
PPI::Token::Word 'push_header'
PPI::Token::Symbol '$au'
PPI::Token::Operator '='
PPI::Token::Word 'Auweia'
PPI::Token::Operator '->'
PPI::Token::Word 'new'
Setting the header and the Auweia->new assignment parse the same, so I'm not sure how you can build upon such a shaky foundation. I think the problem is that Auweia could also be a subroutine; perl.exe cannot tell until runtime.
Further Update
Okay, from @Schwern's instructive comments below I learnt that PPI is just a tokenizer, and you can build upon it if you accept its limitations.
Testing is the only answer worth the effort. If the code contains mistakes like forgetting to load a class, it probably contains other mistakes. Whatever the obstacles, make it testable. Otherwise you're patching a sieve.
That said, you have two options. You can use Class::Autouse which will try to load a module if it isn't already loaded. It's handy, but because it affects the entire process it can have unintended effects.
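Usage is a one-liner per class, and the process-wide variant mentioned above is the :superloader mode (per its documentation):

use Class::Autouse qw(Bla);           # defer loading Bla until first use
# or, hook *every* unknown class -- convenient but process-wide:
# use Class::Autouse qw(:superloader);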
Or you can use PPI to scan your code and find all the class method calls. PPI::Dumper is very handy to understand how PPI sees Perl.
use strict;
use warnings;

use PPI;
use PPI::Dumper;

my $file = shift;
my $doc  = PPI::Document->new($file);

# How PPI sees a class method call.
#   PPI::Token::Word     'Class'
#   PPI::Token::Operator '->'
#   PPI::Token::Word     'method'
$doc->find( sub {
    my($node, $class) = @_;

    # First we want a word
    return 0 unless $class->isa("PPI::Token::Word");

    # If the word is itself a method call, it's not a class name.
    return 0 if $class->method_call;
    my $class_name = $class->literal;

    # Next to it is a -> operator
    my $op = $class->snext_sibling or return 0;
    return 0 unless $op->isa("PPI::Token::Operator") and $op->content eq '->';

    # And then another word which PPI identifies as a method call.
    my $method = $op->snext_sibling or return 0;
    return 0 unless $method->isa("PPI::Token::Word") and $method->method_call;
    my $method_name = $method->literal;

    printf "%s->%s seen at %s line %d.\n",
        $class_name, $method_name, $file, $class->line_number;
});
You don't say what server environment you're running under, but from what you say it sounds like you could do with preloading all your modules in advance, before executing any individual pages. Not only would this prevent the problems you're describing (where every script has to remember to load all the modules it uses) but it would also save you memory.
In pre-forking servers (as commonly used with mod_perl and Apache) you really want to load as much of your code as possible before the server forks for the first time, so that the code is stored once in copy-on-write shared memory rather than multiple times in each child process when it is loaded on demand.
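With mod_perl that conventionally means a startup.pl pulled in from httpd.conf via PerlRequire. A sketch (the module list is whatever your scripts actually use):

# startup.pl -- compiled once in the parent, then shared copy-on-write
use strict;
use warnings;

use Bla;
use HTTP::Request::Common;
# ... every other module the scripts need ...

1;

and in httpd.conf:

PerlRequire /path/to/startup.pl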
For more information on pre-loading in Apache, see the relevant section of Practical mod_perl.

Why is it a bad idea to write configuration data in code?

Real-life case (from caff) to exemplify the short question subject:
$CONFIG{'owner'} = q{Peter Palfrader};
$CONFIG{'email'} = q{peter@palfrader.org};
$CONFIG{'keyid'} = [ qw{DE7AAF6E94C09C7F 62AF4031C82E0039} ];
$CONFIG{'keyserver'} = 'wwwkeys.de.pgp.net';
$CONFIG{'mailer-send'} = [ 'testfile' ];
Then in the code: eval `cat $config`, access %CONFIG
Provide answers that lay out the general problems, not only specific to the example.
There are many reasons to avoid configuration in code, and I go through some of them in the configuration chapter in Mastering Perl.
No configuration change should carry the risk of breaking the program. It certainly shouldn't carry the risk of breaking the compilation stage.
People shouldn't have to edit the source to get a different configuration.
People should be able to share the same application without sharing the same group of settings, rather than re-installing the application just to change the configuration.
People should be allowed to create several different configurations and run them in batches without having to edit the source.
You should be able to test your application under different settings without changing the code.
People shouldn't have to learn how to program to be able to use your tool.
You should only loosely tie your configuration data structures to the source of the information to make later architectural changes easier.
You really want an interface instead of direct access at the application level.
I sum this up in my Mastering Perl class by telling people that the first rule of programming is to create a situation where you do less work and people leave you alone. When you put configuration in code, you spend more time dealing with installation issues and responding to breakages. Unless you like that sort of thing, give people a way to change the settings without causing you more work.
$CONFIG{'unhappy_employee'} = `rm -rf /`
One major issue with this approach is that your config is not very portable. If a functionally identical tool were built in Java, loading configuration would have to be redone. If both the Perl and the Java variation used a simple key=value layout such as:
owner = "Peter Palfrader"
email = "peter@palfrader.org"
...
they could share the config.
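For instance, with the key=value layout above saved as config.ini, a plain parser such as Config::Tiny (one choice among many) reads it without executing anything:

use Config::Tiny;

my $config = Config::Tiny->read('config.ini')
    or die Config::Tiny->errstr;

# properties that appear before any [section] live under the '_' key
my $owner = $config->{_}{owner};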
Also, calling eval on the config file seems to open this system up to attack. What could a malicious person add to this config file if they wanted to wreak some havoc? Do you realize that ANY arbitrary code in your config file will be executed?
Another issue is that it's highly counter-intuitive (at least to me). I would expect a config file to be read by some config loader, not executed as a runnable piece of code. This isn't so serious but could confuse new developers who aren't used to it.
Finally, while it's highly unlikely that the implementation of constructs like q{...} will ever change, if it did change, this config mechanism might stop working.
It's a bad idea to put configuration data in compiled code, because it can't be easily changed by the user. For scripts, just make sure it's separated entirely from the rest and document it nicely.
A reason I'm surprised no one mentioned yet is testing. When config is in the code you have to write crazy, contorted tests to be able to test safely. You can end up writing tests that duplicate the code they test which makes the tests nearly useless; mostly just testing themselves, likely to drift, and difficult to maintain.
Hand in hand with testing is deployment which was mentioned. When something is easy to test, it is going to be easy (well, easier) to deploy.
The main issue here is reusability in an environment where multiple languages are possible. If your config file is in language A and you want to share the configuration with language B, you will have to do some rewriting.
This is even more complicated if you have more complex configurations (for example, the Apache config files) and are trying to figure out how to handle potential differences in data structures. If you use something like JSON or YAML, parsers in each language will know how to map the contents onto that language's own data structures.
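To sketch that point: the same hypothetical config.json deserializes into native data structures in any language with a JSON parser, Perl included:

use JSON::PP;   # core module

open my $fh, '<', 'config.json' or die $!;
my $cfg = decode_json( do { local $/; <$fh> } );

print $cfg->{keyserver};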
The one major drawback of not having the config in a programming language is that you lose the ability to set config values from dynamic data.
I agree with Tim Anderson. Somebody here conflates configuration in code with configuration not being configurable; that is only true for compiled code.
A Perl or Ruby file is read and interpreted just like a YAML or XML file with configuration data. I choose YAML because it is easier on the eye than code: grouping by test, development, staging, and production environments in code would involve more... code.
As a side note, XML contradicts the "easy on the eye" completely. I find it interesting that XML config is extensively used with compiled languages.
Reason 1. Aesthetics. While no one gets harmed by bad smell, people tend to put effort into getting rid of it.
Reason 2. Operational cost. For a team of 5 this is probably ok, but once you have developer/sysadmin separation, you must hire sysadmins who understand Perl (which is $$$), or give developers access to production system (big $$$).
And to make matters worse you won't have time (also $$$) to introduce a configuration engine when you suddenly need it.
My main problem with configuration in the many small scripts I write is that they often contain login data (username and password or auth token) for a service I use. Then later, when the script gets bigger, I start versioning it and want to upload it to GitHub.
So before every commit I need to replace my configuration with some dummy values.
$CONFIG{'user'} = 'username';
$CONFIG{'password'} = '123456';
You also have to be careful that those values never slip into your commit history at some point. This can get very annoying. Once you have gone through this one or two times, you will never again try to put configuration into code.
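One common way out (a sketch; the variable names are made up) is to keep secrets out of the file entirely and read them from the environment:

# MYAPP_USER / MYAPP_PASSWORD are hypothetical names -- pick your own
my $user     = $ENV{MYAPP_USER}     // die "MYAPP_USER is not set\n";
my $password = $ENV{MYAPP_PASSWORD} // die "MYAPP_PASSWORD is not set\n";

That way the script itself can be committed as-is.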
Excuse the long code listing. Below is a handy Conf.pm module that I have used in many systems; it allows you to specify different variables for production, staging, and dev environments. I build my programs to either accept the environment parameter on the command line, or I store this file outside of the source control tree so that it never gets overwritten.
The AUTOLOAD provides automatic methods for variable retrieval.
# Instructions:
#   use Conf;
#   my $c = Conf->new("production");
#   print $c->root_dir;
#   print $c->log_dir;

package Conf;
use strict;

our $AUTOLOAD;

my $default_environment = "production";

my @valid_environments = qw(
    development
    production
);

#######################################################################################
# You might need to change this.
sub set_vars {
    my ($self) = @_;

    $self->{"access_token"} = 'asdafsifhefh';

    if ( $self->env eq "development" ) {
        $self->{"root_dir"}    = "/Users/patrickcollins/Documents/workspace/SysG_perl";
        $self->{"server_base"} = "http://localhost:3000";
    }
    elsif ( $self->env eq "production" ) {
        $self->{"root_dir"}    = "/mnt/SysG-production/current/lib";
        $self->{"server_base"} = "http://api.SysG.com";
        $self->{"log_dir"}     = "/mnt/SysG-production/current/log";
    }
    else {
        die "No environment defined\n";
    }

    #######################################################################################
    # You shouldn't need to configure this.

    # More dirs. Move these into the dev/prod sections if they're different per env.
    my $r = $self->{'root_dir'};
    my $b = $self->{'server_base'};

    $self->{"working_dir"} ||= "$r/working";
    $self->{"bin_dir"}     ||= "$r/bin";
    $self->{"log_dir"}     ||= "$r/log";

    # Other URLs. Move these into the dev/prod sections if they're different per env.
    $self->{"new_contract_url"}  = "$b/SysG-training-center/v1/contract/new";
    $self->{"new_documents_url"} = "$b/SysG-training-center/v1/documents/new";
}

#######################################################################################
# Code, don't change below here.
sub new {
    my ($class, $env) = @_;

    my $self = {};
    bless( $self, $class );

    if ($env) {
        $self->env($env);
    }
    else {
        $self->env($default_environment);
    }

    $self->set_vars;
    return $self;
}

sub AUTOLOAD {
    my ($self, $val) = @_;

    my $type = ref($self) || die "$self is not an object";

    my $field = $AUTOLOAD;
    $field =~ s/.*://;
    #print "field: $field\n";

    unless ( exists $self->{$field} || $field =~ /DESTROY/ ) {
        die "ERROR: {$field} does not exist in object/class $type\n";
    }

    $self->{$field} = $val if ($val);
    return $self->{$field};
}

sub env {
    my ($self, $in) = @_;

    if ($in) {
        # grep needs an actual match here, not just a true first argument
        die("Invalid environment $in")
            unless grep { $_ eq $in } @valid_environments;
        $self->{"_env"} = $in;
    }

    return $self->{"_env"};
}

1;

What's the best method to generate Multi-Page PDFs with Perl and PDF::API2?

I have been using the PDF::API2 module to program a PDF. I work at a warehousing company and we are trying to switch from text packing slips to PDF packing slips. Packing slips have a list of the items needed on a single order. It works great, but I have run into a problem. Currently my program generates a single-page PDF, and it was all working fine. But now I realize that the PDF will need to be multiple pages if there are more than 30 items in an order. I was trying to think of an easy(ish) way to do that, but couldn't come up with one. The only thing I could think of involves creating another page and having logic that redefines the coordinates of the line items if there are multiple pages. So I was trying to see if there was a different method or something I was missing that could help, but I wasn't really finding anything on CPAN.
Basically, I need to create a single-page PDF unless there are more than 30 items; then it will need to be multiple pages.
I hope that made sense and any help at all would be greatly appreciated as I am relatively new to programming.
Since you already have the code working for one-page PDFs, changing it to work for multi-page PDFs shouldn't be too hard.
Try something like this:
use PDF::API2;

sub create_packing_list_pdf {
    my @items = @_;

    my $pdf  = PDF::API2->new();
    my $page = _add_pdf_page($pdf);

    my $max_items_per_page = 30;
    my $item_pos = 0;
    while (my $item = shift(@items)) {
        $item_pos++;

        # Create a new page, if needed
        if ($item_pos > $max_items_per_page) {
            $page = _add_pdf_page($pdf);
            $item_pos = 1;
        }

        # Add the item at the appropriate height for that position
        # (you'll need to declare $base_height and $line_height)
        my $y = $base_height - ($item_pos - 1) * $line_height;

        # Your code to display the line here, using $y as needed
        # to get the right coordinates
    }

    return $pdf;
}

sub _add_pdf_page {
    my $pdf = shift();

    my $page = $pdf->page();

    # Your code to display the page template here.
    #
    # Note: You can use a different template for additional pages by
    # looking at e.g. $pdf->pages(), which returns the page count.
    #
    # If you need to include a "Page 1 of 2", you can pass the total
    # number of pages in as an argument:
    #     int(scalar @items / $max_items_per_page) + 1

    return $page;
}
The main thing is to split up the page template from the line items so you can easily start a new page without having to duplicate code.
PDF::API2 is low-level. It doesn't have most of what you would consider necessary for a document, things like margins, blocks, and paragraphs. Because of this, I'm afraid you're going to have to do things the hard way. You may want to look at PDF::API2::Simple. It might meet your criteria, and it's simple to use.
I use PDF::FromHTML for some similar work. It seems to be a reasonable choice; I guess I'm just not too big on positioning by hand.
The simplest method is to use PDF::API2::Simple:
use PDF::API2::Simple;

my @content;   # fill this with the lines to print

my $pdf = PDF::API2::Simple->new( file => $name );
$pdf->add_font('Courier');
$pdf->add_page();

foreach my $line (@content) {
    $pdf->text( $line, autoflow => 'on' );
}

$pdf->save();