What is a better way to stream audio with Perl CGI? - perl

Stackoverflow:
For a CS assignment I am using the following code to stream audio. Now I would like to add the ability to stream files successively, as in a playlist. How can I modify my code to accommodate this? I would like to have a text file of filenames that my script passes through, streaming each in sequence. Is this possible? I've spent a good bit of time googling but have found few relevant links.
Thanks,
CB
#!/usr/bin/perl
use strict;
use CGI::Carp qw/fatalsToBrowser/;
open(OGGFILE, "../HW1/OGG/ACDC.ogg") or die "open error";
my $buffer;
print "Content-type: audio/ogg\n\n";
binmode STDOUT;
while ( read(OGGFILE, $buffer, 16384) ) {
    print $buffer;
}
close(OGGFILE);
Update:
I've since modified my code to create a playlist and it seems to be working well. However, for this to work, I am storing my music files in my html folder, available for all to see. Is it a simple matter of changing file permissions to prevent direct linking and visibility? Is it possible for me to modify this program so that it streams the files from a folder outside of /html?
Thanks
CB
#!/usr/bin/perl
use strict;
use CGI qw/:standard/;
use CGI::Pretty qw/:standard/;
use CGI::Carp qw/fatalsToBrowser/;
print header(-type=>'audio/x-mpegurl',-expires=>'now');
printf "#EXTM3U\n";
printf "#EXTINF:-1,Some ACDC song\n";
printf "http://www.mywebserver.com/MP3/ACDC.ogg\n";
printf "#EXTINF:-1,Some Pink Floyd Song\n";
printf "http://www.mywebserver.com/MP3/PinkFloyd.ogg\n";

For the players I've dealt with, I had to provide a specially formatted playlist that listed the sequence of audio files. The player then requested the audio files as it needed them. You'll have one program to serve that playlist, and another to serve individual audio files.
As for your current program, I'd get the Perl program completely out of the way. Just let the web server handle it, which will be much faster. Your program doesn't do anything the web server doesn't already do for you, so don't make it do the extra work. :)
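A minimal sketch of that second program, assuming the files live in a directory outside the document root; the directory, script name, and song parameter here are invented for illustration:
#!/usr/bin/perl
# stream.cgi - serve one Ogg file from a directory outside the web root
use strict;
use warnings;
use CGI qw/param/;

my $music_dir = '/home/cb/music';   # hypothetical private directory
my $song      = param('song') // '';

# Allow only a plain file name, so nobody can request ../../etc/passwd
die "bad song name" unless $song =~ /^[\w-]+\.ogg$/;

open my $fh, '<', "$music_dir/$song" or die "open error: $!";
binmode $fh;
binmode STDOUT;

print "Content-type: audio/ogg\n\n";
my $buffer;
print $buffer while read $fh, $buffer, 16384;
close $fh;
The playlist entries would then point at the script, e.g. http://www.mywebserver.com/cgi-bin/stream.cgi?song=ACDC.ogg, and the music folder itself never has to be web-visible.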

Related

CGI/Perl script creating a customized signature file

I have this working somewhat.
I have a cgi file that has the following code:
#!/usr/bin/perl
use CGI;
$cgi = new CGI;
open (IMAGE, "ts.jpg");
$size = -s "ts.jpg";
read IMAGE, $data, $size;
close (IMAGE);
print $cgi->header(-type=>'image/jpeg'), $data;
exit;
This displays my image file correctly.
However, I want a user to be able to add 2 lines of text over the image through a web form to generate a new jpeg each time. Here is the URL: http://elearning.cpma.ca/signature.html
What am I missing in my cgi file that would allow me to publish to the screen a new jpeg with the 2 lines of text appearing on it when I click the "Add Text" button?
Any assistance would be really appreciated.
You'll need an image processing library. You'll see lots of recommendations for GD or ImageMagick, but I think I'd use Imager, as it's newer and a little easier to use.
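Here's a rough sketch of the idea with Imager, assuming a TrueType font file is available on the server; the font path, form parameter names, and coordinates are invented for illustration:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use Imager;

my $cgi   = CGI->new;
my $line1 = $cgi->param('line1') // '';
my $line2 = $cgi->param('line2') // '';

my $img = Imager->new;
$img->read(file => 'ts.jpg') or die $img->errstr;

# hypothetical font path - point this at a real .ttf on your system
my $font = Imager::Font->new(file => '/usr/share/fonts/DejaVuSans.ttf')
    or die Imager->errstr;

$img->string(font => $font, string => $line1, x => 10, y => 30,
             size => 18, color => 'black', aa => 1) or die $img->errstr;
$img->string(font => $font, string => $line2, x => 10, y => 55,
             size => 18, color => 'black', aa => 1) or die $img->errstr;

$img->write(data => \my $jpeg, type => 'jpeg') or die $img->errstr;
print $cgi->header(-type => 'image/jpeg'), $jpeg;
Point the form's action at this script and the browser gets a freshly rendered JPEG on every submit.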
A few general suggestions for improvements to your code.
Always use strict and warnings in your code.
Declare variables with my (my $cgi = ...).
The new CGI (indirect object) syntax is potentially problematic. Use CGI->new instead.
Use three-arg open and lexical filehandles (open my $image_fh, '<', 'ts.jpg').
Always check the return code from open (open my $image_fh, '<', 'ts.jpg' or die $!).
I was able to create what I wanted by following a README provided by alvarotrigo on GitHub (TextPainter).
I was required to make some minor code changes to make it work properly; however, it was very easy to implement.
No CGI required. See SignatureFile for the final outcome of my image.
Thank you to all who responded to my issue.

How to listen to URL routes in perl

I'm starting my first Perl project and wanted to know how to listen on different endpoints, e.g. example.com/home. (How do you load an HTML page when someone visits this home route?)
Just a note that I'm not interested in using a framework for this particular project. Thanks
Well, I guess you could have a CGI program that interprets the path and takes the appropriate action. You could then combine that with a mod_rewrite rule that diverts all requests into that program.
But it's all looking a bit kludgy and a framework would be a much better solution.
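To make that concrete, here is a minimal sketch of such a dispatcher; the route names are invented, and the commented RewriteRule shows one hypothetical way to divert requests into it:
#!/usr/bin/perl
# router.cgi - dispatch on the request path without a framework
# A .htaccess rule could divert pretty URLs here, e.g.:
#   RewriteEngine On
#   RewriteRule ^(home|about)$ /cgi-bin/router.cgi/$1 [PT]
use strict;
use warnings;
use CGI qw/header/;

my %routes = (
    '/home'  => sub { print header(), "<h1>Home</h1>" },
    '/about' => sub { print header(), "<h1>About</h1>" },
);

# PATH_INFO holds whatever follows the script name in the URL
my $path    = $ENV{PATH_INFO} || '/home';
my $handler = $routes{$path}
    || sub { print header(-status => '404 Not Found'), "Not found" };
$handler->();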
The simplest way to talk to a server is CGI.
This is not Perl specific, but Perl was commonly used for it. It is very slow, but simple.
Here is a small demo. You put this in the cgi-bin directory of your server, go to http://www.example.com/cgi-bin/cgidemo.cgi, and back pops the contents of the Perl @INC array.
To hook it up to /home you could alias it in your .htaccess file.
Of course, this is all ancient and slow stuff and has been far surpassed and sped up by fastcgi, mod_perl, and lots of other stuff. I like the Mojolicous framework myself.
#!/usr/bin/perl
# cgidemo.cgi - minimal CGI program
use strict;
use warnings;
# Headers
print "Content-type: text/plain\n";
# Blank line after header
print "\n";
# Body
print "Perl Include Path:\n";
print join("\n", @INC), "\n";

Finding the standard out for a perl program

I'm redirecting standard out for a perl program. Example:
perl run_program.pl > /log/run_program.log
Is there a way to know what standard out points to? So in this case I'm looking to get the value '/log/run_program.log'.
If it's not possible is there another/better way to get the same result?
Thanks in advance!
EDIT: The reason I'm not setting STDOUT in the program is because I'm calling a bunch of .pm files that have print statements whose output I want to go to STDOUT, without having to pass the file to them.
On my system, you can use
readlink("/proc/$$/fd/1")
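For example (this relies on the Linux /proc filesystem, so it won't work everywhere):
#!/usr/bin/perl
use strict;
use warnings;

# fd 1 is STDOUT; on Linux, /proc/$$/fd/1 is a symlink to wherever it points
my $stdout_target = readlink "/proc/$$/fd/1";
warn "STDOUT is going to: $stdout_target\n" if defined $stdout_target;
Run as perl run_program.pl > /log/run_program.log, that reports /log/run_program.log on STDERR.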
EDIT: The reason I'm not setting STDOUT in the program is because I'm calling a bunch of .pm files that have print statements whose output I want to go to STDOUT, without having to pass the file to them.
Just to let you know, you might be able to use the select function to change the default filehandle for output:
use strict;
use warnings;
use autodie;
open my $output_fd, ">", "/log/run_program.log";
my $old_default_fd = select( $output_fd );
print "I'm now going into /log/run_program.log\n";
select($old_default_fd); # Restore the default when you no longer need it
This may work with most of your Perl modules. Just hope that they're not doing something stupid like:
print STDOUT "Ha, ha. I'm still going to STDOUT.\n";
I hate it when Perl modules print stuff.
<soapbox>
To you Perl Module writers:
Perl modules should not be printing (unless that's their main purpose). You should instead return what you want to print and let the caller decide what to do with the output.
</soapbox>
For the first part of your question, no. There's no way for the perl program to know where STDOUT is directed to.
The redirection happens external to the program, and is "wired up" before the perl process even starts. STDOUT could be pointed to a device, a file, or another process (a pipe).
The whole purpose of redirection from stdout to a file is to adapt a program which typically writes to stdout and redirect it to a file. The OS doesn't give you the name of the file, because it figures your program is too stupid to know what to do with a file name.
So your best bet is to get it as my $file_name = shift; and open it yourself. (A shift in the mainline pulls from @ARGV.)
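A sketch of that approach (the usage message and variable names here are mine, not your program's):
#!/usr/bin/perl
use strict;
use warnings;

my $file_name = shift or die "usage: $0 logfile\n";  # shift pulls from @ARGV
open my $log_fh, '>', $file_name or die "Can't open $file_name: $!";
select $log_fh;  # unqualified print now defaults to the log

print "This lands in $file_name, and the program knows the name.\n";
You would invoke it as perl run_program.pl /log/run_program.log instead of using shell redirection.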
Give these ideas a chance:
...
my $log_path = "/log/run_program.log"; # or using $0 in some manner
open my $log_handler, ">", $log_path or die;
...
Now you could code a myprint subroutine that calls print $log_handler and use it throughout the whole program, or better, having a look at OVERRIDING CORE FUNCTIONS, you could redefine print yourself like this:
...
use subs 'print';
sub print { ... }  # redefine here
...
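A hedged sketch of how that redefinition could be filled in; note it only catches unqualified print calls compiled in the same package, and CORE::print reaches the real built-in:
use strict;
use warnings;
use subs 'print';

open my $log_handler, '>', '/log/run_program.log' or die $!;

# our print forwards everything to the log via the real built-in
sub print { CORE::print {$log_handler} @_ }

print "Redirected without touching the call sites.\n";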

Can I rate a song in iTunes (on a Mac) using Perl?

I've tried searching CPAN. I found Mac::iTunes, but not a way to assign a rating to a particular track.
If you're not excited by Mac::AppleScript, which just takes a big blob of AppleScript text and runs it, you might prefer Mac::AppleScript::Glue, which provides a more object-oriented interface. Here's the equivalent to Iamamac's sample code:
#!/usr/bin/env perl
use Modern::Perl;
use Mac::AppleScript::Glue;
use Data::Dumper;
my $itunes = Mac::AppleScript::Glue::Application->new('iTunes');
# might crash if iTunes isn't playing anything yet
my $track = $itunes->current_track;
# for expository purposes, let's see what we're dealing with
say Dumper \$itunes, \$track;
say $track->rating; # initially undef
$track->set(rating => 100);
say $track->rating; # should print 100
All that module does is build a big blob of AppleScript, run it, and then break it all apart into another AppleScript expression that it can use on your next command. You can see that in the _ref value of the track object when you run the above script. Because all it's doing is pasting and parsing AppleScript, this module won't be any faster than any other AppleScript-based approach, but it does allow you to intersperse other Perl commands within your script, and it keeps your code looking a little more like Perl, for what that's worth.
You can write AppleScript to fully control iTunes, and there is a Perl binding Mac::AppleScript.
EDIT Code Sample:
use Mac::AppleScript qw(RunAppleScript);
my $r = 100;    # iTunes ratings run from 0 to 100
RunAppleScript(qq(tell application "iTunes" \n set rating of current track to $r \n end tell));
Have a look at itunes-perl, it seems to be able to rate tracks.

How do I serve a large file for download with Perl?

I need to serve a large file (500+ MB) for download from a location that is not accessible to the web server. I found the question Serving large files with PHP, which is identical to my situation, but I'm using Perl instead of PHP.
I tried simply printing the file line by line, but this does not cause the browser to prompt for download before grabbing the entire file:
use Tie::File;
open my $fh, '<', '/path/to/file.txt';
tie my @file, 'Tie::File', $fh
or die 'Could not open file: $!';
my $size_in_bytes = -s $fh;
print "Content-type: text/plain\n";
print "Content-Length: $size_in_bytes\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";
for my $line (@file) {
    print $line;
}
untie @file;
close $fh;
exit;
Does Perl have an equivalent to PHP's readfile() function (as suggested with PHP) or is there a way to accomplish what I'm trying to do here?
If you just want to slurp input to output, this should do the trick.
use Carp ();
{ #Lexical For FileHandle and $/
open my $fh, '<' , '/path/to/file.txt' or Carp::croak("File Open Failed");
local $/ = undef;
print scalar <$fh>;
close $fh or Carp::carp("File Close Failed");
}
I guess this is in response to "Does Perl have a PHP readfile equivalent", and my answer would be "it doesn't really need one".
I've used PHP's manual file IO controls and they're a pain; Perl's are just so easy to use by comparison that shelling out for a one-size-fits-all function seems like overkill.
Also, you might want to look at X-SendFile support, and basically send a header to your webserver to tell it what file to send: http://john.guen.in/past/2007/4/17/send_files_faster_with_xsendfile/ ( assuming of course it has permissions enough to access the file, but the file is just NOT normally accessible via a standard URI )
Edit: Noted, it is better to do it in a loop. I tested the above code with a hard drive and it does implicitly try to store the whole thing in an invisible temporary variable, eating all your RAM.
Alternative using blocks
The following improved code reads the given file in blocks of 8192 chars, which is much more memory efficient, and gets throughput respectably comparable with my disk's raw read rate. (I also pointed it at /dev/full for fits and giggles and got a healthy 500 MB/s throughput, and it didn't eat all my RAM, so that must be good.)
{
open my $fh , '<', '/dev/sda' ;
local $/ = \8192; # this tells IO to use 8192 char chunks.
print $_ while defined ( $_ = scalar <$fh> );
close $fh;
}
Applying jrockway's suggestions
{
open my $fh , '<', '/dev/sda5' ;
print $_ while ( sysread $fh, $_ , 8192 );
close $fh;
}
This literally doubles performance, and in some cases gets me better throughput than dd does O_o.
The readline function is called readline (and can also be written as
<>).
I'm not sure what problem you're having. Perhaps it's that for loops
aren't lazily evaluated (they're not). Or perhaps Tie::File is
screwing something up? Anyway, the idiomatic Perl for reading a file
a line at a time is:
open my $fh, '<', $filename or die ...;
while(my $line = <$fh>){
    # process $line
}
No need to use Tie::File.
Finally, you should not be handling this sort of thing yourself. This
is a job for a web framework. If you were using
Catalyst (or
HTTP::Engine), you would
just say:
open my $fh, '<', $filename ...
$c->res->body( $fh );
and the framework would automatically serve the data in the file
efficiently. (Using stdio via readline is not a good idea here, it's
better to read the file in blocks from the disk. But who cares, it's
abstracted!)
You could use my Sys::Sendfile module. It should be highly efficient (as it uses sendfile under the hood), but it's not entirely portable (only Linux, FreeBSD and Solaris are currently supported).
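A rough sketch of what using it from a CGI script might look like; whether sendfile accepts STDOUT depends on the OS and on what kind of descriptor the web server hands the script, so treat this as illustrative only:
#!/usr/bin/perl
use strict;
use warnings;
use Sys::Sendfile;

$| = 1;  # flush headers out of the stdio buffer before bypassing it

my $path = '/path/to/file.txt';
open my $fh, '<', $path or die "Can't open $path: $!";
my $size = -s $fh;

print "Content-Type: application/octet-stream\r\n";
print "Content-Length: $size\r\n";
print "Content-Disposition: attachment; filename=file.txt\r\n\r\n";

# the kernel copies the file straight to the output descriptor
sendfile(\*STDOUT, $fh, $size) or die "sendfile failed: $!";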
When you say "this does not cause the browser to prompt for download" -- what's "the browser"?
Different browsers behave differently, and IE is particularly wilful: it will ignore headers and decide for itself what to do based on reading the first few KB of the file.
In other words, I think your problem may be at the client end, not the server end.
Try lying to "the browser" and telling it the file is of type application/octet-stream. Or why not just zip the file, especially as it's so huge.
Don't use for/foreach (<$input>) because it reads the whole file at once and then iterates over it. Use while (<$input>) instead. The sysread solution is good, but the sendfile is the best performance-wise.
Answering the (original) question ("Does Perl have an equivalent to PHP's readline() function ... ?"), the answer is "the angle bracket syntax":
open my $fh, '<', '/path/to/file.txt';
while (my $line = <$fh>) {
    print $line;
}
Getting the content-length with this method isn't necessarily easy, though, so I'd recommend staying with Tie::File.
NOTE
Using:
for my $line (<$filehandle>) { ... }
(as I originally wrote) copies the contents of the file to a list and iterates over that. Using
while (my $line = <$filehandle>) { ... }
does not. When dealing with small files the difference isn't significant, but when dealing with large files it definitely can be.
Answering the (updated) question ("Does Perl have an equivalent to PHP's readfile() function ... ?"), the answer is slurping. There are a couple of syntaxes, but Perl6::Slurp seems to be the current module of choice.
The implied question ("why doesn't the browser prompt for download before grabbing the entire file?") has absolutely nothing to do with how you're reading in the file, and everything to do with what the browser thinks is good form. I would guess that the browser sees the mime-type and decides it knows how to display plain text.
Looking more closely at the Content-Disposition problem, I remember having similar trouble with IE ignoring Content-Disposition. Unfortunately I can't remember the workaround. IE has a long history of problems here (old page, refers to IE 5.0, 5.5 and 6.0). For clarification, however, I would like to know:
What kind of link are you using to point to this big file (i.e., are you using a normal <a href="perl_script.cgi?filename.txt"> link, or are you using JavaScript of some kind)?
What system are you using to actually serve the file? For instance, does the webserver make its own connection to the other computer (the one without a webserver), copy the file over, and then send it to the end user, or does the user make the connection directly to the computer without a webserver?
In the original question you wrote "this does not cause the browser to prompt for download before grabbing the entire file" and in a comment you wrote "I still don't get a download prompt for the file until the whole thing is downloaded." Does this mean that the file gets displayed in the browser (since it's just text), that after the browser has downloaded the entire file you get a "where do you want to save this file" prompt, or something else?
I have a feeling that there is a chance the HTTP headers are getting stripped out at some point or that a Cache-control header is getting added (which apparently can cause trouble).
I've successfully done it by telling the browser it was of type application/octet-stream instead of type text/plain. Apparently most browsers prefer to display text/plain inline instead of giving the user a download dialog option.
It's technically lying to the browser, but it does the job.
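In header terms, the change is just one line (a sketch):
# claim a generic binary type so the browser offers a download dialog
print "Content-Type: application/octet-stream\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";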
The most efficient way to serve a large file for download depends on the web server you use.
In addition to @Kent Fredric's X-Sendfile suggestion:
File Downloads Done Right has some links that describe how to do it for Apache, lighttpd (mod_secdownload: security via URL generation), and nginx. There are examples in PHP, Ruby (Rails), and Python which can be adapted for Perl.
Basically it boils down to:
Configure paths, and permissions for your web-server.
Generate valid headers for the redirect in your Perl app (Content-Type, Content-Disposition, Content-length?, X-Sendfile or X-Accel-Redirect, etc).
There are probably CPAN modules and web-framework plugins that do exactly that; e.g., @Leon Timmermans mentioned Sys::Sendfile in his answer.
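For instance, with Apache plus mod_xsendfile the Perl side reduces to emitting headers; this sketch assumes the module is installed and the directory is whitelisted in the Apache configuration:
#!/usr/bin/perl
# the server, not Perl, streams the file named in X-Sendfile
use strict;
use warnings;

print "Content-Type: application/octet-stream\r\n";
print "Content-Disposition: attachment; filename=big_file.bin\r\n";
print "X-Sendfile: /outside/docroot/big_file.bin\r\n\r\n";
# no body needed: Apache replaces the response body with the file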