bash cgi won't return image - perl

We have a monitoring system producing RRD databases. I am looking for the most lightweight way of creating graphs from these RRD files for our HTML pages, so I don't want to store them in files. I am trying to create a simple bash CGI script that outputs image data, so I can do something like this:
<img src="/cgi-bin/graph.cgi?param1=abc"></img>
First of all, I am trying to create a simple CGI script that will send me a PNG image. This doesn't work:
#!/bin/bash
echo -e "Content-type: image/png\n\n"
cat image.png
But when I rewrite this in Perl, it does work:
#!/usr/bin/perl
print "Content-type: image/png\n\n";
open(IMG, "image.png");
print while <IMG>;
close(IMG);
exit 0;
What is the difference? I would really like to do this in BASH. Thank you.

Without the -n switch, echo outputs a third trailing newline, so it should be
echo -ne "Content-type: image/png\n\n"
or
echo -e "Content-type: image/png\n"
From man echo:
    -n     do not output the trailing newline
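Putting the fix together, the whole script would look like this (a sketch, assuming image.png sits next to the script as in the question):
#!/bin/bash
# echo appends its own trailing newline, so a single "\n" here produces the
# required blank line between the CGI header and the body
echo -e "Content-type: image/png\n"
cat image.png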

Related

How can I store Curl output into a file using Perl?

I have a directory with lots of .wav files. I read the data of each file through a curl command and store it in different files.
This is not the complete script, just part of it:
@command = '--request POST --data-binary "@Raajpal_long.wav"
"https://xxxx.xx.xxxx:xxxx/SpeakerId
action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16"';
$stdout = system("curl @command");
When I run the Perl script, it gives this output in the command-line window:
{"status": 0, "processing_time": 96.0, "enrollment_audio_time": 131.10000610351562, "matches": [{"speaker": "sw", "identification_score": 252.54136657714844}]}
I want to store this output into a file.
I used:
open (FILE, ">1.txt") or die "Unable to open 1.txt";
$stdout=system("curl #command");
print FILE $stdout;
It saves only zero (0).
Can anyone tell me how to solve this?
You're already shelling out to curl to make the request; it would be cleaner to just use curl's -o/--output option to write to a file instead of stdout.
-o, --output <file>
    Write output to <file> instead of stdout. If you are using {} or []
    to fetch multiple documents, you can use '#' followed by a number in
    the <file> specifier. That variable will be replaced with the current
    string for the URL being fetched. Like in:
        curl http://{one,two}.example.com -o "file_#1.txt"
    or use several variables like:
        curl http://{site,host}.host[1-5].com -o "#1_#2"
    You may use this option as many times as the number of URLs you have.
    For example, if you specify two URLs on the same command line, you can
    use it like this:
        curl -o aa example.com -o bb example.net
    and the order of the -o options and the URLs doesn't matter, just that
    the first -o is for the first URL and so on, so the above command line
    can also be written as
        curl example.com example.net -o aa -o bb
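Applied to the question's script, that could look something like this sketch (the placeholder host and the file name 1.txt are taken from the question; I've assumed a ? joins the path and the query string, which the line wrap in the question appears to have eaten):
# let curl write the response body straight to 1.txt via --output,
# so there is nothing to capture in Perl at all; the list form of
# system also skips the shell and its quoting rules
system('curl', '--request', 'POST',
       '--data-binary', '@Raajpal_long.wav',
       '--output', '1.txt',
       'https://xxxx.xx.xxxx:xxxx/SpeakerId?action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16') == 0
    or die "curl failed: $?";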
You can't use system to capture output; you can use backticks (``) in place of system.
Something like:
my @command = '--request POST --data-binary "@Raajpal_long.wav"
"https://services.govivace.com:49162/SpeakerId
action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16"';
my $result = `curl @command`;
if ( $? == -1 ) {
    print "\n Curl Command failed: $!\n";
} elsif ( $? == 0 ) {
    print "$result\n";
}
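If you then want that captured output in a file, as in the original post, a minimal sketch using the three-argument open:
# write the captured curl output to 1.txt (the file name from the question)
open(my $fh, '>', '1.txt') or die "Unable to open 1.txt: $!";
print $fh $result;
close($fh);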

What is a perl one liner to replace a substring in a string?

Suppose you've got this C shell script:
setenv MYVAR "-l os="\""redhat4.*"\"" -p -100"
setenv MYVAR `perl -pe "<perl>"`
Replace <perl> with code that will either replace "-p -100" with "-p -200" in MYVAR, or add it if it doesn't exist, using a one-liner if possible.
The title doesn't quite correspond to the question body, but I think it may be useful if someone posts an answer to the title question. So here is the Perl one-liner:
echo "my_string" | perl -pe 's/my/your/g'
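For the question as actually asked (replace "-p -100" with "-p -200", or append it when missing), one possible sketch exploits the fact that s/// returns the number of substitutions made:
# if the first substitution finds nothing, the second appends before the newline
echo "$MYVAR" | perl -pe 's/-p -100/-p -200/ or s/$/ -p -200/'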
What you want will look something like
perl -e' \
    use Getopt::Long qw( :config posix_default ); \
    use String::ShellQuote; \
    GetOptions(\my %opts, "l=s", "p=s") or die; \
    my @opts; \
    push @opts, "-l", $opts{l} if defined($opts{l}); \
    push @opts, "-p", "-200"; \
    print(shell_quote(@opts)); \
' -- $MYVAR
First, you need to parse the command line. That requires knowing the format of the arguments of the application for which they are destined.
For example, -n is an option in the following:
perl -n -e ...
Yet -n isn't an option in the following:
perl -e -n ...
Above, I used Getopt::Long in POSIX mode. You may need to adjust the settings or use an entirely different parser.
Second, you need to produce csh literals.
I've had bad experiences trying to work around csh's defective quoting, so I'll leave those details to you. Above, I used String::ShellQuote's shell_quote which produces sh (rather than csh) literals. You'll need to adjust.
Of course, once you've got this far, you need to get the result back into the environment variable unmangled. I don't know if that's possible in csh. Again, I leave the csh quoting to you.

Perl deleting "blank" lines from a csv file

I'm looking to delete blank lines in a CSV file, using Perl.
I'm not too sure how to do this, as these lines aren't exactly "blank" (they're just a bunch of commas).
I'd also like to save the output as a file of the same name, overwriting the original.
How could I go about doing this?
edit: I can't use modules or any source code due to network restrictions...
You can do this using a simple Perl one-liner:
perl -i -ne 'print unless /^[,\s]*$/' <filename>
The -n flag wraps this loop around your program:
while (<>) {
    print unless /^[,\s]*$/;
}
and the -i flag means in-place: it modifies your input file.
Note: If you are worried about losing your data with -i, you can specify -i.bak and perl will automatically write the original file to your <filename>.bak
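For example (data.csv is a hypothetical file name):
# rewrites data.csv in place, keeping the original as data.csv.bak
perl -i.bak -ne 'print unless /^[,\s]*$/' data.csv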
More of a command-line hack:
perl -i -ne 'print if /[^,\r\n]/' file.csv
If you want to put it inside a shell script, you can do this:
#!/bin/sh
perl -i -n -e 'print $_ unless ($_ =~ /^,+$/);' "$@"

help porting unix commands to perl script

I am getting some perl compile errors trying to convert these unix commands to perl.
The use of single quotes and double quotes is throwing me off (see below: my $curlcmd).
Here's the working unix commands executed in order:
export CERT=`/dev/bin/util --show dev.testaccnt | awk '{print $2}'`
/usr/bin/curl -c /home/foobar/cookee.txt --certify /dev/key.crt \
--header "FooBar-Util:'${CERT}'" \
https://devhost.foobar.com:4443/fs/workflow/data/source/productname?val=Summ
I want to do the same within Perl:
#Build cmd in perl
my $cookie='/home/foobar/cookee.txt';
my $certkey='/dev/key.crt';
my $fsProxyHostPort='devhost.foobar.com:4443';
my $fsPath='workflow/data/source/productname';
my $fsProxyOperation='Summ';
my $fsProxyURL="https://$fsProxyHostPort/fs/$fsPath?val=$fsProxyOperation";
#Get cert
my $cert=qx(/dev/bin/pass-util --show foobar.dev.testaccnt | awk '{print \$2}');
Here's where I am having trouble executing it:
my $curlcmd = qx(/usr/bin/curl -c $cookie --certify $certkey --header "FooBar-Util:'${" . $cert . "}'". $fsProxyURL);
Can someone show me how to setup these commands in Perl correctly?
In the shell script, you have (in part):
--header "FooBar-Util:'${CERT}'"
This generates something like:
--header FooBar-Util:'data-from-certificate'
where the curl command gets to see those single quotes. To get the same result in Perl, you will need:
my $header = "FooBar-Util:'$cert'";
my $out = qx(/usr/bin/curl -c $cookie --certify $certkey --header $header $fsProxyURL);
Changes:
Lost the ${ ... } notation.
Lost the concatenation operations.
In situations where you have problems seeing the argument list sent to a command, I recommend using a program analogous to the shell echo command, but which lists each argument on its own line, rather than as a space-separated set of arguments on a single line. I call my version of this al for 'argument list'. If you test your commands (for example, the shell version) by prefixing the whole command line with al, you get to see the arguments that curl would see. You can then do the same in Perl to compare the arguments curl sees at the shell with the ones given it by Perl. Then you can fix the problems, typically much more easily.
For debugging with al:
my @lines = qx(al /usr/bin/curl -c $cookie --certify $certkey --header $header $fsProxyURL);
foreach my $line (@lines) { print "$line"; }
If you want to write al in Perl:
#!/usr/bin/env perl
foreach my $arg (@ARGV) { print "$arg\n"; }
Adventures in Validating an Answer
Fortunately, I usually check what I write as answers - and what is written above is mostly accurate, except for one detail; Perl manages to invoke the shell on the command, and in doing so, the shell cleans out the single-quotes:
my $cert = 'certificate-info';
my $fsProxyURL = 'https://www.example.com/fsProxy';
my $cookie = 'cookie';
my $certkey = 'cert-key';
my $header = "FooBar-Util:'$cert'";
#my @out = qx(al /usr/bin/curl -c $cookie --certify $certkey --header $header $fsProxyURL);
my @cmdargs = ('al', '/usr/bin/curl', '-c', $cookie, '--certify', $certkey, '--header', $header, $fsProxyURL);
print "System:\n";
system @cmdargs;
print "\nQX command:\n";
my @lines = qx(@cmdargs);
foreach my $line (@lines) { print "$line"; }
This yields:
System:
/usr/bin/curl
-c
cookie
--certify
cert-key
--header
FooBar-Util:'certificate-info'
https://www.example.com/fsProxy
QX command:
/usr/bin/curl
-c
cookie
--certify
cert-key
--header
FooBar-Util:certificate-info
https://www.example.com/fsProxy
Note the difference in the FooBar-Util lines!
At this point, you start to wonder what's the least unclean way to work around this. If you want to use the qx// operator, then you probably do:
my $header = "FooBar-Util:\\'$cert\\'";
This yields the variant outputs (system then qx//):
FooBar-Util:\'certificate-info\'
FooBar-Util:'certificate-info'
So, the qx// notation now gives the single quotes to the executed command, just as in the shell script. This meets the goal, so I'm going to suggest that it is 'OK' for you; I'm just not sure that I'd actually adopt it in my own code, but I don't have a cleaner mechanism on hand. I'd like to be able to use the system plus 'array of arguments' mechanism, while still capturing the output, but I haven't checked whether there's a sensible (meaning relatively easy) way to do that.
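For what it's worth, one candidate for that 'array of arguments plus captured output' wish is Perl's list form of pipe open, which bypasses the shell entirely; a sketch reusing the variables above:
# run curl with an explicit argument list (no shell, no quoting games)
# and read its standard output through the filehandle
open(my $curl, '-|', '/usr/bin/curl', '-c', $cookie, '--certify', $certkey,
     '--header', "FooBar-Util:'$cert'", $fsProxyURL)
    or die "cannot run curl: $!";
my @lines = <$curl>;
close($curl);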
One other passing comment; if any of your arguments contained spaces, you'd have to be excruciatingly careful with what gets passed through the shell. Having the al command available really pays off then. You can't identify which spaces in echo output are parts of one argument and which are separators provided by echo.
Is it on purpose that you have two different definitions of $cert?
Your translation of --header "FooBar-Util:'${CERT}'" is bad. The ${...} tells the shell to interpolate the CERT variable, but since you're already doing this interpolation from Perl, it is not needed and will just confuse things.
You're also missing a space before the $fsProxyURL.
As you're apparently not using the captured output from curl for anything, I would suggest that you use the system function instead, so you avoid an intermediate shell command-line parse:
system "/usr/bin/curl", "-c", $cookie, "--certify", $certTheFirst,
       "--header", "FooBar-Util:'$certTheSecond'", $fsProxyURL;
Finally, it's not very perlish to use a subsidiary awk to split the pass-util value into fields when Perl does that kind of thing perfectly well. Once you solve the immediate error, I suggest:
my @passwordline = split / /, qx(/dev/bin/util --show dev.testaccnt);
my $certTheSecond = $passwordline[1];
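Equivalently, you can index into split's result in one step; a minor stylistic choice:
# split on whitespace and take the second field directly
my $certTheSecond = (split ' ', qx(/dev/bin/util --show dev.testaccnt))[1];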
The " . $cert . " seems to be a leftover from some other code, where backquotes were used rather than qx. Removing the quotes and concatenations (.) works on my machine.
So, to execute the command, do:
my $curlcmd = qx(/usr/bin/curl -c $cookie --certify $certkey --header "FooBar-Util:'$cert'" $fsProxyURL);
Following your comment, if you just want to print the command, you can do:
my $curlcmd = qq(/usr/bin/curl -c $cookie --certify $certkey --header "FooBar-Util:'$cert'" $fsProxyURL);
print $curlcmd;

Use curl to parse XML, get an image's URL and download it

I want to write a shell script to get an image from an rss feed.
Right now I have:
curl http://foo.com/rss.xml | grep -E '<img src="http://www.foo.com/full/' | head -1 | sed -e 's/<img src="//' -e 's/" alt=""//' -e 's/width="400"//' -e 's/ height="400" \/>//' | sed 's/ //g'
I use this to grab the first occurrence of an image URL in the file.
Now I want to put this URL in a variable and use cURL again to download the image.
Any help appreciated! (Also, you might give tips on how to better remove everything from the line with the URL. This is the line:
<img src="http://www.nichtlustig.de/comics/full/100802.jpg" alt="" width="400" height="400" />
There's probably some better regex to remove everything except the URL than my solution.)
Thanks in advance!
Using a regexp to parse HTML/XML is a Bad Idea in general. Therefore I'd recommend that you use a proper parser.
If you don't object to using Perl, let Perl do the proper XML or HTML parsing for you using appropriate parser libraries:
HTML
curl http://BOGUS.com |& perl -e '{use HTML::TokeParser;
    $parser = HTML::TokeParser->new(\*STDIN);
    $img = $parser->get_tag("img");
    print "$img->[1]->{src}\n";
}'
/content02/groups/intranetcommon/documents/image/blk_logo.gif
XML
curl http://BOGUS.com/whdata0.xml | perl -e '{use XML::Twig;
    $twig = XML::Twig->new(twig_handlers => {img => sub {
        print $_[1]->att("src")."\n"; exit 0; }});
    open(my $fh, "-");
    $twig->parse($fh);
}'
/content02/groups/intranetcommon/documents/image/blk_logo.gif
I used wget instead of curl, but it's just the same:
#!/bin/bash
url='http://www.nichtlustig.de/rss/nichtrss.rss'
wget -O- -q "$url" | awk 'BEGIN{ RS="</a>" }
/<img src=/{
gsub(/.*<img src=\"/,"")
gsub(/\".[^>]*>/,"")
print
}' | xargs -i wget "{}"
Use a DOM parser and extract all img elements using getElementsByTagName. Then add them to a list/array, loop through and separately fetch them.
I would suggest using Python, but any language would have a DOM library.
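Sticking with Perl like the earlier answers, the same DOM idea might look like this sketch (XML::LibXML assumed available; rss.xml stands in for the downloaded feed):
use strict;
use warnings;
use XML::LibXML;

# parse the saved feed and print the src attribute of every img element
my $dom = XML::LibXML->load_xml(location => 'rss.xml');
for my $img ($dom->getElementsByTagName('img')) {
    print $img->getAttribute('src'), "\n";
}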
#!/bin/sh
URL=$(curl http://foo.com/rss.xml | grep -E '<img src="http://www.foo.com/full/' | head -1 | sed -e 's/<img src="//' -e 's/" alt=""//' -e 's/width="400"//' -e 's/ height="400" \/>//' | sed 's/ //g')
curl -C - -O "$URL"
This totally does the job!
Any idea on the regex?
Here's a quick Python solution:
from BeautifulSoup import BeautifulSoup
import sys
soup = BeautifulSoup(sys.stdin.read())
print soup.findAll('img')[0]['src']
Usage:
$ curl http://www.google.com/`curl http://www.google.com | python get_img_src.py`
This works like a charm and will not leave you trying to find the magical regex that will parse random HTML (Hint: there is no such expression, especially not if you have a greedy matcher like sed.)