I have a directory with lots of .wav files. I read the data of each file through a curl command and want to store the output for each file in a different file.
This is not the complete script, just the relevant part:
$command = '--request POST --data-binary "@Raajpal_long.wav" "https://xxxx.xx.xxxx:xxxx/SpeakerId?action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16"';
$stdout = system("curl $command");
When I run the Perl script, it prints the output to the command-line window:
{"status": 0, "processing_time": 96.0, "enrollment_audio_time": 131.10000610351562, "matches": [{"speaker": "sw", "identification_score": 252.54136657714844}]}
I want to store this output into a file.
I used:
open (FILE, ">1.txt") or die "Unable to open 1.txt";
$stdout = system("curl $command");
print FILE $stdout;
It saves only a zero (0).
Can anyone tell me how to solve this?
You're already shelling out to curl to make the request; it would be cleaner to just use curl's -o/--output option to write to a file instead of stdout.
-o, --output <file>
Write output to <file> instead of stdout. If you are using {} or [] to
fetch multiple documents, you can use '#' followed by a number in the
<file> specifier. That variable will be replaced with the current
string for the URL being fetched. Like in:
curl http://{one,two}.example.com -o "file_#1.txt"
or use several variables like:
curl http://{site,host}.host[1-5].com -o "#1_#2"
You may use this option as many times as the number of URLs you have.
For example, if you specify two URLs on the same command line, you can
use it like this:
curl -o aa example.com -o bb example.net
and the order of the -o options and the URLs doesn't matter, just that
the first -o is for the first URL and so on, so the above command line
can also be written as
curl example.com example.net -o aa -o bb
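For example, appending -o to the command from the question writes the JSON response straight to a file; the output filename 1.txt here is just an illustration:
# write the response body to 1.txt instead of printing it to stdout
curl --request POST --data-binary "@Raajpal_long.wav" -o 1.txt \
     "https://xxxx.xx.xxxx:xxxx/SpeakerId?action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16"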
You can't use system to capture output; system returns the command's exit status (which is where your zero comes from), not what the command prints. You can use backticks `` in place of system.
Something like:
my $command = '--request POST --data-binary "@Raajpal_long.wav" "https://services.govivace.com:49162/SpeakerId?action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16"';
my $result = `curl $command`;
if ( $? == -1 ) {
    print "\nCurl command failed: $!\n";
} elsif ( $? == 0 ) {
    print "$result\n";
}
What does the -d do in the following piece of code?
foreach my $filename (@files) {
    my $filepath = $dir.$filename;
    next if -d $filepath;
    function1();
}
This is a short form for
if (-d $filepath) {
    next;
}
Where -d $filepath tests whether $filepath is a directory.
See http://perldoc.perl.org/functions/-X.html for a full list of file tests.
-d tests if $filepath is a directory.
All such file tests are documented at perldoc -X:
-X FILEHANDLE
-X EXPR
-X DIRHANDLE
-X
A file test, where X is one of the letters listed below. This unary operator takes one argument, either a filename, a filehandle, or a dirhandle, and tests the associated file to see if something is true about it. If the argument is omitted, tests $_, except for -t, which tests STDIN. Unless otherwise documented, it returns 1 for true and '' for false. If the file doesn't exist or can't be examined, it returns undef and sets $! (errno). Despite the funny names, precedence is the same as any other named unary operator. The operator may be any of:
...
-f File is a plain file.
-d File is a directory.
...
It checks whether the path is a directory.
A short example:
$somedir = "c:/windows";
if (-d $somedir) {
    print "$somedir is a directory";
} else {
    print "$somedir is not a directory!";
}
Also check the docs for the other file tests:
-f File is a plain file.
-d File is a directory.
-l File is a symbolic link.
-p File is a named pipe (FIFO), or Filehandle is a pipe.
-S File is a socket.
-b File is a block special file.
-c File is a character special file.
-t Filehandle is opened to a tty.
Essentially, next if -d $filepath; means "if this path is a directory, skip to the next iteration of the loop", which skips the call to function1 for that file. In short, it is a way of applying function1 only to files which are NOT directories.
I'm looking to use wget to retrieve a Perl script and execute it in one line. Does anyone know if this is possible, and how I would go about doing it?
In order to use wget for this purpose, you would use the -O flag and give it the '-' character as an argument. From the manpage:
-O file
--output-document=file
Giving '-' as the "file" argument to -O tells wget to send its output to stdout, which can then be piped into the Perl command.
You can provide the -q flag as well to turn off wget's own warning and message output:
-q
--quiet
Turn off Wget's output.
This will make things look cleaner in the shell.
So you would end up with something like:
wget -qO - http://127.0.0.1/myscript.pl | perl -
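If the script expects command-line arguments, anything after the - is handed to the script's @ARGV; the flag and filename below are made up for illustration:
# fetch the script and run it with two hypothetical arguments
wget -qO - http://127.0.0.1/myscript.pl | perl - --verbose output.txt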
For more information on I/O redirection take a look at this:
http://www.tldp.org/LDP/abs/html/io-redirection.html
Just download the script and pipe it to perl:
curl -L http://your_location.pl | perl -
You'll sometimes see code like this used to install modules, for example cpanm.
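For instance, the widely cited cpanminus install one-liner has exactly this shape:
# fetch the cpanm bootstrap script and feed it to perl, which installs App::cpanminus
curl -L https://cpanmin.us | perl - App::cpanminus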
I have a file like the one below. I need to get the value of title (which occurs multiple times in the file) and store it in a separate file.
{"card":{"cardName":"43SCX4","portSignalRates":["43SCX4-L-OTU3","43SCX4-C-TENGIGE","43SCX4-C-OTU2","43SCX4-C-FC8G","43SCX4-C-STM64","43SCX4-C-OC192"],"listOfPort":{"43SCX4-L-OTU3":{"portAid":"43SCX4-L-OTU3","signalType":"OTU3","tabNames":["PortDetails","OTU3e2Details"],"title":"OperationalMode",{"label":"Regen","value":"regen"}],"label":"Regen","value":"regen","checked":"","enabled":"true","selected":""},{"type":"dijit.form.Select","name":"Frequency","title":"Transmit Frequency "}}}}
I tried "awk -F, '{}' sample"; I'm able to split the line, but I'm not able to iterate over the fields and write only the "title":"****" parts to another file.
Through grep with the -oE option:
$ grep -oE '"title":"[^"]*"' file
"title":"OperationalMode"
"title":"Transmit Frequency "
If you want just the value of title, use a lookbehind:
$ grep -oP '(?<="title":")[^"]*' file
OperationalMode
Transmit Frequency
If you want to save it to another file, redirect the output with the > operator. Example: grep -oP '(?<="title":")[^"]*' file > outfile
We have a monitoring system that produces RRD databases. I am looking for the most lightweight way to create graphs from these RRD files for our HTML pages, so I don't want to store the graphs as files. I am trying to create a simple bash CGI script that outputs the image data, so I can do something like this:
<img src="/cgi-bin/graph.cgi?param1=abc" />
First of all, I am trying to create a simple CGI script that will send me a PNG image. This doesn't work:
#!/bin/bash
echo -e "Content-type: image/png\n\n"
cat image.png
But when I rewrite this in Perl, it does work:
#!/usr/bin/perl
print "Content-type: image/png\n\n";
open(IMG, "image.png");
print while <IMG>;
close(IMG);
exit 0;
What is the difference? I would really like to do this in bash. Thank you.
Without the -n switch, echo appends its own trailing newline, so together with the two \n in the string you output three newlines; the extra blank line ends up inside the image data and corrupts it. It should be
echo -ne "Content-type: image/png\n\n"
or
echo -e "Content-type: image/png\n"
from man echo
-n do not output the trailing newline
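Putting it together, a minimal working version of the bash script from the question (assuming image.png is readable from the script's working directory) would be:
#!/bin/bash
# exactly one blank line must separate the CGI header from the body,
# so suppress echo's own trailing newline with -n
echo -ne "Content-type: image/png\n\n"
# stream the raw PNG bytes as the response body
cat image.png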
I have a log file which contains traffic for an entire server. The server serves multiple domains, but I know that all of the PDF files I want to count are in /some/directory/.
I know that I can get a list of all the PDF files I want if I grep that directory for the 'pdf' extension.
How can I then count how many times each PDF was accessed individually from the command line?
This is a bit longer than one line, but it will give you a better summary. Adjust the path to the PDFs and the Apache access_log file, then paste it into the command line or put it in a bash script:
for file in `ls /path/to/pdfs | grep pdf`
do
    COUNT=`grep -c "$file" access_log`
    echo "$file $COUNT"
done
Grep for the name of the pdf file in your log and use the -c option to count occurrences. For example:
grep -c myfile.pdf apache.log
If you have hundreds of files, create a single file with a list of all the filenames, e.g.
$ cat filelist.txt
foo.pdf
bar.pdf
and then use grep in a loop
while read filename
do
    COUNT=$(grep -c "$filename" apache.log)
    echo "$filename:$COUNT"
done < filelist.txt
This will print out how many times each pdf file occurred in the log.
Use grep to identify the rows with your pdf and then wc -l to count the rows found:
grep /your/pdf logfile | wc -l
You may also want to check for 200 responses versus 302 ones, i.e. whether the user only accessed a page or fetched the full document (some PDF readers only load a page at a time); a sketch of that check follows.
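Assuming the common Apache combined log format, where the status code is the ninth whitespace-separated field, a rough sketch would be:
# keep only lines for the PDF, then only those with status 200, then count them
grep /your/pdf logfile | awk '$9 == 200' | wc -l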