InterSystems Caché SQL shell: How to spool the output of a query to a file? - intersystems-cache

I am trying to create a script in UNIX to query the Caché database. I can get the output, but how do I write the output to a file?
I used the following procedure:
cache << EOF
DO \$SYSTEM.SQL.Shell()
SELECTMODE DISPLAY
SELECT * from .....
GO
EXIT
EOF

If you are using this for reports/webpages/etc., I would look at the built-in modules that InterSystems provides for Perl, Python, Ruby, etc. These should give you a cleaner interface into the database.
That said, I do some external calls for monitoring purposes that don't use the language APIs, because of what I am trying to get from the database (usually internals). In those cases I use Perl, and doing it this way I need to parse out the menus and other things I don't want (which you wouldn't have to do with the APIs). The following is a simple example listing all the users that exist in Caché.
#!/usr/bin/perl
use strict;

my $instance = 'CACHE'; # your instance name
my $username = 'user';
my $password = 'password';
my $val = `csession $instance -U %SYS << done
$username
$password
d ^SECURITY
1
3
*
*
h
done`;
my @users = split(/\n/, $val);
my $in_users = 0;
my $output = '';
foreach (@users) {
    if ($in_users) {
        chomp($_);
        if ($_ eq '') {
            # no longer listing users
            $in_users = 0;
        } else {
            $output .= "$_\n";
        }
    }
    $in_users = 1 if ($_ =~ m/^\-\-\-\-/); # the user list starts after the ---- line
}
print $output; # this prints out to the console

# This prints to a file.
open(my $outfh, '>', 'outfile.txt') or die "Cannot open outfile.txt: $!";
print $outfh $output;
close($outfh);
Check out the following links about the Perl and Python APIs:
http://docs.intersystems.com/cache20102/csp/docbook/DocBook.UI.Page.cls?KEY=GBPL
and
http://docs.intersystems.com/cache20102/csp/docbook/DocBook.UI.Page.cls?KEY=GBPY

It's a bit hackish, but the following should work:
echo -e "username\npassword\nDO \$SYSTEM.SQL.Shell()\nfoo\nbar\nH\n" | cache > output
Each command you would enter in Caché is followed by a "\n" to indicate a line return. That is then piped into a new session and its output is written to the file.
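A slightly more portable variant of the same idea (echo -e is not available in every shell) is to build the scripted session with a heredoc and redirect the result. This is only a sketch: cat -n stands in for the real cache/csession binary, and the credentials and query lines are placeholders.

```shell
#!/bin/sh
# Sketch: feed a scripted session to a command and spool its output to a file.
# 'cat -n' is a stand-in for the real 'cache' (or 'csession <instance>') binary;
# the credentials and the SELECT are placeholders.
cat << 'EOF' | cat -n > output.txt
username
password
DO $SYSTEM.SQL.Shell()
SELECT 1
H
EOF
cat output.txt
```

Quoting the heredoc delimiter ('EOF') keeps the shell from expanding $SYSTEM before the text reaches the session.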

Related

Perl hangs up on while loop

This code hangs for some reason, or just doesn't go any further, when while (<>) { $file .= $_}; runs. Why is that?
As soon as I start the code and enter text, nothing more happens: it outputs task1 and then it hangs.
Code:
#!/usr/bin/perl -w
use strict;
use JSON;
my $json = JSON->new->allow_nonref;
my $file = "";
print('task1');
while (<>) { $file .= $_ };
print('task2');
my $json_output = $json->decode( $file );
my ($c, $i, $cstr, $istr);
foreach my $cert (@$json_output) {
print('task3');
$i = $json_output->{i};
$c = $json_output->{c};
$istr = join("", map { sprintf("%02x",$_) } @$i);
$cstr = pack("C*", @$c);
open(F, ">$istr.der"); print F $cstr; close(F);
print('done.');
}
Output:
task1
This line
while (<>) { $file .= $_ };
is trying to read from a file specified on the command line, or if there isn't one, from standard input. If there isn't anything piped to standard input, then it sits waiting for you to type something at the keyboard.
So I'm guessing you didn't specify a file on the command line, and your program is sitting there waiting to get input from standard input.
Also, the easier way to read the entire file into a single variable is like so:
my $file = do { local $/; <> };
See this article for other options.
How do you invoke your code? The <> operator means that it takes input from either all the files that you specify as arguments, or from standard input. If you call your script with no arguments, it will sit and wait for console input.
If you call it without arguments, try entering a few lines of text when it is "hanging", and then type Ctrl+D if you are on Linux, or Ctrl+Z on Windows. That should make the script work.
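You can see the difference from the shell: a program that reads standard input blocks at an interactive terminal, but returns as soon as a pipe is exhausted. Here cat stands in for the Perl script (an assumption for the demo):

```shell
# With nothing piped in, 'cat > slurped.txt' would sit waiting at the keyboard.
# With input piped in, it consumes the two lines and exits immediately.
printf 'line one\nline two\n' | cat > slurped.txt
wc -l < slurped.txt
```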

Asking for other inputs after piping a file to a perl script from a shell script

I am working on a school project with a main shell script that prompts the user for a text file containing 50 words. If the shell script finds that file in the same directory as the shell and Perl scripts, it prints a menu. The user can choose to sort the list using shell, outputting the sorted list to a new file (that part is finished and works), or to call a Perl script. The Perl script takes that file, prints all the words in it, then prompts the user for the word they want to search for, and returns what line the word is on in the list. What I have done is: if the user selects to search using the Perl script, we pipe the input file from the shell to the Perl script with:
cat $filename | ./search.pl
This successfully pipes the file over to the Perl script, where we can use it. The first while loop is where we access the list and print every word/line for the user to see, and that works fine. But here is where I run into trouble: after the whole list is printed, the printf line asking for the word to search for prints, but then the program just stops without allowing any more input and returns to the terminal. My logic for this search script is: print every word so the user can see what they can search for, then ask what they want to search for, then look through the input file with a while loop. If we find the word, print that we found it on that line; if we don't find it, go to the next line; and if we hit the end without finding it, print that it can't be found.
Why am I unable to enter more input with the call to STDIN and assign it to $word to use in the second while loop? Also, in the second while loop, is using <> by itself after asking for different input going to mess things up? If so, how do I make a reference to the file again for the second while loop?
#!/usr/bin/env perl
$count = 1; #global variable for return value
# of words.
while (<>) {
printf "$_";
}
#Now that we have opened the file, printed off everything for the user to see, they can now enter a word in a prompt to
# see what line it is on.
printf "\nPlease enter the word you want to search for\n";
my $word = <STDIN>;
chomp $word;
while ( $line = <> ) {
if ( $line =~ m/$word/ ) {
print "$word has been found on line $count.\n\n";
} elsif ( $line !=~ m/$word/ ) {
$count++;
} else {
print "$word cannot be found.";
}
}
The Shell script (for reference):
#!/bin/bash
clear
printf "Hello. \nPlease input a filename for a file containing a list of words you would like to use. Please allow for one word per line.\n -> "
read filename
printf "You have entered the filename: $filename.\n"
if [ -f "$filename" ] #check if the file even exists in the current directory to use
then
printf "The file $filename exists. What would you like to do with this file?\n\n"
else
printf "The file: $filename, does not exist. Rerun this shell script and please enter a valid file with its proper file extension. An example of this would be mywords.txt \n\nNow exiting.\n\n"
exit
fi
printf "Main Menu\n"
printf "=========\n"
printf "Select 1 to sort file using Shell and output to a new file.\n"
printf "Select 2 to sort file using Perl and output to a new file.\n"
printf "Select 3 to search for a word using Perl.\n"
printf "Select 4 to exit.\n\n"
echo "Please enter your selection below"
read selection
printf "You have selected option $selection.\n"
if [ $selection -eq "1" ]
then
read -p "What would you like to call the new file? " newfile #asks user what they want to call the new file that will have the sorted list outputted to it
sort $filename > $newfile
echo "Your file: $newfile, has been created."
fi
if [ $selection -eq "2" ]
then
read -p "What would you like to call the new file? " newfile2
cat $filename | ./sort.pl
# > $newfile2 #put the sorted list into the new output file that the user specified with newfile2
fi
if [ $selection -eq "3" ]
then
cat $filename | ./search.pl
fi
if [ $selection -eq "4" ]
then
printf "Now exiting.\n\n"
exit
fi
I have modified your code as shown below. For your understanding, I have added comments, but try to avoid comments wherever they are not required.
Code:
#!/usr/bin/env perl
use strict;
use warnings;

# Input file passed as an argument to the program
my $InFile = $ARGV[0];

# Opening and reading the file using filehandle $fh
open my $fh, '<', $InFile or die "Couldn't open the file $InFile : $!";
while (<$fh>) {
    print $_;
}

# seek() as shown below resets the filehandle position to the beginning of the file
seek($fh, 0, 0);

printf "\nPlease enter the word you want to search for\n";
my $word = <STDIN>;
chomp $word;

my $count = 1; # current line number
my $val   = 0; # number of matches found
while (my $line = <$fh>) {
    if ($line =~ m/$word/) {
        print "$word has been found on line $count.\n\n";
        $val++;
    }
    $count++; # advance the line number on every line, match or not
}
if ($val == 0) {
    print "$word cannot be found";
}
close($fh);
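The underlying problem is worth spelling out: with cat $filename | ./search.pl, the script's standard input is the pipe, so once the word list is consumed, the later <STDIN> read sees end-of-file instead of the keyboard. Passing the filename as an argument keeps STDIN free for the prompt. A rough shell sketch of the two separate input channels (file names and the search term here are made up for the demo):

```shell
#!/bin/sh
# Two separate input channels: the word list comes from a file, while the
# search term arrives on STDIN. (words.txt and the term 'beta' are demo values.)
printf 'alpha\nbeta\ngamma\n' > words.txt
echo beta | {
    cat words.txt          # the list, read from the file (argument-style)
    read -r word           # the term, read from STDIN (the pipe here)
    grep -n "$word" words.txt
} > result.txt
cat result.txt
```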

This is a Perl script to grep for a pattern, entered by the user, in docx files. Can anybody please point out my mistakes so I can make it work?

#!usr/bin/perl
#script: patternsearch.pl : Program to search for specific pattern inside the file.
print ("Prgramme name: $0 \n");
print ("Enter pattern: \n");
chop ($pattern = <STDIN>);
print ("Enter the absolute folder path: \n");
chop ($folder = <STDIN>);
print ("Enter file type: \n");
chop ($filetype = <STDIN>);
die ("pattern not entered??? \n") if ($pattern eq " ");
if ($filetype eq "txt") {
foreach $search (`find $folder -type f -name "*.$filetype"`) {
do `grep -H $pattern $search>> patternsearch.txt`;
}
}
else {
foreach $search (`find $folder -type f -name "*.$filetype"`) {
do `antiword $search | grep -H $pattern >> patternsearch.txt`;
}
}
print ("Taskcompleted \n");
*.docx files are not plain text or even actually XML -- they're zipped bundles of XML and other stuff. You can't grep for text in the zipped file. You could unzip a *.docx, and then grep in the contents -- although in my experience the XML is written without line breaks, such that each grep hit would be the entire contents of the document.
You really should
use strict;
use warnings;
at the start of every program, and declare all your variables with my at the point of first use. This applies especially if you are asking for help with your program, as it will quickly draw attention to a lot of simple mistakes.
You ought to use chomp instead of chop, as the latter just removes the last character from a string whereas the former checks to see if it is a line terminator (newline) before it removes it.
The only other problems I can find are that you don't chomp the output of your backtick find commands: you should write chomp $search before the grep or antiword commands. Also (to paraphrase Yoda), there is no do before a backticks command. Remove it from before grep and antiword and your program may work.
If you have any further problems, please explain what output you expect, and what you are getting.
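For comparison, the same find-then-grep loop can be done entirely in the shell, which sidesteps the chomp and do issues. This is only a sketch: the directory, file name, and pattern are placeholders for this example.

```shell
#!/bin/sh
# Sketch: search every *.txt under a directory and collect the hits in one file.
# 'demo', 'a.txt', and the pattern 'needle' are placeholders for this example.
mkdir -p demo
printf 'needle here\nnothing here\n' > demo/a.txt
find demo -type f -name '*.txt' | while IFS= read -r f; do
    grep -H 'needle' "$f"
done > patternsearch.txt
cat patternsearch.txt
```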

Perl script works with -w switch but not without

This script works on localhost with the -w switch but not without it. It also works when use strict and use warnings are active.
apache2/error.log:
without switch (aborted script):
(2)No such file or directory: exec of ... failed
with the switch I get:
Use of uninitialized value $email_flag in string ne ...
which looks initialised to me.
On the live web server neither one works. Perl is new to me, but I know some BASH and PHP.
I run Debian Lenny, Apache2, Perl 5.10.
#!/usr/bin/perl -w
$| = 1;
my $mailprog = '/usr/sbin/sendmail'; # where the mail program lives
my $to = "not\@for.you"; # where the mail is sent
my ($command, $email, @pairs, $buffer, $pair, $email_flag);
read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'});
@pairs = split(/&/, $buffer);
foreach $pair (@pairs) {
# Split the pair up into individual variables. #
my($name, $value) = split(/=/, $pair);
# Decode the form encoding on the name and value variables. #
$name =~ tr/+/ /;
$name =~ s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/eg;
$value =~ tr/+/ /;
$value =~ s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/eg;
# If they try to include server side includes, erase them, so they
# aren't a security risk if the html gets returned. Another
# security hole plugged up.
$value =~ s/<!--(.|\n)*-->//g;
## print "Name of form element is $name with value of $value \n";
if ($name eq 'email') {
$email = $value;
}
if ($name eq 'command') {
$command = $value;
}
}
if ($email =~ /(@.*@)|(\.\.)|(@\.)|(\.@)|(^\.)/ ||
$email !~ /^.+\@(\[?)[a-zA-Z0-9\-\.]+\.([a-zA-Z]{2,3}|[0-9]{1,3})(\]?)$/ ) {
$email_flag = "ERROR";
}
my $urlcommand = $command;
if ($command eq 'Subscribe') {
$command = "SUBSCRIBE rpc-news";
}
if ($command eq 'Unsubscribe') {
$command = "UNSUBSCRIBE rpc-news";
}
if ($command eq 'Suspend') {
$command = "SET rpc-news NOMAIL";
}
if ($command eq 'Resume') {
$command = "SET rpc-news MAIL";
}
my $getInfo = '';
print "Content-Type: text/html\n";
if ($email_flag ne "ERROR") {
open(MAIL,"|$mailprog -t");
print MAIL "To: $to\n";
print MAIL "From: $email\n";
print MAIL "Subject: [rpc-news] $command \n";
print MAIL "Reply-to: $email \n";
print MAIL "$command \n";
print MAIL "EXIT \n";
close (MAIL);
$getInfo = "?result=good";
}
if ($email_flag eq "ERROR") {
$getInfo = "?result=bad";
}
my $rootURL= $ENV{'SERVER_NAME'};
my $url = "http://${rootURL}/thank_you.html${getInfo}&action=${urlcommand}";
print "Location: $url\n\n";
Did you create your script on a Windows machine and upload it to a Linux server without fixing the line endings? Without the -w switch, the shebang line may look like "#!/usr/bin/perl\r", so the system goes looking for a program named "perl\r" (or however the line ending looks). With the -w switch, "#!/usr/bin/perl" doesn't have an indecipherable line ending stuck to it. Instead, that gets stuck to -w where it doesn't cause failure.
I thought there was a perlfaq about this, but I can't seem to find it in the docs at the moment.
Update: I found it over on PerlMonks, in a really old Q&A topic that seems unrelated until you read the body of the message: Answer: How to get rid of premature end of script headers. Yeah, I know, if you were just browsing threads you wouldn't even stop on that one. But here's the text of the post:
If you developed this script on Windows, it's possible that the script file has non-UNIX line endings. (The perl interpreter can handle them, but the shebang line is interpreted by the shell, and is not tolerant of incorrect line endings.) If this is the problem, the script may terminate with an error right at the shebang line.
Use of uninitialized value $email_flag in string ne ...
which looks initialised to me.
if ($email =~ /(@.*@)|(\.\.)|(@\.)|(\.@)|(^\.)/ ||
$email !~ /^.+\@(\[?)[a-zA-Z0-9\-\.]+\.([a-zA-Z]{2,3}|[0-9]{1,3})(\]?)$/
) {
$email_flag = "ERROR";
}
$email_flag only gets initialized here if the pattern matches - otherwise it's left undefined. You could add an else clause to ensure it gets initialized no matter what.
I would not use that code; it doesn't use CGI.pm (or CGI::Simple...).
Get "TFMail -- Improved Form Mail" from "nms - web programs written by experts".
It's simple to install, and it's well written (it uses CGI...).

How do I get the output of an external command in Perl?

I want to have the output of a Windows command-line program (say, powercfg -l) written into a file which is created using Perl, and then read the file line by line in a loop and assign each line to a string.
You have some good answers already. In addition, if you just want to process a command's output and don't need to send that output directly to a file, you can establish a pipe between the command and your Perl script.
use strict;
use warnings;
open(my $fh, '-|', 'powercfg -l') or die $!;
while (my $line = <$fh>) {
# Do stuff with each $line.
}
system 'powercfg', '-l';
is the recommended way. If you don't mind spawning a subshell,
system "powercfg -l";
will work, too. And if you want the results in a string:
my $str = `powercfg -l`;
my $output = qx(powercfg -l);
## You've got your output loaded into the $output variable.
## Still want to write it to a file?
open my $OUTPUT, '>', 'output.txt' or die "Couldn't open output.txt: $!\n";
print $OUTPUT $output;
close $OUTPUT;
## Now you can loop through each line and
## parse the $line variable to extract the info you are looking for.
foreach my $line (split /[\r\n]+/, $output) {
## Regular expression magic to grab what you want
}
There is no need to first save the output of the command in a file:
my $output = `powercfg -l`;
See qx// in Quote-Like Operators.
However, if you do want to first save the output in a file, then you can use:
my $output_file = 'output.txt';
system "powercfg -l > $output_file";
open my $fh, '<', $output_file
or die "Cannot open '$output_file' for reading: $!";
while ( my $line = <$fh> ) {
# process lines
}
close $fh;
See perldoc -f system.
Since the OP is running powercfg, s/he is probably capturing the output of the external script, so s/he probably won't find this answer terribly useful. This post is written primarily for other people who find the answers here by searching.
This answer describes several ways to start a command that will run in the background without blocking further execution of your script.
Take a look at the perlport entry for system. You can use system( 1, 'command line to run'); to spawn a child process and continue your script.
This is really very handy, but there is one serious caveat that is not documented: if you start more than 64 processes in one execution of the script, your attempts to run additional programs will fail.
I have verified this to be the case with Windows XP and ActivePerl 5.6 and 5.8. I have not tested this with Vista, with Strawberry Perl, or with any version of 5.10.
Here's a one liner you can use to test your perl for this problem:
C:\>perl -e "for (1..100) { print qq'\n $_\n-------\n'; system( 1, 'echo worked' ); sleep 1 }"
If the problem exists on your system, and you will be starting many programs, you can use the Win32::Process module to manage your application startup.
Here's an example of using Win32::Process:
use strict;
use warnings;
use Win32::Process;
if( my $pid = start_foo_bar_baz() ) {
print "Started with $pid";
}
sub start_foo_bar_baz {
my $process_object; # Call to Create will populate this value.
my $command = 'C:/foo/bar/baz.exe'; # FULL PATH to executable.
my $command_line = join ' ',
'baz', # Name of executable as would appear on command line
'arg1', # other args
'arg2';
# iflags - controls whether process will inherit parent handles, console, etc.
my $inherit_flags = DETACHED_PROCESS;
# cflags - Process creation flags.
my $creation_flags = NORMAL_PRIORITY_CLASS;
# Path of process working directory
my $working_directory = 'C:/Foo/Bar';
my $ok = Win32::Process::Create(
$process_object,
$command,
$command_line,
$inherit_flags,
$creation_flags,
$working_directory,
);
my $pid;
if ( $ok ) {
$pid = $process_object->GetProcessID;
}
else {
warn "Unable to create process: "
. Win32::FormatMessage( Win32::GetLastError() )
;
return;
}
return $pid;
}
To expand on Sinan's excellent answer and to more explicitly answer your question:
NOTE: backticks `` tell Perl to execute a command and retrieve its output:
#!/usr/bin/perl -w
use strict;
my @output = `powercfg -l`;
chomp(@output); # removes newlines

my $linecounter = 0;
my $combined_line;
foreach my $line (@output) {
    print $linecounter++ . ")";
    print $line . "\n";      # prints line by line
    $combined_line .= $line; # build a single string with all lines
    # The line above is the same as writing:
    # $combined_line = $combined_line . $line;
}
print "\n";
print "This is all on one line:\n";
print ">".$combined_line."<";
Your output (on my system) would be:
0)
1)Existing Power Schemes (* Active)
2)-----------------------------------
3)Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e (Balanced) *
4)Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c (High performance)
5)Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a (Power saver)
This is all on one line:
>Existing Power Schemes (* Active)-----------------------------------Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e (Balanced) *Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c (High performance)Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a (Power saver)<
Perl makes it easy!
Try using the > operator to redirect the output to a file, like:
powercfg -l > output.txt
And then open output.txt and process it.
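The redirect-then-read pattern looks like this as a shell sketch; ls / stands in for powercfg -l, since powercfg only exists on Windows, and the file names are placeholders:

```shell
#!/bin/sh
# Sketch: capture a command's output in a file, then process it line by line.
# 'ls /' is a portable stand-in for 'powercfg -l' (a Windows-only command).
ls / > output.txt
while IFS= read -r line; do
    printf 'line: %s\n' "$line"
done < output.txt > processed.txt
cat processed.txt
```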