Perl: cannot get correct exit code from external program

I've searched everywhere, but I can't seem to find a solution for my issue; it is probably code related.
I'm trying to catch the exit code from a Novell program called DXCMD, to check whether certain "drivers" are running. This is no problem in bash, but I need to write a more complex Perl script (working with arrays is easier, for example).
This is the code:
# Fill @driverarray with the results from ldapsearch (in LDAP syntax)
my @driverarray = `ldapsearch -x -Z -D "$username" -w "$password" -b "$IDM" -s sub "ObjectClass=DirXML-Driver" dn | grep ^dn:* | sed 's/^....//' | sed 's/cn=//g;s/dc=//g;s/ou=//;s/,/./g'`;

# Iterate through the drivers and get the exit code:
foreach my $driverdn (@driverarray)
{
    my $cmd = `/opt/novell/eDirectory/bin/dxcmd -user $username -password $password -getstate "$driverdn"`;
    my $driverstatus = $? >> 8;
}
I've come this far; the rest of the code is written (getting the states).
But the $? >> 8 code always returns 60. When I copy the command directly into the shell and echo $?, the return code is always 2 (which means the driver is running fine). In bash the code also works (but without the >> 8, obviously).
I've looked into the error code 60, but I cannot find anything, so I think it is due to my code.
How can I rectify this error? Or how can I track the error? Anyone? :)

Wrong value passed to -getstate: you didn't remove the trailing newline from each element. You're missing
chomp(@driverarray);
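For completeness, a minimal sketch of the corrected loop (this just combines the question's code with the chomp; the print line is illustrative, and @driverarray, $username and $password are assumed to be populated as in the question):

# Strip the trailing newline from every element returned by ldapsearch
chomp(@driverarray);

foreach my $driverdn (@driverarray)
{
    # Backticks capture stdout; the child's status lands in $?
    my $cmd = `/opt/novell/eDirectory/bin/dxcmd -user $username -password $password -getstate "$driverdn"`;

    # The high byte of $? is the real exit code (the low byte holds signal/core-dump info)
    my $driverstatus = $? >> 8;
    print "Driver $driverdn is in state $driverstatus\n";
}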

Related

bash script to build complex command syntax, print it first then execute - problems with variable expansion

I want to create a script to facilitate producing local text file extracts from Hive.
This is to basically execute commands like below:
hive -e "SET hive.cli.print.header=true;SELECT * FROM dropme"|perl -pe 's/(?:\t|^)\KNULL(?=\t|$)//g'>extract/outbound/dropme.txt
While the above works like a charm, I find it quite problematic to implement through the following parametrized script (much simplified):
#!/bin/sh
TNAME=dropme
SQL="SELECT * FROM $TNAME"
echo $SQL
echo "SQL: $SQL"
EXTRACMD="hive -e \"SET hive.cli.print.header=true;$SQL\"|perl -pe 'BEGIN{if(defined(\$_=<ARGV>)){s/\b\w+\.//g;print}}s/(?:\t|^)\KNULL(?=\t|$)//g'>extract/outbound/$TNAME.txt"
echo "CMD: $EXTRACMD";
${EXTRACMD}
When run, I get: Exception in thread "main" java.lang.NumberFormatException: For input string: "e"
I know there may be many ways to print the text or execute the command. For instance, the line echo $SQL prints a list of files in the directory instead:
SELECT file1.txt file2.txt file3.txt file4.txt FROM dropme
while the next one: echo "SQL: $SQL" gives just what I want: SQL: SELECT * FROM dropme
echo "CMD: $EXTRACMD" prints the (almost) the command to be executed. Almost, as I see \t in perl code being expanded:
CMD: hive -e "SET hive.cli.print.header=true;SELECT * FROM dropme"|perl -pe 'BEGIN{if(defined($_=<ARGV>)){s\w+\.//g;print}}s/(?: |^)\KNULL(?= |$)//g'>extract/outbound/dropme.txt
Maybe that's still OK, but what I want is to be able to copy and paste this command into a(nother) terminal and execute it as the command I put at the top. Ideally I would like that command to be exactly the same (so with \t in there).
The biggest problem comes when I try to execute it (the ${EXTRACMD} line). I'm getting the error:
Exception in thread "main" java.lang.NumberFormatException: For input string: "e" …and so on; the rest is irrelevant, as bash treats every 'word' as a separate token here, I assume. I don't even know what it really tries to run (the prior print attempt obviously doesn't help).
I'm aware that I have multiple options, like:
escaping special characters in the command definition string (like I did with doublequotes)
experimenting with echo and $VAR, '$VAR' or "$VAR"
experimenting with "${EXTRACMD}" or evaluating through eval "${EXTRACMD}"
experimenting with shopt -s extglob or set -f
but as the number of combinations is quite large and my bash experience is limited, I feel it's better to ask about good practice here, so my question is:
Is there a way to print a (complex/compound shell) command first and subsequently be able to execute it (exactly as per printed output)? In this case it would be printing the exact command from the top, then executing it the same way as by manually copying that output into terminal prompt and pressing Enter.
Do not construct commands as strings. See http://mywiki.wooledge.org/BashFAQ/050 for details.
That page also talks about a built-in way of getting the shell to tell you what it is running (section 6).
If that doesn't do what you want, you can also, with bash, try using printf %q\\n "${arr[*]}".

Calling Patch.exe from Powershell with p0 argument

I'm currently trying to patch some files via PowerShell using Patch.exe
I am able to call the exe using the '&' call operator, but it doesn't seem to be reading my -p0 input. I'm not an expert on PowerShell, so any help would be appreciated!
Here is what I am calling in PS:
$output = & "$scriptPath\patch.exe" -p0 -i $scriptPath\diff.txt
My error reads:
can't find file to patch at input line 5
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
I can emulate this by leaving out the -p0 parameter when running patch on my file from the command line.
Here are some alternatives I've already tried:
#$output = & "$scriptPath\patch.exe" -p0 -i "$scriptPath\diff.txt"
#CMD /c "$scriptPath\patchFile.bat" (where patchFile.bat has %~dp0patch.exe -p0 < %~dp0diff.txt; it seems like PowerShell reads < as 0<, so I think there is an error there)
#GET-CONTENT $scriptPath\diff.txt | &"$scriptPath\patch.exe" "-p0"
#GET-CONTENT $scriptPath\diff.txt | CMD /c "$scriptPath\patch.exe -p0"
Thanks!
Try:
$output = & "$scriptPath\patch.exe" -p0 -i "$scriptPath\diff.txt"
Patch.exe was starting in the wrong working directory, and I solved this by using Push-Location / Pop-Location.
Here is what my code looks like now:
Push-Location $scriptPath
$output = & "$scriptPath\patch.exe" -p0 -i "$scriptPath\diff.txt"
Pop-Location
Keith also mentioned in one of his comments that you can use:
[Environment]::CurrentDirectory = $pwd
I have not tested this, but I assume it does the same thing (Keith is a PowerShell MVP; I am just a student).

Curl command uploading document fails when run from Perl

I've got a Perl script that uploads documents into Alfresco using curl.
Some of the documents have an ampersand in the file name, and initially this caused curl to fail. I fixed this by placing a caret symbol in front of the ampersand. But now I'm finding that some documents fail to upload when they don't have a space on either side of the ampersand. Other documents with spaces in the file name and an ampersand do load successfully.
The snippet of Perl code that is running is:
# Escape & for curl in file name with a ^
my $downloadFileNameEsc = ${downloadfile};
$downloadFileNameEsc =~ s/&/^&/g;
$command = "curl -u admin:admin -F file=\#${downloadFileNameEsc} -F id=\"${docId}\" -F title=\"${docTitle}\" -F tags=\"$catTagStr\" -F abstract=\"${abstract}\" -F published=\"${publishedDate}\" -F pubId=\"${pubId}\" -F pubName=\"${pubName}\" -F modified=\"${modifiedDate}\" -F archived=\"${archived}\" -F expiry=\"${expiryDate}\" -F groupIds=\"${groupIdStr}\" -F groupNames=\"${groupNameStr}\" ${docLoadUrl}";
logmsg(4, $command);
my @cmdOutput = `$command`;
$exitStatus = $?;
my $upload = 0;
logmsg(4, "Alfresco upload status $exitStatus");
if ($exitStatus != 0) {
You can see that I am using backticks to execute the curl command so that I can read the response. The Perl script is being run under Windows.
What this effectively tries to run is:
curl -u admin:admin -F file=@tmp-download/Multiple%20Trusts%20Gift%20^&%20Loan.pdf -F id="e2ef104d-b4be-4896-8360-7d6f2e7c7b72" ....
This works.
curl -u admin:admin -F file=@tmp-download/Quarterly_Buys^&sells_Q1_2006.doc -F id="78d18634-ee93-4c29-b01d-270aeee3219a" ....
This fails!!
As far as I can see, the only difference is that in the one that works, the file name has spaces (%20) somewhere around the ampersand, not necessarily right next to it.
I can't see why one runs successfully and the other doesn't. I think it must be something to do with backticks and ampersands in the file name. I haven't tried using system because I wanted to capture the response.
Any thoughts? I've exhausted all options.
You should learn to use Perl modules. Perl has some great modules for handling web requests. If you depend upon operating system commands, you end up depending not only on those commands, but also on shell interactions and on whether or not you need to quote special characters.
Perl modules remove a lot of the issues that you can run into. You are no longer dependent upon particular commands or even particular implementations of those commands. (The curl command can vary from system to system, and may not even be on the system you're on.) Plus, most of these modules handle the piddling details for you (such as URI-escaping strings).
LWP is the standard Perl library for implementing these requests. Take a look at the LWP Cookbook. This is a tutorial on the whole HTTP process. Basically, you need to create an agent, which is really just a virtual web browser for you to use. Then you can configure it however you might need (for example, setting the browser type, etc.).
What is really nice is HTTP::Request::Common, which provides a simple interface for using HTTP forms.
use HTTP::Request::Common qw(POST);

my $request = POST $docLoadUrl,
    Content_Type => 'form-data',
    Content      => [
        file      => [ $downloadFileName ],   # arrayref means "upload this file's contents"
        id        => $docId,
        title     => $docTitle,
        tags      => $catTagStr,
        abstract  => $abstract,
        published => $publishedDate,
        pubId     => $pubId,
        pubName   => $pubName,
        ...
    ];
This is a lot easier to read and maintain. Plus, it will handle URI encoding for you.
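As a hedged follow-up (not part of the original answer): once the request object is built, actually sending it might look roughly like this with LWP::UserAgent, reusing the question's admin:admin credentials and its logmsg() helper:

use LWP::UserAgent;

# Mirror the curl call's -u admin:admin (HTTP Basic authentication)
$request->authorization_basic('admin', 'admin');

my $ua       = LWP::UserAgent->new;
my $response = $ua->request($request);

if ($response->is_success) {
    logmsg(4, "Alfresco upload status " . $response->code);
}
else {
    logmsg(4, "Alfresco upload failed: " . $response->status_line);
}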

How do you handle the exception in bat and *sql files

I'm using a PowerShell script to run a few *.sql files. I can get the output value by using the script below:
$return_value = sqlcmd -S ServerName -i "MyAwesome.sql" -v parameter1="par1"
The problem is that I have to parse the output text to determine whether there is any error in the SQL file or not.
Is there any best practice for handling the exceptions in the PowerShell file and the *.sql files?
Do I have to catch the error in each and every SQL file to produce a pre-defined output?
Might not be an option for you, but the Invoke-SqlCmd cmdlet has a parameter called "-ErrorVariable" to trap error messages.
You can use sqlcmd's exit code to determine whether there was an error or not.
$output = sqlcmd -S ServerName -i "MyAwesome.sql" -v parameter1="par1"
if ($LASTEXITCODE -ne 0) {
Write-Error $output
}
If the exit code is not 0 you will usually find the error message either in stdout or stderr. You can put both in the output variable like this:
$output = sqlcmd -S ServerName -i "MyAwesome.sql" -v parameter1="par1" 2>&1
If there was anything in stderr it will be combined with whatever is in stdout.
In your SQL script you will also want to implement best practice error handling techniques such as using try/catch constructs and using transactions where warranted.
http://msdn.microsoft.com/en-us/library/ms179296.aspx
If you catch an exception in your SQL script, print the error message and set the return code so you can handle the error in PowerShell.

Why isn't this command taking the diff of two directories?

I have been asked to diff two directories using Perl, but I think something is wrong with my command:
$diff = system("sudo diff -r '/Volumes/$vol1' '/Volumes/$vol2\\ 1/' >> $diff.txt");
It doesn't display any output. Can someone help me with this? Thanks!
It seems that you want to store all differences in a string.
If this is the case, the command in the question is not going to work for a few reasons:
It's hard to tell whether it's intended or not, but the $diff variable is being used to set the filename storing the differences. Perhaps this should be diff.txt, not $diff.txt
The result of the diff command is saved in $diff.txt, so it doesn't display anything on STDOUT. This can be remedied by omitting the >> $diff.txt part. If it also needs to be stored in a file, consider the tee command:
sudo diff -r dir1/ dir2/ | tee diff.txt
When a system call is assigned to a variable, it will return 0 upon success. To quote the documentation:
The return value is the exit status of the program as returned by the wait call.
This means that $diff won't store the differences, but the command exit status. A more sensible approach would be to use backticks. Doing this will allow $diff to store whatever is output to STDOUT by the command:
my $diff = `sudo diff -r dir1/ dir2/ | tee diff.txt`; # Not $diff.txt
Is it a must to use the sudo command? Avoid using it if even remotely possible:
my $diff = `diff -r dir1/ dir2/ | tee diff.txt`; # Not $diff.txt
A final recommendation
Let a good CPAN module take care of this task, as backtick calls can only go so far. Some have already been suggested here; it may be well worth a look.
Is sudo diff being prompted for a password?
If possible, take out the sudo from the invocation of diff, and run your script with sudo.
"It doesn't display and output." -- this is becuase you are saving the differences to a file, and then (presumably) not doing anything with that resulting file.
However, I expect "diff two directories using Perl" does not mean "use system() to do it in the shell and then capture the results". Have you considered doing this in the language itself? For example, see Text::Diff. For more nuanced control over what constitutes a "difference", you can simply read in each file and craft your own algorithm to perform the comparisons and compile the similarities and differences.
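For instance, a rough sketch of that Text::Diff route, walking the first tree with File::Find, might look like this (the directory names are placeholders for the question's /Volumes paths, and files that exist only in the second directory are not reported; a second pass in the other direction would cover those):

use strict;
use warnings;
use File::Find;
use File::Spec;
use Text::Diff;

# Placeholder directory names standing in for the question's /Volumes paths
my ($dir1, $dir2) = ('/Volumes/vol1', '/Volumes/vol2 1');

my $diff = '';
find(sub {
    return unless -f $_;                                  # compare plain files only
    my $rel   = File::Spec->abs2rel($File::Find::name, $dir1);
    my $other = File::Spec->catfile($dir2, $rel);
    if (-f $other) {
        # diff() returns an empty string when the two files are identical
        $diff .= diff($File::Find::name, $other, { STYLE => 'Unified' });
    }
    else {
        $diff .= "Only in $dir1: $rel\n";
    }
}, $dir1);

print $diff;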
You might want to check out Test::Differences for a more flexible diff implementation.