Redirecting echo with undefined variable - Solaris

I'm running a script on Solaris 11 and getting different results depending on the shell used.
The script has an echo redirecting to a file given by an environment variable:
echo "STARTING EXEC" >> $FILE
P.S. EXEC is just part of the message the script shows; it's not using the exec command.
If I execute the script without the variable FILE defined (using /usr/bin/ksh), I get:
./start.sh[10]: : cannot open
and the script continues execution.
The flags for ksh are:
echo $-
imsuBGEl
But if I change to /usr/xpg4/bin/sh, the script prints the echoed message to stdout and no error is shown.
The flags for xpg4 sh are:
echo $-
imsu
I tried to change the flags with set +/- (I can't remove the E and l flags, but B and G are removed fine), but I can't reproduce that behavior.
Is there anything I can do to get the same result using ksh, without the "cannot open" error?
/usr/bin/ksh --version
version sh (AT&T Research) 93u 2011-02-08
I want the script to keep going and show the message on stdout, instead of printing the error as it does now.
As shellter said in the comments, the right thing to do is to check whether the FILE variable is defined before using it (see the sketch below). This is a script migration from an HP-UX to a Solaris environment, and the client expects the same results as before (we unset the FILE variable before execution to test it).
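A minimal guard could look like this (falling back to plain stdout is an assumption about the desired behavior):
if [ -n "${FILE:-}" ]; then
    echo "STARTING EXEC" >> "$FILE"
else
    echo "STARTING EXEC"
fi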

You are likely running Solaris 11, not Solaris 64.
Should you want your scripts to work under Solaris 11 without having to hunt down every bogus redirection, you can simply replace all shebangs (first line) with #!/usr/xpg4/bin/sh.
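One possible way to do that across many scripts (the directory name and the .sh suffix are hypothetical; -i.bak keeps backups so you can verify the result first):
perl -pi.bak -e 's{^#!/usr/bin/ksh\b}{#!/usr/xpg4/bin/sh}' /path/to/scripts/*.sh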

The final solution we are going to take is to install the ksh88 package and use it as the default shell (/usr/sunos/bin/ksh). This shell has the same behavior the client had before, and we can leave the scripts unmodified.
The ksh shipped with Solaris 11 is ksh93 (http://docs.oracle.com/cd/E23824_01/html/E24456/userenv-1.html#shell-1).
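For reference, once the ksh88 package is installed, switching a user's login shell to it (the account name here is hypothetical) can be done with:
usermod -s /usr/sunos/bin/ksh appuser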
Thanks @jlliagre and @shellter for your help.

Related

Perl backtick ignores everything past the first space

I have a command
my $output = `somecommand parm1 parm2`;
When I try to run this Perl script I get the message
Can't exec "somecommand" at .....
It seems it is not seeing anything past the first space in between the backticks. I have a friend who runs this in a different environment and it runs fine.
What could I have in my environment that would cause this? I am running Perl v5.20 but so is my friend.
Perl isn't ignoring the command parameters; the error message just mentions the part of the command that it has a problem with -- it can't find somecommand.
Whatever your somecommand is, it's not a shell built-in and it's not in a directory listed in your PATH variable.
Change PATH to add its location to the end and it will work for you. You can do that system-wide, or you can modify it temporarily in your Perl code by manipulating $ENV{PATH} before you run the command.
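For example (the /opt/sometool/bin location is hypothetical, standing in for wherever somecommand actually lives), you could extend PATH in the shell before starting the script:
export PATH="$PATH:/opt/sometool/bin"
or temporarily inside the Perl code itself:
$ENV{PATH} .= ':/opt/sometool/bin';   # affects this process and its child processes only
my $output = `somecommand parm1 parm2`;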

System call to run pbmtextps: ghostscript

I'm coding a Perl script to generate images with text in them. I'm on a Linux machine. I'm using pbmtextps. When I try to run pbmtextps in Perl with a system call like this
system("pbmtextps -fontsize 24 SampleText > out.pbm");
I get this error message
pbmtextps: failed to run Ghostscript process: rc=-1
However, if I run the exact same pbmtextps command from the command-line outside of Perl, it runs with no errors.
Why does it cause the Ghostscript error when I run it from inside a Perl script?
ADDITIONAL INFO: I tried to hack around this by writing a C program called mypbmtextps.c which does the exact same thing with a C system() call. That works from the command line with no errors. But when I call that C program from the Perl script, I get the same Ghostscript error.
ANSWER: I solved it. The problem was this line in the Perl script:
$SIG{CHLD} = 'IGNORE';
When I got rid of that (which I need for other things, but not in this script) it worked okay. If anyone knows why that would cause a problem, please add that explanation.
Ah-ha. Well, SIGCHLD is required for wait(), and so required for Perl to be able to retrieve the exit status of the child process created by system(). In particular, system() always returns -1 when SIGCHLD is ignored. $? will also be unavailable with SIGCHLD ignored.
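If the IGNORE handler is needed elsewhere in the script, one workaround (a sketch, not tested against the original script) is to restore the default handler just for the scope that calls system():
{
    local $SIG{CHLD} = 'DEFAULT';   # restore child reaping so system() can wait()
    system("pbmtextps -fontsize 24 SampleText > out.pbm");
}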
What printed the error message? pbmtextps, or your Perl script?
As far as I know, the signal handler for your Perl process shouldn't affect the signal handlers of its child processes, but this could depend on your version of Perl and your OS version. On my Linux Mint 13 with Perl 5.14.2 the inner Perl script prints 0, with the outer script printing -1:
perl -e '$SIG{CHLD}= "IGNORE"; print system(q{perl -e "print system(q(/bin/sleep 1))"})'
Is your Perl script modifying the environment?
You can test with
system("env > /tmp/env.perl");
and then compare it to the environment from your shell:
env > /tmp/env.shell
diff /tmp/env.shell /tmp/env.perl
Is the Perl script also being run from the shell, or is it being run from some other process like cron or Apache? (In particular, you should check $PATH.)

can't find perl error log

I have a Perl file (e.g. test.pl) which does some DB operations.
While testing, it's working fine.
I execute this file as a background process by using the command
perl test.pl &
It works properly for some days.
But after some days, the execution stops.
How can I find the reason or view the error?
I checked the log file "/var/log/httpd/error_log", but can't find anything.
I keep the Perl file on a server which runs CentOS.
Anyone have an idea?
There is no "Perl error log".
You can, however, define a destination for the output to be saved to; just run your script like this:
perl test.pl >> /var/log/some-log-file.log 2>&1 &
This will redirect STDOUT (normal shell output) and STDERR (error output) to /var/log/some-log-file.log instead of to the terminal.
You may also wish to use nohup in order to have the script ignore HUP (hangup, sent at logout) signals, which could be causing your unexpected terminations:
nohup perl test.pl >> /var/log/some-log-file.log 2>&1 &
Obviously, whichever user you run the script as will need to have write access to the log file.
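If you'd rather keep normal output and errors apart, the same idea works with two files (the log file names are hypothetical):
nohup perl test.pl >> /var/log/test.out.log 2>> /var/log/test.err.log &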

Change shell within Perl script

I am looking for a nice way to get the following done:
So I have a Python script that I need to run on Unix, called from a Perl script that was, in turn, called from my Excel VBA macro on Windows using Plink. The Python script, due to dependency issues, has to run in either csh or bash, and I will need to use export/setenv to add a few libraries before running it. However, by default, Perl runs commands in the sh shell, and as such there is no way I can add in all the dependencies and have the Python script run.
So, I am just wondering if there is EITHER: 1. a way for me to add the dependencies to the sh shell in the Perl script, OR 2. a way to force my Perl script to use csh (preferred, since for some reason .bashrc for the account runs into permission issues).
Thanks a lot!
How about "3. Set the appropriate environment variable in the Perl or Python scripts"?
$ENV{'PATH'} = ...
...
os.environ['PATH'] = os.pathsep.join(newpaths + os.environ['PATH'].split(os.pathsep))
(I don't know how to get the path separator in Perl, sorry.)
To force the shell to csh, try the following in Perl:
`/bin/csh -c "command_name"`;
Edit:
You can set an environment variable like this; try it:
$s = `/bin/bash -c 'VAR_FOO=753; echo \$VAR_FOO'`;
print $s;
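Since you prefer csh, the equivalent there (a sketch; setenv is csh's counterpart to export) would be:
$s = `/bin/csh -c 'setenv VAR_FOO 753; echo \$VAR_FOO'`;
print $s;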
I ended up just changing the .cshrc script; apparently the addition to PATH, for some reason, did not work for me. After that, everything ran smoothly by putting it all on one line,
so basically it looks something like this:
/path/to/.cshrc && /python/path/to/python
Hope that helps!

csh script inherit environment variables?

I found an odd problem when I run a simple csh script on Solaris.
#!/bin/csh
echo $LD_LIBRARY_PATH
Let's call this script test. When I run this:
shell> echo $LD_LIBRARY_PATH
shell> /usr/lib:/usr/openwin/lib:/usr/dt/lib:/usr/local/lib:/lib:/my_app/lib
shell> ./test
shell> /usr/lib:/usr/openwin/lib:/usr/dt/lib:/usr/local/lib:/lib
They print out totally different values for $LD_LIBRARY_PATH. I can't figure out why. (It's OK on my Linux machine.)
Thanks!
Do you set $LD_LIBRARY_PATH in your $HOME/.cshrc?
You really shouldn't if you do, since it often just breaks software, but changing the first line of the script to #!/bin/csh -f will cause your script to not read .cshrc files at the start, protecting you from other users who made that mistake.
If your interactive shell is in the sh/ksh family, you might have set LD_LIBRARY_PATH with a plain assignment but not exported it. In that case its new value will be set like a normal variable, but not exported into the environment. But it's more likely that your script is reinitializing the variable.
You can use the "env" command to dump out the exported environment from the interactive shell to check this.
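For example, in an sh-family shell (reusing the /my_app/lib path from the question):
LD_LIBRARY_PATH=/my_app/lib:$LD_LIBRARY_PATH   # set, but not yet exported
export LD_LIBRARY_PATH                         # now inherited by child processes
env | grep LD_LIBRARY_PATH                     # verify it appears in the environment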