Perl script failing to open file for writing on first run but succeeding on second run - perl

This script fails on the first run but succeeds the second time it is run with the same output_dir argument.
$output_dir is an argument passed in by the user, such as "/home/user/mydir".
Failing line:
open(StepOne, ">$output_dir/Step_One_Create_Resources.sh");
OS is Ubuntu 12.04
This seems like a permissions issue but I'm running the script as root.

Yes, it may well be that $output_dir did not exist yet when you called open(StepOne, ">$output_dir/Step_One_Create_Resources.sh") for the first time. So I strongly advise you to wait until the directory has been created:
# Busy-wait until the output directory actually exists.
do {
} while ( !-e $output_dir );
open(StepOne, ">$output_dir/Step_One_Create_Resources.sh") or die $!;
This makes sure the file is opened only after $output_dir has actually been created.

I found out what was causing this issue. When a directory that doesn't exist is passed in as the output directory, the script creates the directory but then fails to open a file at that location. The script runs fine when the output directory already exists.
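For anyone hitting the same thing, here is a minimal sketch of the create-then-open sequence with explicit error checking, so the real failure is reported instead of silently ignored (make_path is from the core File::Path module; the file name comes from the question, and the printed shebang line is only illustrative):

#!/usr/bin/perl
use strict;
use warnings;
use File::Path qw(make_path);

my $output_dir = shift @ARGV or die "Usage: $0 OUTPUT_DIR\n";

# Create the directory (and any missing parent directories) if needed,
# and stop with a clear message if that did not work.
make_path($output_dir) unless -d $output_dir;
-d $output_dir or die "Could not create $output_dir: $!";

# Three-argument open with a lexical filehandle, reporting the OS error.
my $path = "$output_dir/Step_One_Create_Resources.sh";
open(my $step_one, '>', $path) or die "Cannot open $path: $!";
print $step_one "#!/bin/bash\n";    # illustrative content only
close $step_one or die "Cannot close $path: $!";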

Related

Why does Windows Powershell and Command Prompt attempt to open conda.exe from a temp folder on startup?

Whenever I open Windows PowerShell I see an error message (screenshot of the PowerShell error). It seems that every time I open PowerShell it attempts to run conda.exe from a TEMP folder that no longer exists on my machine. Furthermore, when I open Command Prompt I see a similar error, so I'm guessing it's doing the same thing there.
I've checked my user and system path variables and there is no mention of the temp path that is listed in the powershell error. Any help with this would be greatly appreciated!
When such a thing happens, it's most likely because of your PowerShell $Profile, which is always loaded on startup.
Check whether the file exists and what's inside it. If you still have issues after that, let us know.

fopen doesn't work if script is run by shell_exec

I have a strange issue on my LAMP installation. I have a PHP script with a function that saves a log to a file using fopen().
If I run the script via shell_exec($scriptfilewithpath), nothing happens (it may give some errors, but I am not sure how to capture them from shell_exec).
If I run the script from the SSH console (sudo php /opt/bitnami/apache2/htdocs/script.php) I get "PHP Warning: fopen(/opt/bitnami/apache2/htdocs/log.log): failed to open stream: No such file or directory in /opt/bitnami/apache2/htdocs/script.php on line 18".
The root and bitnami users (used to install and set up LAMP) have full access to the script folder.
Any ideas where I am going wrong?
Thanks

Perl Strawberry: How to run a perl script by clicking

I'm working on Windows 7 and I have installed Strawberry Perl. I would like to run a Perl script (test.pl):
open OUTPUT, ">test.txt";
print OUTPUT "This is a test\n";
by just clicking on the file, or by pointing it at the Perl program via Open with / perl.exe. When I do this, a console opens for less than a second and disappears, but the file test.txt is not created. If I go to the Windows command prompt and enter
C:\myplace> perl test.pl
it works. I have never had this experience before (WinXP, another Windows 7 PC with ActivePerl, and Windows 8 with Strawberry). I would be very happy if somebody could give me a hint on how to solve this problem.
There are two problems here:
Creating the file where you want it. When double-clicking a perl script to launch it, it is not executed in the context of the folder you have opened in Explorer. If you want to specify an explicit context, do the following near the top of your script:
use FindBin; # This is a module that finds the location of your script
BEGIN { chdir $FindBin::Bin } # set context to that directory.
When you then create a new file without an absolute path, the path is considered relative to that directory.
You do not have the problem when running the script from the command line, because you have specified the correct path. But if you run it from C:\ like
C:\> perl myplace/test.pl
then you have created the file at C:\test.txt. The FindBin solution fixes this.
When running a script by double-clicking it, the console window closes before you can inspect the output. This “problem” is shared by all programming languages on Windows. You can force the window to stay open by waiting for some input. You can either do
system("PAUSE"); # not portable to non-Windows!
or
warn "Press enter to exit...\n";
<STDIN>; # Read a line from the command line and discard it.
# Feels awkward when launching from the command line
to wait until Enter ⏎ is pressed.
The other solution is to always use the command line for your scripts, which is what I'd actually suggest.
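Putting both points together, test.pl might look something like the following sketch (the file name and the prompt text are the ones used above; whether you keep the pause is a matter of taste):

#!/usr/bin/perl
use strict;
use warnings;
use FindBin;                    # finds the directory this script lives in

BEGIN { chdir $FindBin::Bin }   # make relative paths resolve next to the script

# Create test.txt next to test.pl, no matter how the script was launched.
open my $output, '>', 'test.txt' or die "Cannot open test.txt: $!";
print $output "This is a test\n";
close $output;

# Keep the console window open when launched by double-clicking.
print "Press enter to exit...\n";
<STDIN>;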
Check what your script's working directory is, as it might differ from C:\myplace:
use Cwd;
print getcwd();   # print the current working directory
sleep 7;          # keep the console window open long enough to read it

Standard for feeding test data to a Nagios plugin?

I'm developing a Nagios plugin in Perl (no Nagios::Plugin, just plain Perl). The error condition I'm checking for normally comes from the output of a command called inside the plugin. However, it would be very inconvenient to create the error condition for real, so I'm looking for a way to feed test output to the plugin to see whether it works correctly.
The easiest way I have found so far would be a command line option to optionally read input from a file instead of calling the command:
if ($opt_f) {
    open(FILE, '<', $opt_f) or die "Cannot open $opt_f: $!";
    @output = <FILE>;
    close FILE;
}
else {
    @output = `my_command`;
}
Are there other, better ways to do this?
Build a command line switch into your plugin: if -t is set on the command line, use your test command at /path/to/test/command, otherwise run the 'production' command at /path/to/production/command.
The default action is production; test mode runs only if the switch indicating test mode is present.
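A minimal sketch of such a switch using the core Getopt::Std module (the two command paths are the placeholders from above):

use strict;
use warnings;
use Getopt::Std;

my %opts;
getopts('t', \%opts);   # -t selects test mode

my $command = $opts{t}
    ? '/path/to/test/command'         # canned output for testing
    : '/path/to/production/command';  # the real check

my @output = `$command`;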
Or you could have a test version of the command that returns various statuses for you to test (via a command line argument, perhaps).
You put the test version of my_command in some test directory (/my/nagios/tests/bin).
Then you manipulate the PATH environment variable on the command line that runs the test.
$ env PATH=/my/nagios/tests/bin:$PATH nagios_plugin.pl
The change to $PATH will only last for as long as that one command executes. The change is localized to the subshell that is spawned to run the plugin.
The backticks used to execute the command will cause the shell to use the PATH to locate the command, and that will be the test version of the command, which lives in the directory that is now first on the search path.
Let me know if I wasn't clear.
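The same effect can also be had from inside a Perl test harness by overriding PATH only for the duration of the call; this is an alternative sketch rather than something the answer above spells out, reusing the directory name it mentions:

use strict;
use warnings;

{
    # Put the test bin directory first on PATH, only inside this block.
    local $ENV{PATH} = "/my/nagios/tests/bin:$ENV{PATH}";

    # The backticks resolve my_command via PATH, so the test version is found.
    my @output = `my_command`;
    print @output;
}
# Outside the block, PATH is back to its original value.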

ssh problem - no such file or directory

I have a script on a remote host which I run as ./test /a/b/c/f, and it runs perfectly fine on that machine.
Now, from my local machine, I run the same script as ssh root@dst "./test /a/b/c/f" and this too runs fine.
But when I execute it from my Perl script using backticks as
$file = "/a/b/c/f";
`ssh root\@dst "./test $file"`;
or
system("ssh root\#dst \"./test $file\" ");
it says bash: ./test: No such file or directory.
I tried escaping $file with a single \ and with \\; even that does not work. Any idea how to solve this?
Thanks.
Have you tried using an absolute path instead of one based on ./? It will probably solve this problem, and it's safer in general (especially when connecting as root) than depending on whatever sets the cwd (probably bash, based on history) to set it the same way every time.
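For example, a sketch along those lines, using an absolute path to the remote script and the list form of system so the command does not go through an extra layer of local shell quoting (/root/test is only a guess at where the script actually lives):

use strict;
use warnings;

my $file = "/a/b/c/f";

# List form: no local shell is involved, so @ and the inner quotes need no escaping.
# /root/test is an assumed absolute path to the remote script.
system('ssh', 'root@dst', "/root/test $file") == 0
    or die "ssh failed: $?";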