I have a Perl script which works fine from the shell but doesn't work from the web (lighttpd + mod_cgi). I found out that the problem is with the following line:
my $lastupdate = `/opt/mongo/bin/mongo 127.0.0.1:27117/getVersion -u test -p test --eval 'db.polling.find({},{"_id":0,"host":0,"ports":0}).sort({"date":-1}).limit(1).forEach(function(x){printjson(x)})' | awk -F'"' '/date/{print \$4}' |sed 's/T/,/;s/Z//'`;
As I understood it, when running from CGI the command string is not being split, so I did the splitting myself:
my $lastupdate = system('/opt/mongo/bin/mongo', '127.0.0.1:27117/getVersion', '-u', 'test', '-p', 'test', '--eval', 'db.polling.find({},{"_id":0,"host":0,"ports":0}).sort({"date":-1}).limit(1).forEach(function(x){printjson(x)})', '|', 'awk', '-F', '"', '/date/{print', '\$4}', '|sed', 's/T/,/;s/Z//');
The script runs now but gives me an unexpected value (it differs from the value I get when running from the shell).
What did I miss?
P.S. I know that there are smarter ways to interact with MongoDB from Perl, but my environment is totally firewalled. I have access neither to CPAN nor to the Red Hat repos, and the Perl MongoDB driver has too many dependencies to install manually.
The environment that a program gets when run from a shell is completely different from the environment the same program gets when run from a web server. Most obviously, it will be run as a different user, one who will have far more restricted filesystem permissions than the average user.
You can (partly) simulate this by working out which user your web server runs as (perhaps apache, www or nobody) and using sudo to run your program as that user. This might well reveal what the problem is.
Also, you can't just switch from backticks to system(). Backticks return the output of the command, while system() returns the command's exit status, which requires some interpretation. That is why you're seeing a different result.
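As a minimal sketch (with the mongo pipeline replaced by a placeholder command), the difference looks like this:

my $output = `/bin/date`;          # backticks capture the command's STDOUT
my $status = system('/bin/date');  # system() returns the exit status; $? >> 8 is the exit code
print "output: $output";
print "status: $status\n";

If you need the command's output under CGI, keep the backticks (or open a pipe) and set $ENV{PATH} and any other variables the command needs explicitly at the top of the script, since the web server's environment is much more limited than your interactive shell's.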
Is it possible to run a Perl script that is located on a remote server, on that server, from Windows? There is a job on the remote server that I want to run every time I do something on Windows.
You have to have something listening for an instruction to run the script, and then you have to send the instruction.
There are lots of approaches you could take to that, including:
Running an SSH server and then connecting to it from an ssh client on the windows machine
Running an HTTP server, running the script through FastCGI, and then requesting the URL for it from curl or a browser on the Windows machine
Writing a custom protocol, listening on a socket, and then writing a custom client that you run on the Windows machine (a minimal sketch of the listener follows this list)
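As a very rough, hypothetical sketch of that last option (the port number and script path are made up), the listener on the server could look something like this:

use strict;
use warnings;
use IO::Socket::INET;

# Listen on an arbitrary port (5555 is just an example).
my $server = IO::Socket::INET->new(
    LocalPort => 5555,
    Listen    => 5,
    Reuse     => 1,
) or die "Can't listen: $!";

while (my $client = $server->accept) {
    my $request = <$client>;
    chomp $request if defined $request;
    # Only run the one script we expect; never execute arbitrary input.
    if (defined $request and $request eq 'RUN') {
        print {$client} `/path/to/your/script.pl`;
    }
    close $client;
}

The Windows client can be anything that opens a TCP connection and sends "RUN". Note there is no authentication here at all, which is one reason the SSH approach is usually the better choice.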
Absolutely.
You can use plink to run commands on the server from Windows, assuming the server is running sshd.
plink user@a.domain.ext echo hi
This will print "hi\n" to the standard output.
Substitute /path/to/perl/script for echo above and substitute hi with any command line argument that the script needs.
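For example (the script path is hypothetical):

plink user@a.domain.ext /home/user/bin/process_job.pl /var/data/input.txt

The script runs on the server, and whatever it prints comes back on the Windows machine's standard output.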
plink is available here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
One cautionary personal note from doing this many times is that the environment in which the perl script will be run is much less complete than what you would experience when logging in via a full SSH session and running the command interactively. Many environment variables you would normally expect are unset.
For instance, using "set | wc -l" in the command above shows only 39 environment variables defined, while from an interactive SSH session there are 57. You have to make sure your Perl script isn't depending on an environment variable that hasn't been set. For instance, you may need to use full paths for any modules that it uses, or use the -I flag in the shebang line, because @INC may not be what you expect it to be.
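A minimal sketch of both workarounds (the library path is just an example):

#!/usr/bin/perl -I/home/user/perl5/lib/perl5

or, equivalently, inside the script:

use lib '/home/user/perl5/lib/perl5';   # prepend the directory to @INC
$ENV{PATH} = '/usr/bin:/bin';           # don't rely on the login shell's PATH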
I made a Perl script to change the owner of a file owned by some other user. The script is complete. My administrator saved it in the /sbin directory and set the setuid bit with chmod u+s name_of_script. But when I run the script it gives me an error saying the chown operation is not permitted. I made a C program following the same steps and it works. So my question is: if setuid were working for Perl, I should not get that error, because the C code did not give me any error. So can I setuid a Perl script, or should I go with the C code?
Please don't tell me to ask the administrator to change the owner each time. On the server I have a user named staging and I am hosting a Joomla site under it. When I install a plugin, the files related to that plugin end up owned by www-data, so I don't want to go to the admin every time. You can also suggest some other solution to my problem.
Many unix systems (probably most modern ones) ignore the suid bit on interpreter scripts, as it opens up too many security holes.
However, if you are using perl < 5.12.0, you can run Perl scripts with setuid set, and they will run as root. It works like this: when the normal perl interpreter runs and detects that the file you are trying to execute has the setuid bit set, it executes a program called suidperl. suidperl takes care of elevating the user's privileges and starting up the perl interpreter in a super-secure mode. suidperl is itself setuid root.
One of the consequences of this is that taint mode is turned on automatically. Other additional checks are also performed. You will probably see messages like:
Insecure $ENV{PATH} while running setuid at ./foobar.pl line 3.
perlsec provides some good information about securing such scripts.
suidperl is often not installed by default. You may have to install it via a separate package. If it is not installed then you get this message:
Can't do setuid (cannot exec sperl)
Having said all of that - you would be much better off using sudo to execute actions with elevated privileges. It is much more secure as you can specify exactly what is allowed to be executed via the sudoers file.
As of perl 5.12.0, suidperl was dropped. As a result, if you want to run a perl script on perl >= 5.12.0 with setuid set, you would have to write your own C wrapper. Again I recommend sudo as a better alternative.
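For the original chown problem, a sudo-based setup might look roughly like this (the helper script name and paths are invented, and the sudoers rule has to be added by the administrator via visudo):

# /etc/sudoers: let the staging user run one fixed command as root, no password
staging ALL=(root) NOPASSWD: /usr/local/sbin/fix-plugin-owner

# In the Perl code, call the helper through sudo:
system('sudo', '/usr/local/sbin/fix-plugin-owner') == 0
    or die "fix-plugin-owner failed: $?";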
No, you cannot use setuid aka chmod +s on scripts. The script's interpreter would be the thing that would actually need to be setuid, but doing that is a really bad idea. REALLY bad.
If you absolutely must have something written in Perl as setuid, the typical thing to do would be to make a small C wrapper that is setuid and executes the Perl script after starting. This gives you the best of both worlds in having a small and limited setuid script but still have a scripting language available to do the work.
If you have a sudo configuration that allows it (as most desktop linux distributions do for normal users), you can start your perl script with this line:
#!/usr/bin/env -S -i MYVAR=foo sudo --preserve-env perl -w -T
Then, in your script, before you use system() or backticks, explicitly set $ENV{PATH} (to de-taint it):
$ENV{PATH} = '/usr/bin';
Other environment variables that your script explicitly mentions, or that are implicitly used by perl itself, will have to be similarly de-tainted (see man perlsec).
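In addition to PATH, perlsec suggests clearing the variables most likely to cause trouble:

delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};   # make %ENV safer, as recommended in perlsec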
This will probably (again depending on your exact sudo configuration) get you to the point where you only have to type in your root password once (per terminal) to run the script.
To avoid having to type your password at all you can add a line like this to the bottom of /etc/sudoers:
myusername ALL=(ALL) NOPASSWD:ALL
Of course you'd want to be careful with this on a multi-user system.
The -S option to env splits the string into separate arguments (making it possible to use options and combinations of programs like sudo/perl with the shebang mechanism). You can use -vS instead to see what it's doing.
The -i option to env clears the environment entirely.
MYVAR=foo introduces an environment variable definition.
The --preserve-env option to sudo will preserve MYVAR and others.
sudo sets up a minimal environment for you when it finds e.g. PATH to be missing.
The -i option to env and the --preserve-env option to sudo may both be omitted; you'll then probably end up with a slightly more extensive list of variables from your original environment, including some X-related ones (presumably the ones the sudo configuration considers safe). --preserve-env without -i will pass along your entire unsanitized environment.
The -w and -T options to perl are generally advisable for scripts running as root.
How would I determine what script, program, or shell executed my Perl script?
Example: I might want to have human readable output if executed from shell (customized for each type of shell), a different type of output if called as a script from another perl script, and a machine readable format if executed from a program such as a continuous integration server.
Motivation: I have a tool that changes its output based on which shell executes it. I'd normally implement this behavior as an option to the script, but this tool's design doesn't allow for options. Other shells have environment variables that indicate what shell is running. I'm working on a patch to support Powershell, which has no such special variable.
Edit: Many of these answers happen to be Linux-specific. Unfortunately, PowerShell is for Windows. getppid, the $ENV{SHELL} variable, and shelling out to ps won't help in this case. This script needs to run cross-platform.
You can use getppid(). Take this snippet in child.pl:
my $ppid = getppid();
system("ps --no-headers $ppid");
If you run it from the command line, system will show bash or similar (among other things). Execute it with system("perl child.pl"); in another script, e.g. parent.pl, and you will see that perl parent.pl executed it.
To capture just the name of the process with arguments (thanks to ikegami for the correct ps syntax):
my $ppid = getppid();
my $ps = `ps --no-headers -o cmd $ppid`;
chomp $ps;
EDIT: An alternative to this approach might be to create soft links to your script, have the different contexts use different links to access it, and inspect $0 to build logic around that.
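A self-contained sketch of that idea (the link names and output are invented):

use strict;
use warnings;
use File::Basename;

# Suppose the script is installed as report.pl, with symlinks
# report-human and report-ci pointing at the same file.
my $name = basename($0);
if ($name eq 'report-ci') {
    print qq({"status":"ok"}\n);   # machine-readable output
}
elsif ($name eq 'report-human') {
    print "Everything is OK\n";    # human-readable output
}
else {
    print "Everything is OK\n";    # default
}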
I would suggest a different approach to accomplish your goal. Instead of guessing at the context, make it more explicit. Each use case is wholly separate, so have three different interfaces.
A function which can be called inside a Perl program. This would likely return a Perl data structure. This is far easier, faster and more reliable than parsing script output. It would also serve as the basis for the scripts.
A script which outputs for the current shell. It can look at $ENV{SHELL} to discover what shell is running. For bonus points, provide a switch to explicitly override.
A script which can be called inside a non-Perl program, such as your continuous integration server, and issue machine readable output. XML and/or JSON or whatever.
2 and 3 would be just thin wrappers to format the data coming out of 1.
Each is tailored to fit its specific need. Each will work without heuristics. Each will be far simpler than trying to guess the context and what the user wants.
If you can't separate 2 and 3, have the continuous integration server set an environment variable and look for it.
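A very rough sketch of that layout, with all names invented:

# MyTool.pm - interface 1: a function returning a Perl data structure
package MyTool;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = ('gather_results');

sub gather_results {
    return { status => 'ok', failures => 0 };   # placeholder data
}

1;

# tool-shell.pl - interface 2: human-readable, formatted according to $ENV{SHELL}
# tool-ci.pl    - interface 3: machine-readable (JSON/XML) for the CI server
# Both would simply call MyTool::gather_results() and format the result.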
Depending on your environment, you may be able to pick it up from the environment variables. Consider the following code:
/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);' | grep sh
On my Ubuntu system, it gets me:
'SHELL' => '/bin/bash',
So I guess that says I'm running perl from a bash shell. If you use something else, the SHELL variable may give you a hint.
But let's say you know you're in bash, but perl is run from a subshell. Then try:
/bin/sh -c "/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);'" | grep sh
You will find:
'_' => '/bin/sh',
'SHELL' => '/bin/bash',
So the shell is still bash, but bash has a variable $_ which also shows the absolute filename of the shell or script being executed, which may give a valuable hint. Similarly, for other environments there will most probably be clues left in Perl's %ENV hash that should give you valuable hints.
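Putting that together, a minimal (Unix-only) check might look like:

my $shell  = $ENV{SHELL} // '';
my $caller = $ENV{_}     // '';   # set by bash and some other shells
if ($shell =~ /bash$/) {
    print "probably started from bash (via $caller)\n";
}
elsif ($shell) {
    print "started from $shell\n";
}
else {
    print "no SHELL in the environment; probably not an interactive shell\n";
}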
If you're running PowerShell 2.0 or above (most likely), you can infer the shell as a parent process by examining the environment variable %psmodulepath%. By default, it points to the system modules under %windir%\system32\windowspowershell\v1.0\modules; this is what you would see if you examine the variable from cmd.exe.
However, when PowerShell starts up, it prepends the user's default module search path to this environment variable which looks like: %userprofile%\documents\windowspowershell\modules. This is inherited by child processes. So, your logic would be to test if %psmodulepath% starts with %userprofile% to detect powershell 2.0 or higher. This won't work in PowerShell 1.0 because it does not support modules.
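In Perl, that heuristic might look something like this (a sketch; it deliberately ignores PowerShell 1.0):

sub launched_from_powershell {
    my $modpath = $ENV{PSMODULEPATH} || $ENV{PSModulePath} || '';
    my $profile = $ENV{USERPROFILE}  || '';
    # PSModulePath starting with the user profile suggests PowerShell 2.0+
    return $profile ne '' && index(lc $modpath, lc $profile) == 0;
}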
This is on Windows XP with PowerShell v2.0, so take it with a grain of salt.
In a cmd.exe shell, I get:
PSModulePath=C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
whereas in the PowerShell console window, I get:
PSModulePath=E:\Home\user\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
where E:\Home\user is where my "My Documents" folder is. So, one heuristic may be to check if PSModulePath contains a user dependent path.
In addition, in a console window, I get:
!::=::\
in the environment. From the PowerShell ISE, I get:
!::=::\
!C:=C:\Documents and Settings\user
I have this command (example.sh) that works well on the Unix command line.
However, if I execute it from Perl using system() or backticks, it doesn't work.
I am guessing certain settings, like environment variables and other external sh files, weren't loaded.
Is there example code to ensure it will work?
Update on the execution failure (I have been trying different code):
push(@JOBSTORUN, "cd $a/$b/$c/$d; loadproject cats; sleep 60;");
...
my $pm = new Parallel::ForkManager(3);
foreach my $job (@JOBSTORUN) {
    $pm->start and next;
    print(`$job`);
    $pm->finish;
}
print "\n\n[DONE] FINISHED EXECUTING JOBS\n";
Output Messages:
sh: loadproject: command not found
Can you show us what you have tried so far? How are you running this program?
My first suspicion wouldn't be the environment if you are running it from a login shell. Any Perl script you start (well, any program, really) inherits the same environment. However, if you are running the program through cron, that's a different story.
The other mistake I usually make in these situations is specifying relative paths incorrectly. The paths are fine from the command line, but my Perl script has some other current working directory.
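A quick way to check both suspicions from inside the Perl script (purely for debugging):

use Cwd;
print "cwd : ", getcwd(), "\n";
print "PATH: $ENV{PATH}\n";

If loadproject lives somewhere non-standard, either call it by its full path or extend PATH before running the job, for example $ENV{PATH} .= ':/opt/tools/bin'; (the directory is hypothetical).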
For general advice, see Interact with the system when Perl isn't enough. There's also a chapter in Learning Perl about this.
That's about the best advice you can hope for given the very limited information you've shared with us.
I have a script on a remote host which I run as ./test /a/b/c/f, and it runs perfectly fine on that machine.
Now, on my host machine, I run the same script as ssh root@dst "./test /a/b/c/f" and this too runs fine.
But when I execute it from my Perl script using backticks, as
$file = "/a/b/c/f";
`ssh root\@dst "./test $file"`;
or
system("ssh root\#dst \"./test $file\" ");
it says bash: ./test: No such file or directory.
I tried escaping $file with a single \ and with \\, but even that does not work. Any idea how to solve this?
Thanks.
Have you tried using an absolute path instead of one based on ./ ? It'll probably solve this problem, and it's safer in general (especially when connecting as root) than depending on whatever sets the cwd (probably bash based on history) to set it the same way every time.
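For example (the absolute path is an assumption; use wherever test actually lives), the list form of system() also avoids most of the quoting problems:

my $file = "/a/b/c/f";
system('ssh', 'root@dst', "/root/test $file") == 0
    or warn "remote command failed: $?";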