Perl script to run as root (generalized)

I would like to be able to run certain Perl scripts on my system as root, even though the "user" calling them is not running as root.
For each script I could write a C wrapper and set it setuid root; the wrapper would change the UID to 0 and then call the Perl script, which itself would not have the setuid bit set. This avoids the unfortunate impediments encountered when trying to run setuid root scripts directly.
But I don't want to write a C wrapper for each script. I just want one C wrapper to do the job for the whole system. I also don't want just any script to be able to use this C wrapper; the C wrapper itself should be able to check some specific characteristic of the Perl script to see whether changing the UID to root is acceptable.
I don't see any other Stack Overflow question yet which addresses this issue.
I know the risks, I own the system, and I don't want something arbitrarily babysitting me by standing in my way.

What you are trying to do is very hard, even for experts. The setuid wrapper that used to come with perl no longer exists for that reason, and because it's no longer needed these days. Linux and, I presume, other modern Unix systems support setuid scripts, so you don't need highly fragile and complex wrappers.
If you really need a wrapper, don't re-invent the wheel; just use sudo!

So: use a single wrapper, take the Perl script to execute as an argument, and have the C wrapper compare the length of the script and a SHA-2 or SHA-3 hash of the script's contents against expected values.
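For illustration, the expected values the wrapper compares against could be generated with Perl's core Digest::SHA module. A minimal sketch, assuming a hypothetical script path:

use strict;
use warnings;
use Digest::SHA;

my $path = "/usr/local/bin/somescript.pl";    # hypothetical path
my $sha  = Digest::SHA->new(256);
$sha->addfile($path);
printf "%d bytes, SHA-256 %s\n", -s $path, $sha->hexdigest;

The wrapper would then refuse to raise privileges unless both the size and the digest of the script it is asked to run match these recorded values.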

After some brainstorming:
All the requirements can be met through the following steps. First come the steps that are done only once.
Step one. Since each script which should be run as root has to be marked by user root somehow (otherwise, just any user could do this), the system administrator takes advantage of his privileged status to choose a user ID which will never actually be assigned to anyone. In this example, we use user ID 9999.
Step two. Compile a particular C wrapper and install the resulting binary setuid root. The source code for that wrapper can be found here.
Then, the following two steps are done once for each Perl script to be run as root.
Step one. Begin each Perl script with the following code.
if ($>)
{
    exec { "/path-to-wrapper-program" }
        ( "/path-to-wrapper-program", $0, @ARGV );
}
Step two. As root (obviously), change the owner of the Perl script to user 9999. That's it. No updating of databases or text files. All requirements for running a Perl script as root reside with the script itself.
Comment on step one: I actually place the above Perl snippet after these lines:
use strict;
use warnings FATAL=>"all";
... but suit yourself.
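Putting the pieces together, the top of each privileged script might look like this (a sketch; the wrapper path is the placeholder used above):

#!/usr/bin/perl
use strict;
use warnings FATAL => "all";

# Not yet running as root: re-exec ourselves through the setuid wrapper.
if ($>)
{
    exec { "/path-to-wrapper-program" }
        ( "/path-to-wrapper-program", $0, @ARGV )
        or die "could not exec wrapper: $!";
}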


How can I have one perl script call another perl script and get the return results?
I have perl Script B, which does a lot of database work, prints out nothing, and simply exits with a 0 or a 3.
So I would like Perl Script A to call Script B and get its results. But when I call:
my $result = system("perl importOrig.pl filename=$filename");
or
my $result = system("/usr/bin/perl /var/www/cgi-bin/importOrig.pl filename=$filename");
I get back a -1, and Script B is never called.
I have debugged Script B, and when called manually there are no glitches.
So obviously I am making an error in my call above, and I'm not sure what it is.
There are many things to consider.
Zeroth, there are the perlipc docs for InterProcess Communication. What's the value of the error variable $!?
First, use $^X, which is the path to the perl you are executing. Since subprocesses inherit your environment, you want to use the same perl so it doesn't confuse itself with PERL5LIB and so on.
system("$^X /var/www/cgi-bin/importOrig.pl filename=$filename")
Second, CGI programs tend to expect particular environment variables to be set, such as REQUEST_METHOD. Calling them as normal command-line programs often leaves out those things. Try running the program from the command line to see how it complains. Check that it gets the environment it wants. You might also check the permissions of the program to see if you (or whatever user runs the calling program) are allowed to read it (or its directory, etc). You say there are no glitches, so maybe that's not your particular problem. But, do the two environments match in all the ways they should?
Third, consider making the second program a modulino. You could run it normally as a script from the command line, but you could also load it as a Perl library and use its features directly. This obviates all the IPC stuff. You could even fork so that stuff runs concurrently.

What is role of effective and real user or group id in perl language?

In Perl, when talking about special cases of file handling:
The file tests -r, -w, and -x check read, write, and execute permission for the effective user.
The file tests -R, -W, and -X check read, write, and execute permission for the real user.
Where do the concepts of real user and effective user come into play in Perl?
This isn't a perl concept, it's a Unix concept. It's to do with setuid processes - and in particular, permissions when operating as root.
If a process starts as you, then its real UID remains yours. But if it setuids to another user (it doesn't have to be root), it'll have a different permission set.
The tests above allow you to differentiate between the two cases - could I as a normal user edit this file, and can I as a privileged user do so?
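To make that concrete, a minimal sketch; in a setuid-root program the two tests can disagree (/etc/shadow is just an illustrative root-only file):

my $file = "/etc/shadow";
print "effective user can read it: ", ( -r $file ? "yes" : "no" ), "\n";
print "real user can read it: ",      ( -R $file ? "yes" : "no" ), "\n";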
As far as my experience with Perl and Ubuntu goes:
Normally it is not possible to use the setuid file bit with Perl scripts (see Can i setuid for perl script? for details).
When a Perl script is started, both $< (the real user ID) and $> (the effective user ID) are set to the user who started the script.
If this initial user is root (i.e. $< and $> are both 0), you can change the effective user ID by assigning to $>, and you can also change it back later. If the user was not root initially, you will get a permissions error when trying to change the effective user ID.
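A minimal sketch of that dance, assuming the script was started by root (uid 1000 is just an example of an unprivileged account):

print "real uid: $<, effective uid: $>\n";

$> = 1000;    # drop effective privileges to an ordinary user
die "could not drop privileges: $!" if $> != 1000;
# ... do unprivileged work; -r/-w/-x now answer for uid 1000 ...
$> = 0;       # restore root privileges (possible because $< is still 0)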
I didn't find any way to grant the "change effective user" permission to a user other than root.
What I wrote above applies to a standard Perl installation on Ubuntu. If you tweak it somehow, e.g. by setting the setuid bit on the Perl interpreter (not on the script), then you can have different $< and $> values after the script starts, but normally you don't want to do that.
With Perl on Windows:
When trying to modify $>, I always get a "not implemented" exception.

Executing system commands safely while coding in Perl

Should one really use external commands while coding in Perl? I see several disadvantages: it isn't system-independent, and there may be security risks as well. What do you think? If there is no way around it and you have to use shell commands from Perl, what is the safest way to execute that particular command (like checking the PID, UID, etc.)?
It depends on how hard it is going to be to replicate the functionality in Perl. If I needed to run the m4 macro processor on something, I'd not think of trying to replicate that functionality in Perl myself, and since there's no module on http://search.cpan.org/ that looks suitable, it would appear others agree with me. In that case, then, using the external program is sensible. On the other hand, if I needed to read the contents of a directory, then the combination of readdir() et al plus stat() or lstat() inside Perl is more sensible than futzing with the output of ls.
If you need to execute commands, think very carefully about how you invoke them. In particular, you probably want to avoid the shell interpreting the arguments, so use the array form of system (see also exec), etc, rather than a single string for the command plus arguments (which means the shell is used to process the command line).
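A minimal sketch of the difference (the grep invocation and inputs are purely illustrative):

my ( $pattern, $file ) = ( "root", "/etc/passwd" );    # example inputs

# String form: the shell parses this line, so metacharacters in
# $pattern or $file get interpreted - and can be exploited.
system("/usr/bin/grep $pattern $file");

# List form: no shell involved; the arguments reach grep exactly as given.
system( "/usr/bin/grep", $pattern, $file ) == 0
    or warn "grep exited with status ", $? >> 8;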
Executing external commands can be expensive simply because it involves forking a new process and watching its output if you need it.
Probably more importantly, should the external process fail for any reason, it may be difficult for your script to understand what happened. Worse still, an external process can surprisingly often get stuck forever, and then so will your script. You can use special tricks, like opening a pipe and watching for output in a loop, but this is itself error-prone.
Perl is very capable of doing many things. So, if you stick to using only native Perl constructs and modules to accomplish your tasks, not only will it be faster because you never fork, it will also be more reliable and easier to catch errors in, by inspecting the native Perl objects and structures returned by library routines. And of course, it will be automatically portable to different platforms.
If your script runs under elevated permissions (as root or under sudo), you should be very careful about which external programs you execute. One simple way to ensure basic security is to always specify commands by full path, like /usr/bin/grep (but still think twice, and just do the grep in Perl itself!). However, even this may not be enough if an attacker uses the LD_PRELOAD mechanism to inject rogue shared libraries.
If you want to be very secure, it is suggested to enable taint checking with the -T flag, like this:
#!/usr/bin/perl -T
The taint flag is also enabled automatically by Perl if your script is detected to have different real and effective user or group IDs.
Taint mode will severely limit your ability to do many things (like calling system()) without Perl complaining - see more at http://perldoc.perl.org/perlsec.html#Taint-mode - but it will give you much higher confidence in your security.
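A minimal sketch of what taint mode demands in practice (the filename pattern and the grep call are illustrative):

#!/usr/bin/perl -T
use strict;
use warnings;

# Anything from outside the program is tainted; launder it through a
# regex capture before it may reach system(), open(), and friends.
defined $ARGV[0] or die "usage: $0 filename\n";
my ($filename) = $ARGV[0] =~ /\A([\w.\-]+)\z/
    or die "unsafe filename\n";

# Under -T the environment must be sanitized before running commands.
$ENV{PATH} = "/usr/bin:/bin";
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};

system( "/usr/bin/grep", "pattern", $filename ) == 0
    or die "grep failed with status ", $? >> 8;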
Should one really use external commands while coding in Perl?
There's no single answer to this question. It all depends on what you are doing within the wide range of potential uses of Perl.
Are you using Perl as a glorified shell script on your local machine, or just trying to find a quick-and-dirty solution to your problem? In that case, it makes a lot of sense to run system commands if that is the easiest way to accomplish your task. Security and speed are not that important; what matters is the ability to code quickly.
On the other hand, are you writing a production program? In that case, you want secure, portable, efficient code. It is often preferable to write the functionality in Perl (or use a module), rather than calling an external program. At least, you should think hard about the benefits and drawbacks.

Is Perl unit-testing only for modules, not programs?

The docs I find around the ’net and the book I have, Perl Testing, either say or suggest that unit-testing for Perl is usually done when creating modules.
Is this true? Is there no way to unit-test actual programs using Test::More and cousins?
Of course you can test scripts using Test::More. It's just harder, because most scripts would need to be run as a separate process from which you capture the output, and then test it against expected output.
This is why modulinos (see chapter 17 in: brian d foy, Mastering Perl, second edition, O'Reilly, 2014) were developed. A modulino is a script that can also be used as a module. This makes it easier to test, as you can load the modulino into your test script and then test its functions like you would a regular module.
The key feature of a modulino is this:
#!/usr/bin/perl
package App::MyName;    # put it in a package

run() unless caller;    # run the program unless loaded as a module

sub run {
    ...                 # your program here
}
The function doesn't have to be called run; you could use main if you're a C programmer. You'd also normally have additional subroutines that run calls as needed.
Then your test scripts can use require "path/to/script" to load your modulino and exercise its functions. Since many scripts involve writing output, and it's often easier to print as you go instead of doing print sub_that_returns_big_string(), you may find Test::Output useful.
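A minimal test script along those lines - a sketch, using a hypothetical path and the package from above; note that for require to succeed, the modulino file must end with a true value, such as 1;:

use strict;
use warnings;
use Test::More;

require "./bin/myname";    # loads App::MyName; run() is not called

ok( defined &App::MyName::run, "run() is defined" );
# ... exercise the modulino's other subroutines here ...

done_testing();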
It's not the easiest way to test your code, but you can test the script directly. You can run the script with specific parameters using system (or better, IPC::Run3), capture the output, and compare it with the expected result.
But this is a top-level test. It will be hard to tell which part of your code caused a problem.
Unit tests are used to test individual modules. That makes it easier to see where a problem came from. Testing functions individually is also much easier, because you only need to think about what can happen in a smaller piece of code.
How you test depends on your project's size. You can, of course, put everything into a single file, but putting your code into a module (or even splitting it across several modules) will pay off in the future: the code is easier to reuse and to test.

Is there an interpreter for sending commands to Selenium?

I'm a Perl programmer doing some web application testing using Selenium. I was wondering if there's some kind of interactive interpreter which would allow me to type Selenium commands at a prompt and have them sent to selenium.
The way I'm currently developing code is to type all of the commands into a Perl script, and then execute the script. This makes the development process painfully slow, because it forces me to rerun the entire script whenever I make a change.
(If I were a Ruby programmer, I bet the Interactive Ruby Shell could help with this, but I'm hoping to find a solution for Perl.) Thanks!
The Java-based SeleniumServer has an --interactive mode.
I don't know about a Selenium shell, but if you are looking for a Perl REPL, there are modules such as Devel::REPL and Carp::REPL. I've made shells for various simple things using my Polyglot module, although I haven't looked at that in a while.
May I ask why it's "necessary to re-run the entire script"? Since your proposed solution is an interactive shell, I'm assuming that it's NOT because you need previous commands to set something up. If that's a correct assumption, simply structure your set of Selenium commands in the Perl script in such a way that you can skip the first N commands.
You can do it with explicit "if" wrappers.
Or you can have a generic "call a Selenium command based on a config hash" driver executed in a loop, adding a config hash for every single Selenium command to an array of hashes, as sketched below.
Then you can either have an input parameter to the test script that refers to an index in that array, or you can even include a unique label for each test as part of its config hash and pass "start with test named X" or even "only do test named X" on the command line.
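A minimal sketch of that driver idea (everything here is illustrative - the labels, commands, and connection details are assumptions, not a prescribed API):

use strict;
use warnings;
use WWW::Selenium;    # the classic Selenium RC client from CPAN

my $sel = WWW::Selenium->new(
    host        => "localhost",
    port        => 4444,
    browser     => "*firefox",
    browser_url => "http://localhost/",
);
$sel->start;

my @tests = (
    { label => "open_login", cmd => "open", args => ["/login"] },
    { label => "type_user",  cmd => "type", args => [ "username", "alice" ] },
    # ... one hash per Selenium command ...
);

my $start = shift(@ARGV) // 0;    # pass an index to skip earlier steps
for my $i ( $start .. $#tests ) {
    my $t      = $tests[$i];
    my $method = $t->{cmd};
    print "running $t->{label}\n";
    $sel->$method( @{ $t->{args} } );    # dynamic dispatch to the client
}

$sel->stop;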