What is the role of the effective and real user or group ID in Perl?

In the Perl language, when talking about special cases of file handling:
File tests -r, -w, -x are the read, write, and execute checks for the effective user.
File tests -R, -W, -X are the read, write, and execute checks for the real user.
Where do the concepts of real user and effective user come into play in Perl?

This isn't a perl concept, it's a Unix concept. It's to do with setuid processes - and in particular, permissions when operating as root.
If a process starts as you, then its real UID remains yours. But if it setuids to another user (it doesn't have to be root), it will have a different permission set.
The tests above allow you to differentiate between the two cases: could I, as a normal user, edit this file, and can I, as a privileged user, do so?

As far as my experience with Perl on Ubuntu goes:
Normally it is not possible to use the setuid file bit with Perl scripts (see "Can I setuid for perl script?" for details).
When a Perl script is started, both $< (the real user ID) and $> (the effective user ID) are set to the user who started the script.
If that initial user is root (i.e. $< and $> are both 0), you can change the effective user ID by assigning to $>, and you can later change it back. If the user was not root initially, you will get a permissions exception when trying to change the effective user.
I didn't find any way to grant the "change effective user" permission to a user other than root.
The above applies to a standard Perl installation on Ubuntu. If you tweak it somehow, e.g. by setting the setuid bit on the Perl interpreter (not on the script), you can end up with different $< and $> after the script starts, but normally you don't want to do that.
With Perl on Windows:
Trying to modify $> always raises a "not implemented" exception.

Related

Another way to set $DB::deep?

When debugging complex code with the standard Perl debugger in NonStop mode, we sometimes hit the "100 levels deep in subroutine calls!" error.
Is there another way to set the $DB::deep variable, without touching the code?
I tried the dumpDepth option, but it seems it's not available in NonStop mode.
I know about the perl -MPerlIO='via;$DB::deep=500' hack, but it does not work with Perl versions >= 5.20.
Create a ~/.perldb file with
$DB::deep=500
Set permissions on this file to something secure (0444 or 0644) so you don't get this admonishment:
perldb: Must not source insecure rcfile /Users/mob/.perldb.
You or the superuser must be the owner, and it must not
be writable by anyone but its owner.
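From the shell, the whole setup is just (assuming a typical home directory):

```shell
# Create the debugger rc file and give it permissions perldb accepts
# (owned by you, not writable by anyone else):
printf '$DB::deep = 500;\n' > ~/.perldb
chmod 0644 ~/.perldb
```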

Perl script to run as root (generalized)

I would like to be able to run certain Perl scripts on my system as root, even though the "user" calling them is not running as root.
For each script I can write a C wrapper, setting setuid root for that wrapper; the wrapper would change the UID to 0 and then call the Perl script, which itself would not have the setuid bit set. This avoids unfortunate impediments while attempting to run setuid root scripts.
But I don't want to write a C wrapper for each script. I just want one C wrapper to do the job for the whole system. I also don't want just any script to be able to use this C wrapper; the C wrapper itself should be able to check some specific characteristic of the Perl script to see whether changing the UID to root is acceptable.
I don't see any other Stack Overflow question yet which addresses this issue.
I know the risks, I own the system, and I don't want something arbitrarily babysitting me by standing in my way.
What you are trying to do is very hard, even for experts. The setuid wrapper that used to ship with perl no longer exists for that reason, and because it's no longer needed these days. Linux and, I presume, other modern Unix systems support setuid scripts, so you don't need highly fragile and complex wrappers.
If you really need a wrapper, don't re-invent the wheel; just use sudo!
So use a single wrapper: take the Perl script to execute as an argument, and have the C wrapper compare the length of the script and a SHA-2 or SHA-3 hash of its contents against expected values.
After some brainstorming:
All the requirements can be met through the following steps. First we show steps which are done just once.
Step one. Since each script which should be run as root has to be marked by user root somehow (otherwise, just any user could do this), the system administrator takes advantage of his privileged status to choose a user ID which will never actually be assigned to anyone. In this example, we use user ID 9999.
Step two. Compile a particular C wrapper and let the object code run suid root. The source code for that wrapper can be found here.
Then, the following two steps are done once for each Perl script to be run as root.
Step one. Begin each Perl script with the following code.
if ($>)
{
    exec { "/path-to-wrapper-program" }
        ( "/path-to-wrapper-program", $0, @ARGV );
}
Step two. As root (obviously), change the owner of the Perl script to user 9999. That's it. No updating of databases or text files. All requirements for running a Perl script as root reside with the script itself.
Comment on step one: I actually place the above Perl snippet after these lines:
use strict;
use warnings FATAL=>"all";
... but suit yourself.

How to read text on the terminal inside a Perl script

Is there any way to capture the text on the terminal screen inside a Perl script? I know there are functions like system, exec, and backticks, but the problem is that they execute commands FROM the script. For example: in the terminal I type cd / (or ls), and after that I run my Perl script, which should read what was written on the terminal screen (in this case the script would capture the cd / or the ls, whichever was given to the terminal). I came up with one solution: passing the commands you typed in the terminal as command-line arguments to the script. But is there any other way?
Like this maybe:
history | perl -ne 'print $_'
As I understand it, in a situation where you've typed some stuff into a terminal like this:
[tai@littlerobot ~] echo "Hello"
Hello
[tai@littlerobot ~] perl myscript.pl
You want myscript.pl to be able to access the echo "Hello" part, and possibly also the Hello that was that command's output.
Perl does not provide such a feature. No programming language does or can provide such a feature because the process in which your script/program runs has no intrinsic knowledge about what happened in the same terminal before it was run. The only way it could access this text would be if it could ask the currently running terminal, which will have some record of this information (i.e. the scrollback buffer), even if it cannot distinguish between which characters in the text were typed by you, and which are output. However, I know of no terminal that exposes that information via any kind of public API.
So if you want myscript.pl to be able to access that echo "Hello", you'll need to pass it to your script. Piping history to your script (as shown by Mark Setchell in his answer) is one technique. history is a shell built-in, so it has as much knowledge as your shell has (which is not quite the same knowledge as your terminal has). In particular it can give you a list of what commands have been typed in this shell session. However, it cannot tell you about the output generated by those commands. And it cannot tell you about other shell sessions, so doing this in Perl is fairly useless:
my @history = `tcsh -c history`;
The last thing you could try (though it would be incredibly complicated to do) would be to ask the X server (or Windows if running on that operating system) for a screen shot and then attempt to locate which rectangle the current terminal is running in and perform OCR on it. This would be fraught with problems though, such as dealing with overlapping windows.
So, in summary, you cannot do this. It's nothing to do with Perl. You cannot do this in any programming language.

Real world examples of UNIX named pipes

I usually think of UNIX pipes as a quick and dirty way to interact with the console, doing things such as:
ls | grep '\.pdf$' #list all pdfs
I understand that it's possible to create named pipes using mkfifo and mknod.
Are named pipes still used significantly today, or are they a relic of the past?
They are still used, although you might not notice. As a first-class file-system object provided by the operating system, a named pipe can be used by any program, regardless of what language it is written in, that can read and write to the file system for interprocess communication.
Specific to bash (and other shells), process substitution can be implemented using named pipes, and on some platforms that may be how it actually is implemented. The following
command < <( some_other_command )
is roughly identical to
mkfifo named_pipe
some_other_command > named_pipe &
command < named_pipe
and so is useful for things like POSIX-compliant shell code, which does not recognize process substitution.
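The equivalence is easy to try by hand (a toy example; run it in a scratch directory):

```shell
# Emulate  cat < <( echo hello )  with an explicit named pipe:
mkfifo named_pipe
echo hello > named_pipe &    # writer blocks until a reader opens the fifo
cat < named_pipe             # reader: prints "hello"
wait                         # reap the background writer
rm named_pipe
```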
And it works in the other direction: command > >( some_other_command ) is
mkfifo named_pipe
some_other_command < named_pipe &
command > named_pipe
pianobar, the command-line Pandora Radio player, uses a named pipe to provide a mechanism for arbitrary control input. If you want to control pianobar from another app, such as PianoKeys, you set it up to write command strings to the FIFO file, which pianobar watches for incoming commands.
I wrote a MAMP stack app in college that we used to manage users in an LDAP database on a file server. The PHP scripts would write commands to a FIFO that a shell script running in launchd would read from and interact with the LDAP database. I had no idea what I was doing, so I don’t know if there was a better or more correct way to do it, but it worked great at the time.
Named pipes are useful where a program expects a path to a file as an argument, as opposed to being willing to read from stdin and write to stdout. Although modern versions of bash can get around this with process substitution, <( foo ), it is still sometimes useful to have a file-like object that is usable as a pipe.
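For instance, a program that insists on a filename argument can be fed streaming data through a FIFO (wc here is just a stand-in for such a tool):

```shell
# Hand a pipeline to a program that wants a *file path*:
mkfifo data.fifo
seq 1 5 > data.fifo &    # producer writes into the fifo
wc -l data.fifo          # consumer opens the path like a file: 5 lines
wait
rm data.fifo
```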

Expect/Tcl Script: How to prompt user without writing to stdout?

In Expect script, how do you prompt user without writing to stdout? Is it possible to prompt the user through stderr?
I have a script written in Perl to automate some testing over ssh. To automate the ssh login I wrapped the ssh command using Expect, but sometimes the password for ssh expires midway through executing the code (e.g. an RSA token refreshes every 30 seconds).
I found this: How can I make an expect script prompt for a password?
which works great, except it prompts the user through stdout. My Perl script reads and parses the stdout.
I would like to abstract this to my Expect script without modifying the Perl code, is it possible?
Edit: I just realized my use case is a bit silly, since I'm not sure how Perl would be able to interact with the Expect script's prompt given that I'm calling the Expect script from Perl. :/ But it would still be good to know whether it's possible for Expect to write to stderr. :D
Thanks!
You can try accessing the “file” /dev/tty (it's really a virtual device). That will bypass the Perl script and contact the user's terminal directly. Once you've got that open, you use normal Expect commands to ask the user to give their password (remembering to turn off echo at the right time with stty).
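On the stderr half of the question: anything written to stderr bypasses whatever is capturing stdout, which is easy to verify from the shell (a generic sketch, not Expect-specific; in Expect/Tcl the equivalent is puts stderr "..."):

```shell
# A prompt sent to stderr does not pollute captured stdout:
captured=$(sh -c 'echo "Enter password:" >&2; echo "the-real-output"')
echo "captured: $captured"    # prints: captured: the-real-output
```

Run interactively, the "Enter password:" line appears on the terminal while the Perl parent still reads only the real output.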
Otherwise, your best bet might actually be to pop up a GUI; using a little Tcl/Tk script to do this is pretty trivial:
# This goes in its own script:
pack [label .l -text "Password for [lindex $argv 0]"]
pack [entry .e -textvariable pass -show "*"]
bind .e <Return> {
    puts $pass
    exit
}
focus .e
# In your expect script:
set password [exec wish getPass.tcl "foo@bar.example.com"]
That's all. It's also a reasonably secure way of getting a password, as the password is always communicated by (non-interceptable) pipes. The only downside is that it requires a GUI to be present.
The GUI is probably in need of being made prettier. That's just the bare bones of what you need.