Bourne shell: how to redirect to a multiple-digit fd?

In a C program I have a file descriptor (actual example: 14, but of course the program doesn't know this in advance) open for writing. I wish to call system(3) to run something and send its standard output to this file descriptor. Of course, this calls /bin/sh, which is the Bourne shell, which doesn't recognize constructs of the form 1>&14. Is there an alternative syntax (perhaps using braces or something) which I can use to let the Bourne shell see the 14 and use it? I could of course do one of these:
Do a fork/exec combo instead of system(3) and redirect by hand (a sketch of this appears below the question).
Redirect the output to a file and then copy the data therefrom to file descriptor 14.
Since I have root, have /bin/sh point to Bash.
The most elegant way would be to discover a syntax by which the Bourne shell will accept a multiple-digit file descriptor number. Is there such a syntax?
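For concreteness, a minimal sketch of the first option might look like this; run_with_fd and the argv-style interface are just illustrative choices, not anything assumed by the question:

/* Sketch of the fork/exec alternative: no shell is involved, so the
 * descriptor number can be anything.  Error handling kept minimal. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int run_with_fd(const char *path, char *const argv[], int out_fd)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {               /* child */
        dup2(out_fd, 1);          /* route stdout to the open descriptor */
        execv(path, argv);
        _exit(127);               /* reached only if execv fails */
    }
    int status;
    waitpid(pid, &status, 0);     /* behave like system(3): wait for the child */
    return status;
}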

Consider my favorite rules about optimization: (1) Do not optimize. (2) For experts: do not optimize yet.
The situation arising in the question will (for me) arise in code which will be executed at most 100 times a day. So this will suffice:
char big_string[128];
snprintf(big_string, sizeof big_string,   /* snprintf guards the buffer */
         "bash -c \"somethingsomething 1>&%d\"",
         the_fd);
system(big_string);


Perl script to run as root (generalized)

I would like to be able to run certain Perl scripts on my system as root, even though the "user" calling them is not running as root.
For each script I could write a C wrapper and make that wrapper setuid root; the wrapper would change the UID to 0 and then call the Perl script, which itself would not have the setuid bit set. This sidesteps the unfortunate impediments that arise when attempting to run setuid-root scripts directly.
But I don't want to write a C wrapper for each script. I just want one C wrapper to do the job for the whole system. I also don't want just any script to be able to use this C wrapper; the C wrapper itself should be able to check some specific characteristic of the Perl script to see whether changing the UID to root is acceptable.
I don't see any other Stack Overflow question yet which addresses this issue.
I know the risks, I own the system, and I don't want something arbitrarily babysitting me by standing in my way.
What you are trying to do is very hard, even for experts. The setuid wrapper that used to come with perl no longer exists because of that, and because it's no longer needed these days. Linux and, I presume, other modern Unix systems support setuid scripts, so you don't need highly fragile and complex wrappers.
If you really need a wrapper, don't re-invent the wheel; just use sudo!
So use a single wrapper, take the perl script to execute as an argument, and have the C wrapper compare the length of the script and a SHA-3 or SHA-2 hash of the script contents to expected values.
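For illustration only, a bare-bones version of that idea might look like the following, assuming OpenSSL's SHA256() is linked in (-lcrypto); expected_digest and the 1 MiB size cap are placeholder choices, not part of the answer above:

/* Hypothetical wrapper sketch: refuse to run anything whose SHA-256
 * digest does not match a compiled-in value. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static const unsigned char expected_digest[SHA256_DIGEST_LENGTH] = {
    /* fill in the known-good digest of the whitelisted script */
};

int main(int argc, char **argv)
{
    static unsigned char buf[1 << 20];   /* assumes scripts under 1 MiB */
    if (argc < 2)
        return 1;
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL)
        return 1;
    size_t len = fread(buf, 1, sizeof buf, f);
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(buf, len, digest);
    if (memcmp(digest, expected_digest, sizeof digest) != 0)
        return 1;                        /* not the whitelisted script */

    if (setuid(0) != 0)                  /* the wrapper binary is setuid root */
        return 1;

    /* Forward "script args..." to perl. */
    char *args[argc + 2];
    args[0] = (char *)"perl";
    for (int i = 1; i < argc; i++)
        args[i] = argv[i];
    args[argc] = NULL;
    execv("/usr/bin/perl", args);
    return 1;                            /* reached only if execv fails */
}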
After some brainstorming:
All the requirements can be met through the following steps. First we show steps which are done just once.
Step one. Since each script which should be run as root has to be marked by user root somehow (otherwise, just any user could do this), the system administrator takes advantage of his privileged status to choose a user ID which will never actually be assigned to anyone. In this example, we use user ID 9999.
Step two. Compile a particular C wrapper and install the resulting binary setuid root. The source code for that wrapper can be found here.
Then, the following two steps are done once for each Perl script to be run as root.
Step one. Begin each Perl script with the following code.
if ($>)   # effective UID is not yet root
{
    exec { "/path-to-wrapper-program" }
        ( "/path-to-wrapper-program", $0, @ARGV );
}
Step two. As root (obviously), change the owner of the Perl script to user 9999. That's it. No updating of databases or text files. All requirements for running a Perl script as root reside with the script itself.
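For concreteness, the ownership test inside such a wrapper can be a single stat() call; this fragment is only an illustration of the idea, not the actual wrapper source:

/* Illustrative fragment: agree to run a script as root only if it is
 * owned by the reserved marker UID (9999 in the steps above).  A
 * production wrapper would fstat() the already-opened file instead,
 * to avoid check-then-use races. */
#include <sys/stat.h>

static int is_marked_for_root(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return 0;
    return st.st_uid == 9999;   /* UID deliberately never assigned to anyone */
}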
Comment on step one: I actually place the above Perl snippet after these lines:
use strict;
use warnings FATAL=>"all";
... but suit yourself.

How to read text on the terminal inside a Perl script

Is there any way to capture the text on the terminal screen inside a Perl script? I know there are functions like system, exec, and backticks, but the problem is that they execute commands FROM the script. For example: in the terminal I type cd / (or ls), and after that I run my Perl script, which should read what was written on the terminal screen (in this case, the script would capture cd / or ls, whichever was given to the terminal). I came up with one solution: passing the commands you typed on the terminal as command-line arguments to the script. But is there any other way?
Like this maybe:
history | perl -ne 'print $_'
As I understand it, in a situation where you've typed some stuff into a terminal like this:
[tai@littlerobot ~] echo "Hello"
Hello
[tai@littlerobot ~] perl myscript.pl
You want myscript.pl to be able to access the echo "Hello" part, and possibly also the Hello that was that command's output.
Perl does not provide such a feature. No programming language does or can, because the process in which your script runs has no intrinsic knowledge of what happened in the same terminal before it was started. The only way it could access this text would be to ask the currently running terminal, which does keep some record of it (the scrollback buffer), even though the terminal cannot distinguish which characters were typed by you and which were output. However, I know of no terminal that exposes that information via any kind of public API.
So if you want myscript.pl to be able to access that echo "Hello", you'll need to pass it to your script. Piping history to your script (as shown by Mark Setchell in his answer) is one technique. history is a shell built-in, so it has as much knowledge as your shell has (which is not quite the same knowledge as your terminal has). In particular it can give you a list of what commands have been typed in this shell session. However, it cannot tell you about the output generated by those commands. And it cannot tell you about other shell sessions, so doing this in Perl is fairly useless:
my @history = `tcsh -c history`;
The last thing you could try (though it would be incredibly complicated to do) would be to ask the X server (or Windows if running on that operating system) for a screen shot and then attempt to locate which rectangle the current terminal is running in and perform OCR on it. This would be fraught with problems though, such as dealing with overlapping windows.
So, in summary, you cannot do this. It's nothing to do with Perl. You cannot do this in any programming language.

Real world examples of UNIX named pipes

I usually think of UNIX pipes as a quick and dirty way to interact with the console, doing things such as:
ls | grep '\.pdf$' #list all pdfs
I understand that it's possible to create named pipes using mkfifo and mknod.
Are named pipes still used significantly today, or are they a relic of the past?
They are still used, although you might not notice. As a first-class file-system object provided by the operating system, a named pipe can be used by any program, regardless of what language it is written in, that can read and write to the file system for interprocess communication.
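To make that language-agnostic point concrete, here is a minimal C sketch of a FIFO-based control channel; the path and the command handling are made up for illustration:

/* Minimal sketch: create a FIFO and treat each line written to it as a
 * command. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    const char *path = "/tmp/demo.fifo";   /* illustrative path */
    mkfifo(path, 0600);                    /* ignore EEXIST on reruns */

    FILE *in = fopen(path, "r");           /* blocks until a writer appears */
    if (in == NULL)
        return 1;

    char line[256];
    while (fgets(line, sizeof line, in))   /* EOF once the writer closes */
        printf("received command: %s", line);
    return 0;
}

Any other process can then drive it with something as simple as echo play > /tmp/demo.fifo.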
Specific to bash (and other shells), process substitution can be implemented using named pipes, and on some platforms that may be how it actually is implemented. The following
command < <( some_other_command )
is roughly identical to
mkfifo named_pipe
some_other_command > named_pipe &
command < named_pipe
and so is useful for things like POSIX-compliant shell code, which does not recognize process substitution.
And it works in the other direction: command > >( some_other_command ) is
mkfifo named_pipe
some_other_command < named_pipe &
command > named_pipe
pianobar, the command-line Pandora Radio player, uses a named pipe to provide a mechanism for arbitrary control input. If you want to control pianobar from another app, such as PianoKeys, you set that app up to write command strings to the FIFO file, which pianobar watches for incoming commands.
I wrote a MAMP stack app in college that we used to manage users in an LDAP database on a file server. The PHP scripts would write commands to a FIFO that a shell script running in launchd would read from and interact with the LDAP database. I had no idea what I was doing, so I don’t know if there was a better or more correct way to do it, but it worked great at the time.
Named pipes are useful where a program expects a path to a file as an argument rather than being willing to read from stdin or write to stdout. Modern versions of bash can often get around this with process substitution, <( foo ), but it is still sometimes useful to have a file-like object that behaves as a pipe.

How do I restrict Perl debugger output to lines in my own script?

I'm running the debugger in noninteractive mode, with the output written to a file. I want to print out each line of my Perl script as it executes, but only lines in the script itself. I don't want to see the library code (File::Basename, Exporter::import, etc.) that the script calls. This seems like the sort of thing that should be easy to do, but the documentation for perldebug only discusses limiting the depth for dumping structures. Is what I want possible, and if so, how?
Note that I'm executing my program as follows:
PERLDB_OPTS="LineInfo=temp.txt NonStop=1 AutoTrace=1 frame=2" perl -dS myprog.pl arg0 arg1
By default, Devel::DumpTrace doesn't step into system modules, and you can exercise fine control over what modules the debugger will step into (it's not easy, but it's possible). Something like
DUMPTRACE_FH=temp.txt perl -d:DumpTrace=quiet myprog.pl
would be similar to what you're apparently trying to do.
Devel::DumpTrace also does a lot more processing on each line -- figuring out variable values and including them in the output -- so it may be overkill and run a lot slower than perl -dS ...
(Crikey, that's now two plugs for Devel::DumpTrace this week!)
Are you talking about not wanting to step through functions outside of your own program? For that, you want to use n instead of s.
From perldebug:
s [expr]   Single step. Executes until the beginning of another
           statement, descending into subroutine calls. If an
           expression is supplied that includes function calls, it too
           will be single-stepped.

n [expr]   Next. Executes over subroutine calls, until the beginning
           of the next statement. If an expression is supplied that
           includes function calls, those functions will be executed
           with stops before each statement.

How to discover command line options (if any) for an undocumented executable of unknown origin?

Take an undocumented executable of unknown origin. Trying /?, -h, --help from the command line yields nothing. Is it possible to discover if the executable supports any command line options by looking inside the executable? Possibly reverse engineering? What would be the best way of doing this?
I'm talking about a Windows executable, but would be interested to hear what different approaches would be needed with another OS.
On Linux, step one would be to run strings your_file, which dumps all the strings of printable characters in the file. Any constant strings will thus be shown, including any "usage" instructions.
The next step could be to run ltrace on the file. This shows all the library calls the program makes. If they include getopt (or something similar), that is a sure sign it processes input parameters. In fact, you should be able to see exactly which options the program expects, since the option string is the third parameter to the getopt function.
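To see why, consider a toy program like the following (my own example, not the mystery executable). An ltrace run against it would show a call like getopt(3, ..., "vo:h"), and that option string alone tells you it accepts -v, -h, and -o with an argument:

/* Toy example of what ltrace exposes: the option string passed to
 * getopt reveals the accepted flags. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int opt;
    while ((opt = getopt(argc, argv, "vo:h")) != -1) {
        switch (opt) {
        case 'v': puts("verbose");                break;
        case 'o': printf("output=%s\n", optarg);  break;
        case 'h': puts("help");                   break;
        default:  return 1;                       /* unknown option */
        }
    }
    return 0;
}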
For Windows, you can see this question about decompiling Windows executables. It should be relatively easy to at least discover the options (what they actually do is a different story).
If it's a .NET executable, try using Reflector. This will convert the MSIL code into the equivalent C# code, which may make it easier to understand. Unfortunately, private and local variable names will be lost, as they are not stored in the MSIL, but it should still be possible to follow what's going on.