Export variable to parent process - fish

When writing fish scripts, I'd like to be able to factor out common code into separate scripts. Something akin to this:
#! /usr/bin/fish
fish setup.sh
# Put rest of my code here.
but, as far as I can see, there's no way to export variables to parent processes, only to children. Exporting to children can be done with set --export; is there a way to make variables visible to parents? I understand that the mechanisms are probably not similar, but it would be nice.

The UNIX process model makes it impossible for a process to affect the environment of any other process. This isn't a limitation of fish. The solution is to use the source command so the statements in setup.sh are executed in the context of the current process. If setup.sh is not a fish script, the usual solution is to have it write export VAR=val statements to stdout and then do setup.sh | source -.
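A minimal sketch of both variants (setup.fish and setup.sh are placeholder names):
#! /usr/bin/fish
# Case 1: setup.fish contains fish code; run it in the current
# process so the variables it sets persist here.
source setup.fish

# Case 2: setup.sh is not a fish script; have it print fish-syntax
# lines such as `set -gx VAR val` and source its output.
./setup.sh | source -

echo $VAR    # now visible in this script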

Related

Perl script to run as root (generalized)

I would like to be able to run certain Perl scripts on my system as root, even though the "user" calling them is not running as root.
For each script I can write a C wrapper, setting setuid root for that wrapper; the wrapper would change the UID to 0 and then call the Perl script, which itself would not have the setuid bit set. This avoids the usual impediments to running setuid-root scripts directly (many kernels ignore the setuid bit on interpreted scripts).
But I don't want to write a C wrapper for each script. I just want one C wrapper to do the job for the whole system. I also don't want just any script to be able to use this C wrapper; the C wrapper itself should be able to check some specific characteristic of the Perl script to see whether changing the UID to root is acceptable.
I don't see any other Stack Overflow question yet which addresses this issue.
I know the risks, I own the system, and I don't want something arbitrarily babysitting me by standing in my way.
What you are trying to do is very hard to get right, even for experts. The setuid wrapper that used to come with perl (suidperl) no longer exists largely because of that. Note that Linux, among other Unix systems, deliberately ignores the setuid bit on interpreted scripts, which is why a compiled wrapper (or a tool like sudo) is needed at all.
If you really need a wrapper, don't re-invent the wheel; just use sudo!
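For example, a single sudoers entry (edited with visudo) can grant one user the right to run one specific script as root; the user name and path below are placeholders:
# /etc/sudoers.d/perl-scripts
alice ALL=(root) NOPASSWD: /usr/local/bin/myscript.pl
The user then invokes sudo /usr/local/bin/myscript.pl, and sudo handles authentication, logging, and environment sanitization for you.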
So use a single wrapper, take the perl script to execute as an argument, and have the C wrapper compare the length of the script and a SHA-3 or SHA-2 hash of the script contents to expected values.
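A hedged sketch of that idea in C (the expected length, digest, and interpreter path are placeholders; a hardened version would also close the time-of-check/time-of-use window, e.g. by hashing an already-open descriptor and using fexecve):
/* wrapper.c -- sketch only: run a known Perl script as root after
 * verifying its length and SHA-256 digest. Compile with
 *   cc -o wrapper wrapper.c -lcrypto
 * then chown root:root wrapper && chmod u+s wrapper. */
#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static const long EXPECTED_LEN = 1234;                 /* placeholder */
static const char EXPECTED_SHA256[] =                  /* placeholder */
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s script.pl [args...]\n", argv[0]);
        return 1;
    }

    /* Read the whole script into memory. */
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    unsigned char *buf = malloc(len);
    if (!buf || fread(buf, 1, len, f) != (size_t)len) { perror("read"); return 1; }
    fclose(f);

    /* Compare length and digest against the compiled-in values. */
    unsigned char d[SHA256_DIGEST_LENGTH];
    SHA256(buf, len, d);
    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", d[i]);
    if (len != EXPECTED_LEN || strcmp(hex, EXPECTED_SHA256) != 0) {
        fprintf(stderr, "script does not match; refusing to elevate\n");
        return 1;
    }

    /* Become root and hand the verified script to perl.
     * NB: a TOCTOU window remains between hashing and exec. */
    if (setuid(0) != 0) { perror("setuid"); return 1; }
    char *pargv[argc + 1];
    pargv[0] = "perl";
    for (int i = 1; i < argc; i++) pargv[i] = argv[i];
    pargv[argc] = NULL;
    execv("/usr/bin/perl", pargv);
    perror("execv");
    return 1;
}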
After some brainstorming:
All the requirements can be met through the following steps. First we show steps which are done just once.
Step one. Since each script which should be run as root has to be marked by user root somehow (otherwise, just any user could do this), the system administrator takes advantage of his privileged status to choose a user ID which will never actually be assigned to anyone. In this example, we use user ID 9999.
Step two. Compile a particular C wrapper and let the object code run suid root. The source code for that wrapper can be found here.
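In outline, the wrapper's job is simple; here is a hedged sketch of the core check (not the original source; error handling abbreviated, and 9999 matches the reserved user ID from step one):
/* Sketch: elevate only if root has marked the script by making
 * the reserved user ID 9999 its owner. Only root can chown a
 * file to another user, so the owner check proves root's intent. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define MARKER_UID 9999

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    struct stat st;
    if (stat(argv[1], &st) != 0) { perror(argv[1]); return 1; }
    if (st.st_uid != MARKER_UID) {          /* not marked by root */
        fprintf(stderr, "%s is not marked for root execution\n", argv[1]);
        return 1;
    }
    if (setuid(0) != 0) { perror("setuid"); return 1; }
    char *pargv[argc + 1];
    pargv[0] = "perl";
    for (int i = 1; i < argc; i++) pargv[i] = argv[i];
    pargv[argc] = NULL;
    execv("/usr/bin/perl", pargv);
    perror("execv");
    return 1;
}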
Then, the following two steps are done once for each Perl script to be run as root.
Step one. Begin each Perl script with the following code.
# $> is the effective user ID; if it is non-zero we are not yet
# root, so re-exec ourselves through the setuid wrapper.
if ($>)
{
    exec { "/path-to-wrapper-program" }
        ( "/path-to-wrapper-program", $0, @ARGV );
}
Step two. As root (obviously), change the owner of the Perl script to user 9999: chown 9999 /path-to-script. That's it. No updating of databases or text files. All the requirements for running a Perl script as root reside with the script itself.
Comment on step one: I actually place the above Perl snippet after these lines:
use strict;
use warnings FATAL=>"all";
... but suit yourself.

How to cleanly pass command-line parameters when test-running my Python script?

So I'm writing Python with IDLE and my usual workflow is to have two windows (an editor and a console), and run my script with F5 to quickly test it.
My script has some non-optional command line parameters, and I have two conflicting desires:
If someone launches my script without passing any parameters, I'd like him to get an error telling him to do so (argparse does this well)
When I hit F5 in IDLE, I'd like to have my script run with dummy parameters (and I don't want to have more keystrokes than F5 to have this work, and I don't want to have a piece of code I have to remember to remove when I'm not debugging any more)
So far my solution has been: if I get no parameters, I look for a params.debug file (which is not under source control) and, if it exists, take its contents as default parameters. But it's a bit ugly... so is there a cleaner, more "standard" solution to this? Do other IDEs offer easier ways of doing this?
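(For concreteness, a minimal sketch of that fallback; the argument name is made up:)
import argparse
import os
import sys

parser = argparse.ArgumentParser()
parser.add_argument('input_file')

if len(sys.argv) == 1 and os.path.exists('params.debug'):
    # No real arguments were given: fall back to the dummy ones on file.
    with open('params.debug') as f:
        args = parser.parse_args(f.read().split())
else:
    args = parser.parse_args()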
Other solutions I can think of: environment variables, having a separate "launcher" script that's the one taking the "official" parameters.
(I'm likely to try out another IDE anyway)
With some editors you can redefine the 'execute' command. For example, in Geany, F5 for Python files runs python2.7 %f; that could be changed to something like python2.7 %f dummy parameters. But I use an attached terminal window and its line history more than F5-like commands.
I'm an IPython user, so I don't remember much about the IDLE configuration. In IPython I usually use the %run magic, which is more like invoking the script from a shell than from an IDE. IPython also has a better previous-line history than the shell.
For larger scripts I like to put the guts of the code (classes, functions) in one file, and test code in the if __name__ block. The user interface is in another file that imports this core module.
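A minimal sketch of that layout (file and function names are made up for illustration):
# core.py -- the guts: classes and functions, plus quick test code
def frobnicate(x):
    return x * 2

if __name__ == '__main__':
    # runs only when core.py is executed directly, e.g. with F5
    print(frobnicate(21))

# cli.py -- the user interface; this is what takes the real arguments
import argparse
import core

parser = argparse.ArgumentParser()
parser.add_argument('value', type=int)
args = parser.parse_args()
print(core.frobnicate(args.value))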
Thanks for your question. I also searched for a way to do this. I found that Spyder, the IDE I use, has an option under Run/Configure to enter the command-line parameters for running the program. You can even configure different ones for the different editors you have open.
Python tracker issue 5680 is a proposal to add to Idle a way to set command lines args for F5 Run module. You are free to test any of the proposed patches if you are able to apply them.
In the meantime, conditionally extending sys.argv as shown below should fulfill your requirements.
import sys

if __name__ == '__main__':
    if 'idlelib.PyShell' in sys.modules:
        sys.argv.extend(('a', '-2'))  # add your arguments here
    print(sys.argv)  # in real use, parse sys.argv after extending it
    # ['C:\\Programs\\python34\\tem.py', 'a', '-2']

How to read text on the terminal inside a Perl script

Is there any way to capture the text on the terminal screen inside a Perl script? I know there are functions like system, exec, and backticks, but the problem is that they execute commands FROM the script. For example: in the terminal I type cd / (or ls), and after that I run my Perl script, which should read what was written on the terminal screen (in this case, the script would capture cd / or ls, whichever was given to the terminal). I came up with one solution: passing the commands you typed on the terminal as command-line arguments to the script. But is there any other way?
Like this maybe:
history | perl -ne 'print $_'
As I understand it, in a situation where you've typed some stuff into a terminal like this:
[tai@littlerobot ~] echo "Hello"
Hello
[tai@littlerobot ~] perl myscript.pl
You want myscript.pl to be able to access the echo "Hello" part, and possibly also the Hello that was that command's output.
Perl does not provide such a feature. No programming language does or can provide such a feature because the process in which your script/program runs has no intrinsic knowledge about what happened in the same terminal before it was run. The only way it could access this text would be if it could ask the currently running terminal, which will have some record of this information (i.e. the scrollback buffer), even if it cannot distinguish between which characters in the text were typed by you, and which are output. However, I know of no terminal that exposes that information via any kind of public API.
So if you want myscript.pl to be able to access that echo "Hello", you'll need to pass it to your script. Piping history to your script (as shown by Mark Setchell in his answer) is one technique. history is a shell built-in, so it has as much knowledge as your shell has (which is not quite the same knowledge as your terminal has). In particular it can give you a list of what commands have been typed in this shell session. However, it cannot tell you about the output generated by those commands. And it cannot tell you about other shell sessions, so doing this in Perl is fairly useless:
my @history = `tcsh -c history`;
The last thing you could try (though it would be incredibly complicated to do) would be to ask the X server (or Windows if running on that operating system) for a screen shot and then attempt to locate which rectangle the current terminal is running in and perform OCR on it. This would be fraught with problems though, such as dealing with overlapping windows.
So, in summary, you cannot do this. It's nothing to do with Perl. You cannot do this in any programming language.

Real world examples of UNIX named pipes

I usually think of UNIX pipes as a quick and dirty way to interact with the console, doing things such as:
ls | grep '\.pdf$' #list all pdfs
I understand that it's possible to create named pipes using mkfifo and mknod.
Are named pipes still used significantly today, or are they a relic of the past?
They are still used, although you might not notice. As a first-class file-system object provided by the operating system, a named pipe can be used by any program, regardless of what language it is written in, that can read and write to the file system for interprocess communication.
Specific to bash (and other shells), process substitution can be implemented using named pipes, and on some platforms that may be how it actually is implemented. The following
command < <( some_other_command )
is roughly identical to
mkfifo named_pipe
some_other_command > named_pipe &
command < named_pipe
and so is useful for things like POSIX-compliant shell code, which does not recognize process substitution.
And it works in the other direction: command > >( some_other_command ) is
mkfifo named_pipe
some_other_command < named_pipe &
command > named_pipe
pianobar, the command-line Pandora Radio player, uses a named pipe to provide a mechanism for arbitrary control input. If you want to control pianobar from another app, such as PianoKeys, you set it up to write command strings to the FIFO file, which pianobar watches for incoming commands.
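For example (hedged: this assumes pianobar's usual setup, where the control FIFO lives at ~/.config/pianobar/ctl and n is the "next song" command):
echo -n 'n' > ~/.config/pianobar/ctl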
I wrote a MAMP stack app in college that we used to manage users in an LDAP database on a file server. The PHP scripts would write commands to a FIFO, which a shell script running under launchd would read from in order to interact with the LDAP database. I had no idea what I was doing, so I don't know if there was a better or more correct way to do it, but it worked great at the time.
Named pipes are useful where a program expects a path to a file as an argument, as opposed to being willing to read from stdin or write to stdout. Modern versions of bash can often get around this with process substitution, <(foo), but it is still sometimes useful to have a file-like object that is usable as a pipe.
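A minimal illustration of that case (names are made up; gzip stands in for any consumer, and some_program is a hypothetical program that insists on a file path):
mkfifo /tmp/log.fifo
gzip -c < /tmp/log.fifo > /tmp/log.gz &    # consumer reads from the FIFO in the background
some_program --logfile /tmp/log.fifo       # the program writes to the "file" as usual
rm /tmp/log.fifo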

Is there an interpreter for sending commands to Selenium?

I'm a Perl programmer doing some web application testing using Selenium. I was wondering if there's some kind of interactive interpreter which would allow me to type Selenium commands at a prompt and have them sent to selenium.
The way I'm currently developing code is to type all of the commands into a Perl script and then execute the script. This makes the development process painfully slow, because it's necessary to rerun the entire script whenever I make a change.
(If I were a Ruby programmer, I bet the Interactive Ruby Shell could help with this, but I'm hoping to find a solution for Perl.) Thanks!
The Java-based SeleniumServer has an --interactive mode.
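If memory serves, it is started along these lines (the jar name and flag spelling vary across old releases):
java -jar selenium-server.jar -interactive
You can then type Selenium RC commands at the prompt, in a form something like cmd=getNewBrowserSession&1=*firefox&2=http://example.com.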
I don't know about a Selenium shell, but if you are looking for a Perl REPL, there are modules such as Devel::REPL and Carp::REPL. I've made shells for various simple things using my Polyglot module, although I haven't looked at that in a while.
May I ask why it's "necessary to re-run the entire script"? Since your proposed solution is an interactive shell, I'm assuming it's NOT because you need previous commands to set something up. If that's a correct assumption, simply arrange your set of Selenium commands in the Perl script in such a way that you can skip the first N commands.
You can do it with explicit "if" wrappers.
Or you can have a generic "call a Selenium command based on a config hash" driver executed in a loop, adding a config hash for every single Selenium command to an array of hashes.
Then you can either have an input parameter to the test script that refers to an index in that array, or you can even include a unique label for each test as part of its config hash and pass "start with test named X" or even "only do test named X" on the command line, as in the sketch below.
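A hedged Perl sketch of that driver idea (the step names, commands, and WWW::Selenium setup values are illustrative):
use strict;
use warnings;
use WWW::Selenium;

my $sel = WWW::Selenium->new(
    host        => 'localhost',
    port        => 4444,
    browser     => '*firefox',
    browser_url => 'http://example.com',
);

# One config hash per Selenium command, each with a unique label.
my @steps = (
    { name => 'open_home',  cmd => 'open',  args => ['/'] },
    { name => 'click_link', cmd => 'click', args => ['link=About'] },
);

my $start = shift(@ARGV) // '';   # optional "start with test named X"
my $seen  = $start eq '';         # no argument: run everything

$sel->start;
for my $step (@steps) {
    $seen = 1 if $step->{name} eq $start;
    next unless $seen;            # skip the commands before the named test
    my $cmd = $step->{cmd};
    $sel->$cmd( @{ $step->{args} } );
}
$sel->stop;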