Acquire lock for /etc/passwd file in Bourne shell script - sh

I have an embedded system where user management in the /etc/passwd file is usually done automatically by a Bourne shell script. However, it can happen that /etc/passwd is edited with a text editor by the root user, or by the passwd utility. Is there a way to write a Bourne shell script so that it locks the /etc/passwd file during its execution, preventing other programs from editing the file at the time? The script should also detect whether /etc/passwd is currently opened by other processes. I could use the following solution from the Wooledge wiki:
# locking example -- CORRECT
# Bourne
lockdir=/tmp/myscript.lock
if mkdir "$lockdir"
then    # directory did not exist, but was created successfully
    echo >&2 "successfully acquired lock: $lockdir"
    # continue script
else
    echo >&2 "cannot acquire lock, giving up on $lockdir"
    exit 0
fi
However, this only ensures that two instances of this script are not running simultaneously. I also have BusyBox's lock utility available, which behaves similarly to flock, but again, as far as I can tell, I can't prevent other processes from editing the /etc/passwd file.

The vipw command may provide this for you, and you can customize the editor via the EDITOR environment variable.
See man vipw for details.
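For non-interactive use you can point EDITOR at a script; vipw then holds the same lock that passwd and friends respect while your script runs. A minimal sketch, assuming your vipw passes the path of the temporary copy as the first argument and installs it when the editor exits successfully (the edit-passwd.sh name and the sed edit are hypothetical):

#!/bin/sh
# edit-passwd.sh: called by vipw with the temp copy of /etc/passwd in $1.
# Edit the copy in place and exit 0 so vipw installs the result.
sed 's|^guest:.*|guest:x:1001:1001::/home/guest:/bin/sh|' "$1" > "$1.tmp" &&
mv "$1.tmp" "$1"

Then invoke it as: EDITOR=/usr/local/bin/edit-passwd.sh vipw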

Is there a way to program Bourne shell script in a way that it locks the /etc/passwd file during its execution so that other programs are not able to edit the file at the time?
That is called mandatory file locking, and the answer is probably no. In Linux it requires the mand option when the file system is mounted. I would guess that's a nonstarter in your environment, but if it's an option (so to speak), have a look at your favorite resource for how to proceed from there.
It's not shell-script functionality you need. For one process to prevent another from opening a file requires kernel support. Unix programs traditionally use advisory locks, or cooperate in some other way; vipw(8) is an example of how that's done.
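Within the set of programs you control, advisory locking does work: all cooperating writers just have to take the same lock. A minimal sketch using flock(1) from util-linux (BusyBox's lock can play the same role; the lock-file path is an assumption, and note that text editors and passwd itself will not honor it):

#!/bin/sh
# Serialize all scripted edits of /etc/passwd through one lock file.
# flock blocks until it gets an exclusive lock, then runs the command.
flock /var/run/passwd-edit.lock -c '
    cp /etc/passwd /etc/passwd.tmp &&
    echo "newuser:x:1002:1002::/home/newuser:/bin/sh" >> /etc/passwd.tmp &&
    mv /etc/passwd.tmp /etc/passwd
'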

Related

Editing WSL2 instance of Ubuntu Crontab using Windows VSCode

My question is whether it is possible to edit the crontab of a WSL2-based instance of Ubuntu with my Windows VSCode that is connected via WSL remote SSH.
If I type export EDITOR=code inside my WSL instance and then crontab -e, I am able to see a /tmp/crontab.sygFAU file load inside my VSCode instance.
The problem is that once I make edits to this file, it will save the file to /tmp/crontab.sygFAU, but it doesn't actually take the next step of replacing the real crontab file in /var/spool/cron/crontabs.
So once I re-open the crontab, it will just show what I had previously, and not my saved edits.
It would be nice to know if this is not possible or if there are any alternative ways to run a GUI editor because using nano is a pain!
An interesting question that I haven't considered before, myself. Here's what's happening:
You set your editor to code
You crontab -e, which properly loads VSCode with the temporary crontab.
However, because this is a Windows GUI application, it returns control to the parent Linux application (crontab) immediately after starting VSCode. You can see the same result if you just start notepad.exe from your shell in WSL: once Notepad starts (rather than exits), control returns to the shell.
If you switch back to your terminal at this point, you'll see that crontab detected that the editor that it launched exited (returned), and so it has already tried to copy the temporary file to the permanent location.
However, since the temporary file doesn't yet have any changes, crontab decides there's nothing to do.
Editing the file in VSCode and saving it has no effect, other than to leave a dangling /tmp/... file hanging around (since crontab isn't there to clean up).
So what's the solution? We need a way to launch a Windows GUI application and prevent it from returning control to crontab until you are done editing.
I originally thought something from this question might work, but the problem is that the actual command that launches the Windows process is embedded in a shell script, which you can see with less "$(which code)" (or code "$(which code)"), but it's probably not a great idea to edit this.
So the next-best thing I came up with is a simple "wrapper" script around the (already-a-wrapper) code command. Create ~/.local/bin/code_no_fork.sh (it could live anywhere) with:
#!/usr/bin/env bash
# Launch VSCode with the file crontab hands us, then block until the
# user presses Space, so crontab doesn't see its editor exit immediately.
code "$@" > /dev/null
echo "Press Spacebar to continue"
read -r -s -d ' '
Credit: This answer for the Spacebar approach
Make the script executable (chmod +x) and then:
EDITOR=~/.local/bin/code_no_fork.sh crontab -e
After you make your edits in VSCode, simply press Space to allow the script to continue/exit, at which point crontab will (assuming no errors were detected) install the new Crontab.
Alternatives
This should typically only be a problem with Windows GUI applications, so the other possible avenue is simply to use any Linux editor that doesn't fork. If you want a GUI editor, that's entirely possible as long as you are running a WSL release that includes WSLg support (now available for Windows 10 and 11).
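Depending on your VSCode version, there may also be a simpler route: the code launcher accepts a --wait flag that blocks until the file is closed in the editor, which would sidestep the wrapper entirely (assuming your crontab honors an EDITOR value that contains arguments):

EDITOR="code --wait" crontab -e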
I won't offer any individual editor suggestions since that would get into "opinion" and "software recommendation" territory, which is off-topic here.

How to stop inotify from monitoring a directory?

I've ended up in a situation where a directory is being monitored, apparently by inotify, to trigger a process that doesn't exist.
I simply want to stop inotify from monitoring the directory, but after searching and reading the man page I can't find out how to do this. The manual mentions inotify_rm_watch, but neither that, nor int inotify_rm_watch, nor inotify is recognized on the command line.
The directory is still monitored after rebooting the system, so it's not as simple as just killing a process.
How do I permanently stop a directory being monitored by inotify? Is there some inotify config file that lists what is monitored that I should remove it from?
inotify_rm_watch is a programming interface that needs to be called from the same process that called inotify_add_watch in the first place.
Inotify is used by programs to react to file changes. To stop it from happening you have to stop the specific program using inotify. But in most cases you probably don't want to stop programs from watching for file changes because it is part of intended behavior for them.
You can list all programs using inotify with the following shell command:
ps -p $(find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print 2> /dev/null | sed -e 's/^\/proc\/\([0-9]*\)\/.*/\1/')
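To see what a particular process from that listing is watching, you can dump the kernel's fdinfo for its inotify descriptors; the watched paths themselves aren't recorded, but the inode numbers are. A rough sketch (1234 is a hypothetical PID from the listing above; note that fdinfo prints inodes in hex, while ls -i prints decimal):

# List the inotify watch entries (wd, ino, sdev, mask) held by PID 1234.
for fd in /proc/1234/fd/*; do
    [ "$(readlink "$fd")" = "anon_inode:inotify" ] &&
        cat "/proc/1234/fdinfo/${fd##*/}"
done
# Then compare against the inode of the directory in question:
ls -di /path/to/watched/dir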
inotify watches are NOT preserved past the termination of the process that added the watch, let alone reboots. inotify_rm_watch is not a command-line utility, but a function meant to be called by the process that owns the watch.
If the directory is being monitored, it's because there's a process running that is monitoring it. Stop running that process, and the directory will no longer be monitored.

Can I setuid a perl script?

I made a perl script to change the owner of a file owned by some other user. The script is complete. My administrator saved it in the /sbin directory and set the setuid bit on it using chmod u+s name_of_script. But when I run this script, it gives me an error that the chown operation is not permitted. I made a C program following the same steps, and it works. So if setuid worked for perl scripts, I should not be getting that error, since the C code didn't give me any. Can I setuid a perl script, or should I go with the C code?
Please don't tell me to ask the administrator to change the owner each time. On the server I have a user named staging, and I am hosting a Joomla site under it. When I install a plugin, the files related to that plugin are owned by www-data, which is why I don't want to go to the admin each time. Other solutions to the underlying problem are also welcome.
Many unix systems (probably most modern ones) ignore the suid bit on interpreter scripts, as it opens up too many security holes.
However, if you are using perl < 5.12.0, you can run perl scripts with setuid set, and they will run as root. It works like this: when the normal perl interpreter runs and detects that the file you are trying to execute has the setuid bit set, it executes a program called suidperl. suidperl takes care of elevating the user's privileges and starting the perl interpreter in a super-secure mode. suidperl itself runs setuid root.
One of the consequences of this is that taint mode is turned on automatically. Other additional checks are also performed. You will probably see messages like:
Insecure $ENV{PATH} while running setuid at ./foobar.pl line 3.
perlsec provides some good information about securing such scripts.
suidperl is often not installed by default. You may have to install it via a separate package. If it is not installed then you get this message:
Can't do setuid (cannot exec sperl)
Having said all of that, you would be much better off using sudo to execute actions with elevated privileges. It is much more secure, as you can specify exactly what is allowed to be executed via the sudoers file.
As of perl 5.12.0, suidperl was dropped. As a result, if you want to run a perl script on perl >= 5.12.0 with setuid set, you would have to write your own C wrapper. Again I recommend sudo as a better alternative.
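For example, a sudoers entry can be scoped to exactly one command instead of blanket root access (the user name and script path here are hypothetical; edit with visudo):

# /etc/sudoers.d/fix-owner
# staging may run this one command as root, and nothing else.
staging ALL=(root) NOPASSWD: /usr/bin/perl /sbin/fix_owner.pl

The user then runs sudo /usr/bin/perl /sbin/fix_owner.pl; any deviation in the command line is refused.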
No, you cannot use setuid (aka chmod +s) on scripts. The script's interpreter would be the thing that would actually need to be setuid, but doing that is a really bad idea. REALLY bad.
If you absolutely must have something written in Perl as setuid, the typical thing to do would be to make a small C wrapper that is setuid and executes the Perl script after starting. This gives you the best of both worlds in having a small and limited setuid script but still have a scripting language available to do the work.
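From the shell side, installing such a wrapper looks roughly like this, assuming wrapper.c (hypothetical) does nothing beyond sanitizing its environment and execv()-ing perl on a hard-coded script path:

# Compile, hand the binary to root, and set the setuid bit.
# Mode 4755 = setuid root + rwxr-xr-x.
gcc -o /sbin/fix_owner wrapper.c
chown root:root /sbin/fix_owner
chmod 4755 /sbin/fix_owner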
If you have a sudo configuration that allows it (as most desktop linux distributions do for normal users), you can start your perl script with this line:
#!/usr/bin/env -S -i MYVAR=foo sudo --preserve-env perl -w -T
Then in your script before you use system() or backticks explicitly set your $ENV{PATH} (to de-taint it):
$ENV{PATH} = '/usr/bin';
Other environment variables that your script explicitly mentions, or that get used implicitly by perl itself, will have to be similarly de-tainted (see man perlsec).
This will probably (again depending on your exact sudo configuration) get you to the point where you only have to type in your root password once (per terminal) to run the script.
To avoid having to type your password at all you can add a line like this to the bottom of /etc/sudoers:
myusername ALL=(ALL) NOPASSWD:ALL
Of course you'd want to be careful with this on a multi-user system.
The -S option to env splits the string into separate arguments (making it possible to use options and combinations of programs like sudo/perl with the shebang mechanism). You can use -vS instead to see what it's doing.
The -i option to env clears the environment entirely.
MYVAR=foo introduces an environment variable definition.
The --preserve-env option to sudo will preserve MYVAR and others.
sudo sets up a minimal environment for you when it finds e.g. PATH to be missing.
The -i option to env and --preserve-env option to sudo may both be omitted and you'll probably end up with a slightly more extensive list of variables from your original environment including some X-related ones (presumably the ones the sudo configuration considers safe). --preserve-env without -i will end up passing along your entire unsanitized environment.
The -w and -T options to perl are generally advisable for scripts running as root.

Should I turn a perl script that parses a /var/log/.* file into a daemon?

I am writing a perl script to parse, for example, /var/log/syslog.
The perl script triggers further subsequent tasks when particular events in the log appear. The log is parsed following the advice of this post:
Command line: monitor log file and add data to database
which, I believe, amounts to reading the log through a pipe.
Now I'd like this script to forever run in the background.
This sounds like a daemon to me, and the daemon program referenced in the following question seems ideal:
How can I run a Perl script as a system daemon in linux?
But from this post, it seems clear that daemons have no open file handles. So how can I have a daemon, or a perl script that becomes a daemon, that monitors a logfile?
It sounds like what you want is a daemon. In that case the advice given in the second post you reference is best practice. However, you do have other options, like daemontools, which removes the fork complexity.
Daemons are allowed to have filehandles, but you should close STDIN, STDOUT, and STDERR because you shouldn't really use them anymore. A lot of this has to do with the way fork works in *nix systems. Just open the pipe filehandle after your second fork, and you shouldn't have any issues.
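As a rough sketch of the shape this takes from the shell (watcher.pl is a hypothetical stand-in for your parser, reading the log from its stdin):

#!/bin/sh
# Detach the log-following pipeline from the terminal: setsid gives it
# its own session, and the standard streams are pointed away from the tty.
setsid sh -c 'tail -F /var/log/syslog | perl /usr/local/bin/watcher.pl' \
    </dev/null >/dev/null 2>&1 &

Under daemontools you would skip the setsid and redirections and let supervise keep the pipeline running in the foreground.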
This doesn't answer your question, but it is another route to consider, which may or may not be appropriate for you:
rsyslog can execute a program when a certain message is logged
see Filter Conditions for setting up the trigger, Templates for formatting the output that's passed to the script, and Actions > Shell Execute for specifying the executable.
Be sure to read the security implications, and note that rsyslog blocks while the external program runs. But if your script runs reliably quickly, it may be an option.
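In rsyslog's legacy syntax the trigger is a one-liner; a minimal sketch (the match string and handler path are placeholders):

# /etc/rsyslog.d/50-trigger.conf
# Run the handler whenever a message contains the given text.
:msg, contains, "disk failure" ^/usr/local/bin/handler.sh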

How to open Perl file handle to write data via sudo (or as another user)

I'd like to write data to a file, but the file handle should be opened with access permissions for a specific user.
Thus, the following statement:
open (FH, "> $filename") or die "$#\n";
would allow writing to a file as that particular user.
Is there a way to do this within a Perl script, without the entire script being run with sudo -u $username?
There are two established ways. Stackers, you are invited to edit this answer to fill in the drawbacks for each.
Run the program with sudo. The first thing you do in the program is open the files you need and keep the handles, then immediately drop the root privileges. Any further processing must take place with low privileges. The Apache httpd works like this: it opens the log files as root, but continues running as nobody or similar.
If you don't like that way, run the program normally, and when you need to elevate, create a new process and have it run with elevated privileges via sudo, su -, kdesu/gksu, or whatnot, as configured by the user. The CPAN client works like this: it fetches, unpacks, builds, and tests a module as a normal user, but calls sudo make install etc. when it's time to install.
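The second approach needs no special privileges on the script itself; you elevate only for the write. A minimal sketch of the idea from the shell (the user and path are hypothetical), which a Perl script could drive via system() or a piped open:

# Only tee runs under sudo; it writes its stdin to a file as appuser.
printf '%s\n' "line of data" | sudo -u appuser tee /home/appuser/out.txt >/dev/null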
An alternative to daxim's suggestions is to have the script owned by the specific user and have the script permissions include the setuid and/or setgid bits.