Why is inotifywait not executing the body of the while loop - inotify

I had inotifywait working well, and suddenly it has stopped working as I expected. I'm not sure what's happening. My script:
#!/bin/bash
while inotifywait -e modify foo.txt; do
    echo we did it
done
echo $?
When I execute it, I get:
Setting up watches.
Watches established.
When I edit foo.txt, I get:
0
And then the script exits.
Why is the while loop exiting given that the exit code is 0?
Why is it never echoing the content of the while loop?
UPDATE
This version does work. However, I still don't understand what's wrong with the original (which, I swear, was working for some time).
#!/bin/bash
echo helle
while true; do
    inotifywait -e modify foo.txt
    echo hello
done
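(As an aside, not part of the original question: inotifywait also has a monitor mode, -m, which keeps running and prints one line per event, so the loop body runs for every modification. A minimal sketch:)
#!/bin/bash
# -m keeps inotifywait alive instead of exiting after the first event;
# each event is printed as "<watched name> <events>" and read by the loop.
inotifywait -m -e modify foo.txt | while read -r file events; do
    echo "we did it: $events on $file"
done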


Supervisord- Execute a command before starting the application / program

Using supervisord, how do I execute a command before running the program?
For example in the code below, I want a file to be created before starting the program. In the code below I am using tail -f /dev/null to simulate the background process but this could be any running program like '/path/to/application'.
I tried '&&' and this doesn't seem to work. The requirement is that the file has to be created first in order for the application to work.
[supervisord]
nodaemon=true
logfile=~/supervisord.log
[program:app]
command:touch ~a.c && tail -f /dev/null
The problem is that supervisord isn't running a shell to interpret the command setting, so "&&" is just one of five space-separated arguments being passed to the touch command; if this ran successfully, there should be some unusual filenames in its working directory now.
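In other words, with no shell involved, the configured line is roughly equivalent to running (illustrative only):
touch '~a.c' '&&' 'tail' '-f' '/dev/null'
# every word after 'touch', including '&&' and 'tail', is treated as a filename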
You can use a shell as your command and pass it the shell logic you would like:
command=/bin/sh -c "touch ~a.c && tail -f /dev/null"
Usually, this type of shell wrapper should be the interface provided and maintained by the app itself; supervisord and other process managers then just call it with paths and options, e.g.:
command=myappswrapper.sh ~a.c
(where myappswrapper.sh is:)
#!/bin/sh
touch $1 && tail -f /dev/null
Here is a trick: use a shell script to do the setup and anything else you need.
[program:app]
command=sh /path/to/your/script.sh
Your script.sh could be:
touch ~a.c
exec tail -f /dev/null
Note the exec: it replaces the wrapper shell with tail -f, so supervisord ends up supervising the long-running program directly instead of an intermediate shell.

How is this bash script resulting in an infinite loop?

From some Googling (I'm no bash expert by any means) I was able to put together a bash script that allows me to run a test suite and output a status bar at the bottom while it runs. It typically takes about 10 hours, and the status bar tells me how many tests passed and how many failed.
It works great sometimes, however occasionally I will run into an infinite loop, which is bad (mmm-kay?). Here's the code I'm using:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_test_suite 2>&1) | tee out.txt |
while IFS=read -r line;
do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c FAILED out.txt)${WHITE}\r" 2>&1;
done
What happens when I encounter the bug is I'll have an error message repeat infinitely, causing the log file (out.txt) to become some multi-megabyte monstrosity (I think it got into the GB's once). Here's an example error that repeats (with four lines of whitespace between each set):
warning caused by MY::Custom::Perl::Module::TEST_FUNCTION
print() on closed filehandle GEN3663 at /some/CPAN/Perl/Module.pm line 123.
I've tried taking out the 2>&1 redirect, and I've tried changing while IFS=read -r line; to while read -r line;, but I keep getting the infinite loop. What's stranger is this seems to happen most of the time, but there have been times I finish the long test suite without any problems.
EDIT:
The reason I'm writing this is to upgrade from a black & white test suite to a color-coded test suite (hence the ANSI codes). Previously, I would run the test suite using
run_test_suite > out.txt 2>&1 &
watch 'grep -c FAILED out.txt; grep -c passed out.txt; tail -20 out.txt'
Running it this way gets the same warning from Perl, but prints it to the file and moves on, rather than getting stuck in an infinite loop. Using watch also prints stuff like [32m rather than actually rendering the text as green.
I was able to fix the Perl errors, and the bash script seems to work well now after a few modifications. However, it seems this would be a safer way to run the test suite in case something like that were to happen in the future:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
run_full_test > out.txt 2>&1 &
tail -f out.txt | while IFS= read line; do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
There are some downsides to this. Mainly, if I hit Ctrl-C to stop the test, it appears to have stopped, but really run_full_test is still running in the background and I need to remember to kill it manually. Also, when the test is finished tail -f is still running. In other words there are two processes running here and they are not in sync.
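If that ever becomes a problem again, one way to keep the two processes in sync (a sketch, not from the original post; it relies on GNU tail's --pid option) is:
#!/bin/bash
run_full_test > out.txt 2>&1 &
test_pid=$!
# Kill the background test run if this script exits for any reason,
# including Ctrl-C.
trap 'kill "$test_pid" 2>/dev/null' EXIT
# --pid makes tail -f stop on its own once the test run finishes.
tail --pid="$test_pid" -f out.txt | while IFS= read -r line; do
    printf '%s\n' "$line"
done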
Here is the original script, slightly modified, which addresses those problems, but isn't foolproof (i.e. can get stuck in an infinite loop if run_full_test has issues):
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_full_test 2>&1) | tee out.txt | while IFS= read line; do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
The bug is in your script. That's not an I/O error; it's an illegal-argument error: it happens when the variable you provide as a filehandle isn't a handle at all, or is one that you've already closed.
Writing to a broken pipe, by contrast, results in the process being killed by SIGPIPE or in print returning false with $! set to EPIPE.

Get current playing file in MPlayer slave mode

Problem: I can't find any way to reliably get the current playing file in an MPlayer playlist.
Here is how far I have gotten. This working ash script monitors a text file with the path to the current playlist. When I update the file, the script closes the old instance of MPlayer and opens a new one with the new playlist:
# POLL PLAYLIST FILE FOR CHANGES
CURRENTPLAYLISTPATH=/home/tc/currentplaylist
INFIFO=/tmp/mplayer-in
CURRENTPLAYLIST="NEVERMATCHAPLAYLIST"
FIRSTRUN=1
while [ 1 ];
do
    # CHECK FOR NEW PLAYLIST
    NEWPLAYLIST=$(head -n 1 $CURRENTPLAYLISTPATH)
    if [[ "$NEWPLAYLIST" != "$CURRENTPLAYLIST" ]]; then
        if [ "$FIRSTRUN" == 0 ]; then
            echo "quit" > "$INFIFO"
        fi
        # CREATE NAMED PIPE, IF NEEDED
        trap "rm -f $INFIFO" EXIT
        if [ ! -p $INFIFO ]; then
            mkfifo $INFIFO
        fi
        # START MPLAYER
        mplayer -fixed-vo -nolirc -vc ffmpeg12vdpau,ffh264vdpau, -playlist $NEWPLAYLIST -loop 0 -geometry 1696x954 -slave -idle -input file=$INFIFO -quiet -msglevel all=0 -identify | tee -a /home/tc/mplayer.log &
        CURRENTPLAYLIST=$NEWPLAYLIST
        FIRSTRUN=0
    fi
    sleep 5;
done
My original plan was just to use the "-identify" flag and parse the log file. This actually works really well up until I need to truncate the log file to keep it from getting too large. As soon as my truncating script is run, MPlayer stops writing to the log file:
FILENAME=/home/tc/mplayer.log
MAXCOUNT=100
if [ -f "$FILENAME" ]; then
    LINECOUNT=`wc -l "$FILENAME" | awk '{print $1}'`
    if [ "$LINECOUNT" -gt "$MAXCOUNT" ]; then
        REMOVECOUNT=`expr $LINECOUNT - $MAXCOUNT`
        sed -i 1,"$REMOVECOUNT"d "$FILENAME"
    fi
fi
I have searched and searched but have been unable to find any other way of getting the current playing file that works.
I have tried piping the output to another named pipe and then monitoring it, but it only works for a few seconds, and then MPlayer completely freezes.
I have also tried using bash (instead of ash) and piping the output to a function like the following, but get the same freezing problem:
function parseOutput()
{
    while read LINE
    do
        echo "get_file_name" > /tmp/mplayer-in
        if [[ "$LINE" == *ANS_FILENAME* ]]
        then
            echo ${LINE##ANS_FILENAME=} > "$CURRENTFILEPATH"
        fi
        sleep 1
    done
}
# START MPLAYER
mplayer -fixed-vo -nolirc -vc ffmpeg12vdpau,ffh264vdpau, -playlist $NEWPLAYLIST -loop 0 -geometry 1696x954 -slave -idle -input file=/tmp/mplayer-in -quiet | parseOutput &
I suspect I am missing something very obvious here, so any help, ideas, points in the right direction would be greatly appreciated.
fodder
Alright then, so I'll post mine too.
Give this one a try (assuming there is only one instance running, like on fodder's machine):
basename "$(readlink /proc/$(pidof mplayer)/fd/* | grep -v '\(/dev/\|pipe:\|socket:\)')"
This is probably the safer way, since the file descriptors might not always be in the same order on all systems.
However, this can be shortened, with a little risk:
basename "$(readlink /proc/$(pidof mplayer)/fd/*)" | head -1
You might also like to install this:
http://mplayer-tools.sourceforge.net/
Well, I gave up on getting the track from MPlayer itself.
My 'solution' is probably too hackish, but works for my needs since I know my machine will only ever have one instance of MPlayer running:
lsof -p $(pidof mplayer) | grep -o "/path/to/my/assets/.*"
If anyone has a better option I'm certainly still interested in doing this the right way, I just couldn't make any of the methods work.
fodder
You can use the run command.
Put this in ~/.mplayer/input.conf:
DEL run "echo ${filename} ${stream_pos} >> /home/knarf/out"
Now if you press the delete key while playing a file it will do what you expect i.e. append the current file playing and the position in the stream to the ~/out file. You can replace echo with your program.
See the slave mode docs for more info (Ctrl-F somevar).
About getting properties from MPlayer:
I have used an inelegant solution, but it is working for me.
stdbuf -oL mplayer --slave --input=file=$FIFO awesome_awesome.mp3 |
{
    while IFS= read -r line
    do
        if [[ "${line}" == ANS_* ]]; then
            echo "${line#*=}" > "${line%=*}"   # echo property_value > property_name
        fi
    done
} &
mplayer_pid=$!
read filename < ./ANS_FILENAME
read timeLength < ./ANS_LENGTH
echo "($timeLength) $filename"
and so on.
The read loop runs in another process, which is why I've used files to pass the properties back.
'stdbuf' is there so that no output gets lost to buffering.
I started putting together a bash library to handle tasks like this. Basically, you can accomplish this by dumping the mplayer output to a file. Then you grep that dump for "Playing " and take the last result with tail. This should give you the name of the file that's currently playing or that last finished playing.
Take a look at my bash code. You'll want to modify the playMediaFile function to your needs, but the getMediaFileName function should do exactly what you're asking. You'll find the code on my github.
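For reference, the core of that approach can be sketched in a few lines (the paths, playlist variable, and log file name below are placeholders):
# Capture MPlayer's normal output to a log file.
mplayer -slave -idle -input file=/tmp/mplayer-in -playlist "$NEWPLAYLIST" > /tmp/mplayer-out.log 2>&1 &

# MPlayer prints a line like "Playing /path/to/track.mp3." for each track,
# so the last such line names the file currently (or most recently) playing.
grep 'Playing ' /tmp/mplayer-out.log | tail -n 1 | sed -e 's/^Playing //' -e 's/\.$//'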

How can I make a shell script indicate that it was successful?

If I have a basic .sh file containing the following script code:
#!/bin/sh
rm -rf "MyFolder"
How do I make this running script file display results to the terminal that will indicate if the directory removal was successful?
You don't really need to make it say it was successful. You could have it say something only on error ✖, and then silence means success ✔.
That's how the Unix philosophy works:
The rule of silence, also referred to as the silence is golden rule, is an important part of the Unix philosophy that states that when a program has nothing surprising, interesting or useful to say, it should say nothing. It means that well-behaved programs should treat their users' attention and concentration as being valuable and thus perform their tasks as unobtrusively as possible. That is, silence in itself is a virtue. http://www.linfo.org/rule_of_silence.html
That's the way rm itself behaves.
If you are asking about the general case, as suggested by your question's title, you can run your script with sh -x scriptname to see what it's doing. It's also quite common to write diagnostic output into the script itself, and control it with an option.
#!/bin/sh
verbose=false
case $1 in
    -v | --verbose )
        verbose=true
        shift ;;
esac

say () {
    $verbose || return
    echo "$0: $*" >&2
}

dir=$1    # directory to remove, passed as the first argument

say "Removing $dir ..."
rm -rf "$dir" || say "Failed."
If you run this script without any options, it will run silently, like a well-behaved Unix utility should. If you run it with the -v option, it will print some diagnostics to standard error.
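For example, if the script above is saved as cleanup.sh (a name chosen just for illustration) and given the directory as its argument:
$ ./cleanup.sh MyFolder            # silent on success
$ ./cleanup.sh -v MyFolder         # diagnostics go to standard error
./cleanup.sh: Removing MyFolder ...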
rm -rf "My Folder" && echo "Done" || echo "Error!"
You can read more about creating sequences of pipelines in the bash manual.
In bash (and other similar shells), the special variable $? gives you the exit code of the last executed command. So you can do:
#!/bin/sh
rm -rf "My Folder"
echo $?
UPDATE
If, once the rm command has executed, the directory doesn't exist (because it was successfully removed, or because it didn't exist in the first place), the script will print 0. If the directory still exists (meaning the command was unable to remove it), the script will print a non-zero exit code. If I understand the question properly, this is exactly the requested behavior. If it is not, please correct me.
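If you want an explicit message either way, the same exit status can drive an if/else; a minimal sketch along the lines of the question's script:
#!/bin/sh
if rm -rf "MyFolder"; then
    echo "MyFolder removed (or it never existed)"
else
    echo "Failed to remove MyFolder" >&2
    exit 1
fi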
The previous answer isn't quite right: with -f, rm doesn't exit with an error code > 0 when the directory isn't present.
Instead, I recommend using:
dir='/path/to/dir'
if [[ -d $dir ]]; then
    rm -rf "$dir"
fi
If you want rm to return a status, remove the -f flag.
Example on Linux Mint (the directory doesn't exist):
$ rm -rf /tmp/sdfghjklm
$ echo $?
0
$ rm -r /tmp/sdfghjklm
$ echo $?
1

Launching a Perl script without output

I'm making a script (Perl or shell) that launches a second Perl script. The script that it's launching has thousands of lines of output. So basically I want to make a script that launches another script without any output - and if possible run it within a screen session and then exit the script (yet keep the other running in the screen)? How can I do this?
When you launch your script, direct its output to /dev/null. To make a script run in the background, use the & symbol. For example, the following will show nothing in the console and run in the background...
echo hi > /dev/null &
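Applied to the question, that would look something like this (the script path is just a placeholder):
# discard stdout and stderr, keep running in the background
perl /path/to/second_script.pl > /dev/null 2>&1 &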
If you want to run in screen, you have to create a screenrc
#!/bin/sh
echo "screen my_perl_program" > /tmp/$$.screenrc
echo "autodetach on" >> /tmp/$$.screenrc
echo "startup_message off" >> /tmp/$$.screenrc
screen -L -dm -S session1 -c /tmp/$$.screenrc
Then you can reattach to it with screen -r session1.
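If all you need is the second script running quietly inside a detached screen session, both ideas can also be combined in one line (again with a placeholder path):
screen -dmS session1 sh -c 'perl /path/to/second_script.pl > /dev/null 2>&1'
You can then check on it later with screen -r session1 as above.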