mplayer slave: set volume before starting the playback - mplayer

I know this is about a specific program (mplayer back-end); however it will be used to program a front-end so I hope it is still considered on-topic on Stack Overflow.
I want to run two mplayer slave instances which will be used to fade between different audio streams (webradio; smoothly change the channel). To do this, I set the "software volume" of mplayer so it will not affect the PCM output channel of the sound card but insert a software volume mixer to adjust the volume.
However, I encounter the following problem.
I start mplayer with the following command (can be tested on command-line):
mplayer -slave -idle -softvol
and send the following commands to mplayer:
loadfile <url>
set volume 0
It starts playing the file at 100% volume for a short time and then jumps to 0% volume. If I swap the two commands, mplayer tells me that it can't adjust the volume:
Failed to set property 'volume' to '0'.
ANS_ERROR=PROPERTY_UNAVAILABLE
Obviously, the audio filter isn't yet loaded / audio output not yet set up or something like that, so mplayer can't change the volume of a non-existing audio output.
Can I force mplayer to initialize everything in advance so that I can set the volume to 0%, load the file and then increase the volume to fade in the playback?
I already checked whether I can set the volume after playing some file (e.g. a silent dummy file); mplayer complains with the same error. For now, the only option I can think of is to start such a dummy file, adjust the volume, stop the dummy file, load the correct file to be played, and it will start with the volume just set. But I can't believe this is the best option.

I solved the problem myself: while I tried to follow this guide, -af volume=0 didn't help. However, there is also a -volume 0 command-line option, which worked for me:
mplayer -slave -idle -softvol -volume 0
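A minimal sketch of how the fade-in could then be driven over the slave pipe, building on the -volume 0 start (the FIFO path /tmp/mp1, $url and the 0.05 s step are placeholder choices):
mkfifo /tmp/mp1
mplayer -slave -idle -softvol -volume 0 -input file=/tmp/mp1 "$url" &
# ramp the software volume from 0 to 100 in 1% steps;
# the trailing 1 makes "volume" set an absolute value
for i in $(seq 1 100); do
    echo "volume $i 1" > /tmp/mp1
    sleep 0.05
done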

I have found the following to work perfectly for my home player.
Important when setting the volume: the trailing 1 in echo "volume $i 1" (it makes the value absolute); without it the volume does not change.
First create the FIFOs with mkfifo /tmp/mi1 /tmp/mi2 and set $url1 and $url2 to your radio stream URLs, then run:
mplayer -softvol -slave -input file=/tmp/mi1 $url1 &>/dev/null &
mplayer -softvol -slave -input file=/tmp/mi2 $url2 &
for ((i=1; i<=100; i+=1)); do
    sleep 0.25; echo "volume $(( 100 - $i )) 1" > /tmp/mi1
    sleep 0.25; echo "volume $i 1" > /tmp/mi2
done
(sleep 5; echo quit > /tmp/mi1; echo quit > /tmp/mi2) &
Happy listening.

sh script gets stuck on read command in while loop

I'm trying to write a script that I'll put in my Pi's cron to check for network connectivity every 10 seconds. If it fails a ping to Google, it writes "false" to a text file; the next time the ping succeeds, it restarts a program, because that specific program has issues with reconnecting to the network automatically.
The script seemed to be working when I was executing it from the terminal out of the same directory. Then I cd'd back to / and added a bunch of comments, and now it just exits without any output, and for the life of me I can't figure out where I messed it up. I'm still relatively new to scripting, so I could be missing something absolutely obvious here, but I couldn't find anything useful on Google.
File hierarchy:
/home/pi/WEB_UI/
Inside the WEB_UI folder are both of the scripts I'm running here.
nonet.sh - the script in question
pianobar.sh - a simple script to pkill a program and reload it after 5 seconds.
var.txt - a text file that will only ever contain "true" or "false"
I've tried removing all of the comments, changing the file locations to ./ and making the while; do commands a single line, but I can't figure out where the issue is. If I run sh -x on the script, it returns:
pi@raspberrypi:~/WEB_UI $ sh -x nonet.sh
+ ping -q -c 1 -W 1 google.com
+ read line
Interestingly, I get the same result from a test script I was using that was basically
"if var.txt says 'true', echo 'up', else echo 'down'"
I wonder if something is wrong with my sh interpreter?
#!/bin/sh
#ping google, if successful return true
if ping -q -c 1 -W 1 google.com >/dev/null; then
    #read variable line, perform action do
    while read line
    do
        #leading $ means return previous output. if line is false:
        if [ "$line" = "false" ]
        then
            #return network up text, run pianobar script, set var.txt to true.
            echo "the network is back up"
            sh /home/pi/WEB_UI/pianobar.sh
            echo true > /home/pi/WEB_UI/var.txt
        else
            #otherwise return network is up, set var.txt to true
            echo "the network is up"
            echo true > /home/pi/WEB_UI/var.txt
        #fi ends an if statement, done ends a while loop.
        #text after done tells the while loop where to get the line variable
        fi
    done < /home/pi/WEB_UI/var.txt
else
    while read line
    do
        if [ "$line" = "false" ]
        then
            #if var.txt is already false, ping google again
            if ping -q -c 1 -W 1 google.com >/dev/null; then
                #if ping works, the network is back, restart pianobar, set var to true
                echo "the network is back up"
                sh /home/pi/WEB_UI/pianobar.sh
                echo true > /home/pi/WEB_UI/var.txt
            else
                #if var.txt is false, network is still down. wait.
                echo "the network is still down"
            fi
        else
            echo "the network is down"
            echo false > /home/pi/WEB_UI/var.txt
        fi
    done < /home/pi/WEB_UI/var.txt
fi
The script SHOULD just echo a simple line saying whether the network is up, down, back up, or still down, depending on which checks it passes or fails. Any assistance would be greatly appreciated!
As Shellter said in the comments above, the issue was that I needed to add a \n to the end of the line in my var.txt.
I think I saw another post recently where while read... was frustrated by a missing \n char, so maybe you want to do printf "false\n" > file instead. Good luck.
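A quick way to see the difference (the file path here is just an example):
printf 'false' > /tmp/var.txt                                  # no trailing newline
while read line; do echo "got: $line"; done < /tmp/var.txt    # prints nothing
printf 'false\n' > /tmp/var.txt                                # with trailing newline
while read line; do echo "got: $line"; done < /tmp/var.txt    # prints: got: false
read returns a non-zero status when it hits end-of-file without a newline, so the loop body never runs for that last line.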

How can I fix "/etc/cron.daily/logrotate: gzip: stdin: file size changed while zipping"?

In the last few days I have been getting a daily mail from cron's logrotate task:
/etc/cron.daily/logrotate:
gzip: stdin: file size changed while zipping
How can I fix it?
Thanks,
Gian Marco.
Here's a blog post in French which gives a solution.
In English, you can read this bug report.
To summarise:
First you have to add the --verbose option to the script /etc/cron.daily/logrotate so that the next run gives more information and you can identify which log rotation causes the problem.
#!/bin/sh
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate --verbose /etc/logrotate.conf
Next you have to add the delaycompress option to the logrotate configuration. For example, I added it to nginx's logrotate configuration in /etc/logrotate.d/nginx:
/var/log/nginx/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    ...
}
Upstart will close (and reopen) its log file when it notices that the file has been deleted. However, if you look at what gzip does, you see that it doesn't delete the input file until after it is done writing the output file. That means there is always a race condition in which log lines written while gzip is running can be lost.
You can disable the warning using gzip --quiet, but that doesn't hide the fact that you might still lose log lines.
This means that delaycompress is not a generic fix; it's a specific fix to a specific problem.
The real solution is probably a combination of delaycompress and being able to send the process a signal that makes it reopen its log file. That makes the race condition go away in practice (unless you rotate multiple times per second :) ).
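As a rough sketch of that combination (nginx is only an example here; the pid file location and the signal depend on the daemon whose logs you rotate):
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # ask the daemon to reopen its log files; compression of the
        # previous file is delayed until the next rotation
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}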

Script response if md5sum returns FAILED

Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
    cd "$i/aaaCryptoAudit"
    /opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start to try and recognize the output and perhaps trigger some sort of response by the system if there is a 'FAILED'?
I've worked a bit with Perl, trying to parse log files before, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I'd want to put this script into a cron job that runs every minute. Some guys told me that an inotify job or script (I'm not familiar with this) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs a grep -q on 'FAILED' and if it picks anything up, it sounds the alarm (tbd what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
    # Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
    cd "$i/aaaCryptoAudit"
    /opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
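If the cron job should do more than echo, one possible sketch is to capture the failures and raise the alarm from there (the syslog tag and the temp file path are arbitrary choices, not part of your setup):
for i in "${locations[@]}"
do
    cd "$i/aaaCryptoAudit" || continue
    /opt/bin/md5sum -c /share/homes/admin/crypto.chk
done 2>&1 | grep "FAILED" > /tmp/crypto_failures
if [ -s /tmp/crypto_failures ]; then
    # -s: the file exists and is non-empty, i.e. at least one FAILED line
    logger -t cryptocheck "honeypot checksum mismatch: $(cat /tmp/crypto_failures)"
fi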

How to check whether mplayer plays a file or not?

I am trying to check if mplayer is playing an mp3 file. I currently use this line from Python:
strace -p " + str(mplayer.pid) + " 2>&1 | head -n 200 | grep 'read(3'
That is because I know that mplayer makes system calls when reading the file from file descriptor number 3. However, no matter how many lines I analyze, there is not a single read operation.
I only know of one reliable way of determining whether MPlayer is playing something, and that is by running it as a slave and reading its ASCII output pipe continuously:
watching that pipe for occurrences of text such as media data not found, Failed to open or STARTING PLAYBACK, and checking whether the process has quit (it is done playing).
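A rough shell sketch of that approach (the FIFO path and $url are placeholders, and the exact marker strings can differ between MPlayer versions, so treat them as examples):
mkfifo /tmp/mp_in
mplayer -slave -idle -input file=/tmp/mp_in "$url" 2>&1 | \
while read -r line; do
    case "$line" in
        *"Starting playback"*)  echo "playback started" ;;
        *"Failed to open"*)     echo "playback failed"  ;;
    esac
done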

MongoDB log file growth

Currently my log file sits at 32 meg. Did I miss an option that would split the log file as it grows?
You can use logrotate to do this job for you.
Put this in /etc/logrotate.d/mongod (assuming you use Linux and have logrotate installed):
/var/log/mongo/*.log {
    daily
    rotate 30
    compress
    dateext
    missingok
    notifempty
    sharedscripts
    copytruncate
    postrotate
        /bin/kill -SIGUSR1 `cat /var/lib/mongo/mongod.lock 2> /dev/null` 2> /dev/null || true
    endscript
}
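To check that the configuration behaves as expected, you can force an immediate run against just this file (assuming a standard logrotate install):
logrotate --force --verbose /etc/logrotate.d/mongod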
If you think that 32 megs is too large for a log file, you may also want to look inside to what it contains.
If the logs seem mostly harmless ("open connection", "close connection"), then you may want to start mongod with the --quiet switch. This will reduce some of the more verbose logging.
Rotate the logs yourself
http://www.mongodb.org/display/DOCS/Logging
or use 'logrotate' with an appropriate configuration.
Using logrotate is a good option. However, it will generate the two log files that fmchan commented about, and you will have to follow Brett's suggestion to "add a line to your postrotate script to delete all mongod style rotated logs".
Also, copytruncate is not the best option: there is always a window between the copy and the truncate, so some mongod log lines may get lost. Check the logrotate man page or refer to this copytruncate discussion.
Just to provide one more option: you could write a script that sends the rotate signal to mongod and removes the old log files. mongologrotate.sh is a simple reference script that I have written. You could set up a simple cron job to call it periodically, e.g. every 30 minutes.
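For reference, a minimal sketch of what such a script could look like (the lock and log paths match the logrotate example above and assume a stock Linux mongod package; the 30-day retention is an arbitrary choice):
#!/bin/sh
# ask mongod to rotate its log file (same effect as the logRotate command)
/bin/kill -SIGUSR1 `cat /var/lib/mongo/mongod.lock 2>/dev/null` || exit 0
# give mongod a moment to open the new file, then prune old rotated logs
sleep 1
find /var/log/mongo -name 'mongod.log.*' -mtime +30 -delete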