How to check whether mplayer is playing a file or not?

I am trying to check whether mplayer is playing an mp3 file. I currently run this line from Python:
strace -p " + str(mplayer.pid) + " 2>&1 | head -n 200 | grep 'read(3'
I do this because I know that mplayer makes system calls when it reads the file from descriptor number 3. However, no matter how many lines I analyze, there is not a single read operation.

I only know of one reliable way of determining whether MPlayer is playing something, and that is to run it in slave mode and read its ASCII output pipe continuously.
Watch that pipe for text such as media data not found, Failed to open or STARTING PLAYBACK, and check whether the process has quit (in which case it is done playing).
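As a rough shell sketch of that approach (the file name is a placeholder, and the exact message strings and their casing vary between MPlayer versions, so treat them as assumptions to verify against your build's output):
mplayer -slave -quiet song.mp3 2>&1 | while read -r line; do
    case "$line" in
        *"Failed to open"*)    echo "could not open the file" ;;
        *"Starting playback"*) echo "now playing" ;;
    esac
done
# once the pipe closes, mplayer has quit, i.e. playback is over
echo "mplayer has exited"
The same loop can of course be driven from Python by reading the process's stdout line by line.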

How to prevent the output from being truncated when a WinDbg command produces too many rows?

If a WinDbg command produces a very large number of output rows, such as 100k rows, WinDbg ends up displaying only a few thousand of them and truncates the rest. My question is how to prevent the output from being truncated, or how to write all of the output rows to a local file so that none of them are lost. The "Write Window Text to File" option doesn't help.
Not sure if it would apply here, but the .logopen and .logclose commands might be useful in this case (they respectively open and close a log file which keeps a copy of the events and commands from the Debugger Command window).
See also Keeping a Log File in WinDbg.
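For instance, a session could look like this (the log path is just a placeholder):
.logopen C:\temp\windbg-output.log
$$ run the command that produces the huge output here
.logclose
Everything printed to the Debugger Command window between the two commands ends up in the log file.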
Sometimes simply piping works, especially when running cdb and quitting after executing just one command:
cdb -c "tc 100;q" calc >> foo.txt
You should have 100 calls; let's check:
grep -c !.*: foo.txt
256
Let's check how many sysenter instructions were executed and what the indices of the syscalls were:
grep sysenter -B 4 foo.txt | grep eax | awk "{print $1}"
eax=000000ea
eax=0000014d
eax=000000fb
We can also consume the output this way when the command runs for an indefinite amount of time, without running into file-locking issues, which helps if .logopen/.logclose isn't an option.
Try opening an additional command window with Ctrl+N and executing the command with the long output in it.

Script response if md5sum returns FAILED

Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check MD5 checksums of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start in order to parse this output and trigger some sort of response from the system if there is a 'FAILED'?
I've worked a bit with Perl trying to parse log files before, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I'd want to put this script into a cron job that runs every minute. Some people have told me that an inotify job or script (I'm not familiar with this) would be better than doing it this way.
Any suggestions?
--- edit
I made another script that calls the script above and sends its output to a file. The new script then runs grep -q on 'FAILED' and, if it picks anything up, sounds the alarm (TBD what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
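Putting the two ideas together, a possible revision of cryptocheck.sh might look like the sketch below (paths and locations are taken from the question; the echo is a placeholder for whatever the real alarm action ends up being):
#!/bin/bash
#cryptocheck.sh (revised sketch)
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
failed=0
for i in "${locations[@]}"
do
    # skip a location whose honeypot folder is missing instead of checking the wrong directory
    cd "$i/aaaCryptoAudit" || continue
    if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
    then
        failed=1
    fi
done
if [ "$failed" -eq 1 ]
then
    echo "LIGHT THE SIGNAL FIRES"   # replace with the real alarm action
fi
A crontab entry such as * * * * * /share/homes/admin/cryptocheck.sh would then run it every minute, as proposed in the question.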

CMD: Export all the screen content to a text file

In the command prompt, how do I export all of the content on the screen to a text file (basically a copy command, just not by right-clicking and using the clipboard)?
This command works, but only for the commands you executed, not for their actual output:
doskey /HISTORY > history.txt
If you want to append a file instead of constantly making a new one/deleting the old one's content, use double > marks. A single > mark will overwrite all the file's content.
Overwrite file
MyCommand.exe>file.txt
^This will open file.txt if it already exists and overwrite the data, or create a new file and fill it with your output
Append file from its end-point
MyCommand.exe>>file.txt
^This will append file.txt from its current end of file if it already exists, or create a new file and fill it with your output.
Update #1 (advanced):
My batch-fu has improved over time, so here's some minor updates.
If you want to differentiate between error output and normal output for a program that correctly uses the standard streams, STDOUT/STDERR, you can do so with minor changes to the syntax. I'll just use > for overwriting in these examples, but they work perfectly fine with >> for appending.
A 1 before the > or >> is the flag for STDOUT, and a 2 is for STDERR. If you actually need to output a literal one or two right before the redirection symbol, this mechanism can lead to strange, unintuitive errors if you don't know about it; that's especially relevant when writing a single result number into a file.
Now that you know you have more than one stream available, this is a good time to show the benefits of outputting to nul. Outputting to nul works conceptually the same way as outputting to a file, except that nothing is kept: instead of going to a file or to your console, the output simply goes into the void.
STDERR to file and suppress STDOUT
MyCommand.exe 1>nul 2>errors.txt
STDERR to file to only log errors. Will keep STDOUT in console
MyCommand.exe 2>errors.txt
STDOUT to file and suppress STDERR
MyCommand.exe 1>file.txt 2>nul
STDOUT only to file. Will keep STDERR in console
MyCommand.exe 1>file.txt
STDOUT to one file and STDERR to another file
MyCommand.exe 1>stdout.txt 2>errors.txt
The only caveat here is that this can create a 0-byte file for a stream that never gets used. For example, if no errors occurred, you might end up with a 0-byte errors.txt file.
Update #2
I started noticing weird behavior when writing console apps that wrote directly to STDERR, and realized that if I wanted my error output to go to the same file when using basic redirection, I either had to combine streams 1 and 2 or just use STDOUT. The problem was that I didn't know the correct way to combine streams, which is this:
%command% > outputfile 2>&1
Therefore, if you want all STDOUT and STDERR piped into the same stream, make sure to use that like so:
MyCommand.exe > file.txt 2>&1
The redirection actually defaults to 1> or 1>>, even if you don't explicitly put a number in front of it, and the 2>&1 merges STDERR into the same stream.
Update #3 (simple)
Null for Everything
If you want to completely suppress STDOUT and STDERR, you can do it this way. As a warning, not every program sends all of its text through STDOUT and STDERR, but this will work for the vast majority of use cases.
STD* to null
MyCommand.exe>nul 2>&1
Copying a CMD or PowerShell session's command output
If all you want is the command output from a CMD or PowerShell session you just finished (or any other shell, for that matter), you can usually just select that console window, press Ctrl+A to select all of its content, then Ctrl+C to copy it. Then you can do whatever you like with the copied content while it's in your clipboard.
In cmd, type:
Command | clip
Then open a *.txt file and paste. That's it. Done.
If you are looking at each command separately:
To export all of the output of a command prompt command to a text file, simply follow this syntax:
C:\> [command] > file.txt
The above will write the command's result to file.txt, where the new file.txt is created in the folder you are currently in.
For example,
C:\Result> dir >file.txt
To copy the whole session, Try this:
Copy & Paste a command session as follows:
1.) At the end of your session, click the upper left corner to display the menu.
Then select.. Edit -> Select all
2.) Again, click the upper left corner to display the menu.
Then select.. Edit -> Copy
3.) Open your favorite text editor and use Ctrl+V or your normal
Paste operation to paste in the text.
If your batch file is not interactive and you don't need to see it run, then this should work:
@echo off
call file.bat >textfile.txt 2>&1
Otherwise, use a tee filter. There are many, some of them not NT-compatible. SFK, the Swiss File Knife, has a tee feature and is still being developed. Maybe that will work for you.
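If a Unix-style tee port happens to be installed (for example from GnuWin32 or Git for Windows; that is an assumption, and SFK's own invocation differs), the idea looks like this, showing the output on screen while also writing it to the file:
call file.bat 2>&1 | tee textfile.txt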
How about this:
<command> > <filename.txt> & <filename.txt>
Example:
ipconfig /all > network.txt & network.txt
This will give the results in Notepad instead of the command prompt.
From a command prompt run as Administrator, the example below prints a list of the services running on your PC. Run this command:
net start > c:\netstart.txt
You should then find the exported text file at the root of your C:\ drive, listing all the services running on the PC.
If you want to capture ALL of the output, not just stdout but also the warnings, info messages, and anything else the program prints to stderr, you have to add 2>&1 at the end of the command line.
In your case, the command will be:
Program.exe > file.txt 2>&1

mplayer slave: set volume before starting the playback

I know this is about a specific program (mplayer back-end); however it will be used to program a front-end so I hope it is still considered on-topic on Stack Overflow.
I want to run two mplayer slave instances which will be used to fade between different audio streams (webradio; smoothly change the channel). To do this, I set the "software volume" of mplayer so it will not affect the PCM output channel of the sound card but insert a software volume mixer to adjust the volume.
However, I encounter the following problem.
I start mplayer with the following command (can be tested on command-line):
mplayer -slave -idle -softvol
and send the following commands to mplayer:
loadfile <url>
set volume 0
it starts playing the file at 100% volume for a short time and then jumps to 0% volume. If I swap the two commands, mplayer tells me that it can't adjust the volume:
Failed to set property 'volume' to '0'.
ANS_ERROR=PROPERTY_UNAVAILABLE
Obviously, the audio filter isn't yet loaded / audio output not yet set up or something like that, so mplayer can't change the volume of a non-existing audio output.
Can I force mplayer to initialize everything in advance so that I can set the volume to 0%, load the file and then increase the volume to fade in the playback?
I already checked whether I can set the volume after playing some file (e.g. a silent dummy file); mplayer complains with the same error. For now, the only option I can think of is to start such a dummy file, adjust the volume, stop the dummy file, load the correct file to be played, and it will start with the volume just set. But I can't believe this is the best option.
I solved the problem myself: while I tried to follow this guide, -af volume=0 didn't help. However, there is also a -volume 0 command line option, which worked for me:
mplayer -slave -idle -softvol -volume 0
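For completeness, a minimal sketch of the intended fade-in using that option, assuming slave input over a FIFO (the FIFO path and stream URL are placeholders; the trailing 1 in the volume command makes the value absolute rather than relative):
mkfifo /tmp/mplayer-ctl
mplayer -slave -idle -softvol -volume 0 -input file=/tmp/mplayer-ctl &>/dev/null &
echo "loadfile http://example.com/stream" >/tmp/mplayer-ctl
# step the software volume up from 0 to 100 to fade the stream in
for ((i=0; i<=100; i+=5)); do
    echo "volume $i 1" >/tmp/mplayer-ctl
    sleep 0.2
done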
I have found the following to work perfectly with my home player.
Important while setting the volume: the trailing 1 in echo "volume $i 1", otherwise the volume does not change (it makes the value absolute instead of relative).
Create the FIFOs first with mkfifo /tmp/mi1 /tmp/mi2, and set $url1 and $url2 to your radio stream URLs prior to this:
mplayer -softvol -slave -input file=/tmp/mi1 $url1 &>/dev/null &
mplayer -softvol -slave -input file=/tmp/mi2 $url2 &
for ((i=1; i<=100; i+=1)); do sleep 0.25; echo "volume $(( 100 - $i )) 1" >/tmp/mi1; sleep 0.25; echo "volume $i 1" >/tmp/mi2; done
(sleep 5; echo quit >/tmp/mi1; echo quit >/tmp/mi2) &
Happy listening.

Why does grep hang when run against the / directory?

My question is in two parts:
1) Why does grep hang when I grep all files under "/"?
For example:
grep -r 'h' ./
(Note: right before the hang/crash, I see some "no such device or address" messages regarding sockets.)
Of course, I know that grep shouldn't be run against a socket, but I would think that since sockets are just files in Unix, it should return a negative result rather than crashing.
2) Now, my follow-up question: in any case, how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this? In particular, I'm looking for all recently written log files.
As @ninjalj said, if you don't use -D skip, grep will try to read all your device files, socket files, and FIFO files. In particular, on a Linux system (and many Unix systems), it will try to read /dev/zero, which appears to be infinitely long.
You'll be waiting for a while.
If you're looking for a system log, starting from /var/log is probably the best approach.
If you're looking for something that really could be anywhere in your file system, you can do something like this:
find / -xdev -type f -print0 | xargs -0 grep -H pattern
The -xdev argument to find tells it to stay within a single filesystem; this will avoid /proc and /dev (as well as any mounted filesystems). -type f limits the search to ordinary files. -print0 prints the file names separated by null characters rather than newlines; this avoids problems with files having spaces or other funny characters in their names.
xargs reads a list of file names (or anything else) on its standard input and invokes the specified command on everything in the list. The -0 option works with find's -print0.
The -H option to grep tells it to prefix each match with the file name. By default, grep does this only if there are two or more file names on its command line. Since xargs splits its arguments into batches, it's possible that the last batch will have just one file, which would give you inconsistent results.
Consider using find ... -name '*.log' to limit the search to files with names ending in .log (assuming your log files have such names), and/or using grep -I ... to skip binary files.
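Putting those suggestions together with the goal of finding recently written log files, a command along these lines could work (the *.log name filter, the one-day -mtime -1 window, and the pattern are assumptions to adjust):
find / -xdev -type f -name '*.log' -mtime -1 -print0 | xargs -0 grep -IH 'pattern'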
Note that all this depends on GNU-specific features. Some of these options might not be available on MacOS (which is based on BSD) or on other Unix systems. Consult your local documentation, and consider installing GNU findutils (for find and xargs) and/or GNU grep.
Before trying any of this, use df to see just how big your root filesystem is. Mine is currently 268 gigabytes; searching all of it would probably take several hours. A few minutes spent (a) restricting the files you search and (b) making sure the command is correct will be well worth the time you spend.
By default, grep tries to read every file. Use -D skip to skip device files, socket files and FIFO files.
If you keep seeing error messages, then grep is not hanging; it is still working through the tree. Keep iotop open in a second window to see how hard your system is working to pull all the contents off its storage media into main memory, piece by piece. This operation is bound to be slow unless you have a very bare-bones system.
Now, my follow-up question: in any case, how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this? In particular, I'm looking for all recently written log files.
Grepping the whole FS is very rarely a good idea. Try grepping the directory where the log files should have been written; likely /var/log. Even better, if you know anything about the names of the files you're looking for (say, they have the extension .log), then do a find or locate and grep the files reported by those programs.
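As a concrete illustration of that last suggestion (the *.log name filter, the one-hour -mmin -60 window, and the pattern are placeholders to adapt), this greps only recently modified log files under /var/log:
find /var/log -type f -name '*.log' -mmin -60 -exec grep -H 'pattern' {} +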