I'm currently using MPlayer in slave mode for a video player I'm making.
At the moment, the media player shows ==== PAUSED ==== when it's paused, and I can read the output for this status to know when the video is paused.
The command-line argument I'm using right now is msglevel identify=6:statusline=-1 (I disabled the status line because it produced A: 0.7 V: 0.6 A-V: 0.068 ... and other unnecessary output).
What do I need to set the msglevel to (or anything else) so that it will also show ==== PLAYING ==== or some other indication that playback started, stopped, the media ended, it is loading, etc.?
I found out how to tell whether the video is paused.
By sending the command pausing_keep_force get_property pause to mplayer, it responds with ANS_pause=no if not paused, and ANS_pause=yes if paused. Problem solved.
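For anyone wiring this up from another program, here is a minimal Python sketch of that query, assuming mplayer was started with -slave -quiet (the filename is a placeholder):

import subprocess

# start mplayer in slave mode so it accepts commands on stdin
proc = subprocess.Popen(
    ["mplayer", "-slave", "-quiet", "video.mp4"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def is_paused():
    # pausing_keep_force runs the query without changing the pause state
    proc.stdin.write("pausing_keep_force get_property pause\n")
    proc.stdin.flush()
    # scan the output until the answer line arrives
    for line in proc.stdout:
        if line.startswith("ANS_pause="):
            return line.strip() == "ANS_pause=yes"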
Based on what I can decipher from the OP's answer to their own question, they were looking for a way to determine whether mplayer was paused or playing. I've written a little bash script that can handle this task with some simple function calls.
You can actually inspect the last couple of lines of mplayer's output to see whether mplayer is paused. I put together a little bash library that can be used to query some status information about mplayer. Take a look at my GitHub; there are instructions for integrating my script with other bash scripts.
If you use my script, you will need to play your media file with the playMediaFile function. Then you can simply call the isPaused function as a condition in bash, like this:
if isPaused; then
    # do something
fi

# or
if ! isPaused; then
    # do something
fi

# or, as a short-circuit one-liner
isPaused && do_something
Today I used the following command, with subprocess.PIPE and subprocess.Popen, in Python 3:
ffmpeg -i udp://{address_of_camera} \
    -vf "select='if(eq(pict_type,I),st(1,t),gt(t,ld(1)))',setpts=N/FRAME_RATE/TB" \
    -f rawvideo -an -vframes {NUM_WANTED_FRAMES} pipe:
This command lets me capture NUM_WANTED_FRAMES frames from a live camera at a given moment.
However, it takes me about 4 seconds to read the frames, and about 2.5 seconds to open a socket between my computer and the camera's computer.
Is there a way to keep a socket/connection permanently open between my computer and the camera's computer, to save the 2.5 seconds?
I read something about fifo_size and overrun_fatal. I thought that maybe I could set fifo_size equal to NUM_WANTED_FRAMES and overrun_fatal to True? Will this solve my problem, or is there a different and simpler/better solution?
Should I just record all the time (no -vframes flag), store the frames in a queue with a max size, and read from my queue buffer whenever I want to slice the video? Will it work well with the keyframe?
Also, what to do when ffmpeg fails? Restart the ffmpeg command?
FFmpeg itself is a one-and-done type of app, so to keep the camera running, the best option is to "record always (no -vframes flag)" and decide in Python whether to drop or keep frames.
So, a rough sketch of the idea:
import subprocess as sp
from threading import Thread, Event
from queue import Queue

NUM_WANTED_FRAMES = 4  # whatever it is
width = 1920
height = 1080
ncomp = 3  # rgb
framesize = width * height * ncomp  # size of one frame in bytes
nbytes = framesize * NUM_WANTED_FRAMES

proc = sp.Popen(<ffmpeg command>, stdout=sp.PIPE)
stdout = proc.stdout
buffer = Queue(NUM_WANTED_FRAMES)
req_frame = Event()  # set to record, cleared (the default) to drop

def reader():
    while True:
        if req_frame.is_set():
            # frames requested: read the whole burst and queue it
            buffer.put(stdout.read(nbytes))
            req_frame.clear()
        else:
            # frames not requested: read one frame and drop it
            stdout.read(framesize)

rd_thread = Thread(target=reader)
rd_thread.start()

...

# elsewhere in your program, do this when you need the camera data
req_frame.set()
framedata = buffer.get()
...
Will it work well with the keyframe?
Yes, if your FFmpeg command has -discard nokey it'll read just keyframes.
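For instance, reusing the command from the question (-discard nokey is an input option, so it goes before -i):
ffmpeg -discard nokey -i udp://{address_of_camera} -f rawvideo -an pipe: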
What to do when ffmpeg fails? Restart the ffmpeg command?
Have another thread monitor the health of proc (the Popen object); if it has died, restart the subprocess with the same command and overwrite stdout with the new pipe. You probably want to protect your code with try-except blocks as well, and adding timeouts to the queue operations would also be a good idea.
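A rough sketch of such a watchdog, reusing the <ffmpeg command> placeholder from the sketch above (poll() returns None while the process is still alive):

import time

def watchdog():
    global proc, stdout
    while True:
        if proc.poll() is not None:  # ffmpeg has exited or crashed
            try:
                proc = sp.Popen(<ffmpeg command>, stdout=sp.PIPE)
                stdout = proc.stdout  # the reader picks up the new pipe
            except OSError:
                pass  # log it and retry on the next pass
        time.sleep(1)

Thread(target=watchdog, daemon=True).start()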
I understood this page to mean that queuing in pyglet provides a gapless transition between audio tracks. But when I test it out, there is a noticeable gap. Has anyone here worked with gapless audio in pyglet?
Example:
player = pyglet.media.Player()
source1 = pyglet.media.load([file1]) # adding streaming=False doesn't fix the issue
source2 = pyglet.media.load([file2])
player.queue(source1)
player.queue(source2)
player.play()
player.seek([time]) # to avoid having to wait until the end of the track. removing this doesn't fix the gap issue
pyglet.app.run()
If url1 and url2 are external sources, I would suggest caching them locally first. Then use Player().time to spot when you're about to reach the end, and call player.next_source.
Or, if they're local files and you don't want to solve the problem programmatically, you could trim the audio files in something like Audacity to make them seamless on start/stop.
You could also experiment with multiple players and layer them on top of each other. But if you're only interested in audio playback, there are other alternatives.
It turns out that there were 2 problems.
The first one: I should have used
source_group = pyglet.media.SourceGroup()
source_group.add(source1)
source_group.add(source2)
player.queue(source_group)
The second one: mp3 files are apparently slightly padded at the beginning and at the end, and that is where the gap comes from. However, this does not seem to be an issue with other file types.
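Putting both fixes together, a minimal sketch (the filenames are placeholders; a non-mp3 format sidesteps the padding gap):

import pyglet

player = pyglet.media.Player()
source_group = pyglet.media.SourceGroup()
source_group.add(pyglet.media.load("track1.wav"))
source_group.add(pyglet.media.load("track2.wav"))
player.queue(source_group)
player.play()
pyglet.app.run()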
I'm trying to run this (script.exs):
System.cmd("zsh", ["-c" "com.spotify.Client"])
IO.puts("done.")
Spotify opens, but "done." never shows up on the screen. I also tried:
System.cmd("zsh", ["-c" "nohup com.spotify.Client &"])
IO.puts("done.")
My script only finishes when I close the Spotify window. Is it possible to run commands without waiting for them to end?
One should not spawn system tasks in the blind hope that they will work properly. If the system task crashes, some action should be taken in the calling OTP process; otherwise, sooner or later it will crash in production and nobody will know what happened or why.
There are many possible scenarios. I would go with Task.start_link/1 (assuming the calling process traps exits), or with Task.async/1 accompanied by an explicit Task.await/1 somewhere in the supervision tree.
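For example, a minimal sketch of the Task.start_link/1 route, with the calling process trapping exits so a crashed task shows up as a message instead of taking the script down:

Process.flag(:trap_exit, true)

{:ok, _pid} =
  Task.start_link(fn ->
    # System.cmd/2 blocks inside the task, not in the calling process
    System.cmd("zsh", ["-c", "com.spotify.Client"])
  end)

IO.puts("done.")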
If, despite everything explained above, you don't care about robustness, use Kernel.spawn/3 as below:
pid = spawn(System, :cmd, ["zsh", ["-c", "com.spotify.Client"]])
# Process.monitor(pid) # to at least give yourself a chance to handle errors
IO.puts("done.")
I hope someone is able to help me with this.
In Ableton Live, when you set a clip's launch mode to Gate, the clip only plays while you hold down the key. I'm using a patch that takes an OSC message to launch the clip, but it will not work as a gate: stopping requires the "stop all clips" message, and that won't help in my situation.
I need to "call fire" when the incoming value is 1 and "call stop all clips" when it is 0, but I'm not sure how to do this.
Can anyone tell me which object I should use? I've looked at various gates and switches, but I'm missing something.
Thanks.
Create a new object and type "togedge" or "select" (or its shorthand, "sel"). Both of them have two outputs: one for 0 and one for non-zero.
"togedge" only outputs when the input changes.
"sel" always outputs, and you can give it arguments to match your input directly (like "sel 34 56").
By the way, you can also use "call stop" on the clip_slot object directly, instead of "stop_all_clips" on the track object.
After fiddling with the sel object, I discovered this: I needed to change the live.text object used to launch the clip from button to toggle.
My application consists of two pipelines:
Sending pipeline
filesrc ! decodebin ! encoder ! payloader ! udpsink
Receiving pipeline
udpsrc ! rtpbin ! depayloader ! decoder ! encoder ! filesink
The desired behavior is that the sending pipeline plays a file and, when that has finished, another file plays and recording starts.
The actual behavior varies. With some approaches, the recording starts at the same time as the first playback. I believe this is because the pipelines share the same GSocket (needed to get anything to work at all), so data arriving at the socket must be getting buffered.
Other approaches yield a few frames from before the recording should start, then a jump to after the recording begins, resulting in a messy picture (inter frames arriving without a keyframe).
I've tried a couple of different approaches to try to get the recording to start at the right time:
Start the receiving pipeline when the second file starts playing
Start both pipelines at the same time and have a valve element dropping everything until the second file starts playing
Start both pipelines at the same time and seek to the time where the second file starts playing
Start both pipelines at the same time, keep the receiving pipeline connected to a fakesink, and switch to the real filter chain when the second file starts playing
Set an offset on the receiving pipeline
I would be very grateful for any help with this!
Start both pipelines at the same time and have a valve element dropping everything until the second file starts playing
This actually works. The problem I had was that no picture fast update was sent, so it took a while for the next keyframe to arrive on its own.
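For reference, a minimal PyGObject sketch of this approach, assuming the valve and the sending pipeline's encoder were named "drop-valve" and "enc" when the pipelines were built (both names are hypothetical). Opening the valve and immediately forcing a key unit on the encoder avoids waiting for the next scheduled keyframe:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstVideo", "1.0")
from gi.repository import Gst, GstVideo

Gst.init(None)

def start_recording(recv_pipeline, send_pipeline):
    # stop dropping buffers on the receiving side
    valve = recv_pipeline.get_by_name("drop-valve")
    valve.set_property("drop", False)

    # ask the sender's encoder for an immediate keyframe
    # (the "picture fast update" that was missing)
    event = GstVideo.video_event_new_upstream_force_key_unit(
        Gst.CLOCK_TIME_NONE,  # running_time: as soon as possible
        True,                 # all_headers: resend codec headers too
        0)                    # count
    send_pipeline.get_by_name("enc").send_event(event)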