I am making mpv stream videos that are stored on my server, and mpv plays them under the name video.mp4. Is there a way to make mpv use a name provided as a parameter instead?
For example, I use:
import os
video = input("Enter video id: ")
link = "192.168.62.1://file/path/" + video + ".mp4"
command = f"mpv {link}"  # f-string so the link is substituted into the command
print(os.popen(command).read())
PS: I am using Windows and the server runs on Linux.
PS2: The filenames on the server are randomly generated.
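For illustration, a minimal sketch of what I have in mind, assuming mpv's --force-media-title option can be used to set the name shown for the stream (and using subprocess instead of os.popen):

import subprocess

video = input("Enter video id: ")
name = input("Enter display name: ")
link = "192.168.62.1://file/path/" + video + ".mp4"
# --force-media-title is assumed here to override the title mpv displays
subprocess.run(["mpv", f"--force-media-title={name}", link])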
I am creating a bot that will record Microsoft Teams live sessions. Audio recording is working fine, but I am facing problems generating the video file. The process I am following is to convert the video data into a byte array and then write the data to a video-format file.
Here are some code snippets I have examined so far.
1. Stream videoStream = new FileStream(videoFilePath, FileMode.Create);
   BinaryWriter videoStreamWriter = new BinaryWriter(videoStream);
   videoStreamWriter.Write(videoBytesArray, 0, videoBytesArray.Length);
   videoStreamWriter.Close();
2. System.IO.File.WriteAllBytes(videoFilePath, videoBytesArray);
The files generated by the above code snippets are in an unsupported format.
It may be because of the data received from the session.
I am receiving the data through the local media session's video socket on the VideoMediaReceived event (ICall.ILocalMediaSession.VideoSockets). The video color format of the data that the socket receives is H264.
I encountered a similar problem when creating the audio file; for that, I used the WaveFormat package to create the audio file.
So, is there any library/method to convert the byte array to a video file of any format?
@Murtaza, you can try this and see if it helps. If the byte array is already a video stream (i.e. an MP4-encoded stream), you can simply serialize it to disk with the .mp4 extension.
Stream t = new FileStream("video.mp4", FileMode.Create);
BinaryWriter b = new BinaryWriter(t);
b.Write(videoData);
t.Close();
In developing a streaming audio application I used the gst-launch-1.0 command-line tool to generate an MPEG Transport stream for testing. This worked as intended (I was able to serve the stream from a simple http server and hear it using VLC media player). I then tried to replicate the encoding part of that stream in Python gstreamer code. The Python version connected to the server ok, but no audio could be heard. I'm trying to understand why the command-line implementation worked, but the Python one did not. I am working on Mac OS 10.11 and Python 2.7.
The command line that worked was as follows:
gst-launch-1.0 audiotestsrc freq=1000 ! avenc_aac ! aacparse ! mpegtsmux ! tcpclientsink host=127.0.0.1 port=9999
The Python code that created the gstreamer pipeline is below. It instantiated without producing any errors and it connected successfully to the http server, but no sound could be heard through VLC. I verified that the AppSrc in the Python code was working, by using it with a separate gstreamer pipeline that played the audio directly. This worked fine.
def create_mpeg2_pipeline():
    play = Gst.Pipeline()
    src = GstApp.AppSrc(format=Gst.Format.TIME, emit_signals=True)
    src.connect('need-data', need_data, samples())  # need_data and samples defined elsewhere
    play.add(src)
    capsFilterOne = Gst.ElementFactory.make('capsfilter', 'capsFilterOne')
    capsFilterOne.props.caps = Gst.Caps('audio/x-raw, format=(string)S16LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterOne)
    src.link(capsFilterOne)
    audioConvert = Gst.ElementFactory.make('audioconvert', 'audioConvert')
    play.add(audioConvert)
    capsFilterOne.link(audioConvert)
    capsFilterTwo = Gst.ElementFactory.make('capsfilter', 'capsFilterTwo')
    capsFilterTwo.props.caps = Gst.Caps('audio/x-raw, format=(string)F32LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterTwo)
    audioConvert.link(capsFilterTwo)
    aacEncoder = Gst.ElementFactory.make('avenc_aac', 'aacEncoder')
    play.add(aacEncoder)
    capsFilterTwo.link(aacEncoder)
    aacParser = Gst.ElementFactory.make('aacparse', 'aacParser')
    play.add(aacParser)
    aacEncoder.link(aacParser)
    mpegTransportStreamMuxer = Gst.ElementFactory.make('mpegtsmux', 'mpegTransportStreamMuxer')
    play.add(mpegTransportStreamMuxer)
    aacParser.link(mpegTransportStreamMuxer)
    tcpClientSink = Gst.ElementFactory.make('tcpclientsink', 'tcpClientSink')
    tcpClientSink.set_property('host', '127.0.0.1')
    tcpClientSink.set_property('port', 9999)
    play.add(tcpClientSink)
    mpegTransportStreamMuxer.link(tcpClientSink)
My question is, how does the gstreamer pipeline that I've implemented in Python differ from the command-line pipeline? And more generally, how do you DEBUG this sort of thing? Does gstreamer have any 'verbose' mode?
Thanks.
One question at a time:
1) How does it differ from gst-launch-1.0?
It is hard to tell without seeing your full code but I'll try to guess:
gst-launch-1.0 does proper pad linking. When you have a muxer like you do here, you can't link to it directly, because it is created without any sink pads. You need to request that a sink pad be created before you can link. Take a look at dynamic pads: https://gstreamer.freedesktop.org/documentation/application-development/basics/pads.html
Also, gst-launch-1.0 has error handling, so it checks that every action succeeds and otherwise reports an error. I'd recommend you add a GstBus message handler so that you get notified of error messages at least. You should also check the return values of the functions you call in GStreamer; that would allow you to catch this linking error in your program.
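A minimal sketch of both ideas, reusing the element variable names from your pipeline above (illustrative only, not the exact fix):

# Request a sink pad from the muxer explicitly, then link aacparse's src pad to it.
mux_pad = mpegTransportStreamMuxer.get_request_pad('sink_%d')
parser_pad = aacParser.get_static_pad('src')
if parser_pad.link(mux_pad) != Gst.PadLinkReturn.OK:
    print('Failed to link aacparse to mpegtsmux')

# Watch the pipeline bus so errors are reported instead of failing silently
# (the signal watch needs a running GLib main loop to fire).
def on_message(bus, message):
    if message.type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print('GStreamer error: %s (%s)' % (err, debug))

bus = play.get_bus()
bus.add_signal_watch()
bus.connect('message', on_message)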
2) Gstreamer debugging?
Mostly done by setting the GST_DEBUG environment variable: https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html#the-debug-log
Run your application with: GST_DEBUG=6 ./yourapplication and you should see lots of logging.
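If it's more convenient, you can also set it from inside the script, as long as that happens before GStreamer is initialized (a sketch, assuming the usual PyGObject setup):

import os
os.environ['GST_DEBUG'] = '3'  # must be set before Gst.init() reads it

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
Gst.init(None)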
I have a C++ BlackBerry Cascades application. I'm trying to read the metadata of a video file using this code:
onMetaDataChanged: {
    console.log("player onMetaDataChanged");
    console.log("--------------------------------bit_rate=" + myPlayer.metaData.bit_rate);
    console.log("-----------------------------------genre=" + myPlayer.metaData.genre);
    console.log("-----------------------------sample_rate=" + myPlayer.metaData.sample_rate);
    console.log("-----------------------------------title=" + myPlayer.metaData.title);
}
But this only works after the video file is played. Is there any way to get the metadata of a video file without playing it? Thanks.
Call the prepare() slot. It will acquire the resources necessary for playback without playing the track and will emit the metaDataChanged signal.
myPlayer.prepare()
I'm trying to play an HTTP live stream on an iPhone. It feels like I have looked at every example, mistake, and anything else I could find on the internet and in the Apple docs about HTTP Live Streaming, and I think I'm at a dead end now. I'm using MPMoviePlayer as in most of the examples. I should also add that I can see the stream if I open the URL in VLC player.
I succeeded in playing Apple's BipBop test stream on my iPhone, but I can't play my own stream. I figured out that my URL does not point to an m3u8 file, so I found this terminal command and used it successfully:
/Applications/VLC.app/Contents/MacOS/VLC --intf=rc rtp://#239.35.86.11:10000 \
  '--sout=#transcode{fps=25,vcodec=h264,venc=x264{aud,profile=baseline,level=30,keyint=30,bframes=0,ref=1,nocabac},acodec=mp3,ab=56,audio-sync,deinterlace}:standard{mux=ts,dst=-,access=file}' \
  | mediastreamsegmenter -b http://192.168.1.16/~Jonas/streaming/ -f /users/jonas/sites/streaming/ -D
Now I have a playlist (m3u8) file locally on my machine. As I understand it, this command downloads the stream, divides it into smaller ts files, and generates an m3u8 file that acts as a reference to those ts files. So I've tried to load this, but still no luck. For some reason I can't even open the m3u8 file in VLC or iTunes; it throws errors. So I guess something is wrong with the playlist file?
Maybe some of you can see what I am doing wrong here, or have some suggestions on how to track down the problem? I would really appreciate it.
It looks like your iOS code is just fine and that it is your server-side code that is causing issues, primarily with regard to generating the m3u8 playlist and possibly with how you're hosting the ts files that it references.
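For reference, a minimal media playlist that players will generally accept looks something like this (the segment names here are just placeholders):

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
segment0.ts
#EXTINF:10,
segment1.ts
#EXT-X-ENDLIST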
Unfortunately my example code is a bit noisy, as I wrote it a year ago (it is in Python) and it does a bit more than you're asking for (it live-transcodes a video into the correct m3u8/ts stuff), but it is tested and functional.
You can take a look at the code here: https://github.com/DFTi/ScribbeoServer/blob/python/transcode.py
I will paste some of the relevant methods here for your convenience; I hope it helps you:
def start_transcoding(self, videoPath):
    if DISABLE_LIVE_TRANSCODE:
        print "Live transcoding is currently disabled! There is a problem with your configuration."
        return
    print "Initiating transcode for asset at path: " + videoPath
    videoPath = unquote(videoPath)
    video_md5 = md5.new(videoPath).hexdigest()
    if self.sessions.has_key(video_md5):  # Session already exists?
        return self.m3u8_bitrates_for(video_md5)
    transcodingSession = TranscodeSession(self, videoPath)
    if transcodingSession.can_be_decoded():
        self.sessions[transcodingSession.md5] = transcodingSession
        return self.m3u8_bitrates_for(transcodingSession.md5)
    else:
        return "Cannot decode this file."

def m3u8_segments_for(self, md5_hash, video_bitrate):
    segment = string.Template("#EXTINF:$length,\n$md5hash-$bitrate-$segment.ts\n")
    partCount = math.floor(self.sessions[md5_hash].duration / 10)
    m3u8_segment_file = "#EXTM3U\n#EXT-X-TARGETDURATION:10\n"
    for i in range(0, int(partCount)):
        m3u8_segment_file += segment.substitute(length=10, md5hash=md5_hash, bitrate=video_bitrate, segment=i)
    last_segment_length = math.ceil((self.sessions[md5_hash].duration - (partCount * 10)))
    m3u8_segment_file += segment.substitute(length=last_segment_length, md5hash=md5_hash, bitrate=video_bitrate, segment=i)
    m3u8_segment_file += "#EXT-X-ENDLIST"
    return m3u8_segment_file

def m3u8_bitrates_for(self, md5_hash):
    m3u8_fudge = string.Template(
        "#EXTM3U\n"
        # "#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=384000\n"
        # "$hash-384-segments.m3u8\n"
        # "#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=512000\n"
        # "$hash-512-segments.m3u8\n"
        "#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=768000\n"
        "$hash-768-segments.m3u8\n"
        # "#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1024000\n"
        # "$hash-1024-segments.m3u8\n"
    )
    return m3u8_fudge.substitute(hash=md5_hash)

def segment_path(self, md5_hash, the_bitrate, segment_number):
    # A segment was requested.
    path = self.sessions[md5_hash].transcode(segment_number, the_bitrate)
    if path:
        return path
    else:
        raise Exception("Segment path not found")
That project is all open source now and can be found here: https://github.com/DFTi/ScribbeoServer/tree/python
Binaries can be found here: http://scribbeo.com/server
Good luck!
I want to record a stream which is published with Flash Live Encoder to FMS 3.5, but split the recording in files with predefined length. For example if a stream 'webcam' is published I want to record it in chunks of 10 minutes: 'webcam1.flv', 'webcam2.flv' ...
From what I can tell there's no facility to work with timers. The only solution I could think of was using stream.record() with a time limit parameter, but that seems like a hack because it relies on NetStream.Record.DiskQuotaExceeded being triggered on the stream when the recording should stop, so that I can then start recording another chunk.
Has anyone done something similar?
On the server side, why not just republish and record the stream under a timestamped name? Then run a timer that fires every ten minutes (or whatever), stops the recording of that stream, and creates a new server-side stream playing the client stream.
Something along the lines of:
setInterval("setNewStream", 600000);

function setNewStream() {
    var now = new Date();
    serverStream.record(false);
    var filename = "recording-" + now.getHours() + "-" + now.getMinutes();
    serverStream = Stream.get(filename);
    serverStream.play("clientStream");
    serverStream.record();
}