I have an application that reads footage from an RTSP stream and processes the frames. I need to restream these processed frames to another RTSP stream. I have used the following command to stream a video file using FFMPEG:
ffmpeg -re -stream_loop -1 -i D:\Proj\sample.mp4 -c copy -f rtsp rtsp://10.0.0.0:8554/mystream
Is it possible to stream individual frames as soon as they are processed and not only a full video file?
The algorithm has been built in MATLAB.
Thank you.
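One possible approach (a sketch, not a tested solution): FFmpeg can read raw frames from standard input with the rawvideo demuxer, so the MATLAB code would write each processed frame's pixel bytes to the FFmpeg process's stdin instead of to a file. The pixel format, resolution and frame rate below are placeholders and must match the frames you actually write:
ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://10.0.0.0:8554/mystream
With this, the stream is produced frame by frame as each frame arrives on the pipe, rather than from a finished video file.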
I would like to get data from an audio file based on microphone input (both Android and iOS). Currently I'm using audioplayers and recordMp3 to record the microphone input. This results in an MP3 file with a local file path. In order to use the audio data, I want an uncompressed format like WAV. Would ffmpeg help with this conversion? I want to eventually use this data for visualization.
MP3 to WAV
ffmpeg -i input.mp3 output.wav
Note that any encoding artifacts in the MP3 will be included in the WAV.
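If the visualizer expects a specific PCM layout, the codec, sample rate and channel count can be set explicitly (the values below are only examples):
ffmpeg -i input.mp3 -c:a pcm_s16le -ar 44100 -ac 1 output.wav
This produces 16-bit signed little-endian PCM, mono, at 44.1 kHz.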
Piping from ffmpeg to your visualizer
I'm assuming you need WAV/PCM because your visualizer only accepts that format and does not accept MP3. You can create a WAV file as shown in the example above, but if your visualizer accepts a pipe as input you can avoid creating a temporary file:
ffmpeg -i input.mp3 -f wav - | yourvisualizer …
Using ffmpeg for visualization
See examples at How do I turn audio into video (that is, show the waveforms in a video)?
I want to stream the microphone from my Raspberry Pi via HTTP with VLC.
This command works fine:
vlc -vvv alsa://hw:1,0 --sout '#transcode{vcodec=none,acodec=mpga,ab=128,channels=2,samplerate=44100}:standard{access=http,mux=mp3,dst=192.168.178.30:8080}'
But when changing the codec to s16l and the mux to wav, I can't hear anything in VLC.
This is the command I've tried:
vlc -vvv alsa://hw:1,0 --sout '#transcode{vcodec=none,acodec=s16l,channels=1,samplerate=16000,scodec=none}:standard{access=http,mux=wav,dst=192.168.178.30:8080}'
But the same codec works when using RTP:
vlc -vvv alsa://hw:1,0 --sout '#transcode{vcodec=none,acodec=s16l,channels=1,samplerate=16000,scodec=none}:rtp{dst=192.168.178.30,port=1234,sdp=rtsp://192.168.178.30:8080/test.sdp}'
Some logs: https://gist.github.com/timaschew/9e7e027cd1b371b01b0f186f23b47068
Not all codecs can be muxed for streaming; check the VLC documentation.
Currently PCM (WAV) can only be muxed over RTP.
The mux is the encapsulation method required for streaming; the wav mux in VLC is a container intended for storing to a file.
WAV is a file container type; it can hold different types of codec data (compressed or uncompressed).
[Wiki]
Audio in WAV files can be encoded in a variety of audio coding formats, such as GSM or MP3, to reduce the file size.
This is a reference to compare the monophonic (not stereophonic) audio quality and compression bitrates of audio coding formats available for WAV files including PCM, ADPCM, Microsoft GSM 06.10, CELP, SBC, Truespeech and MPEG Layer-3.
For HTTP streaming using VLC, select a codec that can be streamed over HTTP, such as the MP3 codec.
Note: the wav mux is not applicable here.
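As a sketch, keeping the mono 16 kHz capture from the question but encoding to MP3 (as in the first working command) so that it can be muxed for HTTP, the command would look roughly like this:
vlc -vvv alsa://hw:1,0 --sout '#transcode{vcodec=none,acodec=mpga,ab=128,channels=1,samplerate=16000}:standard{access=http,mux=mp3,dst=192.168.178.30:8080}'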
I want to record a video conference. I can receive RTP media from the video conferencing server. I want to output fragmented MP4 for live streaming. So, how do I write a fragmented MP4 file programmatically using Bento4?
MP4Box supports DASH. Here is a simple example:
MP4Box -dash 4000 -frag 4000 -rap -segment-name test_ input.mp4
'-dash 4000' to segment the input mp4 file into 4000ms chunks
'-frag 4000' sets the fragment duration; since it equals the segment duration here, each segment contains a single fragment and is not subdivided further.
'-rap' forces each segment to start at a random access point, i.e. at a keyframe. In that case the segment duration may differ from 4000 ms depending on the distribution of keyframes.
'-segment-name' to specify the pattern of segments names. So in this case, the segments will be named like this: test_1.m4s, test_2.m4s, ...
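Since the question asks about Bento4 specifically: Bento4 ships a command-line tool, mp4fragment, that converts a regular MP4 into a fragmented MP4, and the same functionality is available programmatically through the Bento4 C++ library that the tool is built on. A minimal sketch (the 4000 ms fragment duration mirrors the MP4Box example above; file names are illustrative):
mp4fragment --fragment-duration 4000 input.mp4 fragmented.mp4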
I'm working on a networking component where the server provides a Texture and sends it to FFmpeg to be encoded (h264_qsv), and sends it over the network. The client receives the stream (mp4 presumably), decodes it using FFmpeg again and displays it on a Texture.
Currently this works very slowly, since I am saving the texture to disk before encoding it to an mp4 file (also saved to disk), and on the client side I am saving the .png texture to disk after decoding so that I can use it in Unity.
The server-side FFmpeg process is currently started with process.StartInfo.Arguments = @" -y -i testimg.png -c:v h264_qsv -q 5 -look_ahead 0 -preset:v faster -crf 0 test.qsv.mp4"; and the client side with process.StartInfo.Arguments = @" -y -i test.qsv.mp4 output.png";
Since this needs to be very fast (30 fps at least) and real time, I need to pipe the Texture directly to the FFmpeg process. On the client side, I need to pipe the decoded data to the displayed Texture directly as well (opposed to saving it and then reading from disk).
A few days of research showed me that FFmpeg supports various piping options, including data formats such as bmp_pipe (piped bmp sequence), bin (binary text), data (raw data) and image2pipe (piped image2 sequence); however, documentation and examples on how to use these options are very scarce.
Please help me: which format should I use (and how should it be used)?
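One way to avoid the temporary files (a sketch, not a verified pipeline): feed raw frames to FFmpeg's standard input with the rawvideo demuxer instead of going through PNG files, and write an easily pipeable container such as MPEG-TS to standard output; MP4 is awkward over a pipe because its index (moov atom) is normally written at the end. The resolution, pixel format and frame rate below are assumptions and must match the Texture bytes you write to the pipe:
Server (encode): ffmpeg -f rawvideo -pix_fmt rgba -s 1920x1080 -r 30 -i - -c:v h264_qsv -preset:v faster -f mpegts -
Client (decode): ffmpeg -f mpegts -i - -f rawvideo -pix_fmt rgba -
The C# process would then write the Texture's raw bytes to StandardInput and read the encoded (server) or decoded (client) bytes from StandardOutput, with RedirectStandardInput/RedirectStandardOutput enabled.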
I have some .mov files I want to stream to Flash Media Server. I have already tried to stream a single .mov with an FFMPEG command in the terminal and it works; the FMS can display what I am streaming live.
ffmpeg -re -i file1.mov -vcodec libx264 -f flv rtmp://localhost/livepkgr/livestream
Now I want to stream multiple files.
I tried using the above command one file at a time, but it seems Flash Media Server stops the stream when file1 is finished and then starts a new stream with file2.
This stops the stream player when file1 ends, and the page has to be refreshed in order to continue with file2.
I am calling the FFMPEG command from a C program on Linux. Is there any way to prevent the FMS from stopping when I switch the file source in FFMPEG, or is it possible to let FFMPEG deliver the stream continuously from multiple source files without stopping when a file finishes?
Convert your source files to TS, MPEG, or another "concatenatable" format. Then you can either use ffmpeg's concat protocol or just "cat" the files yourself.
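For example (file names illustrative): assuming the .mov files already contain H.264 video, each can be remuxed to MPEG-TS without re-encoding, and the concat protocol then feeds them to FMS as one continuous stream:
ffmpeg -i file1.mov -c copy -bsf:v h264_mp4toannexb file1.ts
ffmpeg -i file2.mov -c copy -bsf:v h264_mp4toannexb file2.ts
ffmpeg -re -i "concat:file1.ts|file2.ts" -c:v libx264 -f flv rtmp://localhost/livepkgr/livestream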
I found something like this that may be useful for you:
I managed to stream a static playlist of videos by using a named pipe for each video (e.g. vid1.mp4 -> pipe1, vid2.mp4 -> pipe2, etc.). Then I write them into a single named pipe called "stream" with cat pipe1 pipe2 pipe3 > stream, and use that "stream" pipe as the input to FFMPEG to publish my stream.
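A rough sketch of that approach on Linux (names illustrative; the reader of each pipe must be started before its writer, since opening a FIFO blocks until both ends are open):
mkfifo pipe1 pipe2 stream
ffmpeg -re -i stream -vcodec libx264 -f flv rtmp://localhost/livepkgr/livestream &
cat pipe1 pipe2 > stream &
ffmpeg -y -i vid1.mp4 -c copy -f mpegts pipe1
ffmpeg -y -i vid2.mp4 -c copy -f mpegts pipe2
The publisher keeps a single continuous connection to the server while the source files change behind the "stream" pipe.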