I'm working on a networking component where the server produces a Texture, hands it to FFmpeg to be encoded (h264_qsv), and sends the encoded stream over the network. The client receives the stream (presumably MP4), decodes it with FFmpeg again and displays it on a Texture.
Currently this works very slowly, since I am saving the texture to disk before encoding it to an mp4 file (also saved to disk), and on the client side I am saving the decoded .png texture to disk so that I can use it in Unity.
The server-side FFmpeg process is currently started with process.StartInfo.Arguments = @" -y -i testimg.png -c:v h264_qsv -q 5 -look_ahead 0 -preset:v faster -crf 0 test.qsv.mp4"; and the client-side process with process.StartInfo.Arguments = @" -y -i test.qsv.mp4 output.png";
Since this needs to be very fast (at least 30 fps) and real time, I need to pipe the Texture directly to the FFmpeg process. On the client side, I need to pipe the decoded data directly to the displayed Texture as well (as opposed to saving it and then reading it back from disk).
A few days of research showed me that FFmpeg supports various piping options, including data formats such as bmp_pipe (piped bmp sequence), bin (binary text), data (raw data) and image2pipe (piped image2 sequence); however, documentation and examples on how to use these options are very scarce.
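My best guess so far, completely untested, is to feed PNG frames to FFmpeg's standard input via image2pipe and read the encoded output from standard output (the C# Process would redirect stdin/stdout instead of pointing at files; everything below is a sketch, not something I have working):
ffmpeg -y -f image2pipe -vcodec png -framerate 30 -i - -c:v h264_qsv -preset:v faster -f mp4 -movflags frag_keyframe+empty_moov pipe:1
The -movflags frag_keyframe+empty_moov part is my assumption for making the MP4 muxer write to a non-seekable pipe; without something like it, plain MP4 output to stdout would presumably fail.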
Please help me: which format should I use, and how should it be used?
Related
I want to record a video conference. I can receive RTP media from the video conferencing server, and I want to output fragmented MP4 for live streaming. So, how do I write a fragmented MP4 file programmatically using Bento4?
MP4Box supports DASH. I supply the following simple example:
MP4Box -dash 4000 -frag 4000 -rap -segment-name test_ input.mp4
'-dash 4000' to segment the input mp4 file into 4000ms chunks
'-frag 4000' since the fragment duration equals the segment duration, the segments are not fragmented further.
'-rap' to force each segment to start at a random access point, i.e. at a keyframe. In that case the segment duration may differ from 4000 ms depending on the distribution of keyframes.
'-segment-name' to specify the pattern for segment names. So in this case the segments will be named test_1.m4s, test_2.m4s, ...
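If I remember correctly, MP4Box also writes an MPD manifest alongside the segments, and -out lets you name it explicitly (sketch from memory, double-check against your MP4Box version):
MP4Box -dash 4000 -frag 4000 -rap -segment-name test_ -out test.mpd input.mp4
The test.mpd plus the test_*.m4s segments can then be served over plain HTTP to a DASH-capable player.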
I download lectures in mp4 format from Udacity, but they're often broken down into 2-5 minute chunks. I'd like to combine the videos for each lecture into one continuous stream, which I've had success with on Windows using AnyVideo Converter. I'm trying to do the same thing on Ubuntu 15, and most of my web search results suggest MP4Box, whose documentation and all the online examples I can find offer the following syntax:
MP4Box -cat vid1.mp4 -cat vid2.mp4 -cat vid3.mp4 -new combinedfile.mp4
This creates a new file with working audio, but the video doesn't work. When I open it with Ubuntu's native video player, I get the error "No valid frames decoded before end of stream." When I open it with VLC, I get the error "Codec not supported: VLC could not decode the format 'avc3' (No description for this codec)." I've tried using the -keepsys switch as well, but I get the same results.
All the documentation and online discussion make it sound as though what I'm trying to do is, and should be, really simple, but I can't find anything relevant to the specific errors I'm getting. What am I missing?
Use the -force-cat option.
For example,
MP4Box -force-cat -add in1.mp4 -cat in2.mp4 -cat in3.mp4 ... -new out.mp4
From the MP4Box documentation:
-force-cat
skips media configuration check when concatenating file.
It looks, from the presence of 'avc3', as though these videos are encoded with H.264/AVC. There are several modes for concatenating such streams. Either the video streams have compatible encoder configurations (frame size, ...), in which case only one configuration description is used in the file (signalled by 'avc1'); or, if the configurations are not fully compatible, MP4Box uses 'inband' storage of those configurations (signalled by 'avc3'). The other way would be to use multiple sample description entries (stream configurations), but that is not well supported by players and not yet possible with MP4Box. There is no other option unless you want to re-encode your videos. On Ubuntu, you should be able to play 'avc3' streams with the player that ships with MP4Box: MP4Client.
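If re-encoding is an option, a sketch of that route (untested on these particular files, and 1280x720 is just an assumed common target) is to bring every chunk to the same resolution and encoder settings first, so the concatenated file needs only a single 'avc1' configuration:
ffmpeg -i vid1.mp4 -vf scale=1280:720 -c:v libx264 -crf 20 -c:a copy vid1_uniform.mp4
ffmpeg -i vid2.mp4 -vf scale=1280:720 -c:v libx264 -crf 20 -c:a copy vid2_uniform.mp4
MP4Box -cat vid1_uniform.mp4 -cat vid2_uniform.mp4 -new combinedfile.mp4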
=== BACKGROUND ===
Some time ago I ripped a lot of music from an internet radio station. Unfortunately something seems to have gone wrong, since the length of most files is displayed as several hours, yet they start playing at the correct position.
Example: if a file is really 3 minutes long but is displayed as 3 hours, playback starts at 2 hours and 57 minutes.
Before I upgraded my system, gstreamer was at an older version and behaved as described above, so I didn't pay too much attention. Now I have a newer version of gstreamer which cannot handle these files correctly: it "plays" the whole initial offset.
=== /BACKGROUND ===
So here is my question: how can I modify an OGG/Vorbis file to get rid of this useless initial offset? Although I tried several tag-editing programs, none of them let me edit these values. (Interestingly enough, easytag displays both times, but writes back the wrong one...)
I finally found a solution! Although it wasn't quite what I expected...
After trying several other options I ended up with the following code:
#!/bin/sh
cd "${1}"
OUTDIR="../`basename "${1}"`.new"
IFS="
"
find . -wholename '*.ogg' | while read -r filepath;
do
# Create destination directory
mkdir -p "${OUTDIR}/`dirname "${filepath}"`"
# Convert OGG to OGG
avconv -i "${filepath}" -f ogg -acodec libvorbis -vn "${OUTDIR}/${filepath}"
# Copy tags
vorbiscomment -el "${filepath}" | vorbiscomment -ew "${OUTDIR}/${filepath}"
done
This code recursively re-encodes all OGG files and then copies over all Vorbis comments. It's not a very efficient solution, but it works nevertheless...
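For a single file, the fix boils down to the same two commands used in the loop (re-encode, then copy the tags):
avconv -i broken.ogg -f ogg -acodec libvorbis -vn fixed.ogg
vorbiscomment -el broken.ogg | vorbiscomment -ew fixed.ogg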
As for what the problem was: I guess it has something to do with this part of the ogginfo output:
...
New logical stream (#1, serial: 74a4ca90): type vorbis
WARNING: Vorbis stream 1 does not have headers correctly framed. Terminal header page contains additional packets or has non-zero granulepos
Vorbis headers parsed for stream 1, information follows...
Version: 0
Vendor: Xiph.Org libVorbis I 20101101 (Schaufenugget)
...
Which disappears after reencoding the file...
At the rate at which I'm currently encoding, it will probably take several hours until my whole media library is completely re-encoded... but at least I verified with several samples that it works :)
I have some .mov files I want to stream to Flash Media Server. I have already tried streaming a single .mov with an FFmpeg command in the terminal, and it works: FMS displays what I stream, live.
ffmpeg -re -i file1.mov -vcodec libx264 -f flv rtmp://localhost/livepkgr/livestream
Now I want to stream multiple files. I tried running the above command for each file one by one, but Flash Media Server seems to stop the stream when file1 finishes and then start a new stream for file2. This stops the stream player when file1 ends, and the page has to be refreshed to continue with file2.
I am calling the FFmpeg command from a C program on Linux. Is there any way to prevent FMS from stopping when I switch the file source in FFmpeg, or can FFmpeg deliver a continuous stream from multiple source files without stopping when one file finishes?
Remux your source files to TS, MPEG, or another "concatable" format. Then you can either use ffmpeg's concat protocol or just "cat" the files yourself.
I found something like this; it may be useful for you:
I managed to stream a static playlist of videos by using a named pipe for each video (e.g. vid1.mp4 -> pipe1, vid2.mp4 -> pipe2, etc.). Then I write them into a single named pipe called "stream" with cat pipe1 pipe2 pipe3 > stream, and I use that "stream" pipe as the input from which FFmpeg publishes.
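For reference, a sketch of the TS route mentioned above (untested, file names assumed): remux each .mov to MPEG-TS once, then feed them to FFmpeg as a single input via the concat protocol:
ffmpeg -i file1.mov -c:v libx264 -c:a aac -f mpegts file1.ts
ffmpeg -i file2.mov -c:v libx264 -c:a aac -f mpegts file2.ts
ffmpeg -re -i "concat:file1.ts|file2.ts" -c copy -bsf:a aac_adtstoasc -f flv rtmp://localhost/livepkgr/livestream
Because the two TS files arrive as one continuous input, FMS should see a single uninterrupted stream instead of one stream per file.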
I need an encoder that can convert mp3 files to HE-AAC (aka aac+).
So far the best one I have found is the Nero AAC encoder.
I have two problems with it:
- It accepts only one input format: WAV. It is a little slow to transform mp3 files to WAV and then to HE-AAC.
- Its license is free only for non-commercial use.
Too bad FFmpeg does not support HE-AAC...
There is a commercial solution, On2 Flix, but it seems like overkill for the simple task I need to do.
Nero AAC is the only one as far as I know. Even if FAAC supported HE-AAC it would be useless, since as an encoder it's pretty awfully designed and its quality is not even competitive with LAME, let alone a good AAC encoder.
Kostya on the FFmpeg team is currently working on an AAC encoder, but it has a long way to go; it's not ready for prime time with LC-AAC, let alone HE-AAC (it's not even committed to the repository yet). The first step before anything else will be to get the FFmpeg decoder to support HE-AAC; currently it can only be decoded through FAAD.
I don't believe there is any HE-AAC encoder on any platform with a more permissive license than Nero's at this point in time.
I've been using neroAacEnc for quite a while now, and I'm largely satisfied with the results. If you're on Linux, making an .AAC file out of an .MP3 (or whatever else) is quite easy; all you need is a small wrapper script that takes care of decoding into .WAV and, after encoding, removes the .WAV file.
Be advised: converting from one lossy encoding to another further reduces quality. So if you can live with .MP3 and you don't have lossless sources, you had better stick with them.
Here's a small script that converts from .FLAC to .AAC; it accepts only .FLAC files as arguments:
#!/bin/zsh
for file in "${argv[@]}"; do
flac -d "${file}"  # decode FLAC to WAV
neroAacEnc -q 0.6 -if "${file%%.flac}.wav" -of "${file%%.flac}.aac"  # encode the WAV
rm "${file%%.flac}.wav"  # remove the intermediate WAV
done
This script is sequential, but it can be easily made into a multithreaded script.
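Since the question is about .MP3 sources: an untested variant of the same idea, assuming lame is installed to do the decoding (and relying on neroAacEnc switching to the HE-AAC profile at low -q values, as far as I remember), could look like this:
#!/bin/zsh
for file in "${argv[@]}"; do
lame --decode "${file}" "${file%%.mp3}.wav"  # decode MP3 to WAV
neroAacEnc -q 0.4 -if "${file%%.mp3}.wav" -of "${file%%.mp3}.aac"
rm "${file%%.mp3}.wav"  # remove the intermediate WAV
done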
There is an encoder called accplus, which is under the GNU license, available here.
Another encoder: mp4tools
I have no idea about the quality, but I just found
enhAacPlusEnc