I found some DirectShow filters that do text overlay, but they always build this graph:
source mpeg2 (only video) -> mpeg2 decoder -> overlay -> (some encoder) ... -> file writer
Is it possible in DirectShow (even with a third-party filter) to add a text overlay without decoding the MPEG-2 stream?
source mpeg2 (only video) -> overlay -> file writer
Because the encoding process is very CPU-intensive (I have to process about 6 or 8 videos in real time), and writing decoded files without compression takes about 170 MB (320x240) every 2 minutes per file.
Thanks
You can't get the overlay burned into the video without decoding the video first. But you could have a text stream in the file, which is rendered and overlaid at playback time. A custom filter for decoding would be the easiest approach; you would implement IStreamBuilder on the custom filter's output pin to connect it to a VMR secondary input when building the graph. Or you could encode it in a recognised caption format and then choose a player that supports that format.
G
I'm attempting to stream a H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG4 stream (MFCreateFMPEG4MediaSink with MFTranscodeContainerType_FMPEG4, MF_LOW_LATENCY and MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS enabled). The stream is then connected to a web server through IMFByteStream.
Streaming of the H.264 video works fine when it's being consumed by a <video src=".."/> tag. However, the resulting latency is ~2sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm experimenting with Media Source Extensions (MSE) for programmatic control over the in-browser streaming. Chrome does, however, fail with the following error when consuming the same MPEG4 stream through MSE:
Failure parsing MP4: TFHD base-data-offset not allowed by MSE. See
https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing
Below is an mp4dump of a moof/mdat fragment in the MPEG4 stream. It clearly shows that the tfhd contains an "illegal" base data offset parameter:
[moof] size=8+200
  [mfhd] size=12+4
    sequence number = 3
  [traf] size=8+176
    [tfhd] size=12+16, flags=1
      track ID = 1
      base data offset = 36690
    [trun] size=12+136, version=1, flags=f01
      sample count = 8
      data offset = 0
[mdat] size=8+1624
I'm using Chrome 65.0.3325.181 (Official Build) (32-bit), running on Win10 version 1709 (16299.309).
Is there any way of generating an MSE-compatible H.264/MPEG4 video stream using Media Foundation?
Status Update:
Based on roman-r's advice, I managed to fix the problem myself by intercepting the generated MPEG4 stream and performing the following modifications:
Modify the Track Fragment Header box (tfhd):
  remove the base_data_offset parameter (reduces stream size by 8 bytes)
  set the default-base-is-moof flag
Add the missing Track Fragment Decode Time box (tfdt) (increases stream size by 20 bytes):
  set the baseMediaDecodeTime parameter
Modify the Track Fragment Run box (trun):
  adjust the data_offset parameter
The field descriptions are documented in https://www.iso.org/standard/68960.html (free download).
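For illustration, here is a minimal sketch of how the tfhd part of such an interceptor could look. It assumes the bytes of one complete moof box are in memory; the box layouts follow ISO/IEC 14496-12, but the function and helper names are made up, and the companion fixes (inserting the tfdt, adjusting the trun data_offset and the enclosing box sizes) are only noted in comments.

#include <cstdint>
#include <string>
#include <vector>

static uint32_t ReadU32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8) | uint32_t(p[3]);
}

static void WriteU32(uint8_t* p, uint32_t v) {
    p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
    p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
}

// Walks the boxes inside one moof and rewrites the tfhd: drops the
// base-data-offset field (flag 0x000001) and sets default-base-is-moof
// (flag 0x020000). Returns the patched buffer, 8 bytes shorter when the
// offset field was present.
std::vector<uint8_t> PatchTfhd(std::vector<uint8_t> moof) {
    for (size_t pos = 0; pos + 8 <= moof.size();) {
        uint32_t size = ReadU32(&moof[pos]);
        std::string type(reinterpret_cast<const char*>(&moof[pos + 4]), 4);
        if (type == "moof" || type == "traf") {
            pos += 8;                                    // descend into container boxes
            continue;
        }
        if (type == "tfhd") {
            uint32_t vflags = ReadU32(&moof[pos + 8]);   // version byte + 24-bit flags
            if (vflags & 0x000001) {                     // base-data-offset present
                moof.erase(moof.begin() + pos + 16,      // the 8-byte field follows track_ID
                           moof.begin() + pos + 24);
                vflags &= ~0x000001u;
                WriteU32(&moof[pos], size - 8);          // shrink the tfhd itself
                // NOTE: the enclosing traf and moof sizes and the trun data_offset
                // must also be reduced by 8, and a tfdt box carrying
                // baseMediaDecodeTime must be inserted (not shown here).
            }
            vflags |= 0x020000;                          // default-base-is-moof
            WriteU32(&moof[pos + 8], vflags);
            break;
        }
        if (size < 8) break;                             // malformed box; stop scanning
        pos += size;                                     // skip leaf boxes we don't touch
    }
    return moof;
}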
Switching to MSE-based video streaming reduced the latency from ~2.0 to 0.7 sec. The latency was further reduced to 0-1 frames by calling IMFSinkWriter::NotifyEndOfSegment after each IMFSinkWriter::WriteSample call.
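As a minimal sketch (assuming an already-configured IMFSinkWriter and an encoded IMFSample), that per-sample flush looks roughly like this:

#include <mfidl.h>
#include <mfreadwrite.h>

// Ask the sink writer to finalize the current segment right after each
// encoded sample so the moof/mdat pair is emitted immediately.
HRESULT WriteSampleLowLatency(IMFSinkWriter* writer, DWORD streamIndex,
                              IMFSample* sample) {
    HRESULT hr = writer->WriteSample(streamIndex, sample);
    if (SUCCEEDED(hr))
        hr = writer->NotifyEndOfSegment(streamIndex);
    return hr;
}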
There's a sample implementation available at https://github.com/forderud/AppWebStream
I was getting the same error (Failure parsing MP4: TFHD base-data-offset not allowed by MSE) when trying to play an fmp4 via MSE. The fmp4 had been created from an mp4 using the following ffmpeg command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov myfmp4video.mp4
Based on this question I was able to find out that, to get the fmp4 working in Chrome, I had to add the "default_base_moof" flag. So, after creating the fmp4 with the following command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof myfmp4video.mp4
I was able to successfully play the video using Media Source Extensions.
This Mozilla article helped me find the missing flag:
https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API/Transcoding_assets_for_MSE
The 0.7 sec latency mentioned in your Status Update is caused by Media Foundation's MFTranscodeContainerType_FMPEG4 containerizer, which (for an unknown reason) gathers roughly 1/3 second of frames and outputs them as a single MP4 moof/mdat box pair. This means that, at 60 FPS, you need to wait 19 frames before getting any output from MFTranscodeContainerType_FMPEG4.
To output a single MP4 moof/mdat pair per frame, simply lie that MF_MT_FRAME_RATE is 1 FPS (or any rate whose frame duration is longer than 1/3 sec). To play the video at the correct speed, use Media Source Extensions' <video>.playbackRate, or better, update the timescale (i.e. multiply it by the real FPS) of the mvhd and mdhd boxes in your MP4 stream interceptor to get a correctly timed MP4 stream.
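A rough sketch of that media-type setup, assuming an existing sink writer targeting H.264 (the bitrate and other attribute values besides the frame rate are illustrative only):

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfuuid.lib")

// Advertise a fake 1 FPS output frame rate so that the fMP4 containerizer
// flushes one moof/mdat pair per sample. The real frame rate is restored by
// patching the mvhd/mdhd timescale in the intercepted stream (see above).
HRESULT AddLowLatencyVideoStream(IMFSinkWriter* writer, UINT32 width,
                                 UINT32 height, DWORD* streamIndex) {
    IMFMediaType* type = nullptr;
    HRESULT hr = MFCreateMediaType(&type);
    if (SUCCEEDED(hr)) hr = type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    if (SUCCEEDED(hr)) hr = type->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
    if (SUCCEEDED(hr)) hr = type->SetUINT32(MF_MT_AVG_BITRATE, 4000000);       // illustrative
    if (SUCCEEDED(hr)) hr = type->SetUINT32(MF_MT_INTERLACE_MODE,
                                            MFVideoInterlace_Progressive);
    if (SUCCEEDED(hr)) hr = MFSetAttributeSize(type, MF_MT_FRAME_SIZE, width, height);
    if (SUCCEEDED(hr)) hr = MFSetAttributeRatio(type, MF_MT_FRAME_RATE, 1, 1); // the "lie"
    if (SUCCEEDED(hr)) hr = writer->AddStream(type, streamIndex);
    if (type) type->Release();
    return hr;
}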
Doing that, the latency can be squeezed to under 20 ms. This is barely noticeable when you view the output side by side on localhost in chains such as Unity (research) -> NvEnc -> MFTranscodeContainerType_FMPEG4 -> WebSocket -> Chrome Media Source Extensions display.
Note that MFTranscodeContainerType_FMPEG4 still introduces a one-frame delay (1st frame in, no output; 2nd frame in, 1st frame out; ...), hence the 20 ms latency at 60 FPS. The only solution to that seems to be writing your own fMP4 containerizer, but that is an order of magnitude more complex than intercepting Media Foundation's MP4 streams.
The problem was solved by following roman-r's advice and modifying the generated MPEG4 stream. See the answer above.
Another way to do this is to again use the same code @Fredrik mentioned, but write my own IMFByteStream and check the chunks written to it.
FFmpeg writes the atoms almost one at a time, so you can check the atom name and apply the modifications. It is the same approach. I wish there was an MSE-compliant Windows sinker.
Is there one that can generate .ts files for HLS?
I have extracted the audio data from an .m4a file using the mp4v2 library (sample by sample). Does this library have a function that decodes the data? Does anybody have experience with this library and can provide some help?
The documentation says:
MP4ReadSample function reads the specified sample from the specified track.
Typically this sample is then decoded in a codec dependent fashion and
rendered in an appropriate fashion.
I am interested in decoding the output.
Thanks in advance.
You tagged MP4 (video data) and M4A (audio data). Since you are extracting from M4A, I can only imagine you actually have either AAC or MP3 audio data.
Each extracted sample (its bytes) is an audio frame.
To make a playable MP3 file: Simply join all MP3 frames' bytes together. Save as .mp3 to play later.
To make a playable AAC file: For each AAC frame, first create an ADTS header (7 bytes) followed by that frame's data. You can test your header bytes here (the site shows what your byte values mean). When all your AAC frames each begin with an ADTS header, simply save as .aac to play later using some audio player code.
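A minimal sketch of that ADTS wrapping, assuming AAC-LC at 44.1 kHz stereo (those three values are placeholders and must come from your track's actual decoder configuration):

#include <cstdint>
#include <cstring>
#include <vector>

// Prepends a 7-byte ADTS header to one raw AAC frame.
std::vector<uint8_t> WrapAacFrameInAdts(const uint8_t* frame, size_t frameLen) {
    const int profile = 2;        // AAC-LC audio object type (assumption)
    const int freqIndex = 4;      // 44100 Hz (assumption)
    const int channels = 2;       // stereo (assumption)
    const size_t adtsLen = frameLen + 7;          // frame length includes header

    uint8_t hdr[7];
    hdr[0] = 0xFF;                                        // syncword (high bits)
    hdr[1] = 0xF1;                                        // syncword, MPEG-4, no CRC
    hdr[2] = uint8_t(((profile - 1) << 6) | (freqIndex << 2) | (channels >> 2));
    hdr[3] = uint8_t(((channels & 0x3) << 6) | ((adtsLen >> 11) & 0x3));
    hdr[4] = uint8_t((adtsLen >> 3) & 0xFF);
    hdr[5] = uint8_t(((adtsLen & 0x7) << 5) | 0x1F);      // buffer fullness = 0x7FF
    hdr[6] = 0xFC;                                        // one raw data block

    std::vector<uint8_t> out(adtsLen);
    std::memcpy(out.data(), hdr, 7);
    std::memcpy(out.data() + 7, frame, frameLen);
    return out;
}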
I have researched everything and the answer is NO. There is no decoder in mp4/mp4v2 libraries. One has to use some other library to do that.
How do I create an MP4 file from raw H.264 data that I am receiving from a live streamer (no predefined duration or moov atom)? Unfortunately I can't use FFmpeg; I have to write my own code using live555. Can somebody help me with the MP4 container and how the H.264 data has to be pushed into it? Thank you in advance :)
There are several operations to be made to store H.264 raw data into MP4, among them:
create box structures, in particular the moov box
store the NAL units in an mdat box, possibly storing non-VCL NAL units in the moov box
replace start codes with length fields (see the sketch after this answer)
It also depends on your requirements. If you want to do the conversion on-the-fly, you have to use fragmented mp4. If you can store the H264 and then do the conversion, you may use non-fragmented mp4. In particular using MP4Box:
MP4Box -add file.264 file.mp4
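For the start-code replacement step in particular, here is a sketch assuming the incoming raw stream is in Annex B format (the function name is made up; building the moov/avcC boxes themselves is not shown):

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Converts Annex B data (NAL units preceded by 00 00 01 / 00 00 00 01 start
// codes) into AVCC-style NAL units prefixed with a 4-byte big-endian length,
// which is what the mdat payload of an MP4 sample expects.
std::vector<uint8_t> AnnexBToAvcc(const uint8_t* data, size_t size) {
    std::vector<std::pair<size_t, size_t>> scs;   // (start-code offset, start-code length)
    for (size_t i = 0; i + 3 <= size;) {
        if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1) {
            scs.push_back({i, 3});
            i += 3;
        } else if (i + 4 <= size && data[i] == 0 && data[i + 1] == 0 &&
                   data[i + 2] == 0 && data[i + 3] == 1) {
            scs.push_back({i, 4});
            i += 4;
        } else {
            ++i;
        }
    }
    std::vector<uint8_t> out;
    for (size_t n = 0; n < scs.size(); ++n) {
        size_t begin = scs[n].first + scs[n].second;           // NAL payload start
        size_t end = (n + 1 < scs.size()) ? scs[n + 1].first   // up to next start code
                                          : size;
        uint32_t len = uint32_t(end - begin);
        out.push_back(uint8_t(len >> 24));
        out.push_back(uint8_t(len >> 16));
        out.push_back(uint8_t(len >> 8));
        out.push_back(uint8_t(len));
        out.insert(out.end(), data + begin, data + end);
    }
    return out;
}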
I am building an application that needs to read H264 and AC3 streams from a MP4 container and mux them into a single ISMV file. The source MP4 file contains a number of video streams of different bitrates and a number of audio streams of different languages.
When I call IGraphBuilder::AddSourceFilter for my source file, I get a filter that has just two output pins: "Video" and "Audio". How do I choose which particular stream (e.g.: which bitrate of a video stream) to use for "Video" and "Audio"?
Do I have to instantiate multiple source filters to read that file and mux them into ISMV, or am I missing something?
That depends on the demux you are using for MP4. I don't think there is a stock MP4 demux, so you have probably got one as part of a decoder package, and that is acting as both source and demux.
You can try the free open-source MP4 demux at www.gdcl.co.uk/mpeg4. You will need to AddSourceFilter (getting a file source with a single output) and then explicitly connect the source output to the demux input. Then you will have output pins corresponding to all enabled streams that the demux understands, and you can select the ones you want.
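A sketch of that graph construction under the stated approach: add the file as a source filter, add the demux, and connect the source's single output pin to the demux input pin. The CLSID argument is a placeholder for whatever CLSID the demux you install registers, and the pin lookup is reduced to a minimal helper.

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Returns the first pin of the requested direction; caller releases it.
static IPin* FirstPin(IBaseFilter* filter, PIN_DIRECTION wanted) {
    IEnumPins* pins = nullptr;
    IPin* pin = nullptr;
    if (FAILED(filter->EnumPins(&pins))) return nullptr;
    while (pins->Next(1, &pin, nullptr) == S_OK) {
        PIN_DIRECTION dir;
        pin->QueryDirection(&dir);
        if (dir == wanted) break;
        pin->Release();
        pin = nullptr;
    }
    pins->Release();
    return pin;
}

HRESULT BuildSourceToDemux(IGraphBuilder* graph, const wchar_t* path,
                           const CLSID& demuxClsid, IBaseFilter** demuxOut) {
    IBaseFilter* source = nullptr;
    HRESULT hr = graph->AddSourceFilter(path, L"Source", &source);
    if (FAILED(hr)) return hr;

    IBaseFilter* demux = nullptr;
    hr = CoCreateInstance(demuxClsid, nullptr, CLSCTX_INPROC_SERVER,
                          IID_PPV_ARGS(&demux));
    if (SUCCEEDED(hr)) hr = graph->AddFilter(demux, L"MP4 Demux");

    if (SUCCEEDED(hr)) {
        IPin* out = FirstPin(source, PINDIR_OUTPUT);
        IPin* in  = FirstPin(demux,  PINDIR_INPUT);
        hr = (out && in) ? graph->Connect(out, in) : E_FAIL;
        if (out) out->Release();
        if (in)  in->Release();
    }
    source->Release();
    if (SUCCEEDED(hr)) *demuxOut = demux;   // enumerate its output pins to pick streams
    else if (demux) demux->Release();
    return hr;
}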
G
I'm trying to figure out the proper technique for skipping ahead or seeking within an mp4 (or m4a) audio file while playing it using the AudioFileStream and AudioQueue APIs on the iPhone.
If I pass the complete mp4 header (up to the mdat box) to an open AudioFileStream, the underlying audio file type is properly identified (in my case, AAC) and when I then pass the actual mdat data portion of the file, the AudioFileStream correctly begins generating audio packets and these can be sent to the AudioQueue and playback works.
However, if I try a random-access approach to playing back the file, I can't seem to get it to work properly unless I always send the first frame of the mdat box to the AudioFileStream. If, instead, after sending the mp4 header to the AudioFileStream, I attempt to skip ahead to a later frame in the mdat by first calling AudioFileStreamSeek() and then passing the data for the associated packets, the AudioFileStream appears to generate audio packets, but when I pass these on to the AudioQueue and call AudioQueuePrime(), I always get a 'nope' error returned.
My question is this: am I always required to at least pass in the first packet of the mdat box before attempting to do random playback of other packets in the mp4 file?
I can't seem to find any documentation on doing random playback of sections of an mp4 file while using an AudioFileStream and an AudioQueue. I've found Apple's QuickTime File Format pdf which describes the technique of randomly seeking within an mp4 file, but it's just a high level description and doesn't have any mention of using specific APIs (such as AudioFileStream).
Thanks for any insights.
It turns out the approach I was using with AudioFileStreamSeek() is valid; I just wasn't sending the full initial mp4 header to the AudioFileStreamParseBytes() routine.
The problem was I had assumed the packets began immediately after the mdat box tag. By examining the data offset value (kAudioFileStreamProperty_DataOffset) returned by the AudioFileStream Property Listener callback, I discovered the true start of the packet data was 18 bytes later.
These 18 bytes are considered part of the initial mp4 header that must be sent to the AudioFileStream parser before sending the data of arbitrary packets after calls to AudioFileStreamSeek().
If these extra bytes are left out, then the AudioQueuePrime() call will always fail with a 'nope' error even though you may have sent valid parsed audio packets to the AudioQueue.
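For reference, a minimal sketch of reading that data offset inside the property-listener callback, as described above (only this one property is handled; passing the result out through the client-data pointer is an assumption of this example):

#include <AudioToolbox/AudioToolbox.h>

static void PropertyListener(void* clientData,
                             AudioFileStreamID stream,
                             AudioFileStreamPropertyID propertyID,
                             AudioFileStreamPropertyFlags* flags) {
    if (propertyID == kAudioFileStreamProperty_DataOffset) {
        SInt64 dataOffset = 0;
        UInt32 size = sizeof(dataOffset);
        OSStatus err = AudioFileStreamGetProperty(
            stream, kAudioFileStreamProperty_DataOffset, &size, &dataOffset);
        if (err == noErr) {
            // dataOffset is the true start of the packet data; everything
            // before it (including the bytes after the 'mdat' tag) is header
            // that must be parsed before calling AudioFileStreamSeek().
            SInt64* target = static_cast<SInt64*>(clientData);
            *target = dataOffset;
        }
    }
}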