Play or Stream .ts on Unreal Engine Electra Media Player

I'm currently trying to play a stream in my Unreal project, but it seems that .ts is not supported, because I always get this error:
LogElectraPlayer: Error: [000002EBF74A7900][000002EBE9E4B450] MP4 parser: Invalid filler data size of 0x47400010 to read at offset 0x8 in box 0x0000b00d (size 0x47400018, offset 0x0, dataoffset 0x8)
LogElectraPlayer: Error: [000002EBF74A7900][000002EBE9E4B450] HLS fmp4 reader: Failed to download segment "our own url.ts"
Any idea how to get Unreal to support .ts? Their documentation says the Electra player can handle HLS streaming.
Our encoder/decoder apparently can't switch from .ts to .mp4/.fmp4 segments; we're using Brightcove.
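A hint for anyone hitting the same error: the 0x474000… values in the parser log are runs of 0x47, the MPEG-TS packet sync byte, which suggests Electra's fMP4 reader is being handed raw TS segments. If you can insert a repackaging step, ffmpeg can remux a TS stream into fMP4 HLS segments without re-encoding; a minimal sketch (the segment duration is arbitrary):
ffmpeg -i input.ts -c copy -f hls -hls_segment_type fmp4 -hls_time 6 out.m3u8
This produces an initialization segment plus .m4s media segments, which an fMP4-only HLS reader should accept.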

Related

ffprobe Error "Could not find codec parameters..."

My MP4 file issue is a bit complicated, so I have created a very simple scenario to help me diagnose it step by step.
I can create an MP4 file that plays flawlessly (I inspected its box structure with Mp4Explorer).
For debugging purposes, I removed the media data box (mdat) and all the stts, stsz, stss, stsc, and stco boxes, and kept everything else the same. This means the MP4 file has no media data; it just has some metadata.
This file is named error.mp4. If I run:
ffprobe "error.mp4"
I get the following error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000286e763ef00] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1280x800): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Could anyone shed some light on this error? Why is it "Could not find codec parameters for stream 0"? If I remove the video track and leave the audio track as is, ffprobe will be happy with no errors.
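For what it's worth: with H.264 in an avc1 box, ffprobe generally fills in the pixel format by decoding at least one frame, and with the sample tables and mdat stripped there is nothing left to decode, so raising the probing limits can't help in this stripped-down file. For real files that emit the same warning, the options the message suggests would look something like this:
ffprobe -analyzeduration 10M -probesize 10M error.mp4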

Convert byte[] to Video (mp4 or any other playable format)

I am creating a bot that will record Microsoft Teams live sessions. Audio recording is working fine, but I am facing problems generating the video file. The process I am following is to convert the video data into a byte array and then write that data to a file in a video format.
I am adding the code snippets I have tried so far.
1. Stream videoStream = new FileStream(videoFilePath, FileMode.Create);
BinaryWriter videoStreamWriter = new BinaryWriter(videoStream);
videoStreamWriter.Write(videoBytesArray, 0, videoBytesArray.Length);
videoStreamWriter.Close();
2. System.IO.File.WriteAllBytes(videoFilePath, videoBytesArray);
The files generated by the above code snippets are in an unsupported format.
It may be because of the format of the data received from the session.
I am receiving the data through the local media session's video socket on the VideoMediaReceived event (ICall.ILocalMediaSession.VideoSockets). The video color format of the data that the socket receives is H264.
I encountered a similar problem when creating the audio file; for that, I used the WaveFormat package to create the file.
So, is there any library/method to convert the byte array to a video file of any format?
@Murtaza, you can try this and see if it helps. If the byte array is already an MP4-encoded video stream, you can simply serialize it to disk with the .mp4 extension.
Stream t = new FileStream("video.mp4", FileMode.Create);
BinaryWriter b = new BinaryWriter(t);
b.Write(videoData);
t.Close();
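If, however, the socket is handing you raw H.264 frames rather than an MP4 byte stream (the H264 color format mentioned above suggests exactly that), the bytes written this way form a bare elementary stream that most players won't open as .mp4. One option, assuming the frames are plain Annex B NAL units: write them to a file named video.h264 instead, then wrap that stream in a container without re-encoding (the 30 FPS value here is a guess; use the session's real frame rate):
ffmpeg -framerate 30 -i video.h264 -c copy video.mp4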

MFCreateFMPEG4MediaSink does not generate MSE-compatible MP4

I'm attempting to stream a H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG4 stream (MFCreateFMPEG4MediaSink with MFTranscodeContainerType_FMPEG4, MF_LOW_LATENCY and MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS enabled). The stream is then connected to a web server through IMFByteStream.
Streaming of the H.264 video works fine when it's being consumed by a <video src=".."/> tag. However, the resulting latency is ~2sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm experimenting with Media Source Extensions (MSE) for programmatic control over the in-browser streaming. Chrome does, however, fail with the following error when consuming the same MPEG4 stream through MSE:
Failure parsing MP4: TFHD base-data-offset not allowed by MSE. See
https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing
Here is an mp4dump of a moof/mdat fragment in the MPEG4 stream. It clearly shows that the tfhd contains an "illegal" base data offset parameter:
[moof] size=8+200
  [mfhd] size=12+4
    sequence number = 3
  [traf] size=8+176
    [tfhd] size=12+16, flags=1
      track ID = 1
      base data offset = 36690
    [trun] size=12+136, version=1, flags=f01
      sample count = 8
      data offset = 0
[mdat] size=8+1624
I'm using Chrome 65.0.3325.181 (Official Build) (32-bit), running on Win10 version 1709 (16299.309).
Is there any way of generating a MSE-compatible H.264/MPEG4 video stream using Media Foundation?
Status Update:
Based on roman-r's advice, I managed to fix the problem myself by intercepting the generated MPEG4 stream and performing the following modifications:
Modify the Track Fragment Header box (tfhd):
- remove the base_data_offset parameter (reduces stream size by 8 bytes)
- set the default-base-is-moof flag
Add the missing Track Fragment Decode Time box (tfdt) (increases stream size by 20 bytes):
- set the baseMediaDecodeTime parameter
Modify the Track Fragment Run box (trun):
- adjust the data_offset parameter
The field descriptions are documented in https://www.iso.org/standard/68960.html (free download).
Switching to MSE-based video streaming reduced the latency from ~2.0 to 0.7 sec. The latency was further reduced to 0-1 frames by calling IMFSinkWriter::NotifyEndOfSegment after each IMFSinkWriter::WriteSample call.
There's a sample implementation available at https://github.com/forderud/AppWebStream
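The NotifyEndOfSegment trick is a one-liner at the point where samples are written; a sketch in C++ (pWriter and streamIndex are whatever your existing sink-writer setup already uses):
HRESULT WriteLowLatencySample(IMFSinkWriter* pWriter, DWORD streamIndex, IMFSample* pSample)
{
    HRESULT hr = pWriter->WriteSample(streamIndex, pSample);
    if (SUCCEEDED(hr))
        hr = pWriter->NotifyEndOfSegment(streamIndex); // force a moof/mdat pair out now
    return hr;
}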
I was getting the same error (Failure parsing MP4: TFHD base-data-offset not allowed by MSE) when trying to play an fmp4 via MSE. The fmp4 had been created from an mp4 using the following ffmpeg command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov myfmp4video.mp4
Based on this question I was able to find out that, to have the fmp4 working in Chrome, I had to add the "default_base_moof" flag. So, after creating the fmp4 with the following command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof myfmp4video.mp4
I was able to successfully play the video using Media Source Extensions.
This Mozilla article helped me find that missing flag:
https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API/Transcoding_assets_for_MSE
The 0.7 sec latency mentioned in your Status Update is caused by Media Foundation's MFTranscodeContainerType_FMPEG4 containerizer, which (for unknown reasons) gathers roughly 1/3 second of frames and outputs them in one MP4 moof/mdat box pair. This means that at 60 FPS you need to wait 19 frames before getting any output from MFTranscodeContainerType_FMPEG4.
To output a single MP4 moof/mdat pair per frame, simply lie that MF_MT_FRAME_RATE is 1 FPS (or any rate whose frame duration exceeds the ~1/3 sec batching window). To play the video at the correct speed, use Media Source Extensions' <video>.playbackRate or, better, update the timescale (i.e. multiply it by the real FPS) in the mvhd and mdhd boxes in your MP4 stream interceptor to get a correctly timed MP4 stream.
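Setting the fake frame rate is a single attribute on the output media type; a sketch (pOutType standing in for the IMFMediaType handed to the fMP4 sink is an assumption of this example):
#include <mfapi.h>
// Claim 1 FPS so MFTranscodeContainerType_FMPEG4 flushes one moof/mdat
// pair per sample instead of batching ~1/3 s of frames. Compensate by
// patching the mvhd/mdhd timescale as described above.
HRESULT hr = MFSetAttributeRatio(pOutType, MF_MT_FRAME_RATE, 1, 1);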
Doing that, the latency can be squeezed to under 20 ms. This is barely perceptible when you see the output side by side on localhost in chains such as Unity (research) -> NvEnc -> MFTranscodeContainerType_FMPEG4 -> WebSocket -> Chrome Media Source Extensions display.
Note that MFTranscodeContainerType_FMPEG4 still introduces a 1-frame delay (1st frame in, no output; 2nd frame in, 1st frame out; ...), hence the 20 ms latency at 60 FPS. The only solution to that seems to be writing your own FMPEG4 containerizer, but that is an order of magnitude more complex than intercepting Media Foundation's MP4 streams.
The problem was solved by following roman-r's advice and modifying the generated MPEG4 stream. See the answer above.
Another way to do this is again using the same code @Fredrik mentioned, but I write my own IMFByteStream and check the chunks written to it.
FFMpeg writes the atoms almost one at a time, so you can check the atom name and do the mods. It is the same thing. I wish there was an MSE-compliant Windows sink.
Is there one that can generate .ts files for HLS?

mp4v2 extracting and decoding data from .M4A file

I have extracted the audio data from an .m4a file using the mp4v2 library (sample by sample). Does this library have a function that decodes the data? Does anybody have experience with this library and can provide some help?
The documentation says:
MP4ReadSample function reads the specified sample from the specified track.
Typically this sample is then decoded in a codec dependent fashion and
rendered in an appropriate fashion.
I am interested in decoding the output.
Thanks in advance.
You tagged MP4 (video data) and M4A (audio data). Since you are extracting from M4A, I can only imagine you actually have either AAC or MP3 audio data.
Each extracted sample (bytes) is an audio frame.
To make a playable MP3 file: simply join all the MP3 frames' bytes together and save as .mp3 to play later.
To make a playable AAC file: for each AAC frame, first write an ADTS header (7 bytes) followed by that frame's data. You can test your header bytes here (the site shows what your byte values mean). When all your AAC frames each begin with an ADTS header, simply save as .aac to play later using some audio player code.
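For illustration, here is one way the 7-byte ADTS header can be assembled; a C++ sketch that assumes AAC LC at 44100 Hz stereo (adjust the profile, frequency index, and channel configuration to match your stream's decoder config):
#include <cstddef>
#include <cstdint>

// Build the 7-byte ADTS header that must precede each raw AAC frame.
void MakeAdtsHeader(uint8_t hdr[7], size_t aacFrameLen)
{
    const int profile = 2;                   // AAC LC (Audio Object Type)
    const int freqIdx = 4;                   // 44100 Hz
    const int chanCfg = 2;                   // stereo
    const size_t frameLen = aacFrameLen + 7; // payload plus this header

    hdr[0] = 0xFF;                           // syncword 0xFFF...
    hdr[1] = 0xF1;                           // ...plus MPEG-4, layer 0, no CRC
    hdr[2] = (uint8_t)(((profile - 1) << 6) | (freqIdx << 2) | (chanCfg >> 2));
    hdr[3] = (uint8_t)(((chanCfg & 3) << 6) | (frameLen >> 11));
    hdr[4] = (uint8_t)((frameLen >> 3) & 0xFF);
    hdr[5] = (uint8_t)(((frameLen & 7) << 5) | 0x1F); // buffer fullness (high bits)
    hdr[6] = 0xFC;                           // buffer fullness (low bits), 1 AAC frame
}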
I have researched everything and the answer is no: there is no decoder in the mp4/mp4v2 libraries. One has to use some other library to do that.

DirectShow: select a source video stream from an MP4 container

I am building an application that needs to read H264 and AC3 streams from an MP4 container and mux them into a single ISMV file. The source MP4 file contains a number of video streams at different bitrates and a number of audio streams in different languages.
When I call IGraphBuilder::AddSourceFilter for my source file, I get a filter that has just two output pins: "Video" and "Audio". How do I choose which particular stream (e.g.: which bitrate of a video stream) to use for "Video" and "Audio"?
Do I have to instantiate multiple source filters to read that file and mux them into ISMV, or am I missing something?
That depends on the demux you are using for MP4. I don't think there is a stock MP4 demux, so you have probably got one as part of a decoder package, and that is acting as both source and demux.
You can try the free open-source MP4 demux at www.gdcl.co.uk/mpeg4. You will need to AddSourceFilter (getting a file source with a single output) and then explicitly connect the source output to the demux input. Then you will have output pins corresponding to all enabled streams that the demux understands, and you can select the ones you want.
G
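A rough C++ sketch of the wiring described in that answer (clsidDemux stands in for the demux's real CLSID, and ConnectFilters is a hypothetical helper that connects the first free output pin to the first free input pin):
#include <dshow.h>

HRESULT ConnectFilters(IGraphBuilder*, IBaseFilter*, IBaseFilter*); // hypothetical helper, not shown

HRESULT BuildDemuxGraph(IGraphBuilder* pGraph, const wchar_t* mp4Path,
                        REFCLSID clsidDemux)
{
    // File source exposing a single byte-stream output pin.
    IBaseFilter* pSource = nullptr;
    HRESULT hr = pGraph->AddSourceFilter(mp4Path, L"Source", &pSource);
    if (FAILED(hr)) return hr;

    // Create the demux explicitly instead of relying on Intelligent Connect.
    IBaseFilter* pDemux = nullptr;
    hr = CoCreateInstance(clsidDemux, nullptr, CLSCTX_INPROC_SERVER,
                          IID_IBaseFilter, reinterpret_cast<void**>(&pDemux));
    if (SUCCEEDED(hr)) hr = pGraph->AddFilter(pDemux, L"MP4 Demux");

    // Source output -> demux input. The demux then exposes one output pin
    // per stream; connect only the bitrate/language pins you want muxed.
    if (SUCCEEDED(hr)) hr = ConnectFilters(pGraph, pSource, pDemux);

    if (pDemux) pDemux->Release();
    pSource->Release();
    return hr;
}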