File types used for MISB KLV Encoding

I'm curious as to what file types are used for Motion Imagery Standards Board (MISB) KLV (Key-Length-Value) encoding. I've read the documentation at the MISB site, which is quite large. It indicates, to my understanding, that MPEG-2 is usually used, so I tried to get an idea of what to look for in file extensions to recognise files that are capable of embedding KLV metadata.
My question is: if a file has an extension like *.ts or *.mpg, does that indicate potential KLV embedding? Are there any other types? Can an active video stream from a camera contain KLV?
Any response or elaboration is appreciated. Thanks ahead!

As the MISB docs state, only MPEG video streams are used for full motion video, and NITF images for high-resolution wide-area coverage.
Mostly MPEG-TS streams, AFAIK.
You can scan your files for strings like "KLVA" or some of the keys defined in the standard to quickly detect such files with high confidence.
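A minimal sketch of such a scan, assuming you just want a quick yes/no on a file. The 16-byte key below is the MISB ST 0601 UAS Datalink Local Set universal key, and "KLVA" is the SMPTE RP 217 registration identifier that MPEG-TS files often carry in the PMT; everything else (file handling, output) is illustrative:

#include <cstdio>
#include <cstring>
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

// Returns true if 'needle' occurs anywhere in 'hay' (naive scan, fine for a one-off check).
static bool contains(const std::vector<uint8_t>& hay, const uint8_t* needle, size_t n)
{
    if (hay.size() < n) return false;
    for (size_t i = 0; i + n <= hay.size(); ++i)
        if (std::memcmp(hay.data() + i, needle, n) == 0) return true;
    return false;
}

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    std::ifstream f(argv[1], std::ios::binary);
    std::vector<uint8_t> data((std::istreambuf_iterator<char>(f)),
                              std::istreambuf_iterator<char>());

    // SMPTE RP 217 registration identifier, typically seen in the MPEG-TS PMT.
    const uint8_t klva[4] = { 'K', 'L', 'V', 'A' };
    // MISB ST 0601 UAS Datalink Local Set 16-byte universal key.
    const uint8_t st0601[16] = { 0x06, 0x0E, 0x2B, 0x34, 0x02, 0x0B, 0x01, 0x01,
                                 0x0E, 0x01, 0x03, 0x01, 0x01, 0x00, 0x00, 0x00 };

    if (contains(data, klva, sizeof klva))     std::printf("found \"KLVA\" registration identifier\n");
    if (contains(data, st0601, sizeof st0601)) std::printf("found ST 0601 universal key\n");
    return 0;
}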

By scanning the titles of documents on the MISB site, one can glean that KLV can be encoded in AES3 serial digital audio streams, SMPTE 291 ancillary data packets, and SDP (Session Description Protocol) [streams]. A quick read of the guidance PDF also reveals that:
...MPEG-2 Transport Streams are a common multiplexing device for multiple data streams...
...but this does not limit KLV to only the MPEG-2 TS.
As a direct answer to your question: if the KLV stream is contained within an MPEG-2 Transport Stream, then yes - either *.ts or *.mpg files would qualify. (As would any other file capable of storing/containing an MPEG-2 Transport Stream.)
If you have further questions, reach out to me.

Related

RTSP Source Filter with GDCL MP4 Muxer incompatibility

I'm trying to use the GDCL MP4 Muxer with my RTSP source filter. They work fine together, except that after stopping the graph, the muxer doesn't finalize the file and write the required tables to the end of the file via the file writer (some parts are written, starting from moov, but not the time table values). When I try another RTSP source filter (whose source code I don't have), the table values are created with the GDCL MP4 Muxer.
But when I try Elecard's MP4 Muxer, it works fine with my RTSP source filter. So there is an incompatibility. I examined GDCL's source code but couldn't find what it was expecting from me. I already calculate and set timestamp values on the samples using the SetTime method, but GDCL still doesn't finalize the file. Is it caused by missing information, or a missing signal when the graph stops? What can the problem be, any ideas?
One thing you should be aware of regarding Geraint's MP4 mux is that it checks that incoming media samples have both a start and a stop time. You might be setting only .tStart/AM_SAMPLE_TIMEVALID, which still makes sense for video, but here it would be a problem.
So the samples have to carry a stop time, or you need to relax this check in the multiplexer code.
A typical symptom of the problem is that the generated files are empty or have zero duration.
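A minimal sketch of the source-filter-side fix, assuming you know each frame's duration; the helper name and parameters are illustrative, not GDCL API:

#include <streams.h>  // DirectShow base classes: IMediaSample, REFERENCE_TIME

// Stamp both a start AND a stop time on the sample before delivering it
// downstream. GDCL's mp4mux checks for both; a start time alone is not enough.
HRESULT StampSampleTimes(IMediaSample* pSample, REFERENCE_TIME rtStart,
                         REFERENCE_TIME frameDuration)
{
    REFERENCE_TIME rtStop = rtStart + frameDuration;  // 100 ns units
    return pSample->SetTime(&rtStart, &rtStop);       // both pointers must be non-NULL
}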

DirectShow: select a source video stream from an MP4 container

I am building an application that needs to read H.264 and AC-3 streams from an MP4 container and mux them into a single ISMV file. The source MP4 file contains a number of video streams of different bitrates and a number of audio streams in different languages.
When I call IGraphBuilder::AddSourceFilter for my source file, I get a filter that has just two output pins: "Video" and "Audio". How do I choose which particular stream (e.g.: which bitrate of a video stream) to use for "Video" and "Audio"?
Do I have to instantiate multiple source filters to read that file and mux them into ISMV, or am I missing something?
That depends on the demux you are using for MP4. I don't think there is a stock MP4 demux, so you have probably got one as part of a decoder package, and that is acting as both source and demux.
You can try the free open-source MP4 demux at www.gdcl.co.uk/mpeg4. You will need to AddSourceFilter (getting a file source with a single output) and then explicitly connect the source output to the demux input. Then you will have output pins corresponding to all enabled streams that the demux understands, and you can select the ones you want.
G
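For reference, a sketch of that wiring, assuming the GDCL demux is registered on the system. CLSID_GdclMp4Demux is a placeholder for the demux's real CLSID (see its registration in the gdcl.co.uk sources), and error handling is kept minimal:

#include <dshow.h>

// Helper: return the first pin of the given direction on a filter.
static IPin* GetFirstPin(IBaseFilter* pFilter, PIN_DIRECTION dir)
{
    IEnumPins* pEnum = nullptr;
    IPin* pPin = nullptr;
    if (FAILED(pFilter->EnumPins(&pEnum))) return nullptr;
    while (pEnum->Next(1, &pPin, nullptr) == S_OK) {
        PIN_DIRECTION d;
        pPin->QueryDirection(&d);
        if (d == dir) break;
        pPin->Release();
        pPin = nullptr;
    }
    pEnum->Release();
    return pPin;
}

HRESULT BuildDemuxGraph(IGraphBuilder* pGraph, LPCWSTR fileName,
                        const GUID& CLSID_GdclMp4Demux)  // placeholder CLSID
{
    // 1. Add a plain file source: one output pin carrying the raw byte stream.
    IBaseFilter* pSource = nullptr;
    HRESULT hr = pGraph->AddSourceFilter(fileName, L"Source", &pSource);
    if (FAILED(hr)) return hr;

    // 2. Create the MP4 demux and add it to the graph.
    IBaseFilter* pDemux = nullptr;
    hr = CoCreateInstance(CLSID_GdclMp4Demux, nullptr, CLSCTX_INPROC_SERVER,
                          IID_IBaseFilter, reinterpret_cast<void**>(&pDemux));
    if (SUCCEEDED(hr)) hr = pGraph->AddFilter(pDemux, L"MP4 Demux");

    // 3. Explicitly connect source output -> demux input. After this the demux
    //    exposes one output pin per stream it understands, and you connect
    //    only the ones you want to the muxer.
    if (SUCCEEDED(hr)) {
        IPin* pOut = GetFirstPin(pSource, PINDIR_OUTPUT);
        IPin* pIn  = GetFirstPin(pDemux,  PINDIR_INPUT);
        hr = (pOut && pIn) ? pGraph->ConnectDirect(pOut, pIn, nullptr) : E_FAIL;
        if (pOut) pOut->Release();
        if (pIn)  pIn->Release();
    }
    if (pDemux) pDemux->Release();
    pSource->Release();
    return hr;
}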

MIDI message: need help

How do I interpret dwParam1 from the midiInProc delegate as a MIDI status message like note-off, note-on, or control change?
No matter what I try, dwParam1 is 254, which doesn't equal note-off or anything else.
You won't necessarily receive note-offs from every input device. IIRC it is legal for a device to send a note-on with volume=0 as a substitute for note-off. Also a drum stream (from a drum machine and/or on MIDI channel 10) I believe commonly contains only note-ons, no note-offs.
Given that your question mentions dwParam1 and midiInProc, I'm assuming this is for Windows. When you receive MIM_DATA in your midiInProc, you can parse dwParam1 as follows:
For the status byte (command and channel), use LOBYTE(dwParam1).
For the first data byte, use HIBYTE(dwParam1).
If applicable, for the second data byte, use LOBYTE(HIWORD(dwParam1)).
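Putting those three lines together, a minimal sketch of a midiInProc callback (Windows, winmm), with the velocity-0 note-off convention from the first answer folded in; the printf output is just illustrative:

#include <windows.h>
#include <mmsystem.h>
#include <cstdio>
#pragma comment(lib, "winmm.lib")

void CALLBACK midiInProc(HMIDIIN hMidiIn, UINT wMsg, DWORD_PTR dwInstance,
                         DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    if (wMsg != MIM_DATA)
        return;                               // only short messages carry data in dwParam1

    BYTE status = LOBYTE(LOWORD(dwParam1));   // status byte: command + channel
    BYTE data1  = HIBYTE(LOWORD(dwParam1));   // first data byte (e.g. note number)
    BYTE data2  = LOBYTE(HIWORD(dwParam1));   // second data byte (e.g. velocity)

    BYTE command = status & 0xF0;             // top nibble selects the message type
    BYTE channel = status & 0x0F;             // bottom nibble is the channel (0-15)

    if (command == 0x90 && data2 == 0)        // note-on with velocity 0 == note-off
        command = 0x80;

    switch (command) {
    case 0x80: std::printf("note-off ch=%u note=%u vel=%u\n", channel, data1, data2); break;
    case 0x90: std::printf("note-on  ch=%u note=%u vel=%u\n", channel, data1, data2); break;
    case 0xB0: std::printf("control change ch=%u cc=%u val=%u\n", channel, data1, data2); break;
    default:   break;                         // system messages such as 0xFE (Active Sensing) land here
    }
}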
I'm not entirely sure what you are asking, but I think you are trying to figure out how to interpret MIDI data.
I suggest this resource:
http://www.midi.org/techspecs/midimessages.php
MIDI messages related to notes are differentiated by the first 4 bits, not by the whole byte. The last four bits of the first byte specify the channel.
The answer by @Conrad Albrecht is mostly right, but I wanted to chip in with an answer (instead of a comment), as I think the original poster is probably being confused by MIDI running status.
With running status, bytes which don't resemble normal MIDI status bytes carry the same status as the previous status byte you received. Because keeping the status byte unchanged is exactly what makes running status efficient, it is not only legal but very common to use MIDI note-on events with a velocity of 0 as a substitute for MIDI note-offs.
You should just interpret those bytes as the two data bytes of a normal MIDI note-on event.
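For completeness: as far as I know, the Windows driver resolves running status before midiInProc sees the data, so this only matters when you parse a raw MIDI byte stream yourself (a file, a serial port). A sketch of a raw-stream parser that handles it; the printf output is illustrative:

#include <cstdint>
#include <cstdio>
#include <vector>

// Number of data bytes that follow each channel-message status byte.
static size_t DataBytesFor(uint8_t status)
{
    switch (status & 0xF0) {
    case 0xC0:            // program change
    case 0xD0: return 1;  // channel pressure
    default:   return 2;  // note on/off, poly pressure, control change, pitch bend
    }
}

void ParseRawMidi(const std::vector<uint8_t>& bytes)
{
    uint8_t running = 0;  // last channel-message status byte seen
    size_t i = 0;
    while (i < bytes.size()) {
        uint8_t b = bytes[i];
        if (b >= 0xF8) { ++i; continue; }              // real-time (0xFE = Active Sensing): ignore
        if (b >= 0xF0) { running = 0; ++i; continue; } // system common cancels running status
                                                       // (SysEx payload handling omitted)
        uint8_t status;
        if (b & 0x80)      { status = b; running = b; ++i; } // explicit status byte
        else if (running)  { status = running; }             // data byte under running status
        else               { ++i; continue; }                // stray data byte before any status

        size_t need = DataBytesFor(status);
        if (i + need > bytes.size()) break;            // truncated message
        uint8_t d1 = bytes[i];
        uint8_t d2 = (need == 2) ? bytes[i + 1] : 0;
        i += need;

        uint8_t cmd = status & 0xF0, ch = status & 0x0F;
        if (cmd == 0x90 && d2 == 0) cmd = 0x80;        // velocity-0 note-on == note-off
        std::printf("cmd=0x%02X ch=%u d1=%u d2=%u\n",
                    (unsigned)cmd, (unsigned)ch, (unsigned)d1, (unsigned)d2);
    }
}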

x264 IDR access unit with a SPS and a PPS

I am trying to encode video in H.264 that, when split with Apple's HTTP Live Streaming media file segmenter, will pass the media file validator. I am getting two errors on the split MPEG-TS file:
WARNING: Media segment contains a video track but does not contain any IDR access unit with a SPS and a PPS.
WARNING: 7 samples (17.073 %) do not have timestamps in track 257 (avc1).
After hours of research I think the "IDR" warning relates to not having keyframes in the right places in the segmented MPEG-TS file, so in my ffmpeg command I set -keyint_min 1 to force a keyframe at every frame, but this didn't work.
Although it would be great to get an answer, if anyone can shed any light on what an "IDR access unit with a SPS and a PPS" is, or what the timestamps warning means, I would be very grateful. Thanks.
A fix can be found in this thread: https://devforums.apple.com/thread/45830?tstart=15
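For background: an IDR access unit is an H.264 keyframe that fully resets the decoder, and the SPS and PPS (sequence and picture parameter sets) are the headers the decoder needs to interpret the frames that follow; the segmenter wants each segment to begin with all three so it can be played independently. Rather than -keyint_min 1, one common approach (a hedged suggestion, not necessarily the fix from the thread above; the 5 is an assumed segment length, match it to your segmenter) is to force a keyframe on each segment boundary:

ffmpeg -i input.mov -c:v libx264 -force_key_frames "expr:gte(t,n_forced*5)" -c:a copy output.ts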

Issues with iPhone Http Streaming with concatenated video files

We are seeing this when "tying" two video files together.
For example, we have an ad video that is segmented, and a content file which is also segmented.
We create a new playlist which has both the ad and the content segment information together. However, we are seeing an issue where either the ad content is truncated or the content starts having A/V sync issues.
Both the ad and the content are segmented the same way, with 5-second segmentation. However, since ads are of variable length, the resulting file may have a leftover segment, something like:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
fileSequence6.ts
#EXTINF:5,
fileSequence7.ts
#EXTINF:4,
fileSequence8.ts
#EXTINF:5,
fileSequence0.ts
#EXTINF:5,
fileSequence1.ts
#EXTINF:5,
fileSequence2.ts
#EXTINF:3,
fileSequence3.ts
Is this the proper way to play two files one after the other without rebuffering?
Should generate-variant-plist be used to build a playlist of the two files?
When you have a break in the stream to switch to a commercial, ad, or alternate video source, you want to introduce the discontinuity tag before the start of the next segment, for example:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
movie0.ts
#EXTINF:2,
movie1.ts
#EXT-X-DISCONTINUITY
#EXTINF:5,
commercial0.ts
#EXTINF:5,
commercial1.ts
#EXTINF:3,
commercial2.ts
This gets a little more complicated if you encrypt the streams, because they use progressive encryption based on the prior segment's encryption state and the sequence number, which come together to form an "Initialization Vector". If you break the stream, you have to reset the initialization vector so that encryption/decryption can continue uninterrupted. This is an involved process, so it's best to search for "Initialization Vector" in Apple's docs.
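For reference, the playlist-side reset is done by declaring the key with an explicit IV attribute after the discontinuity (for AES-128, when no IV attribute is present the segment's media sequence number is used as the IV); the URI and IV values here are purely illustrative:

#EXT-X-DISCONTINUITY
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/key2",IV=0x00000000000000000000000000000001
#EXTINF:5,
commercial0.ts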