MPEG-DASH: standard "available time" parameter in manifests for "live-dvr" streaming

Related: Terminology: "live-dvr" in mpeg-dash streaming
I'm a little confused about the MPEG-DASH standard and a use case. I would like to know if there's a way to specify, in MPEG-DASH manifests for a "live-dvr" setup, the amount of time available for seeking back in players.
That is, for example, if a "live-dvr" stream has 30 minutes of media available for replay, what would be the standard way to specify this in the manifest?
I know I can configure a given player for a desired behaviour. My question is not about players but about the manifests.
I don't fully understand yet whether this use case is formally addressed in the standard or not (see the related link). I'm guessing a relation between #timeShiftBufferDepth and #presentationTimeOffset should work, but I'm confused about how it should express "past time" rather than terms like "length" or "duration".
Thanks in advance.

Yes - you are on the right lines.
The MPEG-DASH implementation guidelines provide this formula:
The CheckTime is defined on the MPD-documented media time axis; when the client’s playback time reaches CheckTime - MPD#minBufferTime it should fetch a new MPD.
Then, the Media Segment list is further restricted by the CheckTime together with the MPD attribute MPD#timeShiftBufferDepth such that only Media Segments for which the sum of the start time of the Media Segment and the Period start time falls in the interval [NOW - MPD#timeShiftBufferDepth - #duration, min(CheckTime, NOW)] are included.
The full guidelines are available at:
http://mpeg.chiariglione.org/standards/mpeg-dash/implementation-guidelines/text-isoiec-dtr-23009-3-2nd-edition-dash
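In practical terms, the attribute you are asking about is MPD@timeShiftBufferDepth, which for a type="dynamic" presentation states the duration of the time-shifting buffer guaranteed to be available. A minimal illustrative manifest fragment for a 30-minute DVR window might look like the following (the profile, timing values and start time are placeholders, not taken from the spec):
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     type="dynamic"
     profiles="urn:mpeg:dash:profile:isoff-live:2011"
     minimumUpdatePeriod="PT10S"
     minBufferTime="PT2S"
     availabilityStartTime="2017-01-01T00:00:00Z"
     timeShiftBufferDepth="PT30M">
  <!-- Periods, AdaptationSets and Representations omitted -->
</MPD>
A compliant client derives the seekable range from timeShiftBufferDepth via the interval formula quoted above, so PT30M yields roughly a 30-minute window behind the live edge.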

Related

In the Web Audio API, how to obtain an array (e.g. a Float32Array) from a stream (e.g. a microphone stream) for several seconds

I would like to fill an array from a stream for around ten seconds (I wish to do some processing on the data). So far I can:
(a) obtain the microphone stream using mediaRecorder
(b) use an analyser and analyser.getFloatTimeDomainData(dataArray) to obtain an array, but its size is limited to only a little over half a second of data. I can also successfully output the data after processing back onto a stream and to outDestination.
(c) I have also experimented with obtaining a 'chunks' array from mediaRecorder directly, but the problem then is that I can't find any MIME type that would give me a simple array of values - i.e. an uncompressed, sample-by-sample, single-channel set of values - i.e. a longer version of 'dataArray' in (b).
I am wondering if I am missing a simple way round this problem?
Solutions I have seen tend to use step (b) and do regular polls, then reassemble a longer array - however, it seems the timing is a bit tricky...
I've also seen suggestions to use audio worklets - I might have to do this but would prefer a simpler solution!
Or again, if someone knows how to drive mediaRecorder to output the chunks array in a simple array format (Float32) of one channel, that would do the trick.
Or maybe I'm missing something simpler?
I have code showing those steps that have been successful and will upload if anyone requests.
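For what it's worth, here is a minimal sketch of one way to do this without polling: accumulate raw Float32 blocks with a ScriptProcessorNode (deprecated, as the question notes, but still widely supported) and flatten them once enough samples have arrived. The buffer size, the ten-second target and the muted output gain are illustrative choices, not from the question.
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const ctx = new AudioContext(); // may require a prior user gesture in some browsers
  const source = ctx.createMediaStreamSource(stream);
  const processor = ctx.createScriptProcessor(4096, 1, 1); // mono in/out
  const seconds = 10;
  const chunks = [];
  let collected = 0;
  processor.onaudioprocess = (e) => {
    const input = e.inputBuffer.getChannelData(0);
    chunks.push(new Float32Array(input)); // copy: the engine reuses this buffer
    collected += input.length;
    if (collected >= seconds * ctx.sampleRate) {
      processor.disconnect();
      source.disconnect();
      // Flatten the chunks into one long Float32Array.
      const all = new Float32Array(collected);
      let offset = 0;
      for (const c of chunks) { all.set(c, offset); offset += c.length; }
      console.log('collected', all.length, 'samples'); // ~10 s of mono data
    }
  };
  source.connect(processor);
  const mute = ctx.createGain();
  mute.gain.value = 0; // keeps the graph pulled by the destination without feedback
  processor.connect(mute);
  mute.connect(ctx.destination);
});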

Filtering an audio signal and then reading the meter without sending it to master

I'm trying to filter a signal and then analyse the values of the filtered signal using Tone.js / Web-Audio API.
I'm expecting to get values for the filtered signal, but I only get -Infinity, meaning that my connections between the nodes are wrong. I've made a small fiddle demonstrating this; however, in my use case I do not want to send this node to the destination of the context - I only want to analyse it, not hear it.
// osc -> filter -> gain -> meter; nothing is routed to the destination
osc.connect(filter)
filter.connect(gainNode)
gainNode.connect(meter)
console.log(meter.getLevel()) // logs -Infinity
I guess you tested the code in Chrome, because there is a problem in Chrome which causes it not to process anything until it is connected to the destination. When using Tone.js, that means you need to call .toMaster() at the end of your chain. I updated your fiddle to make it work: https://jsfiddle.net/8f7abzoL/.
In Firefox calling .toMaster() is not necessary therefore the following works in Firefox as well: https://jsfiddle.net/yrjgfdtz/.
After some digging I've found out that I need to have a ScriptProcessorNode - which is apparently no longer recommended - so I'm looking into AudioWorkletNodes.
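If the goal is only to keep Chrome processing the graph without audible output, one possible workaround is to route the chain to the destination through a zero-gain node. This is a sketch assuming the same older Tone.js API as the fiddles above (where .toMaster() exists and Tone.Meter passes audio through); the mute node is an addition, not from the original answer.
const mute = new Tone.Gain(0); // silent, but keeps the destination pulling audio
osc.connect(filter);
filter.connect(gainNode);
gainNode.connect(meter);
meter.connect(mute);
mute.toMaster();
console.log(meter.getLevel()); // should now report finite levels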

Switch monitor position in Weston

Is it possible to indicate monitor position in Weston/Wayland?
I have two monitors and have been testing the Weston compositor, but I have been unable to indicate which monitor should be the main one (or which one should show the "left part" of the screen).
Checking the weston.ini docs (http://manpages.ubuntu.com/manpages/xenial/en/man5/weston.ini.5.html) I found info about setting resolution, scaling and transform/rotation, but nothing about the position of the monitors.
I was interested in the same thing some weeks ago and sent the question to the Wayland IRC channel.
You can have a look at:
https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=wayland&highlight_names=&date=2017-12-19
https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=wayland&highlight_names=&date=2017-12-20
Here is the relevant part:
22:31 maggu2810: Is there any change to change the display / output position in weston? I am using three outputs but don't know how to configure which one if left / right of the other one.
22:33 maggu2810: ... chance to change ...
22:35 maggu2810: the output sections in weston.ini currently contains the name, the mode and the scale. I didn't find any "position" option or anything that looks like an option to modify the "order" of the displays.
07:07 pq: maggu2810, yes, not implemented yet, there were indeed some WIP patches in 2016
Perhaps you will be able to post a feature request on their mailing list. I don't think they will look for feature requests on SO ;)
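For reference, this is roughly what an [output] section in weston.ini can express today, matching the IRC discussion above (the connector name is illustrative):
[output]
name=HDMI-A-1
mode=1920x1080
scale=1
transform=normal
# no "position" or ordering key exists as of the discussion above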

Controlling light using midi inputs

I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips hue lighting which I have hooked up to Max/MSP and now I am wanting to trigger an increase in brightness/saturation on the input of a note from a Midi instrument. Does anyone have any ideas how this might be accomplished?
I have built this.
I used the shell object, and then fed an array of parameters into it via a JavaScript file using the Hue API. There is a lag time of about 1/6 of a second between commands.
Javascript file:
inlets = 1;
outlets = 1;

// Defaults for the Hue REST call (bridge IP and username are examples)
var bridge = "192.168.0.100"; // IP address of the Hue bridge
var hash = "newdeveloper";    // authorised API username
var bulb = 1;                 // light number
var brt = 200;                // brightness (0-254)
var satn = 250;               // saturation (0-254)
var hcolor = 10000;           // hue (0-65535)

// Called when a list arrives at the [js] inlet:
// bulb, hue, brightness, saturation, transition time
function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT', 'http://' + bridge + '/api/' + hash + '/lights/' + bulb + '/state',
        '"{\\\"on\\\":true,\\\"hue\\\":' + hcolor + ', \\\"bri\\\":' + brt + ',\\\"sat\\\":' + satn + ',\\\"transitiontime\\\":' + tran + '}"');
}

// Emits a curl command line for the [shell] object to execute
function execute($method, $url, $message) {
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
To control Philips Hue you need to issue calls to a RESTful HTTP-based API, like so: http://www.developers.meethue.com/documentation/core-concepts, using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/, scroll down to my post from APRIL 11, 2014 | 3:42 AM.
How to change the bri/sat of your lights is explained at the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, and a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 lightstate commands per second. I would recommend having a 100ms gap between each one, to prevent flooding the bridge (and losing commands).
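A minimal sketch of that pacing idea outside Max (assuming Node.js 18+ with a global fetch; the bridge IP and username reuse the illustrative values from the answer above):
// Queue Hue commands and drain one every 100 ms (~10/second),
// per the bridge's practical rate limit mentioned above.
const queue = [];

setInterval(() => {
  const cmd = queue.shift();
  if (cmd) {
    fetch(cmd.url, { method: 'PUT', body: JSON.stringify(cmd.body) });
  }
}, 100);

function setLightState(bulb, state) {
  queue.push({
    url: 'http://192.168.0.100/api/newdeveloper/lights/' + bulb + '/state',
    body: state,
  });
}

// e.g. full-ish brightness and saturation with a short transition
setLightState(1, { on: true, hue: 10000, bri: 200, sat: 250, transitiontime: 1 });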
Are you interested in finding out the details of how to map this data from a MIDI input to the Philips Hue lights within Max, or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a [js] object), you could, for example, scale the MIDI messages you want to use with the midiin and borax objects and map them to the outputs you want using the scale object; a sketch of that mapping follows below. Karlheinz Essl's RTC library is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
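As a sketch of that scaling step (the function name is mine, purely illustrative), this is the arithmetic Max's scale object would perform when mapping MIDI velocity onto the Hue brightness range:
// Map MIDI velocity (0-127) onto the Hue API brightness range (0-254)
function velocityToBri(velocity) {
    return Math.round((velocity / 127) * 254);
}
// velocityToBri(0) -> 0, velocityToBri(64) -> 128, velocityToBri(127) -> 254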
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!

File types used for MISB KLV Encoding

I'm curious as to what file types are used for Motion Imagery Standards Board (MISB) KLV (Key-Length-Value) metadata. I've read the documentation at the MISB site, which is quite huge. It indicates, to my understanding, that MPEG-2 is usually used, so I tried to get an idea as to what to look for in file extensions to recognise files that have the capability for embedding KLV metadata.
My question is: if a file has an extension like these - *.TS, *.mpg - does that indicate potential KLV embedding? Are there any more types? Can an active video stream from a camera contain KLV?
Any response or elaboration is appreciated. Thanks ahead!
As the MISB docs state: only MPEG video streams for full motion video, and NITF images for high resolution wide area coverage.
Mostly MPEG-TS streams, AFAIK.
You can scan your files for strings like "KLVA" or some of the keys defined in the standard to quickly detect such files with high confidence.
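A quick sketch of that scanning idea in Node.js (the file name is a placeholder, and reading the whole file at once assumes modestly sized captures):
const fs = require('fs');

// Flag files that contain the ASCII bytes "KLVA" anywhere in their payload.
function looksLikeKlv(path) {
  const data = fs.readFileSync(path);
  return data.includes(Buffer.from('KLVA'));
}

console.log(looksLikeKlv('capture.ts'));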
By scanning the titles of documents on the MISB site, one can glean that KLV can be encoded in AES3 Serial Digital Audio Streams, SMPTE 291 Ancillary Data Packets, and SDP (Session Description Protocol) [streams]. Reading a quick bit of the guidance PDF also reveals that:
...MPEG-2 Transport Streams are a common multiplexing device for multiple data streams...
...but this does not limit KLV to only the MPEG-2 TS.
As a direct answer to your question: if the KLV stream is contained within an MPEG-2 Transport Stream, then yes - either *.ts or *.mpg files would qualify. (As would any other file capable of storing/containing an MPEG-2 Transport Stream.)
If you have further questions, reach out to me.