On the Everyplay Unity3D guide page these three lines are given as an example of using metadata with Everyplay.
Everyplay.SharedInstance.SetMetadata("level", levelNumber);
Everyplay.SharedInstance.SetMetadata("level_name", levelName);
Everyplay.SharedInstance.SetMetadata("score", score)
Are there any other metadata keys available other than those three? Can you define your own metadata for your game instead of just using predefined keys? I could not find any more documentation on this beyond the example above.
You can and should provide as much metadata about the video as you can, as several features that use this data are already in the works. The metadata supplied with the video has several intended (future) purposes. Currently only score and level_name are displayed with the videos on Everyplay (for example: http://everyplay.com/videos/8106).
In the near future the developer will be able to configure what metadata to show with the videos: a racing game could show time, circuit, and laps, while an FPS game might show kills and deaths. We are also developing features in our API that allow developers to use the metadata to query existing videos, for example fetching a list of videos from level 1 in the past 10 days sorted by "score", and so on.
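For example, a racing game could attach its own keys with the same calls shown in the question (the key names and variables below are purely illustrative):

Everyplay.SharedInstance.SetMetadata("time", lapTime);
Everyplay.SharedInstance.SetMetadata("circuit", circuitName);
Everyplay.SharedInstance.SetMetadata("laps", lapCount);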
For a quick example, here is the metadata that Stair Dismount provided with the video in the link above:
metadata: {
somersaults: 1,
level: 60,
decapitation: false,
bifurcation: false,
push_force_z: -3957.182,
push_force_y: 1773.326,
distance: -1,
push_pos_z: 8.371746,
push_force_x: -1675.732,
push_pos_y: 24.18944,
push_body_name: "LeftForearm",
ragdoll_custom_face: true,
push_pos_x: -0.6025434,
push_body_id: 2189472344,
leaderboard_id: 1208019,
score: 3802645,
level_name: "Revolting Doors",
ragdoll_breakability: false,
distance_leaderboard_id: 0,
ragdoll_name: "Mr. Dismount",
ragdoll: 0
}
Related: Terminology: "live-dvr" in mpeg-dash streaming
I'm a little bit confused about the MPEG-DASH standard and a use case. I would like to know whether there is a way to specify, in an MPEG-DASH manifest for a "live-dvr" setup, how much time is available for seeking back in players.
That is, for example, if a "live-dvr" stream has 30 minutes of media available for replay, what would be the standard way to specify this in the manifest?
I know I can configure a given player for a desired behaviour. My question is not about players but about the manifests.
I don't fully understand yet whether this use case is formally addressed in the standard or not (see the related link). I'm guessing a relation between #timeShiftBufferDepth and #presentationTimeOffset should work, but I'm confused about how it should manage "past time" instead of terms like "length" or "duration".
Thanks in advance.
Yes - you are on the right lines.
The MPEG-DASH implementation guidelines provide this guidance:
The CheckTime is defined on the MPD-documented media time axis; when the client’s playback time reaches CheckTime - MPD#minBufferTime it should fetch a new MPD.
Then, the Media Segment list is further restricted by the CheckTime together with the MPD attribute MPD#timeShiftBufferDepth such that only Media Segments for which the sum of the start time of the Media Segment and the Period start time falls in the interval [NOW- MPD#timeShiftBufferDepth - #duration, min(CheckTime, NOW)] are included.
The full guidelines are available at:
http://mpeg.chiariglione.org/standards/mpeg-dash/implementation-guidelines/text-isoiec-dtr-23009-3-2nd-edition-dash
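For the 30-minute example in the question, the DVR window is advertised on the MPD element itself via timeShiftBufferDepth; an illustrative fragment (all other attribute values here are placeholders):

<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     type="dynamic"
     profiles="urn:mpeg:dash:profile:isoff-live:2011"
     availabilityStartTime="2014-01-01T00:00:00Z"
     minimumUpdatePeriod="PT10S"
     minBufferTime="PT2S"
     timeShiftBufferDepth="PT30M">
  ...
</MPD>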
I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips Hue lighting, which I have hooked up to Max/MSP, and I now want to trigger an increase in brightness/saturation when a note comes in from a MIDI instrument. Does anyone have any ideas about how this might be accomplished?
I have built this.
I used the shell object, and then fed an array of parameters into it via a JavaScript file that calls the Hue API. There is a lag time of 1/6 of a second between commands.
JavaScript file:
inlets = 1;
outlets = 1;

// Hue bridge connection details
var bridge = "192.168.0.100"; // IP address of your Hue bridge
var hash = "newdeveloper";    // API username registered with the bridge

// default light state
var bulb = 1;
var brt = 200;
var satn = 250;
var hcolor = 10000;

// expects a Max list: bulb, hue, brightness, saturation, transition time
function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT', 'http://'+bridge+'/api/'+hash+'/lights/'+bulb+'/state', '"{\\\"on\\\":true,\\\"hue\\\":'+hcolor+', \\\"bri\\\":'+brt+',\\\"sat\\\":'+satn+',\\\"transitiontime\\\":'+tran+'}"');
}

// builds a curl command and sends it out to the [shell] object
function execute($method, $url, $message) {
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
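For example, sending the Max list message 1 10000 200 250 10 to this js object makes it output roughly the following command to the [shell] object, which sets bulb 1 to the given hue/brightness/saturation over a one-second transition (Hue transition times are in 100 ms steps):

curl --request PUT --data "{\"on\":true,\"hue\":10000, \"bri\":200,\"sat\":250,\"transitiontime\":10}" http://192.168.0.100/api/newdeveloper/lights/1/state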
To control Philips Hue you need to issue calls to a RESTful HTTP-based API (see http://www.developers.meethue.com/documentation/core-concepts), using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/, scroll down to my post from APRIL 11, 2014 | 3:42 AM.
How to change the bri/sat of your lights is explained at the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, as well as a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 lightstate commands per second. I would recommend having a 100ms gap between each one, to prevent flooding the bridge (and losing commands).
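If notes can arrive faster than that, one way to enforce the gap is a small queue inside a js object. A rough sketch (the Task interval and message format are assumptions; pass the curl argument lists through it on their way to [shell]):

// rough sketch: buffer outgoing commands and release one every 100 ms
// so the Hue bridge is not flooded
inlets = 1;
outlets = 1;

var queue = [];
var tick = new Task(flush, this);
tick.interval = 100;
tick.repeat();

// store whatever message arrives for later delivery
function anything() {
    queue.push(arrayfromargs(messagename, arguments));
}

// send the oldest queued command, if any
function flush() {
    if (queue.length > 0) {
        outlet(0, queue.shift());
    }
}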
Are you interested in finding out how to map this data from a MIDI input to the Philips Hue lights within Max, or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a js object), you could, for example, scale the MIDI messages you want to use with the midiin and borax objects and map them to the outputs you want using the scale object. Karlheinz Essl's RTC library is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
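As a rough sketch of that mapping (the fixed hue/saturation values are arbitrary, and Hue brightness tops out at 254), a small js object sitting between [notein]/[borax] and Tommy b's script could look like this:

inlets = 1;
outlets = 1;

// fixed light settings for this example
var bulb = 1;
var hue = 10000;
var sat = 250;

// expects "pitch velocity" pairs (e.g. from [notein]); scales velocity (0-127)
// to Hue brightness (0-254) and builds the list Tommy b's script expects
function list(pitch, velocity) {
    var bri = Math.round(velocity / 127 * 254);
    outlet(0, bulb, hue, bri, sat, 1); // bulb, hue, bri, sat, transition time
}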
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!
So I'm trying to develop a music player for my office with a RESTful API. In a nutshell, the API will be able to download music from YouTube and load it into the current playlist of a running MPD instance. I also want to be able to control playback/volume with the API.
Here's kind of what I was thinking so far:
Endpoint: /queue
Methods:
GET: Gets the current MPD playlist
POST: Accepts JSON with these arguments:
source-type: specifies the type of the source of the music (usually YouTube, but I might want to expand later to support pulling from SoundCloud, etc.)
source-desc: used in conjunction with source-type; e.g., if source-type were youtube, this would be a YouTube search query
It would use these arguments to go out and find the song you want and put it in the queue
DELETE: Would clear the queue
Endpoint: /playbackcontrol
Methods:
GET: Returns volume, whether playing, paused, or stopped, etc
POST: Accepts JSON with these arguments:
operation: describes the operation you want (e.g., next, previous, volume adjust)
optional_value: value for operations that need a value (like volume)
So that's basically what I'm thinking right now. I know this is really high level, I just wanted to get some input to see if I'm on the right track. Does this look like an acceptable way to implement such an API?
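For illustration, the POST to /queue that I have in mind would look something like this (the search query is just an example):

POST /queue
Content-Type: application/json

{
  "source-type": "youtube",
  "source-desc": "daft punk get lucky"
}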
DELETE to clear the queue is not cool. PUT an empty queue representation instead. This will also come in handy later when you want to be able to rearrange items in the queue, remove them one by one etc.—you can GET the current queue, apply changes and PUT it back.
Volume is clearly better modeled as a separate /status/volume resource with GET and PUT. Maybe PATCH if you absolutely need distinct “volume up” and “volume down” operations (that is, if your client will not be keeping track of current volume).
Ditto for the playing/paused/stopped status: GET/PUT /status/playback.
To seed the client with the current status, make GET /status respond with a summary of what’s going on: current track, volume, playing/paused.
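For example, a GET /status response could look something like this (the field names are only a suggestion):

GET /status

{
  "playback": "playing",
  "volume": 80,
  "current-track": {
    "source-type": "youtube",
    "source-desc": "daft punk get lucky"
  }
}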
I would use the following 2 main modules:
playlist/{trackId}
  source
  index
player
  playing
  track
  time
  volume
Playlist:
adding a track: POST playlist {source: ...}
removing a track: DELETE playlist/{id}
ordering tracks: PUT playlist/{id}/index 123
get track list: GET playlist
Player:
loading a track: PUT player/track {id: 123}
rewind a track: PUT player/time 0
stop the player: PUT player/playing false
start the player: PUT player/playing true
adjust volume: PUT player/volume .95
get current status: GET player
Of course you should use the proper RDF vocab for link relations and for describing your data. You can probably find it here.
I'm back with one more question related to bass. I already posted this question, How Can we control bass of music in iPhone, but it did not get as much attention from you people as it should have. Since then I have done some more searching and have read up on Core Audio. I found some sample code that I want to share with you; here is the link to download it: iPhoneMixerEqGraphTest. Have a look at it: what I have seen in this code is that the developer uses the preset equalizer that Apple provides via the iPod unit. Let's look at a code snippet too:
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
What kAudioUnitSubType_AUiPodEQ does is fetch the preset values from the iPod's equalizer and return them to us as an array, which we can use in a PickerView/TableView to set a category like Bass, Rock, Dance, etc. That doesn't help me, as it only returns the names of the equalizer types like Bass, Rock, Dance, etc., and I want to implement bass only and control it with a UISlider.
To implement bass on a slider I need values, so that I can set a minimum and a maximum and change the bass as the slider moves.
After getting all this I started reading the Audio Unit framework classes in Core Audio, and after that I started searching for bass control, which led me to the low-shelf filter.
So now I need to implement kAudioUnitSubType_LowShelfFilter. But I don't know how to use this subtype in my code so that I can control the bass as described in the documentation; even Apple has not written up how we can use it. kAudioUnitSubType_AUiPodEQ returned us an array, but kAudioUnitSubType_LowShelfFilter does not return any array. With kAudioUnitSubType_AUiPodEQ we could use the equalizer types from that array, but how can we use kAudioUnitSubType_LowShelfFilter? Can anybody help me with this in any way? It would be highly appreciated.
Thanks.
Update
Although it's declared in the iOS headers, the Low Shelf AU is not actually available on iOS.
The parameters of the Low Shelf are different from the iPod EQ.
Parameters are declared and documented in AudioUnit/AudioUnitParameters.h:
// Parameters for the AULowShelfFilter unit
enum {
// Global, Hz, 10->200, 80
kAULowShelfParam_CutoffFrequency = 0,
// Global, dB, -40->40, 0
kAULowShelfParam_Gain = 1
};
So after your low shelf AU is created, configure its parameters using AudioUnitSetParameter.
Some initial parameter values you can try would be 120 Hz (kAULowShelfParam_CutoffFrequency) and +6 dB (kAULowShelfParam_Gain) -- assuming your system reproduces bass well, your low frequency content should be twice as loud.
Can you tell me how I can use kAULowShelfParam_CutoffFrequency to change the frequency?
If everything is configured right, this should be all that is needed:
assert(lowShelfAU);
const float frequencyInHz = 120.0f;
OSStatus result = AudioUnitSetParameter(lowShelfAU,
kAULowShelfParam_CutoffFrequency,
kAudioUnitScope_Global,
0,
frequencyInHz,
0);
if (noErr != result) {
assert(0 && "error!");
return ...;
}
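The gain parameter is set the same way. If you do end up with a working low-shelf unit (or a comparable EQ unit), driving it from a slider is just a matter of forwarding the slider's value. A rough sketch, assuming the slider is configured for the documented -40 to +40 dB range:

// sketch: call this from the slider's value-changed handler with the slider's value
static void SetBassGain(AudioUnit lowShelfAU, AudioUnitParameterValue gainInDecibels)
{
    OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                            kAULowShelfParam_Gain,
                                            kAudioUnitScope_Global,
                                            0,
                                            gainInDecibels,
                                            0);
    assert(noErr == result);
}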
In my Sencha Touch app I need to display a list of over 600 objects per selected customer.
Imagine one store holds some customers, displayed in a list. Each of them has some "has-many"-related sub-stores, one holding about 600 objects (with URLs, title, description...). This sub-info has to be listed when you select one customer from the first list.
The problem is that on iOS you have to wait several seconds before the list is shown, and it is very slow to scroll and use. It seems to slow down the whole app.
Are there any other options for displaying long lists, maybe pagination or something?
Thanks!
Edit: I found this article and will test these thoughts soon: Link
Edit 2: here we go: https://github.com/Lioarlan/UxBufList-Sench-Touch-Extension
You can paginate your list by adding a pageSize param to your store and the listpaging plugin to your list. By setting the autoPaging option, you can control whether the data is loaded automatically or on user click. Below is an example:
// store
Ext.regStore('BarginsStore', {
model: 'BarginModel',
autoLoad: true,
pageSize: 6,
clearOnPageLoad: false,
sorters: 'category',
getGroupString: function(record) {
return record.get('category');
}
});
// list
this.list = new Ext.List({
store: 'BarginsStore',
plugins: [{
ptype: 'listpaging',
autoPaging: true
}],
singleSelection: true,
emptyText: '<p class="no-bargins">No bargins found matching this criteria.</p>',
itemTpl: '<div class="bargin-record">{name}</div>'
});
Are there any other options for displaying long lists, maybe pagination or something?
Pagination. Smartphones have far more limited CPU and RAM resources than a desktop PC. A six hundred row table with several elements is not going to display well on the devices on the market now. Hell, it'll probably slow down desktop browsers. Paginate it.