What is a RESTful way to implement this proposed API?

So I'm trying to develop a music player for my office with a RESTful API. In a nutshell, the API will be able to download music from YouTube and load it into the current playlist of a running MPD instance. I also want to be able to control playback and volume with the API.
Here's kind of what I was thinking so far:
Endpoint: /queue
Methods:
GET: Gets the current MPD playlist
POST: Accepts JSON with these arguments:
source-type: specifies the type of the source of the music (usually YouTube, but I might want to expand later to support pulling from SoundCloud, etc.)
source-desc: used in conjunction with source-type; e.g., if source-type were YouTube, this would be a YouTube search query
It would use these arguments to go out and find the song you want and put it in the queue
DELETE: Would clear the queue
Endpoint: /playbackcontrol
Methods:
GET: Returns volume, whether playing, paused, or stopped, etc
POST: Accepts JSON with these arguments:
operation: describes the operation you want (e.g., next, previous, volume adjust)
optional_value: value for operations that need a value (like volume)
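For example, the request bodies I have in mind would look something like this (field names as described above, values purely illustrative):

POST /queue
{
  "source-type": "youtube",
  "source-desc": "some search query"
}

POST /playbackcontrol
{
  "operation": "volume",
  "optional_value": 75
}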
So that's basically what I'm thinking right now. I know this is really high level; I just wanted to get some input to see if I'm on the right track. Does this look like an acceptable way to implement such an API?

DELETE to clear the queue is not cool. PUT an empty queue representation instead. This will also come in handy later when you want to be able to rearrange items in the queue, remove them one by one etc.—you can GET the current queue, apply changes and PUT it back.
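For example (the exact shape of the queue representation is up to you; this is only to illustrate the flow):

GET /queue
200 OK
[
  {"id": 1, "source-desc": "first track"},
  {"id": 2, "source-desc": "second track"}
]

PUT /queue        (clear the queue)
[]

PUT /queue        (keep only the second track, rearranged/trimmed client-side)
[
  {"id": 2, "source-desc": "second track"}
]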
Volume is clearly better modeled as a separate /status/volume resource with GET and PUT. Maybe PATCH if you absolutely need distinct “volume up” and “volume down” operations (that is, if your client will not be keeping track of current volume).
Ditto for the playing/paused/stopped status: GET/PUT /status/playback.
To seed the client with the current status, make GET /status respond with a summary of what’s going on: current track, volume, playing/paused.
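Something along these lines, with purely illustrative field names:

GET /status
200 OK
{
  "track": "currently playing track",
  "playback": "playing",
  "volume": 80
}

PUT /status/volume
65

PUT /status/playback
"paused"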

I would use the following two main modules:

playlist/{trackId}
    source
    index
player
    playing
    track
    time
    volume
Playlist:
adding a track: POST playlist {source: ...}
removing a track: DELETE playlist/{id}
ordering tracks: PUT playlist/{id}/index 123
get track list: GET playlist
Player:
loading a track: PUT player/track {id: 123}
rewind a track: PUT player/time 0
stop the player: PUT player/playing false
start the player: PUT player/playing true
adjust volume: PUT player/volume .95
get current status: GET player
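For illustration, a GET player response could look something like this (field names and values are just examples):

GET player
200 OK
{
  "playing": true,
  "track": {"id": 123},
  "time": 42,
  "volume": 0.95
}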
Of course, you should use a proper RDF vocabulary for link relations and for describing your data. You can probably find one here.

Related

In the Web Audio API, how to obtain an array (e.g. a Float32Array) from a stream (e.g. a microphone stream) for several seconds

I would like to fill an array from a stream for around ten seconds. (I wish to do some processing on the data.) So far I can:
(a) obtain the microphone stream using MediaRecorder
(b) use an analyser and analyser.getFloatTimeDomainData(dataArray) to obtain an array, but it is size-limited to only a little over half a second of data. I can also successfully output the data after processing back onto a stream and to outDestination.
(c) I have also experimented with obtaining a 'chunks' array from MediaRecorder directly, but the problem then is that I can't find any MIME type that would give me a simple array of values, i.e. an uncompressed, sample-by-sample, single-channel set of values; in other words, a longer version of 'dataArray' from (b).
I am wondering if I am missing a simple way round this problem?
Solutions I have seen tend to use step (b) and do regular polls, then reassemble a longer array; however, it seems the timing is a bit tricky.
I've also seen suggestions to use audio worklets; I might have to do this but would prefer a simpler solution!
Or again, if someone knows how to drive MediaRecorder to output the chunks array as a simple Float32 array of one channel, that would do the trick.
Or maybe I'm missing something simpler?
I have code showing those steps that have been successful and will upload if anyone requests.

Is it possible to change MediaRecorder's stream?

navigator.mediaDevices.getUserMedia(constraints).then(stream => {
  const recorder = new MediaRecorder(stream)
  recorder.start()
  recorder.pause()
  // get a new stream: getUserMedia(constraints_new)
  // how to update the recorder's stream here?
  recorder.resume()
})
Is it possible? I've tried to create a MediaStream and use the addTrack and removeTrack methods to change the stream's tracks, but with no success (the recorder stops when I try to resume it with the updated stream).
Any ideas?
The short answer is no, it's not possible. The MediaStream recording spec explicitly describes this behavior: https://w3c.github.io/mediacapture-record/#dom-mediarecorder-start. It's bullet point 15.3 of that algorithm which says "If at any point, a track is added to or removed from stream’s track set, the UA MUST immediately stop gathering data ...".
But in case you only want to record audio you can probably use an AudioContext to proxy your streams. Create a MediaStreamAudioDestinationNode and use the stream that it provides for recording. Then you can feed your streams with MediaStreamAudioSourceNodes and/or MediaStreamTrackAudioSourceNodes into the audio graph and mix them in any way you desire.
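A rough sketch of that proxy setup (untested; micStream and otherStream stand in for whatever streams you want to record):

const audioContext = new AudioContext();

// The destination node exposes a single, stable MediaStream for the recorder.
const destination = audioContext.createMediaStreamDestination();
const recorder = new MediaRecorder(destination.stream);

// Feed the first input into the graph and start recording.
let source = audioContext.createMediaStreamSource(micStream);
source.connect(destination);
recorder.start();

// Later, swap inputs without ever touching the recorder's stream.
source.disconnect();
source = audioContext.createMediaStreamSource(otherStream);
source.connect(destination);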
Last but not least there are currently plans to add the functionality you are looking for to the spec. Maybe you just have to wait a bit. Or maybe a bit longer depending on the browser you are using. :-)
https://github.com/w3c/mediacapture-record/issues/167
https://github.com/w3c/mediacapture-record/pull/186

MPEG-DASH: standard "available time" parameter in manifests for "live-dvr"

Related: Terminology: "live-dvr" in mpeg-dash streaming
I'm a little bit confused about the MPEG-DASH standard and a use case. I would like to know if there's a way to specify, in MPEG-DASH manifests for a "live-dvr" setup, the amount of time available for seeking back in players.
That is, for example, if a "live-dvr" stream has 30 minutes of media available for replay, what would be a standard way to specify this in the manifest?
I know I can configure a given player for a desired behaviour. My question is not about players but about the manifests.
I don't fully understand yet whether this use case is formally addressed in the standard or not (see the related link). I'm guessing a relation between #timeShiftBufferDepth and #presentationTimeOffset should work, but I'm confused about how it should express "past time" rather than terms like "length" or "duration".
Thanks in advance.
Yes - you are on the right lines.
The MPEG-DASH implementation guidelines provide this formula:
The CheckTime is defined on the MPD-documented media time axis; when the client’s playback time reaches CheckTime - MPD#minBufferTime it should fetch a new MPD.
Then, the Media Segment list is further restricted by the CheckTime together with the MPD attribute MPD#timeShiftBufferDepth such that only Media Segments for which the sum of the start time of the Media Segment and the Period start time falls in the interval [NOW - MPD#timeShiftBufferDepth - #duration, min(CheckTime, NOW)] are included.
The full guidelines are available at:
http://mpeg.chiariglione.org/standards/mpeg-dash/implementation-guidelines/text-isoiec-dtr-23009-3-2nd-edition-dash
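In practice, the DVR window itself is advertised with the timeShiftBufferDepth attribute on the MPD element. A hypothetical manifest fragment for the 30-minute example (all other values purely illustrative) might look like:

<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     type="dynamic"
     profiles="urn:mpeg:dash:profile:isoff-live:2011"
     minBufferTime="PT4S"
     minimumUpdatePeriod="PT10S"
     timeShiftBufferDepth="PT30M">
  <!-- Periods, AdaptationSets and Representations omitted -->
</MPD>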

ResearchKit: How to get pedometer data (step count specifically) from ORKOrderedTask.fitnessCheckTaskWithIdentifier result

I added the ORKOrderedTask.fitnessCheckTaskWithIdentifier task and it renders fine in the UI. But unlike other, simpler tasks containing scale/choice/date questions, I was not able to find the exact way to read the sensor data collected via ORKOrderedTask.fitnessCheckTaskWithIdentifier.
I have used the following:
private var walkingTask: ORKTask {
    return ORKOrderedTask.fitnessCheckTaskWithIdentifier("shortWalkTask", intendedUseDescription: "Take a short walk", walkDuration: 10, restDuration: 5, options: nil)
}
Upon task completion, the task view controller delegate method below is hit.
//ORKTaskViewControllerDelegate
func taskViewController(taskViewController: ORKTaskViewController, didFinishWithReason reason: ORKTaskViewControllerFinishReason, error: NSError?)
Is there a way to drill down into the result object contained in the task view controller (taskViewController.result) to get the step count? Or will I have to go through HealthKit or something and then query the required observation? I'd appreciate input from anyone who has used this task before on how to fetch the pedometer data (step count specifically) for the duration the task was active.
I'm using Swift.
The step count is not reflected in the result objects per se. Instead, one of the child ORKFileResult objects, generated from the pedometer recorder, will contain the pedometer records queried from CoreMotion, serialized to JSON.
However, exposing the step count on a result object sounds like a useful extension/improvement, and we should see if it generalizes to other recorders too. Please open an issue on GitHub and we will see what we can do!
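Not an official recipe, but a rough Swift sketch of drilling into the task result to locate those file results (exact casts and optionality may differ with your ResearchKit/Swift version):

// In the didFinishWithReason delegate method, walk the result tree looking for file results.
let taskResult = taskViewController.result
for stepResult in taskResult.results as? [ORKStepResult] ?? [] {
    for result in stepResult.results ?? [] {
        if let fileResult = result as? ORKFileResult {
            // The pedometer recorder serializes its CoreMotion samples to JSON in this file;
            // parse it to extract the step counts for the walk/rest periods.
            print("Recorder output: \(fileResult.fileURL)")
        }
    }
}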

Adding videos to Unity3d

We are developing a game about driving awareness.
The problem is that we need to show the user videos of any mistakes he made once the drive is complete. For example, if he makes two mistakes, we need to show two videos at the end of the game.
Can you help with this? I don't have any idea.
#solus already gave you an answer regarding "how to play a (pre-registered) video from your application". However, from what I've understood, you are asking about saving (and visualizing) a kind of replay of the "wrong" actions performed by the player. This is not an easy task, and I don't think you can receive an exhaustive answer, only some advice. I will try to give you my own.
First of all, you should "capture" the position of the player's car at various points in time.
As an example, you could read the player's car position every 0.2 seconds and save it into a structure (for example, a List); see the sketch after these steps.
Then, you would implement some logic to detect the "wrong" actions (crashes, speeding... they obviously depend on your game) and save a reference to the pair ["mistake", "relevant portion of the list containing the car's positions for that event"].
Now you have everything you need to recreate a replay of the action: that is, making the car "drive alone" by reading the previously saved positions (which will act as waypoints for generating the route).
Obviously, you also have to deal with the camera's position and rotation: just leave it attached to the car (as in the normal in-game action), or move it over time to catch the more interesting angles, as AAA racing games do (this will make the overall task more difficult, of course).
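Just to make the capture step concrete, here is a minimal, hypothetical sketch (class and field names are made up; a real replay system also needs timestamps, the mistake markers, and the playback logic):

using System.Collections.Generic;
using UnityEngine;

// Minimal sketch: sample the car's pose at a fixed interval so it can be replayed later.
public class ReplayRecorder : MonoBehaviour
{
    public Transform car;                // assign the player's car in the Inspector
    public float sampleInterval = 0.2f;  // seconds between samples, as suggested above

    private readonly List<Vector3> positions = new List<Vector3>();
    private readonly List<Quaternion> rotations = new List<Quaternion>();
    private float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer >= sampleInterval)
        {
            timer = 0f;
            positions.Add(car.position);
            rotations.Add(car.rotation);
        }
    }
}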
Unity will import a video as a MovieTexture. It will be converted to the native Theora/Vorbis (Ogg) format. (Use ffmpeg2theora if import fails.)
Simply apply it as you would any texture. You could use a plane or a flat cube. You should adjust its localScale to the aspect ratio of your video (movie.width/(float)movie.height).
Put the attached audioclip in an AudioSource. Then call movie.Play() and audio.Play().
You could also load the video from a local file path or the web (in the correct format).
// Load the movie (Ogg Theora) from a local path or URL.
var movie = new WWW(@"file://C:\videos\myvideo.ogv").movie;
...
if (movie.isReadyToPlay)
{
    // Show the video on this object's material and play its audio track.
    renderer.material.mainTexture = movie;
    audio.clip = movie.audioClip;
    movie.Play();
    audio.Play();
}
Use MovieTexture, but do not forget to install QuickTime; you need it to import movie clips (.mov files, for example).