Accessing Web Audio Context in React VR - web-audio-api

I'm seeking input from people who have worked with the Web Audio API in react-vr. React VR already has very cool components for placing sounds in your scene; however, I need to go down one step and access the audio buffer, which is easily achieved with the AudioContext provided by Web Audio.
In my client.js init() I can find the audio context in the vr instance:
function init(bundle, parent, options) {
  const vr = new VRInstance(bundle, 'WelcomeToVR', parent, {
    ...options,
  });
  audioCtx = vr.rootView.context.AudioModule.audioContext._context; // HERE
  vr.render = function() {};
  vr.start();
  return vr;
}
I am struggling to figure out how to expose the audio context. Its scope ends once I exit the init() function. Is there another way to access the audio context in index.vr.js?

I'm having the same issue... I can set the audio context:
audioOsc._setAudioContext(vr.rootView.context.AudioModule.audioContext._context);
And inside of client.js it console.logs just fine. BUT...
inside my AudioOsc module this:
AudioOsc.getAudioContext(this.getAc, this.getAc);
where this.getAc is a callback (as per the React VR docs), logs out an empty Object.
HOWEVER...
inside my AudioOsc module I can create an oscillator and connect it to a destination and it hums along just fine. So it seems to me there is actually no way to pass the context from a Native Module into ReactVR and back again... They eat the object somewhere along the way.
If things change I'd love to know! Otherwise, I think we may have to create a crap ton of audio modules on our own.
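In case it helps with the narrower scoping complaint from the original question (audioCtx disappearing once init() returns), the variable can at least be hoisted to module scope in client.js. This is only a sketch: it relies on the same undocumented _context internal, and it still does not make the context reachable from index.vr.js; it merely keeps a reference alive for other code running in client.js:
// Sketch only: keep a module-scope reference so the context outlives init().
let audioCtx = null;

function init(bundle, parent, options) {
  const vr = new VRInstance(bundle, 'WelcomeToVR', parent, {
    ...options,
  });
  // _context is an undocumented internal and may change between releases
  audioCtx = vr.rootView.context.AudioModule.audioContext._context;
  vr.render = function() {};
  vr.start();
  return vr;
}

// Other client.js code (e.g. a native module registered here) can read this
// after init() has run.
function getAudioContext() {
  return audioCtx;
}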

Related

Flutter - Audio Player

Hello, I am new to Flutter. I am trying to play audio files from a URL or the network, but I'm not sure which package to use; I searched Google and it showed many options. If possible, can you show an example of how to create an audio player like the image below? Kindly help... Thanks in advance!!!
An answer that shows how to do everything in your screenshot (audio code, UI code, and how to extract audio wave data) would probably not fit in a StackOverflow answer, but I will give you some hopefully useful pointers.
Using the just_audio plugin you can load audio from these kinds of URLs:
https://example.com/track.mp3 (any web URL)
file:///path/to/file.mp3 (any file URL with permissions)
asset:///path/to/asset.mp3 (any Flutter asset)
You will probably want a playlist, and here is how to define one:
final playlist = ConcatenatingAudioSource(children: [
  AudioSource.uri(Uri.parse('https://example.com/track1.mp3')),
  AudioSource.uri(Uri.parse('https://example.com/track2.mp3')),
  AudioSource.uri(Uri.parse('https://example.com/track3.mp3')),
  AudioSource.uri(Uri.parse('https://example.com/track4.mp3')),
  AudioSource.uri(Uri.parse('https://example.com/track5.mp3')),
]);
Now to play that, you create a player:
final player = AudioPlayer();
Set the playlist:
await player.setAudioSource(playlist);
And then as the user clicks on things, you can perform these operations:
player.play();
player.pause();
player.seekToNext();
player.seekToPrevious();
player.seek(Duration(milliseconds: 48512), index: 3);
player.dispose(); // to release resources once finished
For the screen layout, note that just_audio includes an example which looks like this; since there are many similarities to your proposed layout, you may get some ideas by looking at its code.
Finally, for the audio wave display, there is another package called audio_wave. You can use it to display an audio wave, but the problem is that I'm not aware of any plugin that gives you access to the waveform data. If you really want a waveform, you could possibly use a fake one (if it's just meant to visually indicate position/progress); otherwise, either you or someone else will need to write a plugin that decodes an audio file into a list of samples.
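If a fake waveform is acceptable, here is a minimal sketch in plain Flutter (no package involved; the bar heights are random placeholders rather than real sample data, and FakeWaveform is just a name made up for this example):
// Sketch only: a fake "waveform" made of random-height bars, purely to give a
// visual impression of audio; it is not derived from the actual samples.
import 'dart:math';

import 'package:flutter/material.dart';

class FakeWaveform extends StatelessWidget {
  FakeWaveform({Key? key}) : super(key: key);

  // Random bar heights stand in for real waveform data.
  final List<double> _heights =
      List.generate(40, (_) => 8.0 + Random().nextDouble() * 32.0);

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisSize: MainAxisSize.min,
      children: [
        for (final h in _heights)
          Container(
            width: 3,
            height: h,
            margin: const EdgeInsets.symmetric(horizontal: 1),
            color: Colors.blueGrey,
          ),
      ],
    );
  }
}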

Unity VideoPlayer and WebGLMovieTexture can't play two videos in a row

I'm trying to play videos in Unity WebGL in the browser, but I'm having lots of problems.
I tried two different video players and neither of them works fully.
The WebGLMovieTexture player works like this:
public WebGLStreamingVideoPlugin _videoPlugin = new WebGLStreamingVideoPlugin("http://www.example.net/video.mp4");
_videoPlugin.Play();
Basically when you want to play a video you create a new instance and give it the URL like above, and it plays great!!
The problem is that when you want to stop that video and play a different one, it seems impossible: there is no way to dispose of the first video, because the API only offers Stop(), which stops playback but keeps streaming the video data from the internet in the background.
There is no way to delete the instance because Destroy() can't be called, since WebGLMovieTexture is not derived from UnityEngine.Object, and C# does not seem to give a way to delete an object (how silly). Even setting it to null doesn't do it; it continues to stream the video in the background.
So if you create a new instance in order to play a different video, you end up with TWO video streams; do it again to play a third and you have THREE, and so on, so you can quickly see how badly that will end up.
My question is how to dispose of or destroy the first WebGLMovieTexture player, or maybe tell it to stop streaming the first video and start playing a different one?
The second player I tried is the VideoPlayer for WebGL in Unity version 5.6.0b4. With this one I can only get it to play a video if I hardcode the URL in the inspector; if I set the URL in code, it doesn't play in the browser.
vPlayer = gameObject.GetComponent<UnityEngine.Video.VideoPlayer>();
if (vPlayer.isPlaying) {
    Debug.Log("STOPPING PLAY");
    vPlayer.Stop();
}
vPlayer.url = url;
vPlayer.isLooping = true;
vPlayer.frame = 0;
vPlayer.targetCameraAlpha = 1F;
vPlayer.Prepare();
vPlayer.Play();
And to get it to play a second video I suspect I will have the same problems as the other one.
Anybody have any ideas that can help me?

Passing iOS native objects back to Unity3D

I implemented an iOS plugin with a couple of simple methods to let my Unity application interact with a native static library.
The problem I faced is passing native UI elements (objects) back to Unity.
For example, the native SDK has a method that creates a badge (UIView); on the other hand, I have a button in Unity (it could be a GUI element or some 3D object, whatever).
I access this method from Unity through my plugin, something like:
[DllImport("__Internal")]
private static extern void XcodePlugin_GetBadgeView();
and the following:
void XcodePlugin_GetBadgeView()
{
    // Generate native UIView
    UIView *badge = [Badge badge];
    // ???? Return the UIView badge instance to Unity (hm)?!
}
So I need something like:
[someViewOrObject addSubView:badge];
but inside Unity.
I know there is the ability to send a message back to Unity:
UnitySendMessage( "UnityCSharpClassName" , "UnityMethod", "WhateverValueToSendToUnity");
but WhateverValueToSendToUnity has to be a UTF8 string.
In conclusion, the only idea I have is to pass the coordinates from Unity to the native plugin and then add the badge at those coordinates (not sure this is the best solution):
[DllImport("__Internal")]
private static extern void XcodePlugin_AddBadgeViewToCoordinates(float x, float y);
If it's just for the badge coordinates, your idea seems fine.
If there are other things you'd like to get from iOS into Unity, I don't think there are many other options. The only other way I can think of is to save the data in a file (e.g., in NSTemporaryDirectory()) and then pass the filename back as a UTF8 string using UnitySendMessage. I've used this technique before to pass images and JSON files to Unity. Files in that directory get cleaned up automatically by iOS.
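As a rough sketch of that approach for the badge example (the GameObject name "BadgeReceiver" and method "OnBadgeReady" below are placeholders for whatever you wire up on the Unity side):
// Sketch only: render the badge to an image, write it to NSTemporaryDirectory(),
// and hand the file path back to Unity as a UTF8 string.
void XcodePlugin_GetBadgeView()
{
    UIView *badge = [Badge badge];

    // Render the view into a UIImage
    UIGraphicsBeginImageContextWithOptions(badge.bounds.size, NO, 0.0);
    [badge.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Write it to the temporary directory (cleaned up automatically by iOS)
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"badge.png"];
    [UIImagePNGRepresentation(image) writeToFile:path atomically:YES];

    // Only the UTF8 string (the file path) crosses the bridge back to Unity.
    UnitySendMessage("BadgeReceiver", "OnBadgeReady", [path UTF8String]);
}
On the Unity side, a method like OnBadgeReady(string path) could then load the PNG (for example into a Texture2D) and display it however you like.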

Access data from another class without creating new instance

I have a problem that I have spent two days researching without being able to solve.
I have an app with a class that plays streaming audio. In another view, I have a table of podcasts that are opened via URL.
In order to reuse the code of the first class, I created a delegate, so when the user plays any podcast I use the methods of the main class, just changing the parameters (in this case the URL).
The delegate works correctly, and so does the parameter passing. The only problem is that the delegate has to instantiate the main class.
ClassePrincipal *classePrincipal = [[ClassePrincipal alloc] init];
classePrincipal.delegate = self;
[classePrincipal method];
If audio is already playing in the main class, then because I instantiated a new object, it will start playing the podcast audio on top of what was already playing.
And even if I stop the main class first, it keeps playing the podcast, e.g.:
- (void)playPodcast {
    [classePrincipal destroyStreamer];
    [classePrincipal startStream];
}
The destroyStreamer method is called correctly, but since classePrincipal was created as a brand-new instance, it does not see any audio being played.
The question got a bit long-winded, but is there any way to call a method of ClassePrincipal, passing parameters, without instantiating the class? By not allocating a new object in memory, I could check whether audio is already playing and stop it.
If there is any other way to solve this, I would also be thankful.
From what I can tell, you may want to turn this class into a singleton. This way you will only instantiate it if it hasn't been instantiated already, and if it has, you can just place some checks in your code to stop the current audio before starting the new one.
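A minimal sketch of that singleton pattern, reusing ClassePrincipal and the method names from the question (the usage lines are only illustrative):
// Sketch only: a shared instance of ClassePrincipal so every view talks to the
// same streamer instead of allocating a new one.
@implementation ClassePrincipal

+ (instancetype)sharedInstance {
    static ClassePrincipal *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[ClassePrincipal alloc] init];
    });
    return shared;
}

// ... existing streaming methods (startStream, destroyStreamer, etc.) ...

@end

// Usage from the podcast view: stop whatever is playing, then start the new URL.
// [[ClassePrincipal sharedInstance] destroyStreamer];
// [[ClassePrincipal sharedInstance] startStream];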

How to write a web-based music visualizer?

I'm trying to find the best approach to build a music visualizer to run in a browser over the web. Unity is an option, but I'll need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL seems not ready. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?
Making something audio reactive is pretty simple. Here's an open source site with lots of audio-reactive examples.
As for how to do it, you basically use the Web Audio API to stream the music and use its AnalyserNode to get audio data out.
"use strict";
const ctx = document.querySelector("canvas").getContext("2d");
ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);
function start() {
ctx.canvas.removeEventListener('click', start);
// make a Web Audio Context
const context = new AudioContext();
const analyser = context.createAnalyser();
// Make a buffer to receive the audio data
const numPoints = analyser.frequencyBinCount;
const audioDataArray = new Uint8Array(numPoints);
function render() {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
// get the current audio data
analyser.getByteFrequencyData(audioDataArray);
const width = ctx.canvas.width;
const height = ctx.canvas.height;
const size = 5;
// draw a point every size pixels
for (let x = 0; x < width; x += size) {
// compute the audio data for this point
const ndx = x * numPoints / width | 0;
// get the audio data and make it go from 0 to 1
const audioValue = audioDataArray[ndx] / 255;
// draw a rect size by size big
const y = audioValue * height;
ctx.fillRect(x, y, size, size);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
// Make a audio node
const audio = new Audio();
audio.loop = true;
audio.autoplay = true;
// this line is only needed if the music you are trying to play is on a
// different server than the page trying to play it.
// It asks the server for permission to use the music. If the server says "no"
// then you will not be able to play the music
// Note if you are using music from the same domain
// **YOU MUST REMOVE THIS LINE** or your server must give permission.
audio.crossOrigin = "anonymous";
// call `handleCanplay` when it music can be played
audio.addEventListener('canplay', handleCanplay);
audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
audio.load();
function handleCanplay() {
// connect the audio element to the analyser node and the analyser node
// to the main Web Audio context
const source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
}
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
Then it's just up to you to draw something creative.
Note some troubles you'll likely run into.
At this point in time (2017/1/3), neither Android Chrome nor iOS Safari support analysing streaming audio data; instead you have to load the entire song. Here's a library that tries to abstract that a little.
On mobile you cannot automatically play audio. You must start the audio inside an input event based on user input, like 'click' or 'touchstart'.
As pointed out in the sample, you can only analyse audio if the source is either from the same domain OR you ask for CORS permission and the server grants it. AFAIK only Soundcloud gives permission, and it's on a per-song basis: it's up to the individual artist's settings whether audio analysis is allowed for a particular song.
To try to explain this part
The default is you have permission to access all data from the same domain but no permission from other domains.
When you add
audio.crossOrigin = "anonymous";
That basically says "ask the server for permission for user 'anonymous'". The server can give permission or not. It's up to the server. This includes asking even the server on the same domain which means if you're going to request a song on the same domain you need to either (a) remove the line above or (b) configure your server to give CORS permission. Most servers by default do not give CORS permission so if you add that line, even if the server is the same domain, if it does not give CORS permission then trying to analyse the audio will fail.
music: DOCTOR VOX - Level Up
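Incidentally, "giving CORS permission" on the server side usually comes down to the server sending a response header along these lines (a sketch only; '*' allows any origin, and you may want to restrict it to your own):
Access-Control-Allow-Origin: *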
By WebGL being "not ready", I'm assuming that you're referring to the penetration (it's only supported in WebKit and Firefox at the moment).
Other than that, equalisers are definitely possible using HTML5 audio and WebGL. A guy called David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones. Here are some videos of the visualisations (click to watch):
I used SoundManager2 to pull the waveform data from the mp3 file. That feature requires Flash 9 so it might not be the best approach.
My waveform demo with HTML5 Canvas:
http://www.momentumracer.com/electriccanvas/
and WebGL:
http://www.momentumracer.com/electricwebgl/
Sources:
https://github.com/pepez/Electric-Canvas
Depending on complexity, you might be interested in trying out Processing (http://www.processing.org); it has really easy tools to make web-based apps, and it has tools to get the FFT and waveform of an audio file.
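For a feel of what that looks like, here is a minimal Processing sketch using its Minim library (assuming Minim is available; "song.mp3" is a placeholder file in the sketch's data folder):
// Sketch only: load a file, run an FFT each frame, and draw the spectrum.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  player = minim.loadFile("song.mp3", 1024);  // placeholder file name
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);  // analyse the currently playing audio
  stroke(255);
  for (int i = 0; i < fft.specSize(); i++) {
    // taller line = more energy in that frequency band
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}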