I need to loop my source with some crossfade parameter (in seconds). Ideally the loop should play without any audible interruption at the sample boundary. AudioBufferSourceNode is audioNode in my code.
I ran into the problem that the buffer source cannot be reused. Is there a way around this?
playNoteOn: function(indexNote){
    var attack = this.get('attack'),
        release = this.get('release'),
        volume = 1 + this.get('volume') / 100,
        reverb = _.clone(this.get('reverb')),
        loop = this.get('loop'), cross;
    // piece for loop processing
    if (loop) {
        // milliseconds
        attack = this.get('startLoop') * 1000;
        release = this.get('endLoop') * 1000;
        // seconds
        cross = this.get('crossLoop');
    }
    // piece for ADSR processing
    var t0 = this.get('audioNode').context.currentTime,
        spread = attack / 1000 + release / 1000,
        attackSpread = t0 + attack / 1000;
    [this.get('schema').leftGain, this.get('schema').rightGain].forEach(function(gain, index){
        gain.gain.cancelScheduledValues(0);
        gain.gain.setValueAtTime(0, t0);
        gain.gain.linearRampToValueAtTime(volume, attackSpread);
        // gain.gain.setValueAtTime(volume, decaySpread);
        // gain.gain.linearRampToValueAtTime(0, releaseSpread);
    });
    this.get('audioNode').connect(this.get('schema').splitter, 0, 0);
    this.get('audioNode').connect(this.get('schema').leftGain);
    this.get('audioNode').connect(this.get('schema').rightGain);
    this.get('audioNode').connect(this.get('schema').reverb);
    this.get('audioNode').connect(APP.Models.Synth.get('schema').reverb);
    APP.Models.Synth.get('effects').where({active: false}).forEach(function(effect){
        effect.get('node').disconnect();
    });
    APP.Models.Synth.get('effects').where({active: true}).forEach(function(effect){
        effect.get('node').disconnect();
        effect.get('node').setParams(effect.toJSON()).getNode(this.get('audioNode'), [this.get('schema').leftGain, this.get('schema').rightGain]);
    }, this);
    if (loop) {
        this.get('audioNode').loop = true;
        this.get('audioNode').loopEnd = this.get('audioNode').buffer.duration - cross;
    }
    this.get('audioNode').start(t0);
},
You cannot reuse a buffer source. Once stopped, a buffer source is gone for good. But that's no problem: when you decode a sound file, you can reuse the buffer from the decoding again and again. Just create more buffer sources; you can create as many as you like. Where is your audio node object created, anyway? And what language and frameworks are you using? Some background on what you are doing would help.
Beware the difference between the buffer from decoding and the buffer source. Your audioNode is a buffer source, which is fed by a buffer. You can reuse the buffer but not the buffer source, so create the buffer source inside your playNoteOn code.
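For illustration, a minimal sketch of that pattern (the names here are placeholders, not taken from your code):

var audioCtx = new AudioContext();
var decodedBuffer; // the AudioBuffer produced by decodeAudioData, reusable indefinitely

function playNote(time) {
    // A buffer source is a one-shot object: make a fresh one per playback...
    var source = audioCtx.createBufferSource();
    // ...and point it at the same decoded buffer every time.
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start(time);
}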
I hope there will be nothing confusing in what I'm going to talk about, because my mother tongue is not English and my grammar is poor :p
I'm working on a mipmap analysis tool which needs to do calculations with the pixels of a render texture. Here's a part of the C# code:
private IEnumerator CSGroupColor(RenderTexture rt, GroupColor[] groupColors)
{
    var outputBuffer = new ComputeBuffer(groupColors.Length, 8);
    csKernelID = cs.FindKernel("CSGroupColor");
    cs.SetTexture(csKernelID, "rt", rt);
    cs.SetBuffer(csKernelID, "groupColorOut", outputBuffer);
    cs.Dispatch(csKernelID, rt.width / 8, rt.height / 8, 1);

    // Read the results back asynchronously so the main thread doesn't stall.
    var req = AsyncGPUReadback.Request(outputBuffer);
    yield return new WaitUntil(() => req.done);
    req.GetData<GroupColor>().CopyTo(groupColors);

    foreach (var color in groupColors)
    {
        if (!m_staticsDatas.TryGetValue(color.groupindex, out var vl))
            continue;
        if (color.value > 0)
            vl.allColors.Add(color.value);
    }
}
What I want to implement next is to make every buffer smaller (e.g. with a length of 4096), like we usually do in other asynchronous communication: pass the first buffer to the CPU as soon as it's full, then replace it with the second buffer, and so on.
As I see it, calling SetBuffer() again after req.done must be permitted to make that viable. I have been searching the Internet all day for a sample usage, but have found nothing.
Is there anyone who could give some help? Thanks very much.
I have a BufferSource, which I create like this:
const proxyUrl = location.origin == 'file://' ? 'https://cors-anywhere.herokuapp.com/' : '';
const request = new XMLHttpRequest();
request.open('GET', proxyUrl + 'http://heliosophiclabs.com/~mad/projects/mad-music/non.mp3', true);
// request.open('GET', 'non.mp3', true);
request.responseType = 'arraybuffer';
request.onload = () => {
    audioCtx.decodeAudioData(request.response, buffer => {
        buff = buffer;
    }, err => {
        console.error(err);
    });
};
request.send();
Yes, the CORS workaround is pathetic, but this is the way I found to work locally without needing to run an HTTP server. Anyway...
I would like to shift the pitch of this buffer. I've tried various forms of this:
const source = audioCtx.createBufferSource();
source.buffer = buff;

const analyser = audioCtx.createAnalyser();
analyser.connect(audioCtx.destination);
analyser.minDecibels = -140;
analyser.maxDecibels = 0;
analyser.smoothingTimeConstant = 0.8;
analyser.fftSize = 2048;
const dataArray = new Float32Array(analyser.frequencyBinCount);

source.connect(analyser);
analyser.connect(audioCtx.destination);
source.start(0);

analyser.getFloatFrequencyData(dataArray);
console.log('dataArray', dataArray);
All to no avail. dataArray is always filled with -Infinity values, no matter what I try.
My idea is to get this frequency-domain data, move all the frequencies up or down by some amount, and create a new Oscillator node out of them, like this:
const wave = audioCtx.createPeriodicWave(real, waveCompnents);
oscillator.setPeriodicWave(wave);
Anyway, if anyone has a better idea of how to shift pitch, I'd love to hear it. Sadly, detune and playbackRate both seem to do basically the same thing (why are there two ways of doing the same thing?), namely speed the playback up or slow it down, so that's not it.
First, there's a small issue with the code: you connect the analyser to the destination twice. You don't actually need to connect it at all.
Second, I think the reason you're getting all -Infinity values is that you call getFloatFrequencyData right after you start the source. There's a good chance that no samples have been played yet, so the analyser only has buffers of all zeros. You need to call getFloatFrequencyData after a bit of time to see non-zero values.
Third, I don't think this will work at all, even for shifting the pitch of an oscillator. getFloatFrequencyData only returns magnitude information; you would need the phase information of the harmonics to get everything shifted correctly, and currently there's no way to get it.
Fourth, if you have an AudioBuffer with the data you need, consider using playbackRate to change the pitch. I'm not sure whether it will produce the shift you want.
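To illustrate the second and fourth points, here's a minimal sketch reusing buff, analyser, and dataArray from your snippet (the 250 ms delay and the three-semitone rate are arbitrary choices, not values from the question):

var source = audioCtx.createBufferSource();
source.buffer = buff;
// playbackRate shifts pitch and speed together
source.playbackRate.value = Math.pow(2, 3 / 12); // up three semitones
source.connect(analyser);             // the analyser itself needs no connection to the destination
source.connect(audioCtx.destination); // connect the source directly for audible output
source.start(0);

// Read the analyser only after some samples have actually been rendered.
setTimeout(function() {
    analyser.getFloatFrequencyData(dataArray);
    console.log('dataArray', dataArray);
}, 250);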
When I interact with the screen, the objects in my game start to stutter. My FPS is at 60 and doesn't drop, but the stuttering is still prevalent. I believe my problem is how I'm animating the objects on screen (code below). If anybody could help, I would appreciate it.
I have some number of nodes inside an array called _activePool. In the update function I move each node's x position inside _activePool, add a new node when the last node's position in _activePool is <= 25, and remove the first node in _activePool if its position is <= -25.
if _cycleIsActive {
    for obj in _activePool {
        // move the obj in _activePool
        obj.position.x += Float(dt * self.speedConstant);
    }
    let lastObj = _activePool.last;
    if (lastObj?.position.x)! + getWidthOfNode(node: lastObj!) + Float(random(min: 15, max: 20)) <= 25 {
        // get new obj (pattern) and add to _activePool
        self.getPatternData(sequencePassedIn: selectedSeq, level: self._currentLevel, randomPattern: randomPattern());
    }
    let firstObj = _activePool.first;
    if (firstObj?.position.x)! + getWidthOfNode(node: firstObj!) <= -25 {
        // remove object and return to specific pool
        firstObj?.removeFromParentNode();
        returnItems(item: firstObj!);
        _activePool.removeFirst();
    }
}
I create several pools and add them to a dictionary:
func activatePools() {
    temp1Pool = ObjectPool(tag: 1, data: []);
    dictPool[(temp1Pool?.tag)!] = temp1Pool;
    temp2Pool = ObjectPool(tag: 2, data: []);
    dictPool[(temp2Pool?.tag)!] = temp2Pool;
    for i in 0 ..< dictPool.count {
        obstacleCreationFactory(factorySwitch: i);
    }
}
Creating my obstacles (enemies):
func obstacleCreationFactory(factorySwitch: Int) {
    Enemies = Enemy();
    switch factorySwitch {
    case 0:
        for _ in 0...100 {
            let blueEnemy = Enemies?.makeCopy() as! Enemy
            blueEnemy.geometry = (Enemies?.geometry?.copy() as! SCNGeometry);
            blueEnemy.geometry?.firstMaterial?.diffuse.contents = UIColor.blue;
            blueEnemy.tag = 1;
            temp1Pool?.addItemToPool(item: blueEnemy);
        }
    case 1:
        for _ in 0...100 {
            let redEnemy = Enemies?.makeCopy() as! Enemy
            redEnemy.geometry = (Enemies?.geometry?.copy() as! SCNGeometry);
            redEnemy.geometry?.firstMaterial?.diffuse.contents = UIColor.red;
            redEnemy.tag = 2;
            temp2Pool?.addItemToPool(item: redEnemy);
        }
    default:
        print("factory error");
    }
}
Without being able to look at the rest of your code base, it's difficult to guess what is causing your issue.
If you are creating a ton of temporary objects in a loop somewhere, you might consider creating a local autorelease pool to prevent memory spikes. Here is a good article that describes why that's a good idea in some situations.
You could also be calling some particularly expensive functions on a timer. It's difficult to say.
In short, you should consider using Xcode's profiling tools (called Instruments). Specifically, I would recommend using the Time Profiler to examine which functions are taking the most time and causing those spikes.
Here is a great WWDC session video that shows how to use the Time Profiler. I'd recommend profiling your app regularly, especially when you have an issue like this.
I am building an app that uses microphone input to detect sounds and trigger events. I based my code on AKAmplitudeTap, but when I ran it, I found that I was only obtaining sample data for intervals with missing sections.
The tap code looks like this (with the guts ripped out, simply keeping track of how many samples would have been processed):
open class MyTap {
    // internal let bufferSize: UInt32 = 1_024  // 8-9 kSamples/sec
    internal let bufferSize: UInt32 = 4096      // 39.6 kSamples/sec
    // internal let bufferSize: UInt32 = 16536  // 43.3 kSamples/sec

    public init(_ input: AKNode?) {
        input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, _ in
            sampleCount += self.bufferSize // sampleCount is tracked elsewhere in the real code
        }
    }
}
I initialize the tap with:
func afterLoad() {
    assert(!loaded)
    AKSettings.audioInputEnabled = true
    do {
        try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
    } catch {
        print("Could not set session category.")
    }
    mic = AKMicrophone()
    myTap = MyTap(mic) // seriously, can it be that easy?
    loaded = true
}
The original tap code was capturing samples to a buffer, but I saw that big chunks of time were missing with a buffer size of 1024. I suspected that the processing time for the sample buffer might be excessive, so I simplified the code to just keep track of how many samples were being passed to the tap. In another part of the code I print out sampleCount/elapsedTime and, as noted in the comments after 'bufferSize', I get different numbers of samples per second.
The sample rate converges on 43.1 KSamples/sec with a 16K buffer, and only collects about 20% of the samples with a 1K buffer. I would prefer to use the small buffer size to obtain near real-time response to detected sounds. As I've been writing this, the 4K buffer version has been running and has stabilized at 39678 samples/sec.
Am I missing something? Can a tap with a small buffer size actually capture 44.1 Khz sample data?
Problem resolved... the tap requires this line of code:
buffer.frameLength = self.bufferSize
... and suddenly all the samples appear. I clearly stripped out a bit too much from code I didn't fully understand.
Mostly for security reasons, I'm not allowed to store a WAV file on the server to be accessed by a browser. What I have on the server is a byte array containing audio data (the data portion of a WAV file, I believe), and I want it played in the browser through JavaScript (or an applet, but JS is preferred). I can use JSON-RPC to send the whole byte[] over, or I can open a socket and stream it, but in either case I don't know how to play the byte[] within the browser.
The following code plays a sine wave at 0.5 and 2.0 seconds. Call the function play_buffersource() from your button or anywhere you want.
Tested using Chrome with the Web Audio flag enabled. For your case, all you need to do is shuffle your audio bytes into buf.
<script type="text/javascript">
    const kSampleRate = 44100; // Other sample rates might not work depending on your browser's AudioContext
    const kNumSamples = 16834;
    const kFrequency = 440;
    const kPI_2 = Math.PI * 2;

    function play_buffersource() {
        if (!window.AudioContext) {
            if (!window.webkitAudioContext) {
                alert("Your browser sucks because it does NOT support any AudioContext!");
                return;
            }
            window.AudioContext = window.webkitAudioContext;
        }
        var ctx = new AudioContext();

        var buffer = ctx.createBuffer(1, kNumSamples, kSampleRate);
        var buf = buffer.getChannelData(0);
        for (var i = 0; i < kNumSamples; ++i) {
            buf[i] = Math.sin(kFrequency * kPI_2 * i / kSampleRate);
        }

        var node = ctx.createBufferSource();
        node.buffer = buffer;
        node.connect(ctx.destination);
        node.start(ctx.currentTime + 0.5); // start() was named noteOn() in older implementations

        node = ctx.createBufferSource();
        node.buffer = buffer;
        node.connect(ctx.destination);
        node.start(ctx.currentTime + 2.0);
    }
</script>
References:
http://epx.com.br/artigos/audioapi.php
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
If you need to resample the audio, you can use a JavaScript resampler: https://github.com/grantgalitz/XAudioJS
If you need to decode the base64 data, there are a lot of JavaScript base64 decoder: https://github.com/carlo/jquery-base64
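If you'd rather skip a library for the base64 step, the browser's built-in atob is enough; a small sketch (not part of the answer above):

function base64ToBytes(b64) {
    var bin = atob(b64);                  // base64 -> binary string
    var bytes = new Uint8Array(bin.length);
    for (var i = 0; i < bin.length; i++) {
        bytes[i] = bin.charCodeAt(i);     // binary string -> raw bytes
    }
    return bytes;                         // feed this into an audio buffer as above
}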
I accomplished this via the following code. I pass a byte array containing the data from the WAV file to the function playByteArray. My solution is similar to Peter Lee's, but I could not get his to work in my case (the output was garbled), whereas this solution works well for me. I verified that it works in Firefox and Chrome.
window.onload = init;
var context; // Audio context
var buf;     // Audio buffer

function init() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser does not support any AudioContext and cannot play back this audio.");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    context = new AudioContext();
}

function playByteArray(byteArray) {
    var arrayBuffer = new ArrayBuffer(byteArray.length);
    var bufferView = new Uint8Array(arrayBuffer);
    for (var i = 0; i < byteArray.length; i++) {
        bufferView[i] = byteArray[i];
    }
    context.decodeAudioData(arrayBuffer, function(buffer) {
        buf = buffer;
        play();
    });
}

// Play the loaded file
function play() {
    // Create a source node from the buffer
    var source = context.createBufferSource();
    source.buffer = buf;
    // Connect to the final output node (the speakers)
    source.connect(context.destination);
    // Play immediately
    source.start(0);
}
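A hypothetical usage, once the bytes have arrived (for instance over the JSON-RPC channel from the question; wavBytes is a placeholder name):

playByteArray(wavBytes); // wavBytes: an array or Uint8Array received from the server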
If you have the bytes on the server, I would suggest creating some kind of handler on the server that streams the bytes to the response as a WAV file. This "file" would exist only in memory on the server, not on disk, and the browser can then handle it like a normal WAV file.
More details on your server stack would be needed to say how this could be done in your environment.
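To make that concrete, here is a rough sketch of such a handler in Node.js. It assumes 16-bit mono PCM at 44.1 kHz, and getPcmBytes() is a hypothetical accessor for the byte array you already have:

var http = require('http');

// Build the 44-byte RIFF/WAVE header that turns raw PCM into a playable WAV stream.
function wavHeader(dataLen, sampleRate, channels, bits) {
    var h = Buffer.alloc(44);
    h.write('RIFF', 0);
    h.writeUInt32LE(36 + dataLen, 4);
    h.write('WAVE', 8);
    h.write('fmt ', 12);
    h.writeUInt32LE(16, 16);                               // fmt chunk size
    h.writeUInt16LE(1, 20);                                // PCM format
    h.writeUInt16LE(channels, 22);
    h.writeUInt32LE(sampleRate, 24);
    h.writeUInt32LE(sampleRate * channels * bits / 8, 28); // byte rate
    h.writeUInt16LE(channels * bits / 8, 32);              // block align
    h.writeUInt16LE(bits, 34);
    h.write('data', 36);
    h.writeUInt32LE(dataLen, 40);
    return h;
}

http.createServer(function(req, res) {
    var pcm = getPcmBytes(); // hypothetical: your in-memory byte array, as a Buffer
    res.writeHead(200, { 'Content-Type': 'audio/wav' });
    res.write(wavHeader(pcm.length, 44100, 1, 16));
    res.end(pcm);
}).listen(8080);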
I suspect you can achieve this with HTML5 Audio API easily enough:
https://developer.mozilla.org/en/Introducing_the_Audio_API_Extension
This library might come in handy too, though I'm not sure if it reflects the latest browser behaviours:
https://github.com/jussi-kalliokoski/audiolib.js
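Following up on the Audio API suggestion, a minimal sketch of that route, assuming the byte array holds a complete WAV file (headers included) rather than only its data chunk:

function playViaDataUri(wavBytes) {
    var bin = '';
    for (var i = 0; i < wavBytes.length; i++) {
        bin += String.fromCharCode(wavBytes[i]); // btoa needs a binary string
    }
    new Audio('data:audio/wav;base64,' + btoa(bin)).play();
}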