So I can't find anything online that says I can't do this, but whenever I try to set the volume on an input audio queue on the iPhone, AudioQueueSetParameter returns an error. Specifically, if I try this code:
AudioQueueParameterValue val = f;
XThrowIfError(AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, val), "set queue volume");
Then I get the following error: kAudioQueueErr_InvalidParameter, which Apple's documentation describes as: "The specified parameter ID is invalid."
But if I try the same exact code on an output queue, it works just fine. Does anyone have any idea why I can change the volume on output, but not input?
Thanks
According to Apple's Audio Queue Services Reference,
audio queue parameters apply only to playback audio queues.
To retrieve information about your input stream, use audio queue properties instead.
// streamDescription here means your AudioStreamBasicDescription
// Level metering must be enabled on the queue before the meter state is valid:
UInt32 meteringEnabled = 1;
AudioQueueSetProperty(inQueue, kAudioQueueProperty_EnableLevelMetering,
                      &meteringEnabled, sizeof(meteringEnabled));

UInt32 levelSize = sizeof(AudioQueueLevelMeterState) * streamDescription.mChannelsPerFrame;
AudioQueueLevelMeterState *level = (AudioQueueLevelMeterState *)malloc(levelSize);
if (AudioQueueGetProperty(inQueue,
                          kAudioQueueProperty_CurrentLevelMeter,
                          level,        // the property data is written into the buffer itself
                          &levelSize) == noErr) {
    printf("Current peak: %f", level[0].mPeakPower);
}
free(level);
I presume you could just multiply the PCM values of the AudioQueueBuffers by some volume factor yourself to produce a volume adjustment.
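For example, here is a minimal sketch of that idea, assuming the input queue delivers 16-bit signed interleaved LPCM; the function name and how you obtain the gain value are illustrative, not part of any Audio Queue API:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical helper: scale every 16-bit sample in a filled buffer by a gain
// factor, clamping to the legal sample range to avoid wrap-around.
static void ApplyGainToBuffer(AudioQueueBufferRef buffer, float gain) {
    SInt16 *samples = (SInt16 *)buffer->mAudioData;
    UInt32 count = buffer->mAudioDataByteSize / sizeof(SInt16);
    for (UInt32 i = 0; i < count; i++) {
        SInt32 scaled = (SInt32)(samples[i] * gain);
        if (scaled >  32767) scaled =  32767;
        if (scaled < -32768) scaled = -32768;
        samples[i] = (SInt16)scaled;
    }
}

You would call this on each buffer inside your input callback, before writing the data out or passing it on.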
Related
I've hunted high and low and cannot find a solution to this problem. I am looking for a method to change the input/output devices which an AVAudioEngine will use on macOS.
When simply playing back an audio file the following works as expected:
var outputDeviceID: AudioDeviceID = xxx
let result: OSStatus = AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice,
                                            kAudioUnitScope_Global, 0, &outputDeviceID,
                                            UInt32(MemoryLayout<AudioDeviceID>.size))
if result != 0 {
    print("error setting output device \(result)")
    return
}
However if I initialize the audio input (with let input = engine.inputNode) then I get an error once I attempt to start the engine:
AVAEInternal.h:88 required condition is false: [AVAudioEngine.mm:1055:CheckCanPerformIO: (canPerformIO)]
I know that my playback code is OK because, if I don't change the output device, I can hear both the microphone and the audio file, and if I change the output device but don't initialize the inputNode, the file plays to the specified destination.
In addition to this I have been trying to change the input device; I understood from various places that the following should do it:
let result1: OSStatus = AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice,
                                             kAudioUnitScope_Output, 0, &inputDeviceID,
                                             UInt32(MemoryLayout<AudioDeviceID>.size))
if result1 != 0 {
    print("failed with error \(result1)")
    return
}
However, this doesn't work; in most cases it throws an error (10853), although if I select a sound card that has both inputs and outputs it succeeds. It appears that when I attempt to set the device on either the output or the input node, it is actually setting the device for both.
I would take this to mean that an AVAudioEngine instance can only deal with one device; however, it is quite happy working with the default devices (mic and speakers/headphones), so I am confident that isn't the issue. Some solutions I have seen online simply change the system default input, but that isn't a particularly nice solution.
Does anyone have any ideas as to whether this is possible?
It's worth noting that kAudioOutputUnitProperty_CurrentDevice is the only property available; there is no equivalent kAudioInputUnitProperty_CurrentDevice key because, as I understand it, both the inputNode and outputNode are classed as "output units" (they both emit sound somewhere).
Any ideas would be much appreciated as this is very very frustrating!!
Thanks
So I filed a support request with Apple on this and another issue, and the response confirms that an AVAudioEngine instance can only be assigned to a single aggregate device (that is, a device with both input and output channels). The system default units effectively create an aggregate device internally, which is why they work. I've also found an additional issue: if the input device also has output capabilities (and you activate the inputNode), then that device has to be both the input and the output device, as otherwise the output appears not to work.
So the answer, I think, is that there is no answer...
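If you genuinely need separate input and output hardware, one workaround consistent with that response is to combine the two physical devices into an aggregate device yourself and point the engine at it. Below is a rough sketch using the CoreAudio HAL on macOS; the device name, the UID string, and the lack of error handling are illustrative only:

#include <CoreAudio/CoreAudio.h>

// Combine two existing devices (identified by their UIDs) into one aggregate
// device, returning its AudioObjectID (or kAudioObjectUnknown on failure).
static AudioObjectID CreateAggregate(CFStringRef inputUID, CFStringRef outputUID) {
    CFMutableDictionaryRef desc = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("Engine Aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceUIDKey),
                         CFSTR("com.example.engine-aggregate"));   // hypothetical UID
    // The sub-device list is an array of dictionaries, one per physical device.
    CFMutableArrayRef subDevices = CFArrayCreateMutable(kCFAllocatorDefault, 0, &kCFTypeArrayCallBacks);
    CFStringRef uids[2] = { inputUID, outputUID };
    for (int i = 0; i < 2; i++) {
        CFMutableDictionaryRef sub = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(sub, CFSTR(kAudioSubDeviceUIDKey), uids[i]);
        CFArrayAppendValue(subDevices, sub);
        CFRelease(sub);
    }
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceSubDeviceListKey), subDevices);
    CFRelease(subDevices);

    AudioObjectID aggregateID = kAudioObjectUnknown;
    OSStatus err = AudioHardwareCreateAggregateDevice(desc, &aggregateID);
    CFRelease(desc);
    return (err == noErr) ? aggregateID : kAudioObjectUnknown;
}

The returned ID could then be passed to AudioUnitSetProperty with kAudioOutputUnitProperty_CurrentDevice, as in the snippets above.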
Okay guys, I've read many things about the FFT stuff, but it seems to be a bit more complicated than building a tableView.
I am searching for a way to analyze the playing audio (from iPod Library) in three ranges (low, mid, high). I think FFT is doing the job, but I'm not sure if I could filter (Lowpass, Bandpass and Highpass) the playing audio and analyze the peaks as well.
So if anyone knows the best (by best I mean fastest, CPU-wise) way to do this, please help me. There will be no front end, so I won't draw the FFT in a window (I guess the drawing would eat a lot of CPU).
Beyond that, I have no idea how I could analyze the audio. All the FFT sample code I found uses the mic, and I do not want to use the mic. I saw something about getting the audio file and exporting it to an uncompressed file, but I need live analysis.
I've had a look at aurioTouch2, but I don't get how I could change the input from the mic to the iPod Library.
I think, the part I'm searching for is here:
// Initialize our remote i/o unit
inputProc.inputProc = PerformThru;
inputProc.inputProcRefCon = self;
CFURLRef url = NULL;
try {
url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFStringRef([[NSBundle mainBundle] pathForResource:@"button_press" ofType:@"caf"]), kCFURLPOSIXPathStyle, false);
XThrowIfError(AudioServicesCreateSystemSoundID(url, &buttonPressSound), "couldn't create button tap alert sound");
CFRelease(url);
// Initialize and configure the audio session
XThrowIfError(AudioSessionInitialize(NULL, NULL, rioInterruptionListener, self), "couldn't initialize audio session");
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory), "couldn't set audio category");
XThrowIfError(AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self), "couldn't set property listener");
Float32 preferredBufferSize = .005;
XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(preferredBufferSize), &preferredBufferSize), "couldn't set i/o buffer duration");
UInt32 size = sizeof(hwSampleRate);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &hwSampleRate), "couldn't get hw sample rate");
XThrowIfError(AudioSessionSetActive(true), "couldn't set audio session active\n");
XThrowIfError(SetupRemoteIO(rioUnit, inputProc, thruFormat), "couldn't setup remote i/o unit");
unitHasBeenCreated = true;
drawFormat.SetAUCanonical(2, false);
drawFormat.mSampleRate = 44100;
(...)
But I'm quite new to all of these audio units, so I can't understand where an input is loaded. The code above also uses the old C-based Audio Session API, and a little birdie told me it will be deprecated, so what is the alternative?
So, basically:
How can I get the currently playing audio in order to do an analysis? Can I just use an MPMusicPlayerController and get the samples? Or do I have to build an entire AudioUnit chain that plays the library?
What is the fastest way (CPU) to analyze lows, mids and highs? Filtering? FFT? Something else?
Will I get in trouble with the copyrights of purchased music? Because I tried to convert the playing file to PCM samples and sometimes I get this error:
VTM_AViPodReader[7666:307] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetReader initWithAsset:error:] invalid parameter not satisfying: asset != ((void *)0)'
What is the "new" way to do an FFT if the whole Audio Session stuff won't work in the future?
You can't get the currently playing audio (security sandbox prevents this) on iOS, unless your app is the one playing the audio using certain select APIs (Audio Queue, RemoteIO, etc.)
3 bandpass filters (made with IIR biquads) will be faster than an FFT. But even a full FFT will use a very small percentage of CPU time.
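To make the filter option concrete, here is a rough sketch of a single band-pass biquad using the standard RBJ cookbook coefficients; you would run three of these (low, mid, and high center frequencies) over each buffer and track the peak of each output. The struct and function names are purely illustrative:

#include <math.h>

typedef struct { float b0, b1, b2, a1, a2, x1, x2, y1, y2; } Biquad;

// Configure a band-pass biquad centered on centerHz with quality factor q.
static void BiquadInitBandpass(Biquad *f, float sampleRate, float centerHz, float q) {
    float w0 = 2.0f * (float)M_PI * centerHz / sampleRate;
    float alpha = sinf(w0) / (2.0f * q);
    float a0 = 1.0f + alpha;
    f->b0 =  alpha / a0;
    f->b1 =  0.0f;
    f->b2 = -alpha / a0;
    f->a1 = -2.0f * cosf(w0) / a0;
    f->a2 = (1.0f - alpha) / a0;
    f->x1 = f->x2 = f->y1 = f->y2 = 0.0f;
}

// Filter a block of float samples in place (direct form I).
static void BiquadProcess(Biquad *f, float *samples, int count) {
    for (int i = 0; i < count; i++) {
        float x = samples[i];
        float y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
                - f->a1 * f->y1 - f->a2 * f->y2;
        f->x2 = f->x1; f->x1 = x;
        f->y2 = f->y1; f->y1 = y;
        samples[i] = y;
    }
}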
An app can't convert or play protected music from the iTunes library in a form where samples can be captured.
The FFT is in the Accelerate framework, not in the audio session.
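For the FFT route, here is a minimal sketch using vDSP from the Accelerate framework, assuming you already have 512 mono samples converted to float (the buffer size and function name are illustrative):

#include <Accelerate/Accelerate.h>

// Compute squared magnitudes for the 256 frequency bins of a 512-point real FFT.
static void ComputeSpectrum(const float *samples /* 512 floats */, float *magnitudes /* 256 floats */) {
    const vDSP_Length n = 512;
    const vDSP_Length log2n = 9;                       // 2^9 = 512
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    // Pack the real input into the split-complex layout vDSP_fft_zrip expects.
    float realp[256], imagp[256];
    DSPSplitComplex split = { realp, imagp };
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);

    // In-place forward real FFT, then squared magnitude per bin.
    vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);
    vDSP_zvmags(&split, 1, magnitudes, 1, n / 2);

    vDSP_destroy_fftsetup(setup);
}

Summing the bins that fall into your low, mid, and high ranges then gives the three band levels. (Creating the FFTSetup once and reusing it across buffers is cheaper than recreating it per call, as this sketch does.)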
I'm using Matt Gallagher's AudioStreamer to play an MP3 audio stream. Now I want to do an FFT in real time and visualize the frequencies using OpenGL ES on the iPhone.
I'm wondering where to catch the audio data and pass it to my "Super-Fancy-FFT-Computing-3D-Visualization-Method". Matt is using the AudioQueue Framework and there is a Callback function that is set with:
err = AudioQueueNewOutput(&asbd, ASAudioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue);
The Callback looks like this:
static void ASAudioQueueOutputCallback(void* inClientData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer){...}
At the moment I'm passing the data from the AudioQueueBufferRef, and the result looks very weird. But with FFT and visualizations there are so many points where you can screw up that I wanted to be sure I'm at least passing the right data to the FFT. I'm reading the data from the buffer this way, ignoring every second value because I only want to analyze one channel:
// assumes the buffer holds 16-bit signed, interleaved stereo LPCM
SInt16 *buffPointer = (SInt16 *)inBuffer->mAudioData;
int frameCount = inBuffer->mAudioDataByteSize / (2 * sizeof(SInt16));
for (int i = 0; i < frameCount; i++) {
    myBuffer[i] = buffPointer[2 * i];   // left channel only
}
Then comes the FFT computation, with myBuffer containing 512 values.
Instead of sending the data you receive from the audio file stream callback directly to the audio queue, you could convert it to PCM, run your analysis, and then feed it to the audio queue (as PCM) if you still need to play it. To do the conversion, you could use Audio Converter Services (which will be a screaming nightmare without end), or an offline audio queue.
Option 3: look into the new Audio Queue "tap" on iOS 6, which lets you look at the data inside a queue. I still need to check this out… it looks cool (and I'm giving a talk on it in three weeks at CocoaConf, so, yeah…)
(repost from: http://lists.apple.com/archives/coreaudio-api/2012/Oct/msg00034.html )
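For reference, here is a rough sketch of what that iOS 6 tap API looks like in use, based on the AudioQueueProcessingTapNew declarations; this is an illustration rather than code from the thread, and playbackQueue is assumed to be your existing output queue:

#include <AudioToolbox/AudioToolbox.h>

// Tap callback: pull the decoded source audio into ioData, analyze it, and
// return it unchanged so playback continues normally.
static void MyTapCallback(void *inClientData,
                          AudioQueueProcessingTapRef inAQTap,
                          UInt32 inNumberFrames,
                          AudioTimeStamp *ioTimeStamp,
                          AudioQueueProcessingTapFlags *ioFlags,
                          UInt32 *outNumberFrames,
                          AudioBufferList *ioData)
{
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                          ioFlags, outNumberFrames, ioData);
    // ioData->mBuffers[0].mData now holds PCM in the tap's processing format,
    // ready to hand to an FFT.
}

static AudioQueueProcessingTapRef InstallTap(AudioQueueRef playbackQueue) {
    UInt32 maxFrames = 0;
    AudioStreamBasicDescription processingFormat = {0};
    AudioQueueProcessingTapRef tap = NULL;
    OSStatus err = AudioQueueProcessingTapNew(playbackQueue, MyTapCallback, NULL,
                                              kAudioQueueProcessingTap_PreEffects,
                                              &maxFrames, &processingFormat, &tap);
    return (err == noErr) ? tap : NULL;
}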
In my application, I am receiving audio data in LinearPCM format, which I need to play.
I am following the iOS SpeakHere example. However, I cannot work out how and where I should provide a buffer to the AudioQueue.
Can anyone provide a working example of playing an audio buffer on iOS via AudioQueue?
In the SpeakHere example playback is achieved using AudioQueue.
In the set up of AudioQueue, a function is specified that will be called when the queue wants more data.
You can see that in this method:
void AQPlayer::SetupNewQueue()
Here's the line that specifies the callback function:
XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");
If you take a look at AQPlayer::AQBufferCallback, you'll see where it gets the data from. In this example, the data has been written out to a file on disk. That's a good solution if you want to save memory, or if there's a possibility the audio file could be quite large.
Anyway, looking at AQPlayer::AQBufferCallback, you'll see a call to a function AudioFileReadPackets. That's what reads in the audio packets from the file on disk. It reads them straight into the buffer that AudioQueue will use:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes, inCompleteAQBuffer->mPacketDescriptions, THIS->GetCurrentPacket(), &nPackets,
inCompleteAQBuffer->mAudioData);
That buffer is inCompleteAQBuffer->mAudioData.
Finally, the callback function must enqueue the buffer as follows:
if (nPackets > 0) {
    inCompleteAQBuffer->mAudioDataByteSize = numBytes;
    inCompleteAQBuffer->mPacketDescriptionCount = nPackets;
    AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
}
Note first that it has to check that we have some packets to play. It also has to specify how many bytes are in the buffer.
Then, this line here:
THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
That keeps track of where we are overall in the audio file. In other words, as more data is copied in from the file, we need to move mCurrentPacket forward so that the next read pulls data from the correct place.
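Since the question is about LinearPCM that arrives in memory rather than from a file, here is a stripped-down sketch of the same pattern with the file read replaced by a hypothetical FillWithMyPCM() that copies your received samples into the queue's buffer; the 44.1 kHz / 16-bit / mono format is an assumption you would replace with your real stream description:

#include <AudioToolbox/AudioToolbox.h>

#define kNumBuffers  3
#define kBufferBytes 4096

// Hypothetical: copies up to maxBytes of your received LPCM into dst and
// returns the number of bytes actually written.
extern UInt32 FillWithMyPCM(void *dst, UInt32 maxBytes);

static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    UInt32 bytes = FillWithMyPCM(inBuffer->mAudioData, kBufferBytes);
    inBuffer->mAudioDataByteSize = bytes;
    // Constant-bitrate LPCM needs no packet descriptions, so pass 0 and NULL.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static void StartPlayback(void) {
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mBytesPerPacket   = 2;
    fmt.mFramesPerPacket  = 1;

    AudioQueueRef queue = NULL;
    AudioQueueNewOutput(&fmt, MyOutputCallback, NULL, NULL, NULL, 0, &queue);

    // Prime the queue: allocate each buffer, fill it once, and enqueue it.
    for (int i = 0; i < kNumBuffers; i++) {
        AudioQueueBufferRef buf = NULL;
        AudioQueueAllocateBuffer(queue, kBufferBytes, &buf);
        MyOutputCallback(NULL, queue, buf);
    }
    AudioQueueStart(queue, NULL);
}

If bytes can ever be zero you would also need to handle end-of-data (stopping the queue rather than enqueueing an empty buffer), similar to how the SpeakHere callback only enqueues when nPackets > 0.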
While there are plenty of tutorials on how to use AVCaptureSession to grab camera data, I can find no information (even on Apple's developer site itself) on how to properly handle microphone data.
I have implemented AVCaptureAudioDataOutputSampleBufferDelegate, and I'm getting calls to my delegate, but I have no idea how the contents of the CMSampleBufferRef I get are formatted. Are the contents of the buffer one discrete sample? What are its properties? Where can these properties be set?
Video properties can be set using [AVCaptureVideoDataOutput setVideoSettings:], but there is no corresponding call for AVCaptureAudioDataOutput (no setAudioSettings or anything similar).
They are formatted as LPCM! You can verify this by getting the AudioStreamBasicDescription like so:
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
and then checking the stream description's mFormatID.
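Here is a hedged sketch of how you might then pull the raw LPCM out of each CMSampleBufferRef delivered to the delegate; it assumes mono or interleaved data (a single AudioBuffer), which is the common case for AVCaptureAudioDataOutput:

#include <CoreMedia/CoreMedia.h>

static void InspectSampleBuffer(CMSampleBufferRef sampleBuffer) {
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
    if (asbd == NULL || asbd->mFormatID != kAudioFormatLinearPCM) return;

    // Borrow the sample buffer's data as an AudioBufferList backed by a block buffer.
    AudioBufferList bufferList;
    CMBlockBufferRef blockBuffer = NULL;
    OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer, NULL, &bufferList, sizeof(bufferList), NULL, NULL,
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
    if (err != noErr) return;

    // bufferList.mBuffers[0].mData / .mDataByteSize now reference the PCM frames,
    // described by the fields of *asbd (sample rate, bits per channel, and so on).
    CFRelease(blockBuffer);
}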