Using the iOS 3D Mixer - iPhone

I have an AUGraph set up fairly simply, with a multichannel mixer connected to an I/O unit. Playback is supplied through a callback function and everything works nicely.
I am trying to switch over to the 3D Mixer instead of the Multichannel mixer. So I switched the parameter from kAudioUnitSubType_MultiChannelMixer to kAudioUnitSubType_AU3DMixerEmbedded and left all the other setup the same.
The result was a sort of high-pitched whine that briefly seemed to resemble the intended audio and then became pure whine. I have gone through each of the 3D Mixer unit's parameters and set them to their defaults, but there was no change. Toggling the k3DMixerParam_Enable parameter did mute and unmute the playback, though.
What setup might I have missed? Or does anyone know where to find an example of a working 3D Mixer?

As already pointed out, the 3D Mixer needs mono inputs. But you also have to feed it 16-bit integer samples. This is a working AudioStreamBasicDescription:
AudioStreamBasicDescription streamFormat = {0};
size_t bytesPerSample = sizeof (UInt16);
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
streamFormat.mBytesPerPacket = bytesPerSample;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = bytesPerSample;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = 8 * bytesPerSample;
streamFormat.mSampleRate = graphSampleRate;
// Set the input stream format of the desired 3D mixer unit audio bus
AudioUnitSetProperty (
mixerUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
audioBus,
&streamFormat,
sizeof (streamFormat)
);

As all answers already mention: the 3D Mixer on iOS needs mono inputs.
On iOS 8 / Xcode 6 the concept of canonical formats is deprecated, and this is the only mono stream format description I found to work as the 3D Mixer's input bus stream format:
AudioStreamBasicDescription monoStreamFormat = {0};
monoStreamFormat.mSampleRate = sampleRate;
monoStreamFormat.mFormatID = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
monoStreamFormat.mBitsPerChannel = 16;
monoStreamFormat.mChannelsPerFrame = 1;
monoStreamFormat.mFramesPerPacket = 1;
monoStreamFormat.mBytesPerPacket = 2;
monoStreamFormat.mBytesPerFrame = 2;
The sample rate should be set on, and then obtained from, the AVAudioSession.
Set this format on the output of the audio unit connected to the 3D Mixer input, which is probably an AUConverter unit.
Note, however, that this hasn't been tested on anything earlier than iOS 8.
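For illustration, here is a rough sketch of that wiring in an AUGraph. The names converterUnit, converterNode, mixerNode, processingGraph and busNumber are placeholders for whatever exists in your own graph, not code from the original answer:
// Sketch only: set the mono format on the converter's output, then feed that
// output into one input bus of the 3D Mixer. All variable names are placeholders.
OSStatus result = AudioUnitSetProperty(converterUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,   // output side of the converter
                                       0,
                                       &monoStreamFormat,
                                       sizeof(monoStreamFormat));
if (result != noErr) { /* handle the error */ }
result = AUGraphConnectNodeInput(processingGraph,
                                 converterNode, 0,      // source node and its output number
                                 mixerNode, busNumber); // 3D Mixer node and its input bus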

The 3D Mixer needs mono inputs.
http://lists.apple.com/archives/coreaudio-api/2010/Sep/msg00144.html

Related

Streaming x264 with packet loss

I am writing a program in which I use x264 as the encoder.
I use the following parameters:
av_opt_set(codecContextH264[numberCoder]->priv_data, "profile", "baseline", 0);
av_opt_set(codecContextH264[numberCoder]->priv_data, "preset", "ultrafast", 0);
av_opt_set(codecContextH264[numberCoder]->priv_data, "tune", "zerolatency", 0);
codecContextH264[numberCoder]->bit_rate = bitrate;
codecContextH264[numberCoder]->bit_rate_tolerance = bitrate - 5000;
codecContextH264[numberCoder]->width = w;
codecContextH264[numberCoder]->height = h;
codecContextH264[numberCoder]->time_base.den = fps;
codecContextH264[numberCoder]->time_base.num = 1;
codecContextH264[numberCoder]->pix_fmt = PIX_FMT_YUV420P;
codecContextH264[numberCoder]->gop_size = fps * 3;
codecContextH264[numberCoder]->keyint_min = fps * 3;
codecContextH264[numberCoder]->max_b_frames = 0;
codecContextH264[numberCoder]->slices = (int)(w * h) / 1500 + 1;
I use only I and P frames.
What x264 settings should I use so that the stream can survive losing P frames?
Perhaps x264 has no such option?
I read that with the "baseline" profile it is possible to lose P frames...
Please help.
You can try setting gop_size and keyint_min to 0 - that should result in a stream with only I frames, but it largely defeats the purpose of compression. A minimal sketch of that configuration is shown below.
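For reference, this is roughly what the all-intra configuration looks like, using the same codecContextH264 array as in the question; whether you also need the x264opts line depends on your FFmpeg/x264 version:
/* Sketch only: every frame becomes an I/IDR frame, so a lost packet never
   corrupts later frames, at the cost of a much higher bitrate. */
codecContextH264[numberCoder]->gop_size = 0;
codecContextH264[numberCoder]->keyint_min = 0;
codecContextH264[numberCoder]->max_b_frames = 0;
av_opt_set(codecContextH264[numberCoder]->priv_data, "x264opts", "keyint=1", 0);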
The following is based on the assumption that you are using RTP over UDP. If you are streaming in an environment where packet loss is high, why not use TCP instead, or implement some kind of quality-of-service scheme in which, when you see that RTP sequence numbers are missing, you force the source to issue a new keyframe?

iOS Tone Generator with variable Oscillation Patterns

I have a tone generator application that generates a tone based on a slider value for frequency. This part of the application works fine. I'm rendering the tone using:
#import <AudioToolbox/AudioToolbox.h>
OSStatus RenderTone(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
// Fixed amplitude is good enough for our purposes
const double amplitude = 0.25;
// Get the tone parameters out of the view controller
ToneGeneratorViewController *viewController =
(ToneGeneratorViewController *)inRefCon;
double theta = viewController->theta;
double theta_increment = 2.0 * M_PI * viewController->frequency / viewController->sampleRate;
// This is a mono tone generator so we only need the first buffer
const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
buffer[frame] = sin(theta) * amplitude;
theta += theta_increment;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
// Store the theta back in the view controller
viewController->theta = theta;
return noErr;
}
- (void)createToneUnit
{
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output");
// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &toneUnit);
NSAssert1(toneUnit, @"Error creating unit: %ld", err);
// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = self;
err = AudioUnitSetProperty(toneUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input));
NSAssert1(err == noErr, #"Error setting callback: %ld", err);
// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = four_bytes_per_float;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (toneUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, #"Error setting stream format: %ld", err);
}
Now I need to modify the patterns in the application, like the Dog Whistler application does. Can anyone tell me what I need to do to modify the wave patterns, starting from this source code?
Thanks in advance
You would probably need a different RenderTone implementation for each specific pattern. The implementation in your code produces a sampled pure sinusoidal wave with no modulation. There are various patterns you could generate; which ones you implement depends on your needs.
For example, generating shorter or longer beeps would require you to write 'silence' (zeros) to the buffer inside your for loop for a certain number of frames, then generate sinusoidal samples again, then silence again, and so on (this is like chopping the signal).
You could also apply amplitude modulation (a tremolo effect) by scaling the sample values with a factor computed from another, much lower-frequency sine signal.
Another example would be a 'police siren' sound, produced by modulating the frequency of the generated signal (a vibrato effect), essentially the value of your theta_increment variable, again according to a low-frequency signal; or simply by alternating between two different values for it, as with the 'beep' effect above.
Hope this helps.
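To make the tremolo and vibrato ideas concrete, here is a rough sketch of how the render loop from the question could be modified. modPhase is a made-up piece of state that is not in the original code, and the 5 Hz / 50 Hz numbers are arbitrary:
// Sketch only: amplitude modulation (tremolo) and frequency modulation (vibrato/siren)
// layered onto the existing sine generator. 'modPhase' is hypothetical extra state.
double modPhase = viewController->modPhase;
double modIncrement = 2.0 * M_PI * 5.0 / viewController->sampleRate;  // 5 Hz LFO
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    double tremolo = 0.5 * (1.0 + sin(modPhase));        // slow gain sweep between 0 and 1
    double vibrato = 50.0 * sin(modPhase);               // +/- 50 Hz pitch wobble
    double theta_increment = 2.0 * M_PI *
        (viewController->frequency + vibrato) / viewController->sampleRate;
    buffer[frame] = sin(theta) * amplitude * tremolo;
    theta += theta_increment;
    if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
    modPhase += modIncrement;
    if (modPhase > 2.0 * M_PI) modPhase -= 2.0 * M_PI;
}
viewController->theta = theta;
viewController->modPhase = modPhase;                     // hypothetical stored LFO phase
For the chopped 'beep' pattern you would instead zero out buffer[frame] whenever a frame counter falls inside the silent part of the cycle.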

How to Make and Play A Procedurally Generated Chirp Sound

My goal is to create a "Chirper" class. A chirper should be able to emit a procedurally generated chirp sound. The specific idea is that the chirp must be procedurally generated, not a prerecorded sound played back.
What is the simplest way to achieve a procedurally generated chirp sound on the iPhone?
You can do it with a sine wave, as you said, which you would generate using the sin function. Create a buffer as long as you want the sound to be, in samples, such as:
// 1 second chirp
float samples[44100];
Then pick a start frequency and an end frequency; you probably want the start to be higher than the end, something like:
float startFreq = 1400;
float endFreq = 1100;
float thisFreq;
double phase = 0.0;
int x;
for(x = 0; x < 44100; x++)
{
    float lerp = (float)x / 44100.0f;
    thisFreq = (lerp * endFreq) + ((1 - lerp) * startFreq);
    // Advance the phase by the current (interpolated) frequency each sample,
    // otherwise the pitch sweep comes out wrong.
    phase += 2.0 * M_PI * thisFreq / 44100.0;
    samples[x] = sinf(phase);
}
Something like that, anyway.
And if you want a buzz or another sound, use different waveforms. Create them to work very similarly to sin and you can use them interchangeably; that way you could write saw(), sqr(), and tri(), and you could do things like combine them to form more complex or varied sounds.
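For example, drop-in replacements for sin might look like the sketch below; the names saw, sqr and tri are just suggestions, and each takes a phase in radians and returns a value between -1 and 1:
#include <math.h>
// Sketch only: naive (non-band-limited) waveforms that can be swapped in for sinf().
static float saw(float phase) { return (float)(fmod(phase, 2.0 * M_PI) / M_PI - 1.0); }
static float sqr(float phase) { return sinf(phase) >= 0.0f ? 1.0f : -1.0f; }
static float tri(float phase) { return (float)((2.0 / M_PI) * asin(sin(phase))); }
Naive waveforms like these alias at high frequencies, but for chirps and buzzes they are usually good enough.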
========================
Edit -
If you want to play it back, you should be able to do something along these lines using OpenAL. The important thing is to use OpenAL or a similar iOS API to play the raw buffer.
alGenBuffers (1, &buffer);
alBufferData (buffer, AL_FORMAT_MONO16, buf, size, 8000);
alGenSources (1, &source);
ALint state;
// attach buffer and play
alSourcei (source, AL_BUFFER, buffer);
alSourcePlay (source);
do
{
    usleep (200 * 1000);   // poll roughly every 200 ms
    alGetSourcei (source, AL_SOURCE_STATE, &state);
}
while ((state == AL_PLAYING) && play);   // 'play' is an external keep-playing flag
alSourceStop (source);
alDeleteSources (1, &source);
alDeleteBuffers (1, &buffer);
free (buf);   // release the sample buffer (or delete[], depending on how it was allocated)
Using RemoteIO audio unit
Audio Unit Hosting Guide for iOS
You can use Nektarios's code in the render callback for a RemoteIO unit. You can even change waveforms in real time (low latency).
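If you go the RemoteIO route, the render callback could simply copy the precomputed chirp samples into the output buffer. A rough sketch, assuming a 32-bit float mono stream format and a hypothetical ChirpState struct holding the buffer and a playback position:
typedef struct {
    float  *chirpSamples;      // the 44100-sample buffer generated above
    UInt32  chirpLength;
    UInt32  playbackPosition;
} ChirpState;                  // hypothetical; not part of the answer's code
OSStatus RenderChirp(void *inRefCon,
                     AudioUnitRenderActionFlags *ioActionFlags,
                     const AudioTimeStamp *inTimeStamp,
                     UInt32 inBusNumber,
                     UInt32 inNumberFrames,
                     AudioBufferList *ioData)
{
    ChirpState *state = (ChirpState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        if (state->playbackPosition < state->chirpLength)
            out[frame] = state->chirpSamples[state->playbackPosition++];
        else
            out[frame] = 0.0f;   // emit silence once the chirp has finished
    }
    return noErr;
}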

Help with IIR Comb Filter

Reverb.m
#define D 1000
OSStatus MusicPlayerCallback(
void* inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData){
MusicPlaybackState *musicPlaybackState = (MusicPlaybackState*) inRefCon;
//Sample Rate 44.1
float a0,a1;
double y0, sampleinp;
//Delay Gain
a0 = 1;
a1 = 0.5;
for (int i = 0; i< ioData->mNumberBuffers; i++){
AudioBuffer buffer = ioData->mBuffers[i];
SInt16 *outSampleBuffer = buffer.mData;
for (int j = 0; j < inNumberFrames*2; j++) {
//Delay Left Channel
sampleinp = *musicPlaybackState->samplePtr++;
/* IIR equation of Comb Filter
y[n] = (a*x[n])+ (b*x[n-D])
*/
y0 = (a0*sampleinp) + (a1*sampleinp-D);
outSampleBuffer[j] = fmax(fmin(y0, 32767.0), -32768.0);
j++;
//Delay Right Channel
sampleinp = *musicPlaybackState->samplePtr++;
y0 = (a0*sampleinp) + (a1*sampleinp-D);
outSampleBuffer[j] = fmax(fmin(y0, 32767.0), -32768.0);
}
}
return noErr;
}
OK, I got a lot of info, but I'm having trouble implementing it. Can someone help? It's probably something really easy I'm forgetting. It's just playing back as normal, with a little boost but no delay.
Your treatment of the x0[] variables doesn't look right: the way you have it, the left and right channels will be intermingled. You assign to x0[j] for the left channel, then overwrite x0[j] with the right-channel data. So the delayed signal x0[j-D] will always correspond to the right channel, and the delayed left-channel data is lost.
You didn't say what your sample rate is, but for a typical audio application a three-sample delay might not have much of an audible effect. At 44.1 ksamples/sec, with a 3-sample delay, the peaks and troughs of the filter response will be at multiples of 14,700 Hz. All you'll get is a single peak in the audio frequency range, in a part of the spectrum where there's hardly any power (assuming the signal is speech or music).
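For what it's worth, the delayed term x[n-D] usually comes from a small circular buffer rather than from an arithmetic tweak of the current sample. A rough sketch for the left channel, reusing D, a0, a1, sampleinp, y0 and outSampleBuffer from the question's code (delayLineL and delayIndexL are hypothetical extra state):
// Sketch only: y[n] = a0*x[n] + a1*x[n-D] with a real D-sample delay line.
static double delayLineL[D];            // zero-initialised history of left-channel input
static int delayIndexL = 0;
// Inside the per-frame loop, left channel:
sampleinp = *musicPlaybackState->samplePtr++;
double delayed = delayLineL[delayIndexL];          // this is x[n - D]
y0 = (a0 * sampleinp) + (a1 * delayed);
delayLineL[delayIndexL] = sampleinp;               // store x[n]; it is read again D samples later
delayIndexL = (delayIndexL + 1) % D;
outSampleBuffer[j] = (SInt16)fmax(fmin(y0, 32767.0), -32768.0);
The right channel needs its own delay line and index so the two channels stay separate, which is exactly the mix-up the answer above describes.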

How can I display a simple animated spectrogram to visualize audio from a MixerHostAudio object?

I'm working off of some of Apple's sample code for mixing audio (http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html), and I'd like to display an animated spectrogram (think of the iTunes display in the top center that replaces the song title with moving bars). It would need to somehow get data from the audio stream live, since the user will be mixing several loops together. I can't seem to find any tutorials online about anything to do with this.
I know I am really late to this question, but I just found a great resource that solves it.
Solution:
Instantiate an audio unit to record samples from the microphone of the iOS device.
Perform FFT computations with the vDSP functions in Apple’s Accelerate framework.
Draw your results to the screen using a UIImage
Computing the FFT or spectrogram of an audio signal is a fundamental audio signal processing task. As an iOS developer, whether you want to simply find the pitch of a sound, create a nice visualisation, or do some front-end processing, it's something you've likely thought about if you are at all interested in audio. Apple does provide a sample application illustrating this task (aurioTouch). The audio processing part of that app is obscured, however, imho, by extensive use of OpenGL.
The goal of this project was to abstract as much of the audio DSP out of the sample app as possible and to render a visualisation using only UIKit. The resulting application running in the iOS 6.1 simulator is shown to the left. There are three components: a simple 'exit' button at the top, a gain slider at the bottom (allowing the user to adjust the scaling between the magnitude spectrum and the brightness of the colour displayed), and a UIImageView displaying power spectrum data in the middle that refreshes every three seconds. Note that the frequencies run from low to high beginning at the top of the image, so the top of the display is actually DC while the bottom is the Nyquist rate. The image shows the results of processing some speech.
This particular Spectrogram App records audio samples at a rate of 11,025 Hz in frames that are 256 points long. That’s about 0.0232 seconds per frame. The frames are windowed using a 256 point Hanning window and overlap by 1/2 of a frame.
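(As an aside, the 256-point Hanning window used below can be generated once with the Accelerate framework; whether the original project uses the normalised or denormalised variant is an assumption here.)
#include <Accelerate/Accelerate.h>
// e.g. in the setup code, fill the window array used by printframemethod:
static float hanningwindow[256];
vDSP_hann_window(hanningwindow, 256, vDSP_HANN_NORM);   // or vDSP_HANN_DENORM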
Let’s examine some of the relevant parts of the code that may cause confusion. If you want to try to build this project yourself you can find the source files in the archive below.
First of all, look at the content of the PerformThru method. This is a callback for an audio unit; it's where we read the audio samples from a buffer into one of the arrays we have declared.
SInt8 *data_ptr = (SInt8 *)(ioData->mBuffers[0].mData);
for (i = 0; i < inNumberFrames; i++) {
    framea[readlas1] = data_ptr[2];
    readlas1 += 1;
    if (readlas1 >= 33075) {
        readlas1 = 0;
        dispatch_async(dispatch_get_main_queue(), ^{
            [THIS printframemethod];
        });
    }
    data_ptr += 4;
}
Note that framea is a static array of length 33075. The variable readlas1 keeps track of how many samples have been read. When the counter hits 33075 (3 seconds at this sampling frequency) a call to another method printframemethod is triggered and the process restarts.
The spectrogram is calculated in printframemethod.
for (int b = 0; b < 33075; b++) {
    originalReal[b] = ((float)framea[b]) * (1.0); //+ (1.0 * hanningwindow[b]));
}
for (int mm = 0; mm < 250; mm++) {
    for (int b = 0; b < 256; b++) {
        tempReal[b] = ((float)framea[b + (128 * mm)]) * (0.0 + 1.0 * hanningwindow[b]);
    }
    vDSP_ctoz((COMPLEX *) tempReal, 2, &A, 1, nOver2);
    vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
    scale = (float) 1. / 128.;
    vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
    vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
    for (int b = 0; b < nOver2; b++) {
        B.realp[b] = (32.0 * sqrtf((A.realp[b] * A.realp[b]) + (A.imagp[b] * A.imagp[b])));
    }
    for (int k = 0; k < 127; k++) {
        Bspecgram[mm][k] = gainSlider.value * logf(B.realp[k]);
        if (Bspecgram[mm][k] < 0) {
            Bspecgram[mm][k] = 0.0;
        }
    }
}
Note that in this method we first cast the signed integer samples to floats and store them in the array originalReal. Then the FFT of each frame is computed by calling the vDSP functions. The two-dimensional array Bspecgram contains the actual magnitude values of the short-time Fourier transform. Look at the code to see how these magnitude values are converted to RGB pixel data.
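That conversion isn't reproduced in the post, but the idea looks roughly like the sketch below; the pixel layout and the purple colour mapping are guesses, not the archive's exact code:
#import <UIKit/UIKit.h>
// Sketch only: map each Bspecgram magnitude to an RGBA pixel, then wrap the
// pixel buffer in a CGImage/UIImage for the UIImageView.
static UInt8 pixels[250 * 127 * 4];                       // 250 frames wide, 127 bins high
for (int mm = 0; mm < 250; mm++) {
    for (int k = 0; k < 127; k++) {
        UInt8 v = (UInt8)fminf(Bspecgram[mm][k], 255.0f); // clamp magnitude to 0..255
        UInt8 *p = &pixels[4 * (k * 250 + mm)];           // row = frequency bin, column = frame
        p[0] = v; p[1] = 0; p[2] = v; p[3] = 255;         // purple-ish: red and blue from magnitude
    }
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, 250, 127, 8, 250 * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *spectrogramImage = [UIImage imageWithCGImage:cgImage]; // assign to the UIImageView
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);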
Things to note:
To get this to build, just start a new single-view project, replace the delegate and view controller, and add the aurio_helper files. You need to link the Accelerate, AudioToolbox, UIKit, Foundation, and CoreGraphics frameworks to build this. Also, you need PublicUtility. On my system it is located at /Developer/Extras/CoreAudio/PublicUtility. Wherever you find it, add that directory to your header search paths.
Get the code:
The delegate, view controller, and helper files are included in this zip archive.
A Spectrogram App for iOS in purple
Apple's aurioTouch example app (on developer.apple.com) has source code for drawing an animated frequency spectrum plot from recorded audio input. You could probably group FFT bins into frequency ranges for a coarser bar graph plot.
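A rough sketch of that binning step, reusing the B.realp magnitudes and nOver2 from the code above (numBars is an arbitrary choice):
// Sketch only: collapse the FFT magnitudes into a handful of bars by averaging
// each contiguous range of bins.
const int numBars = 16;
float bars[16];
int binsPerBar = nOver2 / numBars;
for (int bar = 0; bar < numBars; bar++) {
    float sum = 0.0f;
    for (int b = 0; b < binsPerBar; b++)
        sum += B.realp[bar * binsPerBar + b];
    bars[bar] = sum / binsPerBar;     // average magnitude drives the bar's height
}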