How to use kAudioUnitSubType_LowShelfFilter of kAudioUnitType_Effect to control bass in Core Audio? - iPhone

I'm back with one more question related to bass. I already posted this question, How can we control the bass of music on iPhone, but it did not get as much attention as I had hoped. Since then I have done some more searching and have read up on Core Audio. I found one sample project that I want to share with you; here is the link to download it: iPhoneMixerEqGraphTest. Have a look at it. In this code, the developer uses the preset equalizer provided by the iPod. Here is a code snippet:
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
What kAudioUnitSubType_AUiPodEQ does is retrieve the preset values from the iPod's equalizer and return them as an array, which we can use in a UIPickerView/UITableView to select a category like Bass, Rock, Dance, etc. This does not help me, because it only returns the names of the equalizer presets; I want to implement bass control only, on a UISlider.
To put bass on a slider I need numeric values, so that I can set a minimum and a maximum and change the bass as the slider moves.
After all this I started reading the Audio Unit framework classes in Core Audio, and then searched for a bass control.
Now I need to implement kAudioUnitSubType_LowShelfFilter, but I don't know how to use this subtype in my code to control the bass as the documentation describes. Apple does not document how to use it either. kAudioUnitSubType_AUiPodEQ returned an array of presets to choose from, but kAudioUnitSubType_LowShelfFilter does not return any array. So how can I use it? Can anybody help me with this in any manner? It would be highly appreciated.
Thanks.

Update
Although it's declared in the iOS headers, the Low Shelf AU is not actually available on iOS.
The parameters of the Low Shelf are different from the iPod EQ.
Parameters are declared and documented in `AudioUnit/AudioUnitParameters.h`:
// Parameters for the AULowShelfFilter unit
enum {
    // Global, Hz, 10->200, 80
    kAULowShelfParam_CutoffFrequency = 0,
    // Global, dB, -40->40, 0
    kAULowShelfParam_Gain = 1
};
So after your low shelf AU is created, configure its parameters using AudioUnitSetParameter.
Some initial parameter values to try would be 120 Hz (kAULowShelfParam_CutoffFrequency) and +6 dB (kAULowShelfParam_Gain) -- assuming your system reproduces bass well, your low-frequency content should roughly double in amplitude.
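Since the question asks about driving these parameters from a UISlider, one way is to map the slider's normalized value onto the documented parameter ranges before calling AudioUnitSetParameter. A minimal sketch in plain C; the helper names are mine, and the linear mappings are just one reasonable choice:

```c
/* Map a normalized UISlider value (0.0 .. 1.0) onto the documented
 * AULowShelfFilter parameter ranges. Helper names are hypothetical. */

/* Cutoff frequency: 10 Hz .. 200 Hz. */
static float SliderToCutoffHz(float slider) {
    return 10.0f + slider * (200.0f - 10.0f);
}

/* Gain: -40 dB .. +40 dB; a slider value of 0.5 gives 0 dB (flat). */
static float SliderToGainDb(float slider) {
    return -40.0f + slider * 80.0f;
}
```

The result of either helper is what you would pass as the value argument of AudioUnitSetParameter, with kAULowShelfParam_CutoffFrequency or kAULowShelfParam_Gain as the parameter ID.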
Can you tell me how I can use kAULowShelfParam_CutoffFrequency to change the frequency?
If everything is configured right, this should be all that is needed:
assert(lowShelfAU);
const float frequencyInHz = 120.0f;
OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                        kAULowShelfParam_CutoffFrequency,
                                        kAudioUnitScope_Global,
                                        0,
                                        frequencyInHz,
                                        0);
if (noErr != result) {
    assert(0 && "error!");
    return ...;
}

Related

Controlling light using midi inputs

I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips Hue lighting, which I have hooked up to Max/MSP, and I now want to trigger an increase in brightness/saturation on the input of a note from a MIDI instrument. Does anyone have any ideas how this might be accomplished?
I have built this.
I used the shell object, and then fed an array of parameters into it via a JavaScript file using the Hue API. There is a lag time of about 1/6 of a second between commands.
JavaScript file:
inlets = 1;
outlets = 1;

var bridge = "192.168.0.100";
var hash = "newdeveloper";
var bulb = 1;
var brt = 200;
var satn = 250;
var hcolor = 10000;

function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT', 'http://' + bridge + '/api/' + hash + '/lights/' + bulb + '/state',
        '"{\\\"on\\\":true,\\\"hue\\\":' + hcolor + ', \\\"bri\\\":' + brt + ',\\\"sat\\\":' + satn + ',\\\"transitiontime\\\":' + tran + '}"');
}

function execute($method, $url, $message) {
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
To control Philips Hue you need to issue calls to a RESTful HTTP-based API, like so: http://www.developers.meethue.com/documentation/core-concepts, using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/, scroll down to my post from APRIL 11, 2014 | 3:42 AM.
How to change the bri/sat of your lights is explained at the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, as well as a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 lightstate commands per second. I would recommend having a 100ms gap between each one, to prevent flooding the bridge (and losing commands).
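The 100ms gap suggested above can be enforced with a small rate limiter that drops (or defers) commands arriving too quickly. A sketch in plain JavaScript; the helper name is mine:

```javascript
// Returns a function that takes a timestamp in milliseconds and answers
// whether a command may be sent now, allowing at most one per gapMs.
function makeThrottle(gapMs) {
    var last = -Infinity;
    return function (nowMs) {
        if (nowMs - last >= gapMs) {
            last = nowMs;
            return true;   // far enough from the last send
        }
        return false;      // too soon; drop or queue the command
    };
}
```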
Are you interested in finding out how to map this data from a MIDI input to the Philips Hue lights within Max, or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a [js] object), you could, for example, scale the MIDI messages you want to use with the midiin and borax objects and map them to the outputs you want using the scale object. Karlheinz Essl's RTC library is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!

CIFaceFeature trackingID is always the same for multiple faces

I want to detect multiple faces in my project, so I planned to use the trackingID property of CIFaceFeature to keep track of each face. But I found that it comes out the same for every face.
So my problem is: how can I uniquely identify a face when multiple faces are in the video frame? I don't want to recognize a face for later use, only to detect it in the current video frame. Thanks.
I am using the same code as in the SquareCam Apple sample project, on iOS 6.
for ( CIFaceFeature *face in features ) {
    NSLog(@"face.trackingID %d", face.trackingID);
}
The above code is printing the same ID for every face.
If you haven't already done so, you need to make sure to specify the usage of CIDetectorTracking in the detector's options. If I remember correctly, it should look something like this:
NSDictionary *detectorOptions = @{CIDetectorTracking: @YES};
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

how to use "FindBarcodesInUIImage"?

I am developing Barcode scanner application for iPhone.
Library: RedLaser
I just want to scan a barcode from an existing image, not from the camera.
I couldn't find any documentation on calling the FindBarcodesInUIImage method manually.
Can I get any sample code?
Does this snippet from the documentation help?
This method analyses a given image and returns information on any barcodes discovered in the image. It is intended to be used in cases where the user already has a picture of a barcode (in their photos library, for example) that they want to decode. This method performs a thorough check for all barcode symbologies we support, and is not intended for real-time use.
When scanning barcodes using this method, you cannot (and need not) specify a scan orientation or active scan region; the entire image is scanned in all orientations. Nor can you restrict the scan to particular symbol types. If such a feature is absolutely necessary, you can implement it post-scan by filtering the result set.
FindBarcodesInUIImage operates synchronously, but can be placed in a thread. Depending on image size and processor speed, it can take several seconds to process an image.
void ScanImageForBarcodes(UIImage *inputImage)
{
    NSSet *resultSet = FindBarcodesInUIImage(inputImage);

    // Print the results
    NSLog(@"%@", resultSet);
}
If the SDK did not find any barcodes in the image, the log message will be (null). Otherwise, it will be something like:
{(
(0x19e0660) Code 39: 73250110 -- (1 finds)
)}
This log message indicates a found set containing one item, a Code 39 barcode with the value "73250110".
Remember that the SDK is not guaranteed to find barcodes in an image. Even if an image contains a barcode, the SDK might not be able to read it, and you will receive no results.

iPhone App Pick Up Sound

I am trying to perform a certain action based on whether or not the user makes a loud sound. I'm not trying to do any voice recognition or anything; just an action triggered when the iPhone picks up a loud sound.
Any suggestions or tutorials? I can't find anything on the Apple developer site, so I'm assuming I'm not searching for the right thing.
The easiest thing for you to do is to use the Audio Queue services. Here's the manual:
Apple AQ manual
Basically, look for any example code that initializes things with AudioQueueNewInput(). Something like this:
Status = AudioQueueNewInput(&_Description,
                            Audio_Input_Buffer_Ready,
                            self,
                            NULL,
                            NULL,
                            0,
                            &self->Queue);
Once you have that going, you can enable sound level metering with something like this:
// Turn on level metering (iOS 2.0 and later)
UInt32 on = 1;
AudioQueueSetProperty(self->Queue, kAudioQueueProperty_EnableLevelMetering, &on, sizeof(on));
You will have a callback routine that is invoked for each chunk of audio data. In it, you can check the current meter levels with something like this:
//
// Check metering levels and detect silence
//
AudioQueueLevelMeterState meters[1];
UInt32 dlen = sizeof(meters);
Status = AudioQueueGetProperty(_Queue, kAudioQueueProperty_CurrentLevelMeterDB, meters, &dlen);
if (Status == 0) {
    if (meters[0].mPeakPower > _threshold) {
        silence = 0.0; // reset silence timer
    } else {
        silence += time;
    }
}

//
// Notify observers of incoming data.
//
if (delegate) {
    [delegate audioMeter:meters[0].mPeakPower duration:time];
    [delegate audioData:Buffer->mAudioData size:Buffer->mAudioDataByteSize];
}
Or, in your case, instead of silence you can detect whether the decibel level stays above a certain value for long enough. Note that the decibel values you will see range from about -70.0 for dead silence up to 0.0 dB for very loud sounds, on a logarithmic scale. You'll have to experiment to see what values work for your particular application.
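To turn those meter readings into a "loud sound" trigger, one approach is the mirror image of the silence timer: accumulate time while the peak power stays above a threshold, and fire once it has been loud long enough. A sketch in plain C; the struct, names, and example threshold are mine:

```c
#include <stdbool.h>

/* Accumulates time while the peak power (in dB, roughly -70.0 for
 * silence up to 0.0 for very loud) stays above a threshold, and
 * reports true once it has stayed loud for requiredSecs. */
typedef struct {
    float thresholdDb;   /* e.g. -20.0f; tune per application */
    float requiredSecs;  /* how long it must stay loud */
    float loudSecs;      /* accumulator; start at 0 */
} LoudDetector;

static bool LoudDetectorFeed(LoudDetector *d, float peakDb, float dt) {
    if (peakDb > d->thresholdDb)
        d->loudSecs += dt;    /* still loud: keep counting */
    else
        d->loudSecs = 0.0f;   /* quiet again: reset */
    return d->loudSecs >= d->requiredSecs;
}
```

In the callback you would feed it meters[0].mPeakPower and the duration of each buffer.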
Apple has examples such as SpeakHere, which looks to have code relating to decibels. I would check some of the metering classes for examples. I have no audio programming experience, but hopefully that will get you started while someone provides a better answer.

When to set kAudioUnitProperty_StreamFormat?

When should I set kAudioUnitProperty_StreamFormat (and kAudioUnitProperty_SampleRate)? On each AU in my AUGraph, or is it enough to set it just on the mixer AU?
André
You set it on the inputs and outputs of each AudioUnit.
iPhone only allows signed-integer input, so don't bother with floats; it just won't work.
You set the sample rate using:
CAStreamBasicDescription myDescription;
myDescription.mSampleRate = 44100.0f; // and do this for the other fields such as mBitsPerChannel etc.
On the output of audio units such as the mixer, the data comes out in 8.24 fixed-point format.
Be aware of this when you're creating callbacks and using the AudioUnitRender function: the formats have to match, and you can't change the output formats (but you may still need to set them).
Use printf("Mixer file format: "); myDescription.Print(); to get the format description. What it prints will depend on where you put it in your initialization process.
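For reference, the 8.24 format mentioned above stores each sample as a signed 32-bit integer with 24 fractional bits. A sketch of converting between it and float (helper names are mine; real code would also clip out-of-range values):

```c
#include <stdint.h>

/* 8.24 fixed point: signed 32-bit integer, 24 fractional bits.
 * No saturation is performed here; this is a sketch only. */
static int32_t FloatTo824(float x) { return (int32_t)(x * (float)(1 << 24)); }
static float  From824(int32_t x)   { return (float)x / (float)(1 << 24); }
```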
In short, yes. For more detail on what you actually need to set on each unit, see the Audio Unit Hosting Guide for iOS.