Photo manipulation/filter library techniques for iPhone

Just wondering how difficult it would be to convert the following library to Objective-C to be used on the iPhone?
I guess I'm after some similar image processing libraries that could lead me in the right direction. I'm aware that it's not easy to apply the same filters as existing applications like Instagram, Path and Hipstamatic.
However, I'd like to be able to do something similar.
Here is the JavaScript library:
https://github.com/alexmic/filtrr/blob/master/filtrr.js
A demo of its functionality can be found here:
http://alexmic.net/demos/filtrr

I've started a bit of the converting, here is a sample. Now of course, fully converting it would take a lot of time, too much for me to do by myself. But just see how I've done it so far. I'm hoping you have prior experience with Obj-C?
Also, perhaps you could look at some existing libraries.
http://code.google.com/p/simple-iphone-image-processing/
http://mattgemmell.com/2010/07/05/mgimageutilities/
http://developer.apple.com/library/ios/#samplecode/GLImageProcessing/Introduction/Intro.html
Also, don't forget that Xcode can compile C++ into your project, so also investigate C or C++ libraries.
// filtr.mm
#import <UIKit/UIKit.h>

@interface filtr : NSObject
- (id)initWithImage:(UIImage *)anImage;
- (int)safe:(int)i;
@end

@implementation filtr
{
    UIImage *image;            // stands in for the JS canvas
    size_t w;
    size_t h;
    CGContextRef ctx;          // stands in for canvas.getContext("2d")
    unsigned char *imageData;  // raw RGBA pixels, as in ctx.getImageData(0, 0, w, h)
}

- (id)initWithImage:(UIImage *)anImage
{
    self = [super init];
    if (self) {
        if (!anImage) {
            [NSException raise:NSInvalidArgumentException
                        format:@"Image supplied to filtr was nil."];
        }
        image = anImage;
        w = CGImageGetWidth(image.CGImage);
        h = CGImageGetHeight(image.CGImage);

        // Draw the image into a bitmap context so its pixels can be read
        // and written, roughly the equivalent of getImageData(0, 0, w, h).
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        imageData = (unsigned char *)calloc(w * h * 4, sizeof(unsigned char));
        ctx = CGBitmapContextCreate(imageData, w, h, 8, 4 * w, colorSpace,
                                    kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image.CGImage);
    }
    return self;
}

/**
 * Clamps the intensity level between 0 - 255.
 *
 * @param i The intensity level.
 */
- (int)safe:(int)i
{
    return MIN(255, MAX(0, i));
}

@end

Related

Flutter - Trying to Use Tensorflowlite - FloatEfficientNet

I am attempting to use a model that already inferences successfully in both native Swift and Android/Java to do the same in Flutter, specifically on the Android side of it.
In this case the values I am receiving are way off.
What I have done so far:
I took the tensorflowlite android example github repo: https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android, and found that the FloatEfficientNet option was accurately giving values for my model.
I took the flutter_tflite library, and I modified it so that the inferencing section of the android code matched that tensorflow example above:
https://github.com/shaqian/flutter_tflite
I used this tutorial and included repo which uses the above library to inference tensorflow via the platform channel:
https://github.com/flutter-devs/tensorflow_lite_flutter
Via the flutter tutorial, I use the camera plugin, which can stream CameraImage objects from the camera's live feed. I pass that into the modified flutter tensorflow library, which uses the platform channel to pass the image into the android layer. It does so as a list of arrays of bytes (3 planes, YuvImage). The tensorflow android example (1) with the working FloatEfficientNet code expects a Bitmap. So I am using this method to convert:
public Bitmap imageToBitmap(List<byte[]> planes, float rotationDegrees, int width, int height) {
    // NV21 is a plane of 8 bit Y values followed by interleaved Cb Cr
    ByteBuffer ib = ByteBuffer.allocate(width * height * 2);
    ByteBuffer y = ByteBuffer.wrap(planes.get(0));
    ByteBuffer cr = ByteBuffer.wrap(planes.get(1));
    ByteBuffer cb = ByteBuffer.wrap(planes.get(2));
    ib.put(y);
    ib.put(cb);
    ib.put(cr);

    YuvImage yuvImage = new YuvImage(ib.array(),
            ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
    byte[] imageBytes = out.toByteArray();
    Bitmap bm = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    Bitmap bitmap = bm;

    // On android the camera rotation and the screen rotation
    // are off by 90 degrees, so if you are capturing an image
    // in "portrait" orientation, you'll need to rotate the image.
    if (rotationDegrees != 0) {
        Matrix matrix = new Matrix();
        matrix.postRotate(rotationDegrees);
        Bitmap scaledBitmap = Bitmap.createScaledBitmap(bm,
                bm.getWidth(), bm.getHeight(), true);
        bitmap = Bitmap.createBitmap(scaledBitmap, 0, 0,
                scaledBitmap.getWidth(), scaledBitmap.getHeight(), matrix, true);
    }
    return bitmap;
}
The inference is successful, and I am able to return the values back to Flutter and display the results, but they are way off. Using the same Android phone, the results are completely different from the native example.
I suspect the flaw is in the conversion of the CameraImage data format into the Bitmap, since it's the only piece of the whole chain that I am not able to independently test. If anyone has faced a similar issue, I could use some assistance; I am rather puzzled.
I think the reason is that the matrix.postRotate() method expects an integer but you give it a float, so you have an implicit conversion from float to integer which messes it up.

How to make the OpenCV Template Matching sample run

I am trying to make the OpenCV sample project for Template Matching as explained here.
The steps I have taken so far include:
Downloaded and imported the OpenCV framework into my project, changed the .m files to .mm, and added the following to the .pch file:
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
I have also downloaded and imported the MatchTemplate_Demo.cpp file from the link.
But now I am having a library linking issue:
ld: warning: directory not found for option '-L/Users/G1/Desktop/Xcode'
ld: warning: directory not found for option '-Lprojects/FirstOpenCv/opencv/lib/debug'
ld: library not found for -lopencv_calib3d
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I followed the same steps to include the library as given here.
2) Add $(SRCROOT)/opencv to the header search path, and $(SRCROOT)/opencv/lib/debug to the library search path for the debug configuration and $(SRCROOT)/opencv/lib/release for the release build.
3) Add the OpenCV libs to the linker input by modifying "Other Linker Flags" with "-lopencv_calib3d -lzlib -lopencv_contrib -lopencv_legacy -lopencv_features2d -lopencv_imgproc -lopencv_video -lopencv_core".
Now can anyone please tell me how I should make the project run?
I have taken the source and template images and imported them into the project.
I basically have ViewController.h and ViewController.mm files now, and I don't know what I should code in these files to see the result.
Also, Step 2:
I need to scan the image in real time using the camera view (so that when I place my camera over the source image it should scan it and find the template).
On following this link I got a linker error while importing the .cpp file:
ld: 1 duplicate symbol for architecture i386
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Please can anyone suggest how I should implement this.
You have three interrelated questions here:
1/ how to get openCV framework to run in an iOS project
2/ how to get the Template Matching c++ sample code to run in an iOS project
3/ how to do live template matching with the camera view
1/ how to get openCV framework to run in an iOS project
Download and import the openCV framework as you describe
Change the .pch file as you describe
check that c++ standard library is set to libc++ in your target build settings (this is the default for new projects)
Don't just import demo.cpp without making changes as described below (it is a 'raw' C++ program with its own main function, and needs alterations to work as part of an iOS/Cocoa project).
Don't mess with header search paths, other linker flags etc, this isn't necessary if you have imported the prebuilt framework from openCV.org.
Don't change your .m files to .mm unless you know that you need to. My advice is to keep your C++ code separate from your Objective-C code as far as practicable, so most of your files should be .m files (Objective-C) or .cpp files (C++). You only need the .mm extension for "Objective-C++" files where you intend to mix Objective-C and C++ in the same file.
2/ how to get the Template Matching c++ sample code to run in an iOS project
We are going to set this up so that your iOS viewController - and the bulk of your iOS code - does not need to know that the image is processed using openCV/C++, and likewise the C++ code doesn't need to know where its input or output image data is being routed to. We do this by making a small wrapper class between the two that translates objective-C method calls to C++ class member functions and back. We will also set up a category on UIImage to translate image formats from iOS-friendly UIImage to openCV-native cv::Mat.
UIImage+OpenCV Category
You need some utility methods to convert from UIImage to cv::Mat and back. A good place to put these is in a UIImage category. In Xcode: File > New File > Cocoa Touch > Objective-C category will set you up. Call the category OpenCV and make it a category on UIImage. You will want to change this .m file to .mm, as it will need to understand C++ types from the openCV framework.
The header should look something like this:
#import <UIKit/UIKit.h>

@interface UIImage (OpenCV)

//cv::Mat to UIImage
+ (UIImage *)imageWithCVMat:(const cv::Mat&)cvMat;

//UIImage to cv::Mat
- (cv::Mat)cvMat;

@end
The .mm file should implement these methods by closely following this openCV.org code sample adapted to work as category methods (eg you don't pass a UIImage into the instance method, but refer to it using self).
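For reference, the UIImage-to-cv::Mat direction, adapted from that sample, looks roughly like this (a sketch from memory rather than a verbatim copy; see the linked sample for the reverse conversion and for grayscale handling):

- (cv::Mat)cvMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat mat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (RGBA)

    CGContextRef contextRef = CGBitmapContextCreate(mat.data,      // target buffer
                                                    cols, rows,
                                                    8,              // bits per component
                                                    mat.step[0],    // bytes per row
                                                    colorSpace,
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    return mat;
}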
You can use the category methods as if they are UIImage class and instance methods like this:
UIImage* image = [UIImage imageWithCVMat:matImage]; //class method
cv::Mat matImage = [image cvMat]; //instance method
openCV wrapper class
Make a wrapper class to convert your objective-C method (called from a viewController) to a c++ function
header something like this
// CVWrapper.h

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface CVWrapper : NSObject

+ (UIImage*) templateMatchImage:(UIImage*)image
                          patch:(UIImage*)patch
                         method:(int)method;

@end
We send in the template image, patch image and template matching method, and get back an image showing the match
implementation (.mm file)
// CVWrapper.mm

#import "CVWrapper.h"
#import "CVTemplateMatch.h"
#import "UIImage+OpenCV.h"

@implementation CVWrapper

+ (UIImage*) templateMatchImage:(UIImage *)image
                          patch:(UIImage *)patch
                         method:(int)method
{
    cv::Mat imageMat = [image cvMat];
    cv::Mat patchMat = [patch cvMat];
    cv::Mat matchImage = CVTemplateMatch::matchImage(imageMat,
                                                     patchMat,
                                                     method);
    UIImage* result = [UIImage imageWithCVMat:matchImage];
    return result;
}

@end
We are effectively taking a standard objective-C method and UIImage types and translating them into a call to a C++ member function with c++(openCV framework) types, and translating the result back to a UIImage.
C++ TemplateMatch class
Header:
// CVTemplateMatch.h

#ifndef __CVOpenTemplate__CVTemplateMatch__
#define __CVOpenTemplate__CVTemplateMatch__

class CVTemplateMatch
{
public:
    static cv::Mat matchImage (cv::Mat imageMat,
                               cv::Mat patchMat,
                               int method);
};

#endif /* defined(__CVOpenTemplate__CVTemplateMatch__) */
Implementation:
This is the Template Match openCV example code, reworked as a class implementation:
// CVTemplateMatch.cpp
/*
Alterations for use in iOS project
[1] remove GUI code (iOS supplies the GUI)
[2] change main{} to static member function
with appropriate inputs and return value
[3] change MatchingMethod{} signature
to return Mat value
*/
#include "CVTemplateMatch.h"
//[1] #include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/// Global Variables
Mat img; Mat templ; Mat result;
//[1] char* image_window = "Source Image";
//[1] char* result_window = "Result window";
int match_method;
//[1] int max_Trackbar = 5;
/// Function Headers
Mat MatchingMethod( int, void* ); //[3] (added return value to function)
// [2] /** @function main */
// [2] int main( int argc, char** argv )
Mat CVTemplateMatch::matchImage (Mat image,Mat patch, int method)
// [2]
{
/// Load image and template
//[2] img = imread( argv[1], 1 );
//[2] templ = imread( argv[2], 1 );
img = image; //[2]
templ = patch; //[2]
match_method = method; //[2]
/// Create windows
//[1] namedWindow( image_window, CV_WINDOW_AUTOSIZE );
//[1] namedWindow( result_window, CV_WINDOW_AUTOSIZE );
/// Create Trackbar
//[1] char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
//[1] createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );
Mat result = MatchingMethod( 0, 0 );
//[1] waitKey(0);
//[2] return 0;
return result; //[2]
}
//[3] void MatchingMethod( int, void* )
Mat MatchingMethod( int, void* )
{
/// Source image to display
Mat img_display;
img.copyTo( img_display );
/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
result.create( result_cols, result_rows, CV_32FC1 );
/// Do the Matching and Normalize
matchTemplate( img, templ, result, match_method );
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
/// Localizing the best match with minMaxLoc
double minVal; double maxVal; Point minLoc; Point maxLoc;
Point matchLoc;
minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
/// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
if( match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED )
{ matchLoc = minLoc; }
else
{ matchLoc = maxLoc; }
/// Show me what you got
rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
rectangle( result, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
//[1] imshow( image_window, img_display );
//[1] imshow( result_window, result );
return img_display; //[3] add return value
}
Now in your viewController you just need to call this method:
UIImage* matchedImage =
[CVWrapper templateMatchImage:self.imageView.image
patch:self.patchView.image
method:0];
with no c++ in sight.
3/ Template Matching with live camera view
The short answer: matchTemplate is not going to work too well in a live camera context. The algorithm is looking for a match in the image with the same scale and orientation as the patch: it slides the patch tile across the image at its original orientation and size, comparing for the best match. This is not going to yield great results if the image is perspective-skewed, a different size, or rotated to a different orientation.
You could look instead at OpenCV's Feature Detection algorithms, some of which have been moved to non-free. Here is a nice description of SIFT to give you the idea. For video capture you might also want to look at cap_ios.h in opencv2/highgui: here is a tutorial.
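As a rough illustration of the feature-detection route (my sketch, not code from the linked tutorial; it assumes the OpenCV 2.4-era C++ API and uses ORB rather than the non-free SIFT):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: find the patch inside a camera frame using ORB features
// instead of template matching. Returns the matched keypoint pairs.
std::vector<cv::DMatch> findPatchByFeatures(const cv::Mat& frameGray,
                                            const cv::Mat& patchGray)
{
    cv::ORB orb;                       // detector + descriptor extractor
    std::vector<cv::KeyPoint> kpFrame, kpPatch;
    cv::Mat descFrame, descPatch;

    orb(frameGray, cv::Mat(), kpFrame, descFrame);
    orb(patchGray, cv::Mat(), kpPatch, descPatch);

    // Hamming distance is the right metric for ORB's binary descriptors
    cv::BFMatcher matcher(cv::NORM_HAMMING, true /* cross-check */);
    std::vector<cv::DMatch> matches;
    matcher.match(descPatch, descFrame, matches);
    return matches;
}

From the good matches you could then run cv::findHomography with RANSAC to locate the patch outline in the camera frame.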
Actually you have downloaded an already-compiled library, so there is no need to follow the steps you mentioned in your question; that is the issue (i.e. you have followed incorrect steps), because those steps are for compiling the source code into a static library yourself.
Follow the steps below and it will be done:
Unzip the downloaded framework. You will see a folder named "opencv2.framework".
Drag that folder directly into the project. (Note: when you drag the folder into Xcode, Xcode will prompt you with a dialog containing a check box to actually copy it into the project folder; please tick that check box.)
Importing openCV in your .pch file the way you mentioned in your question is the correct way.
Now compile. One thing more: any file in which you want to use openCV functions should have the .mm extension (i.e. Objective-C++ source). It will run perfectly.
Help link:

glDrawArrays allocates memory on every frame

I recently found that glDrawArrays is allocating and releasing huge amounts of memory on every frame.
I suspect that it's related to the "Shaders compiled outside of initialization" issue reported by the OpenGL profiler. That occurs on every frame! Shouldn't it happen only once, and disappear after the shaders are compiled?
EDIT: I also double checked that my vertices are properly aligned. So I'm really confused about what memory the driver needs to allocate on every frame.
EDIT #2: I'm using VBOs and degenerate triangle strips to render sprites. I'm passing the geometry on every frame (GL_STREAM_DRAW).
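(For context, the per-frame streaming upload described in EDIT #2 typically looks like the snippet below; vbo, vertices, Vertex and vertexCount are illustrative names, not my actual code.)

// Re-upload the sprite geometry each frame and draw it.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             sizeof(Vertex) * vertexCount,   // size of this frame's geometry
             vertices,                       // CPU-side vertex array
             GL_STREAM_DRAW);                // hint: data changes every frame
glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);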
EDIT #3:
I think I'm close to the issue but am still unable to solve it. The problem disappears if I pass the same texture id value to the shader (see the source code comment). Somehow this issue is related to the fragment shader, I think.
In my sprite batch I have a list of sprites and I render them by texture id in a FIFO queue.
Here's source code of my sprite batch class:
void spriteBatch::renderInRange(shader& prog, int start, int count){
    int curTexture = textures[start];
    int startFrom = start;
    // Looping through all vertexes and rendering them by texture id's
    for(int i = start; i < start + count; ++i){
        if(textures[i] != curTexture || i == (start + count) - 1){
            // Problem occurs after decommenting this line
            // prog.setUniform("texture", curTexture-1);
            prog.setUniform("texture", 0); // if I pass same texture id everything is OK

            int startVertex = startFrom * vertexesPerSprite;
            int cnt = ((i - startFrom) * vertexesPerSprite);

            // If the last one has the same texture we just add it
            // to the last render call
            if(i == (start + count) - 1 && textures[i] == curTexture)
                cnt = ((i + 1) - startFrom) * vertexesPerSprite;

            render(vbo, GL_TRIANGLE_STRIP, startVertex + 1, cnt - 1);

            // if the last element has a different texture
            // we need to render it separately
            if(i == (start + count) - 1 && textures[i] != curTexture){
                // prog.setUniform("texture", textures[i]-1);
                render(vbo, GL_TRIANGLE_STRIP, (i * vertexesPerSprite) + 1, 5);
            }

            curTexture = textures[i];
            startFrom = i;
        }
    }
}

inline GLint getUniformLocation(GLuint shaderID, const string& name) {
    GLint iLocation = glGetUniformLocation(shaderID, name.data());
    if(iLocation == -1){ // shader variable not found
        stringstream errorText;
        errorText << "Uniform \"" << name << "\" was not found!";
        throw logic_error(errorText.str());
    }
    return iLocation;
}

void shader::setUniform(const string& name, const matrix& value) {
    GLint location = getUniformLocation(this->programID, name.data());
    glUniformMatrix4fv(location, 1, GL_FALSE, &(value[0]));
}

void shader::setUniform(const string& name, int value) {
    GLint iLocation = getUniformLocation(this->programID, name.data());
    //GLenum error = glGetError();
    glUniform1i(iLocation, value);
    //error = glGetError();
}
EDIT #4: I tried to profile the app on iOS 6 and an iPhone 5 and the allocations are much bigger. But the methods are different in this case. I'm attaching a new screenshot.
The issue is resolved by creating a separate shader for each texture.
It looks like a bug in the driver implementation that happens on all iOS devices (I tested on iOS 5/6). However, on higher iPhone models it's not that noticeable.
On the iPhone 4 the performance hit was very significant: from 60 FPS down to 38!
More code would help, but have you checked to see if the amount of memory involved is comparable to the amount of geometry you're updating? (although that would seem like a lot of geometry!) It looks like GL is holding your update until glDrawArrays, releasing it when it can be pulled into internal GL state.
If you can run the code in a MacOS app, the OpenGL Profiler tool may be able to further isolate the condition. (look in XCode documentation for more info, if you're not familiar with this tool). I'd also suggest looking at texture use, given the amount of memory involved.
The easiest thing to do might be to conditionally break on malloc() for a large allocation, note the address, and examine what's been loaded there.
Try to query the texture uniform just once (at initialization) and cache it. Calling glGetUniformLocation too often in one frame will hammer the performance (depending on the sprite count).
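A minimal sketch of that caching idea (assuming you add a uniformCache member to the shader class shown in the question; the method name is illustrative):

#include <map>
#include <string>
#include <stdexcept>

// Assumes a member added to the shader class:
//     std::map<std::string, GLint> uniformCache;
// Look the location up once, then reuse it on subsequent calls.
GLint shader::cachedUniformLocation(const std::string& name) {
    std::map<std::string, GLint>::iterator it = uniformCache.find(name);
    if (it != uniformCache.end())
        return it->second;

    GLint location = glGetUniformLocation(this->programID, name.c_str());
    if (location == -1)
        throw std::logic_error("Uniform \"" + name + "\" was not found!");
    uniformCache[name] = location;
    return location;
}

setUniform would then call cachedUniformLocation instead of hitting glGetUniformLocation every time.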

Why doesn't gravity scale work in box2d

I am trying to turn off gravity on one of my bodies. I have used bodyDef.gravityScale = 0.0f but I am having no luck. You can see my code below. Please help.
b2BodyDef monkey1BodyDef;
monkey1BodyDef.position.Set(0, 200/PTM_RATIO);
monkey1BodyDef.type = b2_dynamicBody;
monkey1BodyDef.userData = monkey1;
monkey1BodyDef.bullet = true;
monkey1BodyDef.gravityScale = 0.0f; //Why doesn't this work I get an error that says no member named 'gravityScale' in 'b2BodyDef'
b2Body *monkey1Body = world->CreateBody(&monkey1BodyDef);
I've hit this problem too. After a little digging I've found that the stable Cocos2D builds don't include recent versions of Box2D, so gravityScale is missing from b2BodyDef. That explains the discrepancy with the Box2D documentation.
There are workarounds, but I've opted to update my Box2D to 2.2.1 (currently the latest). In doing that I encountered the following issues (with solutions):
The b2PolygonShape.SetAsEdge method no longer exists. If you're using that to define screen boundaries you'll need to use something like "myPolygonShape.Set(lowerLeftCorner, lowerRightCorner);" for each screen edge. There's an excellent discussion of this at Programmers' Goodies.
b2DebugDraw has been superseded by b2Draw. Just replace any calls to b2DebugDraw with b2Draw and you should be set. For example, if, like me, you're using the Cocos2D Box2D template, you'll need to replace this:
// Debug Draw functions
m_debugDraw = new GLESDebugDraw( PTM_RATIO );
_world->SetDebugDraw(m_debugDraw);
uint32 flags = 0;
flags += b2DebugDraw::e_shapeBit;
flags += b2DebugDraw::e_centerOfMassBit;
m_debugDraw->SetFlags(flags);
with this:
// Debug Draw functions
m_debugDraw = new GLESDebugDraw( PTM_RATIO );
_world->SetDebugDraw(m_debugDraw);
uint32 flags = 0;
flags += b2Draw::e_shapeBit;
flags += b2Draw::e_centerOfMassBit;
m_debugDraw->SetFlags(flags);
b2Transform has different attribute names for position and rotation. For example, myTransform.position is now myTransform.p (but is still a b2Vec2). myTransform.R, which was defined as b2Mat22, has been replaced with myTransform.q, defined as b2Rot. Again, if you're using the Cocos2D Box2D template, replace the following in GLES-Render.mm:
void GLESDebugDraw::DrawTransform(const b2Transform& xf)
{
b2Vec2 p1 = xf.position, p2;
const float32 k_axisScale = 0.4f;
p2 = p1 + k_axisScale * xf.R.col1;
DrawSegment(p1,p2,b2Color(1,0,0));
p2 = p1 + k_axisScale * xf.R.col2;
DrawSegment(p1,p2,b2Color(0,1,0));
}
…with:
void GLESDebugDraw::DrawTransform(const b2Transform& xf)
{
b2Vec2 p1 = xf.p, p2;
const float32 k_axisScale = 0.4f;
p2 = p1 + k_axisScale * xf.q.GetXAxis();
DrawSegment(p1,p2,b2Color(1,0,0));
p2 = p1 + k_axisScale * xf.q.GetYAxis();
DrawSegment(p1,p2,b2Color(0,1,0));
}
I hope this helps!
Because no member named 'gravityScale' exists in 'b2BodyDef' :(
The documentation doesn't match the Box2D version that ships with your code.
Change the gravity definition of the world, because it's the world that has gravity, like so:
b2Vec2 gravity = b2Vec2(0.0f, -10.0f);
bool doSleep = false;
world = new b2World(gravity, doSleep);
Here world is a b2World.
If body.setGravityScale(0); doesn't work, you can combine it with body.setAwake(false); on the next line.
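To tie these answers together, here is a short sketch of both routes; the first assumes Box2D 2.2.x or newer, the second is a common workaround for the older Box2D bundled with stable Cocos2D (the variable names follow the question's code, the rest is illustrative):

// Route 1: with Box2D >= 2.2 the scale can also be set on the created body
// (b2BodyDef::gravityScale exists there too, as in the question).
b2Body *monkey1Body = world->CreateBody(&monkey1BodyDef);
monkey1Body->SetGravityScale(0.0f);

// Route 2: with the older Box2D, cancel gravity manually every simulation
// step, before calling world->Step(...):
b2Vec2 antiGravity = -monkey1Body->GetMass() * world->GetGravity();
monkey1Body->ApplyForce(antiGravity, monkey1Body->GetWorldCenter());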

where to start with audio synthesis on iPhone

I'd like to build a synthesizer for the iPhone. I understand that it's possible to use custom audio units for the iPhone. At first glance, this sounds promising, since there's lots and lots of Audio Unit programming resources available. However, using custom audio units on the iPhone seems a bit tricky ( see: http://lists.apple.com/archives/Coreaudio-api/2008/Nov/msg00262.html)
This seems like the sort of thing that loads of people must be doing, but a simple google search for "iphone audio synthesis" doesn't turn up anything along the lines of a nice and easy tutorial or recommended tool kit.
So, anyone here have experience synthesizing sound on the iPhone? Are custom audio units the way to go, or is there another, simpler approach I should consider?
I'm also investigating this. I think the AudioQueue API is probably the way to go.
Here's as far as I got, seems to work okay.
File: BleepMachine.h
//
// BleepMachine.h
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include <AudioToolbox/AudioToolbox.h>
// Class to implement sound playback using the AudioQueue APIs.
// Currently just supports playing two sine wave tones, one per
// stereo channel. The sound data is little-endian signed 16-bit @ 44.1KHz
//
class BleepMachine
{
static void staticQueueCallback( void* userData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
BleepMachine* pThis = reinterpret_cast<BleepMachine*> ( userData );
pThis->queueCallback( outAQ, outBuffer );
}
void queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer );
AudioStreamBasicDescription m_outFormat;
AudioQueueRef m_outAQ;
enum
{
kBufferSizeInFrames = 512,
kNumBuffers = 4,
kSampleRate = 44100,
};
AudioQueueBufferRef m_buffers[kNumBuffers];
bool m_isInitialised;
struct Wave
{
Wave(): volume(1.f), phase(0.f), frequency(0.f), fStep(0.f) {}
float volume;
float phase;
float frequency;
float fStep;
};
enum
{
kLeftWave = 0,
kRightWave = 1,
kNumWaves,
};
Wave m_waves[kNumWaves];
public:
BleepMachine();
~BleepMachine();
bool Initialise();
void Shutdown();
bool Start();
bool Stop();
bool SetWave( int id, float frequency, float volume );
};
// Notes by name. Integer value is number of semitones above A.
enum Note
{
A = 0,
Asharp,
B,
C,
Csharp,
D,
Dsharp,
E,
F,
Fsharp,
G,
Gsharp,
Bflat = Asharp,
Dflat = Csharp,
Eflat = Dsharp,
Gflat = Fsharp,
Aflat = Gsharp,
};
// Helper function calculates fundamental frequency for a given note
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave=4 );
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber );
File: BleepMachine.mm
//
// BleepMachine.mm
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include "BleepMachine.h"
void BleepMachine::queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
// Render the wave
// AudioQueueBufferRef is considered "opaque", but it's a reference to
// an AudioQueueBuffer which is not.
// All the samples manipulate it directly, so I'm not quite sure what they
// mean by saying it's opaque...
SInt16* coreAudioBuffer = (SInt16*)outBuffer->mAudioData;
// Specify how many bytes we're providing
outBuffer->mAudioDataByteSize = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
// Generate the sine waves as signed 16-bit stereo interleaved ( Little Endian )
float volumeL = m_waves[kLeftWave].volume;
float volumeR = m_waves[kRightWave].volume;
float phaseL = m_waves[kLeftWave].phase;
float phaseR = m_waves[kRightWave].phase;
float fStepL = m_waves[kLeftWave].fStep;
float fStepR = m_waves[kRightWave].fStep;
for( int s=0; s<kBufferSizeInFrames*2; s+=2 )
{
float sampleL = ( volumeL * sinf( phaseL ) );
float sampleR = ( volumeR * sinf( phaseR ) );
short sampleIL = (int)(sampleL * 32767.0);
short sampleIR = (int)(sampleR * 32767.0);
coreAudioBuffer[s] = sampleIL;
coreAudioBuffer[s+1] = sampleIR;
phaseL += fStepL;
phaseR += fStepR;
}
m_waves[kLeftWave].phase = fmodf( phaseL, 2 * M_PI ); // Take modulus to preserve precision
m_waves[kRightWave].phase = fmodf( phaseR, 2 * M_PI );
// Enqueue the buffer
AudioQueueEnqueueBuffer( m_outAQ, outBuffer, 0, NULL );
}
bool BleepMachine::SetWave( int id, float frequency, float volume )
{
if ( ( id < kLeftWave ) || ( id >= kNumWaves ) ) return false;
Wave& wave = m_waves[ id ];
wave.volume = volume;
wave.frequency = frequency;
wave.fStep = 2 * M_PI * frequency / kSampleRate;
return true;
}
bool BleepMachine::Initialise()
{
m_outFormat.mSampleRate = kSampleRate;
m_outFormat.mFormatID = kAudioFormatLinearPCM;
m_outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
m_outFormat.mFramesPerPacket = 1;
m_outFormat.mChannelsPerFrame = 2;
m_outFormat.mBytesPerPacket = m_outFormat.mBytesPerFrame = sizeof(UInt16) * 2;
m_outFormat.mBitsPerChannel = 16;
m_outFormat.mReserved = 0;
OSStatus result = AudioQueueNewOutput(
&m_outFormat,
BleepMachine::staticQueueCallback,
this,
NULL,
NULL,
0,
&m_outAQ
);
if ( result < 0 )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Allocate buffers for the audio
UInt32 bufferSizeBytes = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
for ( int buf=0; buf<kNumBuffers; buf++ )
{
OSStatus result = AudioQueueAllocateBuffer( m_outAQ, bufferSizeBytes, &m_buffers[ buf ] );
if ( result )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Prime the buffers
queueCallback( m_outAQ, m_buffers[ buf ] );
}
m_isInitialised = true;
return true;
}
void BleepMachine::Shutdown()
{
Stop();
if ( m_outAQ )
{
// AudioQueueDispose also chucks any audio buffers it has
AudioQueueDispose( m_outAQ, true );
}
m_isInitialised = false;
}
BleepMachine::BleepMachine()
: m_isInitialised(false), m_outAQ(0)
{
for ( int buf=0; buf<kNumBuffers; buf++ )
{
m_buffers[ buf ] = NULL;
}
}
BleepMachine::~BleepMachine()
{
Shutdown();
}
bool BleepMachine::Start()
{
OSStatus result = AudioQueueSetParameter( m_outAQ, kAudioQueueParam_Volume, 1.0 );
if ( result ) printf( "ERROR: %d\n", (int)result );
// Start the queue
result = AudioQueueStart( m_outAQ, NULL );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
bool BleepMachine::Stop()
{
OSStatus result = AudioQueueStop( m_outAQ, true );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
// A (A4=440)
// A# f(n)=2^(n/12) * r
// B where n = number of semitones
// C and r is the root frequency e.g. 440
// C#
// D frq -> MIDI note number
// D# p = 69 + 12 x log2(f/440)
// E
// F
// F#
// G
// G#
//
// MIDI Note ref: http://www.phys.unsw.edu.au/jw/notes.html
//
// MIDI Node numbers:
// A3 57
// A#3 58
// B3 59
// C4 60 <--
// C#4 61
// D4 62
// D#4 63
// E4 64
// F4 65
// F#4 66
// G4 67
// G#4 68
// A4 69 <--
// A#4 70
// B4 71
// C5 72
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave )
{
semiTones += ( 12 * (octave-4) );
float root = 440.f;
float fn = powf( 2.f, (float)semiTones/12.f ) * root;
return fn;
}
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber )
{
SInt32 semiTones = midiNoteNumber - 69;
return CalculateFrequencyFromNote( semiTones, 4 );
}
//for ( SInt32 midiNote=21; midiNote<=108; ++midiNote )
//{
// printf( "MIDI Note %d: %f Hz \n",(int)midiNote,CalculateFrequencyFromMIDINote( midiNote ) );
//}
Update: Basic usage info
Initialise. Somewhere near the start (I'm using initFromNib: in my code):
m_bleepMachine = new BleepMachine;
m_bleepMachine->Initialise();
m_bleepMachine->Start();
Now the sound playback is running, but generating silence.
In your code, call this when you want to change the tone generation
m_bleepMachine->SetWave( ch, frq, vol );
where ch is the channel ( 0 or 1 )
where frq is the frequency to set in Hz
where vol is the volume ( 0=-Inf db, 1=-0db )
At program termination
delete m_bleepMachine;
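For example (my own illustration, combining SetWave with the note helpers declared in BleepMachine.h):

// Channel 0 (left): the C three semitones above A4, at half volume.
m_bleepMachine->SetWave( 0, CalculateFrequencyFromNote( C, 4 ), 0.5f );

// Channel 1 (right): concert A (440 Hz) via its MIDI note number.
m_bleepMachine->SetWave( 1, CalculateFrequencyFromMIDINote( 69 ), 0.5f );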
Since my original post almost a year ago, I've come a long way. After a pretty exhaustive search, I came up with very few high-level synthesis tools suitable for iOS development. There are many which are GPL licensed, but the GPL license is too restrictive for me to feel comfortable using it. LibPD works great, and is what rjdj uses, but I found myself really frustrated by the graphical programming paradigm. JSyn's c-based engine, csyn, is an option, but it requires licensing, and I'm really used to programming with open-source tools. It does look worth a close look though.
In the end, I'm using STK as my basic framework. STK is a very low-level tool, and requires extensive buffer-level programming to get working. This is in contrast to something higher level like PD or SuperCollider, which allows you to simply plug unit generators together and not worry about handling the raw audio data.
Working this way with STK is certainly a bit slower than with a high level tool, but I'm becoming comfortable with it. Especially now that I'm becoming more comfortable with C/C++ programming in general.
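To give a flavour of that buffer-level work, here is a rough sketch of mine using STK's SineWave generator; the callback shape and the interleaved 16-bit stereo format are assumptions for illustration, not STK requirements:

#include "Stk.h"
#include "SineWave.h"   // from the STK distribution

// Fill an interleaved stereo buffer of 16-bit samples, one tick per frame.
void fillBuffer( stk::SineWave &osc, short *out, int frames )
{
    for ( int i = 0; i < frames; ++i )
    {
        float s = (float)osc.tick();                // one mono sample in [-1, 1]
        short q = (short)( s * 32767.0f * 0.5f );   // quantize with some headroom
        out[ 2 * i ]     = q;                       // left
        out[ 2 * i + 1 ] = q;                       // right
    }
}

// Somewhere at startup (illustrative):
//   stk::Stk::setSampleRate( 44100.0 );
//   stk::SineWave osc;
//   osc.setFrequency( 440.0 );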
There's a new project under way to create a patching-style add on to Open Frameworks. It's called Cleo I think, out of the University of Vancouver. It hasn't been released yet, but it looks like a very nice mix of patching-style connection of unit generators in C++ rather than requiring the use of another language. And it's tightly integrated with Open Frameworks, which may be appealing or not, depending.
So, to answer my original question, first you need to learn how to write to the output buffer. Here's some good sample code for that:
http://atastypixel.com/blog/using-remoteio-audio-unit/
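In essence that approach boils down to installing a render callback on the remoteIO unit and filling the buffers it hands you. A stripped-down sketch (mine, not the article's code; it assumes the unit has already been created and configured for interleaved 16-bit output):

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

static double sPhase = 0.0;

// Render callback: called by the remoteIO unit whenever it needs audio.
static OSStatus RenderSine( void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData )
{
    const double step = 2.0 * M_PI * 440.0 / 44100.0;   // 440 Hz at 44.1 kHz
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
    UInt32 channels = ioData->mBuffers[0].mNumberChannels;

    for ( UInt32 frame = 0; frame < inNumberFrames; ++frame )
    {
        SInt16 sample = (SInt16)( sin( sPhase ) * 32767.0 * 0.25 );
        for ( UInt32 ch = 0; ch < channels; ++ch )
            out[ frame * channels + ch ] = sample;
        sPhase = fmod( sPhase + step, 2.0 * M_PI );
    }
    return noErr;
}

// Hook it up to an already-created remoteIO unit (ioUnit is illustrative):
//   AURenderCallbackStruct cb = { RenderSine, NULL };
//   AudioUnitSetProperty( ioUnit, kAudioUnitProperty_SetRenderCallback,
//                         kAudioUnitScope_Input, 0, &cb, sizeof(cb) );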
Then you need to do some synthesis to generate the audio data. If you like patching, I wouldn't hesitate to recommend libpd. It seems to work great, and you can work the way you're accustomed to. If you hate graphical patching (like me), your best starting place for now is probably STK. If STK and low-level audio programming seems a bit over your head (like it was for me), just roll up your sleeves, pack a tent, and set up on a bit of a long hike up the learning curve. You'll be a much better programmer for it in the end.
Another bit of advice I wish I could have given myself a year ago: join Apple's Core Audio mailing list.
============== 2014 Edit ===========
I'm now using (and actively contributing to) the Tonic audio synthesis library. It's awesome, if I don't say so myself.
With the enormous caveat that I have yet to get past all the documentation or finishing browsing some classes / sample code, it looks like the fine folks from CCRMA over at Stanford may have put some nice toolkits together for our audio hacking pleasure. No guarantees these will do exactly what you want, but based on what I know about the original STK, they should do the trick. I'm about to embark on an audio synth app myself and the more code I can reuse, the better.
Links / descriptions from their site...
MoMu : MoMu is a light-weight software toolkit for creating musical instruments and experiences on mobile devices, and currently supports the iPhone platform (iPhone, iPad, iPod Touches). MoMu provides API's for real-time full-duplex audio, accelerometer, location, multi-touch, networking (via OpenSoundControl), graphics, and utilities. (yada yada)
• and •
MoMu STK : The MoMu release of the Synthesis Toolkit (STK, originally by Perry R. Cook and Gary P. Scavone) is a lightly modified version of STK 4.4.2, and currently supports the iPhone platform (iPhone, iPad, iPod Touches).
I'm just getting into Audio Unit programming for iPhone to build a synth-like app as well. The Apple guide "Audio Unit Hosting Guide for iOS" seems like a good reference:
http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/AudioUnitHostingFundamentals/AudioUnitHostingFundamentals.html#//apple_ref/doc/uid/TP40009492-CH3-SW11
The guide includes links to a couple sample projects. Audio Mixer (MixerHost) and aurioTouch:
http://developer.apple.com/library/ios/samplecode/MixerHost/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010210
http://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007770
I'm one of the other contributors to Tonic along with morgancodes. For wrangling CoreAudio in a higher-level framework, I can't give enough praise to The Amazing Audio Engine.
We've both used it in tandem with Tonic in a number of projects. It takes so much of the pain out of dealing with CoreAudio directly, letting you focus on the actual content and synthesis instead of the hardware abstraction layer.
Lately I've been using AudioKit.
It's a fresh and well designed wrapper over CSound, which has been around for ages.
I was using Tonic with openFrameworks and I found myself missing programming in Swift.
Although Tonic and openFrameworks are both powerful tools,
I've chosen to get in bed with Swift.
PD has a version that runs on the iPhone, used by RjDj. If you are OK with using someone else's app rather than writing your own, you can do quite a bit in an RjDj scene, and there is a set of objects that let you patch it up and test it on a regular PD on your own computer.
I should mention: PD is a visual dataflow programming language; that is to say, it is Turing complete and can be used to develop graphical applications. But if you are going to do anything interesting, I would definitely look into best practices for patching.
Last time I checked you couldn't use custom AUs on iOS in a way that would allow all installed apps to use it (like on MacOS X).
You could theoretically use a custom AU from inside your iOS app by loading it from the app's bundle and calling the AU's render function directly, but then you could as well add the code directly to your app. Also, I'm pretty sure that loading and calling code that sits in a dynamic library would go against the AppStore policies.
So you will either have to do the processing in your remote IO callback or use the Apple AUs that are preinstalled, within an AUGraph.