Drawing a waveform with AVAssetReader - iPhone

I'm reading a song from the iPod library using an asset URL (named audioUrl in the code below).
I can play it many ways, I can cut it, I can do some processing with it, but...
I really don't understand what to do with this CMSampleBufferRef to get data for drawing a waveform! I need info about peak values. How can I get them this (or maybe another) way?
AVAssetTrack *songTrack = [audioUrl.tracks objectAtIndex:0];
AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:nil];
[reader addOutput:output];
[output release];

NSMutableData *fullSongData = [[NSMutableData alloc] init];
[reader startReading];

while (reader.status == AVAssetReaderStatusReading) {
    AVAssetReaderTrackOutput *trackOutput =
        (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
    CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];
    if (sampleBufferRef) { /* what should I do with this? */ }
}
Please help me!

I was searching for a similar thing and decided to "roll my own."
I realize this is an old post, but in case anyone else is in search of this, here is my solution. It is relatively quick and dirty and normalizes the image to "full scale".
The images it creates are "wide", i.e. you need to put them in a UIScrollView or otherwise manage the display.
This is based on some answers given to this question.
Sample Output
EDIT: I have added a logarithmic version of the averaging and render methods; see the end of this answer for the alternate version and comparison outputs. I personally prefer the original linear version, but have decided to post it in case someone can improve on the algorithm used.
You'll need these imports:
#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>
First, a generic rendering method that takes a pointer to averaged sample data and returns a UIImage. Note that these samples are not playable audio samples.
-(UIImage *) audioImageGraph:(SInt16 *) samples
                normalizeMax:(SInt16) normalizeMax
                 sampleCount:(NSInteger) sampleCount
                channelCount:(NSInteger) channelCount
                 imageHeight:(float) imageHeight {

    CGSize imageSize = CGSizeMake(sampleCount, imageHeight);
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetAlpha(context, 1.0);
    CGRect rect;
    rect.size = imageSize;
    rect.origin.x = 0;
    rect.origin.y = 0;

    CGColorRef leftcolor = [[UIColor whiteColor] CGColor];
    CGColorRef rightcolor = [[UIColor redColor] CGColor];

    CGContextFillRect(context, rect);
    CGContextSetLineWidth(context, 1.0);

    float halfGraphHeight = (imageHeight / 2) / (float) channelCount;
    float centerLeft = halfGraphHeight;
    float centerRight = (halfGraphHeight * 3);
    float sampleAdjustmentFactor = (imageHeight / (float) channelCount) / (float) normalizeMax;

    for (NSInteger intSample = 0; intSample < sampleCount; intSample++) {
        SInt16 left = *samples++;
        float pixels = (float) left;
        pixels *= sampleAdjustmentFactor;
        CGContextMoveToPoint(context, intSample, centerLeft - pixels);
        CGContextAddLineToPoint(context, intSample, centerLeft + pixels);
        CGContextSetStrokeColorWithColor(context, leftcolor);
        CGContextStrokePath(context);

        if (channelCount == 2) {
            SInt16 right = *samples++;
            float pixels = (float) right;
            pixels *= sampleAdjustmentFactor;
            CGContextMoveToPoint(context, intSample, centerRight - pixels);
            CGContextAddLineToPoint(context, intSample, centerRight + pixels);
            CGContextSetStrokeColorWithColor(context, rightcolor);
            CGContextStrokePath(context);
        }
    }

    // Create new image
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // Tidy up
    UIGraphicsEndImageContext();
    return newImage;
}
Next, a method that takes an AVURLAsset and returns PNG image data:
- (NSData *) renderPNGAudioPictogramForAsset:(AVURLAsset *)songAsset {

    NSError *error = nil;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
    AVAssetTrack *songTrack = [songAsset.tracks objectAtIndex:0];

    NSDictionary *outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        //  [NSNumber numberWithInt:44100.0], AVSampleRateKey, /*Not Supported*/
        //  [NSNumber numberWithInt:2], AVNumberOfChannelsKey, /*Not Supported*/
        [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
        nil];

    AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];
    [reader addOutput:output];
    [output release];
    [outputSettingsDict release];

    UInt32 sampleRate = 0, channelCount = 0;

    NSArray *formatDesc = songTrack.formatDescriptions;
    for (unsigned int i = 0; i < [formatDesc count]; ++i) {
        CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
        const AudioStreamBasicDescription *fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription(item);
        if (fmtDesc) {
            sampleRate = fmtDesc->mSampleRate;
            channelCount = fmtDesc->mChannelsPerFrame;
            //  NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f", fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket, fmtDesc->mSampleRate);
        }
    }

    UInt32 bytesPerSample = 2 * channelCount;
    SInt16 normalizeMax = 0;
    NSMutableData *fullSongData = [[NSMutableData alloc] init];
    [reader startReading];

    UInt64 totalBytes = 0;
    SInt64 totalLeft = 0;
    SInt64 totalRight = 0;
    NSInteger sampleTally = 0;

    NSInteger samplesPerPixel = sampleRate / 50;

    while (reader.status == AVAssetReaderStatusReading) {

        AVAssetReaderTrackOutput *trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];

        if (sampleBufferRef) {
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);

            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            totalBytes += length;

            NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init];

            NSMutableData *data = [NSMutableData dataWithLength:length];
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes);

            SInt16 *samples = (SInt16 *) data.mutableBytes;
            int sampleCount = length / bytesPerSample;
            for (int i = 0; i < sampleCount; i++) {

                SInt16 left = *samples++;
                totalLeft += left;

                SInt16 right;
                if (channelCount == 2) {
                    right = *samples++;
                    totalRight += right;
                }

                sampleTally++;

                if (sampleTally > samplesPerPixel) {

                    left = totalLeft / sampleTally;

                    SInt16 fix = abs(left);
                    if (fix > normalizeMax) {
                        normalizeMax = fix;
                    }

                    [fullSongData appendBytes:&left length:sizeof(left)];

                    if (channelCount == 2) {
                        right = totalRight / sampleTally;

                        SInt16 fix = abs(right);
                        if (fix > normalizeMax) {
                            normalizeMax = fix;
                        }

                        [fullSongData appendBytes:&right length:sizeof(right)];
                    }

                    totalLeft = 0;
                    totalRight = 0;
                    sampleTally = 0;
                }
            }

            [wader drain];

            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
    }

    NSData *finalData = nil;

    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown) {
        // Something went wrong. Release what we allocated and return nil.
        [fullSongData release];
        [reader release];
        return nil;
    }

    if (reader.status == AVAssetReaderStatusCompleted) {
        NSLog(@"rendering output graphics using normalizeMax %d", normalizeMax);

        UIImage *test = [self audioImageGraph:(SInt16 *) fullSongData.bytes
                                 normalizeMax:normalizeMax
                                  sampleCount:fullSongData.length / 4  // 2 bytes per sample, 2 channels
                                 channelCount:2
                                  imageHeight:100];

        finalData = imageToData(test);
    }

    [fullSongData release];
    [reader release];
    return finalData;
}
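The core of the method above is language-agnostic: accumulate every `samplesPerPixel` frames into one averaged value, and track the largest absolute average for later normalization. A minimal sketch of that reduction for a single channel (Python, illustrative only — `downsample` and its names are made up here, not part of the code above):

```python
def downsample(samples, samples_per_pixel):
    """Average fixed-size buckets of samples into one point per pixel
    column, tracking the peak absolute average for normalization.
    Mirrors the accumulation loop in the Objective-C method above."""
    points = []
    normalize_max = 0
    total, tally = 0, 0
    for s in samples:
        total += s
        tally += 1
        if tally >= samples_per_pixel:
            avg = total // tally          # integer average of the bucket
            points.append(avg)
            normalize_max = max(normalize_max, abs(avg))
            total, tally = 0, 0
    return points, normalize_max

points, peak = downsample([100, 300, -200, -400], 2)
print(points, peak)
```

The renderer then divides each point by `normalize_max` (via `sampleAdjustmentFactor`) so the loudest averaged bucket spans the full graph height.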
Advanced Option:
Finally, if you want to be able to play the audio using AVAudioPlayer, you'll need to cache
it to your app's cache folder. Since I was doing that, I decided to cache the image data
as well, and wrapped the whole thing into a UIImage category. You need to include this open source offering to extract the audio, and some code from here to handle some background threading features.
First, some defines and a few generic class methods for handling path names, etc.:
//#define imgExt @"jpg"
//#define imageToData(x) UIImageJPEGRepresentation(x,4)
#define imgExt @"png"
#define imageToData(x) UIImagePNGRepresentation(x)

+ (NSString *) assetCacheFolder {
    NSArray *assetFolderRoot = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    return [NSString stringWithFormat:@"%@/audio", [assetFolderRoot objectAtIndex:0]];
}

+ (NSString *) cachedAudioPictogramPathForMPMediaItem:(MPMediaItem *)item {
    NSString *assetFolder = [[self class] assetCacheFolder];
    NSNumber *libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID];
    NSString *assetPictogramFilename = [NSString stringWithFormat:@"asset_%@.%@", libraryId, imgExt];
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetPictogramFilename];
}

+ (NSString *) cachedAudioFilepathForMPMediaItem:(MPMediaItem *)item {
    NSString *assetFolder = [[self class] assetCacheFolder];
    NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    NSNumber *libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID];
    NSString *assetFileExt = [[[assetURL path] lastPathComponent] pathExtension];
    NSString *assetFilename = [NSString stringWithFormat:@"asset_%@.%@", libraryId, assetFileExt];
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetFilename];
}

+ (NSURL *) cachedAudioURLForMPMediaItem:(MPMediaItem *)item {
    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item];
    return [NSURL fileURLWithPath:assetFilepath];
}
Now, the init method that does "the business":
- (id) initWithMPMediaItem:(MPMediaItem *)item
           completionBlock:(void (^)(UIImage *delayedImagePreparation))completionBlock {

    NSFileManager *fman = [NSFileManager defaultManager];
    NSString *assetPictogramFilepath = [[self class] cachedAudioPictogramPathForMPMediaItem:item];

    if ([fman fileExistsAtPath:assetPictogramFilepath]) {
        NSLog(@"Returning cached waveform pictogram: %@", [assetPictogramFilepath lastPathComponent]);
        self = [self initWithContentsOfFile:assetPictogramFilepath];
        return self;
    }

    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item];
    NSURL *assetFileURL = [NSURL fileURLWithPath:assetFilepath];

    if ([fman fileExistsAtPath:assetFilepath]) {
        NSLog(@"scanning cached audio data to create UIImage file: %@", [assetFilepath lastPathComponent]);
        [assetFileURL retain];
        [assetPictogramFilepath retain];

        [NSThread MCSM_performBlockInBackground:^{

            AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil];
            NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset];
            [waveFormData writeToFile:assetPictogramFilepath atomically:YES];

            [assetFileURL release];
            [assetPictogramFilepath release];

            if (completionBlock) {
                [waveFormData retain];
                [NSThread MCSM_performBlockOnMainThread:^{
                    UIImage *result = [UIImage imageWithData:waveFormData];
                    NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)", waveFormData.length, [imgExt uppercaseString], result.size.width, result.size.height);
                    completionBlock(result);
                    [waveFormData release];
                }];
            }
        }];
        return nil;

    } else {
        NSString *assetFolder = [[self class] assetCacheFolder];
        [fman createDirectoryAtPath:assetFolder withIntermediateDirectories:YES attributes:nil error:nil];
        NSLog(@"Preparing to import audio asset data %@", [assetFilepath lastPathComponent]);

        [assetPictogramFilepath retain];
        [assetFileURL retain];

        TSLibraryImport *import = [[TSLibraryImport alloc] init];
        NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];

        [import importAsset:assetURL toURL:assetFileURL completionBlock:^(TSLibraryImport *import) {
            // check the status and error properties of TSLibraryImport
            if (import.error) {
                NSLog(@"audio data import failed: %@", import.error);
            } else {
                NSLog(@"Creating waveform pictogram file: %@", [assetPictogramFilepath lastPathComponent]);
                AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil];
                NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset];
                [waveFormData writeToFile:assetPictogramFilepath atomically:YES];

                if (completionBlock) {
                    [waveFormData retain];
                    [NSThread MCSM_performBlockOnMainThread:^{
                        UIImage *result = [UIImage imageWithData:waveFormData];
                        NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)", waveFormData.length, [imgExt uppercaseString], result.size.width, result.size.height);
                        completionBlock(result);
                        [waveFormData release];
                    }];
                }
            }
            [assetPictogramFilepath release];
            [assetFileURL release];
        }];
        return nil;
    }
}
An example of invoking this:
-(void) importMediaItem {

    MPMediaItem *item = [self mediaItem];

    // since we will be needing this for playback, save the url to the cached audio.
    [url release];
    url = [[UIImage cachedAudioURLForMPMediaItem:item] retain];

    [waveFormImage release];

    waveFormImage = [[UIImage alloc] initWithMPMediaItem:item completionBlock:^(UIImage *delayedImagePreparation) {
        waveFormImage = [delayedImagePreparation retain];
        [self displayWaveFormImage];
    }];

    if (waveFormImage) {
        [waveFormImage retain];
        [self displayWaveFormImage];
    }
}
Logarithmic version of averaging and render methods
#define absX(x) (x<0?0-x:x)
#define minMaxX(x,mn,mx) (x<=mn?mn:(x>=mx?mx:x))
#define noiseFloor (-90.0)
#define decibel(amplitude) (20.0 * log10(absX(amplitude)/32767.0))

-(UIImage *) audioImageLogGraph:(Float32 *) samples
                   normalizeMax:(Float32) normalizeMax
                    sampleCount:(NSInteger) sampleCount
                   channelCount:(NSInteger) channelCount
                    imageHeight:(float) imageHeight {

    CGSize imageSize = CGSizeMake(sampleCount, imageHeight);
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetAlpha(context, 1.0);
    CGRect rect;
    rect.size = imageSize;
    rect.origin.x = 0;
    rect.origin.y = 0;

    CGColorRef leftcolor = [[UIColor whiteColor] CGColor];
    CGColorRef rightcolor = [[UIColor redColor] CGColor];

    CGContextFillRect(context, rect);
    CGContextSetLineWidth(context, 1.0);

    float halfGraphHeight = (imageHeight / 2) / (float) channelCount;
    float centerLeft = halfGraphHeight;
    float centerRight = (halfGraphHeight * 3);
    float sampleAdjustmentFactor = (imageHeight / (float) channelCount) / (normalizeMax - noiseFloor) / 2;

    for (NSInteger intSample = 0; intSample < sampleCount; intSample++) {
        Float32 left = *samples++;
        float pixels = (left - noiseFloor) * sampleAdjustmentFactor;
        CGContextMoveToPoint(context, intSample, centerLeft - pixels);
        CGContextAddLineToPoint(context, intSample, centerLeft + pixels);
        CGContextSetStrokeColorWithColor(context, leftcolor);
        CGContextStrokePath(context);

        if (channelCount == 2) {
            Float32 right = *samples++;
            float pixels = (right - noiseFloor) * sampleAdjustmentFactor;
            CGContextMoveToPoint(context, intSample, centerRight - pixels);
            CGContextAddLineToPoint(context, intSample, centerRight + pixels);
            CGContextSetStrokeColorWithColor(context, rightcolor);
            CGContextStrokePath(context);
        }
    }

    // Create new image
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // Tidy up
    UIGraphicsEndImageContext();
    return newImage;
}
- (NSData *) renderPNGAudioPictogramLogForAsset:(AVURLAsset *)songAsset {

    NSError *error = nil;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
    AVAssetTrack *songTrack = [songAsset.tracks objectAtIndex:0];

    NSDictionary *outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        //  [NSNumber numberWithInt:44100.0], AVSampleRateKey, /*Not Supported*/
        //  [NSNumber numberWithInt:2], AVNumberOfChannelsKey, /*Not Supported*/
        [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
        nil];

    AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];
    [reader addOutput:output];
    [output release];
    [outputSettingsDict release];

    UInt32 sampleRate = 0, channelCount = 0;

    NSArray *formatDesc = songTrack.formatDescriptions;
    for (unsigned int i = 0; i < [formatDesc count]; ++i) {
        CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
        const AudioStreamBasicDescription *fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription(item);
        if (fmtDesc) {
            sampleRate = fmtDesc->mSampleRate;
            channelCount = fmtDesc->mChannelsPerFrame;
            //  NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f", fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket, fmtDesc->mSampleRate);
        }
    }

    UInt32 bytesPerSample = 2 * channelCount;
    Float32 normalizeMax = noiseFloor;
    NSLog(@"normalizeMax = %f", normalizeMax);

    NSMutableData *fullSongData = [[NSMutableData alloc] init];
    [reader startReading];

    UInt64 totalBytes = 0;
    Float64 totalLeft = 0;
    Float64 totalRight = 0;
    Float32 sampleTally = 0;

    NSInteger samplesPerPixel = sampleRate / 50;

    while (reader.status == AVAssetReaderStatusReading) {

        AVAssetReaderTrackOutput *trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];

        if (sampleBufferRef) {
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);

            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            totalBytes += length;

            NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init];

            NSMutableData *data = [NSMutableData dataWithLength:length];
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes);

            SInt16 *samples = (SInt16 *) data.mutableBytes;
            int sampleCount = length / bytesPerSample;
            for (int i = 0; i < sampleCount; i++) {

                Float32 left = (Float32) *samples++;
                left = decibel(left);
                left = minMaxX(left, noiseFloor, 0);
                totalLeft += left;

                Float32 right;
                if (channelCount == 2) {
                    right = (Float32) *samples++;
                    right = decibel(right);
                    right = minMaxX(right, noiseFloor, 0);
                    totalRight += right;
                }

                sampleTally++;

                if (sampleTally > samplesPerPixel) {

                    left = totalLeft / sampleTally;
                    if (left > normalizeMax) {
                        normalizeMax = left;
                    }
                    //  NSLog(@"left average = %f, normalizeMax = %f", left, normalizeMax);
                    [fullSongData appendBytes:&left length:sizeof(left)];

                    if (channelCount == 2) {
                        right = totalRight / sampleTally;
                        if (right > normalizeMax) {
                            normalizeMax = right;
                        }
                        [fullSongData appendBytes:&right length:sizeof(right)];
                    }

                    totalLeft = 0;
                    totalRight = 0;
                    sampleTally = 0;
                }
            }

            [wader drain];

            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
    }

    NSData *finalData = nil;

    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown) {
        // Something went wrong. Handle it.
    }

    if (reader.status == AVAssetReaderStatusCompleted) {
        // You're done. It worked.
        NSLog(@"rendering output graphics using normalizeMax %f", normalizeMax);

        UIImage *test = [self audioImageLogGraph:(Float32 *) fullSongData.bytes
                                    normalizeMax:normalizeMax
                                     sampleCount:fullSongData.length / (sizeof(Float32) * 2)
                                    channelCount:2
                                     imageHeight:100];

        finalData = imageToData(test);
    }

    [fullSongData release];
    [reader release];
    return finalData;
}
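The decibel mapping used by the log version is easy to verify in isolation: a full-scale 16-bit sample (32767) maps to 0 dBFS, and anything below the -90 dB noise floor is clamped to the floor, exactly as the decibel/minMaxX macros do. A quick check of that formula (Python, illustrative only):

```python
import math

NOISE_FLOOR = -90.0

def decibel(amplitude):
    """20 * log10(|amplitude| / 32767), clamped to [NOISE_FLOOR, 0],
    mirroring the decibel and minMaxX macros above."""
    if amplitude == 0:
        return NOISE_FLOOR  # log10(0) is undefined; treat digital silence as the floor
    db = 20.0 * math.log10(abs(amplitude) / 32767.0)
    return min(0.0, max(NOISE_FLOOR, db))
```

So a sample of 32767 yields 0 dB, 3276.7 (one tenth of full scale) yields -20 dB, and a sample of 1 falls below -90 dB and gets clamped. The renderer then stretches the [-90, 0] range over the graph height instead of the raw linear amplitude.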
Comparison outputs
Linear plot for the start of "Warm It Up" by Acme Swing Company
Logarithmic plot for the start of "Warm It Up" by Acme Swing Company

You should be able to get a buffer of audio from your sampleBufferRef and then iterate through those values to build your waveform:
CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer(sampleBufferRef);
CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(sampleBufferRef);
AudioBufferList audioBufferList;

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBufferRef,
    NULL,
    &audioBufferList,
    sizeof(audioBufferList),
    NULL,
    NULL,
    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    &buffer
);

// This copies your audio out to a temp buffer, but you should be able to
// iterate through this buffer instead.
SInt32 *readBuffer = (SInt32 *)malloc(numSamplesInBuffer * sizeof(SInt32));
memcpy(readBuffer, audioBufferList.mBuffers[0].mData, numSamplesInBuffer * sizeof(SInt32));
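Once you have the raw samples in a buffer like this, building the waveform is a matter of scanning for the peak in each pixel-sized window (the question asks specifically about peak values, as opposed to the averaging used in the longer answer above). A language-agnostic sketch of that scan (Python, illustrative only — the function name is made up):

```python
def peaks_per_pixel(samples, samples_per_pixel):
    """Reduce a flat sample buffer to one peak (largest absolute value)
    per output pixel column."""
    peaks = []
    for start in range(0, len(samples), samples_per_pixel):
        window = samples[start:start + samples_per_pixel]
        peaks.append(max(abs(s) for s in window))
    return peaks

print(peaks_per_pixel([1, -5, 3, 2, -1, 4], 2))
```

Peak detection keeps transients visible that bucket-averaging would smooth away, which is usually what you want for a waveform overview.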

Another approach, using Swift 5 and AVAudioFile:
/// Gets the audio file from a URL, downsamples it and draws it into the sound layer.
func drawSoundWave(fromURL url: URL, fromPosition: Int64, totalSeconds: UInt32, samplesSecond: CGFloat) throws {

    print("\(logClassName) Drawing sound from \(url)")

    waveViewInfo.samplesSeconds = samplesSecond

    // Get the audio file and format from the URL
    let audioFile = try AVAudioFile(forReading: url)
    waveViewInfo.format = audioFile.processingFormat
    audioFile.framePosition = fromPosition * Int64(waveViewInfo.format.sampleRate)

    // Get the buffer
    let frameCapacity: UInt32 = totalSeconds * UInt32(waveViewInfo.format.sampleRate)
    guard let audioPCMBuffer = AVAudioPCMBuffer(pcmFormat: waveViewInfo.format, frameCapacity: frameCapacity) else {
        throw AppError("Unable to get the AVAudioPCMBuffer")
    }
    try audioFile.read(into: audioPCMBuffer, frameCount: frameCapacity)
    let audioPCMBufferFloatValues: [Float] = Array(UnsafeBufferPointer(start: audioPCMBuffer.floatChannelData?.pointee,
                                                                       count: Int(audioPCMBuffer.frameLength)))

    waveViewInfo.points = []
    waveViewInfo.maxValue = 0
    for index in stride(from: 0, to: audioPCMBufferFloatValues.count, by: Int(audioFile.fileFormat.sampleRate) / Int(waveViewInfo.samplesSeconds)) {
        let aSample = CGFloat(audioPCMBufferFloatValues[index])
        waveViewInfo.points.append(aSample)
        let fix = abs(aSample)
        if fix > waveViewInfo.maxValue {
            waveViewInfo.maxValue = fix
        }
    }

    print("\(logClassName) Finished the points - Count = \(waveViewInfo.points.count) / Max = \(waveViewInfo.maxValue)")

    populateSoundImageView(with: waveViewInfo)
}
/// Converts the sound wave into a UIImage
func populateSoundImageView(with waveViewInfo: WaveViewInfo) {

    let imageSize = CGSize(width: CGFloat(waveViewInfo.points.count),
                           height: frame.height)
    let drawingRect = CGRect(origin: .zero, size: imageSize)

    UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
    defer {
        UIGraphicsEndImageContext()
    }

    print("\(logClassName) Converting sound view in rect \(drawingRect)")

    guard let context: CGContext = UIGraphicsGetCurrentContext() else { return }

    context.setFillColor(waveViewInfo.backgroundColor.cgColor)
    context.setAlpha(1.0)
    context.fill(drawingRect)
    context.setLineWidth(1.0)
    // context.setLineWidth(waveViewInfo.lineWidth)

    let sampleAdjustFactor = imageSize.height / waveViewInfo.maxValue

    for pointIndex in waveViewInfo.points.indices {
        let pixel = waveViewInfo.points[pointIndex] * sampleAdjustFactor
        context.move(to: CGPoint(x: CGFloat(pointIndex), y: middleY - pixel))
        context.addLine(to: CGPoint(x: CGFloat(pointIndex), y: middleY + pixel))
        context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
        context.strokePath()
    }

    // Alternative: space the columns out by waveViewInfo.sampleSpace
    // instead of one point per pixel column.
    // var xIncrement: CGFloat = 0
    // for point in waveViewInfo.points {
    //     let normalizedPoint = point * sampleAdjustFactor
    //     context.move(to: CGPoint(x: xIncrement, y: middleY - normalizedPoint))
    //     context.addLine(to: CGPoint(x: xIncrement, y: middleY + normalizedPoint))
    //     context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
    //     context.strokePath()
    //     xIncrement += waveViewInfo.sampleSpace
    // }

    guard let soundWaveImage = UIGraphicsGetImageFromCurrentImageContext() else { return }

    soundWaveImageView.image = soundWaveImage
    // In case of handling sample space in the loop above:
    // updateWidthConstraintValue(soundWaveImage.size.width)
    updateWidthConstraintValue(soundWaveImage.size.width * waveViewInfo.sampleSpace)
}
where

class WaveViewInfo {
    var format: AVAudioFormat!
    var samplesSeconds: CGFloat = 50
    var lineWidth: CGFloat = 0.20
    var sampleSpace: CGFloat = 0.20
    var strokeColor: UIColor = .red
    var backgroundColor: UIColor = .clear
    var maxValue: CGFloat = 0
    var points: [CGFloat] = [CGFloat]()
}
At the moment it only draws one sound wave, but it can be extended. The good part is that you can draw an audio track in parts.
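Note that this Swift version downsamples by striding (keeping one sample every `sampleRate / samplesSecond` frames) rather than averaging buckets like the Objective-C answer above. The difference, sketched in Python for illustration only:

```python
def stride_downsample(samples, step):
    """Keep one sample every `step` frames, as the Swift stride loop does.
    Cheaper than bucket averaging, but it can miss peaks that fall
    between the kept samples."""
    return [samples[i] for i in range(0, len(samples), step)]

print(stride_downsample([0, 1, 2, 3, 4, 5], 2))
```

Striding is fine for a quick overview; if short transients must stay visible, prefer a per-window peak or average instead.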

A little bit of refactoring of the above answers (using AVAudioFile):
import AVFoundation
import CoreGraphics
import Foundation
import UIKit

class WaveGenerator {
    /// Reads the whole file into memory and returns the first channel's samples.
    /// (Returning an Array rather than an UnsafeBufferPointer keeps the samples
    /// alive after the AVAudioPCMBuffer goes out of scope.)
    private func readBuffer(_ audioUrl: URL) -> [Float] {
        guard let file = try? AVAudioFile(forReading: audioUrl) else { return [] }
        let audioFormat = file.processingFormat
        let audioFrameCount = UInt32(file.length)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: audioFrameCount)
        else { return [] }
        do {
            try file.read(into: buffer)
        } catch {
            print(error)
            return []
        }
        guard let channelData = buffer.floatChannelData else { return [] }
        return Array(UnsafeBufferPointer(start: channelData[0], count: Int(buffer.frameLength)))
    }

    private func generateWaveImage(
        _ samples: [Float],
        _ imageSize: CGSize,
        _ strokeColor: UIColor,
        _ backgroundColor: UIColor
    ) -> UIImage? {
        guard !samples.isEmpty else { return nil }
        let drawingRect = CGRect(origin: .zero, size: imageSize)
        UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
        defer { UIGraphicsEndImageContext() }

        let middleY = imageSize.height / 2

        guard let context: CGContext = UIGraphicsGetCurrentContext() else { return nil }

        context.setFillColor(backgroundColor.cgColor)
        context.setAlpha(1.0)
        context.fill(drawingRect)
        context.setLineWidth(0.25)

        // Normalize against the largest absolute sample so negative peaks count too.
        let maxAmplitude = CGFloat(samples.map { abs($0) }.max() ?? 1)
        let heightNormalizationFactor = imageSize.height / maxAmplitude / 2
        let widthNormalizationFactor = imageSize.width / CGFloat(samples.count)
        for index in 0 ..< samples.count {
            let pixel = CGFloat(samples[index]) * heightNormalizationFactor
            let x = CGFloat(index) * widthNormalizationFactor
            context.move(to: CGPoint(x: x, y: middleY - pixel))
            context.addLine(to: CGPoint(x: x, y: middleY + pixel))
            context.setStrokeColor(strokeColor.cgColor)
            context.strokePath()
        }

        return UIGraphicsGetImageFromCurrentImageContext()
    }

    func generateWaveImage(from audioUrl: URL, in imageSize: CGSize) -> UIImage? {
        let samples = readBuffer(audioUrl)
        return generateWaveImage(samples, imageSize, UIColor.blue, UIColor.white)
    }
}
Usage:
let waveGenerator = WaveGenerator()
let url = Bundle.main.url(forResource: "TEST1", withExtension: "mp3")!
let img = waveGenerator.generateWaveImage(from: url, in: CGSize(width: 600, height: 200))

Related

jerky motion using CMMotionManager iPhone

I am using CMMotionManager to move buttons, images, etc. on my view as the device is tipped forward, backward, right, and left. I get the attitude pitch and roll and then use these to assign values to a string representing pixel placement on the screen.
Here is how I get the info:
motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 0.05f;

[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue] withHandler:^(CMDeviceMotion *motion, NSError *error) {
    NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
    [formatter setNumberStyle:NSNumberFormatterDecimalStyle];
    [formatter setMaximumFractionDigits:2];
    [formatter setRoundingMode:NSNumberFormatterRoundUp];

    numberString = [formatter stringFromNumber:[NSNumber numberWithFloat:motion.attitude.roll / 8]];
    numberString1 = [formatter stringFromNumber:[NSNumber numberWithFloat:motion.attitude.pitch / 8]];
}];
And this is how I assign them:
NSString *deltax = numberString;
NSString *deltay = numberString1;
NSLog(@"xString;%@", numberString);

if ([deltax isEqualToString:@"-0.01"]) {
    xString = @"160";
} else if ([deltax isEqualToString:@"-0.02"]) {
    xString = @"161";
} else if ([deltax isEqualToString:@"-0.03"]) {
    xString = @"162";
} else if ([deltax isEqualToString:@"-0.04"]) {
    xString = @"163";
}
etc.
I have set them up so the x axis values continuously wrap back and forth on the screen, i.e. -.01 maps next to -.40, and .01 next to .40, and they are approached the same way: going up from -.01 and .01 the values get bigger, and from -.40 and .40 they get smaller. I did the same with the y axis: -.01 and .01 map to 284, and -.20 and .20 map to 264. However, in this case the numbers go up from -.01 and down from .01.
Here are some examples:
//x axis
if ([deltax isEqualToString:@"-0.01"]) {
    xString = @"160";
} else if ([deltax isEqualToString:@"-0.02"]) {
    xString = @"161";
} else if ([deltax isEqualToString:@"-0.03"]) {
    xString = @"162";
} else if ([deltax isEqualToString:@"-0.04"]) {
    xString = @"163";
} else if ([deltax isEqualToString:@"-0.37"]) {
    xString = @"156";
} else if ([deltax isEqualToString:@"-0.38"]) {
    xString = @"157";
} else if ([deltax isEqualToString:@"-0.39"]) {
    xString = @"158";
} else if ([deltax isEqualToString:@"-0.4"]) {
    xString = @"159";
} else if ([deltax isEqualToString:@"0.01"]) {
    xString = @"159";
} else if ([deltax isEqualToString:@"0.02"]) {
    xString = @"158";
} else if ([deltax isEqualToString:@"0.03"]) {
    xString = @"157";
} else if ([deltax isEqualToString:@"0.04"]) {
    xString = @"156";
} else if ([deltax isEqualToString:@"0.37"]) {
    xString = @"163";
} else if ([deltax isEqualToString:@"0.38"]) {
    xString = @"162";
} else if ([deltax isEqualToString:@"0.39"]) {
    xString = @"161";
} else if ([deltax isEqualToString:@"0.4"]) {
    xString = @"160";
}

//y axis
if ([deltay isEqualToString:@"-0.01"]) {
    yString = @"284";
} else if ([deltay isEqualToString:@"-0.02"]) {
    yString = @"285";
} else if ([deltay isEqualToString:@"-0.03"]) {
    yString = @"286";
} else if ([deltay isEqualToString:@"-0.04"]) {
    yString = @"287";
} else if ([deltay isEqualToString:@"-0.17"]) {
    yString = @"300";
} else if ([deltay isEqualToString:@"-0.18"]) {
    yString = @"301";
} else if ([deltay isEqualToString:@"-0.19"]) {
    yString = @"302";
} else if ([deltay isEqualToString:@"-0.2"]) {
    yString = @"303";
} else if ([deltay isEqualToString:@"0.01"]) {
    yString = @"283";
} else if ([deltay isEqualToString:@"0.02"]) {
    yString = @"282";
} else if ([deltay isEqualToString:@"0.03"]) {
    yString = @"281";
} else if ([deltay isEqualToString:@"0.04"]) {
    yString = @"280";
} else if ([deltay isEqualToString:@"0.17"]) {
    yString = @"267";
} else if ([deltay isEqualToString:@"0.18"]) {
    yString = @"266";
} else if ([deltay isEqualToString:@"0.19"]) {
    yString = @"265";
} else if ([deltay isEqualToString:@"0.2"]) {
    yString = @"264";
}
In this way there is a continuous flow from one orientation to the next.
However, I am having a small problem. When the phone is tipped forward and passes from .2 in the faceUp orientation to .2 in the faceDown orientation, I get a jerky motion: in faceUp at .2 the buttons etc. suddenly move to the left, and when it passes to faceDown at .20 they move to the right. Otherwise the buttons etc. are centered at .19 and -.19 and return there to operate normally (as expected). I have tried checking for DeviceOrientationFaceUp and FaceDown, but this didn't seem to help.
Does anyone have any experience with this behavior, know of a fix, or have any suggestions?
I also added this part to move the buttons up and down and side to side. Using this method the buttons move up and down and side to side whatever the position of the phone.
//used to assign numerical values
NSString *string2 = xString;
int n = [string2 intValue];
NSLog(@"n:%d", n);

NSString *string3 = xString1;
int p = [string3 intValue];
NSLog(@"p:%d", p);

NSString *string4 = yString;
int q = [string4 intValue];
NSLog(@"q:%d", q);

NSString *string5 = yString1;
int r = [string5 intValue];
NSLog(@"r:%d", r);

//used to move the buttons etc.
imageView2.center = CGPointMake(p, q);
imageView.center = CGPointMake(n - 63, q);
imageframe.center = CGPointMake(n - 63, q);
textView.center = CGPointMake(n - 62, q - 184);
label1.center = CGPointMake(n + 97, q - 195);
english.center = CGPointMake(n + 97, q - 159);
This is a kind of parallax motion thing.
A couple of reactions:
I might suggest using gravity rather than attitude, as it nicely represents the physical orientation of the device.
Rather than using a string to hold the value, I might just update the numeric values.
Rather than all of those if statements, just have a formula that represents how the view should be placed.
If you want to move a bunch of views, you should put them in a UIView (commonly called a "container view", not to be confused with the custom container control in IB of the same name) and then just move the container view.
So, if using autolayout, I might have the following properties:
@property (nonatomic, strong) CMMotionManager *motionManager;
@property CGFloat top;
@property CGFloat left;
And then I can update the autolayout constraints like so:
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 0.05f;
self.top = self.topConstraint.constant;
self.left = self.leadingConstraint.constant;
[self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMDeviceMotion *motion, NSError *error) {
self.leadingConstraint.constant = self.left - motion.gravity.x / 8.0 * 100.0;
self.topConstraint.constant = self.top - motion.gravity.y / 8.0 * 100.0;
}];
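The arithmetic in that handler is a simple linear map: a gravity component, which stays roughly in [-1, 1], becomes an offset of at most ±12.5 points. Isolated as plain C (the function name is mine):

```c
#include <assert.h>

/* Maps a gravity component (roughly in [-1, 1]) to a parallax offset in
   points, mirroring the `gravity / 8.0 * 100.0` scaling used above. */
static double parallax_offset(double gravity) {
    return gravity / 8.0 * 100.0;
}
```

Dividing by 8 before scaling by 100 just keeps the maximum travel small relative to the screen, so the views shift subtly rather than sliding away.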
Or if not using autolayout:
@property (nonatomic, strong) CMMotionManager *motionManager;
@property CGPoint originalCenter;
and
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 0.05f;
self.originalCenter = self.containerView.center;
[self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMDeviceMotion *motion, NSError *error) {
self.containerView.center = CGPointMake(self.originalCenter.x - motion.gravity.x / 8.0 * 100.0, self.originalCenter.y - motion.gravity.y / 8.0 * 100.0);
}];
Frankly, having the motion manager operate in the main queue gives me the willies, so I might (a) create a NSOperationQueue for the motion updates and only update some class property; and (b) use a CADisplayLink (part of the QuartzCore.framework) to update the view (if needed). Thus:
@property (nonatomic, strong) CMMotionManager *motionManager;
@property (nonatomic, strong) NSOperationQueue *motionQueue;
@property (nonatomic) CMAcceleration gravity;
@property (nonatomic) CGPoint originalCenter;
@property (nonatomic) BOOL updatePosition;
@property (nonatomic, strong) NSMutableArray *angles;
@property (nonatomic, strong) CADisplayLink *displayLink;
and
- (void)viewDidDisappear:(BOOL)animated
{
[super viewDidDisappear:animated];
[self stopDisplayLink];
[self.motionManager stopDeviceMotionUpdates];
self.motionManager = nil;
self.motionQueue = nil;
}
- (void)viewDidAppear:(BOOL)animated
{
[super viewDidAppear:animated];
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 0.05f;
self.motionQueue = [[NSOperationQueue alloc] init];
self.motionQueue.name = [[[NSBundle mainBundle] bundleIdentifier] stringByAppendingString:@".motion"];
self.originalCenter = self.containerView.center;
self.updatePosition = NO;
[self.motionManager startDeviceMotionUpdatesToQueue:self.motionQueue withHandler:^(CMDeviceMotion *motion, NSError *error) {
@synchronized(self) {
self.gravity = motion.gravity;
self.updatePosition = YES;
}
}];
[self startDisplayLink];
}
- (void)startDisplayLink
{
self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(handleDisplayLink:)];
[self.displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}
- (void)stopDisplayLink
{
[self.displayLink invalidate];
self.displayLink = nil;
}
- (void)handleDisplayLink:(CADisplayLink *)displayLink
{
@synchronized(self) {
if (!self.updatePosition)
return;
self.containerView.center = CGPointMake(self.originalCenter.x - self.gravity.x / 8.0 * 100.0, self.originalCenter.y - self.gravity.y / 8.0 * 100.0);
self.updatePosition = NO;
}
}
I did the following:
Some math that takes the maximum and minimum values within an adjustable window (the flicker noise) and takes the average.
No more oscillations!!!
Here is my code to avoid flickering in my app:
float xAxis,yAxis, zAxis;
xAxis = self.manager.accelerometerData.acceleration.x;
yAxis = self.manager.accelerometerData.acceleration.y;
zAxis = self.manager.accelerometerData.acceleration.z;
// returns if the phone is lying on the table:
if (zAxis < -0.8 || zAxis > 0.8) return;
CGFloat angle = atan2f(xAxis, yAxis ); // The angle!!!
float noise = 0.011; // Flicker noise (the noise you want to filter)
// max and min are global variables
if (angle > max){
max = angle;
min = max - noise;
}
if (angle < min){
min = angle;
max = min + noise;
}
// Average: (no flickering):
angle = min + (max - min) / 2.0;
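Stripped of the Core Motion calls, the filter is a band of width `noise` that follows the raw angle: readings inside the band leave the output unchanged, and only a reading outside the band drags the band (and the midpoint) along. A minimal C sketch (names are mine):

```c
#include <assert.h>
#include <math.h>

/* One step of the windowed min/max smoother: *max and *min track the raw
   angle, and the reported value is the midpoint of the [min, max] band. */
static double smooth_angle(double angle, double noise, double *min, double *max) {
    if (angle > *max) { *max = angle; *min = *max - noise; }
    if (angle < *min) { *min = angle; *max = *min + noise; }
    return *min + (*max - *min) / 2.0;
}
```

Jitter smaller than `noise` never moves the band, which is exactly why the flickering stops.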

How to generate audio wave form programmatically while recording Voice in iOS?

I'm working on voice modulation of audio frequency in iOS... everything is working fine... I just need some good, simple way to generate the audio waveform as noise is detected...
Please don't refer me to the code tutorials of SpeakHere and aurioTouch... I need some good suggestions from native app developers.
I have recorded the audio and made it play after recording. I have created a waveform and attached a screenshot. But it has to be drawn in the view while the audio recording is in progress.
-(UIImage *) audioImageGraph:(SInt16 *) samples
normalizeMax:(SInt16) normalizeMax
sampleCount:(NSInteger) sampleCount
channelCount:(NSInteger) channelCount
imageHeight:(float) imageHeight {
CGSize imageSize = CGSizeMake(sampleCount, imageHeight);
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
CGContextSetAlpha(context,1.0);
CGRect rect;
rect.size = imageSize;
rect.origin.x = 0;
rect.origin.y = 0;
CGColorRef leftcolor = [[UIColor whiteColor] CGColor];
CGColorRef rightcolor = [[UIColor redColor] CGColor];
CGContextFillRect(context, rect);
CGContextSetLineWidth(context, 1.0);
float halfGraphHeight = (imageHeight / 2) / (float) channelCount ;
float centerLeft = halfGraphHeight;
float centerRight = (halfGraphHeight*3) ;
float sampleAdjustmentFactor = (imageHeight/ (float) channelCount) / (float) normalizeMax;
for (NSInteger intSample = 0 ; intSample < sampleCount ; intSample ++ ) {
SInt16 left = *samples++;
float pixels = (float) left;
pixels *= sampleAdjustmentFactor;
CGContextMoveToPoint(context, intSample, centerLeft-pixels);
CGContextAddLineToPoint(context, intSample, centerLeft+pixels);
CGContextSetStrokeColorWithColor(context, leftcolor);
CGContextStrokePath(context);
if (channelCount==2) {
SInt16 right = *samples++;
float pixels = (float) right;
pixels *= sampleAdjustmentFactor;
CGContextMoveToPoint(context, intSample, centerRight - pixels);
CGContextAddLineToPoint(context, intSample, centerRight + pixels);
CGContextSetStrokeColorWithColor(context, rightcolor);
CGContextStrokePath(context);
}
}
// Create new image
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
// Tidy up
UIGraphicsEndImageContext();
return newImage;
}
Next, a method that takes an AVURLAsset and returns PNG data:
- (NSData *) renderPNGAudioPictogramForAssett:(AVURLAsset *)songAsset {
NSError * error = nil;
AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0];
NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
// [NSNumber numberWithInt:44100.0],AVSampleRateKey, /*Not Supported*/
// [NSNumber numberWithInt: 2],AVNumberOfChannelsKey, /*Not Supported*/
[NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
nil];
AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];
[reader addOutput:output];
[output release];
UInt32 sampleRate,channelCount;
NSArray* formatDesc = songTrack.formatDescriptions;
for(unsigned int i = 0; i < [formatDesc count]; ++i) {
CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item);
if(fmtDesc ) {
sampleRate = fmtDesc->mSampleRate;
channelCount = fmtDesc->mChannelsPerFrame;
// NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f",fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket,fmtDesc->mSampleRate);
}
}
UInt32 bytesPerSample = 2 * channelCount;
SInt16 normalizeMax = 0;
NSMutableData * fullSongData = [[NSMutableData alloc] init];
[reader startReading];
UInt64 totalBytes = 0;
SInt64 totalLeft = 0;
SInt64 totalRight = 0;
NSInteger sampleTally = 0;
NSInteger samplesPerPixel = sampleRate / 50;
while (reader.status == AVAssetReaderStatusReading){
AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];
if (sampleBufferRef){
CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);
size_t length = CMBlockBufferGetDataLength(blockBufferRef);
totalBytes += length;
NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init];
NSMutableData * data = [NSMutableData dataWithLength:length];
CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes);
SInt16 * samples = (SInt16 *) data.mutableBytes;
int sampleCount = length / bytesPerSample;
for (int i = 0; i < sampleCount ; i ++) {
SInt16 left = *samples++;
totalLeft += left;
SInt16 right;
if (channelCount==2) {
right = *samples++;
totalRight += right;
}
sampleTally++;
if (sampleTally > samplesPerPixel) {
left = totalLeft / sampleTally;
SInt16 fix = abs(left);
if (fix > normalizeMax) {
normalizeMax = fix;
}
[fullSongData appendBytes:&left length:sizeof(left)];
if (channelCount==2) {
right = totalRight / sampleTally;
SInt16 fix = abs(right);
if (fix > normalizeMax) {
normalizeMax = fix;
}
[fullSongData appendBytes:&right length:sizeof(right)];
}
totalLeft = 0;
totalRight = 0;
sampleTally = 0;
}
}
[wader drain];
CMSampleBufferInvalidate(sampleBufferRef);
CFRelease(sampleBufferRef);
}
}
NSData * finalData = nil;
if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown){
// Something went wrong. return nil
return nil;
}
if (reader.status == AVAssetReaderStatusCompleted){
NSLog(@"rendering output graphics using normalizeMax %d",normalizeMax);
UIImage *test = [self audioImageGraph:(SInt16 *)
fullSongData.bytes
normalizeMax:normalizeMax
sampleCount:fullSongData.length / 4
channelCount:2
imageHeight:100];
finalData = imageToData(test);
}
[fullSongData release];
[reader release];
return finalData;
}
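The heart of the reader loop above is a downsampler: it averages `samplesPerPixel` consecutive samples into one value per pixel column and tracks the peak magnitude so the image can later be normalized to full scale. Isolated as plain C (names are mine; I close a bucket at exactly `samplesPerPixel` samples, where the original's `>` check lets each bucket collect one extra sample):

```c
#include <assert.h>

/* Averages `count` 16-bit samples in buckets of `samplesPerPixel`, writing
   one averaged sample per bucket into `out` and tracking the peak magnitude
   in `normalizeMax`. Returns the number of buckets written; a trailing
   partial bucket is dropped, as in the method above. */
static int average_buckets(const short *samples, int count, int samplesPerPixel,
                           short *out, short *normalizeMax) {
    long total = 0;
    int tally = 0, buckets = 0;
    for (int i = 0; i < count; i++) {
        total += samples[i];
        if (++tally == samplesPerPixel) {
            short avg = (short)(total / tally);
            short mag = (short)(avg < 0 ? -avg : avg);
            if (mag > *normalizeMax) *normalizeMax = mag;
            out[buckets++] = avg;
            total = 0;
            tally = 0;
        }
    }
    return buckets;
}
```

With `samplesPerPixel = sampleRate / 50`, each output value (and thus each pixel column) covers 20 ms of audio.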
If you want real-time graphics derived from mic input, then use the RemoteIO Audio Unit, which is what most native iOS app developers use for low-latency audio, and Metal or OpenGL for drawing waveforms, which will give you the highest frame rates. You will need completely different code from that provided in your question to do so, as AVAssetReader, Core Graphics line drawing and PNG rendering are far, far too slow to use.
Update: with iOS 8 and newer, the Metal API may be able to render graphic visualizations with even greater performance than OpenGL.
Update 2: Here are some code snippets for recording live audio using Audio Units and drawing bit maps using Metal in Swift 3: https://gist.github.com/hotpaw2/f108a3c785c7287293d7e1e81390c20b
You should check out EZAudio (https://github.com/syedhali/EZAudio), specifically the EZRecorder and the EZAudioPlot (or GPU-accelerated EZAudioPlotGL).
There is also an example project that does exactly what you want, https://github.com/syedhali/EZAudio/tree/master/EZAudioExamples/iOS/EZAudioRecordExample
EDIT: Here's the code inline
/// In your interface
/**
Use a OpenGL based plot to visualize the data coming in
*/
@property (nonatomic,weak) IBOutlet EZAudioPlotGL *audioPlot;
/**
The microphone component
*/
@property (nonatomic,strong) EZMicrophone *microphone;
/**
The recorder component
*/
@property (nonatomic,strong) EZRecorder *recorder;
...
/// In your implementation
// Create an instance of the microphone and tell it to use this view controller instance as the delegate
-(void)viewDidLoad {
[super viewDidLoad];
self.microphone = [EZMicrophone microphoneWithDelegate:self startsImmediately:YES];
}
// EZMicrophoneDelegate will provide these callbacks
-(void)microphone:(EZMicrophone *)microphone
hasAudioReceived:(float **)buffer
withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
dispatch_async(dispatch_get_main_queue(),^{
// Updates the audio plot with the waveform data
[self.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
});
}
-(void)microphone:(EZMicrophone *)microphone hasAudioStreamBasicDescription:(AudioStreamBasicDescription)audioStreamBasicDescription {
// The AudioStreamBasicDescription of the microphone stream. This is useful when configuring the EZRecorder or telling another component what audio format type to expect.
// We can initialize the recorder with this ASBD
self.recorder = [EZRecorder recorderWithDestinationURL:[self testFilePathURL]
andSourceFormat:audioStreamBasicDescription];
}
-(void)microphone:(EZMicrophone *)microphone
hasBufferList:(AudioBufferList *)bufferList
withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
// Getting audio data as a buffer list that can be directly fed into the EZRecorder. This is happening on the audio thread - any UI updating needs a GCD main queue block. This will keep appending data to the tail of the audio file.
if( self.isRecording ){
[self.recorder appendDataFromBufferList:bufferList
withBufferSize:bufferSize];
}
}
I was searching for the same thing (making a waveform from the audio recorder's data). I found some libraries that might be helpful; it is worth checking their code to understand the logic behind them.
The calculation is all based on sine and mathematical formulas. It is much simpler if you take a look at the code!
https://github.com/stefanceriu/SCSiriWaveformView
or
https://github.com/raffael/SISinusWaveView
These are only a few of the examples you can find on the web.

How to compare images using opencv in iOS (iPhone)

I want to compare 2 images taken by the iPhone camera in my project. I am using OpenCV for doing that. Is there any better way to do it?
If I could get the % similarity, that would be great.
I am using the following OpenCV code for image comparison:
-(void)opencvImageCompare{
NSMutableArray *valuesArray=[[NSMutableArray alloc]init];
IplImage *img = [self CreateIplImageFromUIImage:imageView.image];
// always check camera image
if(img == 0) {
printf("Cannot load camera img");
}
IplImage *res;
CvPoint minloc, maxloc;
double minval, maxval;
double values;
UIImage *imageTocompare = [UIImage imageNamed:@"MyImageName"];
IplImage *imageTocompareIpl = [self CreateIplImageFromUIImage:imageTocompare];
// always check server image
if(imageTocompareIpl == 0) {
printf("Cannot load serverIplImageArray image");
}
if(img->width-imageTocompareIpl->width<=0 && img->height-imageTocompareIpl->height<=0){
int balWidth=imageTocompareIpl->width-img->width;
int balHeight=imageTocompareIpl->height-img->height;
img->width=img->width+balWidth+100;
img->height=img->height+balHeight+100;
}
CvSize size = cvSize(
img->width - imageTocompareIpl->width + 1,
img->height - imageTocompareIpl->height + 1
);
res = cvCreateImage(size, IPL_DEPTH_32F, 1);
// CV_TM_SQDIFF CV_TM_SQDIFF_NORMED
// CV_TM_CCORR CV_TM_CCORR_NORMED
// CV_TM_CCOEFF CV_TM_CCOEFF_NORMED
cvMatchTemplate(img, imageTocompareIpl, res,CV_TM_CCOEFF);
cvMinMaxLoc(res, &minval, &maxval, &minloc, &maxloc, 0);
printf("\n value %f", maxval-minval);
values=maxval-minval;
NSString *valString=[NSString stringWithFormat:@"%f",values];
[valuesArray addObject:valString];
weedObject.values=[valString doubleValue];
printf("\n------------------------------");
cvReleaseImage(&imageTocompareIpl);
cvReleaseImage(&res);
}
cvReleaseImage(&img);
}
For the same image I am getting a non-zero result (14956...), and if I pass a different image it crashes.
Try this code. It compares images bit by bit, i.e. an exact 100% match:
UIImage *img1 = // Some photo;
UIImage *img2 = // Some photo;
NSData *imgdata1 = UIImagePNGRepresentation(img1);
NSData *imgdata2 = UIImagePNGRepresentation(img2);
if ([imgdata1 isEqualToData:imgdata2])
{
NSLog(@"Same Image");
}
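This works because re-encoding both images the same way makes byte equality meaningful; the comparison itself is just a length check plus a byte compare, so it detects exact matches only - one changed pixel and it reports "different". The same idea in plain C:

```c
#include <assert.h>
#include <string.h>

/* Exact-match comparison of two encoded image buffers: equal only when
   the lengths agree and every byte agrees. */
static int same_data(const unsigned char *a, size_t alen,
                     const unsigned char *b, size_t blen) {
    return alen == blen && memcmp(a, b, alen) == 0;
}
```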
Try this code - it compares images pixel by pixel:
-(void)removeindicator :(UIImage *)image
{
for (int i =0; i < [imageArray count]; i++)
{
CGFloat width =100.0f;
CGFloat height=100.0f;
CGSize newSize1 = CGSizeMake(width, height); //whaterver size
UIGraphicsBeginImageContext(newSize1);
[[UIImage imageNamed:[imageArray objectAtIndex:i]] drawInRect:CGRectMake(0, 0, newSize1.width, newSize1.height)];
UIImage *newImage1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imageview_camera=(UIImageView *)[self.view viewWithTag:-3];
CGSize newSize2 = CGSizeMake(width, height); //whaterver size
UIGraphicsBeginImageContext(newSize2);
[[imageview_camera image] drawInRect:CGRectMake(0, 0, newSize2.width, newSize2.height)];
UIImage *newImage2 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
float numDifferences = 0.0f;
float totalCompares = width * height;
NSArray *img1RGB=[[NSArray alloc]init];
NSArray *img2RGB=[[NSArray alloc]init];
for (int yCoord = 0; yCoord < height; yCoord += 1)
{
for (int xCoord = 0; xCoord < width; xCoord += 1)
{
img1RGB = [self getRGBAsFromImage:newImage1 atX:xCoord andY:yCoord];
img2RGB = [self getRGBAsFromImage:newImage2 atX:xCoord andY:yCoord];
if (([[img1RGB objectAtIndex:0]floatValue] - [[img2RGB objectAtIndex:0]floatValue]) == 0 || ([[img1RGB objectAtIndex:1]floatValue] - [[img2RGB objectAtIndex:1]floatValue]) == 0 || ([[img1RGB objectAtIndex:2]floatValue] - [[img2RGB objectAtIndex:2]floatValue]) == 0)
{
//at least one pixel component matches exactly, so count this pixel
numDifferences++;
}
}
}
// It will show result in percentage at last
CGFloat percentage_similar=((numDifferences*100)/totalCompares);
NSString *str = nil;
if (percentage_similar>=10.0f)
{
str=[[NSString alloc]initWithString:[NSString stringWithFormat:@"%i%@ Identical", (int)((numDifferences*100)/totalCompares),@"%"]];
UIAlertView *alertview=[[UIAlertView alloc]initWithTitle:@"i-App" message:[NSString stringWithFormat:@"%@ Images are same",str] delegate:nil cancelButtonTitle:@"Ok" otherButtonTitles:nil];
[alertview show];
break;
}
else
{
str=[[NSString alloc]initWithString:[NSString stringWithFormat:@"Result: %i%@ Identical",(int)((numDifferences*100)/totalCompares),@"%"]];
UIAlertView *alertview=[[UIAlertView alloc]initWithTitle:@"i-App" message:[NSString stringWithFormat:@"%@ Images are not same",str] delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alertview show];
}
}
}
-(NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy
{
//NSArray *result = [[NSArray alloc]init];
// First get the image into your data buffer
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
// for (int ii = 0 ; ii < count ; ++ii)
// {
CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
//CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
byteIndex += 4;
// UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
//}
free(rawData);
NSArray *result = [NSArray arrayWithObjects:
[NSNumber numberWithFloat:red],
[NSNumber numberWithFloat:green],
[NSNumber numberWithFloat:blue],nil];
return result;
}
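The comparison above counts exact component matches; for camera images, where noise makes exact matches rare, a tolerance-based per-pixel difference is usually what is wanted. A C sketch of that variant (the tolerance and the packed RGBA layout are my assumptions):

```c
#include <assert.h>
#include <stdlib.h>

/* Returns the percentage of pixels whose R, G or B component differs by
   more than `tolerance` between two same-sized RGBA buffers
   (4 bytes per pixel, alpha ignored). */
static double percent_different(const unsigned char *a, const unsigned char *b,
                                int pixelCount, int tolerance) {
    int diffs = 0;
    for (int i = 0; i < pixelCount; i++) {
        const unsigned char *pa = a + i * 4, *pb = b + i * 4;
        for (int c = 0; c < 3; c++) {          /* compare R, G, B only */
            if (abs((int)pa[c] - (int)pb[c]) > tolerance) { diffs++; break; }
        }
    }
    return diffs * 100.0 / pixelCount;
}
```

Extracting raw RGBA once per image (as `getRGBAsFromImage` does per pixel, which re-renders the whole image every call) and then running this loop over the two buffers is also dramatically faster.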

Crash on CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);

I am making video of screen but crashes on this line.
CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);
Note: It will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data
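When the buffer is not contiguous (its `bytesPerRow` includes padding beyond `width * 4`), a single whole-buffer copy reads the padding as if it were pixels; the fix is to copy row by row with the source stride. A minimal C sketch of the stride-aware copy (names are mine):

```c
#include <assert.h>
#include <string.h>

/* Copies a width x height RGBA image whose source rows are padded out to
   srcBytesPerRow into a tightly packed destination buffer. */
static void copy_rows(const unsigned char *src, size_t srcBytesPerRow,
                      unsigned char *dst, size_t width, size_t height) {
    size_t tightRow = width * 4;               /* 4 bytes per RGBA pixel */
    for (size_t y = 0; y < height; y++)
        memcpy(dst + y * tightRow, src + y * srcBytesPerRow, tightRow);
}
```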
So I am providing my code that grabs frames from the camera - maybe it will help. After grabbing the data, I put it on a queue for further processing. I had to remove some of the code as it was not relevant to you, so what you see here has pieces you should be able to use.
- (void)captureOutput:(AVCaptureVideoDataOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//NSLog(@"PE: value=%lld timeScale=%d flags=%x", prStamp.value, prStamp.timescale, prStamp.flags);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
NSRange captureRange; // set elsewhere in the original code (removed here for brevity)
/*Get information about the image*/
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
// Note Apple sample code cheats big time - the phone is big endian so this reverses the "apparent" order of bytes
CGContextRef newContext = CGBitmapContextCreate(NULL, width, captureRange.length, 8, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little); // Video in ARGB format
assert(newContext);
uint8_t *newPtr = (uint8_t *)CGBitmapContextGetData(newContext);
size_t offset = captureRange.location * bytesPerRow;
memcpy(newPtr, baseAddress + offset, captureRange.length * bytesPerRow);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CMTime prStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // when it was taken?
//CMTime deStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer); // now?
NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
[NSValue valueWithBytes:&saveState objCType:@encode(saveImages)], kState,
[NSValue valueWithNonretainedObject:(__bridge id)newContext], kImageContext,
[NSValue valueWithBytes:&prStamp objCType:@encode(CMTime)], kPresTime,
nil ];
dispatch_async(imageQueue, ^
{
// could be on any thread now
OSAtomicDecrement32(&queueDepth);
if(!isCancelled) {
saveImages state; [(NSValue *)[dict objectForKey:kState] getValue:&state];
CGContextRef context; [(NSValue *)[dict objectForKey:kImageContext] getValue:&context];
CMTime stamp; [(NSValue *)[dict objectForKey:kPresTime] getValue:&stamp];
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImageOrientation orient = state == saveOne ? UIImageOrientationLeft : UIImageOrientationUp;
UIImage *image = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:orient]; // imageWithCGImage: UIImageOrientationUp UIImageOrientationLeft
CGImageRelease(newImageRef);
NSData *data = UIImagePNGRepresentation(image);
// NSLog(#"STATE:[%d]: value=%lld timeScale=%d flags=%x", state, stamp.value, stamp.timescale, stamp.flags);
{
NSString *name = [NSString stringWithFormat:@"%d.png", num];
NSString *path = [[wlAppDelegate snippetsDirectory] stringByAppendingPathComponent:name];
BOOL ret = [data writeToFile:path atomically:NO];
//NSLog(@"WROTE %d err=%d w/time %f path:%@", num, ret, (double)stamp.value/(double)stamp.timescale, path);
if(!ret) {
++errors;
} else {
dispatch_async(dispatch_get_main_queue(), ^
{
if(num) [delegate progress:(CGFloat)num/(CGFloat)(MORE_THAN_ONE_REV * SNAPS_PER_SEC) file:path];
} );
}
++num;
}
} else NSLog(@"CANCELLED");
} );
}
}

How to decode the Google Directions API polylines field into lat long points in Objective-C for iPhone?

I want to draw routes on a map corresponding to directions JSON which I am getting through the Google Directions API: https://developers.google.com/maps/documentation/directions/start
I have figured out how to extract the latitude and longitude from the steps field, however this doesn't follow curvy roads very well. I think what I need is to decode the polyline information, I found Googles instructions on how to encode polylines: https://developers.google.com/maps/documentation/utilities/polylinealgorithm
I did find some code here for Android and also Javascript on decoding the polylines, for example:
Map View draw directions using google Directions API - decoding polylines
android get and parse Google Directions
But I can't find same for Objective-C iPhone code, can anybody help me with this? I'm sure I can do it myself if I have to, but it sure would save me some time if it's already available somewhere.
EDIT: the key here is being able to decode the base64 encoding on a character by character basis. To be more specific, I get something like this in JSON from Google which is encoded using base64 encoding among other things:
... "overview_polyline" : {
"points" : "ydelDz~vpN_@NO@QEKWIYIIO?YCS@WFGBEBICCAE?G@y@RKBEBEBAD?HTpB@LALALCNEJEFSP_@LyDv@aB\\GBMB"
},
...
Note: I should mention that this question refers to Google Maps API v1; it is much easier to do this in v2 using GMSPolyline polylineWithPath, as many answers below will tell you (thanks to @cdescours).
I hope it's not against the rules to link to my own blog post if it's relevant to the question, but I've solved this problem in the past. Stand-alone answer from linked post:
@implementation MKPolyline (MKPolyline_EncodedString)
+ (MKPolyline *)polylineWithEncodedString:(NSString *)encodedString {
const char *bytes = [encodedString UTF8String];
NSUInteger length = [encodedString lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
NSUInteger idx = 0;
NSUInteger count = length / 4;
CLLocationCoordinate2D *coords = calloc(count, sizeof(CLLocationCoordinate2D));
NSUInteger coordIdx = 0;
float latitude = 0;
float longitude = 0;
while (idx < length) {
char byte = 0;
int res = 0;
char shift = 0;
do {
byte = bytes[idx++] - 63;
res |= (byte & 0x1F) << shift;
shift += 5;
} while (byte >= 0x20);
float deltaLat = ((res & 1) ? ~(res >> 1) : (res >> 1));
latitude += deltaLat;
shift = 0;
res = 0;
do {
byte = bytes[idx++] - 0x3F;
res |= (byte & 0x1F) << shift;
shift += 5;
} while (byte >= 0x20);
float deltaLon = ((res & 1) ? ~(res >> 1) : (res >> 1));
longitude += deltaLon;
float finalLat = latitude * 1E-5;
float finalLon = longitude * 1E-5;
CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(finalLat, finalLon);
coords[coordIdx++] = coord;
if (coordIdx == count) {
NSUInteger newCount = count + 10;
coords = realloc(coords, newCount * sizeof(CLLocationCoordinate2D));
count = newCount;
}
}
MKPolyline *polyline = [MKPolyline polylineWithCoordinates:coords count:coordIdx];
free(coords);
return polyline;
}
@end
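The decoding loop above can also be exercised outside of MapKit. Here is a stand-alone C version of the same algorithm - accumulate zigzag-encoded varint deltas in units of 1e-5 degrees - checked against the example string from Google's encoding documentation:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Reads one zigzag-encoded varint from the polyline string, advancing *idx;
   returns the signed delta in 1e-5 degree units. */
static long next_delta(const char *s, size_t *idx) {
    long result = 0;
    int shift = 0, byte;
    do {
        byte = s[(*idx)++] - 63;              /* chars are offset by 63 */
        result |= (long)(byte & 0x1F) << shift;
        shift += 5;
    } while (byte >= 0x20);                   /* continuation bit set */
    return (result & 1) ? ~(result >> 1) : (result >> 1);
}

/* Decodes at most `max` points from the encoded string; deltas accumulate,
   so each point is relative to the previous one. Returns points decoded. */
static int decode_polyline(const char *s, double *lats, double *lons, int max) {
    size_t idx = 0;
    long lat = 0, lon = 0;
    int n = 0;
    while (s[idx] != '\0' && n < max) {
        lat += next_delta(s, &idx);
        lon += next_delta(s, &idx);
        lats[n] = lat * 1e-5;
        lons[n] = lon * 1e-5;
        n++;
    }
    return n;
}
```

The worked example in Google's documentation encodes (38.5, -120.2), (40.7, -120.95), (43.252, -126.453) as `_p~iF~ps|U_ulLnnqC_mqNvxq`@`, which makes a convenient unit test for any port of the decoder.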
The best and lightest answer should be to use the method provided by Google in the framework :
[GMSPolyline polylineWithPath:[GMSPath pathFromEncodedPath:encodedPath]]
If you are working with Google Map on iOS and want to draw the route including the polylines, google itself provides an easier way to get the GMSPath from polyline as,
GMSPath *pathFromPolyline = [GMSPath pathFromEncodedPath:polyLinePoints];
Here is the complete code:
+ (void)callGoogleServiceToGetRouteDataFromSource:(CLLocation *)sourceLocation toDestination:(CLLocation *)destinationLocation onMap:(GMSMapView *)mapView_{
NSString *baseUrl = [NSString stringWithFormat:@"http://maps.googleapis.com/maps/api/directions/json?origin=%f,%f&destination=%f,%f&sensor=false", sourceLocation.coordinate.latitude, sourceLocation.coordinate.longitude, destinationLocation.coordinate.latitude, destinationLocation.coordinate.longitude];
NSURL *url = [NSURL URLWithString:[baseUrl stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
NSLog(@"Url: %@", url);
NSURLRequest *request = [NSURLRequest requestWithURL:url];
[NSURLConnection sendAsynchronousRequest:request queue:[NSOperationQueue mainQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *connectionError) {
GMSMutablePath *path = [GMSMutablePath path];
NSError *error = nil;
NSDictionary *result = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
NSArray *routes = [result objectForKey:@"routes"];
NSDictionary *firstRoute = [routes objectAtIndex:0];
NSDictionary *leg = [[firstRoute objectForKey:@"legs"] objectAtIndex:0];
NSArray *steps = [leg objectForKey:@"steps"];
int stepIndex = 0;
CLLocationCoordinate2D stepCoordinates[1 + [steps count] + 1];
for (NSDictionary *step in steps) {
NSDictionary *start_location = [step objectForKey:@"start_location"];
stepCoordinates[++stepIndex] = [self coordinateWithLocation:start_location];
[path addCoordinate:[self coordinateWithLocation:start_location]];
NSString *polyLinePoints = [[step objectForKey:@"polyline"] objectForKey:@"points"];
GMSPath *polyLinePath = [GMSPath pathFromEncodedPath:polyLinePoints];
for (int p=0; p<polyLinePath.count; p++) {
[path addCoordinate:[polyLinePath coordinateAtIndex:p]];
}
if ([steps count] == stepIndex){
NSDictionary *end_location = [step objectForKey:@"end_location"];
stepCoordinates[++stepIndex] = [self coordinateWithLocation:end_location];
[path addCoordinate:[self coordinateWithLocation:end_location]];
}
}
GMSPolyline *polyline = nil;
polyline = [GMSPolyline polylineWithPath:path];
polyline.strokeColor = [UIColor grayColor];
polyline.strokeWidth = 3.f;
polyline.map = mapView_;
}];
}
+ (CLLocationCoordinate2D)coordinateWithLocation:(NSDictionary*)location
{
double latitude = [[location objectForKey:@"lat"] doubleValue];
double longitude = [[location objectForKey:@"lng"] doubleValue];
return CLLocationCoordinate2DMake(latitude, longitude);
}
Swift 3.0
let polyline = GMSPolyline(path: GMSPath.init(fromEncodedPath: encodedPolyline))
Python Implementation
This isn't in Objective-C, but this thread is where Google drops you if you're looking to decode polyline strings from Google Maps. In case anyone else needs it (much like I did), here's a Python implementation for decoding polyline strings. This is ported from the Mapbox JavaScript version; more info found on my repo page.
def decode_polyline(polyline_str):
index, lat, lng = 0, 0, 0
coordinates = []
changes = {'latitude': 0, 'longitude': 0}
# Coordinates have variable length when encoded, so just keep
# track of whether we've hit the end of the string. In each
# while loop iteration, a single coordinate is decoded.
while index < len(polyline_str):
# Gather lat/lon changes, store them in a dictionary to apply them later
for unit in ['latitude', 'longitude']:
shift, result = 0, 0
while True:
byte = ord(polyline_str[index]) - 63
index+=1
result |= (byte & 0x1f) << shift
shift += 5
if not byte >= 0x20:
break
if (result & 1):
changes[unit] = ~(result >> 1)
else:
changes[unit] = (result >> 1)
lat += changes['latitude']
lng += changes['longitude']
coordinates.append((lat / 100000.0, lng / 100000.0))
return coordinates
- (MKPolyline *)polylineWithEncodedString:(NSString *)encodedString {
    const char *bytes = [encodedString UTF8String];
    NSUInteger length = [encodedString lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
    NSUInteger idx = 0;
    NSUInteger count = length / 4;
    CLLocationCoordinate2D *coords = calloc(count, sizeof(CLLocationCoordinate2D));
    NSUInteger coordIdx = 0;
    float latitude = 0;
    float longitude = 0;
    while (idx < length) {
        char byte = 0;
        int res = 0;
        char shift = 0;
        do {
            byte = bytes[idx++] - 0x3F;
            res |= (byte & 0x1F) << shift;
            shift += 5;
        } while (byte >= 0x20);
        float deltaLat = ((res & 1) ? ~(res >> 1) : (res >> 1));
        latitude += deltaLat;
        shift = 0;
        res = 0;
        do {
            byte = bytes[idx++] - 0x3F;
            res |= (byte & 0x1F) << shift;
            shift += 5;
        } while (byte >= 0x20);
        float deltaLon = ((res & 1) ? ~(res >> 1) : (res >> 1));
        longitude += deltaLon;
        float finalLat = latitude * 1E-5;
        float finalLon = longitude * 1E-5;
        CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(finalLat, finalLon);
        coords[coordIdx++] = coord;
        if (coordIdx == count) {
            NSUInteger newCount = count + 10;
            coords = realloc(coords, newCount * sizeof(CLLocationCoordinate2D));
            count = newCount;
        }
    }
    MKPolyline *polyline = [MKPolyline polylineWithCoordinates:coords count:coordIdx];
    free(coords);
    return polyline;
}
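The `((res & 1) ? ~(res >> 1) : (res >> 1))` expression in the decoder undoes the zigzag encoding that maps signed coordinate deltas onto non-negative varints. A minimal sketch of that round trip (Python here purely for illustration; the Objective-C above does the same thing bitwise):

```python
def zigzag_encode(n):
    # Shift left one bit, then invert all bits for negative values,
    # so the sign ends up in the low bit.
    return ~(n << 1) if n < 0 else n << 1

def zigzag_decode(u):
    # The inverse -- the same branch the Objective-C decoder uses:
    # ~(u >> 1) when the low bit is set, (u >> 1) otherwise.
    return ~(u >> 1) if u & 1 else u >> 1

for delta in (0, 1, -1, 12020000, -12020000):
    assert zigzag_decode(zigzag_encode(delta)) == delta
```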
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id <MKOverlay>)overlay {
    MKPolylineView *polylineView = [[MKPolylineView alloc] initWithPolyline:(MKPolyline *)overlay];
    polylineView.strokeColor = [UIColor redColor];
    polylineView.lineWidth = 4.0;
    [self zoomToPolyLine:mapView polyline:(MKPolyline *)overlay animated:YES];
    return polylineView;
}
- (void)zoomToPolyLine:(MKMapView *)map polyline:(MKPolyline *)polyline animated:(BOOL)animated
{
    [map setVisibleMapRect:[polyline boundingMapRect] edgePadding:UIEdgeInsetsMake(25.0, 25.0, 25.0, 25.0) animated:animated];
}
- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation
{
    // NSLog(@"didUpdateToLocation: %@", newLocation);
    CLLocation *currentLocation = newLocation;
    if (currentLocation != nil) {
        currlong = [NSString stringWithFormat:@"%.8f", currentLocation.coordinate.longitude];
        currlt = [NSString stringWithFormat:@"%.8f", currentLocation.coordinate.latitude];
    }
    NSString *origin = [NSString stringWithFormat:@"%@,%@", currlt, currlong];
    // The destination is hard-coded here
    NSString *drivein = @"23.0472963,72.52757040000006";
    NSString *apikey = [NSString stringWithFormat:@"https://maps.googleapis.com/maps/api/directions/json?origin=%@&destination=%@", origin, drivein];
    NSURL *url = [NSURL URLWithString:apikey];
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    NSURLResponse *response;
    NSError *error;
    NSData *responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
    NSString *responseString = [[NSString alloc] initWithData:responseData encoding:NSUTF8StringEncoding];
    if (!error)
    {
        NSData *data = [responseString dataUsingEncoding:NSUTF8StringEncoding];
        NSDictionary *jsonResponse = [NSJSONSerialization JSONObjectWithData:data
                                                                     options:kNilOptions
                                                                       error:&error];
        NSArray *routesArray = [jsonResponse objectForKey:@"routes"];
        NSLog(@"route array %@", routesArray);
        if ([routesArray count] > 0)
        {
            NSDictionary *routeDict = [routesArray objectAtIndex:0];
            NSDictionary *routeOverviewPolyline = [routeDict objectForKey:@"overview_polyline"];
            NSString *points = [routeOverviewPolyline objectForKey:@"points"];
            MKPolyline *line = [self polylineWithEncodedString:points];
            [mapview addOverlay:line];
        }
    }
    MKCoordinateRegion viewRegion = MKCoordinateRegionMakeWithDistance(currentLocation.coordinate, 500, 500);
    MKCoordinateRegion adjustedRegion = [mapview regionThatFits:viewRegion];
    [mapview setRegion:adjustedRegion animated:YES];
    mapview.showsUserLocation = YES;
    MKPointAnnotation *point = [[MKPointAnnotation alloc] init];
    point.coordinate = currentLocation.coordinate;
    point.title = @"Your current location";
    point.subtitle = @"You are here!";
    [mapview addAnnotation:point];
    [locationmanger stopUpdatingLocation];
}
Here's how I do it in my directions app; keyPlace is your destination object.
- (void)getDirections {
    CLLocation *newLocation;// = currentUserLocation;
    MKPointAnnotation *annotation = [[[MKPointAnnotation alloc] init] autorelease];
    annotation.coordinate = CLLocationCoordinate2DMake(newLocation.coordinate.latitude, newLocation.coordinate.longitude);
    annotation.title = @"You";
    [mapView addAnnotation:annotation];
    CLLocationCoordinate2D endCoordinate;
    NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"https://maps.googleapis.com/maps/api/directions/json?origin=%f,%f&destination=%f,%f&sensor=false&mode=walking", newLocation.coordinate.latitude, newLocation.coordinate.longitude, keyPlace.lat, keyPlace.lon]];
    ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
    [request startSynchronous];
    if ([[request.responseString.JSONValue valueForKey:@"status"] isEqualToString:@"ZERO_RESULTS"]) {
        [[[[UIAlertView alloc] initWithTitle:@"Error"
                                     message:@"Could not route path from your current location"
                                    delegate:nil
                           cancelButtonTitle:@"Close"
                           otherButtonTitles:nil, nil] autorelease] show];
        self.navigationController.navigationBar.userInteractionEnabled = YES;
        return;
    }
    int points_count = 0;
    if ([[request.responseString.JSONValue objectForKey:@"routes"] count])
        points_count = [[[[[[request.responseString.JSONValue objectForKey:@"routes"] objectAtIndex:0] objectForKey:@"legs"] objectAtIndex:0] objectForKey:@"steps"] count];
    if (!points_count) {
        [[[[UIAlertView alloc] initWithTitle:@"Error"
                                     message:@"Could not route path from your current location"
                                    delegate:nil
                           cancelButtonTitle:@"Close"
                           otherButtonTitles:nil, nil] autorelease] show];
        self.navigationController.navigationBar.userInteractionEnabled = YES;
        return;
    }
    CLLocationCoordinate2D points[points_count * 2];
    int j = 0;
    NSArray *steps = nil;
    if (points_count && [[[[request.responseString.JSONValue objectForKey:@"routes"] objectAtIndex:0] objectForKey:@"legs"] count])
        steps = [[[[[request.responseString.JSONValue objectForKey:@"routes"] objectAtIndex:0] objectForKey:@"legs"] objectAtIndex:0] objectForKey:@"steps"];
    for (int i = 0; i < points_count; i++) {
        double st_lat = [[[[steps objectAtIndex:i] objectForKey:@"start_location"] valueForKey:@"lat"] doubleValue];
        double st_lon = [[[[steps objectAtIndex:i] objectForKey:@"start_location"] valueForKey:@"lng"] doubleValue];
        //NSLog(@"lat lon: %f %f", st_lat, st_lon);
        if (st_lat > 0.0f && st_lon > 0.0f) {
            points[j] = CLLocationCoordinate2DMake(st_lat, st_lon);
            j++;
        }
        double end_lat = [[[[steps objectAtIndex:i] objectForKey:@"end_location"] valueForKey:@"lat"] doubleValue];
        double end_lon = [[[[steps objectAtIndex:i] objectForKey:@"end_location"] valueForKey:@"lng"] doubleValue];
        if (end_lat > 0.0f && end_lon > 0.0f) {
            points[j] = CLLocationCoordinate2DMake(end_lat, end_lon);
            endCoordinate = CLLocationCoordinate2DMake(end_lat, end_lon);
            j++;
        }
    }
    // Use the number of points actually filled in, not points_count * 2,
    // since points with non-positive coordinates are skipped above.
    MKPolyline *polyline = [MKPolyline polylineWithCoordinates:points count:j];
    [mapView addOverlay:polyline];
}
#pragma mark - MapKit
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation {
    MKPinAnnotationView *annView = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"currentloc"] autorelease];
    annView.canShowCallout = YES;
    annView.animatesDrop = YES;
    return annView;
}
- (MKOverlayView *)mapView:(MKMapView *)mapView
            viewForOverlay:(id<MKOverlay>)overlay {
    MKPolylineView *overlayView = [[[MKPolylineView alloc] initWithOverlay:overlay] autorelease];
    overlayView.lineWidth = 5;
    overlayView.strokeColor = [UIColor purpleColor];
    overlayView.fillColor = [[UIColor purpleColor] colorWithAlphaComponent:0.5f];
    return overlayView;
}
In case anyone needs the decoding code in VBA, here is a (working) port:
Function decodeGeopoints(encoded)
    decodeGeopoints = ""
    ' This code is a port to VBA from code published here:
    ' http://blog.synyx.de/2010/06/routing-driving-directions-on-android-part-1-get-the-route/
    '//decoding
    'List poly = new ArrayList();
    '// replace two backslashes by one (some error from the transmission)
    'encoded = encoded.replace("\\", "\");
    encoded = Replace(encoded, "\\", "\")
    'int index = 0, len = encoded.length();
    Dim index As Long
    index = 0
    Dim leng As Long
    leng = Len(encoded)
    'int lat = 0, lng = 0;
    Dim lat As Long
    lat = 0
    Dim lng As Long
    lng = 0
    'while (index < len) {
    While (index < leng)
        'int b, shift = 0, result = 0;
        Dim b, shift, result As Long
        b = 0
        shift = 0
        result = 0
        'do {
        Do
            'b = encoded.charAt(index++) - 63;
            index = index + 1
            b = Asc(Mid(encoded, index, 1)) - 63
            'result |= (b & 0x1f) << shift;
            result = result Or ((b And 31) * (2 ^ shift))
            'shift += 5;
            shift = shift + 5
            '} while (b >= 0x20);
        Loop While (b >= 32)
        'int dlat = ((result & 1) != 0 ? ~(result >> 1) : (result >> 1));
        Dim dlat As Long
        If (result And 1) <> 0 Then
            dlat = Not Int(result / 2)
        Else
            dlat = Int(result / 2)
        End If
        'lat += dlat;
        lat = lat + dlat
        'shift = 0;
        shift = 0
        'result = 0;
        result = 0
        'do {
        Do
            'b = encoded.charAt(index++) - 63;
            index = index + 1
            b = Asc(Mid(encoded, index, 1)) - 63
            'result |= (b & 0x1f) << shift;
            result = result Or ((b And 31) * (2 ^ shift))
            'shift += 5;
            shift = shift + 5
            '} while (b >= 0x20);
        Loop While (b >= 32)
        'int dlng = ((result & 1) != 0 ? ~(result >> 1) : (result >> 1));
        Dim dlng As Long
        If (result And 1) <> 0 Then
            dlng = Not Int(result / 2)
        Else
            dlng = Int(result / 2)
        End If
        'lng += dlng;
        lng = lng + dlng
        'GeoPoint p = new GeoPoint((int) (((double) lat / 1E5) * 1E6), (int) (((double) lng / 1E5) * 1E6));
        Dim myLat, myLng As Double
        myLat = (lat / 100000)
        'myLat = myLat * 1000000
        myLng = (lng / 100000)
        'myLng = myLng * 1000000
        'poly.add(p);
        decodeGeopoints = decodeGeopoints & Comma2Dot(myLng) & "," & Comma2Dot(myLat) & ",0 "
        '}
    Wend
End Function
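The port replaces Java's bitwise `~(result >> 1)` with `Not Int(result / 2)`, which works because `result` is never negative at that point, so integer division matches the right shift, and in two's-complement arithmetic `Not n` equals `-n - 1`, the same as `~n`. A quick check of that equivalence (Python used here purely for illustration):

```python
# For non-negative result, result >> 1 equals result // 2, and the
# bitwise complement ~n is -n - 1, which is what VBA's Not returns
# for a Long.
for result in (0, 1, 9, 24040001, 2 ** 30 - 1):
    half = result >> 1
    assert half == result // 2
    assert ~half == -half - 1
```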
Google Maps already has a straightforward method for this, polylineWithPath:, so I prefer this snippet.
- (void)drawPathFrom:(CLLocation *)source toDestination:(CLLocation *)destination {
    NSString *baseUrl = [NSString stringWithFormat:@"http://maps.googleapis.com/maps/api/directions/json?origin=%f,%f&destination=%f,%f&sensor=true", source.coordinate.latitude, source.coordinate.longitude, destination.coordinate.latitude, destination.coordinate.longitude];
    NSURL *url = [NSURL URLWithString:[baseUrl stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
    NSLog(@"Url: %@", url);
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    [NSURLConnection sendAsynchronousRequest:request queue:[NSOperationQueue mainQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *connectionError) {
        if (!connectionError) {
            NSDictionary *result = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
            NSArray *routes = [result objectForKey:@"routes"];
            NSDictionary *firstRoute = [routes objectAtIndex:0];
            NSString *encodedPath = [firstRoute[@"overview_polyline"] objectForKey:@"points"];
            GMSPolyline *polyPath = [GMSPolyline polylineWithPath:[GMSPath pathFromEncodedPath:encodedPath]];
            polyPath.strokeColor = [UIColor redColor];
            polyPath.strokeWidth = 3.5f;
            polyPath.map = _mapView;
        }
    }];
}
Swift 4.2 / Swift 5
let gmsPolyline = GMSPolyline(path: GMSPath(fromEncodedPath: encodedPolyline))
gmsPolyline.map = map
This is my own revisitation of Sedate Alien's answer.
It is the same implementation save for removing duplicated code and using NSMutableData instead of manually allocating stuff.
@implementation MKPolyline (EncodedString)
+ (float)decodeBytes:(const char *)bytes atPos:(NSUInteger *)idx toValue:(float *)value {
    char byte = 0;
    int res = 0;
    char shift = 0;
    do {
        byte = bytes[(*idx)++] - 0x3F;
        res |= (byte & 0x1F) << shift;
        shift += 5;
    } while (byte >= 0x20);
    (*value) += ((res & 1) ? ~(res >> 1) : (res >> 1));
    return (*value) * 1E-5;
}
+ (MKPolyline *)polylineWithEncodedString:(NSString *)encodedString {
    const char *bytes = [encodedString UTF8String];
    NSUInteger length = [encodedString lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
    NSUInteger idx = 0;
    NSMutableData *data = [NSMutableData data];
    float lat = 0;
    float lon = 0;
    CLLocationCoordinate2D coords = CLLocationCoordinate2DMake(0, 0);
    while (idx < length) {
        coords.latitude = [self decodeBytes:bytes atPos:&idx toValue:&lat];
        coords.longitude = [self decodeBytes:bytes atPos:&idx toValue:&lon];
        [data appendBytes:&coords length:sizeof(CLLocationCoordinate2D)];
    }
    return [MKPolyline polylineWithCoordinates:(CLLocationCoordinate2D *)data.bytes count:data.length / sizeof(CLLocationCoordinate2D)];
}
@end
The other answers here seem to be about using Apple Maps; for Google Maps I found I had to make some modifications to @SedateAlien's great category.
MODIFIED CATEGORY
+ (GMSPolyline *)polylineWithEncodedString:(NSString *)encodedString {
    const char *bytes = [encodedString UTF8String];
    NSUInteger length = [encodedString lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
    NSUInteger idx = 0;
    NSUInteger count = length / 4;
    CLLocationCoordinate2D *coords = calloc(count, sizeof(CLLocationCoordinate2D));
    NSUInteger coordIdx = 0;
    float latitude = 0;
    float longitude = 0;
    while (idx < length) {
        char byte = 0;
        int res = 0;
        char shift = 0;
        do {
            byte = bytes[idx++] - 0x3F;
            res |= (byte & 0x1F) << shift;
            shift += 5;
        } while (byte >= 0x20);
        float deltaLat = ((res & 1) ? ~(res >> 1) : (res >> 1));
        latitude += deltaLat;
        shift = 0;
        res = 0;
        do {
            byte = bytes[idx++] - 0x3F;
            res |= (byte & 0x1F) << shift;
            shift += 5;
        } while (byte >= 0x20);
        float deltaLon = ((res & 1) ? ~(res >> 1) : (res >> 1));
        longitude += deltaLon;
        float finalLat = latitude * 1E-5;
        float finalLon = longitude * 1E-5;
        CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(finalLat, finalLon);
        coords[coordIdx++] = coord;
        if (coordIdx == count) {
            NSUInteger newCount = count + 10;
            coords = realloc(coords, newCount * sizeof(CLLocationCoordinate2D));
            count = newCount;
        }
    }
    GMSMutablePath *path = [[GMSMutablePath alloc] init];
    for (NSUInteger i = 0; i < coordIdx; i++)
    {
        [path addCoordinate:coords[i]];
    }
    GMSPolyline *polyline = [GMSPolyline polylineWithPath:path];
    free(coords);
    return polyline;
}
USAGE
// Here I make the call to the Google Maps API to get the routes between two points...
....
// Get the encoded array of points.
NSString *points = routes[@"routes"][0][@"overview_polyline"][@"points"];
// Use the modified category to get a polyline from the points.
GMSPolyline *polyline = [GMSPolyline polylineWithEncodedString:points];
// Add the polyline to the map.
polyline.strokeColor = [UIColor redColor];
polyline.strokeWidth = 10.f;
polyline.map = theMapView;
}
If anybody else is trying to do this in Swift, here's @RootCode's answer adapted to Swift (2.3):
let path = GMSMutablePath()
let steps = directionsToShowOnMap.steps
for (idx, step) in steps.enumerate() {
    path.addCoordinate(coordinateFromJson(step["start_location"]))
    if let polylinePoints = step["polyline"].string, subpath = GMSPath(fromEncodedPath: polylinePoints) {
        for c in 0 ..< subpath.count() {
            path.addCoordinate(subpath.coordinateAtIndex(c))
        }
    }
    if idx == steps.count - 1 {
        path.addCoordinate(coordinateFromJson(step["end_location"]))
    }
}
let polyline = GMSPolyline(path: path)
polyline.strokeColor = UIColor.blueColor()
polyline.strokeWidth = 3
polyline.map = mapView
and then:
private func coordinateFromJson(location: JSON) -> CLLocationCoordinate2D {
    return CLLocationCoordinate2DMake(location["lat"].double!, location["lng"].double!)
}