Play a WAV file retrieved from a database on the iPhone?

I have a lot of WAV files stored in SQLite3, but when I retrieve one of them, I can't play it. The retrieval code is:
const void *blobBytes = sqlite3_column_blob(statement, 0);
// copy the blob into an NSData; the raw pointer is only valid until the next sqlite3_step()
NSData *soundData = [NSData dataWithBytes:blobBytes length:sqlite3_column_bytes(statement, 0)];
mPlayer = [[AVAudioPlayer alloc] initWithData:soundData error:&error];
The data is stored as binary and it's there when I search for it using sqlite3.

Sorry. Never mind. I just compressed the data more and it works fine now. It seems the number of files is not as important as their size after all.


How to store a video while streaming it? [duplicate]

So far I know how to stream a video, and how to download it and stream it afterwards, but here's the tricky bit: stream it once, store it on the device, and in the future play it from the device.
Is that possible?
I'm not quite sure how you get your stream, but look into AVAssetWriter, AVAssetWriterInput, and AVAssetWriterInputPixelBufferAdaptor; as soon as you receive data you should be able to append it to the pixel buffer adaptor using:
appendPixelBuffer:withPresentationTime:
I'm not sure it will work for you, but with some fiddling you should be able to adapt your input to match this method. There is plenty of example code for setting up the writer.
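Something like the following should get you started: a rough, untested sketch of that writer setup, where outputURL, the 640x480 dimensions, and the pixelBuffer/presentationTime pair stand in for your own stream handling:

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL // assumed: a file URL in Documents
                                                  fileType:AVFileTypeQuickTimeMovie
                                                     error:&error];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          AVVideoCodecH264, AVVideoCodecKey,
                          [NSNumber numberWithInt:640], AVVideoWidthKey,
                          [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
input.expectsMediaDataInRealTime = YES;
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                             sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// then, for each frame you receive from the stream:
if (input.readyForMoreMediaData) {
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
}

// when the stream ends:
[input markAsFinished];
[writer finishWriting];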
It's quite easy to save the video. Do something similar to this:
//Saving Movie
NSMutableData *data = [[NSMutableData alloc] init];
NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data];
[archiver encodeObject:*MovieObject* forKey:@"MovieObjectDataKey"];
[archiver finishEncoding];
[[NSUserDefaults standardUserDefaults] setObject:data forKey:@"MovieObjectDefaultsDataKey"];
[archiver release];
[data release];

//Retrieving movie
NSData *savedMovieData = [[NSUserDefaults standardUserDefaults] objectForKey:@"MovieObjectDefaultsDataKey"];
if (savedMovieData != nil) {
    NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:savedMovieData];
    *MovieObject* = [[unarchiver decodeObjectForKey:@"MovieObjectDataKey"] retain];
    [unarchiver finishDecoding];
    // note: savedMovieData comes back autoreleased, so it must not be released here
    [unarchiver release];
} else {
    //Download stream of your movie
}
The only thing you really have to change there is *MovieObject*, once in each step.
I know what you want to achieve, but I only have a workaround. I had to implement the same behavior and ended up streaming the video from the server while downloading it alongside the stream. The next time the user tries to stream the video, I check whether it was downloaded to disk and play the local copy; otherwise I stream it again. In the normal case the video was downloaded properly and can be watched offline.
BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:somePath];
and
fileURLWithPath:isDirectory:
Initializes and returns a newly created NSURL object as a file URL with a specified path.
+ (id)fileURLWithPath:(NSString *)path isDirectory:(BOOL)isDir
Parameters:
path - The path that the NSURL object will represent. path should be a valid system path. If path begins with a tilde, it must first be expanded with stringByExpandingTildeInPath. If path is a relative path, it is treated as being relative to the current working directory. Passing nil for this parameter produces an exception.
isDir - A Boolean value that specifies whether path is treated as a directory path when resolving against relative path components. Pass YES if the path indicates a directory, NO otherwise.
Return Value: An NSURL object initialized with path.
Availability: Available in iOS 2.0 and later.
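Putting those two pieces together, a minimal sketch of the check-disk-first logic (the video.m4v file name and the playback hand-off are assumptions):

NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString *somePath = [docsDir stringByAppendingPathComponent:@"video.m4v"]; // assumed file name
if ([[NSFileManager defaultManager] fileExistsAtPath:somePath]) {
    // downloaded on a previous run: play the local copy
    NSURL *localURL = [NSURL fileURLWithPath:somePath isDirectory:NO];
    // hand localURL to your player here
} else {
    // not on disk yet: stream from the server and save a copy as it downloads
}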
You can't stream it and save it at the same time, especially with large video files, since the Apple docs say you must use a transport stream for HTTP Live Streaming.
ASIHTTPRequest might make your life easier.
ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
[request setDownloadDestinationPath:@"video.m4v"]; // better: build a full path inside the app's Documents directory
From your delegate, handle this:
- (void)request:(ASIHTTPRequest *)request didReceiveData:(NSData *)data;
Do whatever transcoding you need with data as it arrives, and push it to your AVAssetWriter or movie player layer in real time, whichever you are using. When you're done, the asset will still be saved on disk so you can get at it later.
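A rough sketch of that delegate method (fileHandle and the appendStreamData: hand-off are assumptions; also note that, as I recall, once you implement request:didReceiveData: yourself, ASIHTTPRequest no longer writes to downloadDestinationPath for you, so you have to save the bytes here):

- (void)request:(ASIHTTPRequest *)request didReceiveData:(NSData *)data {
    [self.fileHandle writeData:data];          // append the chunk to the local copy on disk
    [self.moviePlayer appendStreamData:data];  // hypothetical hand-off to your playback pipeline
}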

AudioToolbox/OpenAL ExtAudioFile to play compressed audio

I'm currently using OpenAL to play game music. It works fine, except that it doesn't work with anything other than raw WAV files. This means I end up with a ~9 MB soundtrack.
I'm new to OpenAL, and I'm using code directly from Apple's example (https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9) to get the buffer data.
Question: Is there any way to modify this function so it reads compressed audio and decodes it on the fly?
I'm not too worried about the audio file format, as long as it can be played and is compressed (like MP3, AAC, or CAF). The only reason I want to do this (obviously) is to reduce file size.
Edit: It seems the problem is not so much OpenAL as the method I'm using to get the buffer. The function at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9 uses AudioFileOpenURL and AudioFileReadBytes. Is there any way to get the framework to decode the audio for me, using ExtAudioFileOpenURL and ExtAudioFileRead?
I have tried the code here: https://devforums.apple.com/message/10678#10678, but I don't know what to make of it. The function I use to get the buffer is at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9, and I haven't really modified it, so that's what I need to build on.
I've started a bounty because I really need this; hopefully someone can point me in the right direction.
You'll need to use the audio services APIs to load other formats. Bear in mind that OpenAL only supports uncompressed PCM data, so any data you load needs to be decompressed at load time.
Here's some code that will load any format supported by iOS: https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/Support/OALAudioFile.m
If you want to stream compressed soundtrack-type audio, use AVAudioPlayer since it plays compressed audio straight from disk.
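For the soundtrack case, that can be as little as this (a minimal sketch; the soundtrack.m4a file name is an assumption):

NSString *path = [[NSBundle mainBundle] pathForResource:@"soundtrack" ofType:@"m4a"]; // assumed file
NSError *error = nil;
AVAudioPlayer *musicPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:&error];
[musicPlayer prepareToPlay];
[musicPlayer play]; // decodes the compressed file straight from disk, no OpenAL buffer needed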
You don't need any third-party library to open compressed files. With a little help from the AudioToolbox/AudioToolbox.h framework you can open and read the data of a .caf file, which is a very good choice by the way (better than MP3 or Ogg) in terms of performance, with minimal CPU impact during decompression. So when the data gets to OpenAL it is already PCM, ready to fill the buffers. Here is some sample code showing how you can achieve this:
- (void)prepareFiles:(NSString *)filePath {
    // get the full path of the file
    NSString *fileName = [[NSBundle mainBundle] pathForResource:filePath ofType:@"caf"];
    // open the file using the custom methods below
    AudioFileID fileID = [self openAudioFile:fileName];
    preparedAudioFileSize = [self audioFileSize:fileID];
    if (preparedAudioFile) {
        free(preparedAudioFile);
        preparedAudioFile = NULL;
    }
    preparedAudioFile = malloc(preparedAudioFileSize);
    // read the file's audio data into preparedAudioFile
    AudioFileReadBytes(fileID, false, 0, &preparedAudioFileSize, preparedAudioFile);
    // close the file
    AudioFileClose(fileID);
}

- (AudioFileID)openAudioFile:(NSString *)filePath {
    AudioFileID fileID;
    NSURL *url = [NSURL fileURLWithPath:filePath];
    OSStatus result = AudioFileOpenURL((CFURLRef)url, kAudioFileReadPermission, 0, &fileID);
    if (result != noErr) {
        NSLog(@"failed to open: %@", filePath);
    }
    return fileID;
}

- (UInt32)audioFileSize:(AudioFileID)fileDescriptor {
    UInt64 outDataSize = 0;
    UInt32 thePropSize = sizeof(UInt64);
    OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount,
                                           &thePropSize, &outDataSize);
    if (result != noErr) {
        NSLog(@"cannot find file size");
    }
    return (UInt32)outDataSize;
}
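If you specifically want the framework to decode compressed audio on the fly (the ExtAudioFile route asked about in the question's edit), the key is setting a PCM client data format. A rough, untested sketch, assuming a 16-bit stereo client format and a pre-allocated pcmBuffer of bufferSize bytes:

#include <AudioToolbox/AudioToolbox.h>

ExtAudioFileRef extFile = NULL;
ExtAudioFileOpenURL((CFURLRef)fileURL, &extFile); // fileURL: assumed NSURL to your compressed file

// describe the PCM format we want ExtAudioFile to hand us (decoding happens on the fly)
AudioStreamBasicDescription pcm = {0};
pcm.mSampleRate       = 44100.0;
pcm.mFormatID         = kAudioFormatLinearPCM;
pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
pcm.mChannelsPerFrame = 2;
pcm.mBitsPerChannel   = 16;
pcm.mBytesPerFrame    = 4;   // 2 channels * 2 bytes
pcm.mFramesPerPacket  = 1;
pcm.mBytesPerPacket   = 4;
ExtAudioFileSetProperty(extFile, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);

// read decoded PCM into the buffer
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mDataByteSize = bufferSize;
bufferList.mBuffers[0].mData = pcmBuffer;
UInt32 frameCount = bufferSize / pcm.mBytesPerFrame;
ExtAudioFileRead(extFile, &frameCount, &bufferList); // frameCount comes back as frames actually read
ExtAudioFileDispose(extFile);
// pcmBuffer can now be handed to alBufferData() as AL_FORMAT_STEREO16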
Based on Karl's reply above, I made a minimal single C++ function which opens a file and gives you back a buffer of PCM audio (suitable for OpenAL) and all the info you need to create an OpenAL sound (format, sample rate, buffer size, etc.).
The two files you need are here:
https://gist.github.com/ofTheo/5171369
hope it helps!
theo
See if this works: http://kcat.strangesoft.net/openal-tutorial.html
You might try using a third-party library to load an MP3 or Ogg file into a char * buffer, and then hand that buffer to OpenAL. That would solve the file-size problem.
For Ogg, you should find the decoding libraries on the Ogg project's website.
For MP3, I honestly don't know where to find a lightweight library that can do this, but one should exist.

ZipArchive memory problems on iPhone for large archive

I am trying to compress multiple files into a single zip archive, and I am running into low-memory warnings. Since the complete zip file is loaded into memory, I guess that's the problem. Is there a way to manage the compression/decompression better using ZipArchive so that not all of the data is in memory at once?
Thanks!
After doing some investigation into alternatives to ZipArchive, I found another project called Objective-Zip that seems to be a little better than ZipArchive. Here is the link:
http://code.google.com/p/objective-zip/
The API is quite simple. One thing I ran into was that in the beginning I was reading data and never releasing it, so if you are adding a bunch of large files to the zip, remember to release the data. Here is a little code I used:
ZipFile *zipFile = [[ZipFile alloc] initWithFileName:archivePath mode:ZipFileModeCreate];
for (NSString *path in subpaths) {
    // use the full on-disk path of each file here
    NSData *data = [[NSData alloc] initWithContentsOfFile:path];
    ZipWriteStream *stream = [zipFile writeFileInZipWithName:path compressionLevel:ZipCompressionLevelNone];
    [stream writeData:data];
    [stream finishedWriting];
    [data release];
}
[zipFile close];
[zipFile release];
I hope this is helpful for anyone who runs into the same issue.
An easier way to deal with this is to simply change how ZipArchive reads the file into an NSData. Just change the following code:
data = [ NSData dataWithContentsOfFile:file ];
to
data = [ NSData dataWithContentsOfMappedFile:file ];
That causes the OS to read the file in a memory-mapped way. It uses far less memory because it reads from the file as needed rather than loading it all into memory at once.

How can I locally save an XML file on an iPhone for when the device is offline?

My app is accessing data from a remote XML file. I have no issues receiving and parsing the data. However, I'd like to take the most current XML data and store it locally so that, in the event the user's internet service isn't available, the local data from the previous load is used.
Is there a simple way to do this? Or am I going to have to write an algorithm that builds a plist as the XML data is parsed? That seems rather tedious... I was wondering if there was an easier way to save the data as a whole.
Thanks in advance!
I don't know what format your XML data is in as you receive it, but using NSData might be helpful here, because it has very easy-to-use methods for reading/writing data from either a URL or a pathname.
For example:
NSURL *url = [NSURL URLWithString:@"http://www.fubar.com/sample.xml"];
NSData *data = [NSData dataWithContentsOfURL:url]; // load XML data from the web

// construct a path within our Documents directory
NSString *applicationDocumentsDir =
    [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString *storePath = [applicationDocumentsDir stringByAppendingPathComponent:@"sample.xml"];

// write to file atomically (using a temp file)
[data writeToFile:storePath atomically:YES];
You can also easily convert an NSData object to/from a raw buffer (pointer/length) in memory, so if your data is already downloaded you might do:
NSData *data = [NSData dataWithBytes:ptr length:len]; // Load XML data from memory
// ... continue as above, to write the NSData object to file in Documents dir
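Then, when the device is offline, read the saved copy back with the same class (a minimal sketch, reusing storePath from above):

NSData *cachedData = [NSData dataWithContentsOfFile:storePath];
if (cachedData != nil) {
    // parse cachedData with the same XML parsing code you use for the live download
}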

AVAudioPlayer - Metering - Want to build a waveform (graph)

I need to build a visual graph that represents voice levels (dB) in a recorded file. I tried to do it this way:
NSError *error = nil;
AVAudioPlayer *meterPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:self.recording.fileName]
                                                                    error:&error];
if (error) {
    _lcl_logger(lcl_cEditRecording, lcl_vError, @"Cannot initialize AVAudioPlayer with file %@ due to: %@ (%@)",
                self.recording.fileName, error, error.userInfo);
} else {
    [meterPlayer prepareToPlay];
    meterPlayer.meteringEnabled = YES;
    for (NSTimeInterval i = 0; i <= meterPlayer.duration; ++i) {
        meterPlayer.currentTime = i;
        [meterPlayer updateMeters];
        float averagePower = [meterPlayer averagePowerForChannel:0];
        _lcl_logger(lcl_cEditRecording, lcl_vTrace, @"Second: %f, Level: %f dB", i, averagePower);
    }
}
[meterPlayer release];
It would have been great if this worked, but it didn't: I always get -160 dB. Any other ideas on how to implement this?
UPD: Here is what I got finally:
(waveform screenshot: http://img22.imageshack.us/img22/5778/waveform.png)
I just want to help others who come to this same question and spend a lot of time searching. To save your time, I'm posting my answer. I dislike that some people here treat this as some kind of secret...
After searching through the articles about ExtAudioFile, Audio Queues, and AVFoundation,
I realised I should use AVFoundation. The reason is simple: it is the newest framework, and it is Objective-C rather than C++ in style.
So the steps are not complicated:
1. Create an AVAsset from the audio file.
2. Create an AVAssetReader from the AVAsset.
3. Get the AVAssetTrack from the AVAsset.
4. Create an AVAssetReaderTrackOutput from the AVAssetTrack.
5. Add the AVAssetReaderTrackOutput to the AVAssetReader to start reading out the audio data.
6. From the AVAssetReaderTrackOutput you copyNextSampleBuffer one by one (a loop that reads all the data out).
7. Each copyNextSampleBuffer gives you a CMSampleBufferRef, from which CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer gets you an AudioBufferList. An AudioBufferList is an array of AudioBuffers, and each AudioBuffer is a chunk of audio data stored in its mData member.
You can implement the above with ExtAudioFile as well, but I think the AVFoundation approach is easier.
So, the next question: what do you do with mData? Note that when you create the AVAssetReaderTrackOutput you can specify its output format, so we specify LPCM output.
Then the mData you finally get is actually an array of float amplitude values.
Easy, right? Though it took me a lot of time to organize this from pieces here and there. (A sketch of the whole read loop follows below.)
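To make the steps concrete, here is a rough, untested sketch of that reading loop; fileURL and the waveform hand-off are assumptions, and error handling is omitted:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil]; // fileURL: your audio file
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

// ask for decoded LPCM so mData contains raw amplitude samples
NSDictionary *settings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                                                     forKey:AVFormatIDKey];
AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                                               outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer])) {
    AudioBufferList bufferList;
    CMBlockBufferRef blockBuffer = NULL;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &bufferList,
        sizeof(bufferList), NULL, NULL,
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
    for (UInt32 i = 0; i < bufferList.mNumberBuffers; i++) {
        // bufferList.mBuffers[i].mData holds the decoded samples;
        // scan them here to build the waveform
    }
    CFRelease(blockBuffer);
    CFRelease(sampleBuffer);
}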
Two useful resources to share:
Read this article to learn the basic terms and concepts: https://www.mikeash.com/pyblog/friday-qa-2012-10-12-obtaining-and-interpreting-audio-data.html
Sample code: https://github.com/iluvcapra/JHWaveform
You can copy most of the code you need from this sample directly and use it for your own purposes.
I haven't used it myself, but Apple's avTouch iPhone sample has bar graphs powered by AVAudioPlayer, and you can easily check how they do it.
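If streaming-speed analysis is acceptable, the avTouch approach boils down to metering while actually playing, polling from a timer. A sketch (meterPlayer and the view update are assumptions):

- (void)startMetering {
    self.meterPlayer.meteringEnabled = YES;
    [self.meterPlayer play];
    [NSTimer scheduledTimerWithTimeInterval:0.03
                                     target:self
                                   selector:@selector(meterTick:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)meterTick:(NSTimer *)timer {
    [self.meterPlayer updateMeters];
    float db = [self.meterPlayer averagePowerForChannel:0]; // ranges from -160 dB (silence) up to 0 dB
    // append db to the data backing your bar graph or waveform view
}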
I don't think you can use AVAudioPlayer given your constraints. Even if you could get it to "start" without actually playing the sound file, it would only help you build the graph as fast as the audio file streams. What you're talking about is static analysis of the sound, which requires a much different approach: you'll need to read the file in yourself and parse it manually. I don't think there's a quick solution using anything in the SDK.
OK guys, it seems I'm going to answer my own question again: http://www.supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/ Not a lot of concrete detail, but at least you'll know which Apple docs to read.