Record audio to NSData - iPhone

I have set up a TCP connection between two iPhones and I am able to send NSData packages between the two.
I would like to talk into the microphone and get the recording as an NSData object and send this to the other iPhone.
I have successfully used Audio Queue Services to record audio and play it back, but I have not managed to get the recording as NSData. I posted a question about converting the recording to NSData when using Audio Queue Services, but it has not gotten me any further.
Therefore I would like to hear whether there is another approach I can take to speak into the iPhone's microphone and get the input as raw data.
Update:
I need to send the packets continuously while recording, e.g. every second I will send the data recorded during that second.

Both Audio Queues and the RemoteIO Audio Unit will give you buffers of raw audio in real-time with fairly low latency. You can take the buffer pointer and the byte length given in each audio callback to create a new block of NSData. RemoteIO will provide the lowest latency, but may require the network messaging to be done outside the callback thread.
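For the RemoteIO route, here is a minimal sketch of an input callback; it is not from the original answer, and the SendAudioChunk() helper and the 16-bit mono format are assumptions you would replace with your own networking code and stream format.
// Hypothetical helper defined elsewhere in your networking code:
extern void SendAudioChunk(NSData *chunk);

static OSStatus InputCallback(void                       *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp       *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      inNumberFrames,
                              AudioBufferList            *ioData)
{
    AudioUnit remoteIOUnit = (AudioUnit)inRefCon; // pass the configured RemoteIO unit as inRefCon

    // Render the microphone samples into a temporary buffer (16-bit mono assumed;
    // a preallocated buffer is preferable in production to avoid malloc on the render thread).
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = malloc(bufferList.mBuffers[0].mDataByteSize);

    OSStatus status = AudioUnitRender(remoteIOUnit, ioActionFlags, inTimeStamp,
                                      inBusNumber, inNumberFrames, &bufferList);
    if (status == noErr) {
        // NSData copies the bytes, so the temporary buffer can be freed afterwards.
        NSData *chunk = [NSData dataWithBytes:bufferList.mBuffers[0].mData
                                       length:bufferList.mBuffers[0].mDataByteSize];
        // Do the socket write off the render thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            SendAudioChunk(chunk);
        });
    }
    free(bufferList.mBuffers[0].mData);
    return status;
}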

Using AVAudioRecorder like this:
NSURL *filePath = ...;        // your desired path for the file
NSDictionary *settings = ...; // the settings for the recorded file
NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:filePath settings:settings error:&error];
....
[recorder record];
....
[recorder stop];
....
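For reference, a possible settings dictionary (the values are illustrative, not from the original answer; choose a format and sample rate that suit your network bandwidth):
NSDictionary *settings = @{
    AVFormatIDKey          : @(kAudioFormatLinearPCM), // uncompressed PCM; an AAC format would be smaller
    AVSampleRateKey        : @44100.0,
    AVNumberOfChannelsKey  : @1,
    AVLinearPCMBitDepthKey : @16
};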
And then retrieve the NSData from the file:
NSData *audioData = [[NSData alloc] initWithContentsOfFile:filePath];
See the AVAudioRecorder class reference for details.
Edit:
In order to retrieve chunks of the recorded audio, you could use the subdataWithRange: method of NSData. Keep an offset from which you wish to retrieve the bytes, and have an NSTimer fire every second to collect the new bytes and send them (see the sketch below). You will need to find out how many bytes are recorded every second.
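A rough sketch of that idea, assuming the recorder writes to self.filePath and that sentOffset (NSUInteger) and sendTimer are properties you add; none of these names are from the original answer:
- (void)startSending
{
    self.sentOffset = 0;
    self.sendTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                      target:self
                                                    selector:@selector(sendNextChunk)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)sendNextChunk
{
    // Re-read the file AVAudioRecorder is writing and send only the new bytes.
    NSData *audioData = [NSData dataWithContentsOfFile:self.filePath];
    if (audioData.length <= self.sentOffset) {
        return; // nothing new recorded yet
    }
    NSData *chunk = [audioData subdataWithRange:
                        NSMakeRange(self.sentOffset, audioData.length - self.sentOffset)];
    self.sentOffset = audioData.length;
    // [self sendOverTCP:chunk]; // hypothetical: your existing TCP send
}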

This is what I did on the recording iPhone:
void AudioInputCallback(void                               *inUserData,
                        AudioQueueRef                       inAQ,
                        AudioQueueBufferRef                 inBuffer,
                        const AudioTimeStamp               *inStartTime,
                        UInt32                              inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording)
    {
        printf("Not recording, returning\n");
    }
    // if (inNumberPacketDescriptions == 0 && recordState->dataFormat.mBytesPerPacket != 0)
    // {
    //     inNumberPacketDescriptions = inBuffer->mAudioDataByteSize / recordState->dataFormat.mBytesPerPacket;
    // }
    printf("Writing buffer %lld\n", recordState->currentPacket);
    OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                            false,
                                            inBuffer->mAudioDataByteSize,
                                            inPacketDescs,
                                            recordState->currentPacket,
                                            &inNumberPacketDescriptions,
                                            inBuffer->mAudioData);
    NSLog(@"DATA = %@", [NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]);
    [[NSNotificationCenter defaultCenter] postNotificationName:@"Recording"
                                                        object:[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]];
    if (status == 0)
    {
        recordState->currentPacket += inNumberPacketDescriptions;
    }
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
I post a notification that helps send the data packets to the other iPhone on the network (a sketch of this hand-off follows below). But I do not know how to read the data on the other side; I am still trying to figure out how that works and will update once I do.
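For completeness, a sketch of that notification hand-off on the sending side (the observer registration, the sendRecordedChunk: method, and self.outputStream are assumptions, not the poster's code):
// In the class that owns the TCP connection, e.g. in its init:
// [[NSNotificationCenter defaultCenter] addObserver:self
//                                          selector:@selector(sendRecordedChunk:)
//                                              name:@"Recording"
//                                            object:nil];

- (void)sendRecordedChunk:(NSNotification *)note
{
    NSData *chunk = (NSData *)note.object;
    NSInteger written = [self.outputStream write:(const uint8_t *)chunk.bytes
                                       maxLength:chunk.length];
    if (written < 0) {
        NSLog(@"Failed to send audio chunk: %@", self.outputStream.streamError);
    }
}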

Related

How to send and receive live audio from one iPhone's mic to another iPhone's speaker?

I have managed to send and receive NSData from one iPhone to another using Bonjour. I wrote callbacks for recording audio with Audio Queue Services and, on the sender side, sent the NSData using the same AudioInputCallback shown above, which wraps inBuffer->mAudioData in an NSData object and posts it via NSNotificationCenter.
I receive the same data on the receiver iPhone, but I do not know how to read this NSData: how do I convert it back into audio and play it through the speaker? Do I need to use Audio Queues on the receiver side, or can I use a simple AVPlayer? Any help would be greatly appreciated.
To use the NSData, just use the NSData writeToFile:atomically: method. Pass in the NSTemporaryDirectory() path and append a path component like "temp.mp4". At that point you have a file path with valid data that you can use to load the audio asset; AVAudioPlayer supports loading from a URL.
OR with iOS 7.0+
You can initialize the AVAudioPlayer directly with the NSData object.
init(data data: NSData, fileTypeHint utiString: String?) throws
et al.
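A minimal sketch combining both options; this assumes receivedData holds a complete audio file (e.g. one written by AVAudioRecorder), not headerless raw PCM, and the variable names and the .caf extension are illustrative:
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"temp.caf"];
[receivedData writeToFile:path atomically:YES];

NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                                               error:&error];
// On iOS 7+ the temporary file can be skipped entirely:
// player = [[AVAudioPlayer alloc] initWithData:receivedData
//                                 fileTypeHint:AVFileTypeCoreAudioFormat
//                                        error:&error];
if (player) {
    [player prepareToPlay];
    [player play];
} else {
    NSLog(@"Could not create player: %@", error);
}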

AVQueuePlayer and audio session issue

I am going to try to give a detailed account of my issue.
I have an app in the store that uses in-app sound. Currently I am using AVQueuePlayer because some of the sounds overlap, and it lets them play in order. A lot of this sound is played while I am also playing embedded videos with AVPlayer, which may not matter at all. The problem is that I am getting reports of the sound stopping across the entire app. I am unable to reproduce this myself, but we have a lot of active users and some of them report it. Whenever it is reported and we determine it's not just the silent switch or the volume turned down, restarting the app always solves the problem. Occasionally we've heard of the sound magically returning with no changes. I have also had a couple of reports that it happens when using AirPlay or Bluetooth, but that may just be a complication of the problem or a coincidence.
Below is the code that I am using and maybe I'm just using a setting wrong or not using a setting that I should be but this code works 99.9% of the time.
I use ducking for all sounds I play to lower the volume of the user's iPod music.
Here is my initialization in appDidFinishLaunchingWithOptions (maybe it's not needed at all at startup, and sorry for mixing conventions):
AudioSessionInitialize (NULL, NULL, NULL, NULL);
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
AudioSessionSetProperty (kAudioSessionProperty_AudioCategory, sizeof (sessionCategory), &sessionCategory);
[[AVAudioSession sharedInstance] setActive:YES withFlags:AVAudioSessionSetActiveFlags_NotifyOthersOnDeactivation error:nil];
When I play a sound:
-(void)playSound:(NSString *)soundString
{
    OSStatus propertySetError = 0;
    UInt32 allowMixing = true;
    propertySetError |= AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
    [[AVAudioSession sharedInstance] setActive:YES withFlags:AVAudioSessionSetActiveFlags_NotifyOthersOnDeactivation error:nil];

    NSURL *thisUrl = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%@.caf", [[NSBundle mainBundle] resourcePath], soundString]];
    AVPlayerItem *item = [[AVPlayerItem alloc] initWithURL:thisUrl];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(reachedEndOfItem:)
                                                 name:AVPlayerItemDidPlayToEndTimeNotification
                                               object:item];
    if (_audioPlayerQueue == nil)
    {
        _audioPlayerQueue = [[AVQueuePlayer alloc] initWithItems:[NSArray arrayWithObject:item]];
    }
    else
    {
        if ([_audioPlayerQueue canInsertItem:item afterItem:nil])
        {
            [_audioPlayerQueue insertItem:item afterItem:nil];
        }
    }
    if (_audioPlayerQueue == nil)
    {
        NSLog(@"error");
    }
    else
    {
        [_audioPlayerQueue play];
    }
    return;
}
When the sound finishes playing:
- (void)reachedEndOfItem:(AVPlayerItem *)item
{
    [self performSelector:@selector(turnOffDucking) withObject:nil afterDelay:0.5f];
}

- (void)turnOffDucking
{
    NSLog(@"reached end");
    [[AVAudioSession sharedInstance] setActive:NO withFlags:AVAudioSessionSetActiveFlags_NotifyOthersOnDeactivation error:nil];
    OSStatus propertySetError = 0;
    UInt32 allowMixing = false;
    propertySetError |= AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
}
Any insight on what I am doing wrong, what settings I should be using for the audio session or known bugs/problems would be very helpful. I would be willing to look into using a different audio engine as this can have some slight performance issues when playing a video and having iPod music playing in tandem but I'd rather stick with this method of playing audio.
Thank you for any help you can provide.
-Ryan
I had a similar issue in the past and found that concurrent thread access was the reason.
Specifically, I think calling performSelector:withObject:afterDelay: could be the cause if another thread tries to modify the audio session at the moment the delay ends, because then two different threads are touching the audio session at once.
So I suggest checking your code again and making sure all calls to playSound: are made from the main thread. Also, it may be better to make sure the delayed call runs on the main thread (e.g. performSelectorOnMainThread: or a main-queue dispatch, as sketched below), as the docs say:
Invocations of blocks, key-value observers, or notification handlers are not guaranteed to be made on any particular thread or queue. Instead, AV Foundation invokes these handlers on threads or queues on which it performs its internal tasks.
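One way to do that, as a sketch only: it replaces the performSelector:afterDelay: call above with a main-queue dispatch, keeps the 0.5-second delay from the original code, and declares the parameter as the NSNotification the handler actually receives:
- (void)reachedEndOfItem:(NSNotification *)notification
{
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        // The audio session is now only ever touched from the main thread.
        [self turnOffDucking];
    });
}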

Upload video to FTP from iDevice

I am working on an app that lets users upload videos to our FTP server.
So far almost everything is done, but I have one issue: after users upload videos (.MOV), I cannot open and play the files.
The error message that QuickTime Player returns is "can't open because the movie's file format is not recognized".
In my code, I let users select videos using ALAssetsLibrary.
Then I load the video into an ALAsset object and, before starting the upload, load the video into an NSInputStream from the ALAsset. Here is the code:
ALAssetRepresentation *rep = [currentAsset defaultRepresentation];
Byte *buffer = (Byte*)malloc(rep.size);
NSUInteger buffered = [rep getBytes:buffer fromOffset:0 length:rep.size error:nil];
NSData *data = [NSData dataWithBytesNoCopy:buffer length:buffered freeWhenDone:YES];
iStream = [NSInputStream inputStreamWithData:data];
[iStream open];
The next step is to set up an NSOutputStream, open it, and handle the upload in the following stream-event callback:
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    switch (eventCode) {
        case NSStreamEventNone:
        {
            break;
        }
        case NSStreamEventOpenCompleted:
        {
            //opened connection
            NSLog(@"opened connection");
            break;
        }
        case NSStreamEventHasBytesAvailable:
        {
            // should never happen for the output stream
            [self stopSendWithStatus:@"should never happen for the output stream"];
            break;
        }
        case NSStreamEventHasSpaceAvailable:
        {
            // If we don't have any data buffered, go read the next chunk of data.
            NSInteger bufferSize = 65535;
            uint8_t *buffer = malloc(bufferSize);
            if (bufferOffset == bufferLimit) {
                NSInteger bytesRead = [iStream read:buffer maxLength:bufferSize];
                if (bytesRead == -1) {
                    [self stopSendWithStatus:@"file read error"];
                } else if (bytesRead == 0) {
                    [self stopSendWithStatus:nil];
                } else {
                    bufferOffset = 0;
                    bufferLimit = bytesRead;
                }
            }
            // If we're not out of data completely, send the next chunk.
            if (bufferOffset != bufferLimit) {
                NSInteger bytesWritten = [oStream write:&buffer[bufferOffset] maxLength:bufferLimit - bufferOffset];
                if (bytesWritten == -1) {
                    [self stopSendWithStatus:@"file write error"];
                } else {
                    bufferOffset += bytesWritten;
                }
            }
            //NSLog(@"available");
            break;
        }
        case NSStreamEventErrorOccurred:
        {
            //stream open error
            [self stopSendWithStatus:[[aStream streamError] description]];
            break;
        }
        case NSStreamEventEndEncountered: //ignore
            NSLog(@"end");
            break;
    }
}
No error occurs; the video file uploads to the FTP server with the correct file size and name, but it just can't be opened.
Does anybody have a clue?
I have written an NSInputStream implementation for streaming ALAsset objects: POSInputStreamLibrary. It doesn't read the whole 1 GB video into memory as your solution does, but reads the movie in chunks instead. Of course, this is not the only feature of POSBlobInputStream; more info is at my GitHub repository.
I know this probably isn't the answer you're looking for, but you should NOT use a direct FTP connection to let users upload files to your web server. It's insecure and slow compared with REST.
Instead, why not write a tiny bit of PHP to handle the upload and POST the file from the app via REST? For example:
$uploaddir = 'uploads/';
$file = basename($_FILES['file']['name']);
$uploadfile = $uploaddir . $file;
// Move the uploaded file out of PHP's temporary location:
move_uploaded_file($_FILES['file']['tmp_name'], $uploadfile);
I also recommend using AFNetworking to handle the POST request http://afnetworking.com/
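A hedged sketch of what that POST might look like, assuming AFNetworking 2.x; the endpoint URL, the "file" field name, and videoFileURL are placeholders that must match your server script:
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
[manager POST:@"http://example.com/upload.php"
   parameters:nil
constructingBodyWithBlock:^(id<AFMultipartFormData> formData) {
    // Append the exported movie file as the "file" field expected by the PHP above.
    [formData appendPartWithFileURL:videoFileURL name:@"file" error:nil];
} success:^(AFHTTPRequestOperation *operation, id responseObject) {
    NSLog(@"Upload finished");
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    NSLog(@"Upload failed: %@", error);
}];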
First of all, I guess you meant to reduce memory usage by converting the ALAsset to an NSInputStream rather than to NSData. But you convert it to NSData first and then convert that NSData to an NSInputStream; this doesn't make sense and won't reduce memory usage, because you have already loaded the whole video into memory as NSData.
So if you want to transfer your video via a stream in order to reduce memory pressure (or you have no choice because your video is 2 GB or more), you should use CFStreamCreateBoundPair to upload the file chunk by chunk, as described in the Apple iOS Developer Library:
For large blocks of constructed data, call CFStreamCreateBoundPair to create a pair of streams, then call the setHTTPBodyStream: method to tell NSMutableURLRequest to use one of those streams as the source for its body content. By writing into the other stream, you can send the data a piece at a time.
I have a Swift version of converting an ALAsset to an NSInputStream via CFStreamCreateBoundPair on GitHub. The key point is just as the documentation describes; another reference is this question.
Hope this helps.
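For orientation, an Objective-C sketch of the bound-pair idea (illustrative only; the linked GitHub project is the answerer's actual Swift implementation, and uploadURL is a placeholder):
CFReadStreamRef readStream = NULL;
CFWriteStreamRef writeStream = NULL;
CFStreamCreateBoundPair(kCFAllocatorDefault, &readStream, &writeStream, 64 * 1024);

NSInputStream *bodyStream = (__bridge_transfer NSInputStream *)readStream;
NSOutputStream *producerStream = (__bridge_transfer NSOutputStream *)writeStream;

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:uploadURL];
request.HTTPMethod = @"POST";
[request setHTTPBodyStream:bodyStream];

// Elsewhere, open producerStream and, from its NSStreamEventHasSpaceAvailable
// delegate event, copy the asset into it chunk by chunk using
// -[ALAssetRepresentation getBytes:fromOffset:length:error:].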

AVCaptureMovieFileOutput - how it saves video

I am using the following code to capture video and save it to the documents folder of my app:
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:NULL];
m_captureFileOutput = [[AVCaptureMovieFileOutput alloc] init];
captureSession = [[AVCaptureSession alloc] init];
[captureSession addInput:captureInput];
[captureSession addOutput:m_captureFileOutput];
[captureSession beginConfiguration];
[captureSession setSessionPreset:AVCaptureSessionPresetHigh];
[captureSession commitConfiguration];
[captureSession startRunning];
...some function that starts the recording process...
[m_captureFileOutput startRecordingToOutputFileURL:url recordingDelegate:self];
...some function that ends the recording process...
[m_captureFileOutput stopRecording];
The catch is that my goal is to be able to record up to 9 hours of video at a time. Practically, is it feasible to record a video of this length using this method? Does AVCaptureMovieFileOutput encode and save the video to disk in real time as it receives frames from the camera, or is the entire video buffered in RAM and only processed after [m_captureFileOutput stopRecording]; is called?
If this approach is not reasonable for recording video of such a long duration, what might be a reasonable alternative?
Thanks,
James
I'm pretty sure AVCaptureMovieFileOutput appends to the file and does not keep the whole recording in an in-memory buffer (or, if it does buffer, it flushes to the file before the buffer gets too large). Some evidence of this can be seen in the movieFragmentInterval property. I have also used this method to record large files, and it works fine; if it kept the file in memory, you would run out of memory pretty quickly under some presets (1280x720, for example).
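For very long recordings it may also be worth setting that property explicitly, so fragments keep getting written and the file stays readable even if recording is interrupted. A one-line sketch (the 10-second interval is just an example):
// Write a movie fragment every 10 seconds so the file stays playable even if
// the app is interrupted before stopRecording is called.
m_captureFileOutput.movieFragmentInterval = CMTimeMakeWithSeconds(10, 600);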

AVAssetReader and Audio Queue streaming problem

I have a problem using AVAssetReader to get samples from the iPod library and stream them via Audio Queue. I have not been able to find any such example, so I am trying to implement my own, but it seems the asset reader gets "screwed up" at the audio queue's callback function. Specifically, it fails on copyNextSampleBuffer, i.e. it returns NULL when it is not finished yet. I have made sure the pointer exists, so it would be great if anyone can help.
Below is the callback function code that I have used. This callback function 'works' when it is not called by the AudioQueue callback.
static void HandleOutputBuffer(void                *playerStateH,
                               AudioQueueRef        inAQ,
                               AudioQueueBufferRef  inBuffer)
{
    AQPlayerState *pplayerState = (AQPlayerState *)playerStateH;
    //if (pplayerState->mIsRunning == 0) return;
    UInt32 bytesToRead = pplayerState->bufferByteSize;
    [[NSNotificationCenter defaultCenter] postNotificationName:NOTIF_callsample object:nil];
    float *inData = (float *)inBuffer->mAudioData;
    int offsetSample = 0;
    // Loop until finished reading from the music data
    while (bytesToRead) {
        /* THIS IS THE PROBLEMATIC LINE */
        CMSampleBufferRef sampBuffer = [pplayerState->assetWrapper getNextSampleBuffer]; // the asset reader getting the next sample with copyNextSampleBuffer
        if (sampBuffer == nil) {
            NSLog(@"No more data to read from");
            // NSLog(@"aro status after null %d", [pplayerState->ar status]);
            AudioQueueStop(pplayerState->mQueue, false);
            pplayerState->mIsRunning = NO;
            return;
        }
        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
        AudioBuffer audioBuffer = audioBufferList.mBuffers[0];
        memcpy(inData + (2 * offsetSample), audioBuffer.mData, audioBuffer.mDataByteSize);
        bytesToRead = bytesToRead - audioBuffer.mDataByteSize;
        offsetSample = offsetSample + audioBuffer.mDataByteSize / 8;
    }
    inBuffer->mAudioDataByteSize = offsetSample * 8;
    AudioQueueEnqueueBuffer(pplayerState->mQueue, inBuffer, 0, 0);
}
I was getting this same mystifying error. Sure enough, "setting up" an audio session made the error go away. This is how I set up my audio session.
- (void)setupAudio {
    [[AVAudioSession sharedInstance] setDelegate:self];
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    NSError *activationError = nil;
    [[AVAudioSession sharedInstance] setActive:YES error:&activationError];
    NSLog(@"setupAudio ACTIVATION ERROR IS %@", activationError);
    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:0.1 error:&activationError];
    NSLog(@"setupAudio BUFFER DURATION ERROR IS %@", activationError);
}
From the Audio Session Programming Guide, under AVAudioSessionCategoryAmbient:
This category allows audio from the iPod, Safari, and other built-in applications to play while your application is playing audio.
Using an AVAssetReader probably uses iOS's hardware decoder, which blocks the use of the Audio Queue. Setting AVAudioSessionCategoryAmbient means the audio is rendered in software, allowing both to work at the same time; however, this has an impact on performance and battery life. (See the Audio Session Programming Guide under "How Categories Affect Encoding and Decoding".)
OK, I have somehow solved this weird error... Apparently it was because the audio session was not set up properly. Talk about a lack of documentation on this one...