So I set up an audio session:
AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionSetActive(true);
UInt32 audioCategory = kAudioSessionCategory_MediaPlayback; //for output audio
OSStatus tErr = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,sizeof(audioCategory),&audioCategory);
Then I set up either an AudioQueue or a RemoteIO unit to play back some audio straight from a file:
AudioQueueStart(mQueue, NULL);
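(mQueue above was created earlier with AudioQueueNewOutput, roughly like the sketch below; the stream format values and the callback name are placeholders rather than my exact code.)
static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    // fill inBuffer with PCM data and re-enqueue it with AudioQueueEnqueueBuffer()
}

AudioStreamBasicDescription format = {0};
format.mSampleRate       = 44100.0;
format.mFormatID         = kAudioFormatLinearPCM;
format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel   = 16;
format.mBytesPerFrame    = format.mChannelsPerFrame * (format.mBitsPerChannel / 8);
format.mFramesPerPacket  = 1;
format.mBytesPerPacket   = format.mBytesPerFrame;
AudioQueueNewOutput(&format, MyOutputCallback, NULL, NULL, NULL, 0, &mQueue);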
Once my audio is playing I can see the play icon in the status bar for my app. Next I set up an AVAssetReader:
AVAssetTrack* songTrack = [songURL.tracks objectAtIndex:0];
NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
// [NSNumber numberWithInt:AUDIO_SAMPLE_RATE],AVSampleRateKey, /*Not Supported*/
// [NSNumber numberWithInt: 2],AVNumberOfChannelsKey, /*Not Supported*/
[NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
nil];
NSError* error = nil;
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songURL error:&error];
// {
// AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];
// [reader addOutput:output];
// [output release];
// }
{
AVAssetReaderAudioMixOutput * readaudiofile = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:(songURL.tracks) audioSettings:outputSettingsDict];
[reader addOutput:readaudiofile];
[readaudiofile release];
}
return reader;
and when I call [reader startReading], the audio stops playing. In both the RemoteIO and AudioQueue cases the callback stops getting called.
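(For reference, I drain the reader's output roughly like the sketch below; the real buffer handling is longer, so treat this as illustrative only.)
AVAssetReaderOutput *output = [reader.outputs objectAtIndex:0];
CMSampleBufferRef sampleBuffer = NULL;
while (reader.status == AVAssetReaderStatusReading &&
       (sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    // copy 'length' bytes of PCM out of blockBuffer (e.g. with CMBlockBufferCopyDataBytes)
    // and hand them to the AudioQueue/RemoteIO callback's buffers
    CFRelease(sampleBuffer);
}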
If I add the mixing option:
UInt32 mixWithOthers = 1; // true: allow mixing with other audio
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(mixWithOthers), &mixWithOthers);
Then the play icon no longer appears after AudioQueueStart is called. I am also locked out of other features, since the phone no longer treats my app as the primary audio source.
Does anyone know a way I can use the AVAssetReader and still remain the Primary audio source?
As of iOS 5 this has been fixed. It still does not work in iOS 4 or below. MixWithOthers is not needed (can be set to false) and the AudioQueue/RemoteIO will continue to receive callbacks even if an AVAssetReader is reading.
I have searched for suitable answers to my question but have not found anything helpful so far.
I want to measure the decibel level of the environment. If a specific threshold is exceeded, the app shall play a sound or song file. Everything works fine so far, but I am having trouble keeping the app running in the background.
I have already added the attribute "Application does not run in the background" and set its value to "NO". I've read that one should add the "external-accessory" element to the "Required background modes". I added that too but still it does not work.
I am using the AVAudioRecorder to record the sound and the AVPlayer to play the sound/music file. First I used MPMusicPlayerController's iPodMusicPlayer, but it throws an exception when combined with the "Required background modes" attribute.
EDIT:
I am using Xcode 4.5 with iOS 6.
EDIT 2:
When I add the string voip to the "Required background modes", it seems to continue recording while in the background. But it still does not play the music file when in the background. I also tried to add the "audio" value, but it did not help.
EDIT 3:
I've consulted Apple's developer reference. It seems you have to configure your AVAudioSession; with that it seems to work (link to reference). But now I have trouble playing more than one file, because as soon as the first track has finished playing, the app goes into suspended mode again. As far as I know there is no way to initialize the AVPlayer or AVAudioPlayer with more than one file. I used the delegate method audioPlayerDidFinishPlaying:successfully: to set the next track, but it did not work.
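For what it's worth, here is a minimal sketch of the session configuration I mean, plus an AVQueuePlayer as one possible way to line up several items so playback can continue without waking the app again (firstURL and secondURL are placeholders):
NSError *sessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
[session setActive:YES error:&sessionError];
// AVQueuePlayer accepts several AVPlayerItems up front, so the next track can start
// without the app having to become active again.
NSArray *items = @[[AVPlayerItem playerItemWithURL:firstURL],
                   [AVPlayerItem playerItemWithURL:secondURL]];
AVQueuePlayer *queuePlayer = [AVQueuePlayer queuePlayerWithItems:items];
[queuePlayer play];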
EDIT 4: OK, one possibility is to avoid stopping the recorder, i.e. removing the [recorder stop] call so that it keeps recording even while music is played. It is a workaround that works, but I would still appreciate a better solution, one that does not need to keep the recorder running all the time.
the relevant code:
I initialize everything in the viewDidLoad method:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
//[MPMusicPlayerController applicationMusicPlayer];
NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
[NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
[NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
[NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey,
nil];
NSError *error;
recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
lowPassResults = -120.0;
thresholdExceeded = NO;
if (recorder) {
[recorder prepareToRecord];
recorder.meteringEnabled = YES;
[recorder record];
levelTimer = [NSTimer scheduledTimerWithTimeInterval: 0.03 target: self selector: @selector(levelTimerCallback:) userInfo: nil repeats: YES];
} else {
NSString* errorDescription = [error description];
NSLog(@"%@", errorDescription);
}
}
The levelTimer Callback that is called every 0.03 seconds:
- (void)levelTimerCallback:(NSTimer *)timer {
//refreshes the average and peak power meters (the meter uses a logarithmic scale, with -160 being complete silence and zero being maximum input)
[recorder updateMeters];
const double ALPHA = 0.05;
float averagePowerForChannel = [recorder averagePowerForChannel:0];
//adjust the referential
averagePowerForChannel = averagePowerForChannel / 0.6;
//converts the values
lowPassResults = ALPHA * averagePowerForChannel + (1.0 - ALPHA) * lowPassResults;
float db = lowPassResults + 120;
db = db < 0? 0: db;
if(db >= THRESHOLD)
{
[self playFile];
}
}
Finally the playFile method which plays the music file:
- (void) playFile {
NSString* title = @"(You came down) For a day";
NSString* artist = @"Forge";
NSMutableArray *songItemsArray = [[NSMutableArray alloc] init];
MPMediaQuery *loadSongsQuery = [[MPMediaQuery alloc] init];
MPMediaPropertyPredicate *artistPredicate = [MPMediaPropertyPredicate predicateWithValue:artist forProperty:MPMediaItemPropertyArtist];
MPMediaPropertyPredicate *titlePredicate = [MPMediaPropertyPredicate predicateWithValue:title forProperty:MPMediaItemPropertyTitle];
[loadSongsQuery addFilterPredicate:artistPredicate];
[loadSongsQuery addFilterPredicate:titlePredicate];
NSArray *itemsFromGenericQuery = [loadSongsQuery items];
if([itemsFromGenericQuery count])
[songItemsArray addObject: [itemsFromGenericQuery objectAtIndex:0]];
if([songItemsArray count])
{
MPMediaItemCollection *collection = [[MPMediaItemCollection alloc] initWithItems:songItemsArray];
if ([collection count]) {
MPMediaItem* mpItem = [[collection items]objectAtIndex:0];
NSURL* mediaUrl = [mpItem valueForProperty:MPMediaItemPropertyAssetURL];
AVPlayerItem* item = [AVPlayerItem playerItemWithURL:mediaUrl];
musicPlayer = [[AVPlayer alloc] initWithPlayerItem:item];
[musicPlayer play];
}
}
}
Can anybody help me with my problem? Did I miss anything else?
Try this,
AppDelegate.m
- (void)applicationDidEnterBackground:(UIApplication *)application
{
__block UIBackgroundTaskIdentifier task = 0;
task=[application beginBackgroundTaskWithExpirationHandler:^{
NSLog(#"Expiration handler called %f",[application backgroundTimeRemaining]);
[application endBackgroundTask:task];
task=UIBackgroundTaskInvalid;
}];
}
I am using AVAudioRecorder. If I tap on the record button, the recording should start/save only after recognising the voice.
- (void)viewDidLoad
{
recording = NO;
NSString * filePath = [NSHomeDirectory()
stringByAppendingPathComponent:@"Documents/recording.caf"];
NSDictionary *recordSettings =
[[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithFloat: 44100.0],AVSampleRateKey,
[NSNumber numberWithInt: kAudioFormatAppleLossless],
AVFormatIDKey,
[NSNumber numberWithInt: 1],
AVNumberOfChannelsKey,
[NSNumber numberWithInt: AVAudioQualityMax],
AVEncoderAudioQualityKey,nil];
AVAudioRecorder *newRecorder = [[AVAudioRecorder alloc]
initWithURL: [NSURL fileURLWithPath:filePath]
settings: recordSettings
error: nil];
[recordSettings release];
self.soundRecorder = newRecorder;
[newRecorder release];
self.soundRecorder.delegate = self;
NSLog(#"path is %#",filePath);
[super viewDidLoad];
}
- (IBAction) record:(id) sender {
if (recording) {
[self.soundRecorder stop];
[recordBtn setTitle:#"Record" forState:UIControlStateNormal];
recording = NO;
} else {
[self.soundRecorder record];
[recordBtn setTitle:#"Stop" forState:UIControlStateNormal];
recording = YES;
}
}
- (IBAction) play {
NSString * filePath = [NSHomeDirectory()
stringByAppendingPathComponent:@"Documents/recording.caf"];
AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL: [NSURL fileURLWithPath:filePath] error: nil];
newPlayer.delegate = self;
NSLog(#"playing file at url %# %d",[[newPlayer url] description],[newPlayer play]);
}
Please help me out.
That's a challenging goal you have. iOS doesn't include the smarts to recognize voice specifically; you will have to provide a filter of your own to do that. If you just want VOX-type support (i.e. start recording when a given level of audio is detected), that is easily done by monitoring audio levels using the Audio Toolbox framework.
If you need to recognize voice specifically you will need a specialized recognition filter to run your audio data through.
If you had such a filter you could take one of two approaches: a) Just record everything then post-process the resulting audio data to locate the time index at which voice is recognized and just ignore the data up to that point (copy the remaining data to another buffer perhaps) or b) use the Audio Toolbox Framework to monitor the recorded data in real time. Pass the data through your voice finding filter and only start buffering the data when your filter triggers.
Actual implementation is quite involved and too long to address here, but I have seen sample code in books and online that you could start from. I'm sorry I don't have any links to share at this time but will post any I come across in the near future.
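As a rough illustration of option (b), here is a sketch that gates recording on a metered level. It uses AVAudioRecorder metering rather than raw Audio Toolbox buffers purely to keep the example short; recorder, levelTimer, THRESHOLD_DB and -beginKeepingRecording are placeholder names.
- (void)startListening {
    recorder.meteringEnabled = YES;
    [recorder record];
    levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                  target:self
                                                selector:@selector(checkLevel:)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)checkLevel:(NSTimer *)timer {
    [recorder updateMeters];
    if ([recorder averagePowerForChannel:0] > THRESHOLD_DB) {
        // the level spiked: treat this as "voice detected" and keep data from here on
        [levelTimer invalidate];
        [self beginKeepingRecording]; // placeholder for whatever starts the kept recording
    }
}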
I think this might help you. Detecting a volume spike is all you need for your purposes, right?
http://mobileorchard.com/tutorial-detecting-when-a-user-blows-into-the-mic/
Pier.
I'm trying to make a library for iPhone, so I'm trying to init the camera with just one call.
The problem comes when I call "self" in this declaration:
"[captureOutput setSampleBufferDelegate:self queue:queue];"
because the compiler says: "self was not declared in this scope". What do I need to do to set the same class as an AVCaptureVideoDataOutputSampleBufferDelegate? At least point me in the right direction :P.
Thank you !!!
here is the complete function:
bool VideoCamera_Init(){
//Init capture from the camera and show the camera
/*We setup the input*/
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
error:nil];
/*We set up the output*/
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
/*While a frame is being processed in the -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, no other frames are added to the queue.
If you don't want this behaviour, set the property to NO */
captureOutput.alwaysDiscardsLateVideoFrames = YES;
/*We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting
in the queue, because it can cause memory issues). It is similar to the inverse of the maximum framerate.
In this example we set a min frame duration of 1/20 second, so a maximum framerate of 20 fps. We say that
we are not able to process more than 20 frames per second.*/
captureOutput.minFrameDuration = CMTimeMake(1, 20);
/*We create a serial queue to handle the processing of our frames*/
dispatch_queue_t queue;
queue = dispatch_queue_create("cameraQueue", NULL);
variableconnombrealeatorio= [[VideoCameraThread alloc] init];
[captureOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Set the video output to store frame in BGRA (It is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
/*And we create a capture session*/
AVCaptureSession * captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset= AVCaptureSessionPresetMedium;
/*We add input and output*/
[captureSession addInput:captureInput];
[captureSession addOutput:captureOutput];
/*We start the capture*/
[captureSession startRunning];
return TRUE;
}
I also wrote the following class, but the buffer is empty:
"
#import "VideoCameraThread.h"
CMSampleBufferRef bufferCamara;
#implementation VideoCameraThread
(void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
bufferCamera=sampleBuffer;
}
"
You are writing a C function, which has no concept of Objective-C classes, objects or the self identifier. You will need to modify your function to take a parameter that accepts the sampleBufferDelegate you want to use:
bool VideoCamera_Init(id<AVCaptureVideoDataOutputSampleBufferDelegate> sampleBufferDelegate) {
...
[captureOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
...
}
Or you could write your library with an Objective C object-oriented interface rather than a C-style interface.
You also have problems with memory management in this function. For instance, you are allocating an AVCaptureSession and assigning it to a local variable. After this function returns you will have no way of retrieving that AVCaptureSession so that you can release it.
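For illustration, a caller might then look like the sketch below (class and method names are hypothetical):
@interface CameraController : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@end

@implementation CameraController

- (void)startCamera {
    // pass the delegate in explicitly instead of referring to self inside the C function
    VideoCamera_Init(self);
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
    // process or copy the sample buffer here
}

@end
On the memory-management point, one option is to have VideoCamera_Init return the AVCaptureSession (or store it in a static variable) so that it can later be stopped and released.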
I want to record video and grab frames at the same time with my code.
I am using AVCaptureVideoDataOutput to grab frames and AVCaptureMovieFileOutput for video recording. But it doesn't work: I get error code -12780 when both run at the same time, although each works individually.
I searched for this problem but found no answer. Has anyone had the same experience, or an explanation?
It's been bothering me for quite a while.
Thanks.
I can't answer the specific question put, but I've been successfully recording video and grabbing frames at the same time using:
AVCaptureSession and AVCaptureVideoDataOutput to route frames into my own code
AVAssetWriter, AVAssetWriterInput and AVAssetWriterInputPixelBufferAdaptor to write frames out to an H.264 encoded movie file
That's without investigating audio. I end up getting CMSampleBuffers from the capture session and then pushing them into the pixel buffer adaptor.
EDIT: so my code looks more or less like this, with the bits you're having no problems with skimmed over, and ignoring issues of scope:
/* to ensure I'm given incoming CMSampleBuffers */
AVCaptureSession *captureSession = alloc and init, set your preferred preset/etc;
AVCaptureDevice *captureDevice = default for video, probably;
AVCaptureDeviceInput *deviceInput = input with device as above,
and attach it to the session;
AVCaptureVideoDataOutput *output = output for 32BGRA pixel format, with me as the
delegate and a suitable dispatch queue affixed.
/* to prepare for output; I'll output 640x480 in H.264, via an asset writer */
NSDictionary *outputSettings =
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:640], AVVideoWidthKey,
[NSNumber numberWithInt:480], AVVideoHeightKey,
AVVideoCodecH264, AVVideoCodecKey,
nil];
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:outputSettings];
/* I'm going to push pixel buffers to it, so will need an
AVAssetWriterInputPixelBufferAdaptor, to expect the same 32BGRA input as I've
asked the AVCaptureVideoDataOutput to supply */
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
[[AVAssetWriterInputPixelBufferAdaptor alloc]
initWithAssetWriterInput:assetWriterInput
sourcePixelBufferAttributes:
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
kCVPixelBufferPixelFormatTypeKey,
nil]];
/* that's going to go somewhere, I imagine you've got the URL for that sorted,
so create a suitable asset writer; we'll put our H.264 within the normal
MPEG4 container */
AVAssetWriter *assetWriter = [[AVAssetWriter alloc]
initWithURL:URLFromSomwhere
fileType:AVFileTypeMPEG4
error:you need to check error conditions,
this example is too lazy];
[assetWriter addInput:assetWriterInput];
/* we need to warn the input to expect real time data incoming, so that it tries
to avoid being unavailable at inopportune moments */
assetWriterInput.expectsMediaDataInRealTime = YES;
... eventually ...
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
[captureSession startRunning];
... elsewhere ...
- (void) captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// a very dense way to keep track of the time at which this frame
// occurs relative to the output stream, but it's just an example!
static int64_t frameNumber = 0;
if(assetWriterInput.readyForMoreMediaData)
[pixelBufferAdaptor appendPixelBuffer:imageBuffer
withPresentationTime:CMTimeMake(frameNumber, 25)];
frameNumber++;
}
... and, to stop, ensuring the output file is finished properly ...
[captureSession stopRunning];
[assetWriter finishWriting];
This is a Swift version of Tommy's answer.
// Set up the Capture Session
// Add the Inputs
// Add the Outputs
var outputSettings: [String : Any] = [
    AVVideoWidthKey : 640,
    AVVideoHeightKey : 480,
    AVVideoCodecKey : AVVideoCodecH264
]
var assetWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
var pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterInput,
    sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32BGRA)])
var assetWriter = try! AVAssetWriter(outputURL: URLFromSomwhere, fileType: AVFileTypeMPEG4) // handle the error properly in real code
assetWriter.add(assetWriterInput)
assetWriterInput.expectsMediaDataInRealTime = true
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: kCMTimeZero)
captureSession.startRunning()

// a very dense way to keep track of the time at which this frame
// occurs relative to the output stream, but it's just an example!
var frameNumber: Int64 = 0

func captureOutput(_ captureOutput: AVCaptureOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
        assetWriterInput.isReadyForMoreMediaData {
        pixelBufferAdaptor.append(imageBuffer, withPresentationTime: CMTimeMake(frameNumber, 25))
    }
    frameNumber += 1
}

captureSession.stopRunning()
assetWriter.finishWriting()
I don't guarantee 100% accuracy though, because I'm new to Swift.
Here's the problem: I am using AVCaptureVideoDataOutput to get video frames from the camera and AVAssetWriter to make a video from them. It works OK, but the video I get is upside down, because the default device orientation for my app is landscape left, not landscape right as is the default for AVCaptureVideoDataOutput. I'm trying to change the orientation via the AVCaptureConnection class, but isVideoOrientationSupported is always false. Is it somehow possible to fix this?
Here is some code:
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
error:nil];
/*We set up the output*/
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
captureOutput.minFrameDuration = CMTimeMake(1, 24); //Uncomment it to specify a minimum duration for each video frame
[captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// Set the video output to store frame in BGRA (It is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
/*And we create a capture session*/
self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = AVCaptureSessionPresetLow;
/*We add input and output*/
if ([self.captureSession canAddInput:captureInput])
{
[self.captureSession addInput:captureInput];
}
if ([self.captureSession canAddOutput:captureOutput])
{
[self.captureSession addOutput:captureOutput];
}
/*We add the preview layer*/
self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];
if ([self.prevLayer isOrientationSupported])
{
[self.prevLayer setOrientation:AVCaptureVideoOrientationLandscapeLeft];
}
self.prevLayer.frame = self.view.bounds;
self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer: self.prevLayer];
AVCaptureConnection *videoConnection = NULL;
[self.captureSession beginConfiguration];
for ( AVCaptureConnection *connection in [captureOutput connections] )
{
for ( AVCaptureInputPort *port in [connection inputPorts] )
{
if ( [[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
}
}
}
if([videoConnection isVideoOrientationSupported]) // **Here it is, its always false**
{
[videoConnection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
}
[self.captureSession commitConfiguration];
[self.captureSession startRunning];
Update: I figured out that when exporting video, AVAssetExportSession loses the preferredTransform info.
I ran into the same problem and poked around the AVCamDemo from WWDC. I don't know why (yet) but if you query your videoConnection right after you create all the inputs/outputs/connections then both isVideoOrientationSupported and supportsVideoOrientation return NO.
However, if you query supportsVideoOrientation or isVideoOrientationSupported at some later point (after the GUI is set up, for instance) then it will return YES. For instance, I query it right after the user clicks the record button, just before I call [[self movieFileOutput] startRecordingToOutputFileURL...]
Give it a try, works for me.
From here: http://developer.apple.com/library/ios/#qa/qa1744/_index.html#//apple_ref/doc/uid/DTS40011134
Currently, the capture outputs for a movie file
(AVCaptureMovieFileOutput) and still image (AVCaptureStillImageOutput)
support setting the orientation, but the data output for processing
video frames (AVCaptureVideoDataOutput) does not.
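Given that limitation, a workaround many people use (a suggestion, not something from the QA above) is to leave the data output alone and instead record the rotation on the asset writer input, so players apply it at playback time; assetWriterInput below stands for whatever AVAssetWriterInput you feed your frames into:
// Hypothetical values: rotate 180 degrees because the app runs landscape-left while
// the buffers arrive landscape-right; adjust the angle to your actual orientations.
assetWriterInput.transform = CGAffineTransformMakeRotation(M_PI);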