Hardware problem? iPhone 12 spends more time in VTCompressionSessionEncodeFrame and AVAssetWriter than iPhone XS

This question is similar to https://developer.apple.com/forums/thread/127613
In my demo, the average execution time of VTCompressionSessionEncodeFrame on an iPhone 12 is 10 ms, while an iPhone XS only takes 6 ms. If I decrease the frequency of calling that function, the execution time also decreases, but the total time (delay + execution time) stays roughly the same: about 11 ms on the iPhone 12 and 7 ms on the iPhone XS.
I have tried various configurations of the VTCompressionSession, but the result (iPhone 12 slower than iPhone XS) never changes!
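For reference, a minimal sketch of how such per-frame timing can be measured (this is a measurement harness, not part of the encoder configuration; pixelBuffer and pts are assumed to come from the capture pipeline, and CACurrentMediaTime() is declared in QuartzCore):

    CFTimeInterval start = CACurrentMediaTime();
    VTEncodeInfoFlags infoFlags = 0;
    OSStatus status = VTCompressionSessionEncodeFrame(compression_session_,
                                                      pixelBuffer,    // CVImageBufferRef from capture
                                                      pts,            // presentation timestamp
                                                      kCMTimeInvalid, // frame duration (optional)
                                                      NULL,           // per-frame properties
                                                      NULL,           // sourceFrameRefcon
                                                      &infoFlags);
    CFTimeInterval execution_ms = (CACurrentMediaTime() - start) * 1000.0; // "execution time"
    // The "delay" component is the gap from `start` until encodeComplete fires
    // for this frame, so total = delay + execution time.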
Here is the configuration of the VTCompressionSession:
bool VideoToolboxEncoder::InitCompressionSession() {
    CFMutableDictionaryRef sourceImageBufferAttributes = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, NULL, NULL);
    CFDictionarySetValue(sourceImageBufferAttributes, kCVPixelBufferOpenGLESCompatibilityKey, kCFBooleanTrue);
    // An empty IOSurface properties dictionary requests IOSurface-backed pixel buffers.
    CFDictionaryRef io_surface_value = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0, NULL, NULL);
    CFDictionarySetValue(sourceImageBufferAttributes, kCVPixelBufferIOSurfacePropertiesKey, io_surface_value);
    OSType target_pixelformat = kCVPixelFormatType_420YpCbCr8Planar;
    // dict_set_i32 is a project helper wrapping CFNumberCreate + CFDictionarySetValue.
    dict_set_i32(sourceImageBufferAttributes,
                 kCVPixelBufferPixelFormatTypeKey, target_pixelformat);
    dict_set_i32(sourceImageBufferAttributes,
                 kCVPixelBufferBytesPerRowAlignmentKey, 16);
    CFDictionarySetValue(sourceImageBufferAttributes, kCVPixelBufferWidthKey, CFNumberCreate(NULL, kCFNumberIntType, &codec_settings.width));
    CFDictionarySetValue(sourceImageBufferAttributes, kCVPixelBufferHeightKey, CFNumberCreate(NULL, kCFNumberIntType, &codec_settings.height));
    OSStatus status = VTCompressionSessionCreate(NULL,
                                                 codec_settings.width,
                                                 codec_settings.height,
                                                 kCMVideoCodecType_HEVC,
                                                 NULL,
                                                 sourceImageBufferAttributes,
                                                 NULL,
                                                 encodeComplete,
                                                 this,
                                                 &compression_session_);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_AllowFrameReordering, kCFBooleanFalse);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_ExpectedFrameRate, (__bridge CFTypeRef)@(29.97));
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_MaxKeyFrameInterval,
                                  (__bridge CFTypeRef)@(codec_settings.gop_size));
    CFStringRef profileRef = kVTProfileLevel_HEVC_Main_AutoLevel;
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_ProfileLevel, profileRef);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_AllowOpenGOP, kCFBooleanFalse);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_AverageBitRate, (__bridge CFTypeRef)@(codec_settings.bitrate));
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_DataRateLimits, (__bridge CFTypeRef)@[@20000000, @2]);
    VTCompressionSessionPrepareToEncodeFrames(compression_session_);
    return status == noErr;
}
I also tried exporting video with AVAssetWriter and got the same result.
It's unexpected that VideoToolbox performance would decrease on a newer iPhone.
I want to figure out whether this problem is due to my configuration or to the iPhone's hardware.
Has anyone encountered the same problem? I would be really grateful for any help with this issue!

Related

Importing a client certificate into the iPhone's keychain

I am writing an application that communicates with a server which requires the client to authenticate itself using a client certificate. I need to extract the certificate from a .p12 file in the application bundle and add it to the application keychain.
I've been trying to figure out how to get it working from Apple's "Certificate, Key, and Trust Services Tasks for iOS", but to me it seems incomplete and does not specify how to add anything to the keychain.
I am quite lost and any help is appreciated, thanks in advance!
"Certificate, Key, and Trust Services Tasks for iOS" does contain sufficient information to extract certificate from a .p12 file.
from listing 2-1 demonstrate how you can extract SecIdentityRef
from listing 2-2 second line (// 1) shows how you can copy
SecCertificateRef out of SecIdentityRef.
example loading p12 file, extract certificate, install to keychain.
(error handling and memory management was not included)
NSString *password = @"Your-P12-File-Password";
NSString *path = [[NSBundle mainBundle] pathForResource:@"Your-P12-File" ofType:@"p12"];

// prepare password
CFStringRef cfPassword = CFStringCreateWithCString(NULL,
                                                   password.UTF8String,
                                                   kCFStringEncodingUTF8);
const void *keys[] = { kSecImportExportPassphrase };
const void *values[] = { cfPassword };
CFDictionaryRef optionsDictionary = CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                                                       NULL, NULL);

// prepare p12 file content
NSData *fileContent = [[NSData alloc] initWithContentsOfFile:path];
CFDataRef cfDataOfFileContent = (__bridge CFDataRef)fileContent;

// extract p12 file content into items; SecPKCS12Import allocates the returned array
CFArrayRef items = NULL;
OSStatus status = SecPKCS12Import(cfDataOfFileContent,
                                  optionsDictionary,
                                  &items);
// TODO: error handling on status

// extract identity
CFDictionaryRef yourIdentityAndTrust = (CFDictionaryRef)CFArrayGetValueAtIndex(items, 0);
const void *tempIdentity = CFDictionaryGetValue(yourIdentityAndTrust,
                                                kSecImportItemIdentity);
SecIdentityRef yourIdentity = (SecIdentityRef)tempIdentity;

// get certificate from identity
SecCertificateRef yourCertificate = NULL;
status = SecIdentityCopyCertificate(yourIdentity, &yourCertificate);

// at last, install certificate into keychain
const void *keys2[] = { kSecValueRef, kSecClass };
const void *values2[] = { yourCertificate, kSecClassCertificate };
CFDictionaryRef dict = CFDictionaryCreate(kCFAllocatorDefault, keys2, values2,
                                          2, NULL, NULL);
status = SecItemAdd(dict, NULL);
// TODO: error handling on status
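If you later need to read the certificate back out of the keychain, a sketch along these lines should work (a hypothetical single-item query via SecItemCopyMatching; same error-handling caveats as above):

    const void *queryKeys[] = { kSecClass, kSecReturnRef, kSecMatchLimit };
    const void *queryValues[] = { kSecClassCertificate, kCFBooleanTrue, kSecMatchLimitOne };
    CFDictionaryRef query = CFDictionaryCreate(kCFAllocatorDefault, queryKeys, queryValues,
                                               3, NULL, NULL);
    SecCertificateRef foundCertificate = NULL;
    status = SecItemCopyMatching(query, (CFTypeRef *)&foundCertificate);
    // TODO: error handling on status; CFRelease(query) when done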

AudioQueueEnqueueBuffer failing

My calls to AudioQueueEnqueueBuffer are failing with OSStatus -66686. I've never seen this error code before and can find no information on it anywhere. Converting it to an NSError and printing its description gives me the following console output:
Error: Error Domain=NSOSStatusErrorDomain Code=-66686 "The operation couldn’t be completed. (OSStatus error -66686.)"
Here's all my relevant AudioQueue initialization code:
AudioQueueRef audioQueue;
AudioQueueBufferRef aq_buffer[3];

AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = 44100;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;

OSStatus err;
err = AudioQueueNewOutput(&streamFormat, AudioPlayCallback, self,
                          CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &audioQueue);

// Start playback
err = AudioQueueStart(audioQueue, NULL);

// Enqueue buffers
for (int i = 0; i < 3; i++) {
    err = AudioQueueAllocateBuffer(audioQueue, 1024, &aq_buffer[i]);
    err = AudioQueueEnqueueBuffer(audioQueue, aq_buffer[i], 0, NULL);
    NSLog(@"err : %d", (int)err);
    NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain
                                         code:err
                                     userInfo:nil];
    NSLog(@"Error: %@", [error description]);
}
I've tried modifying the size of the buffer I allocate, which has no effect (and the call to AudioQueueAllocateBuffer does not fail anyway). I've tried switching the order of calls between AudioQueueStart and the buffer allocation and enqueueing calls, to no effect. I've checked the comments in AudioQueue.h, and I'm not seeing what I'm doing wrong. The error description is too vague to be helpful.
Why is my AudioQueueEnqueueBuffer call failing?
A search with Spotlight reveals that "66686" is to be found in AudioQueue.h:
kAudioQueueErr_BufferEmpty = -66686
So, whatever you are trying to do, the buffer is empty.
From a quick look at the code above, it looks to me like the SpeakHere recording code. However, you've set it up above for playback.
In either case, you need to allocate the queue's buffers before starting the queue. For recording, your function callback will be invoked periodically when newly recorded data is available.
For playback, the function callback will be invoked periodically when you need to supply more audio data to the queue.
Even after allocating the buffers, you'll have to populate the buffers with data. In the case of playing from a file, I suspect AudioFileReadPackets does it for you. But if you are e.g. playing from a stream, you'll have to push the data into the buffer yourself.
This worked for me:
char pcmData[pcmDataSize];
// add your own code here to populate pcmData with the PCM data to play back
memcpy(audioQueueBufferRef->mAudioData, pcmData, pcmDataSize);
audioQueueBufferRef->mAudioDataByteSize = pcmDataSize;
audioQueueBufferRef->mPacketDescriptionCount = pcmDataSize / 2; // change this according to your number of bytes per frame
AudioQueueEnqueueBuffer(audioQueueRef, audioQueueBufferRef, 0, NULL);
AudioQueueAllocateBuffer was done in an earlier preparation step.
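To illustrate the callback pattern described above, here is a minimal sketch of a playback callback that refills and re-enqueues a buffer (AudioPlayCallback matches the name passed to AudioQueueNewOutput in the question; FillBufferWithPCM is a hypothetical helper you supply):

    static void AudioPlayCallback(void *inUserData,
                                  AudioQueueRef inAQ,
                                  AudioQueueBufferRef inBuffer) {
        // Fill the buffer with fresh PCM data, up to its capacity.
        UInt32 bytesFilled = FillBufferWithPCM(inBuffer->mAudioData,
                                               inBuffer->mAudioDataBytesCapacity);
        inBuffer->mAudioDataByteSize = bytesFilled;
        // Hand the buffer back to the queue so playback continues.
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }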

iOS - AudioUnitRender returned error -10876 on device, but running fine in simulator

I encountered a problem which makes me unable to capture the input signal from the microphone on the device (an iPhone 4). However, the code runs fine in the simulator.
The code was originally adapted from Apple's MixerHostAudio class from the MixerHost sample code. It ran fine both on the device and in the simulator before I started adding code for capturing mic input.
Wondering if somebody could help me out. Thanks in advance!
Here is my inputRenderCallback function which feeds signal into mixer input:
static OSStatus inputRenderCallback (
        void                        *inRefCon,
        AudioUnitRenderActionFlags  *ioActionFlags,
        const AudioTimeStamp        *inTimeStamp,
        UInt32                      inBusNumber,
        UInt32                      inNumberFrames,
        AudioBufferList             *ioData) {
    recorderStructPtr recorderStructPointer = (recorderStructPtr) inRefCon;
    // ....
    AudioUnitRenderActionFlags renderActionFlags = 0;
    OSStatus err = AudioUnitRender(recorderStructPointer->iOUnit,
                                   &renderActionFlags,
                                   inTimeStamp,
                                   1, // bus number for input
                                   inNumberFrames,
                                   recorderStructPointer->fInputAudioBuffer);
    // error returned is -10876
    // ....
}
Here is my related initialization code (now I keep only one input in the mixer, so the mixer seems redundant, but it worked fine before I added the input-capture code):
// Convenience function to allocate our audio buffers
- (AudioBufferList *) allocateAudioBufferListByNumChannels:(UInt32)numChannels withSize:(UInt32)size {
    AudioBufferList *list;
    UInt32 i;

    list = (AudioBufferList *)calloc(1, sizeof(AudioBufferList) + numChannels * sizeof(AudioBuffer));
    if (list == NULL)
        return nil;

    list->mNumberBuffers = numChannels;
    for (i = 0; i < numChannels; ++i) {
        list->mBuffers[i].mNumberChannels = 1;
        list->mBuffers[i].mDataByteSize = size;
        list->mBuffers[i].mData = malloc(size);
        if (list->mBuffers[i].mData == NULL) {
            [self destroyAudioBufferList:list];
            return nil;
        }
    }
    return list;
}

// initialize audio buffer list for input capture
recorderStructInstance.fInputAudioBuffer = [self allocateAudioBufferListByNumChannels:1 withSize:4096];

// I/O unit description
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;

// Multichannel mixer unit description
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;

AUNode iONode;    // node for I/O unit
AUNode mixerNode; // node for Multichannel Mixer unit

// Add the nodes to the audio processing graph
result = AUGraphAddNode(processingGraph, &iOUnitDescription, &iONode);
result = AUGraphAddNode(processingGraph, &MixerUnitDescription, &mixerNode);

result = AUGraphOpen(processingGraph);

// fetch mixer AudioUnit instance
result = AUGraphNodeInfo(processingGraph, mixerNode, NULL, &mixerUnit);

// fetch RemoteIO AudioUnit instance
result = AUGraphNodeInfo(processingGraph, iONode, NULL, &(recorderStructInstance.iOUnit));

// enable input of RemoteIO unit
UInt32 enableInput = 1;
AudioUnitElement inputBus = 1;
result = AudioUnitSetProperty(recorderStructInstance.iOUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              inputBus,
                              &enableInput,
                              sizeof(enableInput));

// set up mixer inputs
UInt32 busCount = 1;
result = AudioUnitSetProperty(mixerUnit,
                              kAudioUnitProperty_ElementCount,
                              kAudioUnitScope_Input,
                              0,
                              &busCount,
                              sizeof(busCount));

UInt32 maximumFramesPerSlice = 4096;
result = AudioUnitSetProperty(mixerUnit,
                              kAudioUnitProperty_MaximumFramesPerSlice,
                              kAudioUnitScope_Global,
                              0,
                              &maximumFramesPerSlice,
                              sizeof(maximumFramesPerSlice));

for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
    // set up input callback
    AURenderCallbackStruct inputCallbackStruct;
    inputCallbackStruct.inputProc = &inputRenderCallback;
    inputCallbackStruct.inputProcRefCon = &recorderStructInstance;
    result = AUGraphSetNodeInputCallback(processingGraph,
                                         mixerNode,
                                         busNumber,
                                         &inputCallbackStruct);

    // set up stream format
    AudioStreamBasicDescription mixerBusStreamFormat;
    size_t bytesPerSample = sizeof(AudioUnitSampleType);
    mixerBusStreamFormat.mFormatID = kAudioFormatLinearPCM;
    mixerBusStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
    mixerBusStreamFormat.mBytesPerPacket = bytesPerSample;
    mixerBusStreamFormat.mFramesPerPacket = 1;
    mixerBusStreamFormat.mBytesPerFrame = bytesPerSample;
    mixerBusStreamFormat.mChannelsPerFrame = 2;
    mixerBusStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
    mixerBusStreamFormat.mSampleRate = graphSampleRate;
    result = AudioUnitSetProperty(mixerUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  busNumber,
                                  &mixerBusStreamFormat,
                                  sizeof(mixerBusStreamFormat));
}

// set sample rate of mixer output
result = AudioUnitSetProperty(mixerUnit,
                              kAudioUnitProperty_SampleRate,
                              kAudioUnitScope_Output,
                              0,
                              &graphSampleRate,
                              sizeof(graphSampleRate));

// connect mixer output to RemoteIO
result = AUGraphConnectNodeInput(processingGraph,
                                 mixerNode, // source node
                                 0,         // source node output bus number
                                 iONode,    // destination node
                                 0);        // destination node input bus number

// initialize the AUGraph
result = AUGraphInitialize(processingGraph);

// start the AUGraph
result = AUGraphStart(processingGraph);

// enable mixer input
result = AudioUnitSetParameter(mixerUnit,
                               kMultiChannelMixerParam_Enable,
                               kAudioUnitScope_Input,
                               0, // bus number
                               1, // on
                               0);
First, it should be noted that the error code -10876 corresponds to the symbol kAudioUnitErr_NoConnection. You can usually find these by googling the error code number along with the term CoreAudio. That should be a hint that you are asking the system to render to an AudioUnit which isn't properly connected.
Within your render callback, you are casting the void* user data to a recorderStructPtr. I'm going to assume that when you debugged this code, this cast returned a non-null structure which has your actual audio unit's address in it. However, you should be rendering with the AudioBufferList which is passed in to your render callback (i.e., the ioData parameter of the inputRenderCallback function). That contains the list of samples from the system which you need to process.
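As a sketch of that suggestion, the render call inside the callback would reuse the parameters the system hands you (names taken from the inputRenderCallback above):

    OSStatus err = AudioUnitRender(recorderStructPointer->iOUnit,
                                   ioActionFlags,   // flags passed into the callback
                                   inTimeStamp,
                                   1,               // input bus
                                   inNumberFrames,
                                   ioData);         // buffer list supplied by the system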
I solved the issue on my own. It was due to a bug in my code causing the -10876 error from AudioUnitRender().
I had set the category of my AudioSession to AVAudioSessionCategoryPlayback instead of AVAudioSessionCategoryPlayAndRecord. When I fixed the category to AVAudioSessionCategoryPlayAndRecord, I could finally capture microphone input successfully by calling AudioUnitRender() on the device.
Using AVAudioSessionCategoryPlayback doesn't result in any error when calling AudioUnitRender() to capture microphone input in the simulator, where it works well; I think this is an issue with the iOS simulator (though not critical).
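For reference, a minimal sketch of setting the session category before starting the graph (this assumes the AVFoundation session API rather than the older AudioSession C API):

    NSError *sessionError = nil;
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                           error:&sessionError];
    [[AVAudioSession sharedInstance] setActive:YES error:&sessionError];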
I have also seen this issue occur when the values in the I/O unit's stream format property are inconsistent. Make sure that your AudioStreamBasicDescription's bits per channel, channels per frame, bytes per frame, frames per packet, and bytes per packet all make sense.
Specifically, I got the NoConnection error when I changed a stream format from stereo to mono by changing the channels per frame, but forgot to change the bytes per frame and bytes per packet to match the fact that a mono frame carries half as much data as a stereo frame.
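For example, a self-consistent 16-bit mono PCM description would look like this (a sketch; adjust the sample rate to your graph):

    AudioStreamBasicDescription monoFormat = {0};
    monoFormat.mSampleRate       = 44100.0;
    monoFormat.mFormatID         = kAudioFormatLinearPCM;
    monoFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    monoFormat.mChannelsPerFrame = 1;   // mono
    monoFormat.mBitsPerChannel   = 16;
    monoFormat.mBytesPerFrame    = 2;   // 1 channel * 2 bytes
    monoFormat.mFramesPerPacket  = 1;
    monoFormat.mBytesPerPacket   = 2;   // mBytesPerFrame * mFramesPerPacket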
If you initialise an AudioUnit and don't set its kAudioUnitProperty_SetRenderCallback property, you'll get this error if you call AudioUnitRender on it.
Call AudioUnitProcess on it instead.

What are the required parameters for CMBufferQueueCreate?

Reading the documentation for the iOS SDK's CMBufferQueueCreate, it says that getDuration and version are required; all the other callbacks can be NULL.
But running the following code:
CFAllocatorRef allocator;
CMBufferCallbacks *callbacks;
callbacks = malloc(sizeof(CMBufferCallbacks));
callbacks->version = 0;
callbacks->getDuration = timeCallback;
callbacks->refcon = NULL;
callbacks->getDecodeTimeStamp = NULL;
callbacks->getPresentationTimeStamp = NULL;
callbacks->isDataReady = NULL;
callbacks->compare = NULL;
callbacks->dataBecameReadyNotification = NULL;
CMItemCount capacity = 4;
OSStatus s = CMBufferQueueCreate(allocator, capacity, callbacks, queue);
NSLog(#"QUEUE: %x", queue);
NSLog(#"STATUS: %i", s);
with timeCallback:
CMTime timeCallback(CMBufferRef buf, void *refcon) {
    return CMTimeMake(1, 1);
}
and queue declared as:
CMBufferQueueRef* queue;
queue creation fails (queue = 0) and returns the status:
kCMBufferQueueError_RequiredParameterMissing = -12761
The callbacks variable is correctly initialized; at least the debugger says so.
Has anybody used CMBufferQueue?
Presumably there is nothing wrong with the callbacks; CMBufferQueue.h states the same thing you wrote about the required parameters. But it looks like you are passing an uninitialized pointer as the CMBufferQueueRef* parameter. I have updated your sample as follows, and it seems to create the buffer queue OK.
CMBufferQueueRef queue;
CFAllocatorRef allocator = kCFAllocatorDefault;
CMBufferCallbacks *callbacks;
callbacks = malloc(sizeof(CMBufferCallbacks));
callbacks->version = 0;
callbacks->getDuration = timeCallback;
callbacks->refcon = NULL;
callbacks->getDecodeTimeStamp = NULL;
callbacks->getPresentationTimeStamp = NULL;
callbacks->isDataReady = NULL;
callbacks->compare = NULL;
callbacks->dataBecameReadyNotification = NULL;
CMItemCount capacity = 4;
OSStatus s = CMBufferQueueCreate(allocator, capacity, callbacks, &queue);
NSLog(#"QUEUE: %x", queue);
NSLog(#"STATUS: %i", s);
The time callback is still the same.
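Once creation succeeds, enqueueing and dequeueing work along these lines (a sketch; sampleBuffer is assumed to be a CMSampleBufferRef obtained elsewhere, e.g. from a capture callback):

    s = CMBufferQueueEnqueue(queue, sampleBuffer);
    CMSampleBufferRef head = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(queue);
    // ... use head, then CFRelease(head) when done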
It does not look like it helps the topic starter, but I hope it helps somebody else.

Creating CMSampleBufferRef from the data

I am trying to create a CMSampleBufferRef from raw data and feed it to an AVAssetWriter.
But the asset writer is failing to create the movie from the data. Following is the code to create the CMSampleBufferRef:
CVImageBufferRef cvimgRef = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cvimgRef, 0);
uint8_t *buf = (uint8_t *)CVPixelBufferGetBaseAddress(cvimgRef);

int width = 480;
int height = 360;
int bitmapBytesPerRow = width * 4;
int bitmapByteCount = bitmapBytesPerRow * height;

CVPixelBufferRef pixelBufRef = NULL;
CMSampleBufferRef newSampleBuffer = NULL;
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timingInfo);
OSStatus result = 0;

OSType pixFmt = CVPixelBufferGetPixelFormatType(cvimgRef);
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height, pixFmt, buf, bitmapBytesPerRow, NULL, NULL, NULL, &pixelBufRef);

CMVideoFormatDescriptionRef videoInfo = NULL;
result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBufRef, &videoInfo);

CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBufRef, true, NULL, NULL, videoInfo, &timingInfo, &newSampleBuffer);
Movie creation works fine when we use the original CMSampleBufferRef obtained from the AVFoundation data output callback method.
But it fails when I try to create the movie using the custom CMSampleBufferRef. The asset writer throws the following error:
The operation couldn’t be completed. (AVFoundationErrorDomain error -11800.)
Please help me out in resolving this issue.
You should look into AVAssetWriterInputPixelBufferAdaptor: it accepts CVPixelBuffers, so you don't need to wrap the CVPixelBuffer in a CMSampleBuffer.
Here is a link to a thread about it on the Apple dev forum: https://devforums.apple.com/thread/70258?tstart=0
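A sketch of the adaptor route (writerInput is assumed to be the AVAssetWriterInput already attached to your AVAssetWriter; pixelBufRef and pts as in the question):

    NSDictionary *sourceAttrs = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                         sourcePixelBufferAttributes:sourceAttrs];
    // later, once writing has started, append each frame directly:
    if (writerInput.readyForMoreMediaData) {
        [adaptor appendPixelBuffer:pixelBufRef withPresentationTime:pts];
    }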
Also, any chance you could post your project file or sample code of the working movie capture? I am using the default CMSampleBuffer from the AVFoundation data output callback method, but when I save it to the camera roll it is all black except the last 5 frames, which I have to manually scrub to :S
Any help with my issue would be greatly appreciated.
Cheers,
Michael
The operation couldn’t be completed. (AVFoundationErrorDomain error -11800.)
This error always occurs when the timing info is invalid; it needs to be set to valid values with a presentation timestamp (PTS) and duration:
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
timingInfo.presentationTimeStamp = pts;
timingInfo.duration = duration;
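A sketch of wiring that back into the code above (pts and duration are assumed to come from the source buffer, e.g. via CMSampleBufferGetPresentationTimeStamp and CMSampleBufferGetDuration):

    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    CMTime duration = CMSampleBufferGetDuration(sampleBuffer);
    CMSampleTimingInfo timingInfo = { duration, pts, kCMTimeInvalid }; // duration, PTS, DTS
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBufRef, true,
                                       NULL, NULL, videoInfo, &timingInfo, &newSampleBuffer);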