What are the required parameters for CMBufferQueueCreate? - iphone

Reading the documentation for CMBufferQueueCreate in the iOS SDK, it says that getDuration and version are required and that all the other callbacks can be NULL.
But running the following code:
CFAllocatorRef allocator;
CMBufferCallbacks *callbacks;
callbacks = malloc(sizeof(CMBufferCallbacks));
callbacks->version = 0;
callbacks->getDuration = timeCallback;
callbacks->refcon = NULL;
callbacks->getDecodeTimeStamp = NULL;
callbacks->getPresentationTimeStamp = NULL;
callbacks->isDataReady = NULL;
callbacks->compare = NULL;
callbacks->dataBecameReadyNotification = NULL;
CMItemCount capacity = 4;
OSStatus s = CMBufferQueueCreate(allocator, capacity, callbacks, queue);
NSLog(#"QUEUE: %x", queue);
NSLog(#"STATUS: %i", s);
with timeCallback:
CMTime timeCallback(CMBufferRef buf, void *refcon){
return CMTimeMake(1, 1);
}
and queue is:
CMBufferQueueRef* queue;
queue creation fails (queue = 0) and returns a status of:
kCMBufferQueueError_RequiredParameterMissing = -12761,
The callbacks variable is correctly initialized, at least the debugger says so.
Has anybody used the CMBufferQueue?

Presumably there is nothing wrong with the parameters themselves; CMBufferQueue.h states the same thing you quoted about which callbacks are required. But it looks like you are passing a null pointer as the CMBufferQueueRef* parameter. I have updated your sample as follows, and it seems to create the queue OK.
CMBufferQueueRef queue;
CFAllocatorRef allocator = kCFAllocatorDefault;
CMBufferCallbacks *callbacks;
callbacks = malloc(sizeof(CMBufferCallbacks));
callbacks->version = 0;
callbacks->getDuration = timeCallback;
callbacks->refcon = NULL;
callbacks->getDecodeTimeStamp = NULL;
callbacks->getPresentationTimeStamp = NULL;
callbacks->isDataReady = NULL;
callbacks->compare = NULL;
callbacks->dataBecameReadyNotification = NULL;
CMItemCount capacity = 4;
OSStatus s = CMBufferQueueCreate(allocator, capacity, callbacks, &queue);
NSLog(#"QUEUE: %x", queue);
NSLog(#"STATUS: %i", s);
The time callback is still the same.
It does not look like this helps the topic starter, but I hope it helps somebody else.
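For completeness, a slightly tidier variant of the same fix (just a sketch, not a drop-in replacement): zero-initialize the callbacks struct on the stack so every optional callback starts out NULL, and pass the address of a CMBufferQueueRef:
#import <CoreMedia/CoreMedia.h>
CMTime timeCallback(CMBufferRef buf, void *refcon)
{
    // Same duration callback as above: every buffer reports a duration of 1 second.
    return CMTimeMake(1, 1);
}
void createBufferQueue(void)
{
    // Stack-allocated and zero-initialized: all optional callbacks are already NULL.
    CMBufferCallbacks callbacks = {0};
    callbacks.version = 0;
    callbacks.getDuration = timeCallback;   // the only required callback
    CMBufferQueueRef queue = NULL;
    OSStatus s = CMBufferQueueCreate(kCFAllocatorDefault, 4, &callbacks, &queue);
    NSLog(@"QUEUE: %p STATUS: %d", queue, (int)s);
}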

Related

Why can't I find the implementation of _Unwind_Backtrace in the Android Open Source Project for KitKat?

I am reading the code that implements native stack trace printing in Android, and I found the following code:
ssize_t unwind_backtrace(backtrace_frame_t* backtrace, size_t ignore_depth, size_t max_depth) {
ALOGV("Unwinding current thread %d.", gettid());
map_info_t* milist = acquire_my_map_info_list();
backtrace_state_t state;
state.backtrace = backtrace;
state.ignore_depth = ignore_depth;
state.max_depth = max_depth;
state.ignored_frames = 0;
state.returned_frames = 0;
init_memory(&state.memory, milist);
_Unwind_Reason_Code rc = _Unwind_Backtrace(unwind_backtrace_callback, &state);
release_my_map_info_list(milist);
if (state.returned_frames) {
return state.returned_frames;
}
return rc == _URC_END_OF_STACK ? 0 : -1;
}
but I can't find the implementation of _Unwind_Backtrace at http://androidxref.com/4.4.2_r2/ . Does anybody know the reason? Where is the implementation of _Unwind_Backtrace?
For Android 4.4, unwind_backtrace is in /system/core/libcorkscrew/backtrace.c. _Unwind_Backtrace itself is provided by the compiler's unwind runtime (libgcc / LLVM libunwind, declared in <unwind.h>) rather than by libcorkscrew, so you generally won't find its source in the platform tree indexed there.
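If it helps, here is a minimal sketch of how _Unwind_Backtrace is normally driven with a trace callback (the names collect_frames and frame_state are just illustrative; note that on some ARM EABI toolchains _Unwind_GetIP is not available and the register is read with _Unwind_VRS_Get instead):
#include <unwind.h>
#include <stdio.h>
#include <stddef.h>
typedef struct {
    void  *frames[32];
    size_t count;
} frame_state;
/* Trace callback: invoked once per stack frame, innermost first. */
static _Unwind_Reason_Code collect_frames(struct _Unwind_Context *ctx, void *arg)
{
    frame_state *state = (frame_state *)arg;
    if (state->count >= 32)
        return _URC_END_OF_STACK;                 /* stop the walk once full */
    state->frames[state->count++] = (void *)_Unwind_GetIP(ctx);
    return _URC_NO_REASON;                        /* keep unwinding */
}
void dump_backtrace(void)
{
    frame_state state = { .count = 0 };
    _Unwind_Backtrace(collect_frames, &state);    /* walks the current thread's stack */
    for (size_t i = 0; i < state.count; ++i)
        printf("#%zu  %p\n", i, state.frames[i]);
}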

Unity: Converting Texture2D to YUV420P using FFmpeg

I'm trying to create a game in Unity where each frame is rendered into a texture and then assembled into a video using FFmpeg. The output created by FFmpeg should eventually be sent over the network to a client UI. However, I'm struggling mainly with the part where a frame is captured and passed as a byte array to an unsafe method, where it should be processed further by FFmpeg. The wrapper I'm using is FFmpeg.AutoGen.
The render to texture method:
private IEnumerator CaptureFrame()
{
yield return new WaitForEndOfFrame();
RenderTexture.active = rt;
frame.ReadPixels(rect, 0, 0);
frame.Apply();
bytes = frame.GetRawTextureData();
EncodeAndWrite(bytes, bytes.Length);
}
The unsafe encoding method so far:
private unsafe void EncodeAndWrite(byte[] bytes, int size)
{
GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
IntPtr address = pinned.AddrOfPinnedObject();
sbyte** inData = (sbyte**)address;
fixed(int* lineSize = new int[1])
{
lineSize[0] = 4 * textureWidth;
// Convert RGBA to YUV420P
ffmpeg.sws_scale(sws, inData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
}
inputFrame->pts = frameCounter++;
if(ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
throw new ApplicationException("Error sending a frame for encoding!");
pkt = new AVPacket();
fixed(AVPacket* packet = &pkt)
ffmpeg.av_init_packet(packet);
pkt.data = null;
pkt.size = 0;
pinned.Free();
...
}
sws_scale takes a sbyte** as the second parameter, therefore I'm trying to convert the input byte array to sbyte** by first pinning it with GCHandle and doing an explicit type conversion afterwards. I don't know if that's the correct way, though.
Moreover, the condition if(ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0) always throws an ApplicationException, and I really don't know why that happens either. codecContext and inputFrame are my AVCodecContext and AVFrame objects, respectively, and the fields are defined as follows:
codecContext
codecContext = ffmpeg.avcodec_alloc_context3(codec);
codecContext->bit_rate = 400000;
codecContext->width = textureWidth;
codecContext->height = textureHeight;
AVRational timeBase = new AVRational();
timeBase.num = 1;
timeBase.den = (int)fps;
codecContext->time_base = timeBase;
videoAVStream->time_base = timeBase;
AVRational frameRate = new AVRational();
frameRate.num = (int)fps;
frameRate.den = 1;
codecContext->framerate = frameRate;
codecContext->gop_size = 10;
codecContext->max_b_frames = 1;
codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;
inputFrame
inputFrame = ffmpeg.av_frame_alloc();
inputFrame->format = (int)codecContext->pix_fmt;
inputFrame->width = textureWidth;
inputFrame->height = textureHeight;
inputFrame->linesize[0] = inputFrame->width;
Any help in fixing the issue would be greatly appreciated :)
Check the examples here: https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples
Especially scaling_video.c. In FFmpeg, scaling and pixel format conversion are the same operation (keep the size parameters the same if you only want a pixel format conversion).
These examples are very easy to follow. Give it a try.
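To make that concrete, the core of an RGBA-to-YUV420P conversion in plain C (a rough sketch of the pattern scaling_video.c uses; error checking omitted) looks roughly like this:
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>
/* Convert one interleaved RGBA buffer into a freshly allocated YUV420P frame. */
static AVFrame *rgba_to_yuv420p(const uint8_t *rgba, int width, int height)
{
    AVFrame *dst = av_frame_alloc();
    dst->format = AV_PIX_FMT_YUV420P;
    dst->width  = width;
    dst->height = height;
    av_frame_get_buffer(dst, 0);                  /* allocates data[] and linesize[] */
    struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                            width, height, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    const uint8_t *src_data[4] = { rgba, NULL, NULL, NULL };
    int            src_line[4] = { 4 * width, 0, 0, 0 };   /* 4 bytes per RGBA pixel */
    sws_scale(sws, src_data, src_line, 0, height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}
Note that the destination frame's data and linesize arrays come from av_frame_get_buffer(); setting inputFrame->linesize[0] by hand without allocating the planes is one plausible reason avcodec_send_frame() rejects the frame.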
I think your cast is incorrect: sbyte** inData = (sbyte**)address;
because address is an IntPtr object, so the correct cast should probably be
sbyte* pinData = (sbyte*)address.ToPointer(); sbyte** ppInData = &pinData;

Send multiple messages in Photoshop SDK

I am using the Photoshop Connection SDK to get my iPad app connected to Photoshop.
(The SDK with all sample iOS projects can be downloaded here if you want to look closer: http://www.adobe.com/devnet/photoshop/sdk.html)
While I have it working, it's hard for me to solve my issue as I don't entirely understand the networking between the iPad and Photoshop. So here is my issue:
NSString *s1 = [NSString stringWithUTF8String:"app.activeDocument.layers[0].name;"];
NSData *dataToSend = [s1 dataUsingEncoding:NSUTF8StringEncoding];
[self sendJavaScriptMessage:dataToSend];
artLayerName_transaction = transaction_id -1;
That is a little snippet of code that sends a message asking for the name of the layer at index 0. It works great. However, let's say I try to send the same message straight after that, but for index 1 as well.
Both messages are sent, but only one returns its string. The string is returned here:
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)streamEvent;
If you look at the example projects it's just the same: each message has its own transaction id as well. The method goes through a bunch of decrypting and processing to receive that string and then reaches this:
NSString *string = [[NSString alloc] initWithBytes:received_data length:received_length encoding:NSUTF8StringEncoding];
if (content != 1)
{
if (transaction == artLayerName_transaction)
{
[self processLayerName:string];
needOutput = NO;
}
}
I've put the full method at the bottom for full analysis.
It checks that it's receiving the specific message so I can then take the result (string, my layer name) and do what I like with it. However, when I try to send more than one message with the same transaction id, I only get one of the two results. In the code above, string gives me both my layer names, but the if statement is only entered once.
Is there a known way around sending several messages at once? I have tried getting an array back rather than several strings without luck as well.
Apparently I need to modify the code to accept more than one message. However, I don't really understand the code well enough, so please also explain the principles behind it.
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)streamEvent
{
NSInputStream * istream;
switch(streamEvent)
{
case NSStreamEventHasBytesAvailable:;
UInt8 buffer[1024];
unsigned int actuallyRead = 0;
istream = (NSInputStream *)aStream;
if (!dataBuffer)
{
dataBuffer = [[NSMutableData alloc] initWithCapacity:2048];
}
actuallyRead = [istream read:buffer maxLength:1024];
[dataBuffer appendBytes:buffer length:actuallyRead];
// see if we have enough to process, loop over messages in buffer
while( YES )
{
// Did we read the header yet?
if ( packetBodySize == -1 )
{
// Do we have enough bytes in the buffer to read the header?
if ( [dataBuffer length] >= sizeof(int) ) {
// extract length
memcpy(&packetBodySize, [dataBuffer bytes], sizeof(int));
packetBodySize = ntohl( packetBodySize ); // size is in network byte order
// remove that chunk from buffer
NSRange rangeToDelete = {0, sizeof(int)};
[dataBuffer replaceBytesInRange:rangeToDelete withBytes:NULL length:0];
}
else {
// We don't have enough yet. Will wait for more data.
break;
}
}
// We should now have the header. Time to extract the body.
if ( [dataBuffer length] >= ((NSUInteger) packetBodySize) )
{
// We now have enough data to extract a meaningful packet.
const int kPrologLength = 16;
char *buffer = (char *)[dataBuffer bytes];
// if incoming message is color change, then don't display message
BOOL needOutput = YES;
// fetch the communication status
unsigned long com_status = *((unsigned long *)(buffer + 0));
com_status = ntohl( com_status );
// decrypt the message
size_t decryptedLength = (size_t) packetBodySize - 4; // don't include com status
int skip_message = 0;
if (com_status == 0 && sCryptorRef)
{
PSCryptorStatus decryptResult = EncryptDecrypt (sCryptorRef, false, buffer+4, decryptedLength, buffer+4, decryptedLength, &decryptedLength);
if (kCryptorSuccess != decryptResult)
{
// failed to decrypt. Ignore message and disconnect
skip_message = 1;
[self logMessage:@"ERROR: Decryption failed. Wrong password.\n" clearLine:NO];
}
}
else
{
if (com_status != 0)
[self logMessage:#"ERROR: Problem with communication, possible wrong password.\n" clearLine:NO];
if (!sCryptorRef)
[self logMessage:#"ERROR: Cryptor Ref is NULL, possible reason being that password was not supplied or password binding function failed.\n" clearLine:NO];
}
// Interpret encrypted section
if (!skip_message)
{
// version, 32 bit unsigned int, network byte order
unsigned long protocol_version = *((unsigned long *)(buffer + 4));
protocol_version = ntohl( protocol_version );
if (protocol_version != 1)
{
// either the message is corrupted or the protocol is newer.
[self logMessage:#"Incoming protocol version is different the expected. (or the message is corrupted.) Not processing.\n" clearLine:NO];
skip_message = 1;
}
if (!skip_message)
{
// transaction, 32 bit unsigned int, network byte order
unsigned long transaction = *((unsigned long *)(buffer + 8));
transaction = ntohl( transaction );
// content type, 32 bit unsigned int, network byte order
unsigned long content = *((unsigned long *)(buffer + 12));
content = ntohl( content );
unsigned char *received_data = (unsigned char *)(buffer+kPrologLength);
int received_length = (decryptedLength-(kPrologLength-4));
if (content == 3) // image data
{
// process image data
unsigned char image_type = *((unsigned char *)received_data);
[self logMessage:#"Incoming data is IMAGE. Skipping\n" clearLine:NO];
if (image_type == 1) // JPEG
{
[self logMessage:#"By the way, incoming image is JPEG\n" clearLine:NO];
}
else if (image_type == 2) // Pixmap
{
[self logMessage:#"By the way, incoming image is Pixmap\n" clearLine:NO];
}
else
{
[self logMessage:#"Unknown image type\n" clearLine:NO];
}
}
else
{
// Set the response string
NSString *string = [[NSString alloc] initWithBytes:received_data length:received_length encoding:NSUTF8StringEncoding];
//NSLog(#"string: %#\n id:%li", string, transaction);
// see if this is a response we're looking for
if (content != 1)
{
if (transaction == foregroundColor_subscription || transaction == foregroundColor_transaction)
{
[self processForegroundChange:string];
needOutput = NO;
}
if (transaction == backgroundColor_subscription || transaction == backgroundColor_transaction)
{
[self processBackgroundChange:string];
needOutput = NO;
}
if (transaction == tool_transaction)
{
[self processToolChange:string];
needOutput = NO;
}
if (transaction == artLayerName_transaction)
{
[self processLayerName:string];
needOutput = NO;
}
}
//Tells me about every event that's happened (spammy tech nonsense, no good for the user log)
//if (needOutput) [self logMessage:string clearLine:NO];
}
}
}
// Remove that chunk from buffer
NSRange rangeToDelete = {0, packetBodySize};
[dataBuffer replaceBytesInRange:rangeToDelete withBytes:NULL length:0];
// We have processed the packet. Resetting the state.
packetBodySize = -1;
}
else
{
// Not enough data yet. Will wait.
break;
}
}
break;
case NSStreamEventEndEncountered:;
[self closeStreams];
[self logMessage:[NSString stringWithFormat: @"%@ End encountered, closing stream.\n", outputMessage.text] clearLine:NO];
break;
case NSStreamEventHasSpaceAvailable:
case NSStreamEventErrorOccurred:
case NSStreamEventOpenCompleted:
case NSStreamEventNone:
default:
break;
}
}
EDIT:
While this has worked, it has bumped me into another issue. I looked at the psconnection sample and saw what you were talking about; however, I haven't been using the full framework, but have it connect like the other sample projects. I simply increase my transaction id by 1 every time it comes to process a layer name, as in the code above.
Like this:
if (transaction == docName_transaction)
{
docName_transaction++;
[self processDocName:string];
}
This works for the most part; however, if I do this with another transaction id at the same time I get overlap, meaning I end up processing the result for one id at the wrong time. Say I am getting both the total number of docs and the name of each one.
So I have two if statements like the one above, but I end up processing the total docs in both if statements and I can't see how to stop this overlap. It's fairly important to be able to receive multiple messages at once.
I just downloaded the samples, and as far as I can see, you should use unique transaction IDs for transactions that are waiting for a response.
I mean, if you send a layer_name request with transactionID 1, you cannot use 1 again until its response is received.
So the better approach is to store your transaction IDs in a dictionary (transactionID as key, message type as value),
just like:
transactions[transaction_id] = #"layername";
and when you receive a response, use transactions[transaction_id] to get the message type (e.g. layername) and behave accordingly.
You can also put other details into the transactions dictionary (for example a nested dictionary containing all the information: which command, on which object, etc.).
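A minimal Objective-C sketch of that bookkeeping (assuming, as in the samples, that sendJavaScriptMessage: bumps transaction_id; the dictionary and the @"layerName" type string are just illustrative):
// One entry per outstanding request: transaction id -> what the reply means.
NSMutableDictionary *pendingTransactions = [NSMutableDictionary dictionary];
// When sending, remember what each transaction id was for:
[self sendJavaScriptMessage:[@"app.activeDocument.layers[0].name;" dataUsingEncoding:NSUTF8StringEncoding]];
pendingTransactions[@(transaction_id - 1)] = @"layerName";
[self sendJavaScriptMessage:[@"app.activeDocument.layers[1].name;" dataUsingEncoding:NSUTF8StringEncoding]];
pendingTransactions[@(transaction_id - 1)] = @"layerName";
// In stream:handleEvent:, once `transaction` and `string` have been extracted:
NSString *kind = pendingTransactions[@(transaction)];
if ([kind isEqualToString:@"layerName"])
{
    [self processLayerName:string];
    [pendingTransactions removeObjectForKey:@(transaction)];   // this id can be reused now
}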

AudioQueueStart reporting unsupported format

I'm trying to get an audio queue working in an iPhone app, and whenever AudioQueueStart is called it gives the "fmt?" result code (kAudioFormatUnsupportedDataFormatError). In the code below I'm setting the format to kAudioFormatLinearPCM, which surely is supported. What am I doing wrong?
data.mDataFormat.mSampleRate = 44100;
data.mDataFormat.mFormatID = kAudioFormatLinearPCM;
data.mDataFormat.mFormatFlags = 0;
data.mDataFormat.mBytesPerPacket = 4;
data.mDataFormat.mFramesPerPacket = 1;
data.mDataFormat.mBytesPerFrame = 4;
data.mDataFormat.mChannelsPerFrame = 2;
data.mDataFormat.mBitsPerChannel = 16;
OSStatus status;
status = AudioQueueNewOutput(&data.mDataFormat, audioCallback, &data, CFRunLoopGetCurrent (), kCFRunLoopCommonModes, 0, &data.mQueue);
for (int i = 0; i < NUMBUFFERS; ++i)
{
status = AudioQueueAllocateBuffer (data.mQueue, BUFFERSIZE, &data.mBuffers[i] );
audioCallback (&data, data.mQueue, data.mBuffers[i]);
}
Float32 gain = 1.0;
status = AudioQueueSetParameter (data.mQueue, kAudioQueueParam_Volume, gain);
status = AudioQueueStart(data.mQueue, NULL);
data is of type audioData which is like this:
typedef struct _audioData {
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[NUMBUFFERS];
AudioStreamBasicDescription mDataFormat;
} audioData;
thanks
The cause of your error is actually AudioQueueNewOutput rather than AudioQueueStart. See this related question: audio streaming services failing to recognize file type
It turns out I needed to set some flags. It works with
data.mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
Edit: actually, don't use kLinearPCMFormatFlagIsBigEndian; it seems that with this format it should be little-endian.
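For reference, the full description that ends up working (a sketch assuming interleaved signed 16-bit stereo; on iOS the native byte order is little-endian, so no big-endian flag):
AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 44100;
fmt.mFormatID         = kAudioFormatLinearPCM;
// Signed, packed, native-endian (little-endian on iOS) integer samples.
fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
fmt.mBitsPerChannel   = 16;
fmt.mChannelsPerFrame = 2;
fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * (fmt.mBitsPerChannel / 8); // 4
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;         // 4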

CoreAudio: Why does ExtAudioFileCreateWithURL return 0xFFFFFFCE?

It's meant to return an OSType, but instead I'm just getting -50. Does anyone have any idea what error this represents? I can't find it anywhere.
A code snippet for context (the error is so ambiguous I don't know what snippet to paste, here's pretty much everything):
ExtAudioFileRef cafFile;
AudioStreamBasicDescription cafDesc;
cafDesc.mBitsPerChannel = 16;
cafDesc.mBytesPerFrame = 4;
cafDesc.mBytesPerPacket = 4;
cafDesc.mChannelsPerFrame = 2;
cafDesc.mFormatFlags = 0;
cafDesc.mFormatID = 'ima4';
cafDesc.mFramesPerPacket = 1;
cafDesc.mReserved = 0;
cafDesc.mSampleRate = 44100;
OSType status = ExtAudioFileCreateWithURL(
fileURL, // inURL
'caff', // inFileType
&cafDesc, // inStreamDesc
NULL, // inChannelLayout
kAudioFileFlags_EraseFile, // inFlags
&cafFile // outExtAudioFile
); // returns 0xFFFFFFCE
ExtAudioFileCreateWithURL() returns an OSStatus, not an OSType. See the file MacErrors.h for the various error codes. In this case, -50 is paramErr (error in user parameter list), so you're passing one or more of the parameters incorrectly to the function.
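A small helper like this one (just a sketch; LogOSStatus is a made-up name) is handy for decoding these status values, which are sometimes plain numbers like -50 (paramErr) and sometimes packed four-character codes like 'fmt?':
#include <CoreFoundation/CoreFoundation.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>
static void LogOSStatus(const char *label, OSStatus err)
{
    // Reinterpret the 32-bit status as a big-endian four-character code.
    char code[5] = {0};
    UInt32 beErr = CFSwapInt32HostToBig((UInt32)err);
    memcpy(code, &beErr, 4);
    if (isprint((unsigned char)code[0]) && isprint((unsigned char)code[1]) &&
        isprint((unsigned char)code[2]) && isprint((unsigned char)code[3]))
        printf("%s: '%s' (%d)\n", label, code, (int)err);    // e.g. 'fmt?'
    else
        printf("%s: %d\n", label, (int)err);                 // e.g. -50 (paramErr)
}
// Usage:
//   LogOSStatus("ExtAudioFileCreateWithURL", status);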