NSString drawAtPoint crash on the iPhone

Hey. I have a very simple text-output-to-buffer system that crashes randomly. It will be fine for days, then sometimes it crashes a few times within a few minutes. The call stack is almost exactly the same as the one reported by other people using higher-level controls:
http://discussions.apple.com/thread.jspa?messageID=7949746
iPhone app crashed: Assertion failed function evict_glyph_entry_from_strike, file Fonts/CGFontCache.c
It crashes at this line (also shown below in drawTextToBuffer()):
[nsString drawAtPoint:CGPointMake(0, 0) withFont:clFont];
I see the same call to "evict_glyph_entry_from_cache", with the abort calls immediately following it.
Apparently it happens to other people. I can say that my NSString* is perfectly fine at the time of the crash. I can read the text from the debugger just fine.
static CGColorSpaceRef curColorSpace;
static CGContextRef myContext;
static float w, h;
static int iFontSize;
static NSString* sFontName;
static UIFont* clFont;
static int iLineHeight;
unsigned long* txb; /* 256x256x4 buffer */

void selectFont(int iSize, NSString* sFont)
{
    iFontSize = iSize;
    clFont = [UIFont fontWithName:sFont size:iFontSize];
    iLineHeight = (int)(ceil([clFont capHeight]));
}

void initText()
{
    w = 256;
    h = 256;
    txb = (unsigned long*)malloc(w * h * 4);
    curColorSpace = CGColorSpaceCreateDeviceRGB();
    myContext = CGBitmapContextCreate(txb, w, h, 8, w * 4, curColorSpace, kCGImageAlphaPremultipliedLast);
    selectFont(12, @"Helvetica");
}

void drawTextToBuffer(NSString* nsString)
{
    CGContextSaveGState(myContext);
    CGContextSetRGBFillColor(myContext, 1, 1, 1, 1);
    UIGraphicsPushContext(myContext);
    /* This line will crash. It crashes even with constant strings. At the time of the crash, the pointer to nsString is perfectly fine. The data looks fine! */
    [nsString drawAtPoint:CGPointMake(0, 0) withFont:clFont];
    UIGraphicsPopContext();
    CGContextRestoreGState(myContext);
}
It also happens with other, non-Unicode-aware methods such as CGContextShowTextAtPoint(); the call stack is similar in that case as well.
Is this a known issue on the iPhone? Or could something outside of this code be causing an exception in this particular call (drawAtPoint)?

void selectFont(int iSize, NSString* sFont)
{
    iFontSize = iSize;
    clFont = [UIFont fontWithName:sFont size:iFontSize];
    iLineHeight = (int)(ceil([clFont capHeight]));
}
After this function returns, clFont is autoreleased, so there is no guarantee that code outside this function still has a valid clFont when it runs later. You need to -retain the instance if you plan to use it later.
void selectFont(int iSize, NSString* sFont)
{
    iFontSize = iSize;
    [clFont release];
    clFont = [[UIFont fontWithName:sFont size:iFontSize] retain];
    iLineHeight = (int)(ceil([clFont capHeight]));
}
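The same ownership rules apply to the Core Graphics objects created in initText(). As a minimal sketch of the matching teardown (the shutdownText() name is my own, not from the original code):
void shutdownText()
{
    [clFont release];                    /* balances the retain in selectFont() */
    clFont = nil;
    CGContextRelease(myContext);         /* balances CGBitmapContextCreate() */
    CGColorSpaceRelease(curColorSpace);  /* balances CGColorSpaceCreateDeviceRGB() */
    free(txb);                           /* balances malloc() */
    txb = NULL;
}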

Related

Getting Potential Leak of an object in CVPixelBuffer

I am creating an app to capture the screen of the iPhone. After writing the code I used profiling and the static analyzer to check for memory leaks. I am getting only one leak, in one section of the code. Here is the code that produces it.
-(void) writeSample: (NSTimer*) _timer {
    if (assetWriterInput.readyForMoreMediaData) {
        // CMSampleBufferRef sample = nil;
        CVReturn cvErr = kCVReturnSuccess;

        // get screenshot image!
        CGImageRef image = (CGImageRef) [[self screenshot] CGImage];
        NSLog (@"made screenshot");

        // prepare the pixel buffer
        CVPixelBufferRef pixelBuffer = NULL;
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
        NSLog (@"copied image data");
        cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                             FRAME_WIDTH,
                                             FRAME_HEIGHT,
                                             kCVPixelFormatType_32BGRA,
                                             (void*)CFDataGetBytePtr(imageData),
                                             CGImageGetBytesPerRow(image),
                                             NULL,
                                             NULL,
                                             NULL,
                                             &pixelBuffer);
        NSLog (@"CVPixelBufferCreateWithBytes returned %d", cvErr);

        // calculate the time
        CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
        CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
        NSLog (@"elapsedTime: %f", elapsedTime);
        CMTime presentationTime = CMTimeMake (elapsedTime * TIME_SCALE, TIME_SCALE);

        // write the sample
        BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
        if (appended) {
            NSLog (@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
        } else {
            NSLog (@"failed to append");
            [self stopRecording];
            self.startStopButton.selected = NO;
        }
    }
}
It says "Potential leak of an object stored into 'imageData'". Can anyone help me find the error in the code above? The memory management tools also show a leak in this code. Any help would be greatly appreciated.
Thanks in advance!
From comments -
Do a CFRelease on your imageData when you're done with it.
You can put it right before or right after NSLog (@"CVPixelBufferCreateWithBytes returned %d", cvErr);
CFRelease(imageData);
http://developer.apple.com/library/mac/#documentation/QuartzCore/Reference/CVPixelBufferRef/Reference/reference.html
I am not sure about the rest of your code, but generally any call with "Create" (or "Copy") in its name has to have a corresponding release. Please check the documentation above.
CVPixelBufferRelease
Releases a pixel buffer.
void CVPixelBufferRelease (
    CVPixelBufferRef texture
);
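Putting the comments together, a minimal sketch of the end of writeSample: with both releases in place; note that the pixel buffer wraps imageData's bytes rather than copying them, so the data is released only after the buffer itself:
        BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                                    withPresentationTime:presentationTime];

        // Both objects come from Create/Copy-rule calls and must be balanced here.
        CVPixelBufferRelease(pixelBuffer);   // balances CVPixelBufferCreateWithBytes()
        CFRelease(imageData);                // balances CGDataProviderCopyData()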

iPhone - A problem with decoding H264 using ffmpeg

I am working with ffmpeg to decode an H264 stream from a server.
I referenced DecoderWrapper from http://github.com/dropcam/dropcam_for_iphone.
I compiled it successfully, but I don't know how to use it.
Here is the function that has the problem.
- (id)initWithCodec:(enum VideoCodecType)codecType
         colorSpace:(enum VideoColorSpace)colorSpace
              width:(int)width
             height:(int)height
        privateData:(NSData*)privateData {
    if (self = [super init]) {
        codec = avcodec_find_decoder(CODEC_ID_H264);
        codecCtx = avcodec_alloc_context();

        // Note: for H.264 RTSP streams, the width and height are usually not specified (width and height are 0).
        // These fields will become filled in once the first frame is decoded and the SPS is processed.
        codecCtx->width = width;
        codecCtx->height = height;

        codecCtx->extradata = av_malloc([privateData length]);
        codecCtx->extradata_size = [privateData length];
        [privateData getBytes:codecCtx->extradata length:codecCtx->extradata_size];
        codecCtx->pix_fmt = PIX_FMT_YUV420P;
#ifdef SHOW_DEBUG_MV
        codecCtx->debug_mv = 0xFF;
#endif
        srcFrame = avcodec_alloc_frame();
        dstFrame = avcodec_alloc_frame();
        int res = avcodec_open(codecCtx, codec);
        if (res < 0)
        {
            NSLog(@"Failed to initialize decoder");
        }
    }
    return self;
}
What is the privateData parameter of this function? I don't know how to set the parameter...
Right now avcodec_decode_video2 returns -1, even though the frame data is arriving successfully.
How can I solve this problem?
Thanks a lot.
Take a look at the ffmpeg example in PATH/TO/FFMPEG/doc/example/decoder_encoder.c,
and at this link:
http://cekirdek.pardus.org.tr/~ismail/ffmpeg-docs/api-example_8c-source.html
Be careful: this code is quite old, and some function names have already changed.
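For reference, a minimal sketch of a decode call against the old API used above; the AVPacket setup around the incoming data (frameBytes/frameLength) is an assumption and has to match how your server frames the stream:
AVPacket packet;
av_init_packet(&packet);
packet.data = (uint8_t *)frameBytes;   // hypothetical buffer holding one encoded access unit
packet.size = frameLength;

int gotPicture = 0;
int len = avcodec_decode_video2(codecCtx, srcFrame, &gotPicture, &packet);
if (len < 0) {
    NSLog(@"decode error; check that privateData (codecCtx->extradata) really holds the stream's SPS/PPS");
} else if (gotPicture) {
    // srcFrame now holds a decoded YUV420P picture; codecCtx->width/height are filled in
}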

Asynchronously dispatched recursive blocks

Suppose I run this code:
__block int step = 0;
__block dispatch_block_t myBlock;
myBlock = ^{
    if (step == STEPS_COUNT)
    {
        return;
    }
    step++;
    dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / 2);
    dispatch_after(delay, dispatch_get_current_queue(), myBlock);
};
dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / 2);
dispatch_after(delay, dispatch_get_current_queue(), myBlock);
The block is invoked once from outside. When the inner invocation is reached, the program crashes without any details. If I use direct invocations everywhere instead of GCD dispatches, everything works fine.
I've also tried calling dispatch_after with a copy of the block. I don't know if this was a step in the right direction or not, but it wasn't enough to make it work.
Ideas?
While trying to solve this problem, I found a snippet of code that solves many of the recursive-block-related issues. I have not been able to find the source again, but I still have the code:
// in some imported file
dispatch_block_t RecursiveBlock(void (^block)(dispatch_block_t recurse)) {
    return ^{ block(RecursiveBlock(block)); };
}

// in your method
dispatch_block_t completeTaskWhenSemaphoreOpen = RecursiveBlock(^(dispatch_block_t recurse) {
    if ([self isSemaphoreOpen]) {
        [self completeTask];
    } else {
        double delayInSeconds = 0.3;
        dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
        dispatch_after(popTime, dispatch_get_main_queue(), recurse);
    }
});
completeTaskWhenSemaphoreOpen();
RecursiveBlock as written takes blocks with no arguments. It can be rewritten for single- or multiple-argument blocks. Memory management is simplified by this construct; there is no chance of a retain cycle, for example.
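For illustration, one possible single-argument variant of the same idea (the typedef and names below are mine, not from the original snippet):
// Hypothetical single-argument variant: the recursive block receives an int.
typedef void (^IntBlock)(int);

IntBlock RecursiveIntBlock(void (^block)(IntBlock recurse, int value)) {
    return ^(int value) { block(RecursiveIntBlock(block), value); };
}

// Usage: counts down from the starting value.
IntBlock countdown = RecursiveIntBlock(^(IntBlock recurse, int value) {
    if (value <= 0) return;
    NSLog(@"%d", value);
    recurse(value - 1);
});
countdown(3);   // logs 3, 2, 1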
My solution was derived entirely from Berik's, so he gets all the credit here. I just felt that a more general framework was needed for the "recursive blocks" problem space (that I haven't found elsewhere), including for the asynchronous case, which is covered here.
Using the first three definitions below makes the fourth and fifth (which are simply examples) possible, and gives an incredibly easy, foolproof, and (I believe) memory-safe way to recurse any block to arbitrary depth.
dispatch_block_t RecursiveBlock(void (^block)(dispatch_block_t recurse)) {
    return ^() {
        block(RecursiveBlock(block));
    };
}

void recurse(void (^recursable)(BOOL *stop))
{
    // in your method
    __block BOOL stop = NO;
    RecursiveBlock(^(dispatch_block_t recurse) {
        if ( !stop ) {
            // Work
            recursable(&stop);
            // Repeat
            recurse();
        }
    })();
}

void recurseAfter(void (^recursable)(BOOL *stop, double *delay))
{
    // in your method
    __block BOOL stop = NO;
    __block double delay = 0;
    RecursiveBlock(^(dispatch_block_t recurse) {
        if ( !stop ) {
            // Work
            recursable(&stop, &delay);
            // Repeat
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delay * NSEC_PER_SEC)), dispatch_get_main_queue(), recurse);
        }
    })();
}
You'll note in the following two examples that the machinery for interacting with the recursion mechanism is extremely lightweight: it basically amounts to wrapping a block in recurse, and that block must take a BOOL *stop argument, which should be set at some point to exit the recursion (a familiar pattern from some of the Cocoa block iterators).
- (void)recurseTo:(int)max
{
    __block int i = 0;
    void (^recursable)(BOOL *) = ^(BOOL *stop) {
        // Do
        NSLog(@"testing: %d", i);
        // Criteria
        i++;
        if ( i >= max ) {
            *stop = YES;
        }
    };
    recurse(recursable);
}
+ (void)makeSizeGoldenRatio:(UIView *)view
{
    __block CGFloat fibonacci_1_h = 1.f;
    __block CGFloat fibonacci_2_w = 1.f;
    recurse(^(BOOL *stop) {
        // Criteria
        if ( fibonacci_2_w > view.superview.bounds.size.width || fibonacci_1_h > view.superview.bounds.size.height ) {
            // Calculate
            CGFloat goldenRatio = fibonacci_2_w / fibonacci_1_h;
            // Frame
            CGRect newFrame = view.frame;
            newFrame.size.width = fibonacci_1_h;
            newFrame.size.height = goldenRatio * newFrame.size.width;
            view.frame = newFrame;
            // Done
            *stop = YES;
            NSLog(@"Golden Ratio %f -> %@ for view", goldenRatio, NSStringFromCGRect(view.frame));
        } else {
            // Iterate
            CGFloat old_fibonnaci_2 = fibonacci_2_w;
            fibonacci_2_w = fibonacci_2_w + fibonacci_1_h;
            fibonacci_1_h = old_fibonnaci_2;
            NSLog(@"Fibonnaci: %f %f", fibonacci_1_h, fibonacci_2_w);
        }
    });
}
recurseAfter works much the same, though I won't offer a contrived example here. I am using all three of these without issue, replacing my old -performBlock:afterDelay: pattern.
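That said, a minimal sketch of what a call to recurseAfter might look like, reusing the stop/delay out-parameters defined above (the polling condition is hypothetical):
__block int attempts = 0;
recurseAfter(^(BOOL *stop, double *delay) {
    attempts++;
    if ([self isSemaphoreOpen] || attempts >= 10) {
        *stop = YES;    // condition met (or gave up): end the recursion
    } else {
        *delay = 0.5;   // poll again after half a second
    }
});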
It looks like there is no problem except the delay variable. The block always uses the same time, which is generated on the first line of your snippet. You have to call dispatch_time every time if you want to delay dispatching the block.
    step++;
    dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / 2);
    dispatch_after(delay, dispatch_get_current_queue(), myBlock);
};
EDIT:
I understand now.
The block literal is stored on the stack, and the myBlock variable holds the address of that stack block.
The first dispatch_after copies the block from myBlock, that is, from the stack address, and this address is still valid at that point because the block is still in scope.
After that, the block goes out of scope, so myBlock now holds an invalid address. dispatch_after keeps its own copy of the block on the heap, so that copy is safe.
Then the second dispatch_after, inside the block, tries to copy from myBlock, which is an invalid address because the stack block has already gone out of scope. It ends up executing a corrupted stack block.
Thus, you have to Block_copy the block.
myBlock = Block_copy(^{
    ...
});
And don't forget to Block_release the block when you don't need it any more:
Block_release(myBlock);
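Putting it together, a minimal sketch of the original snippet with the block copied to the heap under MRC (the placement of Block_release when the recursion terminates is my addition):
__block int step = 0;
__block dispatch_block_t myBlock;
myBlock = Block_copy(^{
    if (step == STEPS_COUNT) {
        Block_release(myBlock);   // done recursing: balance the Block_copy
        return;
    }
    step++;
    dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / 2);
    dispatch_after(delay, dispatch_get_current_queue(), myBlock);
});
dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / 2);
dispatch_after(delay, dispatch_get_current_queue(), myBlock);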
Opt for a custom dispatch source.
dispatch_queue_t queue = dispatch_queue_create( NULL, DISPATCH_QUEUE_SERIAL );
__block unsigned long steps = 0;
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, queue);
dispatch_source_set_event_handler(source, ^{
    if( steps == STEPS_COUNT ) {
        dispatch_source_cancel(source);
        return;
    }
    dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / 2);
    dispatch_after(delay, queue, ^{
        steps += dispatch_source_get_data(source);
        dispatch_source_merge_data(source, 1);
    });
});
dispatch_resume( source );
dispatch_source_merge_data(source, 1);
I think you have to copy the block if you want it to stick around (releasing it when you don't want it to call itself anymore).

EXC_BAD_ACCESS if I call an Objective-C block directly

Continuing to try to understand blocks in Objective-C. I have the following function:
typedef void(^TAnimation)(void);

TAnimation makeAnim(UIView *aView, CGFloat angle, CGFloat x, CGFloat y,
                    CGFloat width, CGFloat height, UIInterfaceOrientation uiio) {
    return Block_copy(^{
        aView.transform = CGAffineTransformMakeRotation(angle);
        aView.frame = CGRectMake(x, y, width, height);
        [UIApplication sharedApplication].statusBarOrientation = uiio;
    });
}
When I try to do the following:
TAnimation f = makeAnim( ... );
f();
I get an EXC_BAD_ACCESS. However, if I do the following instead:
TAnimation f = makeAnim( ... );
[UIView animateWithDuration:0 delay:0 options:0 animations:f completion:NULL];
it works fine. What is the problem in the first scenario?
Try using NSZombieEnabled. When an object is deallocated it turns into an NSZombie, so when you message it, an exception is thrown. To activate NSZombieEnabled, open the info window for the executable, go to Arguments, and enter NSZombieEnabled with a value of YES under "Variables to be set in the Environment:".
A very simple block-based example like this:
#import <Foundation/Foundation.h>

typedef void(^printerBlock)(void);

printerBlock createPrinter(NSString *thingToPrint) {
    return Block_copy(^{
        NSLog(@"Printing: %@", thingToPrint);
    });
}

int main (int argc, char const* argv[]) {
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    printerBlock pb = createPrinter(@"Testing string.");
    pb();
    [pool drain];
    return 0;
}
prints this out:
2011-10-22 21:28:14.316 blocker[12834:707] Printing: Testing string.
when I compile the program as "blocker", so there must be some other reason your direct block invocation is failing. One possibility is that the view you're passing in is being over-released, in which case the NSZombieEnabled advice will help you out.
If it's not the case that the view is being over-released, then you'll want to run this in the debugger and figure out exactly where things are falling over.
We'll probably have to see more of your code in order to figure out what's actually breaking.

Any way to tell if my iPhone app is running under the debugger at runtime?

I would like to have my error handling code behave differently if it is running under the debugger. Specifically, if I am running on a handset, not attached to a debugger and fail an assertion I want to send the error to my server. When I am under gdb, I want to break into the debugger.
Although I can imagine how Apple would write the code, I can't find any documentation of a runtime way to test for the presence of the debugger.
The method described here worked fine for me.
I tested it by placing it in -(void)viewDidLoad:
// Requires <sys/types.h>, <sys/sysctl.h>, and <unistd.h> (for getpid()).
- (void)viewDidLoad {
    [super viewDidLoad];

    int mib[4];
    size_t bufSize = 0;
    int local_error = 0;
    struct kinfo_proc kp;

    mib[0] = CTL_KERN;
    mib[1] = KERN_PROC;
    mib[2] = KERN_PROC_PID;
    mib[3] = getpid();

    bufSize = sizeof (kp);
    if ((local_error = sysctl(mib, 4, &kp, &bufSize, NULL, 0)) < 0) {
        label.text = @"Failure calling sysctl";
        return;
    }

    if (kp.kp_proc.p_flag & P_TRACED)
        label.text = @"I am traced";
    else
        label.text = @"I am not traced";
}
Why not redefine assert to do what you want when not compiled for debug?
Another option is to create your own assert function, on which you can then set a breakpoint after loading into GDB.
The assert prototype is
void assert(int expression);

void assert(int expression)
{
    if( !expression )
    {
        // enable break point here
        // log to server
    }
}
or add the breakpoint inside the log-to-server code.
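Combining the two answers, a minimal sketch of how this could look, wrapping the sysctl check in a helper and branching inside a custom assert macro (AmIBeingDebugged, MyAssert, and reportAssertionToServer are my own names; the last is a hypothetical reporting function):
#include <sys/types.h>
#include <sys/sysctl.h>
#include <unistd.h>
#include <signal.h>

// Returns true if a debugger (e.g. gdb) is attached to this process.
static bool AmIBeingDebugged(void)
{
    int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid() };
    struct kinfo_proc info;
    size_t size = sizeof(info);
    info.kp_proc.p_flag = 0;
    if (sysctl(mib, 4, &info, &size, NULL, 0) != 0)
        return false;
    return (info.kp_proc.p_flag & P_TRACED) != 0;
}

// Break into the debugger when attached; otherwise report to the server.
#define MyAssert(expr) \
    do { \
        if (!(expr)) { \
            if (AmIBeingDebugged()) \
                raise(SIGTRAP); /* stops in the debugger at this line */ \
            else \
                reportAssertionToServer(#expr, __FILE__, __LINE__); /* hypothetical */ \
        } \
    } while (0)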