Bad access on CGContextClearRect - iPhone

I am getting the error EXC_BAD_ACCESS on the line CGContextClearRect(c, self.view.bounds) below. I can't seem to figure out why. This is in a UIViewController class. Here is the function I am in during the crash.
- (void)level0Func {
printf("level0Func\n");
frameStart = [NSDate date];
UIImage *img;
CGContextRef c = startContext(self.view.bounds.size);
printf("address of context: %x\n", c);
/* drawing/updating code */ {
CGContextClearRect(c, self.view.bounds); // crash occurs here
CGContextSetFillColorWithColor(c, [UIColor greenColor].CGColor);
CGContextFillRect(c, self.view.bounds);
CGImageRef cgImg = CGBitmapContextCreateImage(c);
img = [UIImage imageWithCGImage:cgImg]; // this sets the image to be passed to the view for drawing
// CGImageRelease(cgImg);
}
endContext(c);
}
Here are my startContext() and endContext():
CGContextRef createContext(int width, int height) {
CGContextRef r = NULL;
CGColorSpaceRef colorSpace;
void *bitmapData;
int byteCount;
int bytesPerRow;
bytesPerRow = width * 4;
byteCount = width * height;
colorSpace = CGColorSpaceCreateDeviceRGB();
printf("allocating %i bytes for bitmap data\n", byteCount);
bitmapData = malloc(byteCount);
if (bitmapData == NULL) {
fprintf(stderr, "could not allocate memory when creating context");
//free(bitmapData);
CGColorSpaceRelease(colorSpace);
return NULL;
}
r = CGBitmapContextCreate(bitmapData, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
return r;
}
CGContextRef startContext(CGSize size) {
CGContextRef r = createContext(size.width, size.height);
// UIGraphicsPushContext(r); wait a second, we don't need to push anything b/c we can draw to an offscreen context
return r;
}
void endContext(CGContextRef c) {
free(CGBitmapContextGetData(c));
CGContextRelease(c);
}
What I am basically trying to do is draw to a context that I am not pushing onto the stack so I can create a UIImage out of it. Here is my output:
wait_fences: failed to receive reply: 10004003
level0Func
allocating 153600 bytes for bitmap data
address of context: 68a7ce0
Any help would be appreciated. I am stumped.

You're not allocating enough memory. Here are the relevant lines from your code:
bytesPerRow = width * 4;
byteCount = width * height;
bitmapData = malloc(byteCount);
When you compute bytesPerRow, you (correctly) multiply the width by 4 because each pixel requires 4 bytes. But when you compute byteCount, you do not multiply by 4, so you act as though each pixel requires only 1 byte. Your own log confirms it: 153600 is 320 × 480, one byte per pixel, whereas a 320 × 480 RGBA bitmap needs 320 × 480 × 4 = 614400 bytes. Quartz assumes the buffer is big enough and runs off the end of it, which is why the very first call that touches it (CGContextClearRect) crashes.
Change it to this:
bytesPerRow = width * 4;
byteCount = bytesPerRow * height;
bitmapData = malloc(byteCount);
Or, don't allocate any memory at all, and Quartz will allocate the correct amount for you and free it when the context is released. Just pass NULL as the first argument of CGBitmapContextCreate:
r = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
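One caveat with the NULL route: your endContext() currently frees CGBitmapContextGetData(c), which is only correct when you malloc'd the buffer yourself. If Quartz owns the buffer, that free() is wrong, and the cleanup shrinks to something like:
void endContext(CGContextRef c) {
    // With NULL bitmapData the context owns its backing store,
    // so releasing the context is the only cleanup required.
    CGContextRelease(c);
}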

Check that the values of self.view.bounds.size are valid. This method could be called before the view's size has been properly set up.
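If that turns out to be the cause, a minimal guard would be something like this (a sketch; it assumes you can simply retry after layout):
- (void)level0Func {
    CGSize size = self.view.bounds.size;
    if (size.width < 1 || size.height < 1) {
        // The view has not been laid out yet; defer the work until
        // viewDidAppear: or viewDidLayoutSubviews instead of drawing
        // into a zero-sized context.
        return;
    }
    // ... proceed with startContext(size) as before ...
}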

Sprite kit and colorWithPatternImage

Is there any way of repeating an image across an area, like on an SKSpriteNode? SKColor colorWithPatternImage unfortunately doesn't work.
Edit:
I wrote the following categories; they seem to work so far. This is on Mac, not tested on iOS, so it likely needs some fixing for iOS.
// Add to SKSpriteNode category or something.
+(SKSpriteNode*)patternWithImage:(NSImage*)image size:(const CGSize)SIZE;
// Add to SKTexture category or something.
+(SKTexture*)patternWithSize:(const CGSize)SIZE image:(NSImage*)image;
And the implementations. Put in respective files.
+(SKSpriteNode*)patternWithImage:(NSImage*)imagePattern size:(const CGSize)SIZE {
SKTexture* texturePattern = [SKTexture patternWithSize:SIZE image:imagePattern];
SKSpriteNode* sprite = [SKSpriteNode spriteNodeWithTexture:texturePattern];
return sprite;
}
+(SKTexture*)patternWithSize:(const CGSize)SIZE image:(NSImage*)image {
// Hopefully this function would be platform independent one day.
SKColor* colorPattern = [SKColor colorWithPatternImage:image];
// Correct way to find scale?
DLog(#"backingScaleFactor: %f", [[NSScreen mainScreen] backingScaleFactor]);
const CGFloat SCALE = [[NSScreen mainScreen] backingScaleFactor];
const size_t WIDTH_PIXELS = SIZE.width * SCALE;
const size_t HEIGHT_PIXELS = SIZE.height * SCALE;
CGContextRef cgcontextref = MyCreateBitmapContext(WIDTH_PIXELS, HEIGHT_PIXELS);
NSAssert(cgcontextref != NULL, @"Failed creating context!");
// CGBitmapContextCreate(
// NULL, // let the OS handle the memory
// WIDTH_PIXELS,
// HEIGHT_PIXELS,
CALayer* layer = CALayer.layer;
layer.frame = CGRectMake(0, 0, SIZE.width, SIZE.height);
layer.backgroundColor = colorPattern.CGColor;
[layer renderInContext:cgcontextref];
CGImageRef imageref = CGBitmapContextCreateImage(cgcontextref);
SKTexture* texture1 = [SKTexture textureWithCGImage:imageref];
DLog(#"size of pattern texture: %#", NSStringFromSize(texture1.size));
CGImageRelease(imageref);
CGContextRelease(cgcontextref);
return texture1;
}
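For reference, a hypothetical call site (the image name and scene variable are made up for illustration, not from the original post):
SKSpriteNode *tiled = [SKSpriteNode patternWithImage:[NSImage imageNamed:@"tile"] size:CGSizeMake(800, 600)];
[scene addChild:tiled];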
OK, this is needed as well; it likely only works on Mac.
CGContextRef MyCreateBitmapContext(const size_t pixelsWide, const size_t pixelsHigh) {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
//int bitmapByteCount;
size_t bitmapBytesPerRow;
bitmapBytesPerRow = (pixelsWide * 4);// 1
//bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);// 2
bitmapData = NULL;
#define kBitmapInfo kCGImageAlphaPremultipliedLast
//#define kBitmapInfo kCGImageAlphaPremultipliedFirst
//#define kBitmapInfo kCGImageAlphaNoneSkipFirst
// According to http://stackoverflow.com/a/18921840/129202 it should be safe to just cast
CGBitmapInfo bitmapinfo = (CGBitmapInfo)kBitmapInfo;
context = CGBitmapContextCreate (bitmapData,// 4
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
bitmapinfo
);
if (context== NULL)
{
free (bitmapData);// 5
fprintf (stderr, "Context not created!");
return NULL;
}
CGColorSpaceRelease( colorSpace );// 6
return context;// 7
}
iOS working code:
CGRect textureSize = CGRectMake(0, 0, 488, 650);
CGImageRef backgroundCGImage = [UIImage imageNamed:@"background.png"].CGImage;
UIGraphicsBeginImageContext(self.level.worldSize); // use WithOptions to set scale for retina display
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawTiledImage(context, textureSize, backgroundCGImage);
UIImage *tiledBackground = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
SKTexture *backgroundTexture = [SKTexture textureWithCGImage:tiledBackground.CGImage];
SKSpriteNode *backgroundNode = [SKSpriteNode spriteNodeWithTexture:backgroundTexture];
[self addChild:backgroundNode];
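As the comment above notes, plain UIGraphicsBeginImageContext creates a scale-1 context; on a Retina display you would presumably want the WithOptions variant instead (a one-line sketch, reusing the same worldSize):
UIGraphicsBeginImageContextWithOptions(self.level.worldSize, NO, [UIScreen mainScreen].scale);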
I found that the above linked Sprite Kit shader did not work with Xcode 10, so I rolled my own. Here is the shader code:
void main(void) {
vec2 offset = sprite_size - fmod(node_size, sprite_size) / 2;
vec2 pixel = v_tex_coord * node_size + offset;
vec2 target = fmod(pixel, sprite_size) / sprite_size;
vec4 px = texture2D(u_texture, target);
gl_FragColor = px;
}
Note that the offset variable is only required if you want the pattern centralised; you can omit it (and its addition on the following line) if you want your tile pattern to start at the tile texture's bottom-left corner.
Also note that you will need to manually add the node_size and sprite_size variables to the shader (and update them if they change) as neither of these have standard representations any more.
// The sprite node's texture will be used as a single tile
let node = SKSpriteNode(imageNamed: "TestTile")
let tileShader = SKShader(fileNamed: "TileShader.fsh")
// The shader needs to know the tile size and the node's final size.
tileShader.attributes = [
SKAttribute(name: "sprite_size", type: .vectorFloat2),
SKAttribute(name: "node_size", type: .vectorFloat2)
]
// At this point, the node's size is equal to its texture's size.
// We can therefore use it as the sprite size in the shader.
let spriteSize = vector_float2(
Float(node.size.width),
Float(node.size.height)
)
// Replace this with the desired size of the node.
// We will set this as the size of the node later.
let size = CGSize(width: 512, height: 256)
let nodeSize = vector_float2(
Float(size.width),
Float(size.height)
)
node.setValue(
SKAttributeValue(vectorFloat2: spriteSize),
forAttribute: "sprite_size"
)
node.setValue(
SKAttributeValue(vectorFloat2: nodeSize),
forAttribute: "node_size"
)
node.shader = tileShader
node.size = size
Yes, it is possible to implement that with a call to CGContextDrawTiledImage(), but that wastes a lot of memory for medium and large nodes. A significantly improved approach is described at spritekit_repeat_shader; the blog post provides example GLSL code along with BSD-licensed source.

Saving imageRef from GLPaint creates completely black image

Hi, I am trying out a drawing app and have a problem when it comes to saving the image that is drawn. Right now I'm very early in learning this, but I have added code from:
How to get UIImage from EAGLView? to save the image that was drawn.
I have created a new app, then displayed a viewController that I created. In IB I have added a view which is the PaintingView, and an imageView lies behind it.
The only modification I have made to the PaintingView so far is to set its background to clear so that I can display an image behind it. The drawing works great; my only problem is saving.
- (void)saveImageFromGLView:(UIView *)glView {
if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
{
/// This IS being activated with code 0
NSLog(#"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
}
int s = 1;
UIScreen* screen = [ UIScreen mainScreen ];
if ( [ screen respondsToSelector:@selector(scale) ] )
s = (int) [ screen scale ];
const int w = self.frame.size.width;
const int h = self.frame.size.height;
const NSInteger myDataLength = w * h * 4 * s * s;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < h*s; y++)
{
memcpy( buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s );
}
free(buffer); // work with the flipped buffer, so get rid of the original one.
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * w * s;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
UIImageWriteToSavedPhotosAlbum( myImage, nil, nil, nil );
CGImageRelease( imageRef );
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
free(buffer2);
}
Adding the above code to the sample app works fine; the problem is doing it in my new app. The only difference I can tell is that I have not included PaintingWindow; could that be the problem?
It's as if the saveImage method isn't seeing the data for drawings.
The save method must be called while your OpenGL context is current.
To solve this, you can move the method into the same rendering .m file and call it from outside.
You also need to consider the OpenGL clear color.
(More detailed explanation in the comments.)
I found that changing the CGBitmapInfo to:
CGBitmapInfo bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast;
results in a transparent background.
Ah, you have to do this at the beginning.
[EAGLContext setCurrentContext:drawContext];
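Putting the two previous answers together, a minimal sketch of the top of the save method (drawContext and viewFramebuffer are assumed ivar names, as in the GLPaint sample; adjust to your own):
- (void)saveImageFromGLView:(UIView *)glView {
    // Make the painting view's context current before touching any GL state;
    // otherwise glReadPixels reads from whatever context happens to be bound.
    [EAGLContext setCurrentContext:drawContext];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    // ... glReadPixels and the CGImage creation as above ...
}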

split UIImage by colors and create 2 images

I have looked through "replacing colors in an image" but cannot get it to work the way I need, because I am trying to do it with every color but one, as well as with transparency.
What I am looking for is a way to take in an image and split out a color (say, all the pure black) from that image. Then take that split-out portion and make a new image with a transparent background and the split-out portion.
(Here is an example of the idea: say I take a screenshot of this page, make every color but pure black transparent, and save that new image to the library or put it into a UIImageView.)
I have looked into CGImageCreateWithMaskingColors but can't seem to do what I need with the transparent portion, and I don't really understand the colorMasking input other than that you can provide it with a {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} color mask; when I do, it colors everything. Any ideas or input would be great.
Sounds like you're going to have to get access to the underlying bytes and write code to process them directly. You can use CGImageGetDataProvider() to get access to the data of an image, but there's no guarantee that the format will be something you know how to handle. Alternately you can create a new CGContextRef using a specific format you know how to handle, then draw the original image into your new context, then process the underlying data. Here's a quick attempt at doing what you want (uncompiled):
- (UIImage *)imageWithBlackPixels:(UIImage *)image {
CGImageRef cgImage = image.CGImage;
// create a premultiplied ARGB context with 32bpp
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bpc = 8; // bits per component
size_t bpp = bpc * 4 / 8; // bytes per pixel
size_t bytesPerRow = bpp * width;
unsigned char *data = malloc(bytesPerRow * height); // byte pointer so individual components can be indexed below
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst; // default (big-endian) byte order: bytes are A,R,G,B in memory
CGContextRef ctx = CGBitmapContextCreate(data, width, height, bpc, bytesPerRow, colorspace, bitmapInfo);
CGColorSpaceRelease(colorspace);
if (ctx == NULL) {
// couldn't create the context - double-check the parameters?
free(data);
return nil;
}
// draw the image into the context
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
// replace all non-black pixels with transparent
// preserve existing transparency on black pixels
for (size_t y = 0; y < height; y++) {
size_t rowStart = bytesPerRow * y;
for (size_t x = 0; x < width; x++) {
size_t pixelOffset = rowStart + x*bpp;
// check the RGB components of the pixel
if (data[pixelOffset+1] != 0 || data[pixelOffset+2] != 0 || data[pixelOffset+3] != 0) {
// this pixel contains non-black. zero it out
memset(&data[pixelOffset], 0, 4);
}
}
}
// create our new image and release the context data
CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
free(data);
UIImage *newImage = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
CGImageRelease(newCGImage);
return newImage;
}
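A hypothetical call site (the image name is made up for illustration):
UIImage *source = [UIImage imageNamed:@"screenshot.png"];
UIImage *blackOnly = [self imageWithBlackPixels:source];
UIImageWriteToSavedPhotosAlbum(blackOnly, nil, nil, nil); // or assign it to a UIImageView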

iPhone Objective C: UIImage - Region of Interest

Is there any way to get the minimum rectangular area which contains all the non-transparent parts of a UIImage?
Reading pixel by pixel to check where alpha == 0 isn't a good way, I believe.
Any better way?
Many thanks for reading
I don't think there's a way to do this without examining the image pixel by pixel. Where are the images coming from? If you control them, you can at least do the pixel by pixel part only once and then either cache the information or distribute it along with the images if people are downloading them.
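A minimal sketch of that cache-it-once idea (the dictionary, the key scheme, and reuse of the getROIRect: method from the answer below are assumptions, not part of this answer):
static NSMutableDictionary *roiCache = nil;
- (CGRect)cachedROIForImage:(UIImage *)image named:(NSString *)name {
    if (roiCache == nil) roiCache = [[NSMutableDictionary alloc] init];
    NSString *cached = [roiCache objectForKey:name];
    if (cached != nil) return CGRectFromString(cached); // hit: skip the pixel scan entirely
    CGRect roi = [self getROIRect:image]; // the expensive pixel-by-pixel pass, done once per image
    [roiCache setObject:NSStringFromCGRect(roi) forKey:name];
    return roi;
}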
Okay here is my ugly solution to this problem. I hope there is a better way to do this.
- (CGRect) getROIRect:(UIImage*)pImage {
CGRect roiRect = {{0,0}, {0,0}};
int vMaxX = -1; // sentinels chosen so any pixel coordinate will replace them,
int vMinX = INT_MAX; // no matter how large the image is
int vMaxY = -1;
int vMinY = INT_MAX;
int x,y;
CGImageRef inImage = pImage.CGImage;
// Create an offscreen bitmap context to draw the image into. Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue.
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return roiRect; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data ;
BOOL tSet = NO;
data= CGBitmapContextGetData (cgctx);
if (data != NULL) {
for (x=0;x<w;x++) {
for (y=0;y<h;y++) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*y)+x); // x and y are already integers, no rounding needed
int alpha = data[offset];
if (alpha > 0) {
tSet = YES;
if (x > vMaxX) {
vMaxX = x;
}
if (x < vMinX) {
vMinX = x;
}
if (y > vMaxY) {
vMaxY = y;
}
if (y < vMinY) {
vMinY = y;
}
}
}
}
}
if (!tSet) {
vMaxX = w;
vMinX = 0;
vMaxY = h;
vMinY = 0;
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
CGRect roiRect2 = {{vMinX,vMinY},{vMaxX - vMinX,vMaxY - vMinY}};
return roiRect2;
}
- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef) inImage {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
//colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL)
{
fprintf(stderr, "Error allocating color space\n");
return NULL;
}
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
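Once you have the ROI, you would presumably crop to it with CGImageCreateWithImageInRect; a hedged sketch (it assumes a scale-1 image, since getROIRect: works in pixel coordinates):
CGRect roi = [self getROIRect:myImage];
CGImageRef croppedRef = CGImageCreateWithImageInRect(myImage.CGImage, roi);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);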

Multiple Image Operations Crash iPhone App

I'm new to iPhone app development, so it's likely that I'm doing something wrong.
Basically, I'm loading a bunch of images from the internet and then cropping them. I managed to find examples of loading images asynchronously and adding them into views. I've managed to do that by adding an image with NSData, through an NSOperation which was added to an NSOperationQueue.
Then, because I had to make fixed-size thumbs, I needed a way to crop these images, so I found a script on the net which basically uses UIGraphicsBeginImageContext(), UIGraphicsGetImageFromCurrentImageContext() and UIGraphicsEndImageContext() to draw the cropped image, along with some unimportant size calculations.
The thing is, the method works, but since it's generating around 20 of these images, it randomly crashes after a few of them are generated, or sometimes after I close and re-open the app one or two more times.
What should I do in these cases? I tried to make these methods run asynchronously somehow as well, with NSOperations and an NSOperationQueue, but no luck.
If the crop code is more relevant than I think, here it is:
UIGraphicsBeginImageContext(CGSizeMake(50, 50));
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = CGPointMake(0.0, 0.0); //this is actually generated
// based on the sourceImage size
thumbnailRect.size.width = 50;
thumbnailRect.size.height = 50;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // every Begin needs a matching End
Thanks!
The code you use to scale the images looks too simple.
Here is the one I am using. As you can see, there are no leaks; objects are released when no longer needed. Hope this helps.
// Draw the image into a pixelsWide x pixelsHigh bitmap and use that bitmap to
// create a new UIImage
- (UIImage *) createImage: (CGImageRef) image width: (int) pixelWidth height: (int) pixelHeight
{
// Set the size of the output image
CGRect aRect = CGRectMake(0.0f, 0.0f, pixelWidth, pixelHeight);
// Create a bitmap context to store the new thumbnail
CGContextRef context = MyCreateBitmapContext(pixelWidth, pixelHeight);
// Clear the context and draw the image into the rectangle
CGContextClearRect(context, aRect);
CGContextDrawImage(context, aRect, image);
// Return a UIImage populated with the new resized image
CGImageRef myRef = CGBitmapContextCreateImage (context);
UIImage *img = [UIImage imageWithCGImage:myRef];
free(CGBitmapContextGetData(context));
CGContextRelease(context);
CGImageRelease(myRef);
return img;
}
// MyCreateBitmapContext: Source based on Apple Sample Code
CGContextRef MyCreateBitmapContext (int pixelsWide,
int pixelsHigh)
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8,
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
if (context== NULL)
{
free (bitmapData);
CGColorSpaceRelease( colorSpace );
fprintf (stderr, "Context not created!");
return NULL;
}
CGColorSpaceRelease( colorSpace );
return context;
}
Your app is crashing because the calls you're using (e.g., UIGraphicsBeginImageContext) manipulate UIKit's context stack, which you can only safely do from the main thread.
unforgiven's solution won't crash when used in a thread as it doesn't manipulate the context stack.
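If you want to keep the downloads on their NSOperationQueue and only hop to the main thread for the UIKit drawing, one option (a sketch, assuming you can target iOS 4+ for GCD; sourceImage and the 50x50 size come from the question) is:
dispatch_async(dispatch_get_main_queue(), ^{
    UIGraphicsBeginImageContext(CGSizeMake(50, 50));
    [sourceImage drawInRect:CGRectMake(0, 0, 50, 50)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // hand newImage back to the view or cache that needs it
});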
It does sound suspiciously like an out-of-memory crash. Fire up the Leaks tool and check your overall memory trends.