Quartz masks in iOS -- do they still cause crashes?

According to this question from 2008, using Quartz masks can cause crashes. Is that still the case?
Basically, what I want to do is to draw dice of different colors on a fixed background, using one png for each die shape (there are a lot of them), and somehow add the colors in code.
EDIT: To clarify, I want to use one png file to make, for example, all of the following: [images of the same die rendered in several different colors, omitted here]
Basically, I want to multiply the red, green, and blue components of my image by three independent constants, while leaving the alpha unchanged.

Here's a shot. Tested, no leaks. No crashes.
.h
#import <UIKit/UIKit.h>

@interface ImageModViewController : UIViewController {
}

@property (nonatomic, retain) IBOutlet UIButton *test_button;
@property (nonatomic, retain) IBOutlet UIImageView *source_image;
@property (nonatomic, retain) IBOutlet UIImageView *destination_image;
@property (nonatomic, assign) float kr;
@property (nonatomic, assign) float kg;
@property (nonatomic, assign) float kb;
@property (nonatomic, assign) float ka;

- (IBAction)touched_test_button:(id)sender;
- (UIImage *)MultiplyImagePixelsByRGBA:(UIImage *)source kr:(float)red_k kg:(float)green_k kb:(float)blue_k ka:(float)alpha_k;

@end
.m
#define BITS_PER_WORD 32
#define BITS_PER_CHANNEL 8
#define COLOR_CHANNELS 4
#define BYTES_PER_PIXEL (BITS_PER_WORD / BITS_PER_CHANNEL)

#import "ImageModViewController.h"

@implementation ImageModViewController

@synthesize test_button;
@synthesize source_image;
@synthesize destination_image;
@synthesize kr;
@synthesize kg;
@synthesize kb;
@synthesize ka;
- (IBAction)touched_test_button:(id)sender
{
    // Setup coefficients
    kr = 1.0;
    kg = 0.0;
    kb = 0.0;
    ka = 1.0;
    // Set the UIImageView's image to the result of multiplying the pixels by the coefficients
    destination_image.image = [self MultiplyImagePixelsByRGBA:source_image.image kr:kr kg:kg kb:kb ka:ka];
}
- (UIImage *)MultiplyImagePixelsByRGBA:(UIImage *)source kr:(float)red_k kg:(float)green_k kb:(float)blue_k ka:(float)alpha_k
{
    // Get image information
    CGImageRef bitmap = [source CGImage];
    int width = source.size.width;
    int height = source.size.height;
    int total_pixels = width * height;

    // Allocate a buffer
    unsigned char *buffer = malloc(total_pixels * COLOR_CHANNELS);

    // Copy the image data into the buffer
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, BITS_PER_CHANNEL, width * BYTES_PER_PIXEL, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), bitmap);
    CGContextRelease(context);

    // Clamp the coefficients to [0, 1]; note that we clamp the parameters the
    // loop actually uses, not the instance variables
    red_k   = (red_k   < 0.0f) ? 0.0f : ((red_k   > 1.0f) ? 1.0f : red_k);
    green_k = (green_k < 0.0f) ? 0.0f : ((green_k > 1.0f) ? 1.0f : green_k);
    blue_k  = (blue_k  < 0.0f) ? 0.0f : ((blue_k  > 1.0f) ? 1.0f : blue_k);
    alpha_k = (alpha_k < 0.0f) ? 0.0f : ((alpha_k > 1.0f) ? 1.0f : alpha_k);

    // Process the image in the buffer
    int offset = 0; // Used to index into the buffer
    for (int i = 0; i < total_pixels; i++)
    {
        buffer[offset] = (unsigned char)(buffer[offset] * red_k);   offset++;
        buffer[offset] = (unsigned char)(buffer[offset] * green_k); offset++;
        buffer[offset] = (unsigned char)(buffer[offset] * blue_k);  offset++;
        buffer[offset] = (unsigned char)(buffer[offset] * alpha_k); offset++;
    }

    // Put the buffer back into a UIImage; the color space is released only
    // after the second context has been created from it
    context = CGBitmapContextCreate(buffer, width, height, BITS_PER_CHANNEL, width * BYTES_PER_PIXEL, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGColorSpaceRelease(cs);
    bitmap = CGBitmapContextCreateImage(context);
    UIImage *output = [UIImage imageWithCGImage:bitmap];
    CGImageRelease(bitmap); // CGBitmapContextCreateImage returns an owned reference
    CGContextRelease(context);
    free(buffer);
    return output;
}
- (void)dealloc
{
    [test_button release];
    [source_image release];
    [destination_image release];
    [super dealloc];
}

- (void)didReceiveMemoryWarning
{
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc. that aren't in use.
}

#pragma mark - View lifecycle

/*
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad
{
    [super viewDidLoad];
}
*/

- (void)viewDidUnload
{
    [super viewDidUnload];
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    // Return YES for supported orientations
    return (interfaceOrientation == UIInterfaceOrientationPortrait);
}

@end
I set up the xib with two UIImageViews and one UIButton. The top UIImageView was pre-loaded with an image in Interface Builder. Touch the test button and the image is processed and assigned to the second UIImageView.
BTW, I had a little trouble with your icons copied right off your post; the transparency didn't work very well for some reason. I used fresh PNG test images of my own, created in Photoshop with and without transparency, and it all worked as advertised.
What you do inside the loop is to be modified per your needs, of course.
Watch endianness -- it can really mess things up!
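For example, one way to avoid surprises is to spell out the byte order instead of relying on the default (a small variation on the context creation above, not the original poster's code; kCGBitmapByteOrder32Big gives the R, G, B, A in-memory byte order the processing loop assumes):
// Explicit byte order: RGBA bytes in memory, alpha last, premultiplied
CGContextRef context = CGBitmapContextCreate(buffer, width, height,
    BITS_PER_CHANNEL, width * BYTES_PER_PIXEL, cs,
    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);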

I've used masks fairly extensively recently in an iPhone app with no crashes. The code in that link doesn't even seem to be using masks, just clipping; the only mention of masks was as something else he tried. More likely he was calling it from a background thread: UIGraphicsBeginImageContext isn't thread safe.
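If you do need to generate images from a background task on those older SDKs, one hedge (a minimal sketch; the size and drawing are placeholders) is to bounce the context work back to the main queue:
// Sketch: route UIGraphicsBeginImageContext work through the main queue,
// since it was documented as main-thread-only on older SDKs
CGSize renderSize = CGSizeMake(100, 100); // placeholder target size
dispatch_async(dispatch_get_main_queue(), ^{
    UIGraphicsBeginImageContext(renderSize);
    // ... Core Graphics drawing here ...
    UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // hand `rendered` off to whatever needs it
});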
Without knowing exactly what effect you're trying to get, it's hard to give advice on how to do it. A mask certainly could work, either alone (to get a sort of silkscreened effect) or to clip an overlay color drawn on a more realistic image. I'd probably use a mask or a path to set the clipping, then draw the die image (using kCGBlendModeNormal or kCGBlendModeCopy), and then paint the appropriate solid color over it using kCGBlendModeColor.
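As a rough sketch of that last recipe (assuming dieImage is one of your die PNGs and tint is the desired UIColor; one way to do it, not necessarily the only one):
// Sketch: draw the base art, colorize it, then restore the PNG's alpha
UIGraphicsBeginImageContext(dieImage.size);
CGRect box = CGRectMake(0, 0, dieImage.size.width, dieImage.size.height);
[dieImage drawInRect:box]; // base art, normal blending
[tint setFill];
UIRectFillUsingBlendMode(box, kCGBlendModeColor); // colorize, keeping luminance
[dieImage drawInRect:box blendMode:kCGBlendModeDestinationIn alpha:1.0]; // clip back to the die's alpha
UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();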

Related

How to render UIImage (picked from photo library) into CIContext / GLKView?

After researching and trying many things, I finally brought myself to ask SO:
basically, I would like to pick a photo and then render it into a CIContext. I know there are many other image rendering techniques available (e.g. UIImageView), but I have to use low-level OpenGL ES rendering.
Please check my full code (except the preloading of the UIImagePickerController in the AppDelegate):
#import "ViewController.h"
#import "AppDelegate.h"
#import <GLKit/GLKit.h>
#interface ViewController () <GLKViewDelegate, UINavigationControllerDelegate, UIImagePickerControllerDelegate> {
GLKView *glkview;
CGRect glkview_bounds;
}
- (IBAction)click:(id)sender;
#property (strong, nonatomic) EAGLContext *eaglcontext;
#property (strong, nonatomic) CIContext *cicontext;
#end
#implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.eaglcontext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
glkview = [[GLKView alloc] initWithFrame:[[UIScreen mainScreen] bounds] context:self.eaglcontext];
glkview.drawableDepthFormat = GLKViewDrawableDepthFormat24;
glkview.enableSetNeedsDisplay = YES;
glkview.delegate = self;
[self.view addSubview:glkview];
[glkview bindDrawable];
glkview_bounds = CGRectZero;
glkview_bounds.size.width = glkview.drawableWidth;
glkview_bounds.size.height = glkview.drawableHeight;
NSLog(#"glkview_bounds:%#", NSStringFromCGRect(glkview_bounds));
self.cicontext = [CIContext contextWithEAGLContext:self.eaglcontext options:#{kCIContextWorkingColorSpace : [NSNull null]} ];
UIAppDelegate.g_mediaUI.delegate = self;
}
- (IBAction)click:(id)sender {
[self presentViewController:UIAppDelegate.g_mediaUI animated:YES completion:nil];
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
[self dismissViewControllerAnimated:NO completion:nil];
UIImage *pre_img = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];
CIImage *outputImage = [CIImage imageWithCGImage:pre_img.CGImage];
NSLog(#"orient:%d, size:%#, scale:%f, extent:%#", (int)pre_img.imageOrientation, NSStringFromCGSize(pre_img.size), pre_img.scale,NSStringFromCGRect(outputImage.extent));
if (outputImage) {
if (self.eaglcontext != [EAGLContext currentContext])
[EAGLContext setCurrentContext:self.eaglcontext];
// clear eagl view to grey
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
// set the blend mode to "source over" so that CI will use that
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
[self.cicontext drawImage:outputImage inRect:glkview_bounds fromRect:[outputImage extent]];
[glkview display];
}
}
#end
What works: picked images with sizes such as 1390x1390 or 1440x1440 are displayed.
What doesn't work: picked images sized 2592x1936 are not displayed -- basically all large pictures taken with the camera.
Please help me find a solution; I'm stuck here...
OpenGL ES imposes a maximum texture size; depending on the iOS device it can be 2048x2048.
You can check the limit for the current device using the following code:
static GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
Older devices (before the iPhone 4S) all have a maximum texture size of 2048x2048. A5 devices and above (the iPhone 4S and later) have a maximum texture size of 4096x4096.
Source: Black texture with OpenGL when image is too big
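If you need to show those camera images anyway, one workaround (a sketch of my own, not from the linked answer; the scaling policy is an assumption) is to downsample the UIImage below the limit before wrapping it in a CIImage:
// Sketch: shrink pre_img so neither side exceeds the GL texture limit.
// glGetIntegerv needs the EAGL context to be current when this runs.
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
CGFloat longest = MAX(pre_img.size.width, pre_img.size.height);
if (longest > maxTextureSize) {
    CGFloat factor = maxTextureSize / longest;
    CGSize target = CGSizeMake(floor(pre_img.size.width * factor),
                               floor(pre_img.size.height * factor));
    UIGraphicsBeginImageContext(target);
    [pre_img drawInRect:CGRectMake(0, 0, target.width, target.height)];
    pre_img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
CIImage *outputImage = [CIImage imageWithCGImage:pre_img.CGImage];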

How do I make my variables accessible to other classes?

The variables bounds, width, and height are at present local variables; I cannot access them from other classes, or even from another method.
How can I make these variables available to the whole instance? I have tried placing them in the .h file and renaming them to CGFloats, to no avail.
#import "TicTacToeBoard.h"
#implementation TicTacToeBoard
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
}
return self;
}
- (void)drawRect:(CGRect)rect
{
CGRect bounds = [self bounds];
float width = bounds.size.width;
float height = bounds.size.height;
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(ctx, 0.3, 0.3, 0.3, 1);
CGContextSetLineWidth(ctx, 5);
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextMoveToPoint(ctx, width/3, height * 0.95);
CGContextAddLineToPoint(ctx, width/3, height * 0.05);
CGContextStrokePath(ctx);
}
#end
You can use properties to make variables accessible to other objects.
In your interface add something like this:
@property (nonatomic, retain) NSString *myString;
and then add
@synthesize myString;
to your implementation.
Two methods will be created, one to get the property and one to change it:
[myObject myString]; // returns the property
[myObject setMyString:@"new string"]; // changes the property
// alternatively, you can write it this way
myObject.myString;
myObject.myString = @"new string";
You can change the value of a property within a class using [self setMyString:@"new value"], or, if you have the same variable already declared in the interface and then create a property from it, you can keep using your variables within the class the way you are.
There's more info on properties in the developer docs: http://developer.apple.com/library/ios/#documentation/cocoa/conceptual/objectiveC/Chapters/ocProperties.html#//apple_ref/doc/uid/TP30001163-CH17-SW1
bounds, width and height are local variables that exist only in the context of the drawRect method.
Why don't you use:
CGRect bounds = [self bounds];
float width = bounds.size.width;
float height = bounds.size.height;
in other methods?
Make them member variables or properties and write accessors or synthesize them.
See the Objective-C language reference.
Use getters and setters, or generate them with:
@property (nonatomic) CGFloat width;
@synthesize width;

Passing position and color of square to UIView subview

I work on a project for iPhone iOS4 with Xcode.
From MainViewController I want to draw a little red square (fill only, not stroke) in MyView, a subclass of UIView. How can I pass position and RGB color of the square?
MyView.h (subclass of UIView)
@interface MyView : UIView {
    CGPoint position;  // OK
    CGFloat color[3];  // ???
}
@property CGPoint position; // OK
@property CGFloat color;    // ???
MyView.m
@synthesize position; // OK
@synthesize color;    // ???
MainViewController.h
MyView *myRect;
@property IBOutlet MyView *myRect;
MainViewController.m
@synthesize myRect;
- (void)viewDidLoad
myRect.position = CGPointMake(0, 0); // OK
myRect.color = CGFloat [] = {255, 0, 0, 1} // ???
I think I have no problems with the position of the square. But how can I pass the color of the square?
Thank you.
[myRect setBackgroundColor:[UIColor colorWithRed:color[0] green:color[1] blue:color[2] alpha:1.0]];
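(Note that UIColor components run from 0.0 to 1.0, not 0 to 255.) If you want the view to draw the square itself rather than color its whole background, a UIColor property is easier to pass around than a raw CGFloat array. A sketch (the 20x20 square size is just an example, not from your code):
// MyView.h -- expose the square's position and color as properties
@interface MyView : UIView
@property (nonatomic) CGPoint position;
@property (nonatomic, retain) UIColor *color;
@end

// MyView.m
@implementation MyView
@synthesize position, color;
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, self.color.CGColor);
    CGContextFillRect(ctx, CGRectMake(self.position.x, self.position.y, 20, 20));
}
@end

// MainViewController.m -- set both from the controller
myRect.position = CGPointMake(0, 0);
myRect.color = [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:1.0];
[myRect setNeedsDisplay]; // ask the view to redraw with the new values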

Inheriting UIView - Losing instance variables

I'm a bit of an Objective-C rookie. I've looked around for an answer but haven't been able to find one, so forgive me if this is an obvious question.
Basically, I need to draw segments of a circle on the screen (for instance, a 90 degree segment, where a horizontal and a vertical line meet at the bottom left and an arc connects the end points). I've managed to achieve this in a custom class called CircleSegment that inherits from UIView and overrides drawRect:.
My issue is achieving this programmatically; I need some way of creating my CircleSegment class and storing in it the desired angle before it draws the segment itself.
Here's what I have so far:
CircleSegment.h
#import <UIKit/UIKit.h>

@interface CircleSegment : UIView {
    float angleSize;
    UIColor *backColor;
}
- (float)convertDegToRad:(float)degrees;
- (float)convertRadToDeg:(float)radians;
@property (nonatomic) float angleSize;
@property (nonatomic, retain) UIColor *backColor;
@end
CircleSegment.m
#import "CircleSegment.h"
#implementation CircleSegment
#synthesize angleSize;
#synthesize backColor;
// INITIALISATION OVERRIDES
// ------------------------
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:frame])) {
self.opaque = NO;
self.backgroundColor = [UIColor clearColor];
}
return self;
}
- (void)setBackgroundColor:(UIColor *)newBGColor
{
// Ignore.
}
- (void)setOpaque:(BOOL)newIsOpaque
{
// Ignore.
}
// MATH FUNCTIONS
// --------------
// Converts degrees to radians.
-(float)convertDegToRad:(float)degrees {
return degrees * M_PI / 180;
}
// Converts radians to degrees.
-(float)convertRadToDeg:(float)radians {
return radians * 180 / M_PI;
}
// DRAWING CODE
// ------------
- (void)drawRect:(CGRect)rect {
float endAngle = 360 - angleSize;
UIBezierPath* aPath = [UIBezierPath bezierPathWithArcCenter:CGPointMake(100, 100)
radius:100
startAngle:[self convertDegToRad:270]
endAngle:[self convertDegToRad:endAngle]
clockwise:YES];
[aPath addLineToPoint:CGPointMake(100.0, 100.0)];
[aPath closePath];
CGContextRef aRef = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(aRef, backColor.CGColor);
CGContextSaveGState(aRef);
aPath.lineWidth = 1;
[aPath fill];
[aPath stroke];
//CGContextRestoreGState(aRef);
}
- (void)dealloc {
[super dealloc];
}
#end
Note that the .m file is a bit messy with various bits of test code...
So essentially I want to create an instance of CircleSegment, store an angle in the angleSize property, draw a segment based on that angle and then add that view to the main app view to display it...
To try and achieve that, I've added the following test code to viewDidLoad in my ViewController:
CircleSegment *seg1 = [[CircleSegment alloc] init];
seg1.backColor = [UIColor greenColor];
seg1.angleSize = 10;
[self.view addSubview:seg1];
It seems to store the UIColor and angleSize fine when I breakpoint those places; however, if I place a breakpoint in the drawRect: I overrode in CircleSegment.m, the values have reverted back to nil (or whatever the correct term would be; feel free to correct me).
I'd really appreciate it if someone could point me in the right direction!
Thanks
It seems to store the UIColor and angleSize fine when I breakpoint those places, however if I place a breakpoint in drawRect which I overrode on CircleSegment.m, the values have reverted back to nil values (or whatever the correct term would be, feel free to correct me).
Are you sure you are working with the same instance?
Drop in something like NSLog(@"%@ %p", NSStringFromSelector(_cmd), self); in viewDidLoad and in drawRect: and make sure you are actually dealing with the same instance.
(I've seen cases where a developer will alloc/initWithFrame: a view and then use the instance that was created during NIB loading, wondering why that instance wasn't properly initialized.)
Try using self.angleSize in your drawRect: method (same with backColor). In your test method, replace 10 with 10.0 (to ensure it's set as a float).
Also, don't forget to release backColor in your dealloc.
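One more thing to check (an assumption on my part, since the rest of your controller isn't shown): [[CircleSegment alloc] init] gives the view a zero-sized frame, so drawRect: may have no area to draw into. A sketch with an explicit frame:
// Sketch: give the segment a real frame so drawRect: has something to draw into
CircleSegment *seg1 = [[CircleSegment alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
seg1.backColor = [UIColor greenColor];
seg1.angleSize = 10.0;
[self.view addSubview:seg1];
[seg1 release]; // pre-ARC: the superview retains it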

My vector sprite renders in different locations in simulator and device

I'm implementing a subclass of UIView that displays a gauge dial with a sprite for the indicator. It has an angle property that I can vary to make the needle point to different angles. It works, but the same values for the position of the needle make it show up in different locations on the phone and in the simulator. It's an iPhone 4, so I'm sure the double-resolution thing is behind this, but I don't know what to do about it. I tried setting the UIView's layer's contentScaleFactor, but that fails. I thought UIView got the resolution thing for free. Any suggestions?
I should note that the NSLog statements report 150 for both .frame.size dimensions, in both the simulator and on the device.
Here's the .m file
UPDATE: In the simulator, I found how to set the hardware to iPhone 4, and it looks just like the device now; both are scaling and positioning the sprite at half size.
UPDATE 2: I made a workaround. I set the .scale of my sprite equal to the UIView's contentScaleFactor, and then use it to divide the UIView's width in half if it's a lo-res screen, or use the full width if it's hi-res. I still don't see why this is necessary, as I should be working in points now, not pixels. It must have something to do with the custom drawing code in the Sprite or VectorSprite classes.
I'd still appreciate some feedback if anyone has some...
#import "GaugeView.h"
#implementation GaugeView
#synthesize needle;
#define kVectorArtCount 4
static CGFloat kVectorArt[] = {
3,-4,
2,55,
-2,55,
-3,-4
};
- (id)initWithCoder:(NSCoder *)coder {
if (self = [super initWithCoder:coder]) {
needle = [VectorSprite withPoints:kVectorArt count:kVectorArtCount];
needle.scale = (float)self.contentScaleFactor; // returns 1 for lo-res, 2 for hi-res
NSLog(#" needle.scale = %1.1f", needle.scale);
needle.x = self.frame.size.width / ((float)(-self.contentScaleFactor) + 3.0); // divisor = 1 for hi-res, 2 for lo-res
NSLog(#" needle.x = %1.1f", needle.x);
needle.y = self.frame.size.height / ((float)(-self.contentScaleFactor) + 3.0);
NSLog(#" needle.y = %1.1f", needle.y);
needle.r = 0.0;
needle.g = 0.0;
needle.b = 0.0;
needle.alpha = 1.0; }
}
self.backgroundColor = [UIColor clearColor];
return self;
}
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:frame])) {
// Initialization code
}
return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
// Drawing code
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGAffineTransform t0 = CGContextGetCTM(context);
t0 = CGAffineTransformInvert(t0);
CGContextConcatCTM(context, t0);
[needle updateBox];
[needle draw: context];
}
- (void)dealloc {
[needle release];
[super dealloc];
}
#end
I believe the answer is that iOS takes care of the resolution scaling automatically in drawRect methods, but in custom drawing code, you have to do it yourself.
In my example, I used the UIView's contentsScaleFactor to scale my sprite. In the future, in my custom draw method (not shown) I'll query [UIScreen mainScreen] scale and scale accordingly there.
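For the record, that query looks roughly like this (a sketch; Sprite is the poster's own class, so how the factor gets applied depends on its drawing code):
// Sketch: ask the screen for its scale and fold it into the custom drawing
CGFloat screenScale = [[UIScreen mainScreen] scale]; // 1.0 on lo-res, 2.0 on Retina
needle.scale = screenScale; // the sprite's draw method then multiplies its geometry by this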