How to render UIImage (picked from photo library) into CIContext / GLKView?

After researching and trying many things, I have finally decided to ask SO:
Basically, I would like to pick a photo and then render it through a CIContext. I know there are many other image rendering techniques available (e.g. using a UIImageView), but I have to use low-level OpenGL ES rendering.
Please check my full code (except for the preloading of the UIImagePickerController in the AppDelegate):
#import "ViewController.h"
#import "AppDelegate.h"
#import <GLKit/GLKit.h>
@interface ViewController () <GLKViewDelegate, UINavigationControllerDelegate, UIImagePickerControllerDelegate> {
GLKView *glkview;
CGRect glkview_bounds;
}
- (IBAction)click:(id)sender;
@property (strong, nonatomic) EAGLContext *eaglcontext;
@property (strong, nonatomic) CIContext *cicontext;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.eaglcontext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
glkview = [[GLKView alloc] initWithFrame:[[UIScreen mainScreen] bounds] context:self.eaglcontext];
glkview.drawableDepthFormat = GLKViewDrawableDepthFormat24;
glkview.enableSetNeedsDisplay = YES;
glkview.delegate = self;
[self.view addSubview:glkview];
[glkview bindDrawable];
glkview_bounds = CGRectZero;
glkview_bounds.size.width = glkview.drawableWidth;
glkview_bounds.size.height = glkview.drawableHeight;
NSLog(#"glkview_bounds:%#", NSStringFromCGRect(glkview_bounds));
self.cicontext = [CIContext contextWithEAGLContext:self.eaglcontext options:#{kCIContextWorkingColorSpace : [NSNull null]} ];
UIAppDelegate.g_mediaUI.delegate = self;
}
- (IBAction)click:(id)sender {
[self presentViewController:UIAppDelegate.g_mediaUI animated:YES completion:nil];
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
[self dismissViewControllerAnimated:NO completion:nil];
UIImage *pre_img = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];
CIImage *outputImage = [CIImage imageWithCGImage:pre_img.CGImage];
NSLog(#"orient:%d, size:%#, scale:%f, extent:%#", (int)pre_img.imageOrientation, NSStringFromCGSize(pre_img.size), pre_img.scale,NSStringFromCGRect(outputImage.extent));
if (outputImage) {
if (self.eaglcontext != [EAGLContext currentContext])
[EAGLContext setCurrentContext:self.eaglcontext];
// clear eagl view to grey
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
// set the blend mode to "source over" so that CI will use that
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
[self.cicontext drawImage:outputImage inRect:glkview_bounds fromRect:[outputImage extent]];
[glkview display];
}
}
@end
What works: picked images with sizes of e.g. 1390x1390 or 1440x1440 are displayed.
What doesn't work: picked images with a size of 2592x1936 are not displayed; basically, all large pictures taken with the camera fail.
Please help me find the solution, I'm stuck here...

OpenGL ES has a maximum texture size; depending on the iOS device it can be 2048x2048.
You can check the max texture limit for the device using the following code:
static GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
Older devices (before iPhone 4S) all have a maximum texture size of 2048x2048. A5 and above (after and including iPhone 4S) have a maximum 4096x4096 texture size.
Source: Black texture with OpenGL when image is too big
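If you still want to display full-resolution camera photos through the CIContext, one option is to scale the CIImage down so neither dimension exceeds the reported limit before drawing it. This is only a minimal sketch and not part of the original answer; the helper name is made up, and it assumes the EAGLContext is current when glGetIntegerv is called:
// Hypothetical helper: scale a CIImage down so neither side exceeds the
// device's maximum OpenGL ES texture size.
- (CIImage *)imageConstrainedToMaxTextureSize:(CIImage *)image {
    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize); // needs a current EAGLContext

    CGRect extent = [image extent];
    CGFloat longestSide = MAX(extent.size.width, extent.size.height);
    if (longestSide <= (CGFloat)maxTextureSize) {
        return image; // already small enough, use as-is
    }

    CGFloat scale = (CGFloat)maxTextureSize / longestSide;
    return [image imageByApplyingTransform:CGAffineTransformMakeScale(scale, scale)];
}
You would call this on outputImage right before the drawImage:inRect:fromRect: call in the code above.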

Related

Screen capture with three embedded controllers where one is a UIImagePicker in video mode?

I need to perform a live screen capture of the entire iPhone screen. The screen has three container views embedded. One of these containers is a UIImagePickerController. Everything on the screen captures beautifully, but the one container that has the UIImagePickerController is black. I need the whole screen capture so the continuity of operation looks seamless. Is there a way to capture what is currently shown on the screen from a UIImagePickerController? Below is the code I am using to capture the screen image.
I have also tried Apple's Technical Q&A QA1703.
UIGraphicsBeginImageContextWithOptions(myView.bounds.size, YES, 0.0f);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Thanks in advance for any help!
I've had a similar problem before when trying to capture a screenshot containing both a GLKView and a UIImagePickerController. Sometimes I would get a black screen, other times I would get complaints about an invalid context (when using code similar to yours). I couldn't find a solution, so instead I implemented an AVFoundation camera and haven't looked back since. Here's some quick source code to help you out.
ViewController.h
// Frameworks
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>
@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
// Camera
@property (strong, nonatomic) AVCaptureSession* captureSession;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer;
@property (strong, nonatomic) UIImage* cameraImage;
@end
ViewController.m
#import "CameraViewController.h"
@implementation CameraViewController
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupCamera];
}
- (void)setupCamera
{
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
output.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue;
queue = dispatch_queue_create("cameraQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
NSString* key = (NSString *) kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[output setVideoSettings:videoSettings];
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession addInput:input];
[self.captureSession addOutput:output];
[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
// CHECK FOR YOUR APP
self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width);
self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight;
// CHECK FOR YOUR APP
[self.view.layer insertSublayer:self.previewLayer atIndex:0];
[self.captureSession startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
self.cameraImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}
// Call whenever you need a snapshot
- (UIImage *)snapshot
{
NSLog(#"SNAPSHOT");
return self.cameraImage;
}
@end
This code captures the input image according to the selected preset (in this case, photo: 852x640), so if you want to capture it along with the view, I would recommend one of the following options:
Scale, crop, and translate your image after capture. Pros: Camera still runs smoothly. Cons: More code
Add a UIImageView instead of the previewLayer, which updates its image in the captureOutput delegate. Pros: WYSIWYG. Cons: May cause your camera to run slower.
In both cases above you would need to merge the resulting capture with your other images after taking a screenshot (not as hard as it sounds; a rough sketch follows below).
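A minimal sketch of that merge step, assuming you already have the camera snapshot and a screenshot of the rest of your UI as UIImages (the method name and rects are illustrative, not taken from the code above):
// Hypothetical helper: composite the camera snapshot underneath a screenshot
// of the rest of the UI (which should be transparent where the preview sits).
- (UIImage *)mergeCameraImage:(UIImage *)cameraImage withScreenshot:(UIImage *)screenshot
{
    UIGraphicsBeginImageContextWithOptions(screenshot.size, NO, 0.0f);
    CGRect bounds = CGRectMake(0, 0, screenshot.size.width, screenshot.size.height);

    // Camera frame first, stretched to the screenshot bounds (adjust as needed).
    [cameraImage drawInRect:bounds];
    // UI screenshot on top of it.
    [screenshot drawInRect:bounds];

    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}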
AVFoundation and its associated frameworks are quite daunting, so this is a very lean implementation to get what you're after. If you want more details please check the following examples:
iOS 4 and direct access to the camera
Screenshots - a legal way to get screenshots
Hope that helps!

graphic context of rotated layers ignores any transformations

I'm composing a view out of several layers, some of them rotated. The display of these layers works perfectly; however, when I try to make an image of them, all rotations/transformations are ignored.
For example if I have a CALayer containing an image of an arrow pointing to the left, after a rotation around the z axis (using CATransform3DMakeRotation) it points to the top, and the layer is correctly displayed. However if I get an image of this layer (using renderInContext), the arrow still points to the left, ignoring any transformations.
Does anyone have any idea why, and what do I have to do to get the result I want? :-)
Example code:
// ViewController.m
#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>
// Don't forget to add the QuartzCore framework to your target.
@interface ViewController ()
@end
@implementation ViewController
- (void)loadView {
self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
self.view.backgroundColor = [UIColor lightGrayColor];
}
- (void)viewDidLoad {
[super viewDidLoad];
// Get the arrow image (tip to the left, 50 x 50 pixels).
// (Download the image from here: http://www.4shared.com/photo/uQfor7vC/arrow.html )
UIImage *arrowImage = [UIImage imageNamed:@"arrow"];
// The arrow layer in its starting orientation (tip to the left).
CGRect layerFrame = CGRectMake(50.0, 50.0, arrowImage.size.width, arrowImage.size.height);
CALayer *arrowLayerStart = [CALayer new];
arrowLayerStart.frame = layerFrame;
[arrowLayerStart setContents:(id)[arrowImage CGImage]];
[self.view.layer addSublayer:arrowLayerStart];
// The arrow layer rotated around the z axis by 90 degrees.
layerFrame.origin = CGPointMake(layerFrame.origin.x + layerFrame.size.width + 25.0, layerFrame.origin.y);
CALayer *arrowLayerZ90 = [CALayer new];
arrowLayerZ90.frame = layerFrame;
[arrowLayerZ90 setContents:(id)[arrowImage CGImage]];
arrowLayerZ90.transform = CATransform3DMakeRotation(M_PI/2.0, 0.0, 0.0, 1.0);
[self.view.layer addSublayer:arrowLayerZ90];
// Now make images of each of these layers and display them in a second row.
// The starting layer without any rotation.
UIGraphicsBeginImageContext(arrowLayerStart.frame.size);
[arrowLayerStart renderInContext:UIGraphicsGetCurrentContext()];
UIImage *arrowStartImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
layerFrame.origin = CGPointMake(arrowLayerStart.frame.origin.x,
arrowLayerStart.frame.origin.y + arrowLayerStart.frame.size.height + 25.0);
CALayer *arrowLayerStartCaptured = [CALayer new];
arrowLayerStartCaptured.frame = layerFrame;
[arrowLayerStartCaptured setContents:(id)[arrowStartImage CGImage]];
[self.view.layer addSublayer:arrowLayerStartCaptured];
// The second layer, rotated around the z axis by 90 degrees.
UIGraphicsBeginImageContext(arrowLayerZ90.frame.size);
[arrowLayerZ90 renderInContext:UIGraphicsGetCurrentContext()];
UIImage *arrowZ90Image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
layerFrame.origin = CGPointMake(arrowLayerZ90.frame.origin.x,
arrowLayerZ90.frame.origin.y + arrowLayerZ90.frame.size.height + 25.0);
CALayer *arrowLayerZ90Captured = [CALayer new];
arrowLayerZ90Captured.frame = layerFrame;
[arrowLayerZ90Captured setContents:(id)[arrowZ90Image CGImage]];
[self.view.layer addSublayer:arrowLayerZ90Captured];
}
@end
// ViewController.h
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@end
// AppDelegate.h
#import <UIKit/UIKit.h>
@interface AppDelegate : UIResponder <UIApplicationDelegate>
@property (strong, nonatomic) UIWindow *window;
@end
// AppDelegate.m
#import "AppDelegate.h"
#import "ViewController.h"
@implementation AppDelegate
@synthesize window = _window;
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
self.window.rootViewController = [[ViewController alloc] init];
[self.window makeKeyAndVisible];
return YES;
}
@end
The documentation for -[CALayer renderInContext:] says:
The Mac OS X v10.5 implementation of this method does not
support the entire Core Animation composition model.
QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not
rendered. Additionally, layers that use 3D transforms are not
rendered, nor are layers that specify backgroundFilters, filters,
compositingFilter, or a mask values. Future versions of Mac OS X may
add support for rendering these layers and properties.
(This limitation is present on iOS, too.)
The header CALayer.h also says:
* WARNING: currently this method does not implement the full
* CoreAnimation composition model, use with caution. */
Essentially, -renderInContext: is useful in a few simple cases, but will not work as expected in more complicated situations. You will need to find some other way to do your drawing. Quartz (Core Graphics) will work for 2D, but for 3D you're on your own.
You can use Core Graphics instead of CATransform3DMakeRotation :)
CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
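A minimal sketch of that idea for the arrow example, assuming the rotation is purely 2D (around the z axis) so it can be expressed as a CGAffineTransform and applied to the context manually before renderInContext: (the variable names reuse the ones from the question; none of this is from the original answers):
// Capture the rotated layer into a UIImage, applying its 2D transform by hand,
// since renderInContext: ignores the layer's transform property.
UIGraphicsBeginImageContextWithOptions(arrowLayerZ90.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Rotate around the layer's midpoint: move the origin to the center,
// concatenate the layer's affine transform, then move back.
CGContextTranslateCTM(ctx, arrowLayerZ90.bounds.size.width / 2.0, arrowLayerZ90.bounds.size.height / 2.0);
CGContextConcatCTM(ctx, [arrowLayerZ90 affineTransform]);
CGContextTranslateCTM(ctx, -arrowLayerZ90.bounds.size.width / 2.0, -arrowLayerZ90.bounds.size.height / 2.0);

[arrowLayerZ90 renderInContext:ctx];
UIImage *rotatedCapture = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();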

Problem with pdf while opening in UIWebView

I have a problem when opening a PDF in a UIWebView. Zooming in and out doesn't work, and even a double tap doesn't enlarge the PDF's font size.
Is there any way to do that?
If not, can anyone share some code?
#import <UIKit/UIKit.h>
@interface TiledPDFView : UIView {
CGPDFPageRef pdfPage;
CGFloat myScale;
}
- (id)initWithFrame:(CGRect)frame andScale:(CGFloat)scale;
- (void)setPage:(CGPDFPageRef)newPage;
@end
#import "TiledPDFView.h"
#import <QuartzCore/QuartzCore.h>
@implementation TiledPDFView
// Create a new TiledPDFView with the desired frame and scale.
- (id)initWithFrame:(CGRect)frame andScale:(CGFloat)scale{
if ((self = [super initWithFrame:frame])) {
CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
tiledLayer.levelsOfDetail = 4;
tiledLayer.levelsOfDetailBias = 4;
tiledLayer.tileSize = CGSizeMake(512.0, 512.0);
myScale = scale;
}
return self;
}
// Set the layer's class to be CATiledLayer.
+ (Class)layerClass {
return [CATiledLayer class];
}
// Set the CGPDFPageRef for the view.
- (void)setPage:(CGPDFPageRef)newPage
{
CGPDFPageRelease(self->pdfPage);
self->pdfPage = CGPDFPageRetain(newPage);
}
-(void)drawRect:(CGRect)r
{
// UIView uses the existence of -drawRect: to determine if it should allow its CALayer
// to be invalidated, which would then lead to the layer creating a backing store and
// -drawLayer:inContext: being called.
// By implementing an empty -drawRect: method, we allow UIKit to continue to implement
// this logic, while doing our real drawing work inside of -drawLayer:inContext:
}
// Draw the CGPDFPageRef into the layer at the correct scale.
-(void)drawLayer:(CALayer*)layer inContext:(CGContextRef)context
{
// First fill the background with white.
CGContextSetRGBFillColor(context, 1.0,1.0,1.0,0.5);
CGContextFillRect(context,self.bounds);
CGContextSaveGState(context);
// Flip the context so that the PDF page is rendered
// right side up.
CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
// Scale the context so that the PDF page is rendered
// at the correct size for the zoom level.
CGContextScaleCTM(context, myScale,myScale);
CGContextDrawPDFPage(context, pdfPage);
CGContextRestoreGState(context);
}
// Clean up.
- (void)dealloc {
CGPDFPageRelease(pdfPage);
[super dealloc];
}
@end
Add this to your project...
Hope it helped....
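The class above only handles re-rendering the page crisply at different zoom levels; to actually get pinch-zooming you would typically host it inside a UIScrollView. A minimal sketch, assuming ARC, a view controller that conforms to UIScrollViewDelegate, and hypothetical scrollView / pdfView properties plus an existing CGPDFPageRef (none of this is from the answer above):
// Somewhere in the hosting view controller:
- (void)showPDFPage:(CGPDFPageRef)page
{
    CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

    TiledPDFView *pdfView = [[TiledPDFView alloc] initWithFrame:pageRect andScale:1.0];
    [pdfView setPage:page];

    [self.scrollView addSubview:pdfView];
    self.scrollView.contentSize = pageRect.size;
    self.scrollView.minimumZoomScale = 0.5;
    self.scrollView.maximumZoomScale = 4.0;
    self.scrollView.delegate = self;

    self.pdfView = pdfView; // keep a reference for the zooming delegate below
}

// UIScrollViewDelegate
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.pdfView;
}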
Make sure that «Multiple Touch» is enabled for the UIWebView.

quartz masks in iOS -- do they still cause crashes?

According to this question from 2008, using quartz masks can cause crashes! Is that still the case?
Basically, what I want to do is to draw dice of different colors on a fixed background, using one png for each die shape (there are a lot of them), and somehow add the colors in code.
EDIT: to clarify, for example I want to use one png file to make all of the following:
Basically, I want to multiply the red, green, and blue components of my image by three independent constants, while leaving the alpha unchanged.
Here's a shot. Tested, no leaks. No crashes.
.h
#import <UIKit/UIKit.h>
@interface ImageModViewController : UIViewController {
}
@property (nonatomic, retain) IBOutlet UIButton *test_button;
@property (nonatomic, retain) IBOutlet UIImageView *source_image;
@property (nonatomic, retain) IBOutlet UIImageView *destination_image;
@property (nonatomic, assign) float kr;
@property (nonatomic, assign) float kg;
@property (nonatomic, assign) float kb;
@property (nonatomic, assign) float ka;
-(IBAction)touched_test_button:(id)sender;
-(UIImage *) MultiplyImagePixelsByRGBA:(UIImage *)source kr:(float)red_k kg:(float)green_k kb:(float)blue_k ka:(float)alpha_k;
@end
.m
#define BITS_PER_WORD 32
#define BITS_PER_CHANNEL 8
#define COLOR_CHANNELS 4
#define BYTES_PER_PIXEL BITS_PER_WORD / BITS_PER_CHANNEL
#import "ImageModViewController.h"
@implementation ImageModViewController
@synthesize test_button;
@synthesize source_image;
@synthesize destination_image;
@synthesize kr;
@synthesize kg;
@synthesize kb;
@synthesize ka;
-(IBAction)touched_test_button:(id)sender
{
// Setup coefficients
kr = 1.0;
kg = 0.0;
kb = 0.0;
ka = 1.0;
// Set UIImageView image to the result of multiplying the pixels by the coefficients
destination_image.image = [self MultiplyImagePixelsByRGBA:source_image.image kr:kr kg:kg kb:kb ka:ka];
}
-(UIImage *) MultiplyImagePixelsByRGBA:(UIImage *)source kr:(float)red_k kg:(float)green_k kb:(float)blue_k ka:(float)alpha_k
{
// Get image information
CGImageRef bitmap = [source CGImage];
int width = source.size.width;
int height = source.size.height;
int total_pixels = width * height;
// Allocate a buffer
unsigned char *buffer = malloc(total_pixels * COLOR_CHANNELS);
// Copy image data to buffer
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(buffer, width, height, BITS_PER_CHANNEL, width * BYTES_PER_PIXEL, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), bitmap);
CGContextRelease(context);
// Clamp the coefficients actually used below to the [0.0, 1.0] range
red_k = MAX(0.0f, MIN(1.0f, red_k));
green_k = MAX(0.0f, MIN(1.0f, green_k));
blue_k = MAX(0.0f, MIN(1.0f, blue_k));
alpha_k = MAX(0.0f, MIN(1.0f, alpha_k));
// Process the image in the buffer
int offset = 0; // Used to index into the buffer
for (int i = 0 ; i < total_pixels; i++)
{
buffer[offset] = (char)(buffer[offset] * red_k); offset++;
buffer[offset] = (char)(buffer[offset] * green_k); offset++;
buffer[offset] = (char)(buffer[offset] * blue_k); offset++;
buffer[offset] = (char)(buffer[offset] * alpha_k); offset++;
}
// Put the image back into a UIImage
context = CGBitmapContextCreate(buffer, width, height, BITS_PER_CHANNEL, width * BYTES_PER_PIXEL, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
CGColorSpaceRelease(cs); // release the color space only after both bitmap contexts have been created with it
bitmap = CGBitmapContextCreateImage(context);
UIImage *output = [UIImage imageWithCGImage:bitmap];
CGImageRelease(bitmap); // CGBitmapContextCreateImage returns a +1 image that must be released
CGContextRelease(context);
free(buffer);
return output;
}
- (void)dealloc
{
[super dealloc];
}
- (void)didReceiveMemoryWarning
{
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];
// Release any cached data, images, etc that aren't in use.
}
#pragma mark - View lifecycle
/*
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad
{
[super viewDidLoad];
}
*/
- (void)viewDidUnload
{
[super viewDidUnload];
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
}
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
// Return YES for supported orientations
return (interfaceOrientation == UIInterfaceOrientationPortrait);
}
@end
I set up the xib with two UIImageViews and one UIButton. The top UIImageView was pre-loaded with an image in Interface Builder. Touch the test button and the image is processed and set to the second UIImageView.
BTW, I had a little trouble with your icons copied right off your post. The transparency didn't work very well for some reason. I used fresh PNG test images of my own created in Photoshop with and without transparency and it all worked as advertised.
What you do inside the loop is to be modified per your needs, of course.
Watch endianness, it can really mess things up!
I've used masks fairly extensively recently in an iPhone app with no crashes. The code in that link doesn't even seem to be using masks, just clipping; the only mention of masks was as something else he tried. More likely he was calling that from a background thread; UIGraphicsBeginImageContext isn't thread-safe.
Without knowing exactly what effect you're trying to get, it's hard to give advice on how to do it. A mask certainly could work, either alone (to get a sort of silkscreened effect) or to clip an overlay color drawn on a more realistic image. I'd probably use a mask or a path to set the clipping, then draw the die image (using kCGBlendModeNormal or kCGBlendModeCopy), and then paint the appropriate solid color over it using kCGBlendModeColor.
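A minimal sketch of that last approach, assuming the die artwork is a UIImage with transparency and you want to tint it with a solid UIColor (the method name is made up for illustration):
// Hypothetical helper: tint a die image with a solid color while keeping
// the shading of the original artwork.
- (UIImage *)tintedImage:(UIImage *)image withColor:(UIColor *)color
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Core Graphics coordinates are flipped relative to UIKit.
    CGContextTranslateCTM(ctx, 0, image.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    // Clip to the image's alpha so the tint only covers the die itself.
    CGContextClipToMask(ctx, rect, image.CGImage);

    // Draw the original artwork first.
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);
    CGContextDrawImage(ctx, rect, image.CGImage);

    // Then paint the tint over it with the "color" blend mode.
    CGContextSetBlendMode(ctx, kCGBlendModeColor);
    [color setFill];
    CGContextFillRect(ctx, rect);

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}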

Effect like Interface Builder connection lines on iPhone

Hi, I'd like a little bit of help on something. In my app, I have a UITableView which is populated with custom cells that represent images (i.e. selecting a row displays a picture in an image view).
What I would like to do is have an icon in the custom cell that I could drag to one of a series of image views, similar to the way you can drag a line in IB to set connections. Once the user releases their finger, it will check what part of the screen they released it on, and if it is on one of the rects that represent the picture frames, it will populate the picture frame with the image and the line will disappear.
I have never drawn lines in my app before, so that's not something I know how to do (so I'm just looking for a link to a tutorial or class definition). Second, what problems will I have, given that the start point of the line is in a UITableViewCell and the end point is in the main view?
I have actually done this before, so I can give you the exact code :D
This is only the drawing part; you implement the touch events yourself (in a separate class, otherwise remove the self.userInteractionEnabled = NO; line in the .m file). A rough sketch of the touch handling is included after the code below.
The BOOL dragged tells the LineView if the line should be drawn.
LineView.h
#import <UIKit/UIKit.h>
@interface LineView : UIView
{
CGPoint start;
CGPoint end;
BOOL dragged;
}
@property CGPoint start, end;
@property BOOL dragged;
@end
LineView.m
#import "LineView.h"
@implementation LineView
@synthesize start, end, dragged;
- (id)initWithFrame:(CGRect)frame
{
if (self = [super initWithFrame: frame])
{
self.userInteractionEnabled = NO;
self.backgroundColor = [UIColor clearColor];
}
return self;
}
#pragma mark Setters
-(void) setStart: (CGPoint) s
{
start = s;
[self setNeedsDisplay];
}
-(void) setEnd: (CGPoint) e
{
end = e;
[self setNeedsDisplay];
}
#define LINE_COLOR [UIColor redColor]
#define CIRCLE_COLOR [UIColor redColor]
#define LINE_WIDTH 5
#define CIRCLE_RADIUS 10
- (void)drawRect:(CGRect)rect
{
CGContextRef c = UIGraphicsGetCurrentContext();
if(dragged) {
[LINE_COLOR setStroke];
CGContextMoveToPoint(c, start.x, start.y);
CGContextAddLineToPoint(c, end.x, end.y);
CGContextSetLineWidth(c, LINE_WIDTH);
CGContextClosePath(c);
CGContextStrokePath(c);
[CIRCLE_COLOR setFill];
CGContextAddArc(c, start.x, start.y, CIRCLE_RADIUS, 0, M_PI*2, YES);
CGContextClosePath(c);
CGContextFillPath(c);
CGContextAddArc(c, end.x, end.y, CIRCLE_RADIUS, 0, M_PI*2, YES);
CGContextClosePath(c);
CGContextFillPath(c);
}
}
- (void)dealloc
{
[super dealloc];
}
@end
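As mentioned above, LineView only draws; the touch handling lives elsewhere. A minimal sketch of it, for example in the view controller (or overlay view) that owns both the table and the image views; the lineView property and the hit-testing are illustrative, not part of the code above:
// lineView is a LineView laid over the whole screen.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.view];
    self.lineView.start = p;
    self.lineView.end = p;
    self.lineView.dragged = YES;
    [self.lineView setNeedsDisplay];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // The setEnd: setter already calls setNeedsDisplay, so the line follows the finger.
    self.lineView.end = [[touches anyObject] locationInView:self.view];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint dropPoint = [[touches anyObject] locationInView:self.view];
    self.lineView.dragged = NO;
    [self.lineView setNeedsDisplay];

    // Hit-test the drop point against your picture-frame image views here,
    // e.g. with CGRectContainsPoint(imageView.frame, dropPoint).
}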