How to create a line over UIButtons in iOS? - iphone

I have 8 UIButtons, and I want to draw a line over these buttons.
I have tried, and I am able to draw a line. Please check what I am doing.
MyDrawingView.h
#import <UIKit/UIKit.h>
@interface MyDrawingView : UIView{
CGPoint fromPoint;
CGPoint toPoint;
UIColor* currentColor;
UIImage *backgroundImage;
}
@property CGPoint fromPoint;
@property CGPoint toPoint;
@property(nonatomic,retain) UIColor* currentColor;
@end
MyDrawingView.m
#import "MyDrawingView.h"
@implementation MyDrawingView
@synthesize fromPoint;
@synthesize toPoint;
@synthesize currentColor;
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
backgroundImage = [UIImage imageNamed:@"office.jpeg"];
}
return self;
}
- (void)drawRect:(CGRect)rect {
//CGPoint drawingTargetPoint = CGPointMake(100,0);
//[backgroundImage drawAtPoint:drawingTargetPoint];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context,10);
CGContextSetStrokeColorWithColor(context, currentColor.CGColor);
CGContextMoveToPoint(context,fromPoint.x , fromPoint.y);
CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
CGContextStrokePath(context);
}
- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
UITouch* touchPoint = [touches anyObject];
fromPoint = [touchPoint locationInView:self];
toPoint = [touchPoint locationInView:self];
[self setNeedsDisplay];
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch* touch = [touches anyObject];
toPoint=[touch locationInView:self];
[self setNeedsDisplay];
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch* touch = [touches anyObject];
toPoint = [touch locationInView:self];
[self setNeedsDisplay];
}
Now, in another class, I create the 8 buttons in viewDidLoad; please check what I am doing.
-(void)viewDidLoad{
// create the 8 UIButtons here, laid out in one line with a 10-pixel gap in the x-position after every button (see the sketch after this code)
[self method_Call_drawView];
}
-(void)method_Call_drawView{
v = [[MyDrawingView alloc]initWithFrame:CGRectMake(100,350,400,400)];
v.backgroundColor = [UIColor whiteColor];
[v setNeedsDisplay];
[self.view addSubview:v];
}
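For reference, here is a minimal sketch of the button layout described in viewDidLoad above (the method name, button size, origin, and titles are assumptions, not taken from the question):
-(void)createButtons{
    for (int i = 0; i < 8; i++) {
        UIButton *btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        // 10-pixel gap in the x-position after every button
        btn.frame = CGRectMake(100 + i * (60 + 10), 300, 60, 60);
        [btn setTitle:[NSString stringWithFormat:@"%d", i + 1] forState:UIControlStateNormal];
        [self.view addSubview:btn];
    }
}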
.h file
#import <UIKit/UIKit.h>
@class MyDrawingView;
@interface iPad_Level1 : UIViewController{
UIButton *btn1;
UIButton *btn2;
UIButton *btn3;
UIButton *btn4;
UIButton *btn5;
UIButton *btn6;
UIButton *btn7;
UIButton *btn8;
MyDrawingView *v;
}
@property(nonatomic,retain)MyDrawingView *v;
@end
I am able to create the view and draw a line on it, but I am not able to touch the UIButtons and draw a line over them.
My line grows and shrinks wherever I click, but I want this line to sit on my UIButtons.
Please check what I am trying to do: itunes.apple.com/in/app/word-search-puzzles/id609067187?mt=8 - see the yellow and blue lines on the buttons.
Any idea or suggestion would be highly welcome.

You can add a 1 px high subview in front of the button you want.
UIView *lineView = [[UIView alloc] initWithFrame:CGRectMake(0, 200, self.view.bounds.size.width, 1)]; // change the 200 to fit your screen
lineView.backgroundColor = [UIColor blackColor];
[self.view addSubview:lineView];
And to move this line when the user taps a button, use:
[UIView animateWithDuration:0.5
delay:0.5
options:UIViewAnimationOptionCurveEaseOut
animations:^
{
CGRect frame = lineView.frame;
frame.origin.y = 100; // for move
frame.size.width = 50; // for re-size
lineView.frame = frame;
}
completion:^(BOOL finished)
{
NSLog(@"Completed");
}
];
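A hedged sketch of how the line could be positioned over whichever button was tapped; this assumes lineView is kept in an ivar or property and that each button's action is wired to a method like this, none of which is in the original answer:
-(IBAction)buttonTapped:(UIButton *)sender{
    [UIView animateWithDuration:0.3 animations:^{
        // convert the button's frame into this view's coordinate space
        CGRect buttonFrame = [self.view convertRect:sender.frame fromView:sender.superview];
        // lay the 1 px line across the vertical middle of the tapped button
        lineView.frame = CGRectMake(buttonFrame.origin.x, CGRectGetMidY(buttonFrame), buttonFrame.size.width, 1);
    }];
}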

#import <QuartzCore/CAGradientLayer.h>
Import this and add the QuartzCore framework to the project:
-(IBAction)btnTapped:(id)sender{
UIButton *m_btnTemp=((UIButton *)sender);
m_btnTemp.layer.borderColor=[UIColor blackColor].CGColor;
m_btnTemp.layer.borderWidth= 1.0f;
}
When you want to remove the border, set borderWidth = 0, for example:
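A sketch, assuming the same kind of button action as above:
-(IBAction)btnUntapped:(id)sender{
    // hiding the border is enough; borderColor can stay set
    ((UIButton *)sender).layer.borderWidth = 0.0f;
}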

Related

Image dragging on a UIScrollView

A UIViewController is called when a user touches an image on a UITableViewCell.
It is presented as a modal view controller.
On this modal view controller is a UIScrollView, with a UIImageView in the middle that fetches an image from an NSDictionary (passed from the UITableViewCell).
Now, what I wanted to achieve was the ability for the user to drag the image vertically only, such that dragging and releasing a little would cause the image to return to the center with animation. If the user drags it to the extremes, the entire UIScrollView is dismissed and the user returns to the UITableView. I used the following code. The issue here is, and as my name suggests, this code is crude. Is there an elegant way of doing this, without the need of so much calculation?
BOOL imageMoved=NO;
- (void) touchesMoved:(NSSet *)touches
withEvent:(UIEvent *)event {
UITouch * touch = [touches anyObject];
CGPoint pos = [touch locationInView: [UIApplication sharedApplication].keyWindow];
CGRect imageRect = _photoImageView.frame;//_photoImageView object of UIImageView
CGFloat imageHeight = imageRect.size.height;//getting height of the image
CGFloat imageTop=240-imageHeight/2;
CGFloat imageBottom=imageTop+imageHeight;//setting boundaries for getting coordinates of touch.
// Touches that originate above imageTop and below imageBottom are ignored(until touch reaches UIImageView)
if (pos.y>50&&pos.y<430&&pos.y>=imageTop&&pos.y<=imageBottom){//extremes are defined as top and bottom 50 pixels.
imageMoved=YES;//BOOL variable to hold if the image was dragged or not
NSLog(@"%f", pos.y);
[UIView setAnimationDelay:0];
[UIView animateWithDuration:0.4
animations:^{_photoImageView.frame=CGRectMake(0,pos.y-imageHeight/2,320,200);}
completion:^(BOOL finished){ }];
}
}
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch * touch = [touches anyObject];
CGPoint pos = [touch locationInView: [UIApplication sharedApplication].keyWindow];
if (pos.y>50&&pos.y<430){//if touch has NOT ended in the extreme 50 pixels vertically
[UIView setAnimationDelay:0];//restoring UIImageView to original center with animation
[UIView animateWithDuration:0.4
animations:^{_photoImageView.frame=CGRectMake(0,140,320,200);}
completion:^(BOOL finished){ }];
imageMoved=NO;//prepare BOOL value for next touch event
}
else if(pos.y<50||pos.y>430)
if(imageMoved)
[self.photoBrowser exit] ;//exit function(imported from UIScrollViewController) dismisses
//modalView using [self dismissViewControllerAnimated:YES completion:nil];
}
All the code here is a modification of a customized copy of UITapView from MWPhotoBrowser.
Yo, here is a much easier way to do this, more or less an example of what Alessandro suggested. I'm not actually finding the top of the screen, but I'm giving the illusion of it.
BCViewController.h
#import <UIKit/UIKit.h>
@interface BCViewController : UIViewController <UIScrollViewDelegate>
@property (weak, nonatomic) IBOutlet UIScrollView *svScroller;
@property (weak, nonatomic) IBOutlet UIImageView *ivFish;
@end
#import "BCViewController.h"
@interface BCViewController (){
UIPanGestureRecognizer *_panGestureRecognizer;
CGPoint _fishOrigin;
}
@end
@implementation BCViewController
- (void)viewDidLoad
{
[super viewDidLoad];
self.svScroller.delegate = self;
[self.svScroller setContentSize:self.view.frame.size];
self.svScroller.translatesAutoresizingMaskIntoConstraints = NO;
self.ivFish.userInteractionEnabled = YES;
_panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanFrom:)];
[self.ivFish addGestureRecognizer:_panGestureRecognizer];
_fishOrigin = self.ivFish.center;
NSLog(#"center %f", self.ivFish.center.x);
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
-(void)handlePanFrom:(UIPanGestureRecognizer *)recognizer{
// if you want it but not used
CGPoint translation = [recognizer translationInView:recognizer.view];
// if you want it but not used
CGPoint velocity = [recognizer velocityInView:recognizer.view];
CGPoint tempPoint;
if (recognizer.state == UIGestureRecognizerStateBegan) {
} else if (recognizer.state == UIGestureRecognizerStateChanged) {
tempPoint = [recognizer locationInView:self.view];
self.ivFish.center = CGPointMake(175.5, tempPoint.y);
} else if (recognizer.state == UIGestureRecognizerStateEnded) {
CGPoint tempPoint;
tempPoint = [recognizer locationInView:self.view];
if (tempPoint.y < 132) {
[UIView animateWithDuration:.3 delay:0
options:UIViewAnimationOptionCurveEaseOut
animations:^ {
[self.navigationController popViewControllerAnimated:TRUE];
}
completion:NULL];
} else {
[UIView animateWithDuration:.3 delay:0
options:UIViewAnimationOptionCurveEaseOut
animations:^ {
self.ivFish.center = _fishOrigin;
}
completion:NULL];
}
}
}

How to add a magnifier to custom control?

How do I add a magnifier to a custom control? The control is a subclass of UIView. (It's a UIWebView, but the native magnification functionality doesn't work on some pages.)
UPDATED:
Maybe it's possible to force the magnifier to be drawn on a UIWebView?
1. Add the following files to your project:
MagnifierView.h:
//
// MagnifierView.h
// SimplerMaskTest
//
#import <UIKit/UIKit.h>
@interface MagnifierView : UIView {
UIView *viewToMagnify;
CGPoint touchPoint;
}
@property (nonatomic, retain) UIView *viewToMagnify;
@property (assign) CGPoint touchPoint;
@end
MagnifierView.m:
//
// MagnifierView.m
// SimplerMaskTest
//
#import "MagnifierView.h"
#import <QuartzCore/QuartzCore.h>
@implementation MagnifierView
@synthesize viewToMagnify;
@dynamic touchPoint;
- (id)initWithFrame:(CGRect)frame {
return [self initWithFrame:frame radius:118];
}
- (id)initWithFrame:(CGRect)frame radius:(int)r {
int radius = r;
if ((self = [super initWithFrame:CGRectMake(0, 0, radius, radius)])) {
//Make the layer circular.
self.layer.cornerRadius = radius / 2;
self.layer.masksToBounds = YES;
}
return self;
}
- (void)setTouchPoint:(CGPoint)pt {
touchPoint = pt;
// whenever touchPoint is set, update the position of the magnifier (to just above what's being magnified)
self.center = CGPointMake(pt.x, pt.y-66);
}
- (CGPoint)getTouchPoint {
return touchPoint;
}
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect bounds = self.bounds;
CGImageRef mask = [UIImage imageNamed: @"loupe-mask@2x.png"].CGImage;
UIImage *glass = [UIImage imageNamed: @"loupe-hi@2x.png"];
CGContextSaveGState(context);
CGContextClipToMask(context, bounds, mask);
CGContextFillRect(context, bounds);
CGContextScaleCTM(context, 1.2, 1.2);
//draw your subject view here
CGContextTranslateCTM(context,1*(self.frame.size.width*0.5),1*(self.frame.size.height*0.5));
//CGContextScaleCTM(context, 1.5, 1.5);
CGContextTranslateCTM(context,-1*(touchPoint.x),-1*(touchPoint.y));
[self.viewToMagnify.layer renderInContext:context];
CGContextRestoreGState(context);
[glass drawInRect: bounds];
}
- (void)dealloc {
[viewToMagnify release];
[super dealloc];
}
@end
TouchReader.h:
//
// TouchReader.h
// SimplerMaskTest
//
#import <UIKit/UIKit.h>
#import "MagnifierView.h"
@interface TouchReader : UIView {
NSTimer *touchTimer;
MagnifierView *loop;
}
@property (nonatomic, retain) NSTimer *touchTimer;
- (void)addLoop;
- (void)handleAction:(id)timerObj;
@end
TouchReader.m:
//
// TouchReader.m
// SimplerMaskTest
//
#import "TouchReader.h"
#import "MagnifierView.h"
@implementation TouchReader
@synthesize touchTimer;
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
self.touchTimer = [NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(addLoop) userInfo:nil repeats:NO];
// just create one loop and re-use it.
if (loop == nil) {
loop = [[MagnifierView alloc] init];
loop.viewToMagnify = self;
}
UITouch *touch = [touches anyObject];
loop.touchPoint = [touch locationInView:self];
[loop setNeedsDisplay];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
[self handleAction:touches];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
[self.touchTimer invalidate];
self.touchTimer = nil;
[loop removeFromSuperview];
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
[self.touchTimer invalidate];
self.touchTimer = nil;
[loop removeFromSuperview];
}
- (void)addLoop {
// add the loop to the superview. if we add it to the view it magnifies, it'll magnify itself!
[self.superview addSubview:loop];
// here, we could do some nice animation instead of just adding the subview...
}
- (void)handleAction:(id)timerObj {
NSSet *touches = timerObj;
UITouch *touch = [touches anyObject];
loop.touchPoint = [touch locationInView:self];
[loop setNeedsDisplay];
}
- (void)dealloc {
[loop release];
loop = nil;
[super dealloc];
}
#end
Based on: http://coffeeshopped.com/2010/03/a-simpler-magnifying-glass-loupe-view-for-the-iphone
2. Add the following images:
Images used in the code:
loupe-hi@2x.png
loupe-mask@2x.png
Original, centered images with a shadow (not used at the moment):
loupe-shadow-hi@2x.png
loupe-shadow-mask@2x.png
3. Replace the main UIView in your xib file with TouchReader.
The magnifier will work automatically, except for controls that capture touch events themselves (for example, UIWebView). The code above also doesn't support the images with a shadow; please add a new answer to the question if you successfully fix this issue.
UPDATED:
Change the following code to add UIWebView support; the main UIView should remain a plain UIView.
@interface TouchReader : UILongPressGestureRecognizer
And add a gesture to webView:
TouchReader* gestureMagnifier = [[[TouchReader alloc] initWithTarget:self action:@selector(handleMagnifier:)] autorelease];
gestureMagnifier.webView = editSource;
gestureMagnifier.delegate = self;
gestureMagnifier.minimumPressDuration = 0.5;
[webView addGestureRecognizer:gestureMagnifier];
TouchReader.h:
//- (void)handleAction:(id)timerObj;
-(void) handleGestureAction:(CGPoint)location;
TouchReader.m:
-(void)awakeFromNib
{
UILongPressGestureRecognizer * longPressGesture = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[self addGestureRecognizer:longPressGesture];
}
-(void)handleGesture:(UILongPressGestureRecognizer *)longPressGesture
{
CGPoint location = [longPressGesture locationInView:self];
switch (longPressGesture.state) {
case UIGestureRecognizerStateBegan:
self.touchTimer = [NSTimer scheduledTimerWithTimeInterval:0.5
target:self
selector:@selector(addLoop)
userInfo:nil
repeats:NO];
// just create one loop and re-use it.
if(loop == nil){
loop = [[MagnifierView alloc] init];
loop.viewToMagnify = self;
}
loop.touchPoint = location;
[loop setNeedsDisplay];
break;
case UIGestureRecognizerStateChanged:
[self handleGestureAction:location];
break;
case UIGestureRecognizerStateEnded:
[self.touchTimer invalidate];
self.touchTimer = nil;
[loop removeFromSuperview];
loop=nil;
break;
default:
break;
}
}
- (void)addLoop {
// add the loop to the superview. if we add it to the view it magnifies, it'll magnify itself!
[self.superview addSubview:loop];
}
-(void) handleGestureAction:(CGPoint)location
{
loop.touchPoint = location;
[loop setNeedsDisplay];
}
You don't need the touches... methods any more.

How to convert coordinates to an image

Signature.h file
#import <UIKit/UIKit.h>
@interface Signature : UIViewController
{
CGPoint firstTouch;
CGPoint lastTouch;
UIColor *pointColor;
CGRect *points;
int npoints;
CGPoint location;
// UIImageView *signatureImageView;
}
@property CGPoint firstTouch;
@property CGPoint lastTouch;
@property (nonatomic, retain) UIColor *pointColor;
@property CGRect *points;
@property int npoints;
@property (retain, nonatomic) IBOutletCollection(UIImageView) NSArray *drawSign;
@property CGPoint location;
@property (retain, nonatomic) IBOutletCollection(UIImageView) NSArray *signatureImageView;
- (IBAction)savePressed:(id)sender;
- (IBAction)clearPressed:(id)sender;
@end
Signature.m
#import "Signature.h"
@interface Signature ()
@end
@implementation Signature
@synthesize drawSign;
@synthesize signatureImageView;
@synthesize firstTouch;
@synthesize lastTouch;
@synthesize pointColor;
@synthesize points;
- (id)initWithFrame:(CGRect)frame
{
return self;
}
- (id)initWithCoder:(NSCoder *)coder
{
if(self = [super initWithCoder:coder])
{
self.npoints = 0;
}
return self;
}
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
if (self) {
// Custom initialization
}
NSLog(#"initwithnibname");
return self;
}
- (void)viewDidLoad
{
NSLog(#"viewdidload");
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
}
- (void)viewDidUnload
{
NSLog(#"view did unload");
[self setSignatureImageView:nil];
[self setDrawSign:nil];
[super viewDidUnload];
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
}
- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
NSLog(#"touch has began");
UITouch *touch = [touches anyObject];
self.location = [touch locationInView:self.view];
}
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
NSLog(#"touches moves");
UITouch *touch = [touches anyObject];
CGPoint currentLocation = [touch locationInView:self.view];
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
//[self.drawSign.image drawInRect:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 5.0);
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, location.x, location.y);
NSLog(#"%f x is",location.x);
NSLog(#"%f y is",location.y);
CGContextAddLineToPoint(ctx, currentLocation.x, currentLocation.y);
CGContextStrokePath(ctx);
// self.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
location = currentLocation;
}
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
NSLog(#"touches ended");
UITouch *touch = [touches anyObject];
CGPoint currentLocation = [touch locationInView:self.view];
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
//[self.view.image drawInRect:CGRectMake(0, 0, self.frame.view.size.width, self.frame.view.size.height)];
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 5.0);
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, location.x, location.y);
CGContextAddLineToPoint(ctx, currentLocation.x, currentLocation.y);
CGContextStrokePath(ctx);
// self.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
location = currentLocation;
}
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
return (interfaceOrientation == UIInterfaceOrientationPortrait);
}
- (void)dealloc {
[signatureImageView release];
[super dealloc];
}
- (IBAction)savePressed:(id)sender {
NSLog(#"save pressed");
}
- (IBAction)clearPressed:(id)sender {
NSLog(#"cancel pressed");
}
@end
Please find the two files related to my project above. From one view I come to this view, and now I need to save whatever has been drawn on it by the user. I am able to fetch the coordinates but unable to draw them. I have tried several attempts, but none of them seems to work; please find them, commented out, in the Signature.m file itself. Kindly point out my mistake and correct me.
I don't really understand your question, but the reason this code is wrong is in the touch-handling part.
If you want to draw while you touch, then you have to use the drawRect: method of the view you want to draw in.
If you want to draw over an image to save or present it after the user releases the touch, then you have to create an image context in touchesBegan, draw into it in touchesMoved, and finally save the result in touchesEnded.
You can extract the result at the touchesEnded with
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
What your code is doing right now is: every time you move your finger, you create a new canvas, draw on it and destroy it. When you lift your finger, you create one more canvas, draw on it and destroy it as well.
But I'm pretty sure you want to move your finger and show the result directly on the screen. For that I really recommend you google for a tutorial, but basically you have to: create an array of points in touchesBegan, add points to this array in touchesMoved, and add the final point to the array in touchesEnded. Meanwhile, in the view's drawRect: your code has to continuously draw the points in this array.
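A minimal sketch of that point-array approach, assuming a plain UIView subclass (the class and property names here are illustrative, not taken from the question):
// SignatureView.h (illustrative)
@interface SignatureView : UIView
@property (nonatomic, retain) NSMutableArray *points; // CGPoints boxed in NSValue
@end

// SignatureView.m (illustrative)
@implementation SignatureView
@synthesize points;
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    self.points = [NSMutableArray array];
    CGPoint p = [[touches anyObject] locationInView:self];
    [self.points addObject:[NSValue valueWithCGPoint:p]];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // touchesEnded would add the final point in exactly the same way
    CGPoint p = [[touches anyObject] locationInView:self];
    [self.points addObject:[NSValue valueWithCGPoint:p]];
    [self setNeedsDisplay]; // redraw with the new point
}
- (void)drawRect:(CGRect)rect {
    if (self.points.count < 2) return;
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineWidth(ctx, 5.0);
    CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
    CGPoint first = [[self.points objectAtIndex:0] CGPointValue];
    CGContextMoveToPoint(ctx, first.x, first.y);
    for (NSUInteger i = 1; i < self.points.count; i++) {
        CGPoint p = [[self.points objectAtIndex:i] CGPointValue];
        CGContextAddLineToPoint(ctx, p.x, p.y);
    }
    CGContextStrokePath(ctx);
}
@end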
EDIT:
ASSUMING you have a valid UIImage (a blank canvas, already initialized) at the beginning in
self.drawSign.image
Try:
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
NSLog(#"touches moves");
UITouch *touch = [touches anyObject];
CGPoint currentLocation = [touch locationInView:self.view];
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[self.drawSign.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 5.0);
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, location.x, location.y);
NSLog(#"%f x is",location.x);
NSLog(#"%f y is",location.y);
CGContextAddLineToPoint(ctx, currentLocation.x, currentLocation.y);
CGContextStrokePath(ctx);
self.drawSign.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
location = currentLocation;
}
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
NSLog(#"touches ended");
UITouch *touch = [touches anyObject];
CGPoint currentLocation = [touch locationInView:self.view];
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[self.drawSign.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 5.0);
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, location.x, location.y);
CGContextAddLineToPoint(ctx, currentLocation.x, currentLocation.y);
CGContextStrokePath(ctx);
self.drawSign.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
location = currentLocation;
}
I am not sure if this will work, but even if it does, it is a TERRIBLE way of doing it (performance-wise).

How to achieve a continuous drag-and-drop menu effect?

I'm trying to achieve a drag-and-drop menu effect. I'm not sure how to go about this; perhaps someone has experience with this exact effect.
Quite simply, when a user touches down on a menu item, I want a graphic to appear at their touch location. Their touch will now control the panning of the graphic. Upon releasing the touch, the graphic will sit in its place and assume full alpha.
I'm already familiar with creating pan gestures and instantiating a graphic. So far, I can create the graphic where the menu item is touched. The biggest issue is how I "pass over" the touch gesture so it is a single and continuous motion.
Also, should the menu item be UIButton or UIImageView?
Any help appreciated. Thanks
I had some fun with this one. The following code will grab the image from the button when touched, drag that image at alpha=0.5, and drop it wherever your touches end at alpha=1.0. It will continue to be draggable thereafter.
After importing QuartzCore, create a new file. The .h should read:
#import <Foundation/Foundation.h>
#import <QuartzCore/CAGradientLayer.h>
#import <QuartzCore/CALayer.h>
@interface DraggableImage : CAGradientLayer
- (void)draw:(UIImage *)image;
- (void)moveToFront;
- (void)appearDraggable;
- (void)appearNormal;
@end
and the .m should read:
#import "DraggableImage.h"
@implementation DraggableImage
- (void)draw:(UIImage *)image{
CGRect buttonFrame = self.bounds;
int buttonWidth = buttonFrame.size.width;
int buttonHeight = buttonFrame.size.height;
UIGraphicsBeginImageContext( CGSizeMake(buttonWidth, buttonHeight) );
[image drawInRect:self.bounds];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[newImage drawInRect:self.bounds];
}
- (void)moveToFront {
CALayer *superlayer = self.superlayer;
[self removeFromSuperlayer];
[superlayer addSublayer:self];
}
- (void)appearDraggable {
self.opacity = 0.5;
}
- (void)appearNormal {
self.opacity = 1.0;
}
@end
Now in your main view controller, add:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import "DraggableImage.h"
@interface YourViewController : UIViewController{
DraggableImage *heldImage;
DraggableImage *imageForFrame[5]; // or however many
UIButton *buttonPressed;
int imageCount;
}
@property (weak, nonatomic) IBOutlet UIButton *imageButton;
-(IBAction)buildImageLayerForButton:(UIButton *)sender;
- (void)moveHeldImageToPoint:(CGPoint)location;
- (CALayer *)layerForTouch:(UITouch *)touch;
@end
The imageButton in this case would be your apple Button. Now in your .m file, add this:
@synthesize imageButton;
#pragma mark - Touches
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
CALayer *hitLayer = [self layerForTouch:[touches anyObject]];
if ([hitLayer isKindOfClass:[DraggableImage class]]) {
DraggableImage *image = (DraggableImage *)hitLayer;
heldImage = image;
[heldImage moveToFront];
}
hitLayer = nil;
[super touchesBegan:touches withEvent:event];
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
if (heldImage)
{
UITouch *touch = [touches anyObject];
UIView *view = self.view;
CGPoint location = [touch locationInView:view];
[self moveHeldImageToPoint:location];
}
}
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
if (heldImage) {
[heldImage appearNormal];
heldImage = nil;
}
}
- (void)dragBegan:(UIControl *)c withEvent:ev {
}
- (void)dragMoving:(UIControl *)c withEvent:ev {
UITouch *touch = [[ev allTouches] anyObject];
CGPoint touchPoint = [touch locationInView:self.view];
[self moveHeldImageToPoint:touchPoint];
}
- (void)dragEnded:(UIControl *)c withEvent:ev {
UITouch *touch = [[ev allTouches] anyObject];
CGPoint touchPoint = [touch locationInView:self.view];
[self moveHeldImageToPoint:touchPoint];
[heldImage appearNormal];
heldImage = nil;
}
-(IBAction)buildImageLayerForButton:(UIButton *)sender{
DraggableImage *image = [[DraggableImage alloc] init];
buttonPressed = sender;
CGRect buttonFrame = sender.bounds;
int buttonWidth = buttonFrame.size.width;
int buttonHeight = buttonFrame.size.height;
image.frame = CGRectMake(120, 24, buttonWidth*3, buttonHeight*3);
image.backgroundColor = [UIColor lightGrayColor].CGColor;
image.delegate = self;
imageForFrame[imageCount] = image;
[self.view.layer addSublayer:image];
[image setNeedsDisplay];
[image moveToFront];
[image appearDraggable];
heldImage = image;
[self moveHeldImageToPoint:sender.center];
imageCount++;
}
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
UIGraphicsPushContext(ctx);
DraggableImage *image = (DraggableImage *)layer;
[image draw:[buttonPressed imageForState:UIControlStateNormal]];
UIGraphicsPopContext();
}
- (void)moveHeldImageToPoint:(CGPoint)location
{
float dx = location.x;
float dy = location.y;
CGPoint newPosition = CGPointMake(dx, dy);
[CATransaction begin];
[CATransaction setDisableActions:TRUE];
heldImage.position = newPosition;
[CATransaction commit];
}
- (CALayer *)layerForTouch:(UITouch *)touch
{
UIView *view = self.view;
CGPoint location = [touch locationInView:view];
location = [view convertPoint:location toView:nil];
CALayer *hitPresentationLayer = [view.layer.presentationLayer hitTest:location];
if (hitPresentationLayer)
{
return hitPresentationLayer.modelLayer;
}
return nil;
}
-(void)viewDidLoad{
[imageButton addTarget:self action:@selector(dragBegan:withEvent:) forControlEvents: UIControlEventTouchDown];
[imageButton addTarget:self action:@selector(dragMoving:withEvent:) forControlEvents: UIControlEventTouchDragInside | UIControlEventTouchDragOutside];
[imageButton addTarget:self action:@selector(dragEnded:withEvent:) forControlEvents: UIControlEventTouchUpInside | UIControlEventTouchUpOutside];
[super viewDidLoad];
}
- (void)viewDidUnload {
[self setImageButton:nil];
[super viewDidUnload];
}
Et voila! Connect your button, set its image, and throw copies all over the screen. :)
Note: I didn't comment much, but would be happy to answer any questions.
Cheers!
EDIT: fixed the -(void)draw:(UIImage *)image{} so that it would resize the image properly.
If what you want is to pass the touch handling to the second graphic (the big one), I think you can do something like this.
In the .h you have to declare the image that you're going to drag and float variables to remember the previous point of the draggable button (I'm assuming you use the iOS 5 SDK):
@property(nonatomic, strong) UIImageView* myImage;
@property float pointX;
@property float pointY;
Then, in the .m you can do this:
myImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"appleImage.jpg"]];
myImage.alpha = 0;
//default UIImageView interaction is disabled, so lets enabled it first
myImage.userInteractionEnabled = YES;
[button addTarget:self action:@selector(wasDragged:withEvent:) forControlEvents:UIControlEventTouchDragInside];
and then make the drag function
- (void)wasDragged:(UIButton *)button withEvent:(UIEvent *)event
{
self.myImage.alpha = 0.5;
UITouch *touch = [[event touchesForView:button] anyObject];
CGPoint previousLocation = [touch previousLocationInView:button];
CGPoint location = [touch locationInView:button];
CGFloat delta_x = location.x - previousLocation.x;
CGFloat delta_y = location.y - previousLocation.y;
// move button, to keep the dragging effect
button.center = CGPointMake(button.center.x + delta_x,
button.center.y + delta_y);
// moving the image along with it
self.myImage.center = CGPointMake(self.myImage.center.x + delta_x,
self.myImage.center.y + delta_y);
self.pointX = previousLocation.x;
self.pointY = previousLocation.y;
[button addTarget:self action:@selector(dragRelease:withEvent:) forControlEvents:UIControlEventTouchUpInside];
}
Finally, make the dragRelease function, where you return the button to its original place and set the alpha of the image to 1:
-(void)dragRelease:(UIButton *)button withEvent:(UIEvent *)event
{
self.myImage.alpha = 1;
button.center = CGPointMake(pointX, pointY);
}
and you're done :3
This is just the basic idea though; maybe this isn't what you want, but I hope it helps.
edit*: oh, and don't forget to synthesize all the properties; also, if you're using an SDK below 5.0, you can change the "strong" property attribute to "retain".
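For reference, a sketch of those declarations (the property names come from the snippet above; the pre-5.0 variant is an assumption):
// in the .m
@synthesize myImage, pointX, pointY;

// pre-iOS 5 / MRC alternative for the image property in the .h
@property(nonatomic, retain) UIImageView* myImage;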

UIImage detecting touch and dragging

This is a fairly common question, to which I have a few answers, and I'm nearly there. I have a button which, when pressed, will create an image (code as follows).
(numImages is set to zero on load and is used as a running count for the tag numbers of all images created.)
UIImage *tmpImage = [[UIImage imageNamed:[NSString stringWithFormat:@"%i.png", sender.tag]] retain];
UIImageView *myImage = [[UIImageView alloc] initWithImage:tmpImage];
numImages += 1;
myImage.userInteractionEnabled = YES;
myImage.tag = numImages;
myImage.opaque = YES;
[self.view addSubview:myImage];
[myImage release];
I then have a touchesBegan method which will detect what's touched. What I need it to do is allow the user to drag the newly created image. It's nearly working, but the image flickers all over the place when you drag it. I can access the image you click on, as I can get its TAG, but I just cannot drag it nicely.
- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [[event allTouches] anyObject];
CGPoint location = [touch locationInView:touch.view];
if (touch.view.tag > 0) {
touch.view.center = location;
}
NSLog(#"tag=%#", [NSString stringWithFormat:#"%i", touch.view.tag]);
}
- (void) touchesMoved:(NSSet *)touches withEvent: (UIEvent *)event {
[self touchesBegan:touches withEvent:event];
}
It works, in that I get an output of the tag for each image as I click on them. But when I drag, it flashes... any ideas?
In answer to my own question - I decided to create a class for handling the images I place on the view.
Code if anyone's interested....
Draggable.h
#import <Foundation/Foundation.h>
@interface Draggable : UIImageView {
CGPoint startLocation;
}
@end
Draggable.m
#import "Draggable.h"
@implementation Draggable
- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event {
// Retrieve the touch point
CGPoint pt = [[touches anyObject] locationInView:self];
startLocation = pt;
[[self superview] bringSubviewToFront:self];
}
- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event {
// Move relative to the original touch point
CGPoint pt = [[touches anyObject] locationInView:self];
CGRect frame = [self frame];
frame.origin.x += pt.x - startLocation.x;
frame.origin.y += pt.y - startLocation.y;
[self setFrame:frame];
}
@end
and to call it
UIImage *tmpImage = [[UIImage imageNamed:"test.png"] retain];
UIImageView *imageView = [[UIImageView alloc] initWithImage:tmpImage];
CGRect cellRectangle;
cellRectangle = CGRectMake(0,0,tmpImage.size.width ,tmpImage.size.height );
UIImageView *dragger = [[Draggable alloc] initWithFrame:cellRectangle];
[dragger setImage:tmpImage];
[dragger setUserInteractionEnabled:YES];
[self.view addSubview:dragger];
[imageView release];
[tmpImage release];
Usually you get an implicit animation when you change center. Are you messing with -contentMode or calling -setNeedsDisplay by any chance?
You can explicitly request animation to avoid the delete and re-draw this way:
if (touch.view.tag > 0) {
[UIView beginAnimations:#"viewMove" context:touch.view];
touch.view.center = location;
[UIView commitAnimations];
}
Do note that NSLog() can be very slow (much slower than you'd expect; it's much more complicated than a simple printf), and that can cause trouble in something called as often as touchesMoved:withEvent:.
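One common way to keep that logging out of hot paths like touchesMoved:withEvent: is to compile it away in release builds (a sketch, not from the original answer):
#ifdef DEBUG
#define DLog(...) NSLog(__VA_ARGS__)
#else
#define DLog(...) /* no-op in release builds */
#endif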
BTW, you're leaking tmpImage.
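A minimal sketch of fixing that leak under manual reference counting: imageNamed: already returns an autoreleased image, so the extra retain (and a matching release) isn't needed at all.
UIImage *tmpImage = [UIImage imageNamed:[NSString stringWithFormat:@"%i.png", sender.tag]]; // no retain needed
UIImageView *myImage = [[UIImageView alloc] initWithImage:tmpImage]; // the image view retains the image
[self.view addSubview:myImage];
[myImage release]; // the superview now owns the view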