I'm moving my views like this:
UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(moveRight:)];
[panRecognizer setMinimumNumberOfTouches:1];
[panRecognizer setMaximumNumberOfTouches:1];
[panRecognizer setDelegate:self];
[bubbleView[rightCnt] addGestureRecognizer:panRecognizer];
[panRecognizer release];
Now I want to do the same thing by dragging with a long press.
Any ideas?
UILongPressGestureRecognizer already does what you want. Take a look at its state property (UIGestureRecognizerState). From the documentation:
Long-press gestures are continuous. The gesture begins
(UIGestureRecognizerStateBegan) when the number of allowable fingers
(numberOfTouchesRequired) have been pressed for the specified period
(minimumPressDuration) and the touches do not move beyond the
allowable range of movement (allowableMovement). The gesture
recognizer transitions to the Change state whenever a finger moves,
and it ends (UIGestureRecognizerStateEnded) when any of the fingers
are lifted.
So essentially, after your UILongPressGestureRecognizer selector is called, you listen for UIGestureRecognizerStateBegan, UIGestureRecognizerStateChanged, and UIGestureRecognizerStateEnded. Keep updating your view's frame during UIGestureRecognizerStateChanged.
- (void)moveRight:(UILongPressGestureRecognizer *)gesture
{
    if (gesture.state == UIGestureRecognizerStateBegan)
    {
        // if needed, do some initial setup or init of views here
    }
    else if (gesture.state == UIGestureRecognizerStateChanged)
    {
        // move your views here, e.g. compute newFrame from
        // [gesture locationInView:yourView.superview]
        [yourView setFrame:newFrame];
    }
    else if (gesture.state == UIGestureRecognizerStateEnded)
    {
        // do cleanup
    }
}
@implementation MyViewController {
    CGPoint _priorPoint;
}

- (void)moveRight:(UILongPressGestureRecognizer *)sender {
    UIView *view = sender.view;
    CGPoint point = [sender locationInView:view.superview];
    if (sender.state == UIGestureRecognizerStateChanged) {
        CGPoint center = view.center;
        center.x += point.x - _priorPoint.x;
        center.y += point.y - _priorPoint.y;
        view.center = center;
    }
    // Remember the touch point so the next callback moves incrementally
    _priorPoint = point;
}

@end
In Swift this can be achieved using the code below:
class DragView: UIView {

    // Starting touch position (in the superview's coordinate system)
    var initialCenter: CGPoint?

    override func didMoveToWindow() {
        super.didMoveToWindow()
        // Add longPress gesture recognizer
        let longPress = UILongPressGestureRecognizer(target: self, action: #selector(longPressAction(gesture:)))
        addGestureRecognizer(longPress)
    }

    // Handle longPress action
    @objc func longPressAction(gesture: UILongPressGestureRecognizer) {
        if gesture.state == .began {
            guard let view = gesture.view else {
                return
            }
            initialCenter = gesture.location(in: view.superview)
        }
        else if gesture.state == .changed {
            guard let originalCenter = initialCenter else {
                return
            }
            guard let view = gesture.view else {
                return
            }
            let point = gesture.location(in: view.superview)

            // Calculate new center position
            var newCenter = view.center
            newCenter.x += point.x - originalCenter.x
            newCenter.y += point.y - originalCenter.y

            // Update view center, and remember this touch point so the
            // delta is not applied again on the next .changed event
            view.center = newCenter
            initialCenter = point
        }
        else if gesture.state == .ended {
            // ...
        }
    }
}
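For example, using it from a view controller might look like this (a quick sketch; the frame and color are made up):
// Hypothetical setup: any DragView added to the hierarchy becomes
// draggable via long press.
let dragView = DragView(frame: CGRect(x: 50, y: 50, width: 100, height: 100))
dragView.backgroundColor = .blue
view.addSubview(dragView)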
You do not need to declare _priorPoint.
In my case, I only want the view to move horizontally, so I'm only changing the x coordinate.
Here is my solution:
if (longpressGestRec.state == UIGestureRecognizerStateChanged)
{
UIView *view = longpressGestRec.view;
// Location of the touch within the view.
CGPoint point = [longpressGestRec locationInView:view];
// Calculate the new X position from the touch's offset within the
// view, keeping the dragged view ('item') centered under the finger.
CGFloat newXLoc = (item.frame.origin.x + point.x) - (item.frame.size.width / 2);
[item setFrame:CGRectMake(newXLoc,
item.frame.origin.y,
item.frame.size.width,
item.frame.size.height)];
}
Thanks to Hari Kunwar for the Swift code, but the longPressAction function is not correctly defined.
Here's an improved version:
@objc func longPressAction(gesture: UILongPressGestureRecognizer) {
    if gesture.state == .began {
    }
    else if gesture.state == .changed {
        guard let view = gesture.view else {
            return
        }
        let location = gesture.location(in: self.view)
        // The original arithmetic reduces to moving the view's
        // center directly to the touch location.
        view.center = location
    }
    else if gesture.state == .ended {
    }
}
I need your help, guys. I have a game scene and a func which allows moving the camera using a pan gesture. I also need a pinch gesture to zoom my SKScene in and out. I found some code here, but it lags. Can someone please help me improve this code?
@objc private func didPinch(_ sender: UIPinchGestureRecognizer) {
guard let camera = self.camera else {return}
if sender.state == .changed {
previousCameraScale = camera.xScale
}
camera.setScale(previousCameraScale * 1 / sender.scale)
sender.scale = 1.0
}
Try this pinch code.
//pinch -- simple version
@objc func pinch(_ recognizer: UIPinchGestureRecognizer) {
guard let camera = self.camera else { return } // The camera has a weak reference, so test it
if recognizer.state == .changed {
let deltaScale = (recognizer.scale - 1.0)*2
let convertedScale = recognizer.scale - deltaScale
let newScale = camera.xScale*convertedScale
camera.setScale(newScale)
//reset value for next time
recognizer.scale = 1.0
}
}
Although I would recommend this slightly more complicated version, which centers the pinch around the touch point; it makes for a much nicer pinch in my experience.
//pinch around touch point
@objc func pinch(_ recognizer: UIPinchGestureRecognizer) {
guard let camera = self.camera else { return } // The camera has a weak reference, so test it
//cache location prior to scaling
let locationInView = recognizer.location(in: self.view)
let location = self.convertPoint(fromView: locationInView)
if recognizer.state == .changed {
let deltaScale = (recognizer.scale - 1.0)*2
let convertedScale = recognizer.scale - deltaScale
let newScale = camera.xScale*convertedScale
camera.setScale(newScale)
//zoom around touch point rather than center screen
let locationAfterScale = self.convertPoint(fromView: locationInView)
let locationDelta = location - locationAfterScale
let newPoint = camera.position + locationDelta
camera.position = newPoint
//reset value for next time
recognizer.scale = 1.0
}
}
//also need these extensions to add and subtract CGPoints
extension CGPoint {
static func + (a:CGPoint, b:CGPoint) -> CGPoint {
return CGPoint(x: a.x + b.x, y: a.y + b.y)
}
static func - (a:CGPoint, b:CGPoint) -> CGPoint {
return CGPoint(x: a.x - b.x, y: a.y - b.y)
}
}
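The question also mentions moving the camera with a pan gesture; for completeness, here is a minimal pan handler in the same style. This is a sketch under assumptions: previousCameraPoint is a stored property you add to the scene, and the handler is registered the same way as the pinch one.
//pan camera -- sketch; previousCameraPoint is an assumed stored property
@objc func pan(_ recognizer: UIPanGestureRecognizer) {
    guard let camera = self.camera else { return }
    if recognizer.state == .began {
        previousCameraPoint = camera.position
    }
    // translation is cumulative since the gesture began; invert y because
    // SpriteKit's y-axis points up, and scale by the camera zoom so the
    // scene tracks the finger at any zoom level
    let translation = recognizer.translation(in: self.view)
    camera.position = CGPoint(
        x: previousCameraPoint.x - translation.x * camera.xScale,
        y: previousCameraPoint.y + translation.y * camera.yScale
    )
}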
Here is my code:
viewDidLoad:
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinch:)];
[self.canvas addGestureRecognizer:pinch];
pinch.delegate = self;
UIRotationGestureRecognizer *twoFingersRotate = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(pinchRotate:)];
[[self canvas] addGestureRecognizer:twoFingersRotate];
twoFingersRotate.delegate = self;
Code For Pinches and Rotates:
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
return YES;
}
-(void)pinchRotate:(UIRotationGestureRecognizer*)rotate
{
SMImage *selectedImage = [DataCenter sharedDataCenter].selectedImage;
switch (rotate.state)
{
case UIGestureRecognizerStateBegan:
{
selectedImage.referenceTransform = selectedImage.transform;
break;
}
case UIGestureRecognizerStateChanged:
{
selectedImage.transform = CGAffineTransformRotate(selectedImage.referenceTransform, ([rotate rotation] * 55) * M_PI/180);
break;
}
default:
break;
}
}
-(void)pinch:(UIPinchGestureRecognizer*)pinch
{
SMImage *selectedImage = [DataCenter sharedDataCenter].selectedImage;
[self itemSelected];
switch (pinch.state)
{
case UIGestureRecognizerStateBegan:
{
selectedImage.referenceTransform = selectedImage.transform;
break;
}
case UIGestureRecognizerStateChanged:
{
CGAffineTransform transform = CGAffineTransformScale(selectedImage.referenceTransform, pinch.scale, pinch.scale);
selectedImage.transform = transform;
break;
}
default:
break;
}
}
My rotation works great on its own and my scale works great on its own, but they won't work together: one always works while the other doesn't. When I implement shouldRecognizeSimultaneouslyWithGestureRecognizer, the two gestures seem to fight against each other and produce poor results. What am I missing? (Yes, I have implemented <UIGestureRecognizerDelegate>.)
Every time pinch: is called, you just compute the transform based on the pinch recognizer's scale. Every time pinchRotate: is called, you just compute the transform based on the rotation recognizer's rotation. You never combine the scale and the rotation into one transform.
Here's an approach. Give yourself one new instance variable, _activeRecognizers:
NSMutableSet *_activeRecognizers;
Initialize it in viewDidLoad:
_activeRecognizers = [NSMutableSet set];
Use one method as the action for both recognizers:
- (IBAction)handleGesture:(UIGestureRecognizer *)recognizer
{
SMImage *selectedImage = [DataCenter sharedDataCenter].selectedImage;
switch (recognizer.state) {
case UIGestureRecognizerStateBegan:
if (_activeRecognizers.count == 0)
selectedImage.referenceTransform = selectedImage.transform;
[_activeRecognizers addObject:recognizer];
break;
case UIGestureRecognizerStateEnded:
selectedImage.referenceTransform = [self applyRecognizer:recognizer toTransform:selectedImage.referenceTransform];
[_activeRecognizers removeObject:recognizer];
break;
case UIGestureRecognizerStateChanged: {
CGAffineTransform transform = selectedImage.referenceTransform;
for (UIGestureRecognizer *recognizer in _activeRecognizers)
transform = [self applyRecognizer:recognizer toTransform:transform];
selectedImage.transform = transform;
break;
}
default:
break;
}
}
You'll need this helper method:
- (CGAffineTransform)applyRecognizer:(UIGestureRecognizer *)recognizer toTransform:(CGAffineTransform)transform
{
if ([recognizer respondsToSelector:@selector(rotation)])
return CGAffineTransformRotate(transform, [(UIRotationGestureRecognizer *)recognizer rotation]);
else if ([recognizer respondsToSelector:@selector(scale)]) {
CGFloat scale = [(UIPinchGestureRecognizer *)recognizer scale];
return CGAffineTransformScale(transform, scale, scale);
}
else
return transform;
}
This works if you're just allowing rotating and scaling. (I even tested it!)
If you want to add panning, use a separate action method and just adjust selectedImage.center. Trying to do panning with rotation and scaling using selectedImage.transform is much more complicated.
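For illustration, such a separate pan action could look like this in Swift (a sketch, not the answer's tested code; it moves whatever view the recognizer is attached to):
// Hypothetical pan action that adjusts center, leaving the
// transform to the pinch/rotate recognizers
@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    guard let view = recognizer.view else { return }
    let translation = recognizer.translation(in: view.superview)
    view.center = CGPoint(x: view.center.x + translation.x,
                          y: view.center.y + translation.y)
    // reset so the next callback delivers an incremental translation
    recognizer.setTranslation(.zero, in: view.superview)
}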
Swift 3 with Pan, Rotate and Pinch
// MARK: - Gestures
func transformUsingRecognizer(_ recognizer: UIGestureRecognizer, transform: CGAffineTransform) -> CGAffineTransform {
if let rotateRecognizer = recognizer as? UIRotationGestureRecognizer {
return transform.rotated(by: rotateRecognizer.rotation)
}
if let pinchRecognizer = recognizer as? UIPinchGestureRecognizer {
let scale = pinchRecognizer.scale
return transform.scaledBy(x: scale, y: scale)
}
if let panRecognizer = recognizer as? UIPanGestureRecognizer {
let deltaX = panRecognizer.translation(in: imageView).x
let deltaY = panRecognizer.translation(in: imageView).y
return transform.translatedBy(x: deltaX, y: deltaY)
}
return transform
}
var initialTransform: CGAffineTransform?
var gestures = Set<UIGestureRecognizer>(minimumCapacity: 3)
@IBAction func processTransform(_ sender: Any) {
let gesture = sender as! UIGestureRecognizer
switch gesture.state {
case .began:
if gestures.count == 0 {
initialTransform = imageView.transform
}
gestures.insert(gesture)
case .changed:
if var initial = initialTransform {
gestures.forEach({ (gesture) in
initial = transformUsingRecognizer(gesture, transform: initial)
})
imageView.transform = initial
}
case .ended:
gestures.remove(gesture)
default:
break
}
}
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
return true
}
For this to happen you need to implement the gesture delegate method shouldRecognizeSimultaneouslyWithGestureRecognizer and decide which gestures should be recognized simultaneously.
// ensure that the pinch and rotate gesture recognizers on a particular view can all recognize simultaneously
// prevent other gesture recognizers from recognizing simultaneously
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
// if the gesture recognizers's view isn't one of our views, don't allow simultaneous recognition
if (gestureRecognizer.view != firstView && gestureRecognizer.view != secondView)
return NO;
// if the gesture recognizers are on different views, don't allow simultaneous recognition
if (gestureRecognizer.view != otherGestureRecognizer.view)
return NO;
// if either of the gesture recognizers is the long press, don't allow simultaneous recognition
if ([gestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]] || [otherGestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]])
return NO;
return YES;
}
This code needs to be adapted to the views for which you want simultaneous gesture recognizers, but the above is what you need.
This example does not use gesture recognizers, and directly computes the transformation matrix. It also properly handles one-to-two finger transitions. It depends on a few helpers that are not shown here (the ┼ and ⋅ operators, HXMatrix, and the CGRect scaling extensions); sketches of the geometric ones follow the class.
class PincherView: UIView {
override var bounds :CGRect {
willSet(newBounds) {
oldBounds = self.bounds
} didSet {
self.imageLayer.position = ┼self.bounds
self._adjustScaleForBoundsChange()
}
}
var oldBounds :CGRect
var touch₁ :UITouch?
var touch₂ :UITouch?
var p₁ :CGPoint? // point 1 in image coordinate system
var p₂ :CGPoint? // point 2 in image coordinate system
var p₁ʹ :CGPoint? // point 1 in view coordinate system
var p₂ʹ :CGPoint? // point 2 in view coordinate system
var image :UIImage? {
didSet {self._reset()}
}
var imageLayer :CALayer
var imageTransform :CGAffineTransform {
didSet {
self.backTransform = self.imageTransform.inverted()
self.imageLayer.transform = CATransform3DMakeAffineTransform(self.imageTransform)
}
}
var backTransform :CGAffineTransform
var solutionMatrix :HXMatrix?
required init?(coder aDecoder: NSCoder) {
self.oldBounds = CGRect.zero
let layer = CALayer();
self.imageLayer = layer
self.imageTransform = CGAffineTransform.identity
self.backTransform = CGAffineTransform.identity
super.init(coder: aDecoder)
self.oldBounds = self.bounds
self.isMultipleTouchEnabled = true
self.layer.addSublayer(layer)
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
let pʹ = touch.location(in: self).applying(self._backNormalizeTransform())
let p = pʹ.applying(self.backTransform)
if self.touch₁ == nil {
self.touch₁ = touch
self.p₁ʹ = pʹ
self.p₁ = p
} else if self.touch₂ == nil {
self.touch₂ = touch
self.p₂ʹ = pʹ
self.p₂ = p
}
}
self.solutionMatrix = self._computeSolutionMatrix()
}
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
let pʹ = touch.location(in: self).applying(self._backNormalizeTransform())
if self.touch₁ == touch {
self.p₁ʹ = pʹ
} else if self.touch₂ == touch {
self.p₂ʹ = pʹ
}
}
CATransaction.begin()
CATransaction.setValue(true, forKey:kCATransactionDisableActions)
// Whether you're using 1 finger or 2 fingers
if let q₁ʹ = self.p₁ʹ, let q₂ʹ = self.p₂ʹ {
self.imageTransform = self._computeTransform(q₁ʹ, q₂ʹ)
} else if let q₁ʹ = (self.p₁ʹ != nil ? self.p₁ʹ : self.p₂ʹ) {
self.imageTransform = self._computeTransform(q₁ʹ, CGPoint(x:q₁ʹ.x + 10, y:q₁ʹ.y + 10))
}
CATransaction.commit()
}
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
if self.touch₁ == touch {
self.touch₁ = nil
self.p₁ = nil
self.p₁ʹ = nil
} else if self.touch₂ == touch {
self.touch₂ = nil
self.p₂ = nil
self.p₂ʹ = nil
}
}
self.solutionMatrix = self._computeSolutionMatrix()
}
//MARK: Private Methods
private func _reset() {
guard
let image = self.image,
let cgimage = image.cgImage else {
return
}
let r = CGRect(x:0, y:0, width:cgimage.width, height:cgimage.height)
imageLayer.contents = cgimage;
imageLayer.bounds = r
imageLayer.position = ┼self.bounds
self.imageTransform = self._initialTransform()
}
private func _normalizeTransform() -> CGAffineTransform {
let center = ┼self.bounds
return CGAffineTransform(translationX: center.x, y: center.y)
}
private func _backNormalizeTransform() -> CGAffineTransform {
return self._normalizeTransform().inverted();
}
private func _initialTransform() -> CGAffineTransform {
guard let image = self.image, let cgimage = image.cgImage else {
return CGAffineTransform.identity;
}
let r = CGRect(x:0, y:0, width:cgimage.width, height:cgimage.height)
let s = r.scaleIn(rect: self.bounds)
return CGAffineTransform(scaleX: s, y: s)
}
private func _adjustScaleForBoundsChange() {
guard let image = self.image, let cgimage = image.cgImage else {
return
}
let r = CGRect(x:0, y:0, width:cgimage.width, height:cgimage.height)
let oldIdeal = r.scaleAndCenterIn(rect: self.oldBounds)
let newIdeal = r.scaleAndCenterIn(rect: self.bounds)
let s = newIdeal.height / oldIdeal.height
self.imageTransform = self.imageTransform.scaledBy(x: s, y: s)
}
private func _computeSolutionMatrix() -> HXMatrix? {
if let q₁ = self.p₁, let q₂ = self.p₂ {
return _computeSolutionMatrix(q₁, q₂)
} else if let q₁ = self.p₁, let q₁ʹ = self.p₁ʹ {
let q₂ = CGPoint(x: q₁ʹ.x + 10, y: q₁ʹ.y + 10).applying(self.backTransform)
return _computeSolutionMatrix(q₁, q₂)
} else if let q₂ = self.p₂, let q₂ʹ = self.p₂ʹ {
let q₁ = CGPoint(x: q₂ʹ.x + 10, y: q₂ʹ.y + 10).applying(self.backTransform)
return _computeSolutionMatrix(q₂, q₁)
}
return nil
}
private func _computeSolutionMatrix(_ q₁:CGPoint, _ q₂:CGPoint) -> HXMatrix {
let x₁ = Double(q₁.x)
let y₁ = Double(q₁.y)
let x₂ = Double(q₂.x)
let y₂ = Double(q₂.y)
let A = HXMatrix(rows: 4, columns: 4, values:[
x₁, -y₁, 1, 0,
y₁, x₁, 0, 1,
x₂, -y₂, 1, 0,
y₂, x₂, 0, 1
])
return A.inverse()
}
private func _computeTransform(_ q₁ʹ:CGPoint, _ q₂ʹ:CGPoint) -> CGAffineTransform {
guard let solutionMatrix = self.solutionMatrix else {
return CGAffineTransform.identity
}
let B = HXMatrix(rows: 4, columns: 1, values: [
Double(q₁ʹ.x),
Double(q₁ʹ.y),
Double(q₂ʹ.x),
Double(q₂ʹ.y)
])
let C = solutionMatrix ⋅ B
let U = CGFloat(C[0,0])
let V = CGFloat(C[1,0])
let tx = CGFloat(C[2,0])
let ty = CGFloat(C[3,0])
var t :CGAffineTransform = CGAffineTransform.identity
t.a = U; t.b = V
t.c = -V; t.d = U
t.tx = tx; t.ty = ty
return t
}
}
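As noted above, the class leans on helpers that are not shown: the ┼ prefix operator (center of a CGRect), the CGRect scaling extensions, and an HXMatrix type with subscripting, inverse(), and a ⋅ product. Here are plausible sketches of the geometric helpers, reconstructed from how PincherView uses them (the matrix type is omitted; any small 4x4 linear-algebra helper would do):
// Assumed helper: ┼rect reads as "the center of rect"
prefix operator ┼
prefix func ┼ (rect: CGRect) -> CGPoint {
    return CGPoint(x: rect.midX, y: rect.midY)
}

extension CGRect {
    // Assumed helper: aspect-fit scale factor that fits self inside rect
    func scaleIn(rect: CGRect) -> CGFloat {
        return min(rect.width / self.width, rect.height / self.height)
    }
    // Assumed helper: the aspect-fit rect of self, centered inside rect
    func scaleAndCenterIn(rect: CGRect) -> CGRect {
        let s = self.scaleIn(rect: rect)
        let size = CGSize(width: self.width * s, height: self.height * s)
        return CGRect(x: rect.midX - size.width / 2.0,
                      y: rect.midY - size.height / 2.0,
                      width: size.width,
                      height: size.height)
    }
}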
I'm using a UISplitViewController for my iPad application. I've also implemented reordering for the UITableView on the left of the split view. What I want to achieve is that the user can reorder the table cells without having to touch the three white bars; the user should be able to touch anywhere on the cell and reorder it. Is this possible?
The class below will hide the reorder control and make the whole UITableViewCell touchable for reordering. Additionally, it takes care of resizing the content view to its original size, which is important for Auto Layout.
@interface UITableViewCellReorder : UITableViewCell
{
    __weak UIView *_reorderControl;
}
@end

@implementation UITableViewCellReorder

#define REORDER_CONTROL_CLASSNAME @"UITableViewCellReorderControl"
/*
Override layoutSubviews to resize the content view's frame to its original size which is the size
of the cell. This is important for autolayout!
Do not call this method on super, as we don't need any further layout within the cell itself.
*/
- (void)layoutSubviews
{
self.contentView.frame = CGRectMake(0, 0, self.frame.size.width, self.frame.size.height);
[self setAndHideReorderControl];
}
/*
Find the reorder control, store a reference and hide it.
*/
- (void) setAndHideReorderControl
{
if (_reorderControl)
return;
// > iOS 7
for(UIView* view in [[self.subviews objectAtIndex:0] subviews])
if([[[view class] description] isEqualToString:REORDER_CONTROL_CLASSNAME])
_reorderControl = view;
// < iOS 7
if (!_reorderControl)
for(UIView* view in self.subviews)
if([[[view class] description] isEqualToString:REORDER_CONTROL_CLASSNAME])
_reorderControl = view;
if (_reorderControl)
{
[_reorderControl setHidden:YES];
}
}
#pragma mark - Touch magic
/*
Just perform the specific selectors on the hidden reorder control to fire touch events on the control.
*/
- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
if (_reorderControl && [_reorderControl respondsToSelector:@selector(beginTrackingWithTouch:withEvent:)])
{
UITouch * touch = [touches anyObject];
[_reorderControl performSelector:@selector(beginTrackingWithTouch:withEvent:) withObject:touch withObject:event];
}
[super touchesBegan:touches withEvent:event];
[self.nextResponder touchesBegan:touches withEvent:event];
}
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
if (_reorderControl && [_reorderControl respondsToSelector:@selector(continueTrackingWithTouch:withEvent:)])
{
UITouch * touch = [touches anyObject];
[_reorderControl performSelector:@selector(continueTrackingWithTouch:withEvent:) withObject:touch withObject:event];
}
[super touchesMoved:touches withEvent:event];
[self.nextResponder touchesMoved:touches withEvent:event];
}
- (void) touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
if (_reorderControl && [_reorderControl respondsToSelector:@selector(cancelTrackingWithEvent:)])
{
[_reorderControl performSelector:@selector(cancelTrackingWithEvent:) withObject:event];
}
[super touchesCancelled:touches withEvent:event];
[self.nextResponder touchesCancelled:touches withEvent:event];
}
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
if (_reorderControl && [_reorderControl respondsToSelector:@selector(endTrackingWithTouch:withEvent:)])
{
UITouch * touch = [touches anyObject];
[_reorderControl performSelector:@selector(endTrackingWithTouch:withEvent:) withObject:touch withObject:event];
}
[super touchesEnded:touches withEvent:event];
[self.nextResponder touchesEnded:touches withEvent:event];
}
@end
Yes! Check out: http://b2cloud.com.au/how-to-guides/reordering-a-uitableviewcell-from-any-touch-point
I know it's an old question, but here's a Swift helper that should help future adventure seekers.
/*
THIS IS A HACK!
It uses undocumented controls and class names, and will need to be revised with every iOS release.
Usage:
Call the 'transform' method after the cell is displayed - i.e. tableView(tableView: UITableView!, willDisplayCell cell: UITableViewCell! ...
Adopted from:
http://b2cloud.com.au/how-to-guides/reordering-a-uitableviewcell-from-any-touch-point
In case of failure:
- print names of all subviews
- find the new 'UITableViewCellReorderControl' and reflect changes to the code below
*/
struct MovableTableViewCell {
static func transform(cell:UITableViewCell) {
var reorder = cell.getSubviewByName("UITableViewCellReorderControl")
if (reorder == nil) {
println("UITableViewCellReorderControl was not found. Reorder control will remain unchanged.")
return
}
// resized grip
var resized = UIView(frame: CGRectMake(0, 0, CGRectGetMaxX(reorder!.frame), CGRectGetMaxY(reorder!.frame)))
resized.addSubview(reorder!)
cell.addSubview(resized)
// remove image
for img:AnyObject in reorder!.subviews {
if (img is UIImageView) {
(img as UIImageView).image = nil
}
}
// determine diff and ratio
var diff = CGSizeMake(resized.frame.width - reorder!.frame.width, resized.frame.height - reorder!.frame.height)
var ratio = CGSizeMake(resized.frame.width / reorder!.frame.width, resized.frame.height / reorder!.frame.height)
// transform!
var transform = CGAffineTransformIdentity
transform = CGAffineTransformScale(transform, ratio.width, ratio.height)
transform = CGAffineTransformTranslate(transform, -diff.width / 2.0, -diff.height / 2.0)
resized.transform = transform
}
}
extension UIView {
func getSubviewByName(name:String) -> UIView? {
if (object_getClassName(self) == name.bridgeToObjectiveC().UTF8String) {
return self
}
for v in (self.subviews as Array<UIView>) {
var child = v.getSubviewByName(name)
if (child != nil) {
return child
}
}
return nil
}
}
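As the header comment says, call the transform method from the willDisplayCell delegate callback. A hook-up sketch in the same Swift vintage (hypothetical, in a UITableViewController subclass):
// Hypothetical delegate hook-up for the helper above
override func tableView(tableView: UITableView!, willDisplayCell cell: UITableViewCell!,
    forRowAtIndexPath indexPath: NSIndexPath!) {
    MovableTableViewCell.transform(cell)
}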
I took the answer from ANami and merged it with another post to make it work with Xcode 7.3 and iOS 9 using Swift 2.2.
I would suggest using this as a base class:
//
// MoveableTableViewCell.swift
// ReorderTableTest
//
// Created by Geoff on 05/06/16.
// Copyright © 2016 Think#. MIT License, free to use.
//
import UIKit
class MoveableTableViewCell: UITableViewCell {
override func awakeFromNib() {
super.awakeFromNib()
// Initialization code
}
override func setSelected(selected: Bool, animated: Bool) {
super.setSelected(selected, animated: animated)
// Configure the view for the selected state
}
override func setEditing(editing: Bool, animated: Bool) {
super.setEditing(editing, animated: true)
//self.showsReorderControl = false
if (editing) {
for view in subviews as [UIView] {
if view.dynamicType.description().rangeOfString("Reorder") != nil {
// resized grip
let resized = UIView(frame: CGRectMake(0, 0, CGRectGetMaxX(view.frame), CGRectGetMaxY(view.frame)))
resized.addSubview(view)
self.addSubview(resized)
// remove image
for img:AnyObject in view.subviews {
if (img is UIImageView) {
(img as! UIImageView).image = nil
}
}
// determine diff and ratio
let diff = CGSizeMake(resized.frame.width - view.frame.width, resized.frame.height - view.frame.height)
let ratio = CGSizeMake(resized.frame.width / view.frame.width, resized.frame.height / view.frame.height)
// transform!
var transform = CGAffineTransformIdentity
transform = CGAffineTransformScale(transform, ratio.width, ratio.height)
transform = CGAffineTransformTranslate(transform, -diff.width / 2.0, -diff.height / 2.0)
resized.transform = transform
}
}
}
}
}
Here is some code I wrote that will let you reorder the cells based on the user touching any part of the cell. I subclassed UITableView and used the touchesBegan and touchesEnded methods to figure out which cell was tapped and move it back and forth. It's pretty basic code; let me know if you have any questions.
class ReorderTableView: UITableView {
var customView:UIImageView?
var oldIndexPath:NSIndexPath?
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
guard let touch1 = touches.first else{
return
}
oldIndexPath = self.indexPathForRowAtPoint(touch1.locationInView(self))
guard (oldIndexPath != nil) else{
return
}
let oldCell = self.cellForRowAtIndexPath(self.oldIndexPath!)
customView = UIImageView(frame: CGRectMake(0, touch1.locationInView(self).y - 20, self.frame.width, 40))
customView?.image = screenShotView(oldCell!)
customView?.layer.shadowColor = UIColor.blackColor().CGColor
customView?.layer.shadowOpacity = 0.5
customView?.layer.shadowOffset = CGSizeMake(1, 1)
self.addSubview(customView!)
oldCell?.alpha = 0
}
override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
guard let touch1 = touches.first else{
return
}
let newIndexPath = self.indexPathForRowAtPoint(touch1.locationInView(self))
guard newIndexPath != nil else{
return
}
guard oldIndexPath != nil else{
return
}
if newIndexPath != oldIndexPath{
self.moveRowAtIndexPath(oldIndexPath!, toIndexPath: newIndexPath!)
oldIndexPath = newIndexPath
self.cellForRowAtIndexPath(self.oldIndexPath!)!.alpha = 0
}
self.customView!.frame.origin = CGPointMake(0, touch1.locationInView(self).y - 20)
}
override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) {
self.customView?.removeFromSuperview()
self.customView = nil
guard (oldIndexPath != nil) else{
return
}
self.cellForRowAtIndexPath(self.oldIndexPath!)!.alpha = 1
}
func screenShotView(view: UIView) -> UIImage? {
let rect = view.bounds
UIGraphicsBeginImageContextWithOptions(rect.size,true,0.0)
let context = UIGraphicsGetCurrentContext()
CGContextTranslateCTM(context, 0, -view.frame.origin.y);
self.layer.renderInContext(context!)
let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return capturedImage
}
}
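One caveat: moveRowAtIndexPath only moves the cell on screen, so you also need to keep your data source in sync or the next reload will undo the reorder. A sketch in the same Swift 2 style (items is a hypothetical model array; call this alongside moveRowAtIndexPath in touchesMoved):
// Hypothetical data source sync for the reorder above
var items: [String] = []

func syncModel(from oldIndexPath: NSIndexPath, to newIndexPath: NSIndexPath) {
    let moved = items.removeAtIndex(oldIndexPath.row)
    items.insert(moved, atIndex: newIndexPath.row)
}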
Any idea if there is a way to get the length of a swipe gesture, or of the touches, so that I can calculate the distance?
It's impossible to get a distance from a swipe gesture, because UISwipeGestureRecognizer calls your action method exactly once, when the gesture has ended.
Maybe you want to use a UIPanGestureRecognizer instead.
If a pan gesture is possible for you, save the starting point of the pan, and when the pan has ended, calculate the distance.
- (void)panGesture:(UIPanGestureRecognizer *)sender {
if (sender.state == UIGestureRecognizerStateBegan) {
startLocation = [sender locationInView:self.view];
}
else if (sender.state == UIGestureRecognizerStateEnded) {
CGPoint stopLocation = [sender locationInView:self.view];
CGFloat dx = stopLocation.x - startLocation.x;
CGFloat dy = stopLocation.y - startLocation.y;
CGFloat distance = sqrt(dx*dx + dy*dy );
NSLog(#"Distance: %f", distance);
}
}
In Swift
override func viewDidLoad() {
super.viewDidLoad()
// add your pan recognizer to your desired view
let panRecognizer = UIPanGestureRecognizer(target: self, action: #selector(panedView))
self.view.addGestureRecognizer(panRecognizer)
}
// startLocation must be an instance property; a local variable would be
// reset on every call, so the distance would be measured from (0, 0)
var startLocation = CGPoint.zero

//UIGestureRecognizerState was renamed to UIGestureRecognizer.State in Swift 4.2
@objc func panedView(sender: UIPanGestureRecognizer) {
    if (sender.state == UIGestureRecognizer.State.began) {
        startLocation = sender.location(in: self.view)
    }
else if (sender.state == UIGestureRecognizer.State.ended) {
let stopLocation = sender.location(in: self.view)
let dx = stopLocation.x - startLocation.x;
let dy = stopLocation.y - startLocation.y;
let distance = sqrt(dx*dx + dy*dy );
NSLog("Distance: %f", distance);
if distance > 400 {
//do what you want to do
}
}
}
Hope that helps all you Swift pioneers
func swipeAction(gesture: UIPanGestureRecognizer) {
    // straight-line length of the pan's translation so far
    let distance = sqrt(pow(gesture.translation(in: view).x, 2)
        + pow(gesture.translation(in: view).y, 2))
    // use `distance` as needed
}
For those of us using Xamarin:
void panGesture(UIPanGestureRecognizer gestureRecognizer) {
    if (gestureRecognizer.State == UIGestureRecognizerState.Began) {
        startLocation = gestureRecognizer.TranslationInView (view);
    } else if (gestureRecognizer.State == UIGestureRecognizerState.Ended) {
        PointF stopLocation = gestureRecognizer.TranslationInView (view);
        float dX = stopLocation.X - startLocation.X;
        float dY = stopLocation.Y - startLocation.Y;
        float distance = (float)Math.Sqrt(dX * dX + dY * dY);
        System.Console.WriteLine("Distance: {0}", distance);
    }
}
You can only do it the standard way: remember the touch point in touchesBegan and compare it with the point from touchesEnded.
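A minimal sketch of that approach (Swift, in a view controller; touchStart is an assumed stored property):
// Hypothetical manual distance measurement without gesture recognizers
var touchStart: CGPoint = .zero

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        touchStart = touch.location(in: view)
    }
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let end = touch.location(in: view)
    // straight-line distance between start and end points
    let distance = hypot(end.x - touchStart.x, end.y - touchStart.y)
    print("Swipe distance: \(distance)")
}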
I have an implementation, similar to the Swift answer above, that discriminates between a drag and a swipe by calculating the distance relative to the container and the speed of the swipe.
@objc private func handleSwipe(sender: UIPanGestureRecognizer) {
if (sender.state == .began) {
self.swipeStart.location = sender.location(in: self)
self.swipeStart.time = Date()
}
else if (sender.state == .ended) {
let swipeStopLocation : CGPoint = sender.location(in: self)
let dx : CGFloat = swipeStopLocation.x - swipeStart.location.x
let dy : CGFloat = swipeStopLocation.y - swipeStart.location.y
let distance : CGFloat = sqrt(dx*dx + dy*dy );
let speed : CGFloat = distance / CGFloat(Date().timeIntervalSince(self.swipeStart.time))
let portraitWidth = min(self.frame.size.width, self.frame.size.height)
print("Distance: \(distance), speed: \(speed), dy: \(dy), dx: \(dx), portraitWidth: \(portraitWidth), c1: \(distance > portraitWidth * 0.4), c2: \(abs(dy) < abs(dx) * 0.25), c3: \(speed > portraitWidth * 3.0) ")
if distance > portraitWidth * 0.4 && abs(dy) < abs(dx) * 0.25 && speed > portraitWidth * 3.0 {
if dx > 0 {
delegate?.previousAssetPressed(self)
}else{
delegate?.nextAssetPressed(self)
}
}
}
}
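For reference, the handler above assumes a stored property along these lines (a sketch; the tuple shape is inferred from the usage):
// Assumed backing storage for the swipe start point and timestamp
private var swipeStart: (location: CGPoint, time: Date) = (.zero, Date())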