I have a custom slider-type object that I'd like to make more usable. Currently I use UIPanGestureRecognizer and translationInView: to drive it. It works pretty well, but I'd like to factor velocity in so it feels more responsive. I've tried a few things but can't quite figure out how to properly incorporate velocity into the changedLevel calculation.
- (void)panDetected:(UIPanGestureRecognizer *)gesture {
    CGPoint swipeLocation = [gesture locationInView:self.tableView];
    NSIndexPath *indexPath = [self.tableView indexPathForRowAtPoint:swipeLocation];
    LevelCounterTableCell *swipedCell = (LevelCounterTableCell *)[self.tableView cellForRowAtIndexPath:indexPath];

    if ([gesture state] == UIGestureRecognizerStateBegan) {
        NSString *originalLevelString = swipedCell.levelNumber.text;
        originalLevel = [originalLevelString intValue]; // int originalLevel
    }

    if ([gesture state] == UIGestureRecognizerStateChanged) {
        CGFloat xTranslation = [gesture translationInView:[[gesture view] superview]].x;
        CGFloat xVelocity = [gesture velocityInView:[[gesture view] superview]].x;

        // Pan threshold is currently set to 8.0.
        // 8.0 is a decent level for slow panning;
        // for fast panning 2.0 is more reasonable.
        changedLevel = ceilf((xTranslation / panThreshold) + originalLevel); // int changedLevel

        // Raw velocity seems to go from around 3 (slow)
        // to over 200 (fast).
        NSLog(@"raw velocity = %f", xVelocity);

        if (changedLevel >= 15 && changedLevel <= 100) {
            swipedCell.levelNumber.text = [NSString stringWithFormat:@"%i", changedLevel];
            swipedCell.meter.frame = [self updateMeter:changedLevel];
        }
    }

    if ([gesture state] == UIGestureRecognizerStateEnded || [gesture state] == UIGestureRecognizerStateCancelled) {
        if (changedLevel >= 15 && changedLevel <= 100) {
            // ... Save the values ...
        }
    }
}
Any help would be greatly appreciated. Thank you.
In my experience, the velocityInView: of a pan gesture recognizer isn't important until the user lifts their finger(s) and the recognizer finishes. At that point, you can use the velocity to calculate an animation duration to move your views.
Just stick with translationInView: until the state is UIGestureRecognizerStateEnded and then use velocityInView: to animate any onscreen view changes.
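As a minimal sketch of that idea, you could derive the settle-animation duration from the final velocity; the distance value and clamping bounds here are assumptions for illustration, not from the original code:

```objc
- (void)panDetected:(UIPanGestureRecognizer *)gesture {
    if (gesture.state == UIGestureRecognizerStateEnded) {
        CGFloat xVelocity = [gesture velocityInView:gesture.view.superview].x;
        CGFloat distance  = 40.0; // hypothetical remaining distance the view should travel

        // Faster flicks get shorter durations; clamp so it never feels sluggish or instant.
        NSTimeInterval duration = distance / MAX(fabs(xVelocity), 1.0);
        duration = MIN(MAX(duration, 0.1), 0.4);

        [UIView animateWithDuration:duration animations:^{
            // move/settle your views here
        }];
    }
}
```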
Related
I'm using a UIGestureRecognizer to transform a view and it's working perfectly, but now I want to use it to transform my view's width and height independently. The only way that comes to mind is getting the two finger positions and using an if clause to recognize whether the user is trying to increase the width or the height, but for that I need each finger's position within the pinch gesture. I can't find any method to do this, so I was wondering if it's possible, or if there is another alternative for achieving this.
- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    // To transform the height instead of the width, swap the second and third arguments.
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, 1);
    NSLog(@"%f", recognizer.scale);
    recognizer.scale = 1;
}
Got the answer, in case somebody needs to do this: there is a method called locationOfTouch:inView:. With it you can get each touch's x and y position. But call it when the gesture recognizer's state is Began, because it crashes if you do it in Changed (the touch index can exceed the current number of touches). Save these values in some variables and you are good to go. Hope it helps someone who is interested.
- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        NSLog(@"pos 0: %f, %f", [recognizer locationOfTouch:0 inView:self.view].x, [recognizer locationOfTouch:0 inView:self.view].y);
        NSLog(@"pos 1: %f, %f", [recognizer locationOfTouch:1 inView:self.view].x, [recognizer locationOfTouch:1 inView:self.view].y);
    }
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, 1);
        recognizer.scale = 1;
    }
    if (recognizer.state == UIGestureRecognizerStateEnded) {
    }
}
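Building on that, here is one sketch of how the saved touch positions could decide between width and height scaling; the `_isHorizontalPinch` instance variable and the axis heuristic are assumptions, not part of the original answer:

```objc
- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan && recognizer.numberOfTouches >= 2) {
        // Read both touch positions while we are guaranteed to have two touches.
        CGPoint p0 = [recognizer locationOfTouch:0 inView:self.view];
        CGPoint p1 = [recognizer locationOfTouch:1 inView:self.view];
        // If the fingers are farther apart horizontally, treat it as a width pinch.
        _isHorizontalPinch = fabs(p1.x - p0.x) > fabs(p1.y - p0.y); // hypothetical BOOL ivar
    }
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGFloat sx = _isHorizontalPinch ? recognizer.scale : 1;
        CGFloat sy = _isHorizontalPinch ? 1 : recognizer.scale;
        recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, sx, sy);
        recognizer.scale = 1;
    }
}
```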
I have a UIView called character. Inside that view the user can add several UIImageViews as subviews.
I created gesture methods for pinch, move and rotate. The gesture recognizers were added to the character view.
For moving I'm using the gesture's translationInView: and it works fine!
My problem is with scaling.
Scaling individual UIImageViews is fine, no issues, but how do I scale all of them?
I tried scaling the character view itself; that scales all the subviews proportionally, but I don't want to transform the character view, because I use it as a canvas on which to place the UIImageViews that compose my character. If I apply a CGAffineTransformScale to it, every new image view I add gets the same scale.
I tried this but it doesn't work:
if ([gesture state] == UIGestureRecognizerStateBegan || [gesture state] == UIGestureRecognizerStateChanged) {
CGFloat customScale = gesture.scale;
for (UIImageView *singleView in [self.devoCharacter subviews]) {
singleView.frame = CGRectMake(singleView.frame.origin.x / customScale, singleView.frame.origin.y /customScale, singleView.frame.size.width /customScale , singleView.frame.size.height /customScale);
[gesture setScale:1];
}
}
I wanted to reduce the size of each view and their relative distances this way, but the whole thing expands from the coordinate origin.
Then I tried this:
if ([gesture state] == UIGestureRecognizerStateBegan || [gesture state] == UIGestureRecognizerStateChanged) {
    CGFloat customScale = gesture.scale;
    for (UIImageView *singleView in [self.devoCharacter subviews]) {
        singleView.transform = CGAffineTransformScale(singleView.transform, customScale, customScale);
        [gesture setScale:1];
    }
}
But this scales each view independently and does not maintain their relative distances.
Is there any way to scale each view proportionally while also maintaining their relative distances?
OK, I got how to do it: scale every single view, and then scale their centers proportionally as well. This solved my problem:
CGFloat customScale = gesture.scale;
for (Part *singleView in [self.canvas subviews]) {
    singleView.transform = CGAffineTransformScale(singleView.transform, customScale, customScale);
    singleView.center = CGPointMake(singleView.center.x * customScale, singleView.center.y * customScale);
}
[gesture setScale:1]; // reset once, after all subviews have been scaled
I've got a UIView that receives user panning from a gesture recognizer. I realized that sometimes the user can "throw" my view almost off-screen, which looks very bad. I want to prevent this from happening. I know I should do some check in my selector, but I don't know how to do it when the view is translated.
Here is my code:
- (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer
{
    UIView *piece = [gestureRecognizer view];
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
        [piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y + translation.y)];
        [gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
    }
}
Thanks in advance
Regards
Leo
You can add the following condition to your code:
if ([gestureRecognizer state] == UIGestureRecognizerStateEnded)
Inside it, check whether any corner of the view is outside the screen. If it is, you can bounce it back inside the screen.
The following is a possible example.
if (zl.layer.frame.origin.x > 0) {
    self.moveFactorX = self.moveFactorX - zl.layer.frame.origin.x;
    [zl.layer setValue:[NSNumber numberWithFloat:moveFactorX + totalMoveX] forKeyPath:@"transform.translation.x"];
}
else if (zl.layer.frame.origin.x < pageSize.width - zl.layer.frame.size.width) {
    self.moveFactorX = self.moveFactorX - (zl.layer.frame.origin.x - pageSize.width + zl.layer.frame.size.width);
    [zl.layer setValue:[NSNumber numberWithFloat:moveFactorX + totalMoveX] forKeyPath:@"transform.translation.x"];
}
This checks whether the left side of the zl layer is past the screen edge; when it is, I push it back to the left-most corner.
The else branch checks whether it has gone past the right side of the screen, and when that condition is met it pushes it back to the right border.
You have to implement the same thing for top and bottom as well.
If you find the code confusing, post here and I will modify it accordingly.
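A simpler alternative sketch of the same idea, clamping the view's center back on screen in the ended state; the `margin` value and animation duration are assumptions for illustration:

```objc
- (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer {
    UIView *piece = [gestureRecognizer view];
    // ... existing translation handling for Began/Changed ...

    if ([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
        CGRect bounds = piece.superview.bounds;
        CGFloat margin = 20.0; // hypothetical: keep at least 20 pt of the view visible

        CGPoint center = piece.center;
        center.x = MAX(margin, MIN(center.x, CGRectGetMaxX(bounds) - margin));
        center.y = MAX(margin, MIN(center.y, CGRectGetMaxY(bounds) - margin));

        [UIView animateWithDuration:0.3 animations:^{
            piece.center = center; // bounce the view back inside the screen
        }];
    }
}
```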
Having a little trouble understanding how to implement a faster dragging speed for the code I'm using below (written by @PaulSoltz), which lets you drag an object across the screen. I realize you have to use the velocityInView: method of UIPanGestureRecognizer, and I understand it returns the x and y velocity components. If velocity = distance over time, then for instance velocityX = (x2 - x1) / time, but I'm unsure how to use this formula to get what I need. Basically I just want to be able to adjust the speed of the movement to make it a little faster. Maybe I'm overthinking things, but if anyone could help me understand this it would be appreciated. Thanks.
- (void)handlePanGesture:(UIPanGestureRecognizer *)gestureRecognizer {
    UIView *myView = [gestureRecognizer view];
    CGPoint translate = [gestureRecognizer translationInView:[myView superview]];

    // Note: the original tested StateChanged twice; presumably one was meant to be StateBegan.
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [myView setCenter:CGPointMake(myView.center.x + translate.x, myView.center.y + translate.y)];
        [gestureRecognizer setTranslation:CGPointZero inView:[myView superview]];
    }
}
Just multiply the components of the translation vector by some constant.
[myView setCenter:CGPointMake(myView.center.x + translate.x * 2, myView.center.y + translate.y * 2)];
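If you want the boost to depend on how fast the user is actually dragging rather than a fixed constant, one possible sketch inside the Changed branch (the gain formula and its 1000.0 divisor are assumptions, not a standard value):

```objc
CGPoint velocity = [gestureRecognizer velocityInView:[myView superview]];
CGFloat speed = hypotf(velocity.x, velocity.y);  // pointer speed in points per second
CGFloat gain  = 1.0 + MIN(speed / 1000.0, 2.0);  // hypothetical: 1x when slow, up to 3x on fast flicks
[myView setCenter:CGPointMake(myView.center.x + translate.x * gain,
                              myView.center.y + translate.y * gain)];
```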
I'm trying to move an image around with the accelerometer by doing this:
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    image.center = CGPointMake(acceleration.x, acceleration.y);
}
When I test the app, the image that is supposed to move around just sits at position (0, 0).
I declared the accelerometer, adopted UIAccelerometerDelegate in the .h, and so on.
What am I doing wrong?
Thanks in advance! -DD
You do realize that the accelerometer returns, as the name suggests, measures of acceleration, not points on the display? What you need to do is alter the center (not replace it completely), which will let you move the image.
Something along these lines:
image.center = CGPointMake(image.center.x + acceleration.x,
                           image.center.y - acceleration.y);
It is also important to note that the acceleration values usually stay between -1 and 1 (unless the user shakes the device), since gravity is 1 g. Therefore you should probably multiply the acceleration.x and .y values by some constant so the image moves a bit faster than about one point at a time.
There are additional things you should think about, what if the image is at the edge of the screen? What if the user wants to use the app in some other position than flat on a surface (needs calibration of the accelerometer)?
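A minimal sketch combining a speed multiplier with edge clamping; `kSpeed`, `image`, and `self.view` are assumptions standing in for the asker's actual names, and calibration is left out:

```objc
static const CGFloat kSpeed = 10.0; // hypothetical: points of movement per 1 g of tilt

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    CGPoint center = CGPointMake(image.center.x + acceleration.x * kSpeed,
                                 image.center.y - acceleration.y * kSpeed);

    // Keep the image fully on screen.
    CGFloat halfW = image.bounds.size.width / 2.0;
    CGFloat halfH = image.bounds.size.height / 2.0;
    CGRect bounds = self.view.bounds;
    center.x = MAX(halfW, MIN(center.x, CGRectGetMaxX(bounds) - halfW));
    center.y = MAX(halfH, MIN(center.y, CGRectGetMaxY(bounds) - halfH));

    image.center = center;
}
```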
- (void)moveImage:(id)sender
{
    [operationView bringSubviewToFront:[(UIPanGestureRecognizer *)sender view]];
    [[[(UIPanGestureRecognizer *)sender view] layer] removeAllAnimations];

    CGPoint translatedPoint = [(UIPanGestureRecognizer *)sender translationInView:self.view];

    if ([(UIPanGestureRecognizer *)sender state] == UIGestureRecognizerStateBegan)
    {
        firstX = [[sender view] center].x;
        firstY = [[sender view] center].y;
        [imgDeleteView setHidden:NO];
    }
    else if ([(UIPanGestureRecognizer *)sender state] == UIGestureRecognizerStateEnded)
    {
        [imgDeleteView setHidden:YES];
    }

    translatedPoint = CGPointMake(firstX + translatedPoint.x, firstY + translatedPoint.y);
    [[(UIPanGestureRecognizer *)sender view] setCenter:translatedPoint];
}