I have a UIView that I zoom in and out of. I associate it with pinchRecognizerMeasure using:
pinchRecognizerMeasure = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(MeasureAndDraw:)];
[pinchRecognizerMeasure setDelegate:self];
[DrawLine addGestureRecognizer:pinchRecognizerMeasure];
[pinchRecognizerMeasure release];
The code of MeasureAndDraw:
// get position of touches, for example:
NSUInteger num_touches = [pinchRecognizerMeasure numberOfTouches];
// save locations to some instance variables, like `CGPoint location1, location2;`
if (num_touches >= 1) {
    DrawLine.startPoint = [pinchRecognizerMeasure locationOfTouch:0 inView:DrawLine];
}
if (num_touches >= 2) {
    DrawLine.endPoint = [pinchRecognizerMeasure locationOfTouch:1 inView:DrawLine];
}
startPoint and endPoint are CGPoints, and I want to get the equivalent pixel coordinates for them.
What should I do? Is it correct to do something like
startPoint.x * DrawLine.contentScaleFactor to get the pixel x coordinate, or what should I do?
I read http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html but got confused.
Any suggestions?
Have a look at the contentScaleFactor property of UIView to translate between points and pixels on the device if you really need to.
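For example, the conversion is just a multiplication by the scale factor (a minimal sketch, assuming startPoint/endPoint are the properties set in MeasureAndDraw above; the factor is 1.0 on non-Retina screens and 2.0 on Retina screens):
// contentScaleFactor maps logical points to device pixels for this view.
CGFloat scale = DrawLine.contentScaleFactor;
CGPoint startPixel = CGPointMake(DrawLine.startPoint.x * scale, DrawLine.startPoint.y * scale);
CGPoint endPixel = CGPointMake(DrawLine.endPoint.x * scale, DrawLine.endPoint.y * scale);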
I want to make a shape-matching app like this.
Please check the second image of this link.
Please check this link also.
I can drag and drop the image, but I am unable to match the exact location and unable to fill that image with another one.
Any ideas or suggestions would be highly welcome.
// UIImageView *imageView1 (the blank image with which the image has to be matched)
// UIImageView *imageView2 (the coloured image that has to be matched)
Take an NSMutableArray, positionArray. Now, whenever you add any new blank image view (imageView1) to the screen, add its information to the mutable array, like:
UIImageView *imageview = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 100, 50)];
[imageview setTag:TagValue];
[imageview setImage:[UIImage imageNamed:@"blankMonkeyShapedImage.png"]];
// Incrementing the TagValue by one each time any blank imageview is added
TagValue++;
NSMutableDictionary *locationDic = [[NSMutableDictionary alloc] init];
[locationDic setValue:[NSNumber numberWithInt:imageview.tag] forKey:@"TagNumber"];
[locationDic setValue:[NSString stringWithFormat:@"Monkey~example"] forKey:@"ImageName"];
[locationDic setValue:[NSNumber numberWithFloat:imageview.frame.origin.x] forKey:@"PosX"];
[locationDic setValue:[NSNumber numberWithFloat:imageview.frame.origin.y] forKey:@"PosY"];
[locationDic setValue:[NSNumber numberWithFloat:imageview.frame.size.width] forKey:@"SizeWidth"];
[locationDic setValue:[NSNumber numberWithFloat:imageview.frame.size.height] forKey:@"SizeHeight"];
[self.view addSubview:imageview];
[positionArray addObject:locationDic];
Now, whenever you add any new blank image view, you should repeat the same for it. Coming back to the UIGestureRecognizer selector method:
- (void)panGestureRecognizer:(UIPanGestureRecognizer *)gesture
{
    // Use the gesture's location (not its translation) so we compare actual screen positions.
    CGPoint location = [gesture locationInView:self.view];
    for (int i = 0; i < [positionArray count]; i++)
    {
        NSMutableDictionary *lctionDic = [positionArray objectAtIndex:i];
        float posX = [[lctionDic valueForKey:@"PosX"] floatValue];
        float posY = [[lctionDic valueForKey:@"PosY"] floatValue];
        float sizeWidth = [[lctionDic valueForKey:@"SizeWidth"] floatValue];
        float sizeHeight = [[lctionDic valueForKey:@"SizeHeight"] floatValue];
        if (location.x >= posX && location.x <= posX + sizeWidth &&
            location.y >= posY && location.y <= posY + sizeHeight)
        {
            // The dragged object's position falls inside a previously placed blank imageview.
            if (gesture.view.tag == [[lctionDic valueForKey:@"TagNumber"] intValue])
            {
                NSLog(@"Matched Image's Name is : %@", [lctionDic valueForKey:@"ImageName"]);
                break;
            }
            else
                continue;
        }
    }
}
Take special care when assigning TagValues to the blank imageview and the to-be-dragged imageview (both should be the same for the same type of image).
That is a bit tricky.
You are the coder; you know which image matches which shape. Maybe you take two image views, one with the shape and the other with the image.
PS: I am just giving an idea.
So maybe you can keep track with tags. Add a gesture recognizer to drag the images, so you will know which shape is being dragged. Then, while moving, just compare the center of the shape with the center of your image's current position. Make sure that you compare against a range and not the exact center, as in the sketch below.
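A minimal sketch of that range check (hypothetical draggedView and targetView stand in for the dragged shape and its blank counterpart; tolerance is however much slack you want to allow):
// Match when the centers are within `tolerance` points, not exactly equal.
CGFloat tolerance = 20.0f; // tune to taste
CGFloat dx = draggedView.center.x - targetView.center.x;
CGFloat dy = draggedView.center.y - targetView.center.y;
if (hypotf(dx, dy) <= tolerance && draggedView.tag == targetView.tag) {
    draggedView.center = targetView.center; // snap the shape into place
    NSLog(@"Shape with tag %ld matched", (long)draggedView.tag);
}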
Hope this much info helps you :)
I made an augmented reality application in which an arrow points along the z-axis to the location (longitude & latitude) of a shop. Now I'd like to make an annotation in the "sky" which points directly at the shop's coordinate. I found some projects on GitHub, like iPhone-AR-Toolkit, but I don't understand them.
Is there a simple way to add such an annotation above the location?
Edit:
After searching a lot, I found a PDF document on the web which contains some code for what I want, but the problem is that I don't understand it and it doesn't do things right.
The Code:
// Artificial Horizon - compensate for rotation along x-axis
// need to know the field of view of the camera to find horizon position inside of camera view
// about 53 degrees vertical and 37.5 degrees horizontal
// for directional Horizon - artificial horizon pegged to a specific cardinal direction (North)
// in (void)viewDidAppear
// Load the image to show in the overlay
UIImage *overlayGraphic = [UIImage imageNamed:@"Punkt.png"];
overlayGraphicView = [[UIImageView alloc] initWithImage:overlayGraphic];
overlayGraphicView.frame = CGRectMake(30, 100, 50, 50);
[self addSubview:overlayGraphicView];
[[UIAccelerometer sharedAccelerometer] setDelegate:self];
locationManager = [[CLLocationManager alloc] init];
[locationManager setDelegate:self];
[locationManager startUpdatingHeading];
- (void)updateUI {
    CGPoint overlayCenter = [overlayGraphicView center];
    // 537.8 appears to be 240 / sin(26.5°): half the 480-point screen height divided by
    // the sine of half the ~53° vertical field of view, so a tilt of half the FOV moves
    // the point to the screen edge (the screen center is at y = 240).
    overlayCenter.y = 240.0 - 537.8 * sin(vertAngle);
    // 497.8 appears to be 160 / sin(18.75°): the same projection horizontally, using half
    // the 320-point screen width and half the ~37.5° horizontal field of view.
    // Note sin(0°) == sin(180°) == 0, so headings of due north AND due south both center
    // the overlay -- to show it only for north, also require cos(heading) > 0.
    overlayCenter.x = 160.0 - 497.8 * sin((magCompassHeadingInDeg) * (M_PI / 180.0));
    [overlayGraphicView setCenter:overlayCenter];
    overlayGraphicView.transform = CGAffineTransformMakeRotation(angle);
}
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    vertAngle = -atan2(acceleration.y, acceleration.z) - M_PI / 2.0;
    vertAngleInDeg = vertAngle * 180.0f / M_PI;
    angle = -atan2(acceleration.y, acceleration.x) - M_PI / 2.0;
    angleInDeg = angle * 180.0f / M_PI;
    [self updateUI];
}
- (void)locationManager:(CLLocationManager *)manager didUpdateHeading:(CLHeading *)newHeading {
    magCompassHeadingInDeg = [newHeading magneticHeading];
    [self updateUI];
}
An image view is added to the camera overlay, and this image should point north, but my problem is that it also points south. How can I fix that? And can anybody explain the values of overlayCenter.y and overlayCenter.x? I don't understand them.
You need to specify an altitude, i.e. the height above sea level along the z-axis.
The Mixare SDK will allow you to specify and load your POIs with latitude, longitude and altitude.
You can refer to it from here.
I have used this SDK and it works great and allows you to specify altitude.
Hope this helps:)
I have a simple oval shape (built from a CGMutablePath) around which I'd like the user to be able to drag an object. Just wondering how complicated it is to do this: do I need to know a ton of math and physics, or is there some simple built-in way that will allow me to do this? I.e., the user drags this object around the oval, and it orbits it.
This is an interesting problem. We want to drag an object, but constrain it to lie on a CGPath. You said you have “a simple oval shape”, but that's boring. Let's do it with a figure 8. It'll look like this when we're done:
So how do we do this? Given an arbitrary point, finding the nearest point on a Bezier spline is rather complicated. Let's do it by brute force. We'll just make an array of points closely spaced along the path. The object starts out on one of those points. As we try to drag the object, we'll look at the neighboring points. If either is nearer, we'll move the object to that neighbor point.
Even getting an array of closely-spaced points along a Bezier curve is not trivial, but there is a way to get Core Graphics to do it for us. We can use CGPathCreateCopyByDashingPath with a short dash pattern. This creates a new path with many short segments. We'll take the endpoints of each segment as our array of points.
That means we need to iterate over the elements of a CGPath. The only way to iterate over the elements of a CGPath is with the CGPathApply function, which takes a callback. It would be much nicer to iterate over path elements with a block, so let's add a category to UIBezierPath. We start by creating a new project using the “Single View Application” template, with ARC enabled. We add a category:
@interface UIBezierPath (forEachElement)
- (void)forEachElement:(void (^)(CGPathElement const *element))block;
@end
The implementation is very simple. We just pass the block as the info argument of the path applier function.
#import "UIBezierPath+forEachElement.h"
typedef void (^UIBezierPath_forEachElement_Block)(CGPathElement const *element);
#implementation UIBezierPath (forEachElement)
static void applyBlockToPathElement(void *info, CGPathElement const *element) {
__unsafe_unretained UIBezierPath_forEachElement_Block block = (__bridge UIBezierPath_forEachElement_Block)info;
block(element);
}
- (void)forEachElement:(void (^)(const CGPathElement *))block {
CGPathApply(self.CGPath, (__bridge void *)block, applyBlockToPathElement);
}
#end
For this toy project, we'll do everything else in the view controller. We'll need some instance variables:
@implementation ViewController {
We need an ivar to hold the path that the object follows.
UIBezierPath *path_;
It would be nice to see the path, so we'll use a CAShapeLayer to display it. (We need to add the QuartzCore framework to our target for this to work.)
CAShapeLayer *pathLayer_;
We'll need to store the array of points-along-the-path somewhere. Let's use an NSMutableData:
NSMutableData *pathPointsData_;
We'll want a pointer to the array of points, typed as a CGPoint pointer:
CGPoint const *pathPoints_;
And we need to know how many of those points there are:
NSInteger pathPointsCount_;
For the “object”, we'll have a draggable view on the screen. I'm calling it the “handle”:
UIView *handleView_;
We need to know which of the path points the handle is currently on:
NSInteger handlePathPointIndex_;
And while the pan gesture is active, we need to keep track of where the user has tried to drag the handle:
CGPoint desiredHandleCenter_;
}
Now we have to get to work initializing all those ivars! We can create our views and layers in viewDidLoad:
- (void)viewDidLoad {
    [super viewDidLoad];
    [self initPathLayer];
    [self initHandleView];
    [self initHandlePanGestureRecognizer];
}
We create the path-displaying layer like this:
- (void)initPathLayer {
    pathLayer_ = [CAShapeLayer layer];
    pathLayer_.lineWidth = 1;
    pathLayer_.fillColor = nil;
    pathLayer_.strokeColor = [UIColor blackColor].CGColor;
    pathLayer_.lineCap = kCALineCapButt;
    pathLayer_.lineJoin = kCALineJoinRound;
    [self.view.layer addSublayer:pathLayer_];
}
Note that we haven't set the path layer's path yet! It's too soon to know the path at this time, because my view hasn't been laid out at its final size yet.
We'll draw a red circle for the handle:
- (void)initHandleView {
    handlePathPointIndex_ = 0;
    CGRect rect = CGRectMake(0, 0, 30, 30);
    CAShapeLayer *circleLayer = [CAShapeLayer layer];
    circleLayer.fillColor = nil;
    circleLayer.strokeColor = [UIColor redColor].CGColor;
    circleLayer.lineWidth = 2;
    circleLayer.path = [UIBezierPath bezierPathWithOvalInRect:CGRectInset(rect, circleLayer.lineWidth, circleLayer.lineWidth)].CGPath;
    circleLayer.frame = rect;
    handleView_ = [[UIView alloc] initWithFrame:rect];
    [handleView_.layer addSublayer:circleLayer];
    [self.view addSubview:handleView_];
}
Again, it's too soon to know exactly where we'll need to put the handle on screen. We'll take care of that at view layout time.
We also need to attach a pan gesture recognizer to the handle:
- (void)initHandlePanGestureRecognizer {
    UIPanGestureRecognizer *recognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleWasPanned:)];
    [handleView_ addGestureRecognizer:recognizer];
}
At view layout time, we need to create the path based on the size of the view, compute the points along the path, make the path layer show the path, and make sure the handle is on the path:
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    [self createPath];
    [self createPathPoints];
    [self layoutPathLayer];
    [self layoutHandleView];
}
As promised, instead of your simple oval we'll draw a nice figure 8. Figuring out what I'm doing here is left as an exercise for the reader:
- (void)createPath {
    CGRect bounds = self.view.bounds;
    CGFloat const radius = bounds.size.height / 6;
    CGFloat const offset = 2 * radius * M_SQRT1_2;
    CGPoint const topCenter = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds) - offset);
    CGPoint const bottomCenter = { topCenter.x, CGRectGetMidY(bounds) + offset };
    path_ = [UIBezierPath bezierPath];
    [path_ addArcWithCenter:topCenter radius:radius startAngle:M_PI_4 endAngle:-M_PI - M_PI_4 clockwise:NO];
    [path_ addArcWithCenter:bottomCenter radius:radius startAngle:-M_PI_4 endAngle:M_PI + M_PI_4 clockwise:YES];
    [path_ closePath];
}
Next we're going to want to compute the array of points along that path. We'll need a helper routine to pick out the endpoint of each path element:
static CGPoint *lastPointOfPathElement(CGPathElement const *element) {
    NSInteger index;
    switch (element->type) {
        case kCGPathElementMoveToPoint:         index = 0; break;
        case kCGPathElementAddCurveToPoint:     index = 2; break;
        case kCGPathElementAddLineToPoint:      index = 0; break;
        case kCGPathElementAddQuadCurveToPoint: index = 1; break;
        case kCGPathElementCloseSubpath:        index = NSNotFound; break;
    }
    return index == NSNotFound ? NULL : &element->points[index];
}
To find the points, we need to ask Core Graphics to “dash” the path:
- (void)createPathPoints {
    CGPathRef cgDashedPath = CGPathCreateCopyByDashingPath(path_.CGPath, NULL, 0, (CGFloat[]){ 1.0f, 1.0f }, 2);
    UIBezierPath *dashedPath = [UIBezierPath bezierPathWithCGPath:cgDashedPath];
    CGPathRelease(cgDashedPath);
It turns out that when Core Graphics dashes the path, it can create segments that slightly overlap. We'll want to eliminate those by filtering out each point that's too close to its predecessor, so we'll define a minimum inter-point distance:
    static CGFloat const kMinimumDistance = 0.1f;
To do the filtering, we'll need to keep track of that predecessor:
    __block CGPoint priorPoint = { HUGE_VALF, HUGE_VALF };
We need to create the NSMutableData that will hold the CGPoints:
    pathPointsData_ = [[NSMutableData alloc] init];
At last we're ready to iterate over the elements of the dashed path:
    [dashedPath forEachElement:^(const CGPathElement *element) {
Each path element can be a “move-to”, a “line-to”, a “quadratic-curve-to”, a “curve-to” (which is a cubic curve), or a “close-path”. All of those except close-path define a segment endpoint, which we pick up with our helper function from earlier:
        CGPoint *p = lastPointOfPathElement(element);
        if (!p)
            return;
If the endpoint is too close to the prior point, we discard it:
if (hypotf(p->x - priorPoint.x, p->y - priorPoint.y) < kMinimumDistance)
return;
Otherwise, we append it to the data and save it as the predecessor of the next endpoint:
        [pathPointsData_ appendBytes:p length:sizeof *p];
        priorPoint = *p;
    }];
Now we can initialize our pathPoints_ and pathPointsCount_ ivars:
    pathPoints_ = (CGPoint const *)pathPointsData_.bytes;
    pathPointsCount_ = pathPointsData_.length / sizeof *pathPoints_;
But we have one more point we need to filter. The very first point along the path might be too close to the very last point. If so, we'll just discard the last point by decrementing the count:
if (pathPointsCount_ > 1 && hypotf(pathPoints_[0].x - priorPoint.x, pathPoints_[0].y - priorPoint.y) < kMinimumDistance) {
pathPointsCount_ -= 1;
}
}
Blammo. Point array created. Oh yeah, we also need to update the path layer. Brace yourself:
- (void)layoutPathLayer {
    pathLayer_.path = path_.CGPath;
    pathLayer_.frame = self.view.bounds;
}
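The layoutHandleView method referenced earlier isn't shown in this excerpt; a minimal version just snaps the handle's center to the current path point:
- (void)layoutHandleView {
    if (pathPointsCount_ == 0)
        return;
    handleView_.center = pathPoints_[handlePathPointIndex_];
}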
Now we can worry about dragging the handle around and making sure it stays on the path. The pan gesture recognizer sends this action:
- (void)handleWasPanned:(UIPanGestureRecognizer *)recognizer {
    switch (recognizer.state) {
If this is the start of the pan (drag), we just want to save the starting location of the handle as its desired location:
        case UIGestureRecognizerStateBegan: {
            desiredHandleCenter_ = handleView_.center;
            break;
        }
Otherwise, we need to update the desired location based on the drag, and then slide the handle along the path toward the new desired location:
        case UIGestureRecognizerStateChanged:
        case UIGestureRecognizerStateEnded:
        case UIGestureRecognizerStateCancelled: {
            CGPoint translation = [recognizer translationInView:self.view];
            desiredHandleCenter_.x += translation.x;
            desiredHandleCenter_.y += translation.y;
            [self moveHandleTowardPoint:desiredHandleCenter_];
            break;
        }
We put in a default clause so clang won't warn us about the other states that we don't care about:
        default:
            break;
    }
Finally we reset the translation of the gesture recognizer:
    [recognizer setTranslation:CGPointZero inView:self.view];
}
So how do we move the handle toward a point? We want to slide it along the path. First, we have to figure out which direction to slide it:
- (void)moveHandleTowardPoint:(CGPoint)point {
    CGFloat earlierDistance = [self distanceToPoint:point ifHandleMovesByOffset:-1];
    CGFloat currentDistance = [self distanceToPoint:point ifHandleMovesByOffset:0];
    CGFloat laterDistance = [self distanceToPoint:point ifHandleMovesByOffset:1];
It's possible that both directions would move the handle further from the desired point, so let's bail out in that case:
    if (currentDistance <= earlierDistance && currentDistance <= laterDistance)
        return;
OK, so at least one of the directions will move the handle closer. Let's figure out which one:
    NSInteger direction;
    CGFloat distance;
    if (earlierDistance < laterDistance) {
        direction = -1;
        distance = earlierDistance;
    } else {
        direction = 1;
        distance = laterDistance;
    }
But we've only checked the nearest neighbors of the handle's starting point. We want to slide as far as we can along the path in that direction, as long as the handle is getting closer to the desired point:
    NSInteger offset = direction;
    while (true) {
        NSInteger nextOffset = offset + direction;
        CGFloat nextDistance = [self distanceToPoint:point ifHandleMovesByOffset:nextOffset];
        if (nextDistance >= distance)
            break;
        distance = nextDistance;
        offset = nextOffset;
    }
Finally, update the handle's position to our newly-discovered point:
    handlePathPointIndex_ += offset;
    [self layoutHandleView];
}
That just leaves the small matter of computing the distance from the handle to a point, should the handle be moved along the path by some offset. Your old buddy hypotf computes the Euclidean distance so you don't have to:
- (CGFloat)distanceToPoint:(CGPoint)point ifHandleMovesByOffset:(NSInteger)offset {
    NSInteger index = [self handlePathPointIndexWithOffset:offset];
    CGPoint proposedHandlePoint = pathPoints_[index];
    return hypotf(point.x - proposedHandlePoint.x, point.y - proposedHandlePoint.y);
}
(You could speed things up by using squared distances to avoid the square roots that hypotf is computing.)
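For instance, a squared-distance variant could look like this (a sketch using the same ivars and helper as above; it works because squared distance is monotone in distance, so all the comparisons agree):
- (CGFloat)squaredDistanceToPoint:(CGPoint)point ifHandleMovesByOffset:(NSInteger)offset {
    CGPoint proposed = pathPoints_[[self handlePathPointIndexWithOffset:offset]];
    CGFloat dx = point.x - proposed.x;
    CGFloat dy = point.y - proposed.y;
    return dx * dx + dy * dy; // compare these instead of hypotf() results
}
To use it, you'd swap it in for distanceToPoint:ifHandleMovesByOffset: everywhere the distances are compared.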
One more tiny detail: the index into the points array needs to wrap around in both directions. That's what we've been relying on the mysterious handlePathPointIndexWithOffset: method to do:
- (NSInteger)handlePathPointIndexWithOffset:(NSInteger)offset {
    NSInteger index = handlePathPointIndex_ + offset;
    while (index < 0) {
        index += pathPointsCount_;
    }
    while (index >= pathPointsCount_) {
        index -= pathPointsCount_;
    }
    return index;
}
@end
Fin. I've put all of the code in a gist for easy downloading. Enjoy.
I have an iPhone version of an app that draws lines on the screen from CGPoints that are stored in Core Data. All of the drawings are line-based without any fill, so basically it draws a line from a given point to the next point, etc.
Now I am making an iPad version and I want to use the same points (the points were collected with a function I built for tracking the screen, and it was a lot of work, so I wish to reuse the same points I have).
Does anybody have an idea, algorithm or function for drawing the same lines from the same points but at 2× the size?
This is the draw method (took it from Apple's GLPaint example):
- (void)playback:(NSNumber *)index
{
    if (p == 0) {
        pointsCount = [[localpoints objectAtIndex:[index intValue]] count] - 1;
    }
    isPlayBackOn = YES;
    LetterPoint *point1 = (LetterPoint *)[[localpoints objectAtIndex:[index intValue]] objectAtIndex:p];
    CGPoint p1 = CGPointFromString(point1.float_point);
    LetterPoint *point2 = (LetterPoint *)[[localpoints objectAtIndex:[index intValue]] objectAtIndex:p + 1];
    CGPoint p2 = CGPointFromString(point2.float_point);
    [self renderLineFromPoint:p1 toPoint:p2];
    p++;
    if (p < pointsCount) {
        [self performSelector:@selector(playback:) withObject:index afterDelay:0.03];
    } else {
        p = 0;
        isPlayBackOn = NO;
    }
}
thanks
shani
Create an affine transform matrix with a scale factor of 2.0 (and possibly a translation if you want to move the origin of the drawing). Then apply that transform to every point with CGPointApplyAffineTransform() and use the resulting points for drawing.
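For example, applied to the two points in the playback: method above (a minimal sketch):
// Scale every point by a factor of 2 (and optionally translate the origin).
CGAffineTransform scaleUp = CGAffineTransformMakeScale(2.0, 2.0);
CGPoint p1 = CGPointApplyAffineTransform(CGPointFromString(point1.float_point), scaleUp);
CGPoint p2 = CGPointApplyAffineTransform(CGPointFromString(point2.float_point), scaleUp);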
Well, just double p1.x and p1.y...
Transform it with just this:
instead of:
CGPoint p1 = CGPointFromString(point1.float_point);
do this:
CGPoint p1Temp = CGPointFromString(point1.float_point);
CGPoint p1 = CGPointMake(p1Temp.x * 2, p1Temp.y * 2);
and the same for p2...
This is the code in continuation of this question: Allow horizontal scrolling only in the core-plot barchart?
- (BOOL)pointingDeviceDraggedAtPoint:(CGPoint)interactionPoint
{
    if ( !self.allowsUserInteraction || !self.graph.plotArea ) {
        return NO;
    }
    CGPoint pointInPlotArea = [self.graph.plotArea convertPoint:interactionPoint toLayer:self.graph.plotArea];
    if ( isDragging ) {
        pointInPlotArea.y = lastDragPoint.y; //-- Madhup changed it to allow scrolling only in the horizontal direction --//

        //-- Madhup changed it to allow scrolling only in the horizontal direction --//
        //CGPoint displacement = CGPointMake(pointInPlotArea.x-lastDragPoint.x, pointInPlotArea.y-lastDragPoint.y);
        CGPoint displacement = CGPointMake(pointInPlotArea.x - lastDragPoint.x, 0);
        //--******************************--//

        CGPoint pointToUse = pointInPlotArea;
        // Allow delegate to override
        if ( [self.delegate respondsToSelector:@selector(plotSpace:willDisplaceBy:)] ) {
            displacement = [self.delegate plotSpace:self willDisplaceBy:displacement];
            pointToUse = CGPointMake(lastDragPoint.x + displacement.x, lastDragPoint.y + displacement.y);
        }
        NSDecimal lastPoint[2], newPoint[2];
        [self plotPoint:lastPoint forPlotAreaViewPoint:lastDragPoint];
        [self plotPoint:newPoint forPlotAreaViewPoint:pointToUse];
        CPPlotRange *newRangeX = [[self.xRange copy] autorelease];
        CPPlotRange *newRangeY = [[self.yRange copy] autorelease];
        NSDecimal shiftX = CPDecimalSubtract(lastPoint[0], newPoint[0]);
        NSDecimal shiftY = CPDecimalSubtract(lastPoint[1], newPoint[1]);
        newRangeX.location = CPDecimalAdd(newRangeX.location, shiftX);
        newRangeY.location = CPDecimalAdd(newRangeY.location, shiftY);
        // Delegate override
        if ( [self.delegate respondsToSelector:@selector(plotSpace:willChangePlotRangeTo:forCoordinate:)] ) {
            newRangeX = [self.delegate plotSpace:self willChangePlotRangeTo:newRangeX forCoordinate:CPCoordinateX];
            newRangeY = [self.delegate plotSpace:self willChangePlotRangeTo:newRangeY forCoordinate:CPCoordinateY];
        }
        self.xRange = newRangeX;
        self.yRange = newRangeY;

        //-- Madhup changed it to keep the y-axis fixed --//
        NSLog(@"%@", self.graph.axisSet.axes);
        NSMutableArray *axisArr = [[NSMutableArray alloc] initWithArray:self.graph.axisSet.axes];
        CPXYAxis *yAxis = [axisArr objectAtIndex:1];
        CGPoint point = yAxis.position;
        point.y -= lastDragPoint.x;
        yAxis.position = point;
        [axisArr replaceObjectAtIndex:1 withObject:yAxis];
        self.graph.axisSet.axes = axisArr;
        [axisArr release];
        NSLog(@"%@", self.graph.axisSet.axes);
        //--******************************--//

        lastDragPoint = pointInPlotArea;
        return YES;
    }
    return NO;
}
Now, from the code, you can see that I am able to restrict scrolling to the horizontal direction only, but I am still not able to keep the y-axis fixed. I have written some code for that in this method too, but it does not seem to be working.
If you want to keep the axes fixed while scrolling, add these lines (axisConstraints pins each axis at a fixed offset from the plot-area edge instead of tying it to a data coordinate):
CPTXYAxisSet *axisSet = (CPTXYAxisSet *)_graph.axisSet;
axisSet.xAxis.axisConstraints = [CPTConstraints constraintWithLowerOffset:0.0];
axisSet.yAxis.axisConstraints = [CPTConstraints constraintWithLowerOffset:0.0];
The original question now has two viable solutions that work with the latest code (as of this date) in the repository.
Allow horizontal scrolling only in the core-plot barchart?
To keep the graph only in quadrant one:
Allow horizontal scrolling only in the core-plot barchart?
To only allow scrolling in the horizontal:
Allow horizontal scrolling only in the core-plot barchart?
I've used both solutions with a scatter plot and they work great.