So here is the task I need to get done on iOS:
Show the user's current location on an image like this:
Yes, like this image!
A few things you'll notice as soon as you look at it: it's tilted, its heading is off, and you can see the horizon!
Let's say:
I have location coordinates for the corners of this image.
I have the angle at which this image is tilted (e.g. 50°).
I have the heading angle of the image (e.g. -170° from North, which I believe means it faces about 10° west of due South; correct me if I'm wrong).
And I have the zoom level at which this image would appear on Google Maps/Google Earth (e.g. zoom level 14 out of 20).
That's the task: I need to show the user's location on such images, just like in Google Earth.
Can anyone give me a heads-up on how to accomplish this?
Thank you everyone. :)
Given the information you have, it sounds like you can use line-plane intersection to solve your problem.
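In case it helps, here is a minimal line-plane intersection sketch in plain C (the vector type and function names are mine, not from any framework). The idea would be to cast a ray from the camera through the pixel of interest and intersect it with the ground plane:
#import <Foundation/Foundation.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double vec3Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 vec3Sub(Vec3 a, Vec3 b)     { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 vec3Add(Vec3 a, Vec3 b)     { return (Vec3){ a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 vec3Scale(Vec3 v, double s) { return (Vec3){ v.x * s, v.y * s, v.z * s }; }

// Intersect the line p(t) = origin + t * dir with the plane through
// planePoint with normal planeNormal. Returns NO if they are parallel.
static BOOL linePlaneIntersect(Vec3 origin, Vec3 dir,
                               Vec3 planePoint, Vec3 planeNormal,
                               Vec3 *outPoint)
{
    double denom = vec3Dot(planeNormal, dir);
    if (fabs(denom) < 1e-9) return NO;   // line parallel to plane, no single hit
    double t = vec3Dot(planeNormal, vec3Sub(planePoint, origin)) / denom;
    *outPoint = vec3Add(origin, vec3Scale(dir, t));
    return YES;
}
With your corner coordinates and tilt/heading angles you would build the ray direction, intersect with the ground plane, and then map the hit point back into image coordinates.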
I achieved the desired result by plotting the coordinate on a UIView and transforming it using CATransform3D. These explanations also helped: Explanation of Transformation Matrix in terms of Core Animation, and the Core Animation Programming Guide itself.
// Apply the heading as a 2D rotation first.
_graphic.transform = CGAffineTransformMakeRotation(-2.89724656);

CATransform3D rotatedTransform = _graphic.layer.transform;
rotatedTransform.m34 = 1.0 / 500;   // perspective term; the 500 acts as the eye distance
rotatedTransform.m22 = -2;          // scale/flip along y
rotatedTransform.m41 = 170;         // translate x
rotatedTransform.m42 = 170;         // translate y

// Tilt the layer 51° around the x-axis.
rotatedTransform = CATransform3DRotate(rotatedTransform,
                                       51 * M_PI / 180.0f,
                                       1.0f, 0.0f, 0.0f);

_graphic.layer.transform = rotatedTransform;
[_graphic setNeedsDisplay];
Thank you all.
Related
I'm using a custom image (which is 360 x 276, so not proportional) and I'm rotating it with an animation.
The anchor point is not just (0, .5) or (1, 0); it's something like (.23423, .912314). Is there any way to show where the anchor point currently is? Or to set it in Interface Builder? Currently I'm just trying to reach the correct CGPoint by testing different values, but I haven't found the perfect one.
You cannot simply set it in IB, unfortunately. In your case, I would use some image-processing application to find the coordinate at which you need the image to rotate, and then do the math:
84.3228 / 360 = 0.23423
Not necessarily brilliant, but workable I would say.
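One gotcha worth knowing: changing layer.anchorPoint shifts the view on screen unless you also compensate its position. A minimal sketch of a helper that does this (the function name is mine; needs QuartzCore):
#import <QuartzCore/QuartzCore.h>

static void setAnchorPointPreservingPosition(UIView *view, CGPoint anchor)
{
    // Locate the new and old anchor points within the view's bounds.
    CGPoint newPoint = CGPointMake(view.bounds.size.width * anchor.x,
                                   view.bounds.size.height * anchor.y);
    CGPoint oldPoint = CGPointMake(view.bounds.size.width * view.layer.anchorPoint.x,
                                   view.bounds.size.height * view.layer.anchorPoint.y);
    newPoint = CGPointApplyAffineTransform(newPoint, view.transform);
    oldPoint = CGPointApplyAffineTransform(oldPoint, view.transform);

    // Shift the layer's position so the view doesn't visibly move.
    CGPoint position = view.layer.position;
    position.x += newPoint.x - oldPoint.x;
    position.y += newPoint.y - oldPoint.y;
    view.layer.position = position;
    view.layer.anchorPoint = anchor;
}
You would then call it with the computed value, e.g. setAnchorPointPreservingPosition(imageView, CGPointMake(0.23423, 0.912314));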
This might help.
CGPoint anchorPosition = CGPointMake(
    imageView.frame.origin.x + imageView.frame.size.width * anchorPoint.x,
    imageView.frame.origin.y + imageView.frame.size.height * anchorPoint.y);
You can refer to http://disanji.net/iOS_Doc/#documentation/Cocoa/Conceptual/CoreAnimation_guide/Articles/Layers.html
I'm working on an app in which I have to detect the left eye, right eye, and mouth position.
I have an imageView on my self.view, and the imageView contains a face image; now I want to get the coordinates of both eyes and the mouth. I have seen 2-3 sample codes for this, but they are all roughly the same: in all of them I have to invert my view to match the coordinates, which I don't want to do because my view has some other controls. And one more thing: they all use
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]];
but my imageView already has a frame, and I can't init it with an image. When I do, the faceFeature's eye and mouth coordinates come out wrong.
I started my code from this sample code, but it also inverts the view's Y coordinate.
Can anyone help me detect the eye and mouth positions on a UIImageView's image without inverting my self.view?
Please let me know if my question is not clear enough.
The trick here is to transform the returned points and bounds from CIDetector to your coordinates, instead of flipping your own view. A CIImage has its origin at the bottom left, which you will need to transform to the top left:
// CIImage coordinates are flipped relative to UIKit, so build a transform
// that mirrors y using the buffer height.
int height = CVPixelBufferGetHeight(pixelBuffer);
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -1 * height);

/* Do your face detection */

CGRect faceRect = CGRectApplyAffineTransform(feature.bounds, transform);
CGPoint mouthPoint = CGPointApplyAffineTransform(feature.mouthPosition, transform);
// Same for eyes, etc.
For your second question about UIImageView: after you have initialized your imageView, you just have to do
imageView.image = yourImage;
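Since you're starting from a UIImage rather than a video pixel buffer, here is a hedged sketch of the still-image path; it is the same flip, just using the CIImage's extent for the height (imageView is assumed to already hold your face image):
#import <CoreImage/CoreImage.h>

CIImage *ciImage = [CIImage imageWithCGImage:imageView.image.CGImage];
CIDetector *detector =
    [CIDetector detectorOfType:CIDetectorTypeFace
                       context:nil
                       options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];

// CIImage's origin is bottom-left; flip y into UIKit's top-left coordinates.
CGFloat height = ciImage.extent.size.height;
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -height);

for (CIFaceFeature *feature in [detector featuresInImage:ciImage]) {
    CGRect faceRect  = CGRectApplyAffineTransform(feature.bounds, transform);
    CGPoint mouth    = CGPointApplyAffineTransform(feature.mouthPosition, transform);
    CGPoint leftEye  = CGPointApplyAffineTransform(feature.leftEyePosition, transform);
    CGPoint rightEye = CGPointApplyAffineTransform(feature.rightEyePosition, transform);
    // These points are relative to the image itself; if your imageView's
    // contentMode scales the image, scale the points to the view as well.
}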
Worked it out! I edited the class to have a faceContainer which contains all of the face objects (the mouth and eyes); then this container is rotated, and that's all. Obviously this is very crude, but it does work. Here is a link, http://www.jonathanlking.com/download/AppDelegate.m. Then replace the app delegate from the sample code with it.
-- OLD POST --
Have a look at this Apple documentation, and slide 42 onwards from this Apple talk. You should also probably watch the talk, as it has a demo of what you are trying to achieve; it's called "Using Core Image on iOS & Mac OS X" and is here.
I am trying to make an app that uses a radial dial. As an example, see the radial dial features available in this app:
http://itunes.apple.com/us/app/kitchen-dial/id448558628?mt=8&ign-mpt=uo%3D4
What I wanted to ask is: what method should I use to accomplish this? I am not asking you to write the code for me; what I am asking is how this is done conceptually. When a user swipes left/right on the screen, how does it know which radial dial to move and in which direction? I am sure there are multiple buttons and some image transition going on here. I need to understand the concept of making such a thing. If anyone knows or has done something similar, please share your thoughts. Does Apple have sample code for this? Thank you.
The central concepts are:
Ability to rotate an image around a center point.
Ability to calculate an angle relative to the center point given an x,y coordinate.
So, to move the wheel:
On touch began:
Calculate starting angle based on x,y
On touch moved:
Calculate new angle based on x,y and subtract starting angle. This is the angle delta (or how much more to rotate the image by).
To register button taps on the wheel:
On tap:
Calculate angle based on x,y, add existing rotation, divide by the angle size of each segment to get an index.
You asked for a high-level explanation of the concepts - and that's pretty much it.
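If it helps, here is a minimal sketch of those two calculations (segmentCount is a placeholder, and the sign convention depends on which way you apply the rotation; this version subtracts the wheel's current rotation to undo it):
#import <UIKit/UIKit.h>
#include <math.h>

// Angle of a touch point relative to the wheel's center.
static CGFloat angleForPoint(CGPoint p, CGPoint center)
{
    return atan2(p.y - center.y, p.x - center.x);
}

// Map a tap to a segment index, accounting for the wheel's current rotation.
static NSInteger segmentIndexForAngle(CGFloat touchAngle,
                                      CGFloat wheelRotation,
                                      NSUInteger segmentCount)
{
    CGFloat segmentSize = 2.0 * M_PI / segmentCount;
    CGFloat a = fmod(touchAngle - wheelRotation, 2.0 * M_PI);
    if (a < 0) a += 2.0 * M_PI;          // normalize to [0, 2π)
    return (NSInteger)(a / segmentSize);
}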
This is a complicated question. The simple answer is: use a pan gesture recognizer, with:
- (void)panBegan:(UIPanGestureRecognizer *)pan {
    // Remember the dial's center and the angle of the initial touch.
    CGRect frame = [self frame];
    _central = CGPointMake(frame.origin.x + frame.size.width / 2,
                           frame.origin.y + frame.size.height / 2);
    CGPoint p = [pan locationInView:[self superview]];
    p.x -= _central.x;
    p.y -= _central.y;
    _angle = atan2(p.x, p.y);
}

- (void)panMoved:(UIPanGestureRecognizer *)pan {
    // Rotate by how far the touch angle has moved since the pan began.
    CGPoint p = [pan locationInView:[self superview]];
    p.x -= _central.x;
    p.y -= _central.y;
    CGFloat deltaAngle = _angle - atan2(p.x, p.y);
    self.transform = CGAffineTransformMakeRotation(deltaAngle + _finalAngle);
}
where _angle is the angle of the initial touch of the pan, and _finalAngle is the saved angle at the end of the previous pan.
But: the UIPanGestureRecognizer has a velocity property, and you should be using it at the end of the pan to continue the rotation, as if the dial had inertia. And, if the dial has a fixed integer set of legal stopping positions, then you must adjust the slowdown curve so it stops on a legal stop. And, if you animate the deceleration, and the user starts a new pan before the previous one is finished, then you have to adjust _finalAngle based on the perceived position, not the stored position.
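As a rough sketch of the inertia part (the panEnded hook, duration constant, and linear slow-down are illustrative; the sign conventions must match however you computed deltaAngle above):
- (void)panEnded:(UIPanGestureRecognizer *)pan
{
    CGPoint p = [pan locationInView:[self superview]];
    CGPoint v = [pan velocityInView:[self superview]];   // points per second
    CGFloat dx = p.x - _central.x;
    CGFloat dy = p.y - _central.y;

    // Signed angular velocity (rad/s): cross product of the radius vector
    // and the finger velocity, divided by the squared radius.
    CGFloat angularVelocity = (dx * v.y - dy * v.x) / MAX(dx * dx + dy * dy, 1.0f);

    CGFloat duration = 1.0f;                              // time to coast to a stop
    CGFloat extraAngle = angularVelocity * duration / 2;  // distance under linear slowdown

    // The current angle of a pure-rotation transform is atan2(b, a).
    _finalAngle = atan2(self.transform.b, self.transform.a) + extraAngle;
    [UIView animateWithDuration:duration
                          delay:0
                        options:UIViewAnimationOptionCurveEaseOut
                     animations:^{ self.transform = CGAffineTransformMakeRotation(_finalAngle); }
                     completion:nil];
}
Snapping to legal stops would mean rounding _finalAngle to the nearest allowed position before animating.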
Here is a list of rotary knobs that are open source, so you can look at the code and see what's involved:
http://maniacdev.com/2011/12/open-source-libraries-for-easily-adding-rotary-knob-controls-in-your-ios-apps/
Hi all. Up to now I know how to make a rectangle with CGRectMake, and I use this rect (frame) as an imageView frame, like UIImageView *someImage = [[UIImageView alloc] initWithFrame:someRect]; then I can add an image with the frame of someRect. My problem is when the coordinates are like:
(rectangleFirstXCoordinate, rectangleFirstYCoordinate) = (10, 10)
(rectangleLastXCoordinate, rectangleLastYCoordinate) = (17, 7)
How can I give a frame to the UIImageView for this? It is like an inclined rectangle. Can anyone suggest how to apply a frame for these kinds of coordinates through the iOS library? Thanks in advance.
Your example isn't very clear, because a rectangle with opposite corners at (10,10) and (17,7) can be in any one of a myriad of different orientations, including one perfectly aligned along the x and y axes.
What you can certainly do is create a UIImageView of the desired size and location and then rotate it by using one of many techniques, including animation methods.
[UIView animateWithDuration:0.1 animations:^
{
    your_UIImageView_here.transform = CGAffineTransformMakeRotation((M_PI / 180.0) * degrees);
}];
You can hide the UIImageView until the rotation is done and then show it.
If your question is about how to use the coordinates you provided to arrive at an angle, I'd suggest that more data is needed, because it is impossible to pick one of the billions of possible rectangles with corners at those two points without more information. Once you have more data, it is pretty basic trigonometry to figure out the angle to feed into the rotation.
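For example, if the extra data tells you that the two points lie along one edge of the rectangle, the angle drops straight out of atan2 (x1, y1, x2, y2 stand in for that edge's endpoints):
CGFloat angle = atan2(y2 - y1, x2 - x1);   // radians, measured from the x-axis
your_UIImageView_here.transform = CGAffineTransformMakeRotation(angle);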
Using the information here: iPhone Landscape-Only Utility-Template Application, I am able to launch, use, and maintain my view as landscape-only. However, I am confused about the axes.
The absolute axis behaves as expected, meaning (0,0) is the top left, (240,0) is the top center, (0,320) is the bottom left corner, etc. However, when I attempt to do calculations related to the view I am drawing on, I find x,y to be oriented as if the portrait top left were the origin. So to draw something at the center point in my view controller I need to do:
CGPoint center = CGPointMake(self.view.center.y, self.view.center.x);
I assume this is due to the fact that the UIView referenced by my controller's self.view is giving its values relative to its enclosing frame, meaning the window, which has its axis origin at the top left in portrait mode.
Is there some simple way to account for this that I am missing?
Documentation suggests that the transform property is exactly what I am looking for, however, I experiencing further confusion here. There are 3 essential properties involved:
frame
bounds
center
If I do this in viewDidLoad:
// Calculate the new center point.
CGFloat x = self.view.bounds.size.width / 2.0;
CGFloat y = self.view.bounds.size.height / 2.0;
CGPoint center = CGPointMake(y, x);   // note: x and y deliberately swapped

// Set the new center point.
self.view.center = center;

// Rotate the view 90 degrees counterclockwise around the new center point.
CGAffineTransform transform = self.view.transform;
transform = CGAffineTransformRotate(transform, -(M_PI / 2.0));
self.view.transform = transform;
Now, according to the reference docs, if transform is not set to the identity transform, self.view.frame is undefined. So I should work with bounds and center.
self.view.center is correct now, because I set it to what I wanted it to be.
self.view.bounds appears unchanged.
self.view.frame appears to be exactly what I want it to be, but as noted above, the reference claims it is invalid.
So while I can get what I believe to be the right numbers, I fear I am overlooking something critical that will become troublesome later.
Thanks.
With Quartz 2D, the coordinate system has its origin (0,0) at the lower-left corner of the graphic, not at the upper-left corner. Maybe that's what you are missing.
See: http://developer.apple.com/documentation/graphicsimaging/Conceptual/drawingwithquartz2d/dq_overview/dq_overview.html#//apple_ref/doc/uid/TP30001066-CH202-CJBBAEEC
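If you do end up drawing with Quartz directly, the standard idiom for flipping the context so the origin lands at the top left, UIKit-style (inside drawRect:, where rect is its parameter), looks like this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0, rect.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
// From here on, (0,0) is the top-left corner, with y increasing downward.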