Getting x and y Position? - iphone

How do I get the X and Y position of a UIImage? I have an image that randomly changes
its position, and I need its current x and y position so I can match it against
another image's x and y position. I have to get the image's position without any touch
on the screen. Please suggest a solution.
Thank you.

You can get the frame of any view by accessing its frame property. Within that frame struct are a CGPoint origin and a CGSize size value. The origin is probably what you're looking for. Note that it is expressed in terms of relative position of the view within its superview.
For example, the following will print the origin coordinate of a view called imageView within its superview:
CGPoint origin = imageView.frame.origin;
NSLog(@"Current position: (%f, %f)", origin.x, origin.y);

Related

Finding Bounding Box of Rotated Image that has Transparent Background

I have a UIImage that has a transparent background. When rotating this image, I'd like to find the bounding box around the graphic itself, i.e. the non-transparent part. (If you rotate it in a UIImageView, you get the bounding box around the entire UIImage, including the transparent part.)
Is there an Apple library that might do this for me? If not, does anyone know how this can be done?
If I understood your question correctly, you can retrieve the frame (not bounds) of the UIImageView, take its individual CGPoints, and explicitly transform those points to get a transformed rectangle. Apple's documentation says: "You can operate on a CGRect structure by calling the function CGRectApplyAffineTransform. This function returns the smallest rectangle that contains the transformed corner points of the rectangle passed to it." Transforming the points one by one avoids this auto-enclosing behavior.
CGRect originalFrame = imageView.frame;
CGPoint p1 = originalFrame.origin;
CGPoint p2 = p1; p2.x += originalFrame.size.width;
CGPoint p3 = p1; p3.y += originalFrame.size.height;
//Use the same transformation that you applied to the UIImageView here
CGPoint transformedP1 = CGPointApplyAffineTransform(p1, transform);
CGPoint transformedP2 = CGPointApplyAffineTransform(p2, transform);
CGPoint transformedP3 = CGPointApplyAffineTransform(p3, transform);
Now you should be able to define a new rectangle from these 3 points (the 4th is optional, because the width and height can be calculated from 3 points). One thing to note is that you cannot store this new rectangle in a CGRect: a CGRect is defined by an origin and a size, so its edges are always parallel to the x and y axes. Apple's CGRect definition does not allow rotated rectangles to be stored.

How to get coordinates of a view according to the coordinates of the uiimage on which it is placed in ios sdk?

I have an app where the user takes a picture with the camera and it is placed in a full-screen image view. The user can then draw rectangular/square views on it, and the app sends the coordinates/size of the overlaid view to the server. The problem is that I am getting the coordinates of the view relative to the screen size, but I want them relative to the image in the image view, because an image from the camera will normally have a larger resolution than the screen.
Please suggest a way to get the size of the overlaid view relative to the UIImage, not the screen.
EDIT: For example, if the user draws a view over a human face in the image and sends the coordinates of that crop view to the server, it will be difficult to match those coordinates to the real image if they are relative to the screen rather than to the UIImage itself.
The answer is simple:
You have the frame of your UIImageView and the frame of the square you drew (both relative to self.view).
You only need to find the origin of your square relative to the UIImageView.
Just subtract:
//get the frame of the square
CGRect correctFrame = square.frame;
//edit square's origin
correctFrame.origin.x -= imageView.frame.origin.x;
correctFrame.origin.y -= imageView.frame.origin.y;
Now correctFrame is the frame of your square relative to the imageView, while square.frame is still relative to self.view (we didn't change it).
To get the frame according to the image resolution, do exactly the same as above, then:
float xCoefficient = imageResolutionWidth / imageView.frame.size.width;
float yCoefficient = imageResolutionHeight / imageView.frame.size.height;
correctFrame.origin.x *= xCoefficient;
correctFrame.origin.y *= yCoefficient;
correctFrame.size.width *= xCoefficient;
correctFrame.size.height *= yCoefficient;
Why: the image resolution is much greater than the imageView's frame, so you have to calculate coefficients that will scale your square accordingly.
That's all!

CGMakePoint(X,Y) for a image

I am new to iPhone development. Can anyone tell me what CGMakePoint(x, y) is, what x and y stand for, and how to get the CGPoint x and y values for an image?
You probably mean CGPointMake (there is no CGMakePoint): it is a function that returns a CGPoint structure initialized with the given x and y coordinates.
I'm not sure what you mean by the "CGPoint x and y value for an image". The position of a UIImageView in a view? Use imageView.frame to get the rectangle (CGRect) of the image's position within the parent view, and imageView.frame.origin to get its top-left coordinate.

Co-ordinates of the four points of a uiview which has been rotated

Is it possible to get this? If so, can anyone please tell me how?
Get the four points of the frame of your view (view.frame)
Retrieve the CGAffineTransform applied to your view (view.transform)
Then apply this same affine transform to the four points using CGPointApplyAffineTransform (and sibling methods of the CGAffineTransform Reference)
CGPoint topLeft = view.bounds.origin;
topLeft = [[view superview] convertPoint:topLeft fromView:view];
CGPoint topRight = CGPointMake(view.bounds.origin.x + view.bounds.size.width, view.bounds.origin.y);
topRight = [[view superview] convertPoint:topRight fromView:view];
// ... likewise for the other points
The first point is in the view's own coordinate space, which is always "upright". The next statement then finds the point it corresponds to in the parent view's coordinate space. Note that for an un-transformed view this would equal view.frame.origin. For a transformed view, the calculations above give the equivalent of the corners of view.frame.

How do we get the coordinates of an UIImageView programmatically?

I would like to get the coordinates of a UIImageView programmatically. How do I do this?
For example, I have a square and I want the coordinates of all four corners. How should I proceed?
NSLog(@"%f", ImageView.frame.origin.x);
NSLog(@"%f", ImageView.frame.origin.y);
This gives me the top-left coordinate of the square.
I should mention that the square (image view) rotates, which is why I need its coordinates.
The coordinates in whose coordinate space?
Each UIView has its own coordinate space. You can refer to a view's size in its own coordinate space by asking for its bounds.
A view's size and position in its parent view is called its frame. In your example, you're asking for the image view's top left corner in its parent view's coordinate space.
If that's what you want to do, then try these:
frame.origin.x
frame.origin.y
frame.size.width
frame.size.height
By adding those together you can get any coordinate: for example, the x coordinate of the top right would be
frame.origin.x + frame.size.width