Place an image relative to iPad and iPhone - iPhone

In my app there is a feature where you can drag and drop an image; the image is in a UIImageView. It's a universal app. I need to store the x,y coordinates relative to a 2048x1536 canvas. I use CGPointApplyAffineTransform to calculate the relative point, and invert the transform on the iPhone screen to find the relative position there. Due to the variation in pixel density, I am not getting the relative position correctly.
CGFloat TARGET_WIDTH = 2048.0;
CGFloat TARGET_HEIGHT = 1536.0;
CGPoint thispoint = CGPointApplyAffineTransform(imageView.frame.origin,
    CGAffineTransformMakeScale(TARGET_HEIGHT / self.view.frame.size.height,
                               TARGET_WIDTH / self.view.frame.size.width));
This is the code I use to find the relative point. I then invert the transform with the iPhone view size. How do I calculate the relative position correctly given the different pixel densities of iPad and iPhone?

Try this:
CGFloat TARGET_WIDTH = 2048.0;
CGFloat TARGET_HEIGHT = 1536.0;
CGFloat xScaleFactor = TARGET_WIDTH / self.view.frame.size.width;
CGFloat yScaleFactor = TARGET_HEIGHT / self.view.frame.size.height;
// Point with respect to your given target size
CGPoint thispoint = CGPointMake(imageView.frame.origin.x * xScaleFactor,
                                imageView.frame.origin.y * yScaleFactor);
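To map a stored point back onto the current device's screen, divide by the same scale factors. A minimal sketch, assuming the TARGET_WIDTH/TARGET_HEIGHT constants above (storedPoint is a hypothetical name for a point saved in the 2048x1536 space):
// Map a point from the 2048x1536 reference space back to this device's view
CGFloat xScaleFactor = TARGET_WIDTH / self.view.frame.size.width;
CGFloat yScaleFactor = TARGET_HEIGHT / self.view.frame.size.height;
CGPoint screenPoint = CGPointMake(storedPoint.x / xScaleFactor,
                                  storedPoint.y / yScaleFactor);
Because both factors are computed from the current device's view size, the same two lines work on iPad and iPhone; only the factors' values differ.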

Related

Finding Bounding Box of Rotated Image that has Transparent Background

I have a UIImage that has a transparent background. When rotating this image, I'd like to find the bounding box around the graphic (i.e. the non-transparent part; if you rotate it in a UIImageView, the frame is the bounding box around the entire UIImage, including the transparent part).
Is there an Apple library that might do this for me? If not, does anyone know how this can be done?
If I understood your question correctly, you can retrieve the frame (not the bounds) of the UIImageView, get the individual CGPoint corners, and explicitly transform those points to get the transformed rectangle. Apple's documentation says: "You can operate on a CGRect structure by calling the function CGRectApplyAffineTransform. This function returns the smallest rectangle that contains the transformed corner points of the rectangle passed to it." Transforming the points one by one avoids this auto-correcting behavior.
CGRect originalFrame = imageView.frame;
CGPoint p1 = originalFrame.origin;
CGPoint p2 = p1; p2.x += originalFrame.size.width;
CGPoint p3 = p1; p3.y += originalFrame.size.height;
// Use the same transformation that you applied to the UIImageView here
CGPoint transformedP1 = CGPointApplyAffineTransform(p1, transform);
CGPoint transformedP2 = CGPointApplyAffineTransform(p2, transform);
CGPoint transformedP3 = CGPointApplyAffineTransform(p3, transform);
Now you should be able to define a new rectangle from these three points (the fourth is optional, because width and height can be calculated from three points). One thing to note is that you cannot store this new rectangle in a CGRect, because a CGRect is defined by an origin and a size, so its edges are always parallel to the x and y axes. Apple's CGRect definition does not allow rotated rectangles to be stored.
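If what you ultimately need is the axis-aligned bounding box of the rotated view, you can fold the transformed corners with MIN/MAX. A minimal sketch building on the code above (the fourth corner and the folding are additions, not part of the original answer):
// Fourth corner, derived from the original frame, then transformed
CGPoint p4 = CGPointMake(p1.x + originalFrame.size.width,
                         p1.y + originalFrame.size.height);
CGPoint transformedP4 = CGPointApplyAffineTransform(p4, transform);
// Axis-aligned bounding box of the four transformed corners
CGFloat minX = MIN(MIN(transformedP1.x, transformedP2.x), MIN(transformedP3.x, transformedP4.x));
CGFloat minY = MIN(MIN(transformedP1.y, transformedP2.y), MIN(transformedP3.y, transformedP4.y));
CGFloat maxX = MAX(MAX(transformedP1.x, transformedP2.x), MAX(transformedP3.x, transformedP4.x));
CGFloat maxY = MAX(MAX(transformedP1.y, transformedP2.y), MAX(transformedP3.y, transformedP4.y));
CGRect boundingBox = CGRectMake(minX, minY, maxX - minX, maxY - minY);
Note that this reproduces what CGRectApplyAffineTransform would return; keep the individual corner points when you need the rotated rectangle itself.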

How to get coordinates of a view according to the coordinates of the UIImage on which it is placed in the iOS SDK?

I have an app where the user takes a picture with the camera and it is placed in a full-screen image view. The user can then draw rectangular/square views on it and send the coordinates/size of the overlaid view to the server. The problem is that I am getting the coordinates of the view according to the screen size, but I want them to be according to the image in the image view, because an image from the camera will normally have a larger resolution than the screen.
Please suggest a way to get the size of the overlaid view according to the UIImage and not the screen.
EDIT: For example, if the user draws a view on a human face in the image and sends the coordinates of the crop view to the server, it will be difficult to synchronize the coordinates with the real image if they are relative to the screen and not to the UIImage itself.
The answer is simple:
You have the frame of your UIImageView and the frame of your drawn square (both relative to self.view).
You only need to find the origin of your square, relative to the UIImageView.
Just subtract:
// Get the frame of the square
CGRect correctFrame = square.frame;
// Adjust the square's origin
correctFrame.origin.x -= imageView.frame.origin.x;
correctFrame.origin.y -= imageView.frame.origin.y;
Now correctFrame is the frame of your square relative to the imageView, while square.frame is still relative to self.view (since we didn't change it).
To get the frame according to the image resolution, do exactly the same as above, then:
float xCoefficient = imageResolutionWidth / imageView.frame.size.width;
float yCoefficient = imageResolutionHeight / imageView.frame.size.height;
correctFrame.origin.x *= xCoefficient;
correctFrame.origin.y *= yCoefficient;
correctFrame.size.width *= xCoefficient;
correctFrame.size.height *= yCoefficient;
Why: the image resolution is much greater than imageView.frame, so you have to calculate a coefficient to adjust your square.
That's all!
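Putting the two steps together, here is a minimal sketch of a helper method (the method name is illustrative; it also assumes the image view uses UIViewContentModeScaleToFill, since aspect-fit letterboxing would need additional offset handling):
// Convert a square's frame (relative to self.view) into the image's pixel space
- (CGRect)imageFrameForSquare:(UIView *)square inImageView:(UIImageView *)imageView
{
    CGRect correctFrame = square.frame;
    // Make the origin relative to the image view
    correctFrame.origin.x -= imageView.frame.origin.x;
    correctFrame.origin.y -= imageView.frame.origin.y;
    // Scale up to the image's resolution
    float xCoefficient = imageView.image.size.width / imageView.frame.size.width;
    float yCoefficient = imageView.image.size.height / imageView.frame.size.height;
    correctFrame.origin.x *= xCoefficient;
    correctFrame.origin.y *= yCoefficient;
    correctFrame.size.width *= xCoefficient;
    correctFrame.size.height *= yCoefficient;
    return correctFrame;
}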

Convert coordinates from iPhone screen size to iPad screen size

I have a custom UIView that contains an interactive drawing, drawn in the view's drawRect: method. On the iPad the drawing size is 1024 x 768. For the iPhone I shrink the drawing using CGContextScaleCTM. To keep the proper aspect ratio I shrink the view to 480 x 360 and set the y value of the view to -20 (effectively cropping 20 pixels off the top and bottom of the view, which is fine). Everything looks correct now, but I need to convert the touch coordinates from iPhone to iPad coordinates for the interactive portions of my view to work. If I make the UIView 320 high and use
point.y *= 768/320
for converting the y value, the locations are correct (but my drawing is distorted). I've done some tests hard-coding the point, so I know this should work, but I'm having a hard time getting the math to work with the crop. Here is what I have so far:
CGPoint point = [[touches anyObject] locationInView:self.view];
[self endTouch:&point];
NSLog(@"true point: %@", NSStringFromCGPoint(point));
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone) {
    point.x *= 1024/480;
    point.y *= 768/360;
    NSLog(@"corrected point: %@", NSStringFromCGPoint(point));
}
Can anybody help me with doing this conversion? Thanks!
1024/480 and 768/360 perform integer division. The result of both operations is 2, but you really want 2.133333. Replace the relevant lines in your code with the following:
point.x *= 1024.0/480.0;
point.y *= 768.0/360.0;
This will return the value you're looking for and scale your point's x and y values accordingly.
You might be better served (in terms of readability) by replacing these literals with a #define macro. At the top of your code, put the following:
#define SCALING_FACTOR_X (1024.0/480.0)
#define SCALING_FACTOR_Y (768.0/360.0)
and then modify your code to use that:
point.x *= SCALING_FACTOR_X;
point.y *= SCALING_FACTOR_Y;
That way, it will be more clear as to what you're actually doing.
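For completeness, a minimal sketch of the corrected touch handler using those macros (same view setup as in the question):
CGPoint point = [[touches anyObject] locationInView:self.view];
[self endTouch:&point];
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone) {
    // Floating-point scaling: 2.1333..., not the truncated 2
    point.x *= SCALING_FACTOR_X;
    point.y *= SCALING_FACTOR_Y;
}
NSLog(@"corrected point: %@", NSStringFromCGPoint(point));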

AFOpenFlow coverFlow

In the AFOpenFlow library there are two lines that place the cover flow in the middle of your iPhone screen:
CGPoint newPosition;
newPosition.x = halfScreenWidth + aCover.horizontalPosition;
newPosition.y = halfScreenHeight + aCover.verticalPosition;
But how can I change these lines so that the cover flow sits in the upper part of the screen?
Marco
I don't know AFOpenFlow, but this math sets the middle of the view relative to the center of the screen. If you want the view to take up only half of the screen, you also need to change its height. It would be something like:
[Updated Code]
CGSize newSize = CGSizeMake(screenWidth, screenHeight / 2);
CGPoint newCenterPosition = CGPointMake(halfScreenWidth + aCover.horizontalPosition,
                                        halfScreenHeight / 2 + aCover.verticalPosition);
// Set the size via bounds, then position the view by its center
aCover.bounds = CGRectMake(0, 0, newSize.width, newSize.height);
aCover.center = newCenterPosition;

Getting x and y Position?

How do I get the x and y position of a UIImage? I have an image that randomly changes its position, and I need to get its current x and y position so I can match it against another image's x and y position. I have to get the position of the image without any touch on the screen. Please suggest a solution.
Thank you.
You can get the frame of any view by accessing its frame property. Within that frame struct are a CGPoint origin and a CGSize size value. The origin is probably what you're looking for. Note that it is expressed in terms of relative position of the view within its superview.
For example, the following will print the origin coordinate of a view called imageView within its superview:
CGPoint origin = imageView.frame.origin;
NSLog(@"Current position: (%f, %f)", origin.x, origin.y);
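To match the position against another image, compare the two origins. A minimal sketch, assuming two hypothetical image views named firstImageView and secondImageView in the same superview:
CGPoint a = firstImageView.frame.origin;
CGPoint b = secondImageView.frame.origin;
// CGPointEqualToPoint compares both coordinates exactly
if (CGPointEqualToPoint(a, b)) {
    NSLog(@"The images are at the same position");
}
For views that move continuously, you may want a tolerance check (e.g. fabs(a.x - b.x) < 1.0) instead of exact equality.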