How to apply the center of one image view to another image view's center - iPhone

I am developing an image redraw app. I know the center of one image view, which has a scaling property, i.e. its frame and center can change. That center has to be applied to another image view's center. When I apply the center to the newly formed image view, it gives the wrong center, because the previous image view is scaled. So how can I get the exact center for my new image view from the previous image view?

Your view or image is width*height, so its center should always be at position (width/2, height/2), whether the image is scaled or not.
Just recalculate your center after the scale if you need the "scaled" center, or keep the original center position in memory if you don't.
Pseudocode:
getCenter(w, h) {
    pos[0] = w / 2;
    pos[1] = h / 2;
    return pos;
}

calc(image) {
    c = getCenter(image.w, image.h);
    scaled = image.scale(80);            // that is 80% of the original
    d = getCenter(scaled.w, scaled.h);
    if (something) return c;             // the original center
    else return d;                       // the "scaled" center
}
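In Objective-C terms, a rough sketch of the same idea (imageView stands in for whichever view you scale) is shown below; a view's bounds are not changed by a scale transform, so the midpoint of the bounds is always the view's own center:
// Center of the view in its own (untransformed) coordinate space.
// CGRectGetMidX/MidY simply return origin + size/2.
CGPoint ownCenter = CGPointMake(CGRectGetMidX(imageView.bounds),
                                CGRectGetMidY(imageView.bounds));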
Second explanation after discussion (read comments):
Let's assume you have a 640x480 image and you create a view of 320x240 (a quarter of it), and you move THIS view 100px right and 50px down from position (0,0), which is usually the top-left corner of your image. Then:
your new center of the VIEW will be, as usual, at position (160,120) of the VIEW;
the original center of the ORIGINAL image will remain at its position (320,240), which happens to correspond to the bottom-right corner of your VIEW.
IF you want to know WHERE the original center of the ORIGINAL image ended up AFTER the movement and "cropping", then you just have to know where you moved the VIEW:
100px right becomes (original position - relative movement): 320 - 100 = 220
50px down becomes (original position - relative movement): 240 - 50 = 190
So your ORIGINAL center will be at position (220,190) of the new VIEW.
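A minimal sketch of that kind of mapping in code, assuming oldImageView and newImageView are hypothetical views already in the hierarchy; UIKit's convertPoint:fromView: does this coordinate-space arithmetic for you, whatever the source view's current frame is:
// Midpoint of the old view's bounds, in the old view's own coordinates
CGPoint centerInOldView = CGPointMake(CGRectGetMidX(oldImageView.bounds),
                                      CGRectGetMidY(oldImageView.bounds));

// Map it into the coordinate space that the new view's center lives in
CGPoint mappedCenter = [newImageView.superview convertPoint:centerInOldView
                                                   fromView:oldImageView];
newImageView.center = mappedCenter;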

Related

PIL Image.rotate with center=(0, 0)

I rotate the image 45 degrees around the top-left corner using Image.rotate, but the image disappears beyond the border of the frame. How can I fix this?
im2 = self.image
rot = im2.rotate(45, expand=True, center=(0, 0))
self.image = rot
Originally the image looks like this: [screenshot of the original image]
After a 45 degree rotation it looks like this: [screenshot of the rotated image, pushed outside the frame]
Edit: Judging from your new picture, the center=(0, 0) must mean the origin point of the frame, not the image.
To rotate with the top-left of the image fixed, try this:
rot = im2.rotate(45, expand=True, center=(im2.left, im2.top))
This is assuming that the image has properties that tell you its position within the frame.
You are rotating around the top-left corner center=(0, 0), the origin point for this image. Try using the image center (1/2 width, 1/2 height) as your center of rotation.

UILabel's center property's value changes depending on which edge it's bound to

Quick version: Why does the value of a UILabel's center property change between when its Size Attribute is bound to the top of its parent container and the bottom, and what can I do to change or get around that behavior?
Long version with details:
I am drawing a line between two labels like this (the yellow line):
To calculate the end points for the line I am using the center property of the UILabels (in this photo the labels say 2.5 kHz and 20 Hz), then adding or subtracting half of the label's height. This technique works well in the example/picture below. However, in the design phase, where I toggled the screen size to the iPhone 5 (taller), I bound the "20 Hz" label to the bottom instead of the top by changing it in the Size Inspector, as pictured in the following two images. Note that the origin is still in the upper left in both cases:
This changes how the center property behaves and thus will produce the following line:
Here I have relocated the UILabel just for demonstration. Note that the end of the line should be on the center of the top of the label:
I did a printout of the label's center point, once while bound to the top and again while bound to the bottom:
Bound to top
{290, 326.5}
Bound to bottom
{290, 370.5}
I tried a workaround by using the origin.y and doing an offset from there, but that has the same type of effect.
I desire it to look like this when finished, but on the taller iPhone 5 screen:
Edit: As requested by @ott, adding the code snippet which calculates the end points of the line:
- (void)viewDidLoad
{
    [super viewDidLoad];

    CGPoint begin = self.frequencyMaxLabel.center;
    begin.y += self.frequencyMaxLabel.frame.size.height / 2.0;

    CGPoint end = self.frequencyMinLabel.center;
    end.y -= self.frequencyMinLabel.frame.size.height / 2.0;

    // This is an object of two CGPoints to represent a line
    VWWLine *frequenciesLine = [[VWWLine alloc] initWithBegin:begin
                                                       andEnd:end];

    // Pass the line to a UIView subclass where it is drawn
    [self.configView setLineFrequencies:frequenciesLine];
    [frequenciesLine release];

    // .... other code truncated for example
}
And here is the code that draws the line (inside a UIView subclass, self.configView from the snippet above):
- (void)drawRect:(CGRect)rect
{
    CGContextRef cgContext = UIGraphicsGetCurrentContext();
    CGContextBeginPath(cgContext);
    CGContextSetLineWidth(cgContext, 2.0f);

    CGFloat yellowColor[4] = {1.0, 1.0, 0.0, 1.0};
    CGContextSetStrokeColor(cgContext, yellowColor);

    [self drawLineWithContext:cgContext
                    fromPoint:self.lineFrequencies.begin
                      toPoint:self.lineFrequencies.end];

    CGContextStrokePath(cgContext);

    //...... truncated. Draw the other lines next
}
You receive the viewDidLoad message before the system has laid out your views for the current device and interface orientation. Move that code to viewDidLayoutSubviews.
Also, you might want to check out CAShapeLayer.
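Roughly, the same calculation moved out of viewDidLoad could look like this (a sketch only, reusing the names from the question; the setNeedsDisplay call is my assumption about how the config view gets redrawn):
- (void)viewDidLayoutSubviews
{
    [super viewDidLayoutSubviews];

    // Same endpoint math as before, but now the labels have their final frames
    CGPoint begin = self.frequencyMaxLabel.center;
    begin.y += self.frequencyMaxLabel.frame.size.height / 2.0;

    CGPoint end = self.frequencyMinLabel.center;
    end.y -= self.frequencyMinLabel.frame.size.height / 2.0;

    VWWLine *frequenciesLine = [[VWWLine alloc] initWithBegin:begin andEnd:end];
    [self.configView setLineFrequencies:frequenciesLine];
    [self.configView setNeedsDisplay];   // assumption: trigger drawRect: again
    [frequenciesLine release];
}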

How to get coordinates of a view according to the coordinates of the uiimage on which it is placed in ios sdk?

I have an app where the user takes a picture with the camera and it is placed on a full-screen image view. The user can then draw rectangular/square views on it, and the app sends the coordinates/size of the overlaid view to the server. The problem is that I am getting the coordinates of the view according to the screen size, but I want them to be according to the image in the image view, because normally an image from the camera has a much larger resolution than the screen.
Please suggest a way to get the size of the overlaid view according to the UIImage and not the screen.
EDIT: For example, if the user draws a view over a human face in the image and sends the coordinates of the crop view to the server, it will be difficult to match those coordinates to the real image if they are relative to the screen and not to the UIImage itself.
The answer is simple:
You have the frame of your UIImageView, and the frame of your drawn square (both relative to self.view).
You only need to find the origin of your square, relative to the UIImageView.
Just subtract:
//get the frame of the square (CGRect is a struct, so no pointer here)
CGRect correctFrame = square.frame;
//shift the square's origin into the image view's coordinate space
correctFrame.origin.x -= imageView.frame.origin.x;
correctFrame.origin.y -= imageView.frame.origin.y;
Now correctFrame is the frame of your square relative to the imageView, while square.frame is still relative to self.view (as we didn't change it)
To get the frame according to the image resolution, do exactly the same as above, then:
float xCoefficient = imageResolutionWidth / imageView.frame.size.width;
float yCoefficient = imageResolutionHeight / imageView.frame.size.height;
correctFrame.origin.x *= xCoefficient;
correctFrame.origin.y *= yCoefficient;
correctFrame.size.width *= xCoefficient;
correctFrame.size.height *= yCoefficient;
Why: the image resolution is much greater than imageView.frame, so you have to calculate a coefficient that will adjust your square.
That's all!
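Putting both steps together, a minimal helper might look like this (the method name is mine, and it assumes the image view uses UIViewContentModeScaleToFill so the image fills the view edge to edge; aspect-fit/fill would need an extra offset):
- (CGRect)imageRectForSquare:(UIView *)square inImageView:(UIImageView *)imageView
{
    // Frame of the overlay relative to the image view
    CGRect correctFrame = square.frame;
    correctFrame.origin.x -= imageView.frame.origin.x;
    correctFrame.origin.y -= imageView.frame.origin.y;

    // Scale factors from on-screen points to image pixels
    CGFloat xCoefficient = imageView.image.size.width  / imageView.frame.size.width;
    CGFloat yCoefficient = imageView.image.size.height / imageView.frame.size.height;

    correctFrame.origin.x    *= xCoefficient;
    correctFrame.origin.y    *= yCoefficient;
    correctFrame.size.width  *= xCoefficient;
    correctFrame.size.height *= yCoefficient;

    return correctFrame;
}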

Anchor Point in CALayer

Looking at the Touches example from Apple's documentation, there is this method:
// scale and rotation transforms are applied relative to the layer's anchor point
// this method moves a gesture recognizer's view's anchor point between the user's fingers
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];

        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width,
                                              locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}
First question, can someone explain the logic of setting the anchor point in the subview, and changing the center of the superview (like why this is done)?
Lastly, how does the math work for the anchorPoint statement? If you have a view that has a bounds of 500, 500, and say you touch at 100, 100 with one finger, 500, 500 with the other. In this box your normal anchor point is (250, 250). Now it's ???? (have no clue)
Thanks!
The center property of a view is a mere reflection of the position property of its backing layer. Surprisingly, what this means is that the center need not be at the center of your view. Where the position sits within the bounds is determined by the anchorPoint, which takes values anywhere between (0,0) and (1,1); think of it as a normalized indicator of where the position lies within the bounds. If you change the anchorPoint, the position stays fixed with respect to the superlayer/superview, so it is the layer's frame that shifts. To readjust the position so that the frame of the view doesn't move, you can manipulate the center.
piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
Imagine the original situation, where O is the touch point:
+++++++++++
+    O    +        +++++++++++
+    X    +  -->   +    X    +
+         +        +         +
+++++++++++        +         +
                   +++++++++++
Now we want this X to be at the point where the user has touched. We do this because all scaling and rotations are done based on the position/anchorPoint. To adjust the frame back to its original position, we set the "center" of the view to the touch location.
piece.center = locationInSuperview;
So this reflects in the view readjusting its frame back,
                   +++++++++++
+++++++++++        +    X    +
+    X    +  -->   +         +
+         +        +         +
+         +        +++++++++++
+++++++++++
Now when the user rotates or scales, it will happen as if the axis were at the touch point rather than the true center of the view.
In your example, the location of the gesture in the view might end up being the average, i.e. (300, 300), which means the anchorPoint would be (0.6, 0.6), and in response the frame will move up. To readjust, moving the center to the touch location moves the frame back down.
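As an aside, the frame shift can also be compensated without touching center, by adjusting the layer's position directly; this is a common helper pattern, not part of the quoted answer:
// Move the anchor point while keeping the view's frame where it is.
- (void)setAnchorPoint:(CGPoint)anchorPoint forView:(UIView *)view
{
    CGPoint newPoint = CGPointMake(view.bounds.size.width  * anchorPoint.x,
                                   view.bounds.size.height * anchorPoint.y);
    CGPoint oldPoint = CGPointMake(view.bounds.size.width  * view.layer.anchorPoint.x,
                                   view.bounds.size.height * view.layer.anchorPoint.y);

    // Account for any transform already applied to the view
    newPoint = CGPointApplyAffineTransform(newPoint, view.transform);
    oldPoint = CGPointApplyAffineTransform(oldPoint, view.transform);

    CGPoint position = view.layer.position;
    position.x += newPoint.x - oldPoint.x;
    position.y += newPoint.y - oldPoint.y;

    view.layer.anchorPoint = anchorPoint;
    view.layer.position = position;
}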
First question, can someone explain the logic of setting the anchor point in the subview, and changing the center of the superview (like why this is done)?
This code isn't changing the center of the superview. It's changing the center of the gesture recognizer's view to be the location of the gesture (coordinates specified in the superview's frame). That statement is simply moving the view around in its superview while following the location of the gesture. Setting center can be thought of as a shorthand way of setting frame.
As for the anchor point, it affects how scale and rotation transforms are applied to the layer. For example, a layer will rotate using that anchor point as its axis of rotation. When scaling, all points are offset around the anchor point, which doesn't move itself.
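For instance (my own illustration, not from the answer), rotating a view around its top-left corner instead of its center could look like this:
UIView *box = [[UIView alloc] initWithFrame:CGRectMake(50.0, 50.0, 100.0, 100.0)];
box.layer.anchorPoint = CGPointMake(0.0, 0.0);          // anchor at the top-left corner
box.layer.position    = CGPointMake(50.0, 50.0);        // keep the frame where it was
box.transform = CGAffineTransformMakeRotation(M_PI_4);  // now rotates about that corner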
Lastly, how does the math work for the anchorPoint statement? If you have a view that has a bounds of 500, 500, and say you touch at 100, 100 with one finger, 500, 500 with the other. In this box your normal anchor point is (250, 250). Now it's ???? (have no clue)
The key concept to note on the anchorPoint property is that the range of the values in the point is declared to be from [0, 1], no matter what that actual size of the layer is. So, if you have a view with bounds (500, 500) and you touch twice at (100, 100) and (500, 500), the location in the view of the gesture as a whole will be (300, 300), and the anchor point will be (300/500, 300/500) = (0.6, 0.6).
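Spelled out in code (a sketch reusing the names from Apple's snippet above), locationInView: returns the centroid of the touches involved, so:
// Two fingers at (100, 100) and (500, 500) in a 500x500 view
CGPoint location = [gestureRecognizer locationInView:piece];          // centroid: (300, 300)
CGPoint anchor   = CGPointMake(location.x / piece.bounds.size.width,  // 300/500 = 0.6
                               location.y / piece.bounds.size.height);// 300/500 = 0.6
piece.layer.anchorPoint = anchor;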

Getting x and y Position?

How do I get the X and Y position of a UIImage? I have an image which randomly changes its position, so I need to get the image's current x and y position so I can match it with another image's x and y position. I have to get the position of the image without any touch on the screen. Please suggest a solution.
Thank You.
You can get the frame of any view by accessing its frame property. Within that frame struct are a CGPoint origin and a CGSize size value. The origin is probably what you're looking for. Note that it is expressed in terms of relative position of the view within its superview.
For example, the following will print the origin coordinate of a view called imageView within its superview:
CGPoint origin = imageView.frame.origin;
NSLog(@"Current position: (%f, %f)", origin.x, origin.y);
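To compare the positions of two image views, a minimal sketch could look like this (firstImageView and secondImageView are hypothetical names, and both views are assumed to share the same superview so their frames are comparable):
CGPoint p1 = firstImageView.frame.origin;
CGPoint p2 = secondImageView.frame.origin;

if (CGPointEqualToPoint(p1, p2)) {
    NSLog(@"The images are at exactly the same position");
} else if (CGRectIntersectsRect(firstImageView.frame, secondImageView.frame)) {
    NSLog(@"The images overlap");
}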