I have a UIView that has subviews, and I have rotated the view by 90 degrees:
CGAffineTransform transform = CGAffineTransformMakeRotation(degreesToRadians(degrees));
Now I need the subview's location in the screen coordinate system, and for that I am doing:
CGPoint subViewPoint = [[subView superview] convertPoint:subView.frame.origin toView:baseView];
This works fine if I am not rotating the view, but it does not work if I rotate it. Please help me with this.
How do I get the subview's location on screen after its superview has been rotated by 90 degrees?
I'm not sure, but there are two things to know.
1) You should avoid using the frame property when you have to deal with transformations. Use the center and bounds properties instead. The frame of a view whose transform is not the identity is undefined.
2) You can always use the CGPointApplyAffineTransform function to calculate the point coordinates yourself : )
Hope this'll help.
Related
I have a problem I've run into with the UIView method convertRect:fromView:. Here is the situation:
I have subclassed UIView to create a view that rotates with the user's movement (very similar to the TaskRabbit spinner). To create the rotation, I added an additional view to my subclassed view, and I rotated that view. The rotated view contains additional subviews that obviously rotate with it. The problem is, after the subview has been rotated, I need to find where those additional subviews are with respect to the original subclassed view, not the rotated view. To do this, in my UIView subclass, I have the following:
[self convertRect:currentView.frame fromView:rotationView];
However, when I print out the frame of the converted rect, the coordinates are not accurate. Has anyone run into this issue where convertRect:fromView: isn't accurate after the view is rotated?
Edit
Specifically, about the points not being accurate: I can't even see the relationship between what it should be and what it is, i.e. off by a specific angle, x/y flipped, etc. For example, the point that should be (25, 195) is returned as (325.25, 273.16).
I'm assuming that you are rotating your views by applying a transform to them (either a CGAffineTransform to the view or a CATransform3D to the layer). This is what is causing the problem with your frame. The documentation for UIView frame says:
Warning If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
As you've already seen, the value of the frame is undefined. You can still use the center and bounds properties though.
I currently have a UIView that draws radar data on top of a MKMapView using OpenGL. Because of the level of detail in the radar image, OpenGL is required (CoreGraphics is not fast enough).
All of the images that I am drawing are saved in MKMapPoints. I chose them over the standard CLLocationCoordinate2D because their lengths do not depend on the latitude. The basic method for drawing is this:
Add the GLView as a subview of the MKMapView and set GLView.frame = MKMapView.frame.
Using GLOrthof, set the projection of the GLView to equal the current visible MKMapRect of the map. Here is the code that does this.
CLLocationCoordinate2D coordinateTopLeft =
[mapView convertPoint:CGPointMake(0, 0)
toCoordinateFromView:mapView];
MKMapPoint pointTopLeft = MKMapPointForCoordinate(coordinateTopLeft);
CLLocationCoordinate2D coordinateBottomRight =
[mapView convertPoint:CGPointMake(mapView.frame.size.width,
mapView.frame.size.height)
toCoordinateFromView:mapView];
MKMapPoint pointBottomRight = MKMapPointForCoordinate(coordinateBottomRight);
glLoadIdentity();
glOrthof(pointTopLeft.x, pointBottomRight.x,
pointBottomRight.y, pointTopLeft.y, -1, 1);
Set the viewport to be the correct size using glViewport(0, 0, backingWidth, backingHeight) where backingWidth and backingHeight is the size of the mapView in points.
Draw using glDrawArrays. Not sure if this matters, but GL_VERTEX_ARRAY and GL_TEXTURE_COORD_ARRAY are both enabled during the draw.
Using this method, everything works fine. The drawing is performed like it is supposed to. The only problem is since it is a subview of the mapView (and not an overlay), the radar image is drawn on top of any other MKAnnotations and MKOverlays. I need this layer to be drawn under the other annotations and overlays.
What I tried to do to get this working was to make the glView a subview of a custom MKOverlayView instead of the mapView. What I did was give the MKOverlay a boundingMapRect of MKMapRectWorld and set the frame of the glView the same way that I set the projection (since the frame of an MKOverlayView is determined by MKMapPoints and not CGPoints). Again, here is the code.
CLLocationCoordinate2D coordinateTopLeft =
[mapView convertPoint:CGPointMake(0, 0)
toCoordinateFromView:mapView];
MKMapPoint pointTopLeft = MKMapPointForCoordinate(coordinateTopLeft);
CLLocationCoordinate2D coordinateBottomRight =
[mapView convertPoint:CGPointMake(mapView.frame.size.width,
mapView.frame.size.height)
toCoordinateFromView:mapView];
MKMapPoint pointBottomRight = MKMapPointForCoordinate(coordinateBottomRight);
glRadarView.frame = CGRectMake(pointTopLeft.x, pointTopLeft.y,
pointBottomRight.x - pointTopLeft.x,
pointBottomRight.y - pointTopLeft.y);
When I do this, the glView is positioned correctly on the screen (in the same place that it was while it was a subview of the mapView), but the drawing no longer works correctly. When the image does come up, it is not the right size and not in the correct location. I did a check, and backingWidth and backingHeight are still the size of the view in points (as they should be).
Any idea why this is not working?
I've been away from the iPhone for too long to really fully grok your code, but I seem to recall from when I messed with OpenGL on the iPhone some time ago that it was necessary to maintain my own z-index and simply draw items in that order. Each draw operation happened properly rotated in 3D, but something drawn later was always on top of something drawn earlier. One of my early test programs drew a grid on a surface and then caused the whole thing to flip. My expectation was that the grid would disappear when the back of the object was facing me, but it remained visible because it was drawn later in a separate operation (IIRC).
It's possible that I was doing something wrong that caused that problem, but my solution was to order my draws by a z index.
Can you draw the image first using your first method?
I think that you should just set the viewport before setting the projection mode.
I am displaying a CATiledLayer via a scroll view, which displays a map. When the user presses a button, I want to determine the bounds of the portion of the CATiledLayer that the user is currently viewing and place an image there.
In other words, I need to determine the portion of the CATiledLayer that is being displayed, then determine the center point of that portion.
Appreciate any thoughts.
You can call scrollView.contentOffset (which returns a CGPoint), which will tell you where all the content in your scroll view is relative to the scroll view's origin point. So if your CATiledLayer is up and to the left, you'd get something like (-50, -100). Take those values, multiply them by -1, and then add them to the scrollView.center property (which also returns a CGPoint). This gives you a CGPoint that, if used as the center point of a new subview within scrollView, will center that view within the current frame.
CGPoint currentOffset = CGPointMake(scrollView.contentOffset.x * -1, scrollView.contentOffset.y * -1);
CGPoint center = CGPointMake(scrollView.center.x + currentOffset.x, scrollView.center.y + currentOffset.y);
I haven't tested this code, but it should work or at least give you a general starting point!
Example: I have a CGPoint in window coordinates:
CGPoint windowPoint = CGPointMake(220.0f, 400.0f);
There is aView, which has a superview in a superview in a superview, somewhere deep in the view hierarchy, probably even transformed a few times.
When you get a UITouch, you can ask it for -locationInView: and it will return the coordinates relative to that view.
I need pretty much the same thing. Is there an easy way to accomplish that?
I found a really easy solution:
[self.aView convertPoint:windowPoint fromView:self.window];
Perhaps you could iterate through the superview tree, drilling down to subviews by tag, and add or subtract the subviews' frame.origin values to get a translated windowPoint relative to the view of interest's frame.origin.
I have an instance of UIScrollView containing an instance of UIView. The UIView is just a container for a horizontal array of UIImageView instances. Zooming is provided by UIScrollView and UIScrollViewDelegate. I would like to constrain zooming to occur only along the horizontal axis, with no vertical scaling at all. How do I do this?
Is there a way, for example, to subclass UIView and override the appropriate method to prevent vertical scaling? I like this approach but I am unclear on which method to override and what that overridden method should actually do.
Cheers,
Doug
Similar to what I describe in this answer, you can create a UIView subclass and override the -setTransform: accessor method to adjust the transform that the UIScrollView will try to apply to your UIView. Set this UIView to host your content subviews and make it the subview of the UIScrollView.
Within your overridden -setTransform:, you'll need to take in the transform that the UIScrollView would like to apply and adjust it so that the scaling only takes effect in one direction. From the documentation on how CGAffineTransform matrices are constructed, I believe the following implementation should constrain your scaling to be just along the horizontal direction:
- (void)setTransform:(CGAffineTransform)newValue;
{
    // Keep only the horizontal scale factor (a); everything else stays
    // at the identity, so the scroll view's zoom can't scale vertically.
    CGAffineTransform constrainedTransform = CGAffineTransformIdentity;
    constrainedTransform.a = newValue.a;
    [super setTransform:constrainedTransform];
}
Using OS 3.0, you can tell the scroll view to zoom to a rect. I have this in my logic that detects taps.
CGRect zoomRect = [self zoomRectForScale:newScale withCenter:CGPointMake(tapPoint.x, tapPoint.y) inScrollView:scrollView];
[scrollView zoomToRect:zoomRect animated:YES];
Then, for the other part, you will have to stretch your image views by the ratio of the new frame to the original and center them at the same center point. You can do this in an animation timed the same as the zoom animation so that it looks right, but I think this is the only way to do it.
In scrollViewDidZoom:, adjust your content view's variables based on zoomScale, reset zoomScale to 1.0, then do setNeedsDisplay on the content view. Handle the actual zoom (in whatever direction you want) in your content view's drawRect:.
The Ugly Details:
While the zoom is in progress, the UIScrollView changes contentOffset and zoomScale, so save those prior values in scrollViewWillBeginZooming: and in scrollViewDidZoom: so you can compute a new contentOffset yourself according to the zoom.
Since changing zoomScale will immediately fire another scrollViewDidZoom:, you must set a BOOL before (and clear after) resetting the zoomScale. Test the BOOL at the start of scrollViewDidZoom: and return if true.
You may need to inhibit scrollViewDidScroll: while the zoom is in progress (test a BOOL; set it in scrollViewWillBeginZooming: and clear it in scrollViewDidEndZooming:) so your own contentOffsets are used while the zoom is in progress.