Trouble converting MapKit user coordinates to screen coordinates - iPhone

OK this is actually three distinct questions in one thread:
1) I'm using:
- (void)viewDidLoad
{
[super viewDidLoad];
[mapView setFrame:CGRectMake(-100, -100, 520, 520)];
[mapView setAutoresizesSubviews:YES];
mapView.showsUserLocation = YES;
locManager = [[CLLocationManager alloc] init];
locManager.delegate = self;
[locManager startUpdatingLocation];
[locManager startUpdatingHeading];
[bt_toolbar setEnabled:YES];
}
- (IBAction) CreatePicture
{
CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(mapView.userLocation.coordinate.latitude, mapView.userLocation.coordinate.longitude);
CGPoint annPoint = [self.mapView convertCoordinate:coord toPointToView:self.mapView];
mapPic = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"pic.png"]];
mapPic.frame = CGRectMake(annPoint.x, annPoint.y, 32, 32);
[self.view addSubview:mapPic];
}
to place an image using CGRectMake over the user's coordinates, but sadly the image is not being placed in the right spot.
And if I move the map, the picture is always placed with the exact same offset relative to the user's location. What am I missing? Shouldn't it have been placed directly above the user? I'm guessing the (-100, -100) offset I gave the mapView in the first place is what's causing this (I've got a toolbar below, hence the offset).
2) My iOS 5 simulator is acting crazy, placing my location in Glendale, Phoenix (shouldn't it have been Cupertino??) for some reason and acting as if I were moving due east; everything works fine on my iPhone 4, though (apart from the incorrect placement of the picture). Any ideas?
3) I've been thinking about using annotations instead, but I've heard they're static (and I really need them dynamic). Is this true?
Thanks!

When adding a subview, its frame origin is relative to that of its superview. In this case you derived an origin from the map view and then added the image to the view controller's view. You would need to convert your image's origin to the self.view's coordinate system like this:
CGPoint imageOrigin = [self.view convertPoint:annPoint fromView:self.mapView];
CGRect picFrame = mapPic.frame;
picFrame.origin = imageOrigin;
mapPic.frame = picFrame;
[self.view addSubview:mapPic];
NB: this is untested code, but the principle is that you need to convert from the mapView's coordinate space (which is what you got from [self.mapView convertCoordinate:coord toPointToView:self.mapView]) to self.view's coordinate space. This example is a little contrived to be illustrative, since an even simpler approach would be to use self.view's coordinate space in the first place:
CGPoint annPoint = [self.mapView convertCoordinate:coord toPointToView:self.view];

I think you would be better off using a custom MKPinAnnotationView with your own image, as per this answer:
Add image behind MKPinAnnotationView
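For reference, a minimal sketch of that approach might look like the following (untested; it uses a plain MKAnnotationView rather than MKPinAnnotationView, since the pin view draws its own pin image, and reuses "pic.png" from the question):

- (MKAnnotationView *)mapView:(MKMapView *)aMapView viewForAnnotation:(id<MKAnnotation>)annotation
{
    // Keep the default blue dot for the user's location.
    if ([annotation isKindOfClass:[MKUserLocation class]])
        return nil;
    static NSString *reuseId = @"PicAnnotation";
    MKAnnotationView *view = [aMapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
    if (view == nil) {
        // autorelease here if you're not using ARC
        view = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:reuseId];
        view.image = [UIImage imageNamed:@"pic.png"]; // your custom image
    } else {
        view.annotation = annotation;
    }
    return view;
}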

MKMapView provides the following methods for translating between view space and geographic coordinates:
– convertCoordinate:toPointToView:
– convertPoint:toCoordinateFromView:
– convertRegion:toRectToView:
– convertRect:toRegionFromView:
These are good for tasks like translating touches to map coordinates, as in the sketch below. For the task you describe, I think a map overlay would be more appropriate than trying to draw on the map using a subview. An overlay is a kind of annotation that allows you to draw whatever content you like, but lets the map take care of determining when the overlay is visible and needs to be drawn at all.
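For example, translating a tap into a geographic coordinate might look like this (a sketch only; the tap gesture recognizer setup is assumed):

- (void)handleTap:(UITapGestureRecognizer *)gesture
{
    CGPoint touchPoint = [gesture locationInView:self.mapView];
    CLLocationCoordinate2D coord = [self.mapView convertPoint:touchPoint
                                         toCoordinateFromView:self.mapView];
    NSLog(@"Tapped at latitude %f, longitude %f", coord.latitude, coord.longitude);
}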
3) I've been thinking about using Annotations instead, but i've heard
they're static (and i really need them dynamic). Is this true?
What do you mean by "static" vs. "dynamic" here? Do you mean that you want to change the content in the overlay view? Or do you want to change the location of the overlay itself? I think it should be possible to update the overlay view as often as you need to. I'm not sure if you can move the overlay by simply adjusting its boundingMapRect property, but I'd expect that to work. If it doesn't, you can always just remove the overlay and add it again.
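Something along these lines (untested; MyOverlay is a hypothetical MKOverlay implementation of your own that exposes a settable bounding rect):

// "Move" an overlay by removing it and adding it back.
[mapView removeOverlay:myOverlay];
myOverlay.boundingMapRect = newMapRect; // requires a readwrite property on MyOverlay
[mapView addOverlay:myOverlay];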
Trying to create your own map overlay system instead of using the tools provided by MapKit should be the last thing on your list of options. Life is always easier when you can work with a framework instead of against it, and your code is much less likely to break in the future.

Related

Loading UIView transform & center from settings gives different position

I'm using pan, pinch, and rotate UIGestureRecognizers to allow the user to put certain UI elements exactly where they want them. The code from http://www.raywenderlich.com/6567/uigesturerecognizer-tutorial-in-ios-5-pinches-pans-and-more (and similar code from http://mobile.tutsplus.com/tutorials/iphone/uigesturerecognizer/) both give me what I need for the user to place these UI elements as they desire.
When the user exits "UI layout" mode, I save the UIView's transform and center like so:
NSString *transformString = NSStringFromCGAffineTransform(self.transform);
[[NSUserDefaults standardUserDefaults] setObject:transformString forKey:@"UItransform"];
NSString *centerString = NSStringFromCGPoint(self.center);
[[NSUserDefaults standardUserDefaults] setObject:centerString forKey:@"UIcenter"];
When I reload the app, I read the UIView's transform and center like so:
NSString *centerString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIcenter"];
if( centerString != nil )
self.center = CGPointFromString(centerString);
NSString *transformString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UItransform"];
if( transformString != nil )
self.transform = CGAffineTransformFromString(transformString);
And the UIView ends up rotated and scaled correctly, but in the wrong place. Further, upon entering "UI layout" mode again, I can't always grab the view with the various gestures (as though the view as displayed is not the view as understood by the gesture recognizer?)
I also have a reset button that sets the UIView's transform to the identity and its center to whatever it is when it loads from the NIB. But after loading the altered UIView center and transform, even the reset doesn't work. The UIView's position is wrong.
My first thought was that since those gesture code examples alter center, that rotations must be happening around different centers (assuming some unpredictable sequence of moves, rotations, and scales). As I don't want to save the entire sequence of edits (though that might be handy if I want to have some undo feature in the layout mode), I altered the UIPanGestureRecognizer handler to use the transform to move it. Once I got that working, I figured just saving the transform would get me the current location and orientation, regardless of in what order things happened. But no such luck. I still get a wacky position this way.
So I'm at a loss. If a UIView has been moved and rotated to a new position, how can I save that location and orientation in a way that I can load it later and get the UIView back to where it should be?
Apologies in advance if I didn't tag this right or didn't lay it out correctly or committed some other stackoverflow sin. It's the first time I've posted here.
EDIT
I'm trying the two suggestions so far. I think they're effectively the same thing (one suggests saving the frame and the other suggests saving the origin, which I think is the frame.origin).
So now the save/load from prefs code includes the following.
Save:
NSString *originString = NSStringFromCGPoint(self.frame.origin);
[[NSUserDefaults standardUserDefaults] setObject:originString forKey:@"UIorigin"];
Load (before loading the transform):
NSString *originString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIorigin"];
if( originString ) {
CGPoint origin = CGPointFromString(originString);
self.frame = CGRectMake(origin.x, origin.y, self.frame.size.width, self.frame.size.height);
}
I get the same (or similar - it's hard to tell) result. In fact, I added a button to just reload the prefs, and once the view is rotated, that "reload" button will move the UIView by some offset repeatedly (as though the frame or transform are relative to itself - which I'm sure is a clue, but I'm not sure what it's pointing to).
EDIT #2
This makes me wonder about depending on the view's frame. From Apple http://developer.apple.com/library/ios/#documentation/WindowsViews/Conceptual/ViewPG_iPhoneOS/WindowsandViews/WindowsandViews.html#//apple_ref/doc/uid/TP40009503-CH2-SW6 (emphasis mine):
The value in the center property is always valid, even if scaling or rotation factors have been added to the view’s transform. The same is not true for the value in the frame property, which is considered invalid if the view’s transform is not equal to the identity transform.
EDIT #3
Okay, so when I'm loading the prefs in, everything looks fine. The UI panel's bounds rect is {{0, 0}, {506, 254}}. At the end of my VC's viewDidLoad method, all still seems okay. But by the time things actually are displayed, bounds is something else. For example: {{0, 0}, {488.321, 435.981}} (which looks like how big it is within its superview once rotated and scaled). If I reset bounds to what it's supposed to be, it moves back into place.
It's easy enough to reset the bounds to what they're supposed to be programmatically, but I'm actually not sure when to do it! I would've thought to do it at the end of viewDidLoad, but bounds is still correct at that point.
EDIT #4
I tried capturing self.bounds in initWithCoder (as it's coming from a NIB), and then in layoutSubviews, resetting self.bounds to that captured CGRect. And that works.
But it seems horribly hacky and fraught with peril. This can't really be the right way to do this. (Can it?) skram's answer below seems so straightforward, but doesn't work for me when the app reloads.
You would save the frame property as well. You can use NSStringFromCGRect() and CGRectFromString().
When loading, set the frame then apply your transform. This is how I do it in one of my apps.
Hope this helps.
UPDATE: In my case, I have draggable UIViews to which rotation and resizing can be applied. I use NSCoding to save and load my objects; example below.
//encoding
....
[coder encodeCGRect:self.frame forKey:@"rect"];
// you can save with NSStringFromCGRect(self.frame);
[coder encodeObject:NSStringFromCGAffineTransform(self.transform) forKey:@"savedTransform"];
//init-coder
CGRect frame = [coder decodeCGRectForKey:@"rect"];
// you can use frame = CGRectFromString(/*load string*/);
[self setFrame:frame];
self.transform = CGAffineTransformFromString([coder decodeObjectForKey:#"savedTransform"]);
What this does is save my frame and transform, and load them when needed. The same method can be applied with NSStringFromCGRect() and CGRectFromString().
UPDATE 2: In your case, you would do something like this:
[self setFrame:CGRectFromString([[NSUserDefaults standardUserDefaults] valueForKey:@"UIFrame"])];
self.transform = CGAffineTransformFromString([[NSUserDefaults standardUserDefaults] valueForKey:@"transform"]);
Assuming you're saving to NSUserDefaults with UIFrame, and transform keys.
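For completeness, the saving side would look something like this (untested sketch; the "UIFrame" and "transform" key names match the snippet above):

NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults setObject:NSStringFromCGRect(self.frame) forKey:@"UIFrame"];
[defaults setObject:NSStringFromCGAffineTransform(self.transform) forKey:@"transform"];
[defaults synchronize]; // force a write; optional in most cases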
I am having trouble reproducing your issue. I used the code below, which does the following:
Adds a view
Moves it by changing the centre
Scales it with a transform
Rotates it with another transform, concatenated onto the first
Saves the transform and centre to strings
Adds another view and applies the centre and transform from the string
This results in two views in exactly the same place and position:
- (void)viewDidLoad
{
[super viewDidLoad];
UIView *view1 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
view1.layer.borderWidth = 5.0;
view1.layer.borderColor = [UIColor blackColor].CGColor;
[self.view addSubview:view1];
view1.center = CGPointMake(150,150);
view1.transform = CGAffineTransformMakeScale(1.3, 1.3);
view1.transform = CGAffineTransformRotate(view1.transform, 0.5);
NSString *savedCentre = NSStringFromCGPoint(view1.center);
NSString *savedTransform = NSStringFromCGAffineTransform(view1.transform);
UIView *view2 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
view2.layer.borderWidth = 2.0;
view2.layer.borderColor = [UIColor greenColor].CGColor;
[self.view addSubview:view2];
view2.center = CGPointFromString(savedCentre);
view2.transform = CGAffineTransformFromString(savedTransform);
}
This ties up with what I would expect from the documentation, in that all transforms happen around the centre point and so that is never affected. The only way I can imagine that you're not able to restore items to their previous state is if somehow the superview was different, either with its own transform or a different frame, or a different view altogether. But I can't tell that from your question.
In summary, the original code in your question ought to be working, so there is something else going on! Hopefully this answer will help you narrow it down.
You should also save the UIView's location:
CGPoint position = CGPointMake(self.view.frame.origin.x, self.view.frame.origin.y);
NSString *positionString = NSStringFromCGPoint(position);
// Do the saving
I'm not sure of everything that's going on, but here are some ideas that may help.
1- skram's solution seems plausible, but it's the bounds you want to save, not the frame. (Note that, if there's been no rotation, the center and bounds define the frame. So, setting the two is the same as setting the frame.)
From the View Programming Guide for iOS you linked to:
Important If a view’s transform property is not the identity
transform, the value of that view’s frame property is undefined and
must be ignored. When applying transforms to a view, you must use the
view’s bounds and center properties to get the size and position of
the view. The frame rectangles of any subviews are still valid because
they are relative to the view’s bounds.
2- Another idea. When you reload the app, you could try the following:
First, set the view's transform to the identity transform.
Then, set the view's bounds and center to the saved values.
Finally, set the view's transform to the saved transform.
Depending on where your app is restarting, it may be starting back up with some of the old geometry. I really don't think this will change anything, but it's easy enough to try.
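A minimal sketch of that order (untested; savedBounds, savedCenter, and savedTransform are assumed to have been loaded from your preferences):

self.transform = CGAffineTransformIdentity; // 1. clear any old transform
self.bounds = savedBounds;                  // 2. restore the saved geometry
self.center = savedCenter;
self.transform = savedTransform;            // 3. reapply the saved transform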
Update: After some testing, it really does seem like this wouldn't have any effect. Changing the transform does not seem to change the bounds or center (although it does change the frame.)
3- Lastly, you may save some trouble by rewriting the pinch gesture recognizer to operate on the bounds rather than the transform. (Again, use bounds, not frame, because an earlier rotation could have rendered the frame invalid.) In this way, the transform is used only for rotations, which, I think, cannot be done any other way without redrawing.
From the same guide, Apple's recommendation is:
You typically modify the transform property of a view when you want to
implement animations. For example, you could use this property to
create an animation of your view rotating around its center point. You
would not use this property to make permanent changes to your view,
such as modifying its position or size within its superview's
coordinate space. For that type of change, you should modify the frame
rectangle of your view instead.
Thanks to all who contributed answers! The sum of them all led me to the following:
The trouble seems to have been that the bounds CGRect was being reset after loading the transform from preferences at startup, but not when updating the preferences while modifying in real time.
I think there are two solutions. One would be to first load the preferences from layoutSubviews instead of from viewDidLoad. Nothing seems to happen to bounds after layoutSubviews is called.
For other reasons in my app, however, it's more convenient to load the preferences from the view controller's viewDidLoad. So the solution I'm using is this:
// UserTransformableView.h
#interface UserTransformableView : UIView {
CGRect defaultBounds;
}
// UserTransformableView.m
- (id)initWithCoder:(NSCoder *)aDecoder {
self = [super initWithCoder:aDecoder];
if( self ) {
defaultBounds = self.bounds;
}
return self;
}
- (void)layoutSubviews {
[super layoutSubviews];
self.bounds = defaultBounds; // re-assert the pre-transform bounds
}

MKOverlayView and OpenGL

I currently have a UIView that draws radar data on top of an MKMapView using OpenGL. Because of the level of detail in the radar image, OpenGL is required (CoreGraphics is not fast enough).
All of the images that I am drawing are saved in MKMapPoints. I chose them over the standard CLLocationCoordinate2D because their lengths do not depend on latitude. The basic method for drawing is this:
Add the GLView as a subview of the MKMapView and set GLView.frame = MKMapView.frame.
Using glOrthof, set the projection of the GLView to equal the current visible MKMapRect of the map. Here is the code that does this:
CLLocationCoordinate2D coordinateTopLeft =
[mapView convertPoint:CGPointMake(0, 0)
toCoordinateFromView:mapView];
MKMapPoint pointTopLeft = MKMapPointForCoordinate(coordinateTopLeft);
CLLocationCoordinate2D coordinateBottomRight =
[mapView convertPoint:CGPointMake(mapView.frame.size.width,
mapView.frame.size.height)
toCoordinateFromView:mapView];
MKMapPoint pointBottomRight = MKMapPointForCoordinate(coordinateBottomRight);
glLoadIdentity();
glOrthof(pointTopLeft.x, pointBottomRight.x,
pointBottomRight.y, pointTopLeft.y, -1, 1);
Set the viewport to be the correct size using glViewport(0, 0, backingWidth, backingHeight), where backingWidth and backingHeight are the size of the mapView in points.
Draw using glDrawArrays. Not sure if this matters, but GL_VERTEX_ARRAY and GL_TEXTURE_COORD_ARRAY are both enabled during the draw.
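For illustration, steps 3 and 4 together might look like this (a sketch only; backingWidth and backingHeight are the variables described above, and the vertex and texture-coordinate buffers are hypothetical GLfloat arrays):

glViewport(0, 0, backingWidth, backingHeight);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);    // positions in MKMapPoint space, converted to GLfloat
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);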
Using this method, everything works fine. The drawing is performed as it is supposed to be. The only problem is that, since it is a subview of the mapView (and not an overlay), the radar image is drawn on top of any other MKAnnotations and MKOverlays. I need this layer to be drawn under the other annotations and overlays.
What I tried in order to get this working was to make the glView a subview of a custom MKOverlayView instead of the mapView. I gave the MKOverlay a boundingMapRect of MKMapRectWorld and set the frame of the glView the same way that I set the projection (since the frame of an MKOverlayView is determined by MKMapPoints and not CGPoints). Again, here is the code.
CLLocationCoordinate2D coordinateTopLeft =
[mapView convertPoint:CGPointMake(0, 0)
toCoordinateFromView:mapView];
MKMapPoint pointTopLeft = MKMapPointForCoordinate(coordinateTopLeft);
CLLocationCoordinate2D coordinateBottomRight =
[mapView convertPoint:CGPointMake(mapView.frame.size.width,
mapView.frame.size.height)
toCoordinateFromView:mapView];
MKMapPoint pointBottomRight = MKMapPointForCoordinate(coordinateBottomRight);
glRadarView.frame = CGRectMake(pointTopLeft.x, pointTopLeft.y,
pointBottomRight.x - pointTopLeft.x,
pointBottomRight.y - pointTopLeft.y);
When I do this, the glView is positioned correctly on the screen (in the same place that it was while it was a subview of the mapView), but the drawing no longer works correctly. When the image does come up, it is not the right size and not in the correct location. I did a check, and backingWidth and backingHeight are still the size of the view in points (as they should be).
Any idea why this is not working?
I've been away from iPhone development for too long to really fully grok your code, but I recall from when I worked with OpenGL on the iPhone some time ago that I found it necessary to maintain my own z-index and simply draw items in that order. Each draw operation happened properly rotated in 3D, but something drawn later was always on top of something drawn earlier. One of my early test programs drew a grid on a surface and then caused the whole thing to flip. My expectation was that the grid would disappear when the back of the object was facing me, but it remained visible because it was drawn later in a separate operation (IIRC).
It's possible that I was doing something wrong that caused that problem, but my solution was to order my draws by a z-index.
Can you draw the image first using your first method?
I think that you should just set the viewport before setting the projection mode.
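In other words, something like this order (an untested sketch, reusing the names from the question):

glViewport(0, 0, backingWidth, backingHeight); // viewport first
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(pointTopLeft.x, pointBottomRight.x,
         pointBottomRight.y, pointTopLeft.y, -1, 1); // then the projection
glMatrixMode(GL_MODELVIEW);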

Making map pins spread out from a cluster

I've got two custom annotation classes for my map: one for a single post tied to a location, and one for a cluster of those posts. The cluster stores pointers to all of the posts it contains, as well as a central lat/long location (calculated using the locations of the posts it contains). I have the behaviour that when I click on a cluster annotation it removes the cluster and adds its posts to the map. What I want is to change the pin-drop annotation when expanding the clusters to an animation whereby the new pins move outwards from the centre of the cluster to their new locations. However, I also have some posts that are never clustered due to their distance from other points. Obviously they can't have this animation as there is no associated location for them to move outward from. Does anyone know how I might implement this?
Making the pins expand from the cluster center is actually pretty easy. When you make the new single-pin annotations, set their coordinates to the cluster center:
id <MKAnnotation> pin;
CLLocationCoordinate2D clusterCenter;
// ...
pin.coordinate = clusterCenter;
In viewForAnnotation:, don't animate the new pins:
MKPinAnnotationView *pinView;
// ...
pinView.animatesDrop = NO;
Then, after you've added the pins to the map view, you'll animate moving them to their real positions:
MKMapView *mapView;
id <MKAnnotation> pin;
// ...
// probably loop over annotations
[mapView addAnnotation:pin];
NSTimeInterval interval = 1.0; // or whatever
[UIView animateWithDuration:interval animations:^{
// probably loop over annotations here again
CLLocationCoordinate2D realCoord;
// ...
pin.coordinate = realCoord;
}];
As for the problem of non-clustered pins, that's harder to answer without knowing the implementation in detail, but I think there are lots of possibilities. You could just have a simple flag that skips the animation. Or you could just treat them exactly the same, and still "cluster" them even when they're solo, and still animate them ... not maximally efficient, but it would work and your code would be cleaner.
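Putting the fragments above together, an expansion might look like this sketch (untested; MyClusterAnnotation, MyPostAnnotation, its posts array, and the realCoordinate stash property are all hypothetical names standing in for your own classes):

- (void)expandCluster:(MyClusterAnnotation *)cluster
{
    [self.mapView removeAnnotation:cluster];
    for (MyPostAnnotation *post in cluster.posts) {
        post.realCoordinate = post.coordinate; // remember the true position
        post.coordinate = cluster.coordinate;  // start each pin at the cluster centre
        [self.mapView addAnnotation:post];
    }
    [UIView animateWithDuration:1.0 animations:^{
        for (MyPostAnnotation *post in cluster.posts) {
            post.coordinate = post.realCoordinate; // animate out to the real position
        }
    }];
}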

MapKit annotation error correction on iPhone

I have 4 annotations with the same lat/long, as they point to locations in one building. Since they share a common lat/long, only one of them shows on the map. Is there any way to apply some small correction so that I can show them a little side by side?
Here is my annotation code:
MKCoordinateRegion SecondRegiona;
SecondRegiona.center.latitude = 111.31888;
SecondRegiona.center.longitude = 203.861;
MyAnnotation *aSecondAnnotationa = [[[MyAnnotation alloc] init]autorelease];
aSecondAnnotationa.title = [listItems objectAtIndex:15];//@"3rd Annotation";
aSecondAnnotationa.subtitle = [listItems objectAtIndex:16];
aSecondAnnotationa.coordinate = SecondRegiona.center;
Why would you expect the platform to position something someplace different than where you told it to place it? That certainly sounds undesirable and not something I would call "error correction".
You need to detect that state and do something reasonable, like coalescing the annotations into a single custom annotation or adjusting the lat/longs to position them nearby according to your needs.
I would detect whether points are really close together (maybe use the distance formula to see how close points are?) and use - (CLLocationCoordinate2D)convertPoint:(CGPoint)point toCoordinateFromView:(UIView *)view to get CLLocationCoordinate2D of a point close to the original pin (say, 3 pixels to the right, 3 pixels down). You would use this CLLocationCoordinate2D to display the new, "adjacent" point.
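That idea might look something like this (untested sketch; offsetAnnotation is assumed to be your own annotation class with a settable coordinate):

CGPoint p = [mapView convertCoordinate:annotation.coordinate toPointToView:mapView];
p.x += 3.0; // nudge 3 points right
p.y += 3.0; // and 3 points down
offsetAnnotation.coordinate = [mapView convertPoint:p toCoordinateFromView:mapView];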
As for what "coalesce" means, Nick means to merge points together -- take points that are really close to each other and display only one to represent the close points. I guess this isn't what you're looking for though.
Hope this helps!

UIView transparency shows how the sausages are made!

I have a UIView container that has two UIImageViews inside it, one partially obscuring the other (they're being composed like this to allow for occasional animation of one "layer" or another).
Sometimes I want to make this container 50% alpha, so that what the user sees fades. Here's the problem: setting my container view to 50% alpha makes all my subviews inherit this as well, and now you can see through the first subview into the second, which in my application creates a weird X-ray effect that I'm not looking for.
What I'm after, of course, is for what the user currently sees to become 50% transparent-- the equivalent of flattening the visible view into one bitmap, and then making that 50% alpha.
What are my best bets for accomplishing this? Ideally I'd like to avoid actually flattening the views dynamically if I can help it, but best practices on that are welcome as well. Am I missing something obvious? Since most views have subviews and would run into this issue, I feel like there's some obvious solution here.
Thanks!
EDIT: Thanks for the thoughts folks. I'm just moving one image around on top of another image, which it only partially obscures. And this pair of images has to move together sometimes, as well. And sometimes I want to fade the whole thing out, wherever it is, and whatever the state of the image pair is at the moment. Later, I want to bring it back and continue animating it.
Taking a snapshot of the container, either by rendering its layer (?) or by doing some other offscreen compositing on the fly before alpha'ing out the whole thing, is definitely possible, and I know there are a couple ways to do it. But what if the animation should continue to happen while the whole thing's at 50% alpha, for example?
It sounds like there's no obvious solution to what I'm trying to do, which seems odd to me, but thank you all for the input.
Recently I had this same problem, where I needed to animate layers of content with a global transparency. Since my animation was quite complex, I discovered that flattening the UIView hierarchy made for a choppy animation.
The solution I found was using CALayers instead of UIViews, and setting the .shouldRasterize property to YES in the container layer, so that any sublayers would be flattened automatically prior to applying the opacity.
Here's what a UIView could look like:
#import <QuartzCore/QuartzCore.h> //< Needed to use CALayers
...
@interface MyView : UIView{
CALayer *layer1;
CALayer *layer2;
CALayer *compositingLayer; //< Layer where compositing happens.
}
...
- (void)initialization
{
UIImage *im1 = [UIImage imageNamed:@"image1.png"];
UIImage *im2 = [UIImage imageNamed:@"image2.png"];
/***** Setup the layers *****/
layer1 = [CALayer layer];
layer1.contents = im1.CGImage;
layer1.bounds = CGRectMake(0, 0, im1.size.width, im1.size.height);
layer1.position = CGPointMake(100, 100);
layer2 = [CALayer layer];
layer2.contents = im2.CGImage;
layer2.bounds = CGRectMake(0, 0, im2.size.width, im2.size.height);
layer2.position = CGPointMake(300, 300);
compositingLayer = [CALayer layer];
compositingLayer.shouldRasterize = YES; //< Here we turn this into a compositing layer.
compositingLayer.frame = self.bounds;
/***** Create the layer tree *****/
[compositingLayer addSublayer:layer1]; //< Add first, so it's in back.
[compositingLayer addSublayer:layer2]; //< Add second, so it's in front.
// Don't mess with the UIView's layer, it's picky; just add sublayers to it.
[self.layer addSublayer:compositingLayer];
}
- (IBAction)animate:(id)sender
{
/* Since we're using CALayers, we can use implicit animation
* to move and change the opacity.
* Layer2 is over Layer1, the compositing is partially transparent.
*/
layer1.position = CGPointMake(200, 200);
layer2.position = CGPointMake(200, 200);
compositingLayer.opacity = 0.5;
}
I think that flattening the UIView into a UIImageView is your best bet if you have your heart set on providing this feature. Also, I don't think that flattening the image is going to be as complicated as you might think. Take a look at the answer provided in this question.
Set the bottom UIImageView to have .hidden = YES, then set .hidden = NO when you setup a cross-fade animation between the top and bottom UIImageViews.
When you need to fade the whole thing, you can either set .alpha = 0.5 on the container view or the top image view - it shouldn't matter. It may be computationally more efficient to set .alpha = 0.5 on the image view itself, but I don't know enough about the graphics pipeline on the iPhone to be sure about that.
The only downside to this approach is that you can't do a cross-fade when your top image is set to 50% opacity.
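For what it's worth, the cross-fade itself could be a simple block animation along these lines (untested; topImageView and bottomImageView stand in for the two image views from the question):

bottomImageView.hidden = NO;
bottomImageView.alpha = 0.0;
[UIView animateWithDuration:0.5 animations:^{
    topImageView.alpha = 0.0;    // fade the front image out
    bottomImageView.alpha = 1.0; // while the back image fades in
}];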
A way to do this would be to add the image views to the UIWindow directly (the container would be a fake one).