I'm having a strange problem with UIWebView. I have to set its orientation manually by rotating and resizing it, but after I've done that there are areas of the web content that aren't usable.
The iPad is in Portrait, but the web content is built in Landscape. So what I do is rotate the container by 90 degrees, set the bounds of the container, and resize the container and webframe.
Here's the code I use for this:
// Rotate 90 degrees counter-clockwise.
CGAffineTransform transform = CGAffineTransformMakeRotation(-M_PI_2);
CGRect contentRect = CGRectMake(0, 0, 1024, 768);

// Set the container's bounds (landscape-sized) and apply the rotation.
[_container setBounds:contentRect];
[_container setTransform:transform];

// Set the frames (portrait-sized, matching the screen).
CGRect containerFrame = CGRectMake(0, 0, 768, 1024);
CGRect webViewFrame = CGRectMake(0, 0, 768, 1024);
[_webview setFrame:webViewFrame];
[_container setFrame:containerFrame];
So after this rotation happens, the iPad is still in Portrait and the web content is displayed correctly on the screen (rotated), but content below 768 pixels doesn't respond to touches.
I'm not sure what's going on. This has stumped me for a while now. Any ideas?
I want to create a scrollView that works exactly like you pan/zoom an image in the Photo app:
- A landscape image is aspect-fit on the portrait screen,
- You can zoom into the image,
- If you rotate the device while zoomed in (to landscape), the image remains centered,
- And when you zoom back out, the image is aspect-fit again on the new landscape screen (stretched full screen).
So I need aspect fit, and zooming features at once.
I have implemented a solution where I lay out the scrollView's content "by hand" in layoutSubviews to get the aspect fit, but that disturbs the zooming behaviour.
Is there a neat UIKit way to handle this?
Or I have to create my own implementation here?
You need to enable zooming (and set the min and max zoom scale) and then implement scrollViewDidZoom. Here's some sample code to get you started that handles centering the image. You can tweak it to do the other parts:
- (void)scrollViewDidZoom:(UIScrollView *)scrollView
{
    CGSize boundsSize = scrollView.bounds.size;
    CGRect frameToCenter = imageView.frame;

    // center horizontally
    if (frameToCenter.size.width < boundsSize.width)
        frameToCenter.origin.x = (boundsSize.width - frameToCenter.size.width) / 2;
    else
        frameToCenter.origin.x = 0;

    // center vertically
    if (frameToCenter.size.height < boundsSize.height)
        frameToCenter.origin.y = (boundsSize.height - frameToCenter.size.height) / 2;
    else
        frameToCenter.origin.y = 0;

    imageView.frame = frameToCenter;
}
Note: this code assumes you keep a reference to the UIImageView (imageView in this example) in your UIScrollView.
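For context, a minimal sketch of the setup side (illustrative values, assuming the controller is the scroll view's delegate and keeps the imageView reference mentioned above), including an aspect-fit minimum zoom like the Photos app:
- (void)viewDidLoad
{
    [super viewDidLoad];
    scrollView.delegate = self;  // so scrollViewDidZoom: gets called

    // Aspect fit: the minimum zoom makes the image exactly fit the bounds.
    CGFloat xScale = scrollView.bounds.size.width  / imageView.bounds.size.width;
    CGFloat yScale = scrollView.bounds.size.height / imageView.bounds.size.height;
    scrollView.minimumZoomScale = MIN(xScale, yScale);
    scrollView.maximumZoomScale = 3.0f;                  // illustrative
    scrollView.zoomScale = scrollView.minimumZoomScale;  // start aspect-fit
    scrollView.contentSize = imageView.frame.size;
}

// The scroll view zooms whichever view this returns.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)sv
{
    return imageView;
}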
I'm using a pinch gesture to let users resize an image.
-(void)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform,
                                                       recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}
The issue is that applying CGAffineTransformScale resizes the frame but simply stretches the existing image bitmap, so the image looks blurry.
How can I refresh the image so that it is rendered correctly at the new frame size? Thanks
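One common approach, sketched here under the assumption that the view re-renders its content when redrawn (for a plain UIImageView, sharpness is ultimately limited by the source image's resolution): keep the cheap transform while the fingers move, then bake the accumulated scale into the frame when the gesture ends and redraw.
-(void)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    // Cheap live scaling while the gesture is in flight.
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform,
                                                       recognizer.scale, recognizer.scale);
    recognizer.scale = 1;

    if (recognizer.state == UIGestureRecognizerStateEnded) {
        UIView *view = recognizer.view;
        CGRect finalFrame = view.frame;  // frame already reflects the transform
        view.transform = CGAffineTransformIdentity;
        view.frame = finalFrame;         // adopt the new size for real
        [view setNeedsDisplay];          // re-render the content sharply
    }
}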
I have a custom UIImageView that I can drag around the screen by applying a translation (xDif and yDif are the distances the finger moved):
CGAffineTransform translate = CGAffineTransformMakeTranslation(xDif, yDif);
[self setTransform: CGAffineTransformConcat([self transform], translate)];
Let's say I moved the ImageView by 50 px in both the x and y directions. I then try to rotate the ImageView (via a gesture recognizer) with:
CGAffineTransform transform = CGAffineTransformMakeRotation([recognizer rotation]);
myImageView.transform = transform;
What happens is that the ImageView suddenly jumps back to where it was originally located (before the translation), instead of rotating in the moved position (+50 px in both directions).
(It seems that no matter how I translate the view, the self.center of the ImageView subclass stays the same, where it was originally placed in IB.)
Another problem: if I rotate the ImageView by 30 degrees and then try to rotate it a bit more, it again starts from the original position (angle = 0) and goes from there. Why doesn't it continue from 30 degrees instead of 0?
You are overwriting the earlier transform. To add to the current transform, you should do this –
myImageView.transform = CGAffineTransformRotate(myImageView.transform, recognizer.rotation);
Since you're changing the transform property serially, you should use CGAffineTransformRotate, CGAffineTransformTranslate and CGAffineTransformScale so that you build on the existing transform instead of creating a new one.
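A sketch of both handlers composing onto the current transform (remember to reset each recognizer's delta so it isn't applied cumulatively):
- (void)handleRotation:(UIRotationGestureRecognizer *)recognizer {
    // Compose with whatever transform is already there (translation, scale, ...).
    recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform,
                                                        recognizer.rotation);
    recognizer.rotation = 0;  // consume the delta so it isn't applied twice
}

- (void)handlePan:(UIPanGestureRecognizer *)recognizer {
    CGPoint d = [recognizer translationInView:recognizer.view.superview];
    // Concat (rather than CGAffineTransformTranslate) so the translation is
    // applied in superview coordinates even when the view is already rotated.
    recognizer.view.transform = CGAffineTransformConcat(recognizer.view.transform,
                                                        CGAffineTransformMakeTranslation(d.x, d.y));
    [recognizer setTranslation:CGPointZero inView:recognizer.view.superview];
}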
My goal is simple: I want to create a program that displays a UIImage and, when swiped from bottom to top, displays another UIImage. The images here could be a happy face/sad face. The sad face is the starting point, the happy face the end point; while swiping, the part of the screen below the finger should show the happy face.
So far I have tried to solve this with the frame and bounds properties of the UIImageView I use for the happy face image.
What this piece of code does is wrong, because the transition starts in the center of the screen rather than at the bottom. Notice that the origins of both frame and bounds are at 0,0...
I have read numerous pages about frames and bounds, but I don't get it. Any help is appreciated!
The loadImages method is called only once.
- (void)loadImages {
    sadface = [UIImage imageNamed:@"face-sad.jpg"];
    happyface = [UIImage imageNamed:@"face-happy.jpg"];

    UIImageView *face1view = [[UIImageView alloc] init];
    face1view.image = sadface;
    [self.view addSubview:face1view];

    CGRect frame;
    CGRect contentRect = self.view.frame;
    frame = CGRectMake(0, 0, contentRect.size.width, contentRect.size.height);
    face1view.frame = frame;

    face2view = [[UIImageView alloc] init];
    face2view.layer.masksToBounds = YES;
    face2view.contentMode = UIViewContentModeScaleAspectFill;
    face2view.image = happyface;
    [self.view addSubview:face2view];
    frame = CGRectMake(startpoint.x, 0, contentRect.size.width, contentRect.size.height);
    face2view.frame = frame;
    face2view.clipsToBounds = YES;
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movepoint = [[touches anyObject] locationInView:self.view];
    NSLog(@"movepoint: %f %f", movepoint.x, movepoint.y);
    face2view.bounds = CGRectMake(0, 0, 320, 480 - movepoint.y);
}
The UIImages and UIImageViews are properly disposed of in the dealloc function.
Indeed, you seem to be confused about frames and bounds. In fact, they are easy. Always remember that any view has its own coordinate system. The frame, center and transform properties are expressed in the superview's coordinates, while the bounds is expressed in the view's own coordinate system. If a view doesn't have a superview (i.e., it isn't installed in a view hierarchy yet), it still has a frame; in iOS the frame property is calculated from the view's bounds, center and transform. You may ask what the hell frame and center mean when there's no superview. They are used when you add the view to another view, allowing you to position the view before it's actually visible.
The most common example when a view's bounds differ from its frame is when it is not in the upper left corner of its superview: its bounds.origin may be CGPointZero, while its frame.origin is not. Another classic example is UIScrollView, which frequently modifies its bounds.origin to make subviews scroll (in fact, modifying the origin of the coordinate system automatically moves every subview without affecting their frames), while its own frame is constant.
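A small sketch with made-up numbers to make this concrete:
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
UIView *child     = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 100, 100)];
[container addSubview:child];

// child.frame.origin  == {50, 50} -- expressed in container's coordinates
// child.bounds.origin == {0, 0}   -- the child's own coordinate system

// Shifting the container's bounds origin (what UIScrollView does when it
// scrolls) visually moves every subview without touching their frames:
container.bounds = CGRectMake(20, 20, 200, 200);
// child.frame is still {50, 50, 100, 100}, but the child now appears
// 20 points up and to the left inside the container.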
Back to your code. First of all, when you already have images to display in image views, it makes sense to init the views with their images:
UIImageView *face1view = [[UIImageView alloc] initWithImage: sadface];
That helps the image view size itself properly right away. Initializing views with plain -init is not recommended, because it might skip important code in their designated initializer, -initWithFrame:.
Since you add face1view to self.view, you should really use its bounds rather than its frame:
face1view.frame = self.view.bounds;
Same goes for the happier face. Then, in -touchesMoved:… you should either change face2view's frame to move it inside self.view, or (if self.view contains no other subviews besides the faces) modify self.view's bounds to move both faces together. Instead, you do something weird like vertically stretching the happy face inside face2view. If you want the happy face to slide in from the bottom of self.view, set its frame initially like this (off-screen, so not visible at first):
face2view.frame = CGRectOffset(face2view.frame, 0, CGRectGetHeight(self.view.bounds));
If you choose to swap the faces by changing the image views' frames (as opposed to changing self.view's bounds), I guess you want to change both views' frame origins, so that the sad face slides up and out while the happy face slides up and in. Alternatively, if you want the happy face to simply cover the sad one:
face2view.frame = face1view.frame;
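A sketch of that sliding variant in -touchesMoved:…, assuming face2view keeps its full-screen size and starts offset one screen height down as shown above:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.view];

    // Slide the full-screen-sized happy face up so its top edge tracks the
    // finger; at p.y == 0 it fully covers the sad face.
    CGRect frame = face2view.frame;
    frame.origin.y = p.y;
    face2view.frame = frame;
}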
Your problem seems to have something to do with the face2view.bounds in touchesMoved.
You are setting the bounds of this view to the rect, x:0, y:0, width:320, height:480 - y
x = 0 == left on the x axis
y = 0 == top on the y axis
So you are putting this image frame at the upper left corner, and making it fill the whole view. That's not what you want. The image simply becomes centered in this imageView.
I'm trying to draw a small image on a UIScrollView in my iPhone app. I started with a UIImage created from a PNG in my bundle, and that works OK. Every time the zooming/panning stops, the scroll view's delegate recalculates the frame of that image view (a subview of the UIScrollView's contentView) and calls setNeedsDisplay. The frame gets smaller and smaller in width/height as you zoom into the scroll view, because I want the image to stay the same size on the screen. While the user is zooming the image is the wrong size, but as soon as the zooming stops it gets corrected. Not great, but not the worst thing either.
The problem is that now I want to draw the image myself rather than use a static PNG. I've subclassed UIView and get into its drawRect: method, but I can't get the maths right to draw a smooth-looking path when the view is zoomed in. I thought I could just scale down the line width and the radius of the circle I'm drawing (CGContextAddArc), but it looks as if the iPhone has drawn a thin-lined circle two or three pixels across and then enlarged those pixels, rather than zooming the view in and drawing a highly accurate circle.
This is the ViewController resizing the UIView (which is a subview of the UIScrollView's contentView):
float scaler = (1 / scrollView.zoomScale);
[targetView setFrame:CGRectMake(viewCoords.x - 11 * scaler, viewCoords.y - 11 * scaler,
                                22 * scaler, 22 * scaler)];
[targetView setNeedsDisplay];
This is the drawRect: of the UIView:
- (void)drawRect:(CGRect)rect
{
    float x = rect.origin.x + rect.size.width / 2.0;
    float y = rect.origin.y + rect.size.height / 2.0;
    float radius = (rect.size.width - 2.0 * lineWidth) / 2.0;

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(ctx, lineWidth);
    CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
    CGContextAddArc(ctx, x, y, radius, 0, 2 * M_PI, 1);
    CGContextClosePath(ctx);
    CGContextStrokePath(ctx);
}
I'm open to the idea of placing the UIView subclass elsewhere, but it must move in sync with the UIScrollView's contentView.
See my answer to this question. The UIScrollView applies a transform to your content view when it does the pinch zooming, which is what's causing your blurriness. You can override this to draw your own content sharply if you follow the method I describe.
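A minimal sketch of the usual trick, assuming targetView is the custom-drawing view: when zooming ends, raise the view's contentScaleFactor to match the zoom so its backing store has enough pixels, instead of letting the scroll view magnify a low-resolution rasterization.
// UIScrollViewDelegate callback.
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(float)scale
{
    // Back the layer with scale * screen-scale pixels per point so the
    // stroked circle is drawn crisply instead of as a scaled-up bitmap.
    targetView.contentScaleFactor = scale * [UIScreen mainScreen].scale;
    [targetView setNeedsDisplay];
}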