I am a little confused about the behavior of animationImages in UIImageView. The documentation says:
Setting this property to a value other than nil hides the image
represented by the image property.
Yet if I call initWithImage: and then set the animationImages property, the image passed to the init call is displayed. When I start the animation, it goes through the array as expected, but once the animation is done, it reverts back to image. Is the documentation inaccurate?
What I ultimately want is for the UIImageView to display the first image in the array, then, once the animation is complete, display the last image in the array. From what I can tell, it seems like I'll need to set image manually to the first frame at the beginning, then set it to the last frame right before I start the animation?
You can set the image property to the image you want to start with, and while the animation is running change it to the image you want to end on, so that when the animation stops the last image is showing.
Yes, set the image property to the image you want to see as the first image. Just before you call startAnimating, change it to the last image, so that after the animation stops it rests on the last image:
In my setup:
animatedIV.animationImages = imageArray;
animatedIV.image = [imageArray objectAtIndex:0];
animatedIV.animationRepeatCount = 1;
before I start animation:
animatedIV.image = [imageArray lastObject];
[animatedIV startAnimating];
The short version: How do I know what region of a UIImageView contains the image, and not aspect ratio padding?
The longer version:
I have a UIImageView of fixed size as pictured:
I am loading photos into this UIViewController, and I want to retain the original photo's aspect ratio, so I set the contentMode to Aspect Fit. This ensures the entire photo is displayed within the UIImageView, but with the side effect of adding some padding (shown in red):
No problem so far... But now I am doing face detection on the original image. The face detection code returns a list of CGRects which I then render on top of the UIImageView (I have subclassed UIView and laid out an instance in IB with the same size and offset as the UIImageView).
This approach works great when the photo is not padded out to fit the UIImageView. However, if there is padding, it introduces some skew, as seen here in green:
I need to take the image padding into account when rendering the boxes, but I do not see a way to retrieve it.
Since I know the original image size and the UIImageView size, I can do some algebra to calculate where the padding should be. However it seems like there is probably a way to retrieve this information, and I am overlooking it.
I do not use image views often, so this may not be the best solution. But since no one else has answered the question, I figured I'd throw out a simple mathematical solution that should solve your problem:
UIImage *selectedImage;  // the image you want to display
UIImageView *imageView;  // the image view holding selectedImage

// Under aspect fit the image is scaled by the smaller of the two ratios,
// so compute the height the image actually occupies on screen.
CGFloat scale = MIN(imageView.frame.size.width / selectedImage.size.width,
                    imageView.frame.size.height / selectedImage.size.height);
CGFloat displayedHeight = selectedImage.size.height * scale;

CGFloat yStartingLocationForGreenSquare; // set it to whatever the current location is

// take whatever you had it set to and add the value of the top padding
yStartingLocationForGreenSquare += (imageView.frame.size.height - displayedHeight) / 2;
So although there may be other solutions, this is a pretty simple math formula to accomplish what you need. Hope it helps.
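For what it's worth, AVFoundation also ships a helper, AVMakeRectWithAspectRatioInsideRect, that computes the aspect-fit rectangle directly, so you can skip the hand-rolled algebra. A minimal sketch, reusing the hypothetical selectedImage and imageView names from above (faceRect is a hypothetical CGRect in the original image's coordinates):
#import <AVFoundation/AVFoundation.h>

// The rect the image actually occupies inside the view under aspect fit.
CGRect fitRect = AVMakeRectWithAspectRatioInsideRect(selectedImage.size,
                                                     imageView.bounds);
// fitRect.origin.y is the top padding, fitRect.origin.x the side padding.
CGFloat scale = fitRect.size.width / selectedImage.size.width;

// Map a face rect from image coordinates into view coordinates.
CGRect faceInView = CGRectMake(fitRect.origin.x + faceRect.origin.x * scale,
                               fitRect.origin.y + faceRect.origin.y * scale,
                               faceRect.size.width * scale,
                               faceRect.size.height * scale);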
I added two UIViews to ViewController.view and applied two square images to each view.layer.mask, to make it look like a square has been sliced into two pieces, then added the image view as a subview on top.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any way to capture something like picture no. 1 after applying the mask?
Below is the reference from Apple's documentation regarding renderInContext:
Important: The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before, which literally does a print-screen of a UIView. I don't use it because it does not work well for my needs, but maybe you can use it:
UIImage *img;
// Pass 0.0 as the scale so the context matches the device's screen scale.
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size,
                                       UIViewYouWantToCapture.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When we apply a mask to an image, the resulting image has the masked region's alpha set to 1 and the rest of the image's alpha set to 0. The full image is still there; we only see part of it because the rest has alpha 0. So when we capture an image of the view, we get a screenshot of the complete view.
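If you can target iOS 7 or later, one workaround worth trying (an assumption on my part; the answers above haven't verified it for this case) is drawViewHierarchyInRect:afterScreenUpdates:, which snapshots what is actually rendered on screen, masks included, rather than re-rendering the layer tree the way renderInContext: does:
// Snapshot the on-screen appearance of the view, including layer masks (iOS 7+).
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, NO, 0.0);
[UIViewYouWantToCapture drawViewHierarchyInRect:UIViewYouWantToCapture.bounds
                             afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();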
I am using a TTImageView (from three20) to display an image from the web.
self.pic = [[TTImageView alloc] initWithFrame:CGRectMake(25, 25, 100, 100)];
self.pic.urlPath = @"http://www.google.com/images/logos/ps_logo2.png";
[self.pic sizeToFit];
NSLog(@"%f %f %f %f", self.pic.frame.origin.x, self.pic.frame.origin.y, self.pic.frame.size.width, self.pic.frame.size.height);
the image loads and displays perfectly but the NSLog returns "25.000000 25.000000 0.000000 0.000000". How do I get this to return the correct frame?
It's been a while since I used TTImageView, but what I imagine is happening is the following:
You ask TTImageView to load a new image from a URL
Immediately after you set the URL path, you call sizeToFit...
...but the image actually hasn't been downloaded yet, so the frame is set to zero because no image exists at that time
TTImageView downloads asynchronously, that is, in the background. You need to use the TTImageViewDelegate protocol (documented here) to get a callback when the image has finished loading, at which point you should call sizeToFit.
Currently, you're firing off a request to download the image, a process that could take several seconds on 3G and still a fair amount of time on WiFi, and then on the very next line you ask to resize the frame. You need to call sizeToFit only after you're sure the image download has finished, which is what the delegate methods are for.
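A minimal sketch of that delegate wiring, assuming the method names from Three20's TTImageView.h (verify them against your copy of the library):
// Somewhere after creating the image view; the controller declares <TTImageViewDelegate>.
self.pic.delegate = self;

// Called once the download completes, so the frame has real dimensions now.
- (void)imageView:(TTImageView *)imageView didLoadImage:(UIImage *)image {
    [imageView sizeToFit];
    NSLog(@"%@", NSStringFromCGRect(imageView.frame));
}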
How would I go about creating a UITextField like the one in this image?
It appears to be slightly larger, specifically in height.
This can be done much better and simpler with a stretchable image:
UIImage *fieldBGImage = [[UIImage imageNamed:@"input.png"] stretchableImageWithLeftCapWidth:20 topCapHeight:20];
[myUITextField setBackground:fieldBGImage];
Think of the background of the text field as split into three sections: a middle section which can be stretched, and caps on the ends. Create an image which only needs to be wide enough to contain one pixel of this repeating section (a very short text field), and then create a stretchable image from it using stretchableImageWithLeftCapWidth:topCapHeight:. Pass the width of the left end cap into leftCapWidth. You can make it stretch vertically as well, but if your background image is the same height as your text box it won't have an effect.
If you're familiar with 9-slice scaling in Flash or Illustrator it's exactly the same concept, except the middle sections are only one pixel wide/tall.
The advantage of this is that you don't have to worry about multiple layered objects scaling together, and you can resize your text fields at any time and the background will stay intact. It works on other elements too!
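As a side note, stretchableImageWithLeftCapWidth:topCapHeight: was later deprecated in favor of resizableImageWithCapInsets: (iOS 5+), which expresses the same idea with explicit edge insets. A sketch of the equivalent call, reusing the input.png image from the answer above:
UIImage *fieldBGImage = [[UIImage imageNamed:@"input.png"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(20, 20, 20, 20)];
[myUITextField setBackground:fieldBGImage];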
You probably want to use the background and borderStyle properties of UITextField. Setting borderStyle to UITextBorderStyleNone and creating a custom background image to be stretched and used as the background property would be one approach.
I suggest taking a look at those properties in the UITextField class reference.
You can do this by:
yourTextField.borderStyle = UITextBorderStyleNone;
yourTextField.background = [UIImage imageNamed:@"email-input.png"];
And if you want to give margin to your text inside the textfield, you can do this by:
// Setting Text Field Margins to display the user entered text Properly
UIView *paddingView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 5, 20)];
yourTextField.leftView = paddingView;
yourTextField.leftViewMode = UITextFieldViewModeAlways;
You can also do this through Interface Builder.
Just use the following line and it should work:
textfield_name.background = [UIImage imageNamed:@"yourImage.png"];
Here, "yourImage" is the background image you want to set.
However, this will only work if your text field's border style isn't the rounded-rect style, since the background property is ignored in that case. You can change the border style in Interface Builder, or in code:
textfield_name.borderStyle = UITextBorderStyleNone; // or UITextBorderStyleBezel
and you're good to go.
Take a UIImageView and set its image property to the image you want as the UITextField background. On top of this image view, place a UITextField with its border style set to none. You can do this directly in Interface Builder.
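A code equivalent of that layout, with hypothetical frames and image name, in case you prefer doing it outside Interface Builder:
// Background image view sized like the text field you want.
UIImageView *bgView = [[UIImageView alloc] initWithFrame:CGRectMake(20, 100, 280, 40)];
bgView.image = [UIImage imageNamed:@"textfield-bg.png"];
[self.view addSubview:bgView];

// Borderless text field placed exactly on top of the image view.
UITextField *field = [[UITextField alloc] initWithFrame:bgView.frame];
field.borderStyle = UITextBorderStyleNone;
[self.view addSubview:field];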
I want to make an animated background for an iPhone app, something simple: 5-6 frames changing in a loop. In front of it there will be another animation running. How can this be done?
The easiest thing to do is probably to use the animationImages property of a UIImageView. Once you have the animationImages property correctly set up, just call startAnimating on your view. So your code would look something like:
imageView.animationImages = myNSArrayofUIImagesObjects;
imageView.animationDuration = 1; // by default this is equal to the number of images multiplied by 1/30th of a second
[imageView startAnimating];
An important thing to note is that you can't easily control how long each image is shown. But what you can do is use the same image in your NSArray of images multiple times. So, for example, you could have an NSArray of length 500, where the first 100 entries map to your first image, the second 100 entries map to your second image, etc. Make sure to minimize the amount of memory you're loading onto the heap by reusing the same UIImage object for each of your five or six images.
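A short sketch of that trick, with hypothetical frame names; note that each UIImage is created once and the array just holds repeated references to it, so the memory cost stays at five or six images:
NSArray *frames = @[[UIImage imageNamed:@"bg1.png"],
                    [UIImage imageNamed:@"bg2.png"],
                    [UIImage imageNamed:@"bg3.png"]];

// Repeat each image pointer 100 times so its frame stays on screen longer.
NSMutableArray *animationImages = [NSMutableArray array];
for (UIImage *frame in frames) {
    for (int i = 0; i < 100; i++) {
        [animationImages addObject:frame]; // same object, no extra memory
    }
}

imageView.animationImages = animationImages;
imageView.animationDuration = 6;    // one full pass through the array
imageView.animationRepeatCount = 0; // 0 means repeat forever
[imageView startAnimating];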