Why is an image not showing up in UIImageView - iPhone

I'm having an issue with some simple code that downloads a remote image and assigns it to a UIImageView object through its 'image' property. Instead of showing the downloaded image, it shows an old picture. Is some caching responsible? I've reset everything and removed the cache files, but I still don't understand why this old image keeps showing.
- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    // Set appIcon and clear temporary data/image
    UIImage *image = [[UIImage alloc] initWithData:self.activeDownload];
    if (image.size.width != kAppIconHeight && image.size.height != kAppIconHeight)
    {
        CGSize itemSize = CGSizeMake(kAppIconHeight, kAppIconHeight);
        UIGraphicsBeginImageContext(itemSize);
        CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
        [image drawInRect:imageRect];
        self.image_view.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    else
    {
        self.image_view.image = image;
    }
    self.activeDownload = nil;
    [image release];
    // Release the connection now that it's finished
    self.imageConnection = nil;
    // Call our delegate and tell it that our icon is ready for display
    [delegate appImageDidLoad:self.indexPathInTableView];
}

Try calling [imageView setNeedsDisplay] after setting the image property.
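If that alone doesn't help, it's also worth checking which thread the delegate callback arrives on, since UIKit may only be touched from the main thread. A minimal sketch, assuming the connection was scheduled on a background thread (image_view is the property from the question):

dispatch_async(dispatch_get_main_queue(), ^{
    // Hop to the main thread before touching any UIKit object.
    self.image_view.image = image;
    [self.image_view setNeedsDisplay];
});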

Related

Trying to overlap two images and show the overlapped image in a third image

I am trying to overlap two local images and show the overlapped one in a third image.
I am using this code, but the simulator shows nothing.
- (void)viewDidLoad
{
    [super viewDidLoad];
    image1 = [[UIImage alloc] init];
    image1 = [UIImage imageNamed:@"iphone.png"];
    imageA = [[UIImageView alloc] initWithImage:image1];
    [self merge];
}
-(void)merge
{
    CGSize size = CGSizeMake(320, 480);
    UIGraphicsBeginImageContext(size);
    CGPoint thumbPoint = CGPointMake(0, 0);
    imageview.image = imageA.image;
    [imageA.image drawAtPoint:thumbPoint];
    imageB = [[UIImage alloc] init];
    imageB = [UIImage imageNamed:@"Favorites.png"];
    CGPoint starredPoint = CGPointMake(0, 0);
    [imageB drawAtPoint:starredPoint];
    UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageview.image = imageC;
    [self.view addSubview:imageview];
}
I can't figure out where I am making a mistake.
Any help would be appreciated.
Remove all the code from everywhere except the code below in merge.
-(void)merge
{
    CGSize size = CGSizeMake(320, 480);
    UIGraphicsBeginImageContext(size);
    CGPoint point1 = CGPointMake(0, 0);
    // The second point has to be somewhere different from the first point;
    // otherwise the second image will sit exactly on top of the first one,
    // and you won't even know that two images are there.
    CGPoint point2 = CGPointMake(100, 100);
    UIImage *imageOne = [UIImage imageNamed:@"Image1.png"];
    [imageOne drawAtPoint:point1];
    UIImage *imageTwo = [UIImage imageNamed:@"Image2.png"];
    // If you want the top image to have some blending, you can do something like:
    // [imageTwo drawAtPoint:point2 blendMode:kCGBlendModeMultiply alpha:0.5];
    [imageTwo drawAtPoint:point2];
    UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 200, 200)];
    iv.image = imageC;
    [self.view addSubview:iv];
}
Here's a general-purpose "merge" function written as a UIImage category; it allows image overlay/underlay.
http://saveme-dot-txt.blogspot.com/2011/06/merge-image-function.html
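For reference, a minimal sketch of what such a category might look like (the names here are illustrative, not necessarily the linked post's actual API):

// UIImage+Merge -- hypothetical names, a sketch of the idea only.
@interface UIImage (Merge)
- (UIImage *)mergeWithImage:(UIImage *)overlay atPoint:(CGPoint)point;
@end

@implementation UIImage (Merge)
- (UIImage *)mergeWithImage:(UIImage *)overlay atPoint:(CGPoint)point
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0.0);
    [self drawAtPoint:CGPointZero];   // self is the underlay
    [overlay drawAtPoint:point];      // draw the overlay on top
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}
@end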

UIWebView to UIImage

I tried to capture an image from a UIWebView using the method below, but the image contains only the visible area of the screen. How do I capture the full content of the UIWebView, including the invisible areas, i.e. the entire web page, in one single image?
-(UIImage*)captureScreen:(UIView*)viewToCapture {
    UIGraphicsBeginImageContext(viewToCapture.bounds.size);
    [viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
Check this out: Rendering a UIWebView into an ImageContext.
Or just use this :)
- (UIImage*)imageFromWebview:(UIWebView*)webview {
    // Store the original frame size to put it back after the snapshot
    CGRect originalFrame = webview.frame;
    // Get the width and height of the web page using JS
    // (you might need to use another call; this doesn't always work)
    int webViewHeight = [[webview stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
    int webViewWidth = [[webview stringByEvaluatingJavaScriptFromString:@"document.body.scrollWidth;"] integerValue];
    // Set the webview's frame to match the size of the page
    [webview setFrame:CGRectMake(0, 0, webViewWidth, webViewHeight)];
    // Make the snapshot
    UIGraphicsBeginImageContextWithOptions(webview.frame.size, NO, 0.0);
    [webview.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Set the webview's frame back to the original size
    [webview setFrame:originalFrame];
    // And voila :)
    return image;
}
EDIT (from a comment by Vad)
The solution was to call
webView.scalesPageToFit = YES;
at initialization, and
[webView sizeToFit];
when the page finished loading.
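In practice that looks something like this (a sketch; it assumes a webView property wired up elsewhere and that the controller is the web view's delegate):

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.webView.scalesPageToFit = YES;   // at initialization
    self.webView.delegate = self;
}

- (void)webViewDidFinishLoad:(UIWebView *)aWebView
{
    [aWebView sizeToFit];   // grow the frame to fit the rendered page
}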
You are currently capturing only the visible part because you are limiting the image context to what's visible. You should limit it to what's available.
UIWebView has a scrollView property whose contentSize tells you the size of the web page inside the scroll view. You can use that size to set your image context, like this:
-(UIImage*)captureScreen:(UIWebView*)viewToCapture {
    CGSize overallSize = viewToCapture.scrollView.contentSize;
    UIGraphicsBeginImageContext(overallSize);
    // Save the current bounds
    CGRect tmp = viewToCapture.bounds;
    viewToCapture.bounds = CGRectMake(0, 0, overallSize.width, overallSize.height);
    // Wait for the view to finish loading.
    // This is not very nice, but it should work. A better approach would be
    // to use a delegate and run the capturing in the did-finish-load event.
    while (viewToCapture.loading) {
        [NSThread sleepForTimeInterval:0.1];
    }
    [viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Restore the bounds
    viewToCapture.bounds = tmp;
    return viewImage;
}
EDIT: New answer from me, with tested code.
Add the method below to capture a UIWebView into a UIImage. It will capture the invisible area as well.
- (UIImage*)webviewToImage:(UIWebView*)webView
{
    int currentWebViewHeight = webView.scrollView.contentSize.height;
    int scrollByY = webView.frame.size.height;
    [webView.scrollView setContentOffset:CGPointMake(0, 0)];
    NSMutableArray *images = [[NSMutableArray alloc] init];
    CGRect screenRect = webView.frame;
    int pages = currentWebViewHeight / scrollByY;
    if (currentWebViewHeight % scrollByY > 0) {
        pages++;
    }
    for (int i = 0; i < pages; i++)
    {
        if (i == pages - 1) {
            if (pages > 1)
                screenRect.size.height = currentWebViewHeight - scrollByY;
        }
        if (IS_RETINA)
            UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0);
        else
            UIGraphicsBeginImageContext(screenRect.size);
        if ([webView.layer respondsToSelector:@selector(setContentsScale:)]) {
            webView.layer.contentsScale = [[UIScreen mainScreen] scale];
        }
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, screenRect);
        [webView.layer renderInContext:ctx];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        if (i == 0)
        {
            scrollByY = webView.frame.size.height;
        }
        else
        {
            scrollByY += webView.frame.size.height;
        }
        [webView.scrollView setContentOffset:CGPointMake(0, scrollByY)];
        [images addObject:newImage];
    }
    [webView.scrollView setContentOffset:CGPointMake(0, 0)];
    UIImage *resultImage;
    if (images.count > 1) {
        // Join all the page images together
        CGSize size = CGSizeZero; // must be initialized before the MAX below
        for (int i = 0; i < images.count; i++) {
            size.width = MAX(size.width, ((UIImage*)[images objectAtIndex:i]).size.width);
            size.height += ((UIImage*)[images objectAtIndex:i]).size.height;
        }
        if (IS_RETINA)
            UIGraphicsBeginImageContextWithOptions(size, NO, 0);
        else
            UIGraphicsBeginImageContext(size);
        if ([webView.layer respondsToSelector:@selector(setContentsScale:)]) {
            webView.layer.contentsScale = [[UIScreen mainScreen] scale];
        }
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, screenRect);
        int y = 0;
        for (int i = 0; i < images.count; i++) {
            UIImage *img = [images objectAtIndex:i];
            [img drawAtPoint:CGPointMake(0, y)];
            y += img.size.height;
        }
        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    } else {
        resultImage = [images objectAtIndex:0];
    }
    [images removeAllObjects];
    return resultImage;
}
Also add this macro to check whether iOS is running on a Retina display:
#define IS_RETINA ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] && ([UIScreen mainScreen].scale == 2.0))
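Note that when UIGraphicsBeginImageContextWithOptions is passed 0 as its scale argument, as in the code above, the context already picks up the device's screen scale automatically; the IS_RETINA check mainly guards availability of the ...WithOptions variant, which was introduced in iOS 4.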

Taking a picture with the camera and showing it in a UIImageView

I have a view with some fields (name, price, category) and a segmented control, plus this button to take a picture.
If I try this in the simulator (no camera), it works properly: I can select the image from the camera roll, edit it, and go back to the view, which shows all the fields with their contents.
But on my iPhone, when I select the image after editing and go back to the view, all the fields are empty except for the UIImageView. I also tried saving the contents of the fields in variables and putting them back in the viewWillAppear method, but the app crashes.
I'm starting to think that maybe there is something wrong with the methods below.
EDIT
I found the solution here. I defined a new method on the UIImage class (follow the link for more information). Then I worked on the frame of the UIImageView to adapt it to the new dimensions, in landscape or portrait.
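The scaleToSize: method used below comes from that link and isn't shown here; a minimal sketch of such a UIImage category (my assumption of its shape, not the linked code) could be:

@interface UIImage (Scale)
- (UIImage *)scaleToSize:(CGSize)size;
@end

@implementation UIImage (Scale)
- (UIImage *)scaleToSize:(CGSize)size
{
    // Draw the receiver into a context of the target size and grab the result.
    UIGraphicsBeginImageContext(size);
    [self drawInRect:CGRectMake(0.0f, 0.0f, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}
@end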
-(IBAction)takePhoto:(id)sender {
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        self.imgPicker.sourceType = UIImagePickerControllerSourceTypeCamera;
        self.imgPicker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModePhoto;
    } else {
        imgPicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
    }
    [self presentModalViewController:self.imgPicker animated:YES];
}

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [picker dismissModalViewControllerAnimated:YES];
    NSDate *date = [NSDate date];
    NSString *photoName = [dateFormatter stringFromDate:date];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    imagePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.png", photoName]];
    UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ---------RESIZE CODE--------- //
    if (picture.size.width == 1936) {
        picture = [picture scaleToSize:CGSizeMake(480.0f, 720.0f)];
    } else {
        picture = [picture scaleToSize:CGSizeMake(720.0f, 480.0f)];
    }
    // --------END RESIZE CODE-------- //
    photoPreview.image = picture;
    // ---------FRAME CODE--------- //
    photoPreview.contentMode = UIViewContentModeScaleAspectFit;
    CGRect frame = photoPreview.frame;
    if (picture.size.width == 480) {
        frame.size.width = 111.3;
        frame.size.height = 167;
    } else {
        frame.size.width = 167;
        frame.size.height = 111.3;
    }
    photoPreview.frame = frame;
    // --------END FRAME CODE-------- //
    NSData *webData = UIImagePNGRepresentation(picture);
    // Note: releasing a CGImage you don't own (the one backing `picture`)
    // over-releases it, and is a likely cause of the crash described below.
    CGImageRelease([picture CGImage]);
    [webData writeToFile:imagePath atomically:YES];
    imgPicker = nil;
}
Now I have a new issue! If I take a picture in landscape and then try to take another one in portrait, the app crashes. Do I have to release something?
I had the same issue: there is no edited image when using the camera, you must use the original image:
originalimage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
if ([editingInfo objectForKey:UIImagePickerControllerMediaMetadata]) {
    // Test to check that the camera was used;
    // in particular, I found out that you then have to rotate the photo
    ...
If it was cropped when using the album, you have to re-crop it, of course:
if ([editingInfo objectForKey:UIImagePickerControllerCropRect] != nil) {
    CGRect cropRect = [[editingInfo objectForKey:UIImagePickerControllerCropRect] CGRectValue];
    CGImageRef imageRef = CGImageCreateWithImageInRect([originalimage CGImage], cropRect);
    chosenimage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
} else {
    chosenimage = originalimage;
}
The crop-rect info is also present in camera mode; you need to decide how you want it to behave.
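Put together, a delegate method combining both checks might look roughly like this (a sketch only; it assumes the picker was shown with editing enabled, and keeps the editingInfo naming from the snippets above):

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)editingInfo
{
    UIImage *originalimage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
    UIImage *chosenimage = originalimage;
    if ([editingInfo objectForKey:UIImagePickerControllerMediaMetadata]) {
        // Metadata is present only when the camera was used,
        // so the photo probably needs rotating before use (see above).
    }
    if ([editingInfo objectForKey:UIImagePickerControllerCropRect] != nil) {
        // Re-apply the user's crop.
        CGRect cropRect = [[editingInfo objectForKey:UIImagePickerControllerCropRect] CGRectValue];
        CGImageRef imageRef = CGImageCreateWithImageInRect([originalimage CGImage], cropRect);
        chosenimage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
    }
    [picker dismissModalViewControllerAnimated:YES];
    // ...use chosenimage...
}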
To crop an image, I think this may help you:
UIImage *croppedImage = [self imageByCropping:photo.image toRect:tempview.frame];
CGSize size = CGSizeMake(croppedImage.size.height, croppedImage.size.width);
UIGraphicsBeginImageContext(size);
CGPoint pointImg1 = CGPointMake(0, 0);
[croppedImage drawAtPoint:pointImg1];
[[UIImage imageNamed:appDelegete.strImage] drawInRect:CGRectMake(0, 532, 150, 80)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
croppedImage = result;
UIImageView *mainImageView = [[UIImageView alloc] initWithImage:croppedImage];
CGRect clippedRect = CGRectMake(0, 0, croppedImage.size.width, croppedImage.size.height);
CGFloat scaleFactor = 0.5;
UIGraphicsBeginImageContext(CGSizeMake(croppedImage.size.width * scaleFactor, croppedImage.size.height * scaleFactor));
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextClipToRect(currentContext, clippedRect);
// This will automatically scale any CGImage down/up to the required thumbnail
// side (length) when the CGImage gets drawn into the context on the next line
CGContextScaleCTM(currentContext, scaleFactor, scaleFactor);
[mainImageView.layer renderInContext:currentContext];
appDelegete.appphoto = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Is it possible to use UIGraphicsBeginImageContext from connectionDidFinishLoading?

I got these errors when the code below is executed:
[Switching to process 74682 thread 0x2003]
[Switching to process 74682 thread 0x207]
[Switching to process 74682 thread 0x720f]
--
- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    UIImage *image = [[UIImage alloc] initWithData:self.resp_data];
    CGSize itemSize = CGSizeMake(150, 150);
    UIGraphicsBeginImageContext(itemSize);
    CGRect image_rect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
    [image drawInRect:image_rect];
    // The image_view ivar is connected to a UIImageView in the view associated with this controller
    self.image_view.image = UIGraphicsGetImageFromCurrentImageContext();
    self.resp_data = nil;
    self.imageConnection = nil;
    [image release];
}
Any idea what the issue could be and how to solve it? (And of course the expected image does not show up.)
Thanks for helping,
Stephane
TESTED CODE: 100% WORKS
#define kAppIconHeight 48

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    // Set appIcon and clear temporary data/image
    UIImage *image = [[UIImage alloc] initWithData:self.resp_data];
    if (image.size.width != kAppIconHeight && image.size.height != kAppIconHeight)
    {
        CGSize itemSize = CGSizeMake(kAppIconHeight, kAppIconHeight);
        UIGraphicsBeginImageContext(itemSize);
        CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
        [image drawInRect:imageRect];
        self.image_view.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    else
    {
        self.image_view.image = image;
    }
    self.resp_data = nil;
    [image release];
    // Release the connection now that it's finished
    //self.imageConnection = nil;
    // Call our delegate and tell it that our icon is ready for display
    // [delegate appImageDidLoad:self.indexPathInTableView];
}

Should repeated use of the camera crash an app?

I have an app that builds a slideshow from user images. They can grab from their library or take a picture. I have found that repeatedly grabbing an image from the library is fine, but repeatedly taking a picture causes erratic behavior. I have been getting crashes, but mostly what happens seems to be a reloading of the view after didFinishPickingMediaWithInfo, which messes things up.
I have no leaks, and everything seems to be released properly after each picture is taken. I am resizing the image and saving it in a database. Is anyone else running into this situation? Was the camera not designed to be called this often?
Sorry for the mix-up. It's not continuous: the user takes a picture, the camera is removed and released, then the user selects choice A, B, or C (they are prompted throughout to make selections on many things), then they can take another picture, the camera is removed and released again... etc. This happens several times while they complete all the data entry.
After they select the camera I call this code. I am not releasing the image picker in the dealloc method.
- (void)openCamera {
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
    {
        imagePicker = [[UIImagePickerController alloc] init];
        imagePicker.delegate = self;
        imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
        [self presentModalViewController:imagePicker animated:YES];
        [imagePicker release];
    }
}
After they "USE" a picture taken with the camera, I call the following code. It resizes the image based on the image's original size.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *tempCameraImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    [picker dismissModalViewControllerAnimated:YES];
    CGFloat originalSize = tempCameraImage.size.width * tempCameraImage.size.height;
    NSLog(@"Original Size %f", originalSize);
    if (originalSize > 2500000.0) {
        CGSize size = tempCameraImage.size;
        CGRect rect = CGRectMake(0.0, 0.0, .23 * size.width, .23 * size.height);
        UIGraphicsBeginImageContext(rect.size);
        [tempCameraImage drawInRect:rect];
        theImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        CGFloat totalSize = theImage.size.width * theImage.size.height;
        NSLog(@"Final Camera Size %f", totalSize);
        [self resizeImageCamera];
        return;
    }
    else {
        CGSize size = tempCameraImage.size;
        CGRect rect = CGRectMake(0.0, 0.0, .27 * size.width, .27 * size.height);
        UIGraphicsBeginImageContext(rect.size);
        [tempCameraImage drawInRect:rect];
        theImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        CGFloat totalSize = theImage.size.width * theImage.size.height;
        NSLog(@"Final Camera Size %f", totalSize);
        [self resizeImageCamera];
        return;
    }
}
-(void)resizeImageCamera {
    if (editingImage1) {
        NSManagedObject *image = [NSEntityDescription insertNewObjectForEntityForName:@"Image" inManagedObjectContext:slideshow.managedObjectContext];
        slideshow.image = image;
        [image setValue:theImage forKey:@"image"];
        CGSize size = theImage.size;
        CGRect rect = CGRectMake(0.0, 0.0, .15 * size.width, .15 * size.height);
        UIGraphicsBeginImageContext(rect.size);
        [theImage drawInRect:rect];
        slideshow.thumbnailImage1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [self finishCameraImage];
        return;
    }
}
-(void)finishCameraImage {
    if (editingImage1) {
        keyInt = @"2";
        [editedObject setValue:keyInt forKey:@"script1"];
        pictureView.alpha = 0;
        self.navigationItem.rightBarButtonItem = nil;
        editedFieldKey = @"line2Int";
        editedFieldName = NSLocalizedString(@"line2Int", @"display name for line2");
        self.title = editedFieldName;
        linePicker.hidden = NO;
    }
}
I realize I am doing a lot to this image. If I put in a couple of delays, would that help?