UIView layer renderInContext not drawing to image context - iPhone

I'm trying to draw a view's layer into an image context (created with UIGraphicsBeginImageContext) using renderInContext:, but the view's layer does not show up in the resulting UIImage.
UIGraphicsBeginImageContext(scaledImage.size);
[scaledImage drawInRect:CGRectMake(0, 0, scaledImage.size.width, scaledImage.size.height)];
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = [UIGraphicsGetImageFromCurrentImageContext() croppedImage:CGRectMake(0, 44, 320, scaledImage.size.height - (60 + 44))];
UIGraphicsEndImageContext();
My drawRect method for the view is as follows:
-(void)drawRect:(CGRect)rect
{
CALayer *background = [[CALayer alloc] init];
background.backgroundColor = [UIColor blackColor].CGColor;
background.opacity = 0.4;
background.frame = self.bounds;
[self.layer addSublayer:background];
CATextLayer *textLayer = [[CATextLayer alloc] init];
textLayer.string = _poseName;
textLayer.fontSize = 19;
textLayer.foregroundColor = [UIColor whiteColor].CGColor;
textLayer.alignmentMode = kCAAlignmentCenter;
textLayer.frame = self.bounds;
CGSize textSize = [_poseName sizeWithFont:[UIFont fontWithName:@"Helvetica" size:19]
constrainedToSize:textLayer.bounds.size
lineBreakMode:NSLineBreakByWordWrapping];
textLayer.position = CGPointMake(160, self.bounds.size.height - textSize.height/2);
[self.layer addSublayer:textLayer];
}
The view's layers display correctly while the app is running; however, the view does not render into the graphics context as I would expect. Thank you in advance for your help!
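One thing that may be worth checking (my observation, not stated in the thread): the sublayers are created inside drawRect:, so they are re-added on every redraw and may not have been committed yet at the moment renderInContext: is called. A minimal sketch of moving the layer setup into a one-time method (the -setupLayers name is hypothetical; the body is the question's own code):
// Hypothetical one-time setup, called once after the view is configured,
// so the sublayers exist before renderInContext: runs.
- (void)setupLayers
{
CALayer *background = [[CALayer alloc] init];
background.backgroundColor = [UIColor blackColor].CGColor;
background.opacity = 0.4;
background.frame = self.bounds;
[self.layer addSublayer:background];
CATextLayer *textLayer = [[CATextLayer alloc] init];
textLayer.string = _poseName;
textLayer.fontSize = 19;
textLayer.foregroundColor = [UIColor whiteColor].CGColor;
textLayer.alignmentMode = kCAAlignmentCenter;
textLayer.frame = self.bounds;
[self.layer addSublayer:textLayer];
}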

Related

UIImageView pinch zooming in UIScrollView

I have pinch zooming working with UIScrollView, but the problem is getting the image to aspect-fit inside the scroll view.
Currently I have this (first screenshot):
But I want the image to fit the screen like this (second screenshot):
And the same behavior in landscape. How can I achieve this?
Below is the code:
- (void)viewDidLoad
{
UIImage *image = [UIImage imageWithData:appDelegate.selectedOriginalImage];
imgView.image = image;
CGSize imageSize = image.size;
CGRect rect = CGRectMake(0, 0, imageSize.width, imageSize.height);
imgView.frame = rect;
scrollView.contentSize = CGSizeMake(imgView.frame.size.width, imgView.frame.size.height);
scrollView.maximumZoomScale = 4.0;
scrollView.minimumZoomScale = 1.0;
scrollView.delegate = self;
}
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
UIImage *image = [UIImage imageWithData:appDelegate.selectedOriginalImage];
CGSize imageSize = image.size;
CGRect rect = CGRectMake(0, 0, imageSize.width, imageSize.height);
imgView.frame = rect;
[scrollView setContentOffset:CGPointMake(0, 0)];
return YES;
}
You are setting the UIImageView's frame to the original size of the image instead of the size of the UIScrollView that contains it.
- (void)viewDidLoad
{
UIImage *image = [UIImage imageWithData:appDelegate.selectedOriginalImage];
imgView.image = image;
imgView.frame = scrollView.bounds;
[imgView setContentMode:UIViewContentModeScaleAspectFit];
scrollView.contentSize = CGSizeMake(imgView.frame.size.width, imgView.frame.size.height);
scrollView.maximumZoomScale = 4.0;
scrollView.minimumZoomScale = 1.0;
scrollView.delegate = self;
}
Remember to return imgView from the viewForZoomingInScrollView: delegate method.
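For reference, a minimal implementation of that delegate method (using the imgView ivar from the code above):
// UIScrollViewDelegate: tells the scroll view which subview to zoom
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
return imgView;
}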

Face detection is not working properly on resized images, especially on the device. Why?

Here is the code I am using to detect faces in an image:
- (void)detectFaces:(UIImageView *)photo
{
CIImage *coreImage = [CIImage imageWithCGImage:photo.image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
context:nil
options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
forKey:CIDetectorAccuracy]];
NSArray* features = [detector featuresInImage:coreImage];
for(CIFaceFeature* faceFeature in features)
{
NSLog(#"self.view %#",NSStringFromCGRect(self.view.frame));
NSLog(#"self.view %#",NSStringFromCGRect(self.view.bounds));
NSLog(#"self.vounds %#",NSStringFromCGRect(faceFeature.bounds));
CGFloat faceWidth = faceFeature.bounds.size.width;
UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [[UIColor redColor] CGColor];
[self.view addSubview:faceView];
if(faceFeature.hasLeftEyePosition)
{
UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
[leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
[leftEyeView setCenter:faceFeature.leftEyePosition];
leftEyeView.layer.cornerRadius = faceWidth*0.15;
[self.view addSubview:leftEyeView];
}
if(faceFeature.hasRightEyePosition)
{
UIView* leftEye = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
[leftEye setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
[leftEye setCenter:faceFeature.rightEyePosition];
leftEye.layer.cornerRadius = faceWidth*0.15;
[self.view addSubview:leftEye];
}
if(faceFeature.hasMouthPosition)
{
UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
[mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
[mouth setCenter:faceFeature.mouthPosition];
mouth.layer.cornerRadius = faceWidth*0.2;
[self.view addSubview:mouth];
}
}
}
This is the code I have used to resize an image:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
//UIGraphicsBeginImageContext(newSize);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
and finally I am calling the face detection method like this:
UIImageView *inputImage = [[UIImageView alloc] initWithImage:[self imageWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"] scaledToSize:CGSizeMake(320, 460)]];
[self.view addSubview:inputImage];
[inputImage setTransform:CGAffineTransformMakeScale(1, -1)];
[self.view setTransform:CGAffineTransformMakeScale(1, -1)];
[self performSelectorInBackground:@selector(detectFaces:) withObject:inputImage];
It works properly in the Simulator but not on the device. Can anyone please help me with this?
Simulator: (screenshot)
Device: (screenshot)
When I changed the options in UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0); to UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0); it started working on the device as well. That solved the issue.
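A note on why this helps (my reading, not from the original thread): passing a scale of 0.0 uses the device's screen scale, so on a Retina device the returned UIImage has scale 2.0 and its underlying CGImage has twice the pixel dimensions of the requested point size. CIDetector reports feature coordinates in image pixels, so every overlay frame ends up off by the scale factor, which would explain why only the device misbehaved. If you want to keep the Retina-resolution image instead, a sketch of the alternative is to divide the detector's coordinates by the image scale (hypothetical adjustment to the loop above; the eye and mouth positions would need the same treatment):
CGFloat scale = photo.image.scale; // 2.0 on Retina when created with scale 0.0
CGRect faceRect = faceFeature.bounds;
faceRect.origin.x /= scale;
faceRect.origin.y /= scale;
faceRect.size.width /= scale;
faceRect.size.height /= scale;
UIView *faceView = [[UIView alloc] initWithFrame:faceRect];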

UIImageView round corner has white background

The rounded corners have a white background.
I followed other SO answers, but I don't know why I'm getting these white corners.
Below is the code.
UIView* testView = [[[UIView alloc] initWithFrame: self.animationView.bounds] autorelease];
UIImageView* testImageView = [[[UIImageView alloc] initWithImage:backImage] autorelease];
[testView addSubview: testImageView];
testImageView.backgroundColor = [UIColor clearColor];
CALayer* layer = [testView layer];
bool prev = layer.masksToBounds;
layer.masksToBounds = YES;
layer.cornerRadius = 30;
testView.clipsToBounds = YES;
UIImage* image = [UIImage captureView: testView];
// this image has white regions in the four corners
// when viewed in the iPhone photo album
+ (UIImage*)captureView:(UIView*)view
{
CGSize size = view.bounds.size;
CGContextRef context = CreateARGBBitmapContext(size);
CGContextTranslateCTM(context, 0, size.height);
CGContextScaleCTM(context, 1.0, -1.0);
[view.layer renderInContext: context];
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage* img = [UIImage imageWithCGImage: imageRef];
CGImageRelease(imageRef);
CGContextRelease(context);
return img;
}
Add [testImageView setClipsToBounds:YES].
It might be that you have to set testImageView.opaque = NO.
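Another thing worth checking (my suggestion, beyond the answers above): if the corners only look white after saving to the photo album, the transparency may be getting flattened when the image is encoded as JPEG, which has no alpha channel, so transparent corners become white. A minimal capture sketch using a UIKit context with opaque set to NO, as an alternative to the custom CreateARGBBitmapContext (the helper name is hypothetical):
// Hypothetical alpha-preserving replacement for captureView:
+ (UIImage *)captureViewWithAlpha:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0); // NO = keep alpha
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Exporting with UIImagePNGRepresentation (rather than JPEG) keeps the transparent corners.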

How to make something like iPhone Folders?

I'd like to know if there's a way to make my view look something like iPhone folders. In other words, I want my view to split somewhere in the middle and reveal a view underneath it. Is this possible?
EDIT:
Per the suggestion below, I could take a screenshot of my application by doing this:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Not sure what to do with this, however.
EDIT:2
I've figured out how to add some shadows to my view, and here's what I've achieved (cropped to show relevant part):
EDIT:3
http://github.com/jwilling/JWFolders
The basic idea is to take a picture of your current state and split it somewhere, then animate both parts by setting new frames. I don't know how to take a screenshot programmatically, so I can't provide sample code…
EDIT: hey hey it's not looking great but it works ^^
// wouldn't be sharp on retina displays, instead use "withOptions" and set scale to 0.0
// UIGraphicsBeginImageContext(self.view.bounds.size);
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect fstRect = CGRectMake(0, 0, 320, 200);
CGRect sndRect = CGRectMake(0, 200, 320, 260); // was 0,200,320,280
CGImageRef fImageRef = CGImageCreateWithImageInRect([f CGImage], fstRect);
UIImage *fCroppedImage = [UIImage imageWithCGImage:fImageRef];
CGImageRelease(fImageRef);
CGImageRef sImageRef = CGImageCreateWithImageInRect([f CGImage], sndRect);
UIImage *sCroppedImage = [UIImage imageWithCGImage:sImageRef];
CGImageRelease(sImageRef);
UIImageView *first = [[UIImageView alloc]initWithFrame:fstRect];
first.image = fCroppedImage;
//first.contentMode = UIViewContentModeTop;
UIImageView *second = [[UIImageView alloc]initWithFrame:sndRect];
second.image = sCroppedImage;
//second.contentMode = UIViewContentModeBottom;
UIView *blank = [[UIView alloc]initWithFrame:CGRectMake(0, 0, 320, 460)];
blank.backgroundColor = [UIColor darkGrayColor];
[self.view addSubview:blank];
[self.view addSubview:first];
[self.view addSubview:second];
[UIView animateWithDuration:2.0 animations:^{
second.center = CGPointMake(second.center.x, second.center.y+75);
}];
You can uncomment the two .contentMode lines and the quality will improve, but in my case the subview had an offset of 10px or so (you can see it by setting a background color on both subviews).
EDIT 2: OK, found that bug. I had used the whole 320x480 screen but had to cut off the status bar, so it should be 320x460, and all is working great ;)
Instead of taking a snapshot of the view, you could use a separate view for each row of icons. You'll have to do a bit more work with repositioning stuff, but the rows won't be static when the folder is open (in other words, they'll keep redrawing as necessary).
I took relikd's code as a base and made it a bit more dynamic.
You can specify the split position and direction when calling the function, and I added a border to the split images.
#define splitAnimationTime 0.5
- (void)split:(SplitDirection)splitDirection
atYPosition:(int)splitYPosition
withRevealedViewHeight:(int)revealedViewHeight {
// wouldn't be sharp on retina displays, instead use "withOptions" and set scale to 0.0
// UIGraphicsBeginImageContext(self.view.bounds.size);
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect fullScreenRect = [self getScreenFrameForCurrentOrientation];
CGRect upperSplitRect = CGRectMake(0, 0,fullScreenRect.size.width, splitYPosition);
CGRect lowerSplitRect = CGRectMake(0, splitYPosition, fullScreenRect.size.width, fullScreenRect.size.height-splitYPosition);
CGImageRef upperImageRef = CGImageCreateWithImageInRect([f CGImage], upperSplitRect);
UIImage *upperCroppedImage = [UIImage imageWithCGImage:upperImageRef];
CGImageRelease(upperImageRef);
CGImageRef lowerImageRef = CGImageCreateWithImageInRect([f CGImage], lowerSplitRect);
UIImage *lowerCroppedImage = [UIImage imageWithCGImage:lowerImageRef];
CGImageRelease(lowerImageRef);
UIImageView *upperImage = [[UIImageView alloc]initWithFrame:upperSplitRect];
upperImage.image = upperCroppedImage;
//first.contentMode = UIViewContentModeTop;
UIView *upperBorder = [[UIView alloc]initWithFrame:CGRectMake(0, splitYPosition, fullScreenRect.size.width, 1)];
upperBorder.backgroundColor = [UIColor whiteColor];
[upperImage addSubview:upperBorder];
UIImageView *lowerImage = [[UIImageView alloc]initWithFrame:lowerSplitRect];
lowerImage.image = lowerCroppedImage;
//second.contentMode = UIViewContentModeBottom;
UIView *lowerBorder = [[UIView alloc]initWithFrame:CGRectMake(0, 0, fullScreenRect.size.width, 1)];
lowerBorder.backgroundColor = [UIColor whiteColor];
[lowerImage addSubview:lowerBorder];
int revealedViewYPosition = splitYPosition;
if(splitDirection==SplitDirectionUp){
revealedViewYPosition = splitYPosition - revealedViewHeight;
}
UIView *revealedView = [[UIView alloc]initWithFrame:CGRectMake(0, revealedViewYPosition, fullScreenRect.size.width, revealedViewHeight)];
revealedView.backgroundColor = [UIColor scrollViewTexturedBackgroundColor];
[self.view addSubview:revealedView];
[self.view addSubview:upperImage];
[self.view addSubview:lowerImage];
[UIView animateWithDuration:splitAnimationTime animations:^{
if(splitDirection==SplitDirectionUp){
upperImage.center = CGPointMake(upperImage.center.x, upperImage.center.y-revealedViewHeight);
} else { //assume down
lowerImage.center = CGPointMake(lowerImage.center.x, lowerImage.center.y+revealedViewHeight);
}
}];
}
This means I can call it like this:
[self split:SplitDirectionUp atYPosition:500 withRevealedViewHeight:200];
I used these convenience functions in the updated split function:
- (CGRect)getScreenFrameForCurrentOrientation {
return [self getScreenFrameForOrientation:[UIApplication sharedApplication].statusBarOrientation];
}
- (CGRect)getScreenFrameForOrientation:(UIInterfaceOrientation)orientation {
UIScreen *screen = [UIScreen mainScreen];
CGRect fullScreenRect = screen.bounds;
BOOL statusBarHidden = [UIApplication sharedApplication].statusBarHidden;
//implicitly in Portrait orientation.
if(orientation == UIInterfaceOrientationLandscapeRight || orientation == UIInterfaceOrientationLandscapeLeft){
CGRect temp = CGRectZero;
temp.size.width = fullScreenRect.size.height;
temp.size.height = fullScreenRect.size.width;
fullScreenRect = temp;
}
if(!statusBarHidden){
CGFloat statusBarHeight = 20;
fullScreenRect.size.height -= statusBarHeight;
}
return fullScreenRect;
}
and this enum:
typedef enum SplitDirection
{
SplitDirectionDown,
SplitDirectionUp
}SplitDirection;
Adding a return-to-normal function and the folder arrow would be a great addition.
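For illustration, a rough sketch of what that return-to-normal function could look like (my sketch, not part of the original answer; it assumes split:atYPosition:withRevealedViewHeight: stores upperImage, lowerImage, revealedView, and the two split rects in ivars):
// Hypothetical counterpart to the split function above
- (void)unsplit
{
[UIView animateWithDuration:splitAnimationTime animations:^{
// slide the moved half back to the frame it had before the split
upperImage.frame = upperSplitRect;
lowerImage.frame = lowerSplitRect;
} completion:^(BOOL finished) {
[upperImage removeFromSuperview];
[lowerImage removeFromSuperview];
[revealedView removeFromSuperview];
}];
}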

iPhone SDK: Rendering a CGLayer into an image object

I am trying to add a rounded border around an image that is downloaded and displayed in a UITableViewCell.
In the large view (ie one image on the screen) I have the following:
productImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:product.image]];
[productImageView setAlpha:0.4];
productImageView.frame = CGRectMake(10.0, 30.0, 128.0, 128.0);
CALayer *roundedlayer = [productImageView layer];
[roundedlayer setMasksToBounds:YES];
[roundedlayer setCornerRadius:7.0];
[roundedlayer setBorderWidth:2.0];
[roundedlayer setBorderColor:[[UIColor darkGrayColor] CGColor]];
[self addSubview:productImageView];
In the table view cell, to get it to scroll fast, an image needs to be drawn in the drawRect method of a UIView which is then added to a custom cell.
so in drawRect
- (void)drawRect:(CGRect)rect {
...
point = CGPointMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP);
//CALayer *roundedlayer = [productImageView layer];
//[roundedlayer setMasksToBounds:YES];
//[roundedlayer setCornerRadius:7.0];
//[roundedlayer setBorderWidth:2.0];
//[roundedlayer setBorderColor:[[UIColor darkGrayColor] CGColor]];
//[productImageView drawRect:CGRectMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP, IMAGE_WIDTH, IMAGE_HEIGHT)];
//
[productImageView.image drawInRect:CGRectMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP, IMAGE_WIDTH, IMAGE_HEIGHT)];
This works well, but if I uncomment those lines to show the rounded CALayer, the scrolling becomes really slow.
To fix this, I suppose I would have to render this layer into a separate image object, store it in an array, and then set the image with something like:
productImageView.image = (UIImage*)[imageArray objectAtIndex:indexPath.row];
My question is "How do I render this layer into an image?"
TIA.
This is what I got to work well.
- (UIImage *)roundedImage:(UIImage*)originalImage
{
CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
UIImageView *imageView = [[UIImageView alloc] initWithImage:originalImage];
CALayer *layer = imageView.layer;
imageView.frame = bounds;
[layer setMasksToBounds:YES];
[layer setCornerRadius:7.0];
[layer setBorderWidth:2.0];
[layer setBorderColor:[[UIColor darkGrayColor] CGColor]];
UIGraphicsBeginImageContext(bounds.size);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *anImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[imageView release];
return anImage;
}
Then to scale the image, I found this in the lazy loading example:
#define kAppIconHeight 48
CGSize itemSize = CGSizeMake(kAppIconHeight, kAppIconHeight);
UIGraphicsBeginImageContext(itemSize);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
[image drawInRect:imageRect];
self.appRecord.appIcon = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
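If it helps, the two snippets can be chained into one helper (my arrangement of this answer's own code; the method name is hypothetical): scale first, then round, so the border and corner radius are applied at the final thumbnail size.
// Hypothetical helper combining the scaling snippet with roundedImage: above
- (UIImage *)roundedThumbnailForImage:(UIImage *)image size:(CGSize)itemSize
{
UIGraphicsBeginImageContext(itemSize);
[image drawInRect:CGRectMake(0.0, 0.0, itemSize.width, itemSize.height)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [self roundedImage:scaled]; // roundedImage: is defined above
}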
You are on the right track with your own comment.
- (UIImage *)roundedImage:(UIImage *)originalImage
{
CGRect bounds = CGRectMake(0.0f, 0.0f, originalImage.size.width, originalImage.size.height); // UIImage has no bounds property
CGImageRef theImage = originalImage.CGImage;
CALayer *roundedlayer = [CALayer layer];
roundedlayer.frame = bounds; // set the frame so the layer isn't centered at the origin
[roundedlayer setMasksToBounds:YES];
[roundedlayer setCornerRadius:7.0];
[roundedlayer setBorderWidth:2.0];
[roundedlayer setBorderColor:[[UIColor darkGrayColor] CGColor]];
roundedlayer.contents = theImage;
UIGraphicsBeginImageContext(bounds.size); // creates a new context and pushes it on the stack
CGContextClearRect(UIGraphicsGetCurrentContext(), bounds);
[roundedlayer renderInContext:UIGraphicsGetCurrentContext()]; // renderInContext: draws the contents property; drawInContext: would draw nothing for a plain CALayer
UIImage *anImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // releases the new context and removes it from the stack
return anImage;
}
I would pre-render these and store them in your image array so they are not calculated during drawRect:, but set in your
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
method.
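For illustration, a minimal sketch of that approach (my sketch; the imageArray ivar and cell identifier are hypothetical, and it follows the manual retain/release style used above):
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"ProductCell"];
if (cell == nil) {
cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"ProductCell"] autorelease];
}
// the rounded image was pre-rendered once and cached in imageArray,
// so no layer work happens during scrolling
cell.imageView.image = [imageArray objectAtIndex:indexPath.row];
return cell;
}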