How to make something like iPhone Folders? - iphone

I'd like to know if there's a way I can transform my view to look something like iPhone folders. In other words, I want my view to split somewhere in the middle and reveal a view underneath it. Is this possible?
EDIT:
Per the suggestion below, I could take a screenshot of my application by doing this:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Not sure what to do with this, however.
EDIT:2
I've figured out how to add some shadows to my view, and here's what I've achieved (cropped to show relevant part):
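For anyone trying the same, a minimal sketch of the kind of CALayer drop shadow involved might look like this (splitHalfView is a placeholder for one of the split image views, and the values are assumptions, not necessarily what I used):
// Hedged sketch: a basic CALayer drop shadow on one half of the split view.
// Requires <QuartzCore/QuartzCore.h>; splitHalfView is a placeholder name.
splitHalfView.layer.shadowColor = [UIColor blackColor].CGColor;
splitHalfView.layer.shadowOffset = CGSizeMake(0, 3);
splitHalfView.layer.shadowOpacity = 0.7;
splitHalfView.layer.shadowRadius = 5.0;
// A shadowPath avoids expensive offscreen rendering during the animation.
splitHalfView.layer.shadowPath = [UIBezierPath bezierPathWithRect:splitHalfView.bounds].CGPath;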
EDIT:3
http://github.com/jwilling/JWFolders

The basic idea is to take a picture of your current state and split it somewhere, then animate both parts by setting a new frame. I don't know how to take a screenshot programmatically, so I can't provide sample code…
EDIT: hey hey it's not looking great but it works ^^
// wouldn't be sharp on retina displays, instead use "withOptions" and set scale to 0.0
// UIGraphicsBeginImageContext(self.view.bounds.size);
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect fstRect = CGRectMake(0, 0, 320, 200);
CGRect sndRect = CGRectMake(0, 200, 320, 260); // was 0,200,320,280
CGImageRef fImageRef = CGImageCreateWithImageInRect([f CGImage], fstRect);
UIImage *fCroppedImage = [UIImage imageWithCGImage:fImageRef];
CGImageRelease(fImageRef);
CGImageRef sImageRef = CGImageCreateWithImageInRect([f CGImage], sndRect);
UIImage *sCroppedImage = [UIImage imageWithCGImage:sImageRef];
CGImageRelease(sImageRef);
UIImageView *first = [[UIImageView alloc]initWithFrame:fstRect];
first.image = fCroppedImage;
//first.contentMode = UIViewContentModeTop;
UIImageView *second = [[UIImageView alloc]initWithFrame:sndRect];
second.image = sCroppedImage;
//second.contentMode = UIViewContentModeBottom;
UIView *blank = [[UIView alloc]initWithFrame:CGRectMake(0, 0, 320, 460)];
blank.backgroundColor = [UIColor darkGrayColor];
[self.view addSubview:blank];
[self.view addSubview:first];
[self.view addSubview:second];
[UIView animateWithDuration:2.0 animations:^{
second.center = CGPointMake(second.center.x, second.center.y+75);
}];
You can uncomment the two .contentMode lines and the quality will improve, but in my case the subview then has an offset of 10px or so (you can see it by setting a background color on both subviews).
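For the record, the debug trick mentioned above is just something like:
// Temporary debug colors to make any offset between the two halves visible.
first.backgroundColor = [UIColor redColor];
second.backgroundColor = [UIColor blueColor];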
EDIT 2: OK, found that bug. I had used the whole 320x480 screen, but the status bar has to be cut off, so it should be 320x460 and everything works great ;)

Instead of taking a snapshot of the view, you could use a separate view for each row of icons. You'll have to do a bit more work with repositioning stuff, but the rows won't be static when the folder is open (in other words, they'll keep redrawing as necessary).
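A rough sketch of that approach, assuming four rows of icons and a fixed row height (all the numbers and names here are placeholders, not taken from the snapshot code above):
// Hedged sketch: one container view per icon row; rows below the split slide
// down to reveal the folder view, so the rows stay live instead of being snapshots.
CGFloat rowHeight = 96.0;          // placeholder row height
NSUInteger splitRowIndex = 2;      // rows at/after this index move down
CGFloat revealHeight = 150.0;      // placeholder height of the revealed folder view
NSMutableArray *rowViews = [NSMutableArray array];
for (NSUInteger i = 0; i < 4; i++) {
    UIView *row = [[UIView alloc] initWithFrame:CGRectMake(0, i * rowHeight, 320, rowHeight)];
    // ...add this row's icon buttons as subviews here...
    [rowViews addObject:row];
    [self.view addSubview:row];
}
// The folder content sits underneath, at the split position.
UIView *folderView = [[UIView alloc] initWithFrame:CGRectMake(0, splitRowIndex * rowHeight, 320, revealHeight)];
folderView.backgroundColor = [UIColor darkGrayColor];
[self.view insertSubview:folderView atIndex:0];
[UIView animateWithDuration:0.3 animations:^{
    for (NSUInteger i = splitRowIndex; i < [rowViews count]; i++) {
        UIView *row = [rowViews objectAtIndex:i];
        row.frame = CGRectOffset(row.frame, 0, revealHeight);
    }
}];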

I took relikd's code as a base and made it a bit more dynamic.
You can specify the split position and direction when calling the function, and I added a border to the split images.
#define splitAnimationTime 0.5
- (void)split:(SplitDirection)splitDirection
atYPosition:(int)splitYPosition
withRevealedViewHeight:(int)revealedViewHeight{
// wouldn't be sharp on retina displays, instead use "withOptions" and set scale to 0.0
// UIGraphicsBeginImageContext(self.view.bounds.size);
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect fullScreenRect = [self getScreenFrameForCurrentOrientation];
CGRect upperSplitRect = CGRectMake(0, 0,fullScreenRect.size.width, splitYPosition);
CGRect lowerSplitRect = CGRectMake(0, splitYPosition, fullScreenRect.size.width, fullScreenRect.size.height-splitYPosition);
CGImageRef upperImageRef = CGImageCreateWithImageInRect([f CGImage], upperSplitRect);
UIImage *upperCroppedImage = [UIImage imageWithCGImage:upperImageRef];
CGImageRelease(upperImageRef);
CGImageRef lowerImageRef = CGImageCreateWithImageInRect([f CGImage], lowerSplitRect);
UIImage *lowerCroppedImage = [UIImage imageWithCGImage:lowerImageRef];
CGImageRelease(lowerImageRef);
UIImageView *upperImage = [[UIImageView alloc]initWithFrame:upperSplitRect];
upperImage.image = upperCroppedImage;
//first.contentMode = UIViewContentModeTop;
UIView *upperBorder = [[UIView alloc]initWithFrame:CGRectMake(0, splitYPosition, fullScreenRect.size.width, 1)];
upperBorder.backgroundColor = [UIColor whiteColor];
[upperImage addSubview:upperBorder];
UIImageView *lowerImage = [[UIImageView alloc]initWithFrame:lowerSplitRect];
lowerImage.image = lowerCroppedImage;
//second.contentMode = UIViewContentModeBottom;
UIView *lowerBorder = [[UIView alloc]initWithFrame:CGRectMake(0, 0, fullScreenRect.size.width, 1)];
lowerBorder.backgroundColor = [UIColor whiteColor];
[lowerImage addSubview:lowerBorder];
int revealedViewYPosition = splitYPosition;
if(splitDirection==SplitDirectionUp){
revealedViewYPosition = splitYPosition - revealedViewHeight;
}
UIView *revealedView = [[UIView alloc]initWithFrame:CGRectMake(0, revealedViewYPosition, fullScreenRect.size.width, revealedViewHeight)];
revealedView.backgroundColor = [UIColor scrollViewTexturedBackgroundColor];
[self.view addSubview:revealedView];
[self.view addSubview:upperImage];
[self.view addSubview:lowerImage];
[UIView animateWithDuration:splitAnimationTime animations:^{
if(splitDirection==SplitDirectionUp){
upperImage.center = CGPointMake(upperImage.center.x, upperImage.center.y-revealedViewHeight);
} else { //assume down
lowerImage.center = CGPointMake(lowerImage.center.x, lowerImage.center.y+revealedViewHeight);
}
}];
}
This means I can call it like this:
[self split:SplitDirectionUp atYPosition:500 withRevealedViewHeight:200];
I used these convenience functions in the updated split function:
- (CGRect)getScreenFrameForCurrentOrientation {
return [self getScreenFrameForOrientation:[UIApplication sharedApplication].statusBarOrientation];
}
- (CGRect)getScreenFrameForOrientation:(UIInterfaceOrientation)orientation {
UIScreen *screen = [UIScreen mainScreen];
CGRect fullScreenRect = screen.bounds;
BOOL statusBarHidden = [UIApplication sharedApplication].statusBarHidden;
//implicitly in Portrait orientation.
if(orientation == UIInterfaceOrientationLandscapeRight || orientation == UIInterfaceOrientationLandscapeLeft){
CGRect temp = CGRectZero;
temp.size.width = fullScreenRect.size.height;
temp.size.height = fullScreenRect.size.width;
fullScreenRect = temp;
}
if(!statusBarHidden){
CGFloat statusBarHeight = 20;
fullScreenRect.size.height -= statusBarHeight;
}
return fullScreenRect;
}
and this enum:
typedef enum SplitDirection
{
SplitDirectionDown,
SplitDirectionUp
}SplitDirection;
Adding a return-to-normal function and adding the arrow would be a great addition.
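For instance, a return-to-normal could be sketched like this, assuming the split function keeps the three views it creates in ivars (upperImageView, lowerImageView and revealedView here are assumptions, not part of the code above):
// Hedged sketch: reverse the split animation and remove the temporary views.
// Assumes upperImageView, lowerImageView and revealedView were stored as ivars.
- (void)unsplit:(SplitDirection)splitDirection
withRevealedViewHeight:(int)revealedViewHeight {
    [UIView animateWithDuration:splitAnimationTime animations:^{
        if (splitDirection == SplitDirectionUp) {
            upperImageView.center = CGPointMake(upperImageView.center.x, upperImageView.center.y + revealedViewHeight);
        } else { // assume down
            lowerImageView.center = CGPointMake(lowerImageView.center.x, lowerImageView.center.y - revealedViewHeight);
        }
    } completion:^(BOOL finished) {
        // Remove the snapshot halves and the revealed view to show the live view again.
        [upperImageView removeFromSuperview];
        [lowerImageView removeFromSuperview];
        [revealedView removeFromSuperview];
    }];
}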

Related

iOS Custom UIImagePickerController Camera Crop to Square

I'm trying to create a camera like Instagram's, where the user sees a box and the image is cropped to that box. For some reason the camera doesn't go all the way to the bottom of the screen and cuts off near the end. I'm also wondering how I would go about cropping the image to be exactly 320x320 inside that square?
Here's the easiest way to do it (without reimplementing UIImagePickerController). First, use an overlay to make the camera field look square. Here's an example for 3.5" screens (you'd need to update it to work for iPhone 5):
UIImagePickerController *imagePickerController = [[UIImagePickerController alloc] init];
imagePickerController.sourceType = source;
if (source == UIImagePickerControllerSourceTypeCamera) {
//Create camera overlay
CGRect f = imagePickerController.view.bounds;
f.size.height -= imagePickerController.navigationBar.bounds.size.height;
CGFloat barHeight = (f.size.height - f.size.width) / 2;
UIGraphicsBeginImageContext(f.size);
[[UIColor colorWithWhite:0 alpha:.5] set];
UIRectFillUsingBlendMode(CGRectMake(0, 0, f.size.width, barHeight), kCGBlendModeNormal);
UIRectFillUsingBlendMode(CGRectMake(0, f.size.height - barHeight, f.size.width, barHeight), kCGBlendModeNormal);
UIImage *overlayImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *overlayIV = [[UIImageView alloc] initWithFrame:f];
overlayIV.image = overlayImage;
[imagePickerController.cameraOverlayView addSubview:overlayIV];
}
imagePickerController.delegate = self;
[self presentViewController:imagePickerController animated:YES completion:nil];
Then, after you get a picture back from the UIImagePickerController, crop it to a square with something like this:
//Crop the image to a square
CGSize imageSize = image.size;
CGFloat width = imageSize.width;
CGFloat height = imageSize.height;
if (width != height) {
CGFloat newDimension = MIN(width, height);
CGFloat widthOffset = (width - newDimension) / 2;
CGFloat heightOffset = (height - newDimension) / 2;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(newDimension, newDimension), NO, 0.);
[image drawAtPoint:CGPointMake(-widthOffset, -heightOffset)
blendMode:kCGBlendModeCopy
alpha:1.];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
And you're done.
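For context, here's roughly where that cropping code would live, in the standard UIImagePickerControllerDelegate callback (a sketch; the crop body is the snippet above):
// Hedged sketch: the square-crop snippet above placed inside the picker delegate callback.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ...run the square-cropping code from above on `image` here...
    [picker dismissViewControllerAnimated:YES completion:nil];
}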
@Anders' answer was very close to correct for me on iPhone 5. I made the following modification to add an overlay hardcoded for iPhone 5:
CGRect f = imagePickerController.view.bounds;
f.size.height -= imagePickerController.navigationBar.bounds.size.height;
UIGraphicsBeginImageContext(f.size);
[[UIColor colorWithWhite:0 alpha:.5] set];
UIRectFillUsingBlendMode(CGRectMake(0, 0, f.size.width, 124.0), kCGBlendModeNormal);
UIRectFillUsingBlendMode(CGRectMake(0, 444, f.size.width, 52), kCGBlendModeNormal);
UIImage *overlayImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *overlayIV = [[UIImageView alloc] initWithFrame:f];
overlayIV.image = overlayImage;
overlayIV.alpha = 0.7f;
[imagePickerController setCameraOverlayView:overlayIV];
I hope this helps someone.

UIImageView pinch zooming in UIScrollView

I have pinch zooming working with a UIScrollView. The problem is the aspect fit of the image in the scroll view.
Currently I get the result shown in the first image below, but I want the image to fit the screen as in the second image, and the same behavior in landscape. How can I achieve this?
Below is the code:
- (void)viewDidLoad
{
UIImage *image = [UIImage imageWithData:appDelegate.selectedOriginalImage];
imgView.image = image;
CGSize imageSize = image.size;
CGRect rect = CGRectMake(0, 0, imageSize.width, imageSize.height);
imgView.frame = rect;
scrollView.contentSize = CGSizeMake(imgView.frame.size.width, imgView.frame.size.height);
scrollView.maximumZoomScale = 4.0;
scrollView.minimumZoomScale = 1.0;
scrollView.delegate = self;
}
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
UIImage *image = [UIImage imageWithData:appDelegate.selectedOriginalImage];
CGSize imageSize = image.size;
CGRect rect = CGRectMake(0, 0, imageSize.width, imageSize.height);
imgView.frame = rect;
[scrollView setContentOffset:CGPointMake(0, 0)];
return YES;
}
You are setting the UIImageView to the original size of the image instead of the size of the UIScrollView that contains it.
- (void)viewDidLoad
{
UIImage *image = [UIImage imageWithData:appDelegate.selectedOriginalImage];
imgView.image = image;
imgView.frame = scrollView.bounds;
[imgView setContentMode:UIViewContentModeScaleAspectFit];
scrollView.contentSize = CGSizeMake(imgView.frame.size.width, imgView.frame.size.height);
scrollView.maximumZoomScale = 4.0;
scrollView.minimumZoomScale = 1.0;
scrollView.delegate = self;
}
Remember to return imgView from the viewForZoomingInScrollView: delegate method.
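For reference, that delegate method is just:
// UIScrollViewDelegate: tells the scroll view which subview to zoom.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return imgView;
}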

Trying to overlap two images and showing a overlapped imaged in a third image

I am trying to overlap two local images and show the overlapped result in a third image.
I am using this code, but the simulator shows nothing.
- (void)viewDidLoad
{
[super viewDidLoad];
image1 = [[UIImage alloc]init];
image1 = [UIImage imageNamed:@"iphone.png"];
imageA = [[UIImageView alloc]initWithImage:image1];
[self merge];
}
-(void)merge
{
CGSize size = CGSizeMake(320, 480);
UIGraphicsBeginImageContext(size);
CGPoint thumbPoint = CGPointMake(0,0);
imageview.image = imageA.image;
[imageA.image drawAtPoint:thumbPoint];
imageB = [[UIImage alloc]init];
imageB = [UIImage imageNamed:@"Favorites.png"];
CGPoint starredPoint = CGPointMake(0, 0);
[imageB drawAtPoint:starredPoint];
UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageview.image = imageC;
[self.view addSubview:imageview];
}
I can't figure out where I am making a mistake.
Any help would be appreciated.
Remove all the code from everywhere except the code below in merge.
-(void)merge
{
CGSize size = CGSizeMake(320, 480);
UIGraphicsBeginImageContext(size);
CGPoint point1 = CGPointMake(0,0);
// The second point has to be somewhere different than the first point; otherwise the second image will sit directly on top of the first image and you won't even know that the two images are there.
CGPoint point2 = CGPointMake(100,100);
UIImage *imageOne = [UIImage imageNamed:@"Image1.png"];
[imageOne drawAtPoint:point1];
UIImage *imageTwo = [UIImage imageNamed:@"Image2.png"];
// If you want the above image to have some blending, then you can do some thing like below.
// [imageTwo drawAtPoint:point2 blendMode:kCGBlendModeMultiply alpha:0.5];
[imageTwo drawAtPoint:point2];
UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(100,100,200,200)];
iv.image=imageC;
[self.view addSubview:iv];
}
Here's a general-purpose "merge" function written as a UIImage category... it allows image overlay/underlay.
http://saveme-dot-txt.blogspot.com/2011/06/merge-image-function.html
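In case the link goes stale, a minimal sketch of such a category might look like this (the method name and behavior are assumptions, not necessarily what the linked post implements):
// Hedged sketch of a UIImage "merge" category: draws another image on top of the receiver.
@interface UIImage (Merge)
- (UIImage *)imageByOverlayingImage:(UIImage *)overlay atPoint:(CGPoint)point;
@end

@implementation UIImage (Merge)
- (UIImage *)imageByOverlayingImage:(UIImage *)overlay atPoint:(CGPoint)point
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0.0);
    [self drawAtPoint:CGPointZero];   // the receiver acts as the underlay
    [overlay drawAtPoint:point];      // the passed-in image is drawn on top
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}
@end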

UIImageView round corner has white background

The rounded corners have a white background.
I followed other SO answers but don't know why I'm getting these white areas.
Below is the code.
UIView* testView = [[[UIView alloc] initWithFrame: self.animationView.bounds] autorelease];
UIImageView* testImageView = [[[UIImageView alloc] initWithImage:backImage] autorelease];
[testView addSubview: testImageView];
testImageView.backgroundColor = [UIColor clearColor];
CALayer* layer = [testView layer];
bool prev = layer.masksToBounds;
layer.masksToBounds = YES;
layer.cornerRadius = 30;
testView.clipsToBounds = YES;
UIImage* image = [UIImage captureView: testView];
// this image has white regions in the four corners
// when viewed in the iPhone photo album
+ (UIImage*)captureView:(UIView*)view
{
CGSize size = view.bounds.size;
CGContextRef context = CreateARGBBitmapContext(size);
CGContextTranslateCTM(context, 0, size.height);
CGContextScaleCTM(context, 1.0, -1.0);
[view.layer renderInContext: context];
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage* img = [UIImage imageWithCGImage: imageRef];
CGImageRelease(imageRef);
CGContextRelease(context);
return img;
}
Add [testImageView setClipsToBounds:YES].
It might be that you have to set testImageView.opaque = NO.

iPhone: Create a screenshot programmatically and then add it as a subview

I would like to call a method which takes a screenshot and then load this screenshot as a subview. I am using Apple's sample code for taking a screenshot (see below) and was trying to use the resulting image in my code. However, I don't really know how to get the image from the method into my code. This is what I tried; it's obviously wrong, but it's all I could come up with:
// Test Screenshot:
screenShot = [UIImage screenshot]; // THIS DOESN'T WORK
screenShotView = [[UIImageView alloc] initWithImage:screenShot];
[screenShotView setFrame:CGRectMake(0, 0, 320, 480)];
[self.view addSubview:screenShotView];
And this is Apple's sample code for the method:
- (UIImage*)screenshot
{
NSLog(@"Shot");
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Any help would be very much appreciated! Thanks.
EDIT: This is the viewDidLoad method in which I create a TextView and then try to capture a screen shot of it:
- (void)viewDidLoad {
// Setup TextView:
NSString* someText = @"Some Text";
CGRect frameText = CGRectMake(0, 0, 320, 480);
aTextView = [[UITextView alloc] initWithFrame:frameText];
aTextView.text = someText;
[self.view addSubview:aTextView];
// Test Screenshot:
screenShotView = [[UIImageView alloc] initWithImage:[self screenshot]];
[screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
[self.view addSubview:screenShotView];
[self.view bringSubviewToFront:screenShotView];
[super viewDidLoad];
}
To use the image just change this line
screenShot = [UIImage screenshot];
To
screenShot = [self screenshot];
Edit: Check whether [self screenshot] returns a valid image or nil.
- (void)viewDidLoad {
// Setup TextView:
NSString* someText = @"Some Text";
CGRect frameText = CGRectMake(0, 0, 320, 480);
aTextView = [[UITextView alloc] initWithFrame:frameText];
aTextView.text = someText;
[self.view addSubview:aTextView];
// Test Screenshot:
UIImage *screenShotImage = [self screenshot];
if(screenShotImage){
screenShotView = [[UIImageView alloc] initWithImage:screenShotImage];
[screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
[self.view addSubview:screenShotView];
[self.view bringSubviewToFront:screenShotView];
}else
NSLog(@"Something went wrong in the screenshot method, the image is nil");
[super viewDidLoad];
}