I want to scale an image in my iPhone app, but not the entire image. I just want to scale specific parts, like the bottom part or the middle part. How do I do it?
Please help.
Thanks
It sounds like you want to do a form of 9-slice scaling or 3-slice scaling. Let's say you have the following image:
and you want to make it look like this:
(the diagonal end pieces do not stretch at all, the top and bottom pieces stretch horizontally, and the left and right pieces stretch vertically)
To do this, use -stretchableImageWithLeftCapWidth:topCapHeight: in iOS 4.x and earlier, or -resizableImageWithCapInsets: starting with iOS 5.
UIImage *myImage = [UIImage imageNamed:@"FancyButton"];
UIImage *myResizableImage = [myImage resizableImageWithCapInsets:UIEdgeInsetsMake(21.0, 13.0, 21.0, 13.0)];
[anImageView setImage:myResizableImage];
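If you still need to support iOS 4.x and earlier, the older API takes single cap values instead of insets. A sketch assuming caps matching the insets above:
UIImage *myStretchableImage = [myImage stretchableImageWithLeftCapWidth:13.0 topCapHeight:21.0];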
To help visualize the scaling, here is an image showing the above cap insets:
I'm not aware of any way to adjust the scale of just a part of a UIImage. I'd approach it slightly differently: create separate images from your primary image using CGImageCreateWithImageInRect, then scale the separate images at the different rates you require (see the sketch after the links below).
See:
Cropping a UIImage
CGImage Reference
Quartz 2D Programming Guide
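A minimal sketch of that approach, assuming you want to stretch just the bottom strip of an image (the rects and the 2x factor are illustrative values, not from the question):
UIImage *sourceImage = [UIImage imageNamed:@"FancyButton"];

// Crop out the bottom 20 points of the source image.
CGRect bottomRect = CGRectMake(0.0, sourceImage.size.height - 20.0, sourceImage.size.width, 20.0);
CGImageRef bottomRef = CGImageCreateWithImageInRect([sourceImage CGImage], bottomRect);
UIImage *bottomPart = [UIImage imageWithCGImage:bottomRef];
CGImageRelease(bottomRef);

// Redraw the cropped strip at twice its original height.
CGSize newSize = CGSizeMake(bottomPart.size.width, bottomPart.size.height * 2.0);
UIGraphicsBeginImageContext(newSize);
[bottomPart drawInRect:CGRectMake(0.0, 0.0, newSize.width, newSize.height)];
UIImage *stretchedBottom = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You would then draw the untouched parts and the stretched strip back into a single context at their final positions.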
I want to cut a part out of a large image using the shape of a small image. Is there any way to get the exact shape of the small image instead of just its bounding rect (boundingBox)? Please reply.
You could do it by masking the bigger sprite with the smaller one.
Here's a great tutorial on that:
http://www.raywenderlich.com/4421/how-to-mask-a-sprite-with-cocos2d-1-0
And here is the tool they use to play with the blending functions:
http://www.andersriggelsen.dk/glblendfunc.php
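If you're not using cocos2d, a plain Core Graphics sketch of the same idea follows. It assumes the small shape image is grayscale with no alpha channel (a requirement of CGImageMaskCreate); note that black areas of the mask show through and white areas are clipped. The variable names are illustrative:
CGImageRef maskSource = [smallShapeImage CGImage];
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskSource),
                                    CGImageGetHeight(maskSource),
                                    CGImageGetBitsPerComponent(maskSource),
                                    CGImageGetBitsPerPixel(maskSource),
                                    CGImageGetBytesPerRow(maskSource),
                                    CGImageGetDataProvider(maskSource),
                                    NULL, false);
CGImageRef maskedRef = CGImageCreateWithMask([bigImage CGImage], mask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
CGImageRelease(mask);
CGImageRelease(maskedRef);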
I'm building a censoring app of sorts. I've gotten far enough that I can completely pixelate an image taken with my iPhone.
But I want to achieve in the end an image like this: http://images-mediawiki-sites.thefullwiki.org/11/4/8/8/8328511755287292.jpg
So my thought was to fully pixelate my image and then add a mask on top of it to achieve the desired effect. In terms of layers it goes: originalImage + maskedPixelatedVersionOfImage. I was thinking of animating the mask when touching the image, scaling the mask to the desired size: the longer you hold your finger on the image, the bigger the mask becomes...
After some searching, I guess this can be done using CALayers and CAAnimation. But how do I then composite those layers into an image that I can save to the photo album on the iPhone?
Am I taking the right approach here?
EDIT:
Okay, I guess Ole's solution is the correct one, though I'm still not getting what I want. The code I use is:
CALayer *maskLayer = [CALayer layer];
CALayer *mosaicLayer = [CALayer layer];
// Mask image ends with 0.15 opacity on both sides. Set the background color of the layer
// to the same value so the layer can extend the mask image.
mosaicLayer.contents = (id)[img CGImage];
mosaicLayer.frame = CGRectMake(0,0, img.size.width, img.size.height);
UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"mask" ofType:@"png"]];
maskLayer.contents = (id)[maskImg CGImage];
maskLayer.frame = CGRectMake(100,150, maskImg.size.width, maskImg.size.height);
mosaicLayer.mask = maskLayer;
[imageView.layer addSublayer:mosaicLayer];
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So on my imageView I called setImage: with the original (unedited) version of the photo. On top of that I add a sublayer, mosaicLayer, which has a mask property: maskLayer. I thought that by rendering the root layer of the imageView, everything would turn out OK. Is that not correct?
Also, I figured out something else: my mask is stretched and rotated, which I'm guessing has something to do with imageOrientation? I noticed this by accidentally saving mosaicLayer to my library, which also explains the problem I had where the mask seemed to mask the wrong part of my image...
To render a layer tree, put all layers in a common container layer and call:
UIGraphicsBeginImageContext(containerLayer.bounds.size);
[containerLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
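Applied to the code from the question, that might look like the following sketch (containerLayer and baseLayer are names introduced here):
CALayer *containerLayer = [CALayer layer];
containerLayer.frame = CGRectMake(0, 0, img.size.width, img.size.height);

CALayer *baseLayer = [CALayer layer];     // the unedited photo
baseLayer.contents = (id)[img CGImage];
baseLayer.frame = containerLayer.bounds;

[containerLayer addSublayer:baseLayer];
[containerLayer addSublayer:mosaicLayer]; // the masked, pixelated layer from above

// ...then render containerLayer into an image context as shown above.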
If you're willing to drop support for pre-iPhone 3G S devices (iPhone and iPhone 3G), I'd suggest using OpenGL ES 2.0 shaders for this. While it may be easy to overlay a CALayer containing a pixelated version of the image, I think you'll find the performance to be lacking.
In my tests, performing a simple CPU-based calculation on every pixel of a 480 x 320 image led to a framerate of about 4 FPS on an iPhone 4. You might be able to sample only a fraction of these pixels to achieve the desired effect, but it still will be a slow operation to redraw a pixelated image to match the live video.
Instead, if you use an OpenGL ES 2.0 fragment shader to process the incoming live video image, you should be able to take in the raw camera image, apply this filter selectively over the desired area, and either display or save the resulting camera image. This processing will take place almost entirely on the GPU, which I've found to do simple operations like this at 60 FPS on the iPhone 4.
While getting a fragment shader to work just right can require a little setup, you might be able to use this sample application I wrote for processing camera input and doing color tracking as a decent starting point. You might also look at the touch gesture I use there, where I take the initial touch-down point as the location to center an effect around and a subsequent drag distance to control the strength or radius of the effect.
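For reference, the pixelation itself is only a few lines of GLSL. A sketch of a pixellate fragment shader, stored as an Objective-C string constant; the uniform names and the fraction-of-width parameterization are assumptions, not taken from the sample app:
static NSString *const kPixellateFragmentShader = @"\
varying highp vec2 textureCoordinate;\n\
uniform sampler2D inputImageTexture;\n\
uniform highp float pixelFraction; // block size as a fraction of the image width\n\
void main()\n\
{\n\
    // Snap each coordinate to the top-left corner of its block, so the\n\
    // whole block samples the same texel and reads as one large pixel.\n\
    highp vec2 sampleDivisor = vec2(pixelFraction);\n\
    highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor);\n\
    gl_FragColor = texture2D(inputImageTexture, samplePos);\n\
}";
To restrict the effect to the touched area, the shader could additionally take center and radius uniforms and fall back to the unmodified texel outside that radius.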
I'm using the UIGraphicsGetImageFromCurrentImageContext() function to capture the screen contents into a UIImage object (previously rendered into an image context). This works great on both the simulator and a real device; however, on the latter the resulting image has a few pixels with distorted colors, as seen here:
http://img300.imageshack.us/img300/2788/screencap.png
Please notice the few fuchsia pixels in the top navigation bar, at both sides of the search field and to the right of the button. There are also such pixels to the right of the bottom-left button.
The code I'm using to capture the screen view into a UIImage object is pretty straightforward:
UIGraphicsBeginImageContext(self.view.window.frame.size);
[self.view.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *anImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
One thing to note is that all the graphics that get distorted belong to custom PNG files, used to draw the search field background as well as the buttons' backgrounds.
Does anyone know what could possibly be causing this strange color distortion?
Best regards,
Just checked my own code that is doing the same thing you are. Yours is nearly identical to mine, except that I am asking the view's layer to render instead of the window's, i.e.:
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
I don't know why that would make a difference, but it's worth a try.
Solved it by using the just-approved private function UIGetScreenImage().
For more info, please check http://www.tuaw.com/2009/12/15/apple-relents-and-is-now-allowing-uigetscreenimage-for-app-st/ and https://devforums.apple.com/message/149553
Regards,
This article explains the issue with image corruption (caused by partially transparent pixels) and provides a workaround which corroborates Chris's comment:
http://www.kaoma.net/iphone/?p=9
UIGetScreenImage() is quite annoying when you just want to capture a view.
I found a nice trick, just re-save all your PNG images into TIFF format using Preview.app :)
I'm trying to resize an image using stretchableImageWithLeftCapWidth:. It works on the simulator, but on the device vertical green bars appear.
I've tried using imageNamed, imageWithContentsOfFile and imageWithData to load the image; it doesn't change anything.
UIImage *bottomImage = [[UIImage imageWithData:
    [NSData dataWithContentsOfFile:
        [NSString stringWithFormat:@"%@/bottom_part.png",
            [[NSBundle mainBundle] resourcePath]]]]
    stretchableImageWithLeftCapWidth:27 topCapHeight:9];
UIImageView *bottomView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 200+73, 100, 73)];
[self.view addSubview:bottomView];
UIGraphicsBeginImageContext(CGSizeMake(100, 73));
[bottomImage drawInRect:CGRectMake(0, 0, 100, 73)];
UIImage *bottomResizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
bottomView.image = bottomResizedImage;
See the result: the green bars shouldn't be there; they don't appear on the simulator.
Screenshot: http://www.quicksnapper.com/files/5161/96232162149F1C14751048_m.png
Did you figure this out?
Seems like it might be a bug with UIGraphicsGetImageFromCurrentImageContext. If I draw an image that has transparency, I get red/green artifacts. These only appear on the device (not in the sim). Also, if I remove the transparency from the image, the artifacts go away.
Update: some more weirdness. I was using PNGs before, so I tried a transparent GIF instead. Using a GIF, the artifact problem shows up in the sim as well.
Victory! Found a solution:
Turn off 'Compress PNG Files' in the build settings for your project. Disabling this makes the PNG transparency work without any artifacts.
It seems like you're writing a lot of unnecessary code, but perhaps that's just because it's out of context and there's more to it that I'm missing.
To get the image:
[[UIImage imageNamed:@"bottom_part.png"] stretchableImageWithLeftCapWidth:27 topCapHeight:9];
In what part of your code are you displaying this image? i.e., in which method are you drawing the images above?
Why not just use a UIImageView and put the image in there? Then you don't need to do any of the image-context drawing, etc.
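A sketch of that simpler path, reusing the sizes from the question:
UIImage *stretchable = [[UIImage imageNamed:@"bottom_part.png"]
    stretchableImageWithLeftCapWidth:27 topCapHeight:9];
UIImageView *bottomView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 273, 100, 73)];
bottomView.image = stretchable; // the image view stretches the caps to fill its bounds
[self.view addSubview:bottomView];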
In my case I had a UIImageView that I stretched horizontally, and I found a strange vertical white line in the stretched image.
The solutions above didn't work for me. However, the following did:
Cast the width value to an int (presumably this avoids a fractional-pixel frame, which would force interpolation):
myNewViewFrame.size.width = (int)newWidth;
myView.frame = myNewViewFrame;
Hope it works for you guys as well...
I spent 5 hours debugging this last night; disabling PNG compression in Xcode did nothing for me.
For anyone else running into vertical green bars/lines/artifacts with stretchableImageWithLeftCapWidth (in the UIImage API): this post is for you.
I have a 21x30 image for a custom bar-button background that I want to widen, but I got the same green stripes as the OP. I found a PNG created with Photoshop that worked fine; mine are made with GIMP.
Anyway, stripping ALL chunks from the files (except the three essential ones) made no difference. Neither did disabling PNG compression.
What did work for me was this:
Add ONE empty pixel row above my image (which is now 21x31).
Set topCapHeight:0 when creating the scaled image.
When I use my scaled image, I draw into a CGContext, which is later used to make a UIImage. I use this to draw:
[image drawInRect:CGRectMake(0,-2,width,32)];
This makes the problem go away (for me).
I assume the bug/issue has to do with not scaling vertically when drawing, so I force scaling of the first source-image row (into two rows), which is drawn outside my composition.
I hope this helps someone save 5 hours.
/Krisb
I stopped using stretchableImageWithLeftCapWidth in favor of UIView's much more well-behaved contentStretch property. I've never seen the green bars you are describing, but in general I would recommend using contentStretch instead of stretchableImageWithLeftCapWidth.
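For reference, contentStretch takes a rect in unit coordinates (0..1). A minimal sketch, assuming an imageView whose center should stretch (note that Apple later deprecated contentStretch in iOS 6 in favor of resizableImageWithCapInsets:):
imageView.contentStretch = CGRectMake(0.5, 0.5, 0.0, 0.0); // stretch only the center point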
I tried to disable PNG compression like Nate proposed, but this wouldn't work for me (iOS 3.1.3). I then tried using TIFF images instead of PNG, which works.
I have found similar issues with contentStretch (on any UIView that has drawn content) when using a value of (0.5, 0.5, 0, 0), i.e. stretching on the center pixel.
I have found that only the iPhone 3G (and possibly the original iPhone) exhibits this problem; the iPhone 4 and 3GS are OK. So I assume this is a problem with the old graphics hardware.
A way I found around the problem was to stretch a slightly larger center area,
e.g. (0.4, 0.4, 0.1, 0.1)
I am developing a game where I want to magnify the image where a magnifier image is placed.
For that I am using the concept of masking. After masking I zoom the image, but it looks blurry, and I want the image to be clearer, like we're looking through a rifle scope. If anyone has a solution, kindly reply.
Are you sure that the problem is the masking?
Perhaps your resources are too low-resolution? High-resolution images scaled down always look better than low-resolution images scaled up.
Maybe you need to look at the problem backwards... so that your image, when viewed through the rifle scope, is seen at 1:1 resolution, and when not viewed through the scope it is zoomed out (1:2 resolution?). That way your 'normal' mode is the zoomed-out mode and the "magnified view" is actually just the image at 1:1.
If you have a UIImage whose size is 293x184 but you create a UIImageView with an initial size of 40x30, the iPhone SCALES the UIImage to fit according to the contentMode property. The default contentMode is UIViewContentModeScaleToFill, which scales your image.
So even though you started with a large image, it is now only 40x30 and rendered at 40x30. When you zoom, it is STILL 40x30, but rendered at some larger size, which is what causes the blur.
One solution would be to replace the image after the zoom, then you would have a completely new UIImage at full resolution.
[imageView setFrame:reallyBigFrame]; // imageView: the UIImageView holding the photo
[imageView setImage:newUIImage];
Another option would be to initially place the UIImage in a full-size 293x184 UIImageView, then use an affine transform to scale it down:
view.transform = CGAffineTransformScale(view.transform, 0.25, 0.25);