I added two UIViews to ViewController.view and set a square image as each view's layer.mask, so it looks like one square sliced into two pieces, then added an image view over them with addSubview.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any way to capture it so it looks like picture no. 1 after the mask is applied?
Below is Apple's documentation note regarding renderInContext:
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before that literally takes a screenshot of a UIView. I don't use it because it doesn't fit my needs, but maybe you can use it:
UIImage *img;
// Render the view's layer into an image context sized to the view.
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, UIViewYouWantToCapture.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When we apply a mask to an image, the resulting image keeps all of its pixels: the visible region has alpha 1 and the masked-out region has alpha 0.
So when we capture an image of the view, the complete image is still there (we only see half of it because the other half has alpha = 0), and we end up with a screenshot of the whole, unmasked view.
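If renderInContext: skips the layer mask (as the Apple note above suggests for older OS versions), one workaround is to apply the mask yourself with Core Graphics when building the image you save. A rough, untested sketch, where viewToCapture, pieceImage, and maskImage are stand-ins for your view, the sliced square image, and the image you set as layer.mask:
UIGraphicsBeginImageContextWithOptions(viewToCapture.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect rect = viewToCapture.bounds;

CGContextSaveGState(ctx);
// Core Graphics coordinates are flipped relative to UIKit, so flip before clipping/drawing.
CGContextTranslateCTM(ctx, 0.0, rect.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
// Note: CGContextClipToMask expects an image mask or a grayscale image without alpha.
CGContextClipToMask(ctx, rect, maskImage.CGImage);
CGContextDrawImage(ctx, rect, pieceImage.CGImage);
CGContextRestoreGState(ctx);

UIImage *captured = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// "captured" can then be written out with UIImageWriteToSavedPhotosAlbum().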
The short version: How do I know what region of a UIImageView contains the image, and not aspect ratio padding?
The longer version:
I have a UIImageView of fixed size as pictured:
I am loading photos into this UIViewController, and I want to retain the original photo's aspect ratio, so I set the contentMode to Aspect Fit. This ensures the entire photo is displayed within the UIImageView, but with the side effect of adding some padding (shown in red):
No problem so far.... But now I am doing face detection on the original image. The face detection code returns a list of CGRects which I then render on top of the UIImageView (I have a subclassed UIView and then laid out an instance in IB which is the same size and offset as the UIImageView).
This approach works great when the photo is not padded out to fit into the UIImageView. However, if there is padding, it introduces some skew, as seen here in green:
I need to take the image padding into account when rendering the boxes, but I do not see a way to retrieve it.
Since I know the original image size and the UIImageView size, I can do some algebra to calculate where the padding should be. However it seems like there is probably a way to retrieve this information, and I am overlooking it.
I do not use image views often, so this may not be the best solution. But since no one else has answered the question, I figured I'd throw out a simple mathematical solution that should solve your problem:
UIImage *selectedImage; // the image you want to display
UIImageView *imageView; // the image view that holds selectedImage

NSInteger heightOfView = imageView.frame.size.height;
NSInteger heightOfPicture = selectedImage.size.height; // note: this assumes the image is displayed at its original height (no scaling)

NSInteger yStartingLocationForGreenSquare; // set it to whatever the current location is

// take whatever you had it set to and add the value of the top padding
yStartingLocationForGreenSquare += (heightOfView - heightOfPicture) / 2;
So although there may be other solutions, this is a pretty simple formula to accomplish what you need. Hope it helps.
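If the image is scaled as well as padded by Aspect Fit, a slightly more general sketch (untested; the helper name is made up) computes the rect the image actually occupies inside the image view:
// Returns the frame the image occupies inside an aspect-fit UIImageView.
static CGRect AspectFitRectForImage(UIImage *image, UIImageView *imageView)
{
    CGSize viewSize = imageView.bounds.size;
    CGSize imageSize = image.size;
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);
    CGSize fitted = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
    // Aspect Fit centers the scaled image, so the leftover space (the padding) is split evenly.
    return CGRectMake((viewSize.width - fitted.width) / 2.0,
                      (viewSize.height - fitted.height) / 2.0,
                      fitted.width,
                      fitted.height);
}
Each face rect from the detector would then be multiplied by that scale and offset by the rect's origin before being drawn in the overlay view.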
In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid that makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is the Photos app able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    // Crop the tile's rect out of the full-size image.
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // imageWithCGImage: retains the CGImage, so release ours to avoid a leak
    return tileImage;
}
Here is the code for tiled image generation:
In the PhotoScroller source code, replace tileForScale:row:col: with the following:
inImage - the image you want to create tiles from
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    // Crop the tile's rect out of the full-size image.
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // imageWithCGImage: retains the CGImage, so release ours to avoid a leak
    return tileImage;
}
Regards,
Deepa
I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
Sorry Jonah, but I think that you cannot do what you want to.
I have been implementing a comic app using the same example as a reference and had the same doubt. Finally, I realized that even if you could load the image and cut it into tiles the first time you use it, you shouldn't. There are two reasons for that:
You do the tiling ahead of time to save time and be more responsive; loading and tiling a large image on the fly takes time.
The previous reason is particularly important the first time the user runs the app.
If these two reasons make no sense to you and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above will load the entire image into memory as a UIImage (the inImage parameter of the function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example, and the way it works with pre-generated tiles is only meant to demonstrate the idea behind CATiledLayer and make a working, self-contained project. It's straightforward to replace the image tile loading strategy: just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.
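To make that concrete, here is a rough, untested sketch of the idea: a UIView backed by CATiledLayer whose drawRect: crops tiles out of a single source image on demand with CGImageCreateWithImageInRect instead of loading pre-generated tile files. It ignores zoom levels and Retina scale for brevity, and the names (OnDemandTilingView, sourceImage) are made up, not part of PhotoScroller:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface OnDemandTilingView : UIView
@property (nonatomic, retain) UIImage *sourceImage;
@end

@implementation OnDemandTilingView

+ (Class)layerClass
{
    return [CATiledLayer class]; // back the view with a tiled layer
}

- (void)drawRect:(CGRect)rect
{
    // CATiledLayer calls drawRect: once per tile; rect is that tile's rect.
    CGImageRef tile = CGImageCreateWithImageInRect(self.sourceImage.CGImage, rect);
    if (tile) {
        [[UIImage imageWithCGImage:tile] drawInRect:rect];
        CGImageRelease(tile);
    }
}

@end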
I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iphone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
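For what it's worth, here is an untested end-to-end sketch of the bitmap-context route the question is already attempting (an alternative to the CGDataProvider route suggested above): draw the image into your own RGBA context, edit the buffer, and wrap the result back up as a UIImage. The helper name and the specific pixel edit are just placeholders:
static UIImage *ImageWithModifiedPixels(UIImage *source)
{
    CGImageRef cgImage = source.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4; // 4 bytes per pixel: RGBA

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!context) return nil;

    // Decode the source image into our known RGBA layout.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    unsigned char *pixels = (unsigned char *)CGBitmapContextGetData(context);
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            unsigned char *p = pixels + y * bytesPerRow + x * 4;
            p[0] = 255; // placeholder edit: max out the red channel
            // p[1] = green, p[2] = blue, p[3] = alpha (values are premultiplied by alpha)
        }
    }

    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newCGImage];
    CGImageRelease(newCGImage);
    CGContextRelease(context);
    return result;
}
Stripe-like artifacts usually point to a mismatch between the layout you assume (channel order, bytes per row) and the context's actual format, so deriving both from the same context you created, as above, tends to avoid that.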
I have a mask (loaded from a 256 grey PNG) that I want to apply to an image that's being used as part of my process for drawing a UITableViewCell's imageView.image property.
When the cell isn't selected/highlighted, I CGImageCreateWithMask with a square of the proper color and the mask, then drawAtPoint: it into the image I'm building. This works fine.
However, when the cell is selected or highlighted, I'd like to use the mask to punch through my image instead. That is, where my mask specifies full opacity, I want the image I'm building to be completely transparent so the table view's background is drawn through it. Where my mask specifies 0 opacity, I want the alpha channel untouched. I want nothing other than the alpha channel affected.
I guess what I mean is that I want to draw clearColor over a UIImage, with a varying level of opacity according to a mask.
First, what is this called? And second, how do I do it?
I think you have to manipulate the CALayers for that. You can use the mask property of the cell's CALayer (see the CALayer mask attribute).
That is, something along the lines of (if myMask is a descendant of UIView):
myCell.layer.mask = myMask.layer;
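Slightly expanded, and untested, that suggestion might look like this, assuming the mask mentioned in the question is loaded into an image view (the asset name is hypothetical):
UIImageView *myMask = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"mask.png"]];
myMask.frame = myCell.bounds;
// Opaque areas of the mask layer keep the cell's content; transparent areas punch
// through to whatever is behind the cell.
myCell.layer.mask = myMask.layer;
Note that CALayer masking uses the mask layer's alpha channel, so a grey-only PNG with no alpha may need to be converted into an alpha mask first.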
Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
What happens though, is that the image displays fine, but the version that gets written to the disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked, otherwise displaying the images in a table view gets awfully slow if I have to mask them every time.
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel; it doesn't have anything to do with the code that writes to disk, which is rather obvious, but I checked anyway.
How about drawing into a new image and then saving that?
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)
See the description in CGImageCreateWithMask in CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
It seems that, for some reason, the image mask is treated as a mask image (with the opposite alpha semantics) while saving. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
to correctly save with UIImagePNGRepresentation, there are several choices:
Use inverse version of the image mask.
Use "mask image" instead of "image mask".
Render to a bitmap context and then save that, as epatel mentioned.
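For the first choice, one hedged and untested way to get an inverse image mask is to rebuild it with an inverting decode array ({1.0, 0.0} flips how the samples are interpreted) before masking and saving; imageMask, data, and cachePath are the variables from the question:
CGImageRef maskRef = self.imageMask.CGImage; // assumed to be a greyscale image mask
const CGFloat invertedDecode[] = {1.0, 0.0};
CGImageRef invertedMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                            CGImageGetHeight(maskRef),
                                            CGImageGetBitsPerComponent(maskRef),
                                            CGImageGetBitsPerPixel(maskRef),
                                            CGImageGetBytesPerRow(maskRef),
                                            CGImageGetDataProvider(maskRef),
                                            invertedDecode,
                                            false);

// Mask with the inverted mask only for the copy that gets written to disk.
CGImageRef maskedRef = CGImageCreateWithMask([UIImage imageWithData:data].CGImage, invertedMask);
UIImage *imageToCache = [UIImage imageWithCGImage:maskedRef];
[UIImagePNGRepresentation(imageToCache) writeToFile:cachePath atomically:NO];

CGImageRelease(maskedRef);
CGImageRelease(invertedMask);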