Writing a masked image to disk as a PNG file - iPhone

Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at, which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
What happens, though, is that the image displays fine, but the version that gets written to disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted when compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked, otherwise displaying the images in a table view gets awfully slow if I have to mask them every time.
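For reference, the masking method from that link is essentially the following (paraphrased from memory rather than copied verbatim, so treat it as approximate):

- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    // Build an image mask from the mask picture's bitmap data.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    // Apply the mask to the downloaded image.
    CGImageRef maskedRef = CGImageCreateWithMask([image CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
    CGImageRelease(mask);
    CGImageRelease(maskedRef);
    return maskedImage;
}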
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel; it doesn't have anything to do with the code that writes to disk, which is rather obvious, but I checked anyway.

How about drawing into a new image and then saving that?
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)

See the description of CGImageCreateWithMask in the CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
It seems that, for some reason, the image mask is treated as a plain mask image when the image is saved. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
to correctly save with UIImagePNGRepresentation, there are several choices:
1. Use an inverted version of the image mask.
2. Use a "mask image" instead of an "image mask" (see the sketch below).
3. Render to a bitmap context and then save it, as epatel mentioned.
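To illustrate option 2 (my own sketch, not code from the linked thread): instead of building an image mask with CGImageMaskCreate, you can redraw the mask PNG as a plain DeviceGray image with no alpha and pass that to CGImageCreateWithMask. Note the inverted semantics quoted above: with a plain mask image, white is opaque and black is transparent, the opposite of an image mask, so the mask artwork may need to be inverted. maskUIImage and sourceImage stand in for whatever your maskImage:withMask: method receives:

CGImageRef maskRef = maskUIImage.CGImage;
size_t width  = CGImageGetWidth(maskRef);
size_t height = CGImageGetHeight(maskRef);

// Redraw the mask as an 8-bit DeviceGray bitmap with no alpha channel,
// which CGImageCreateWithMask treats as a "mask image" rather than an "image mask".
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, gray, kCGImageAlphaNone);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), maskRef);
CGImageRef grayMask = CGBitmapContextCreateImage(ctx);

CGImageRef maskedRef = CGImageCreateWithMask(sourceImage.CGImage, grayMask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];

CGImageRelease(maskedRef);
CGImageRelease(grayMask);
CGContextRelease(ctx);
CGColorSpaceRelease(gray);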

Related

How can I increase the bit depth of an image while keeping it transparent?

I am using UIGraphicsBeginImageContext(canvasRect.size) to export images, but because UIGraphicsBeginImageContext only creates an 8-bit context, the exported image loses the original image's color representation and ends up looking blurry.
So I changed the code to UIGraphicsBeginImageContextWithOptions(canvasRect.size, true, 1.0).
The image now exports cleanly with no loss of color representation, but transparency is no longer preserved because opaque was set to true.
Does anyone know how to increase the bit depth while keeping the image's transparency?
Alternatively, please let me know if there is any method other than UIGraphicsBeginImageContextWithOptions that can export an image while preserving its color representation.
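One possible direction (a sketch of mine, not an answer from the original thread): UIGraphicsImageRenderer replaces UIGraphicsBeginImageContextWithOptions on newer iOS versions, and its format object lets you keep the context non-opaque while requesting an extended-range backing store (iOS 12+), which uses a deeper pixel format than the default 8-bit one. Whether that restores enough of the color representation would need testing; canvasImage here is a placeholder for whatever you actually draw:

UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
format.opaque = NO;   // keep transparency
format.scale = 1.0;
if (@available(iOS 12.0, *)) {
    // Request a wide/deep pixel format instead of the default 8-bit sRGB one.
    format.preferredRange = UIGraphicsImageRendererFormatRangeExtended;
}
UIGraphicsImageRenderer *renderer =
    [[UIGraphicsImageRenderer alloc] initWithSize:canvasRect.size format:format];
UIImage *exported = [renderer imageWithActions:^(UIGraphicsImageRendererContext *context) {
    // Draw the canvas content here; canvasImage is a hypothetical stand-in.
    [canvasImage drawInRect:CGRectMake(0, 0, canvasRect.size.width, canvasRect.size.height)];
}];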

How to capture screen after applying mask to the layer in iOS?

I added two UIViews to ViewController.view and applied two square images to each view.layer.mask, to make it look like one square sliced into two pieces, and then added the image view over it with addSubview.

I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.

Is there any way to capture something that looks like picture no. 1 after applying the mask?
Below is the note from Apple's documentation regarding renderInContext:
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before, which literally does a print-screen of a UIView. I don't use it because it doesn't work well for my needs, but maybe you can use it:
UIImage *img;
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, self.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When we apply a mask to an image, the resulting image has an alpha of 1 in the masked (visible) region and 0 everywhere else.
When we capture an image of the view, the complete image is still there (we only see part of it because the rest has alpha = 0), so we end up with a screenshot of the complete view.
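If the app can require iOS 7 or later, one workaround (my suggestion, not from the answers above) is UIView's drawViewHierarchyInRect:afterScreenUpdates:, which snapshots what is actually rendered on screen, layer masks included, unlike renderInContext:. A minimal sketch, where viewToCapture is whichever view contains the masked layers:

UIGraphicsBeginImageContextWithOptions(viewToCapture.bounds.size, NO, 0.0);
// Draws the view as it appears on screen, including layer.mask effects.
[viewToCapture drawViewHierarchyInRect:viewToCapture.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, NULL);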

What's the difference between a CGImageRef that is a mask and one that is not a mask?

I discovered that CGBitmapContextCreateImage() creates an image which is not necessarily a mask compatible with CGContextClipToMask(). But when using CGImageMaskCreate(), the CGImageRef is always a mask that works with CGContextClipToMask(). Now, what is so special about a mask vs. a "normal" image?
My guess is that a mask is grayscale only, whereas a CGImageRef created with CGBitmapContextCreateImage() may have RGBA values that confuse CGContextClipToMask(). I couldn't find the spot in the documentation where the exact difference between masks and CG images is explained.
But it seems that a Core Graphics image != a mask, while a mask == a Core Graphics image.
Every value in an image, be it RGB, CMYK or Greyscale, represents a position in a particular colorspace. It is meaningful to ask "What would this value be in colour-space 'x'?" - and the result would, if possible, be the same colour, but could be a different numerical value.
E.g. (simplistically): a pixel with value (255,255,255) is white in an RGB colorspace but black in a (hypothetical) CMY colorspace. Converting the white RGB pixel to the CMY colorspace would give the value (0,0,0). In other words, an image must have a colorspace; it only makes sense given a colorspace.
By contrast, an 8-bit mask represents absolute values between 0 and 255. There is no colorspace, and it makes no sense to think of a mask as being in a particular colorspace.
In that way images and masks are fundamentally different, even though we often think of masks as greyscale images.
An image mask in Core Graphics is a special kind of image. From the CGImageMaskCreate reference:
A Quartz bitmap image mask is used the same way an artist uses a silkscreen, or a sign painter uses a stencil. The bitmap represents a mask through which a color is transferred. The bitmap itself does not have a color. It gets its color from the fill color currently set in the graphics state.
When you draw into a context with a bitmap image mask, Quartz uses the mask to determine where and how the current fill color is applied to the image rectangle. Each sample value in the mask specifies how much of the current fill color is masked out at a specific location. Effectively, the sample value specifies the opacity of the mask. Larger values represent greater opacity and hence less color applied to the page.
See more here: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CGImage/Reference/reference.html
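For illustration (my own sketch, not from the answers above), here is roughly how a clip-compatible mask is built with CGImageMaskCreate from raw 8-bit grayscale samples and then used with CGContextClipToMask. The buffer, its dimensions, and the context ctx are all made up for the example:

// One byte per pixel; per the excerpt quoted above, image-mask samples act as
// inverse alpha, so 0 means "fully painted" and 255 means "not painted".
size_t width = 64, height = 64;
UInt8 *maskBytes = (UInt8 *)calloc(width * height, 1);   // fill with real mask data
CFDataRef maskData = CFDataCreate(NULL, maskBytes, width * height);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(maskData);
CGImageRef mask = CGImageMaskCreate(width, height,
                                    8,       // bits per component
                                    8,       // bits per pixel (grayscale samples, no colorspace)
                                    width,   // bytes per row
                                    provider, NULL, false);
// ctx is assumed to be a valid CGContextRef; only the unmasked region will be painted.
CGContextClipToMask(ctx, CGRectMake(0, 0, width, height), mask);
// ... drawing calls here ...
CGImageRelease(mask);
CGDataProviderRelease(provider);
CFRelease(maskData);
free(maskBytes);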

The color of the video is wrong when it is made from UIImages of PNG files

I am taking a UIImage from a PNG file and feeding it to the video writer:
avAdaptor appendPixelBuffer:pixelBuffer
When the resulting video comes out, it seems to be lacking a color; the yellow is missing, or something like that.
I took a look at the function that makes the pixel buffer out of the UIImage:
CVPixelBufferCreateWithBytes(NULL,
                             myWidth,
                             myHeight,
                             kCVPixelFormatType_32BGRA,
                             (void*)CFDataGetBytePtr(image),
                             CGImageGetBytesPerRow(cgImage),
                             NULL,
                             0,
                             NULL,
                             &pixelBuffer);
I also tried kCVPixelFormatType_32ARGB and others, but it didn't help.
any thoughts?
Please verify whether your PNG image has a transparency element. If your PNG image doesn't contain transparency, then it's 24 bits per pixel, not 32.
Also, have you tried kCVPixelFormatType_32RGBA?
Maybe the image sizes don't match.
Your input image should have the same width and height as the video output. If "myWidth" or "myHeight" differs from the size of the image (i.e. a different aspect ratio), bytes may be lost at the end of each row, which could lead to color shifting. kCVPixelFormatType_32BGRA seems to be the preferred (fastest) pixel format, so that part should be okay.
There is no yellow color in the RGB colorspace; yellow is made up of only the red and green components. It seems that blue is what's missing.
I assume you are using a CFDataRef (maybe NSData) for the image. If it is an NSData object, you can print the bytes to the debug console using:
NSLog(@"data: %@", image);
This will print a hex dump to the console. There you can see whether you have alpha and what byte order your PNG uses. If your image has alpha, every fourth byte should be the same number.
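One way to sidestep byte-order and row-padding problems entirely (my own suggestion, not from the answers above) is to let Core Graphics do the conversion: create the pixel buffer first, then draw the CGImage into a BGRA bitmap context that uses the buffer's memory. A rough sketch, reusing cgImage, myWidth and myHeight from the question:

CVPixelBufferRef pixelBuffer = NULL;
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:
                       [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGImageCompatibilityKey,
                       [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGBitmapContextCompatibilityKey,
                       nil];
CVPixelBufferCreate(kCFAllocatorDefault, myWidth, myHeight,
                    kCVPixelFormatType_32BGRA,
                    (CFDictionaryRef)attrs, &pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Little-endian 32-bit with premultiplied alpha first is the layout
// that matches kCVPixelFormatType_32BGRA in memory.
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                             myWidth, myHeight, 8,
                                             CVPixelBufferGetBytesPerRow(pixelBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, myWidth, myHeight), cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// pixelBuffer can now be handed to the pixel buffer adaptor.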

How can I adjust the RGB pixel data of a UIImage on the iPhone?

I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iphone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
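Building on that idea (also untested, and with a hypothetical helper name), a rough end-to-end sketch might look like the following. It takes the CGBitmapContextGetData route from the question rather than the CGDataProviderCreateWithCFData route, since the bitmap context already owns the bytes, and it queries the context's real bytes-per-row; assuming it equals width * 4 is a common cause of vertical-line artifacts like the ones described:

UIImage *ModifiedImageFromImage(UIImage *sourceImage) {
    CGImageRef cgImage = sourceImage.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw into our own RGBA context so the pixel layout is known.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    unsigned char *pixels = (unsigned char *)CGBitmapContextGetData(context);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);   // may include padding

    // Modify the pixel data; as an example, zero out the red channel.
    // In this layout each pixel is R, G, B, A (alpha premultiplied).
    for (size_t y = 0; y < height; y++) {
        unsigned char *row = pixels + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            row[x * 4 + 0] = 0;   // red; +1 green, +2 blue, +3 alpha
        }
    }

    // Wrap the modified bytes in a new CGImage and UIImage.
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newCGImage];

    CGImageRelease(newCGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return result;
}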