Possible Duplicate:
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
Get Pixel color of UIImage
I have a scenario in which the user can select a color from an image, for example the one below:
Depending on where the user taps on the image, I need to extract the RGB and alpha values at that exact point, i.e. that pixel.
How do I accomplish this?
You need to create a bitmap context (CGContextRef) from the image and convert the CGPoint that was tapped to an array offset location to retrieve the color information from the pixel data.
See What Color is My Pixel? for a tutorial and this similar Stack Overflow question.
Methods:
- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef) inImage
Returns a CGContextRef representing the image passed as an argument
using the correct color space. This method is called by:
- (UIColor*) getPixelColorAtLocation:(CGPoint) point
This is the function you would call to get the UIColor at the passed
CGPoint.
Note that these methods live in a subclass of UIImageView to make the process more straightforward.
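For reference, here is a condensed, untested sketch of the idea behind those two methods, folded into a single method on a UIImageView subclass. The method name and the RGBA layout are illustrative, not the tutorial's exact code: draw the image into a bitmap context whose pixel format you control, then turn the tapped point into a byte offset.

- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    CGImageRef cgImage = self.image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Render the image into an RGBA buffer we own, so the byte layout is known.
    size_t bytesPerRow = width * 4;
    unsigned char *data = calloc(height, bytesPerRow);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, width, height, 8, bytesPerRow,
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // Row 0 of the buffer is the top row of the image, so a point measured from
    // the top-left corner maps straight to an offset into the buffer.
    size_t x = MIN((size_t)point.x, width - 1);
    size_t y = MIN((size_t)point.y, height - 1);
    size_t offset = (y * width + x) * 4;

    // Note: the color components are premultiplied by alpha in this pixel format.
    CGFloat red   = data[offset]     / 255.0;
    CGFloat green = data[offset + 1] / 255.0;
    CGFloat blue  = data[offset + 2] / 255.0;
    CGFloat alpha = data[offset + 3] / 255.0;

    CGContextRelease(context);
    free(data);
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

If the image view scales or letterboxes its image, the tapped view coordinate also has to be converted into image pixel coordinates before indexing.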
Related
All the solutions I've found for doing this (like this one: Change color of certain pixels in a UIImage) suggest creating a new UIImage, but I want to modify a pixel in the UIImage directly, without creating a new one. Is there a way to do this?
It seems that CGImage is not mutable, but is there a way to create an image from a pixel data buffer so that modifying the buffer directly modifies the image?
I added two UIViews to ViewController.view and applied two square images, one to each view.layer.mask, so that it looks like a square sliced into two pieces, and then added the image view over them with addSubview.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any way to capture the view so that it looks like picture no. 1 after the mask is applied?
Below is the note from Apple's documentation regarding renderInContext:
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've written an image capture function before, which effectively takes a screenshot of a UIView. I don't use it because it does not work well for my needs, but maybe you can use it:
UIImage *img;
// 0.0 as the scale uses the device's screen scale; note that self.opaque is the
// caller's opacity flag, you may want UIViewYouWantToCapture.opaque instead.
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, self.opaque, 0.0);
// Render the view's layer tree into the current image context.
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When we apply a mask to an image, the resulting image has an alpha of 1 in the masked region and 0 everywhere else.
When we capture an image of the view, the complete image is still there (we only see half of it because the other half has alpha = 0), so we end up with a screenshot of the complete view.
I'm creating an app where the user already has an image (containing different objects) without colors. I have to detect which object was touched and then fill that object with the appropriate color. How should I do this? Can anyone help me?
I would say that this is non-trivial. I can only give hints, since I have not written such an app yet.
First, you need to convert the image into a CGImageRef, for example by doing [uiimage_object CGImage].
Next, you need to convert the CGImageRef into an array of pixel colors. You can follow the tutorial at http://www.fiveminutes.eu/iphone-image-processing/ for sample code, but for your app you will need to index the array in two dimensions based on the image width and height.
Then, use the coordinates of the user's touch to access the exact pixel color value from the array. Next, read off the color values of the surrounding pixels and determine whether each is similar to the touched pixel (you might need to read some Wikipedia articles on color comparison). If the color is similar, change it to the one you want, and recurse until the surrounding color is different (i.e. you hit the boundary). This is essentially a flood fill; a rough sketch follows below.
When you are finished modifying the pixel color array, you need to convert the array back into a CGImageRef using the CGImageCreate function. Then you convert back to a UIImage using [UIImage imageWithCGImage:imageref].
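Here is a rough, untested sketch of just the fill step, to make the idea concrete. The function name is made up; it works on a 32-bit pixel buffer such as the one you get from the tutorial's bitmap context, uses an explicit stack rather than actual recursion (to avoid blowing the call stack on large regions), and compares colors by exact equality, which a real app would replace with a tolerance-based comparison.

#include <stdint.h>
#include <stdlib.h>

// pixels: width*height 32-bit values, row-major, as read from a bitmap context.
// (startX, startY): the touched pixel. fillColor: the replacement color.
static void FloodFill(uint32_t *pixels, size_t width, size_t height,
                      size_t startX, size_t startY, uint32_t fillColor) {
    uint32_t target = pixels[startY * width + startX];
    if (target == fillColor) return;                 // nothing to do

    // Each pixel is pushed at most once, so width*height slots suffice.
    size_t *stack = malloc(width * height * sizeof(size_t));
    size_t top = 0;
    pixels[startY * width + startX] = fillColor;     // mark as we push
    stack[top++] = startY * width + startX;

    while (top > 0) {
        size_t idx = stack[--top];
        size_t x = idx % width;
        size_t y = idx / width;

        // Visit the 4 neighbours; fill and push any that still match the target.
        size_t neighbours[4];
        size_t count = 0;
        if (x + 1 < width)  neighbours[count++] = idx + 1;
        if (x > 0)          neighbours[count++] = idx - 1;
        if (y + 1 < height) neighbours[count++] = idx + width;
        if (y > 0)          neighbours[count++] = idx - width;

        for (size_t i = 0; i < count; i++) {
            if (pixels[neighbours[i]] == target) {
                pixels[neighbours[i]] = fillColor;
                stack[top++] = neighbours[i];
            }
        }
    }
    free(stack);
}

The tolerance-based color comparison and the conversion back to a CGImage remain as described in the steps above.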
Now you are on your own to implement these steps in code. It would be unreasonable to expect me to write all of that for you, wouldn't it?
I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iphone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
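To make that concrete, here is an untested sketch of that exact round trip. The function name is made up, ARC is assumed, and it assumes the source image's pixel data is in a directly editable 32-bit format; a more robust version would first redraw the image into a bitmap context with a known format.

#import <UIKit/UIKit.h>

UIImage *ImageByModifyingPixels(UIImage *source) {
    CGImageRef cgImage = source.CGImage;

    // 1. Copy the raw pixel bytes out of the existing image.
    CFDataRef originalData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    NSMutableData *pixelData = [(__bridge NSData *)originalData mutableCopy];
    CFRelease(originalData);

    // 2. Modify the bytes in place (as a demo, clear the first row of pixels).
    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
    memset(pixelData.mutableBytes, 0, bytesPerRow);

    // 3. Wrap the modified bytes in a new data provider and rebuild a CGImage
    //    with the same geometry and format as the original.
    CGDataProviderRef provider =
        CGDataProviderCreateWithCFData((__bridge CFDataRef)pixelData);
    CGImageRef newCGImage = CGImageCreate(CGImageGetWidth(cgImage),
                                          CGImageGetHeight(cgImage),
                                          CGImageGetBitsPerComponent(cgImage),
                                          CGImageGetBitsPerPixel(cgImage),
                                          bytesPerRow,
                                          CGImageGetColorSpace(cgImage),
                                          CGImageGetBitmapInfo(cgImage),
                                          provider,
                                          NULL,     // decode array
                                          false,    // shouldInterpolate
                                          kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);

    // 4. Wrap the new CGImage back up in a UIImage.
    UIImage *result = [UIImage imageWithCGImage:newCGImage
                                          scale:source.scale
                                    orientation:source.imageOrientation];
    CGImageRelease(newCGImage);
    return result;
}

Whether this round trip preserves the image exactly depends on the original pixel format, which is why redrawing into a bitmap context you configured yourself is often the safer first step.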
Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
What happens, though, is that the image displays fine, but the version that gets written to disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked, otherwise displaying the images in a table view gets awfully slow if I have to mask them every time.
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel; it has nothing to do with the code that writes to disk, which is rather obvious, but I checked anyway.
How about drawing into a new image and then saving that?
// Redraw the masked image into a plain bitmap context; the result should have
// an ordinary alpha channel, which UIImagePNGRepresentation preserves as displayed.
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)
See the description in CGImageCreateWithMask in CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
It seems that, for some reason, the image mask is treated as a mask image when the image is saved. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
to save correctly with UIImagePNGRepresentation, there are several choices:
Use an inverse version of the image mask.
Use a "mask image" instead of an "image mask" (both of these are sketched below).
Render to a bitmap context and then save that, as epatel mentioned.
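For the first two options, the difference comes down to how the mask CGImage is created. Here is an untested sketch; the buffer contents and the sourceCGImage variable are placeholders, and cleanup is omitted. CGImageMaskCreate produces an "image mask", whose samples act as inverse alpha, while a plain grayscale CGImage used as the mask acts as a straight alpha mask, matching the documentation quoted above.

size_t w = 64, h = 64, bytesPerRow = w;          // 8-bit, single channel
uint8_t *maskBytes = calloc(h, bytesPerRow);     // fill with 0..255 coverage values
// NULL release callback: maskBytes must stay alive while the provider is in use.
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, maskBytes, h * bytesPerRow, NULL);

// Option 1: an "image mask" -- samples are treated as inverse alpha, so invert
// the buffer values if the saved PNG comes out with the alpha flipped.
CGImageRef imageMask = CGImageMaskCreate(w, h, 8, 8, bytesPerRow,
                                         provider, NULL, false);

// Option 2: a grayscale "mask image" (gray color space, no alpha) -- its
// samples are treated as a straight alpha value instead.
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGImageRef maskImage = CGImageCreate(w, h, 8, 8, bytesPerRow, gray, kCGImageAlphaNone,
                                     provider, NULL, false, kCGRenderingIntentDefault);
CGColorSpaceRelease(gray);

// sourceCGImage: the CGImage you are masking (placeholder).
CGImageRef masked = CGImageCreateWithMask(sourceCGImage, imageMask /* or maskImage */);

With the quoted semantics, white (S = 1) in an image mask means "do not paint", while white in a mask image means "fully paint", which is exactly the kind of inversion described above.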