Accurately get a color from pixel on screen and convert its color space - swift

I need to get a color from a pixel on the screen and convert its color space. The problem I have is that the color values are not the same when comparing the values against the Digital Color Meter app.
// create a 1x1 image at the mouse position
if let image: CGImage = CGDisplayCreateImage(disID, rect: CGRect(x: x, y: y, width: 1, height: 1))
{
    let bitmap = NSBitmapImageRep(cgImage: image)

    // get the color from the bitmap and convert its colorspace to sRGB
    var color = bitmap.colorAt(x: 0, y: 0)!
    color = color.usingColorSpace(.sRGB)!

    // print the RGB values
    let red = color.redComponent, green = color.greenComponent, blue = color.blueComponent
    print("r:", Int(red * 255), " g:", Int(green * 255), " b:", Int(blue * 255))
}
My code (converted to sRGB): 255, 38, 0
Digital Color Meter (sRGB): 255, 4, 0
How do you get a color from a pixel on the screen with the correct color space values?
Update:
If you don't convert the color's color space to anything (or convert it to calibratedRGB), the values match Digital Color Meter's values when it's set to "Display native values".
My code (not converted): 255, 0, 1
Digital Color Meter (set to: Display native values): 255, 0, 1
So why, when the color's values match the native values in the DCM app, does converting the color to sRGB and comparing it to the DCM's values (in sRGB) not match? I also tried converting to other color spaces and they're always different from the DCM.

OK, I can tell you how to fix it/match DCM, you'll have to decide if this is correct/a bug/etc.
It seems the color returned by colorAt() has the same component values as the bitmap's pixel but a different color space - rather than the original device color space it is a generic RGB one. We can "correct" this by building a color in the bitmap's space:
let color = bitmap.colorAt(x: 0, y: 0)!

// need a pointer to a C-style array of CGFloat
let compCount = color.numberOfComponents
let comps = UnsafeMutablePointer<CGFloat>.allocate(capacity: compCount)
defer { comps.deallocate() }    // don't leak the buffer

// get the components
color.getComponents(comps)

// construct a new color in the device/bitmap space with the same components
let correctedColor = NSColor(colorSpace: bitmap.colorSpace,
                             components: comps,
                             count: compCount)

// convert to sRGB
let sRGBcolor = correctedColor.usingColorSpace(.sRGB)!
I think you'll find that the values of correctedColor track DCM's native values, and those of sRGBcolor track DCM's sRGB values.
Note that we are constructing a color in the device space, not converting a color to the device space.
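For completeness, here is the whole flow in one place. This is only a sketch (disID, x, and y are assumed from the question's code), but it shows where the correction slots in:

import AppKit

// Sketch: capture one screen pixel, rebuild its color in the bitmap's own
// color space, then convert to sRGB. disID/x/y are assumed from the question.
func sRGBColorAt(x: CGFloat, y: CGFloat, display disID: CGDirectDisplayID) -> NSColor? {
    guard let image = CGDisplayCreateImage(disID, rect: CGRect(x: x, y: y, width: 1, height: 1)) else {
        return nil
    }
    let bitmap = NSBitmapImageRep(cgImage: image)
    guard let raw = bitmap.colorAt(x: 0, y: 0) else { return nil }

    // re-wrap the raw component values in the bitmap's color space
    let compCount = raw.numberOfComponents
    let comps = UnsafeMutablePointer<CGFloat>.allocate(capacity: compCount)
    defer { comps.deallocate() }
    raw.getComponents(comps)
    let corrected = NSColor(colorSpace: bitmap.colorSpace, components: comps, count: compCount)

    // corrected should track DCM's native values; the return value should track its sRGB values
    return corrected.usingColorSpace(.sRGB)
}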
HTH

Related

ChromaKey filter not filtering desired color CIImage

I am trying to make some colors of the image transparent. Below are the images that I have.
Let's say that I want to remove the bold red color from the image and make it transparent. I am viewing my image as a PDF, so you can tell a color is transparent when the pink background on the side shows through. I am using the code from the Apple documentation, which I slightly modified in the following way:
// inside 3rd loop
let hue = getHue(red: red, green: green, blue: blue)
let wantedHue = getHue(red: myPixel.redComponent, green: myPixel.greenComponent, blue: myPixel.blueComponent)
let isHueInRange = hue >= wantedHue - 0.1 && hue <= wantedHue + 0.1
let alpha:CGFloat = isHueInRange ? 0 : 1
Here is the result I get. As you can see, there is some color left and the background is not fully transparent. I made these modifications because I need to be able to dynamically remove the background color of the image (my images won't have any humans or other complex objects in them; it will most likely be text and some rectangles. No color mixing, just solid colors.)
So what I do is find the first pixel of the image and get its color. Once I have the color I get its hue, and I manually set the allowed range to 0.2. I am assuming that the image won't contain any color similar to the one I have.
EDIT:
The original color is: rgb(200, 39, 39) - hsv(200, 80.5, 78.4)
The residue color is: rgb(246, 215, 210) - hsv(352, 14.6, 96.5)
The image I have:
The image I get after applying the filter:
To remove red colour, alpha should be zero when the hue is roughly above 0.9 or below 0.1, i.e. near either end of the hue scale, because red wraps around zero.
Use the following and it will work.
let hue = getHue(red: red, green: green, blue: blue)
var alpha: CGFloat = 1.0
if (hue < 0.1 && hue >= 0.0) || (hue > 0.9 && hue <= 1.0) {
    alpha = 0.0
}
I think the problem with your code is that it never considers the range 0.9 to 1.0; it only ever considers a range from roughly 0.0 to 0.1.
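If you want to keep the question's dynamic target color rather than hardcoding red, one option is to measure hue distance around the color wheel so the wrap at 0/1 is handled for any hue. A sketch (reusing the question's getHue helper and myPixel sample, both assumed from the original code):

// Sketch only: wrap-aware hue comparison, assuming getHue returns a hue in 0...1
// and myPixel is the sampled background color from the question.
let hue = getHue(red: red, green: green, blue: blue)
let wantedHue = getHue(red: myPixel.redComponent,
                       green: myPixel.greenComponent,
                       blue: myPixel.blueComponent)

// distance around the hue circle: 0.95 and 0.05 are 0.1 apart, not 0.9
let rawDistance = abs(hue - wantedHue)
let hueDistance = min(rawDistance, 1.0 - rawDistance)

let alpha: CGFloat = hueDistance <= 0.1 ? 0.0 : 1.0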

Can I change a color within an image in Swift [duplicate]

My question: if I have an image of a lion, I want to change the color of just the lion, not the background. I referred to this SO question for that, but it changes the color of the whole image, and the result doesn't look great. I need the color change to work like Photoshop's. Is this possible in Core Graphics, or do I have to use another library?
EDIT: I need the color change to be like the iQuikColor app.
This took quite a while to figure out, mainly because I wanted to get it up and running in Swift using Core Image and CIColorCube.
@Miguel's explanation is spot on about the way you need to replace a "Hue angle range" with another "Hue angle range". You can read his post above for details on what a hue angle range is.
I made a quick app that replaces the default blue of the truck below with whatever you choose on the Hue slider.
You can slide the slider to tell the app what color Hue you want to replace the blue with.
I'm hardcoding the Hue range to be 60 degrees, which typically seems to encompass most of a particular color but you can edit that if you need to.
Notice that it does not color the tires or the tail lights because that's outside of the 60 degree range of the truck's default blue hue, but it does handle shading appropriately.
First you need code to convert RGB to HSV (Hue value):
func RGBtoHSV(r: Float, g: Float, b: Float) -> (h: Float, s: Float, v: Float) {
    var h: CGFloat = 0
    var s: CGFloat = 0
    var v: CGFloat = 0
    let col = UIColor(red: CGFloat(r), green: CGFloat(g), blue: CGFloat(b), alpha: 1.0)
    col.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
    return (Float(h), Float(s), Float(v))
}
Then you need to convert HSV back to RGB. You want to use this when you discover a hue that is in your desired hue range (i.e. a color with the same blue hue as the default truck) to save off any adjustments you make.
func HSVtoRGB(h: Float, s: Float, v: Float) -> (r: Float, g: Float, b: Float) {
    var r: Float = 0
    var g: Float = 0
    var b: Float = 0
    let C = s * v
    let HS = h * 6.0
    let X = C * (1.0 - fabsf(fmodf(HS, 2.0) - 1.0))

    if (HS >= 0 && HS < 1) {
        r = C
        g = X
        b = 0
    } else if (HS >= 1 && HS < 2) {
        r = X
        g = C
        b = 0
    } else if (HS >= 2 && HS < 3) {
        r = 0
        g = C
        b = X
    } else if (HS >= 3 && HS < 4) {
        r = 0
        g = X
        b = C
    } else if (HS >= 4 && HS < 5) {
        r = X
        g = 0
        b = C
    } else if (HS >= 5 && HS < 6) {
        r = C
        g = 0
        b = X
    }

    let m = v - C
    r += m
    g += m
    b += m
    return (r, g, b)
}
Now you simply loop through a full RGBA color cube and "adjust" any colors in the "default blue" hue range with those from your newly desired hue. Then use Core Image and the CIColorCube filter to apply your adjusted color cube to the image.
func render() {
    let centerHueAngle: Float = 214.0/360.0  // default hue of the truck body blue
    let destCenterHueAngle: Float = slider.value
    let minHueAngle: Float = (214.0 - 60.0/2.0) / 360.0  // 60 degree range = +30/-30
    let maxHueAngle: Float = (214.0 + 60.0/2.0) / 360.0
    let hueAdjustment = centerHueAngle - destCenterHueAngle
    let size = 64
    var cubeData = [Float](repeating: 0, count: size * size * size * 4)
    var rgb: [Float] = [0, 0, 0]
    var hsv: (h: Float, s: Float, v: Float)
    var newRGB: (r: Float, g: Float, b: Float)
    var offset = 0
    for z in 0..<size {
        rgb[2] = Float(z) / Float(size) // blue value
        for y in 0..<size {
            rgb[1] = Float(y) / Float(size) // green value
            for x in 0..<size {
                rgb[0] = Float(x) / Float(size) // red value
                hsv = RGBtoHSV(r: rgb[0], g: rgb[1], b: rgb[2])
                if hsv.h < minHueAngle || hsv.h > maxHueAngle {
                    // outside the target hue range: leave the color untouched
                    newRGB = (r: rgb[0], g: rgb[1], b: rgb[2])
                } else {
                    // inside the range: shift the hue, keep saturation and value
                    hsv.h = destCenterHueAngle == 1 ? 0 : hsv.h - hueAdjustment // force red if slider angle is 360
                    newRGB = HSVtoRGB(h: hsv.h, s: hsv.s, v: hsv.v)
                }
                cubeData[offset] = newRGB.r
                cubeData[offset + 1] = newRGB.g
                cubeData[offset + 2] = newRGB.b
                cubeData[offset + 3] = 1.0
                offset += 4
            }
        }
    }
    let data = Data(bytes: cubeData, count: cubeData.count * MemoryLayout<Float>.size)
    let colorCube = CIFilter(name: "CIColorCube")!
    colorCube.setValue(size, forKey: "inputCubeDimension")
    colorCube.setValue(data, forKey: "inputCubeData")
    colorCube.setValue(ciImage, forKey: kCIInputImageKey)
    if let outImage = colorCube.outputImage {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(outImage, from: outImage.extent) {
            imageView.image = UIImage(cgImage: cgImage)
        }
    }
}
You can download the sample project here.
See answers below instead. Mine doesn't provide a complete solution.
Here is the sketch of a possible solution using OpenCV:
Convert the image from RGB to HSV using cvCvtColor (we only want to change the hue).
Isolate a color with cvThreshold specifying a certain tolerance (you want a range of colors, not one flat color).
Discard areas of color below a minimum size using a blob detection library like cvBlobsLib. This will get rid of dots of similar color elsewhere in the scene.
Mask the color with cvInRangeS and use the resulting mask to apply the new hue.
cvMerge the new image with the new hue with an image composed of the saturation and brightness channels that you saved in step one.
There are several OpenCV iOS ports on the net, e.g.: http://www.eosgarden.com/en/opensource/opencv-ios/overview/ I haven't tried this myself, but it seems like a good research direction.
I'm going to make the assumption that you know how to perform these basic operations, so these won't be included in my solution:
load an image
get the RGB value of a given pixel of the loaded image
set the RGB value of a given pixel
display a loaded image, and/or save it back to disk.
First of all, let's consider how you can describe the source and destination colors. Clearly you can't specify these as exact RGB values, since a photo will have slight variations in color. For example, the green pixels in the truck picture you posted are not all exactly the same shade of green. The RGB color model isn't very good at expressing basic color characteristics, so you will get much better results if you convert the pixels to HSL. Here are C functions to convert RGB to HSL and back.
The HSL color model describes three aspects of a color:
Hue - the main perceived color - i.e. red, green, orange, etc.
Saturation - how "full" the color is - i.e. from full color to no color at all
Lightness - how bright the color is
So for example, if you wanted to find all the green pixels in a picture, you would convert each pixel from RGB to HSL, then look for H values that correspond to green, with some tolerance for "near green" colors. Below is a Hue chart, from Wikipedia:
So in your case you will be looking at pixels that have a Hue of 120 degrees +/- some amount. The bigger the range the more colors will get selected. If you make your range too wide you will start seeing yellow and cyan pixels getting selected, so you'll have to find the right range, and you may even want to offer the user of your app controls to select this range.
In addition to selecting by Hue, you may want to allow ranges for Saturation and Lightness, so that you can optionally put further limits on the pixels that you want to select for colorization.
Finally, you may want to offer the user the ability to draw a "lasso selection" so that specific parts of the picture can be left out of the colorization. This is how you could tell the app that you want the body of the green truck, but not the green wheel.
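As a rough sketch of the hue-based selection step (illustrative only; it reuses the RGBtoHSV helper from the CIColorCube answer earlier, which works in HSV rather than HSL, but hue behaves the same way for selection):

// Hypothetical test: is this pixel "green enough" (hue near 120 degrees)?
// RGBtoHSV is the helper defined earlier; its hue is on a 0...1 scale.
func isGreenish(r: Float, g: Float, b: Float, toleranceDegrees: Float = 30.0) -> Bool {
    let hueDegrees = RGBtoHSV(r: r, g: g, b: b).h * 360.0
    return abs(hueDegrees - 120.0) <= toleranceDegrees
}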
Once you know which pixels you want to modify it's time to alter their color.
The easiest way to colorize the pixels is to just change the Hue, leaving the Saturation and Lightness from the original pixel. So for example, if you want to make green pixels magenta you will be adding 180 degrees to all the Hue values of the selected pixels (making sure you use modulo 360 math).
If you wanted to get more sophisticated, you can also apply changes to Saturation, and that will give you a wider range of tones you can go to. I think the Lightness is better left alone; you may be able to make small adjustments and the image will still look good, but if you go too far from the original you may start seeing hard edges where the processed pixels border the background pixels.
Once you have the colorized HSL pixel you just convert it back to RGB and write it back to the image.
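And a matching sketch of the recoloring step, again using the HSV helpers above as stand-ins for the linked HSL functions. The hue is on a 0...1 scale here, so adding 180 degrees means adding 0.5, and the modulo-360 wrap becomes a wrap at 1.0:

// Hypothetical: rotate a selected pixel's hue by `degrees`, keeping S and V.
func rotatedHue(r: Float, g: Float, b: Float, degrees: Float) -> (r: Float, g: Float, b: Float) {
    var hsv = RGBtoHSV(r: r, g: g, b: b)
    hsv.h = fmodf(hsv.h + degrees / 360.0, 1.0)   // wrap around the hue circle
    return HSVtoRGB(h: hsv.h, s: hsv.s, v: hsv.v)
}

// e.g. push a green pixel toward magenta
let magenta = rotatedHue(r: 0.2, g: 0.8, b: 0.2, degrees: 180.0)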
I hope this helps. A final comment I should make is that Hue values in code are typically recorded in the 0-255 range, but many applications show them as a color wheel with a range of 0 to 360 degrees. Keep that in mind!
Can I suggest you look into using OpenCV? It's an open source image manipulation library, and it's got an iOS port too. There are plenty of blog posts about how to use it and set it up.
It has a whole heap of functions that will help you do a good job of what you're attempting. You could do it just using CoreGraphics, but the end result isn't going to look nearly as good as OpenCV would.
It was originally developed at Intel, and as you might expect it does a pretty good job at things like edge detection and object tracking. I remember reading a blog about how to separate a certain color from a picture with OpenCV - the examples showed a pretty good result. See here for an example. From there I can't imagine it would be a massive job to actually change the separated color to something else.
I don't know of a CoreGraphics operation for this, and I don't see a suitable CoreImage filter for this. If that's correct, then here's a push in the right direction:
Assuming you have a CGImage (or a UIImage's CGImage):
Begin by creating a new CGBitmapContext
Draw the source image to the bitmap context
Get a handle to the bitmap's pixel data
Learn how the buffer is structured so you could properly populate a 2D array of pixel values which have the form:
typedef struct t_pixel {
    uint8_t r, g, b, a;
} t_pixel;
Then create the color to locate:
const t_pixel ColorToLocate = { 0,0,0,255 }; // << black, opaque
And its substitution value:
const t_pixel SubstitutionColor = { 255,255,255,255 }; // << white, opaque
Iterate over the bitmap context's pixel buffer, creating t_pixels.
When you find a pixel which matches ColorToLocate, replace the source values with the values in SubstitutionColor.
Create a new CGImage from the CGBitmapContext.
That's the easy part! All that does is take a CGImage, replace exact color matches, and produce a new CGImage.
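For illustration, here is roughly what those steps look like in Swift (the function name and the RGBA8 buffer layout are my assumptions, not part of the original answer):

import CoreGraphics

// Rough sketch of the steps above: draw a CGImage into an RGBA8 bitmap
// context, replace exact matches of one color with another, and make a
// new CGImage. Names and the pixel layout are illustrative assumptions.
func replacingExactColor(in source: CGImage,
                         find: (r: UInt8, g: UInt8, b: UInt8, a: UInt8),
                         with replacement: (r: UInt8, g: UInt8, b: UInt8, a: UInt8)) -> CGImage? {
    let width = source.width, height = source.height
    guard let ctx = CGContext(data: nil,
                              width: width, height: height,
                              bitsPerComponent: 8,
                              bytesPerRow: width * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    ctx.draw(source, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buffer = ctx.data?.assumingMemoryBound(to: UInt8.self) else { return nil }

    for i in stride(from: 0, to: width * height * 4, by: 4) {
        if buffer[i] == find.r, buffer[i + 1] == find.g,
           buffer[i + 2] == find.b, buffer[i + 3] == find.a {
            buffer[i] = replacement.r
            buffer[i + 1] = replacement.g
            buffer[i + 2] = replacement.b
            buffer[i + 3] = replacement.a
        }
    }
    return ctx.makeImage()
}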
What you want is more sophisticated. For this task, you will want a good edge detection algorithm.
I've not used the app you have linked. If it's limited to a few colors, then they may simply be swapping channel values, paired with edge detection (keep in mind that buffers may also be represented in multiple color models - not just RGBA).
If (in the app you linked) the user can choose arbitrary colors, values, and edge thresholds, then you will have to use real blending and edge detection. If you need to see how this is accomplished, you may want to check out a package such as GIMP (an open-source image editor); they have the algorithms to detect edges and select by color.

Fill Color of PIL Cropping/Thumbnailing

I am taking an image file and thumbnailing and cropping it with the following PIL code:
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image_size = image.size
thumb = image.crop( (0, 0, size[0], size[1]) )
offset_x = max( (size[0] - image_size[0]) / 2, 0 )
offset_y = max( (size[1] - image_size[1]) / 2, 0 )
thumb = ImageChops.offset(thumb, offset_x, offset_y)
thumb.convert('RGBA').save(filename, 'JPEG')
This works great, except when the image isn't the same aspect ratio, the difference is filled in with a black color (or maybe an alpha channel?). I'm ok with the filling, I'd just like to be able to select the fill color -- or better yet an alpha channel.
Output example:
How can I specify the fill color?
I altered the code just a bit to allow for you to specify your own background color, including transparency.
The code loads the image specified into a PIL.Image object, generates the thumbnail from the given size, and then pastes the image into another, full sized surface.
(Note that the tuple used for color can also be any RGBA value, I have just used white with an alpha/transparency of 0.)
# assuming 'from PIL import Image, ImageChops' is preceding
thumbnail = Image.open(filename)
# generate the thumbnail at the given size
thumbnail.thumbnail(size, Image.ANTIALIAS)
offset_x = max((size[0] - thumbnail.size[0]) // 2, 0)
offset_y = max((size[1] - thumbnail.size[1]) // 2, 0)
offset_tuple = (offset_x, offset_y)  # pack x and y into a tuple
# create the image object to be the final product
final_thumb = Image.new(mode='RGBA', size=size, color=(255, 255, 255, 0))
# paste the thumbnail into the full sized image
final_thumb.paste(thumbnail, offset_tuple)
# save (the PNG format will retain the alpha band, unlike JPEG)
final_thumb.save(filename, 'PNG')
It's a bit easier to paste your resized thumbnail image onto a new image that is the colour (and alpha value) you want.
You can create an image and specify its colour as an RGBA tuple like this:
Image.new('RGBA', size, (255,0,0,255))
Here there is no transparency, as the alpha band is set to 255. But the background will be red. Pasting onto this image, we can create thumbnails with any colour, like this:
If we set the alpha band to 0, we can paste onto a transparent image, and get this:
Example code:
from PIL import Image

image = Image.open('1_tree_small.jpg')
size = (50, 50)
image.thumbnail(size, Image.ANTIALIAS)
# new = Image.new('RGBA', size, (255, 0, 0, 255))  # without alpha, red
new = Image.new('RGBA', size, (255, 255, 255, 0))  # with alpha
new.paste(image, ((size[0] - image.size[0]) // 2, (size[1] - image.size[1]) // 2))
new.save('saved4.png')


How to get the real RGBA or ARGB color values without premultiplied alpha?

I'm creating a bitmap context using CGBitmapContextCreate with the kCGImageAlphaPremultipliedFirst option.
I made a 5 x 5 test image with some major colors (pure red, green, blue, white, black) and some mixed colors (e.g. purple), combined with some alpha variations. Whenever the alpha component is not 255, the color value is wrong.
I found that I could re-calculate the color when I do something like:
almostCorrectRed = wrongRed * (255 / alphaValue);
almostCorrectGreen = wrongGreen * (255 / alphaValue);
almostCorrectBlue = wrongBlue * (255 / alphaValue);
But the problem is, that my calculations are sometimes off by 3 or even more. So for example I get a value of 242 instead of 245 for green, and I am 100% sure that it must be exactly 245. Alpha is 128.
Then, for the exact same color just with different alpha opacity in the PNG bitmap, I get alpha = 255 and green = 245 as it should be.
If alpha is 0, then red, green and blue are also 0. Here all data is lost and I can't figure out the color of the pixel.
How can I avoid or undo this alpha premultiplication altogether so that I can modify pixels in my image based on the true RGB pixel values as they were when the image was created in Photoshop? How can I recover the original values for R, G, B, and A?
Background info (probably not necessary for this question):
What I'm doing is this: I take a UIImage, draw it to a bitmap context in order to perform some simple image manipulation algorithms on it, shifting the color of each pixel depending on what color it was before. Nothing really special. But my code needs the real colors. When a pixel is transparent (meaning it has alpha less than 255) my algorithm shouldn't care about this; it should just modify R, G, B as needed while alpha remains whatever it is. Sometimes it will shift alpha up or down too. But I see them as two separate things: alpha controls transparency, while R, G, B control the color.
This is a fundamental problem with premultiplication in an integral type:
245 * (128/255) = 122.98
122.98 truncated to an integer = 122
122 * (255/128) = 243.046875
I'm not sure why you're getting 242 instead of 243, but this problem remains either way, and it gets worse the lower the alpha goes.
The solution is to use floating-point components instead. The Quartz 2D Programming Guide gives the full details of the format you'll need to use.
Important point: You'd need to use floating-point from the creation of the original image (and I don't think it's even possible to save such an image as PNG; you might have to use TIFF). An image that was already premultiplied in an integral type has already lost that precision; there is no getting it back.
The zero-alpha case is the extreme version of this, to such an extent that even floating-point cannot help you. Anything times zero (alpha) is zero, and there is no recovering the original unpremultiplied value from that point.
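To see this concretely in code, a tiny sketch (illustrative only) that reproduces the 245 -> 122 -> 243 round trip with 8-bit integer math:

// Illustrative only: 8-bit premultiply and the naive unpremultiply,
// reproducing the arithmetic above.
func premultiply8(_ component: Int, alpha: Int) -> Int {
    return component * alpha / 255            // integer division truncates
}

func unpremultiply8(_ component: Int, alpha: Int) -> Int {
    guard alpha > 0 else { return 0 }         // alpha 0 destroys the value entirely
    return min(component * 255 / alpha, 255)
}

let stored = premultiply8(245, alpha: 128)            // 122
let recovered = unpremultiply8(stored, alpha: 128)    // 243, not 245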
Premultiplying alpha with an integer color type is an information-lossy operation. Data is destroyed during the quantization process (rounding to 8 bits).
Since some data is destroyed (by rounding), there is no way to recover the exact original pixel color (except for some lucky values). You have to save the colors of your Photoshop image before you draw it into a bitmap context, and use that original color data, not the multiplied color data from the bitmap.
I ran into this same problem when trying to read image data, render it to another image with CoreGraphics, and then save the result as non-premultiplied data. The solution I found that worked for me was to save a table that contains the exact mapping that CoreGraphics uses to map non-premultiplied data to premultiplied data. Then, estimate what the original unpremultiplied value would be with a multiply and floor() call. Then, if the estimate and the result from the table lookup do not match, just check the value below the estimate and the one above the estimate in the table for the exact match.
// Execute premultiply logic on RGBA components split into components.
// For example, a pixel RGB (255, 0, 0) with A = 128
// would return (128, 0, 0) with A = 128
static inline
uint32_t premultiply_bgra_inline(uint32_t red, uint32_t green, uint32_t blue, uint32_t alpha)
{
    const uint8_t* const restrict alphaTable = &extern_alphaTablesPtr[alpha * PREMULT_TABLEMAX];
    uint32_t result = (alpha << 24) | (alphaTable[red] << 16) | (alphaTable[green] << 8) | alphaTable[blue];
    return result;
}
static inline
int unpremultiply(const uint32_t premultRGBComponent, const float alphaMult, const uint32_t alpha)
{
    float multVal = premultRGBComponent * alphaMult;
    float floorVal = floor(multVal);
    uint32_t unpremultRGBComponent = (uint32_t)floorVal;
    assert(unpremultRGBComponent >= 0);
    if (unpremultRGBComponent > 255) {
        unpremultRGBComponent = 255;
    }

    // Pass the unpremultiplied estimated value through the
    // premultiply table again to verify that the result
    // maps back to the same rgb component value that was
    // passed in. It is possible that the result of the
    // multiplication is smaller or larger than the
    // original value, so this will either add or remove
    // one int value to the result rgb component to account
    // for the error possibility.
    uint32_t premultPixel = premultiply_bgra_inline(unpremultRGBComponent, 0, 0, alpha);
    uint32_t premultActualRGBComponent = (premultPixel >> 16) & 0xFF;

    if (premultRGBComponent != premultActualRGBComponent) {
        if ((premultActualRGBComponent < premultRGBComponent) && (unpremultRGBComponent < 255)) {
            unpremultRGBComponent += 1;
        } else if ((premultActualRGBComponent > premultRGBComponent) && (unpremultRGBComponent > 0)) {
            unpremultRGBComponent -= 1;
        } else {
            // This should never happen
            assert(0);
        }
    }

    return unpremultRGBComponent;
}
You can find the complete static table of values at this github link.
Note that this approach will not recover information "lost" when the original unpremultiplied pixel was premultiplied. But it does return the smallest unpremultiplied pixel value that will become the premultiplied pixel once run through the premultiply logic again. This is useful when the graphics subsystem only accepts premultiplied pixels (like CoreGraphics on OS X). If the graphics subsystem only accepts premultiplied pixels, then you are better off storing only the premultiplied pixels, since less space is consumed as compared to the unpremultiplied pixels.