Flutter: Convert color temperature to RGB - flutter

I am looking for a way to convert a color temperature to its approximate RGB color in Flutter.
If you have a look at this website, a color temperature of 1800 K equals some kind of orange.
Is there a convenient way to do this in Flutter? The website I found seems to have hardcoded the colors. Providing me with a formula would also be appreciated.

This blog has a formula available in several languages. Below is my port to Dart/Flutter.
import 'dart:math';
import 'package:flutter/material.dart';

Color colorTempToRGB(double colorTemp) {
  // colorTemp is in Kelvin; the approximation works on Kelvin / 100.
  final temp = colorTemp / 100;
  final red = temp <= 66
      ? 255
      : (pow(temp - 60, -0.1332047592) * 329.698727446).round().clamp(0, 255).toInt();
  final green = temp <= 66
      ? (99.4708025861 * log(temp) - 161.1195681661).round().clamp(0, 255).toInt()
      : (pow(temp - 60, -0.0755148492) * 288.1221695283).round().clamp(0, 255).toInt();
  final blue = temp >= 66
      ? 255
      : temp <= 19
          ? 0
          : (138.5177312231 * log(temp - 10) - 305.0447927307)
              .round()
              .clamp(0, 255)
              .toInt();
  return Color.fromARGB(255, red, green, blue);
}
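As a quick sanity check against the 1800 K example from the question (my own rough numbers, not from the blog):
void main() {
  // 1800 K -> temp = 18, so blue is forced to 0 and red saturates at 255;
  // green comes out around 126, i.e. roughly Color(0xffff7e00), an orange
  // similar to the one shown on the linked website.
  final warmWhite = colorTempToRGB(1800);
  print(warmWhite);
}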

Related

How to Convert Hexcolor to RGB color using flutter dart

I want to convert hex color values to RGB color values. How do I do that? Sorry if this question is a bit short, but I haven't seen an answer to it anywhere.
For example, HEX color value = "0xff4f6872" converts to RGB color value = (R:79 G:104 B:114).
I look forward to these answers, thank you.
The built-in material Color class has properties that hold the value of each color channel. You can use them to find the RGB (red, green, blue) or even the RGBA (red, green, blue, alpha) values of a color.
You first need to create a color object for your hex value by passing it to the Color() constructor, prefixed with '0xff'.
Color myColor = Color(0xffe64a6f);
You can then access any of the properties you want and use/display them however you want:
Column(
  children: [
    Text('red value: ${myColor.red}'),
    Text('green value: ${myColor.green}'),
    Text('blue value: ${myColor.blue}'),
    Text('alpha value: ${myColor.alpha}'),
  ],
)
0xff4f6872 is indeed (R:79 G:104 B:114).
In Flutter the Color value is written as 0xAARRGGBB: the first two hex digits are the alpha channel, followed by two digits each for red, green and blue.
Here:
ff = 255 (alpha, fully opaque)
4f = 79 (R)
68 = 104 (G)
72 = 114 (B)
The best way to convert a hexadecimal color string to RGB color values in Flutter or Dart:
String hexColor = "0xff4f6872";
int intColor = int.parse(hexColor);
int red = (intColor >> 16) & 0xff;
int green = (intColor >> 8) & 0xff;
int blue = (intColor >> 0) & 0xff;
So, you can get:
red = 79, green = 104, blue = 114.
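The same bit shifting can also be wrapped in a small helper; rgbFromHex is a hypothetical name, not something from the answer above:
/// Extracts [r, g, b] from a 0xAARRGGBB hex string such as "0xff4f6872".
List<int> rgbFromHex(String hexColor) {
  final value = int.parse(hexColor);
  final red = (value >> 16) & 0xff;  // 0x4f = 79
  final green = (value >> 8) & 0xff; // 0x68 = 104
  final blue = value & 0xff;         // 0x72 = 114
  return [red, green, blue];
}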

How do I get the RGB values from an image in Flutter?

I'm creating a mobile client for my object-detection server. I already have a perfectly working Python client which takes an image as input, sends it to the server in an HTTP request and receives the prediction as a JSON response. I'm trying to achieve the same in Dart, which I'm fairly new to.
The Python code I have converts the input JPG image into a numpy array of RGB values in the following format (using a 5x4 image as an example):
[
[[182, 171, 153], [203, 188, 169], [242, 214, 200], [255, 235, 219], [155, 111, 98]],
[[181, 171, 146], [204, 190, 164], [255, 237, 214], [177, 142, 120], [84, 42, 20]],
[[176, 168, 129], [218, 206, 166], [180, 156, 118], [91, 59, 21], [103, 64, 25]],
[[186, 180, 132], [166, 156, 107], [91, 68, 24], [94, 63, 17], [122, 84, 39]]
]
In my Dart code, I have attempted to convert the image into a list of 8-bit unsigned integers using:
Uint8List inputImg = (await rootBundle.load("assets/test.jpg")).buffer.asUint8List();
It gives me a long array of over 800 ints for the same 5x4 image.
On testing it with two single-pixel images (one black and one white), a large section of the Uint8List seems to be repeating for each. I isolated the differing sections of the array and they do not correspond with the RGB values expected for the colors (I expected [0 0 0] and [255 255 255], but got something like 255, 0, 63, 250, 0, 255, 0 and 254, 254, 40, 3 for the two respectively)
I just need the RGB values in the image. Would appreciate someone pointing me in the right direction!
Images are normally compressed when they're stored. Based on the file extension, I'm guessing you're using JPEG encoding. This means the data stored in the assets/test.jpg file is not an array of colors. That would be an inefficient use of data storage if everything were done that way. To get that array of colors, you need to decode the image. This can be done with the image package.
To do this, first add the package as a dependency by adding the following to the pubspec.yaml:
dependencies:
  image: ^3.0.4
You should follow the same method of obtaining the encoded image data. But you then need to decode it.
final Uint8List inputImg = (await rootBundle.load("assets/test.jpg")).buffer.asUint8List();
final decoder = JpegDecoder();
final decodedImg = decoder.decodeImage(inputImg)!; // decodeImage returns a nullable Image
final decodedBytes = decodedImg.getBytes(format: Format.rgb);
decodedBytes contains a single flat list of all your pixel values in RGB format. To get it into your desired format, just loop over the values and add them to a new list.
List<List<List<int>>> imgArr = [];
for (int y = 0; y < decodedImg.height; y++) {
  imgArr.add([]);
  for (int x = 0; x < decodedImg.width; x++) {
    int red = decodedBytes[y * decodedImg.width * 3 + x * 3];
    int green = decodedBytes[y * decodedImg.width * 3 + x * 3 + 1];
    int blue = decodedBytes[y * decodedImg.width * 3 + x * 3 + 2];
    imgArr[y].add([red, green, blue]);
  }
}
To do this, first add the package as a dependency by adding the following to the pubspec.yaml:
image: ^3.2.0
Import the image library in your file:
import 'package:image/image.dart' as im;
Get the brightness of the image:
int getBrightness(File file) {
  im.Image? image = im.decodeImage(file.readAsBytesSync());
  final data = image?.getBytes();
  var color = 0;
  for (var x = 0; x < data!.length; x += 4) {
    final int r = data[x];
    final int g = data[x + 1];
    final int b = data[x + 2];
    final int avg = ((r + g + b) / 3).floor();
    color += avg;
  }
  return (color / (image!.width * image.height)).floor();
}
Check whether the selected image is dark or light:
bool isImageDark(int brightness) {
  return brightness <= 127;
}
Use the above methods like this:
int brightness = getBrightness(file);
bool isDark = isImageDark(brightness);
Now you can change your icon color based on your background image, as sketched below.
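A minimal sketch (overlayIcon is a hypothetical helper; it assumes the isImageDark method above is in scope):
import 'package:flutter/material.dart';

/// Illustrative only: pick a contrasting icon color based on the brightness
/// computed above.
Widget overlayIcon(int brightness) {
  final iconColor = isImageDark(brightness) ? Colors.white : Colors.black;
  return Icon(Icons.favorite, color: iconColor);
}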

How to calculate brightness from a list of unsigned 8-bit integers which represents an image in Dart?

I wanted to calculate the brightness of a Uint8List image. The images I use are picked from my phone (using the image_picker plugin in Flutter). I tried a for loop on every value of this list and did this:
int r = 0, b = 0, g = 0, count = 0;
for (int value in imageBytesList) {
  /// The red channel of this color in an 8 bit value.
  int red = (0x00ff0000 & value) >> 16;
  /// The blue channel of this color in an 8 bit value.
  int blue = (0x0000ff00 & value) >> 8;
  /// The green channel of this color in an 8 bit value.
  int green = (0x000000ff & value) >> 0;
  r += red;
  b += blue;
  g += green;
  count++;
}
double result = (r + b + g) / (count * 3);
double result = (r + b + g) / (count * 3);
I know that the result should represent a brightness level between 0 and 255, where 0 = totally black and 255 = totally bright, but what I get is really weird values like 0.0016887266175341332. What calculation mistakes am I making? (I know my method is gravely wrong, but I wasn't able to find a better way.)
The Flutter Image widget does convert this Uint8List from memory into an image with the correct height and width using the Image.memory() constructor. What is the logic behind it?
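For what it's worth, the decode-first approach from the answers above applies here as well; a minimal sketch, assuming the Uint8List holds encoded JPEG/PNG data and that package:image is available:
import 'dart:typed_data';
import 'package:image/image.dart' as im;

/// Sketch: average brightness (0-255) of an encoded image.
/// The raw Uint8List from the picker holds compressed JPEG/PNG data,
/// so it has to be decoded before the RGBA values can be read.
int brightnessOf(Uint8List encodedBytes) {
  final image = im.decodeImage(encodedBytes)!;
  final rgba = image.getBytes(); // RGBA, 4 bytes per pixel
  var sum = 0;
  for (var i = 0; i < rgba.length; i += 4) {
    sum += (rgba[i] + rgba[i + 1] + rgba[i + 2]) ~/ 3;
  }
  return sum ~/ (image.width * image.height);
}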

Can I change a color within an image in Swift [duplicate]

My question is: if I have a lion image, I want to change the color of the lion alone, not the background color. For that I referred to this SO question, but it turns the color of the whole image. Moreover, the image does not look great. I need the color change to be like Photoshop. Is it possible to do this in Core Graphics, or do I have to use another library?
EDIT: I need the color change to be like the iQuikColor app.
This took quite a while to figure out, mainly because I wanted to get it up and running in Swift using Core Image and CIColorCube.
@Miguel's explanation is spot on about the way you need to replace a "Hue angle range" with another "Hue angle range". You can read his post for details on what a hue angle range is.
I made a quick app that replaces the default blue of a truck image with whatever you choose on a Hue slider.
You can slide the slider to tell the app what color Hue you want to replace the blue with.
I'm hardcoding the Hue range to be 60 degrees, which typically seems to encompass most of a particular color but you can edit that if you need to.
Notice that it does not color the tires or the tail lights because that's outside of the 60 degree range of the truck's default blue hue, but it does handle shading appropriately.
First you need code to convert RGB to HSV (Hue value):
func RGBtoHSV(r : Float, g : Float, b : Float) -> (h : Float, s : Float, v : Float) {
    var h : CGFloat = 0
    var s : CGFloat = 0
    var v : CGFloat = 0
    let col = UIColor(red: CGFloat(r), green: CGFloat(g), blue: CGFloat(b), alpha: 1.0)
    col.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
    return (Float(h), Float(s), Float(v))
}
Then you need to convert HSV back to RGB. You use this when you find a hue that is in your desired hue range (i.e. a color with the same blue hue as the default truck) to save off any adjustments you make.
func HSVtoRGB(h : Float, s : Float, v : Float) -> (r : Float, g : Float, b : Float) {
    var r : Float = 0
    var g : Float = 0
    var b : Float = 0
    let C = s * v
    let HS = h * 6.0
    let X = C * (1.0 - fabsf(fmodf(HS, 2.0) - 1.0))
    if (HS >= 0 && HS < 1) {
        r = C
        g = X
        b = 0
    } else if (HS >= 1 && HS < 2) {
        r = X
        g = C
        b = 0
    } else if (HS >= 2 && HS < 3) {
        r = 0
        g = C
        b = X
    } else if (HS >= 3 && HS < 4) {
        r = 0
        g = X
        b = C
    } else if (HS >= 4 && HS < 5) {
        r = X
        g = 0
        b = C
    } else if (HS >= 5 && HS < 6) {
        r = C
        g = 0
        b = X
    }
    let m = v - C
    r += m
    g += m
    b += m
    return (r, g, b)
}
Now you simply loop through a full RGBA color cube and "adjust" any colors in the "default blue" hue range with those from your newly desired hue. Then use Core Image and the CIColorCube filter to apply your adjusted color cube to the image.
func render() {
    let centerHueAngle: Float = 214.0/360.0 // default hue of the truck body blue
    let destCenterHueAngle: Float = slider.value
    let minHueAngle: Float = (214.0 - 60.0/2.0) / 360 // 60 degree range = +30 / -30
    let maxHueAngle: Float = (214.0 + 60.0/2.0) / 360
    let hueAdjustment = centerHueAngle - destCenterHueAngle
    let size = 64
    var cubeData = [Float](repeating: 0, count: size * size * size * 4)
    var rgb: [Float] = [0, 0, 0]
    var hsv: (h: Float, s: Float, v: Float)
    var newRGB: (r: Float, g: Float, b: Float)
    var offset = 0
    for z in 0..<size {
        rgb[2] = Float(z) / Float(size) // blue value
        for y in 0..<size {
            rgb[1] = Float(y) / Float(size) // green value
            for x in 0..<size {
                rgb[0] = Float(x) / Float(size) // red value
                hsv = RGBtoHSV(r: rgb[0], g: rgb[1], b: rgb[2])
                if hsv.h < minHueAngle || hsv.h > maxHueAngle {
                    newRGB = (rgb[0], rgb[1], rgb[2])
                } else {
                    hsv.h = destCenterHueAngle == 1 ? 0 : hsv.h - hueAdjustment // force red if slider angle is 360
                    newRGB = HSVtoRGB(h: hsv.h, s: hsv.s, v: hsv.v)
                }
                cubeData[offset] = newRGB.r
                cubeData[offset + 1] = newRGB.g
                cubeData[offset + 2] = newRGB.b
                cubeData[offset + 3] = 1.0
                offset += 4
            }
        }
    }
    let data = Data(bytes: cubeData, count: cubeData.count * MemoryLayout<Float>.size)
    let colorCube = CIFilter(name: "CIColorCube")!
    colorCube.setValue(size, forKey: "inputCubeDimension")
    colorCube.setValue(data, forKey: "inputCubeData")
    colorCube.setValue(ciImage, forKey: kCIInputImageKey)
    if let outImage = colorCube.outputImage {
        let context = CIContext(options: nil)
        if let outputImageRef = context.createCGImage(outImage, from: outImage.extent) {
            imageView.image = UIImage(cgImage: outputImageRef)
        }
    }
}
You can download the sample project here.
See answers below instead. Mine doesn't provide a complete solution.
Here is a sketch of a possible solution using OpenCV:
Convert the image from RGB to HSV using cvCvtColor (we only want to change the hue).
Isolate a color with cvThreshold specifying a certain tolerance (you want a range of colors, not one flat color).
Discard areas of color below a minimum size using a blob detection library like cvBlobsLib. This will get rid of dots of a similar color elsewhere in the scene.
Mask the color with cvInRangeS and use the resulting mask to apply the new hue.
cvMerge the new image with the new hue with an image composed of the saturation and brightness channels that you saved in step one.
There are several OpenCV iOS ports on the net, e.g. http://www.eosgarden.com/en/opensource/opencv-ios/overview/. I haven't tried this myself, but it seems a good research direction.
I'm going to make the assumption that you know how to perform these basic operations, so these won't be included in my solution:
load an image
get the RGB value of a given pixel of the loaded image
set the RGB value of a given pixel
display a loaded image, and/or save it back to disk.
First of all, let's consider how you can describe the source and destination colors. Clearly you can't specify these as exact RGB values, since a photo will have slight variations in color. For example, the green pixels in the truck picture you posted are not all exactly the same shade of green. The RGB color model isn't very good at expressing basic color characteristics, so you will get much better results if you convert the pixels to HSL. Here are C functions to convert RGB to HSL and back.
The HSL color model describes three aspects of a color:
Hue - the main perceived color - i.e. red, green, orange, etc.
Saturation - how "full" the color is - i.e. from full color to no color at all
Lightness - how bright the color is
So for example, if you wanted to find all the green pixels in a picture, you would convert each pixel from RGB to HSL, then look for H values that correspond to green, with some tolerance for "near green" colors. A Hue chart, such as the one on Wikipedia, is a handy reference here.
So in your case you will be looking at pixels that have a Hue of 120 degrees +/- some amount. The bigger the range the more colors will get selected. If you make your range too wide you will start seeing yellow and cyan pixels getting selected, so you'll have to find the right range, and you may even want to offer the user of your app controls to select this range.
In addition to selecting by Hue, you may want to allow ranges for Saturation and Lightness, so that you can optionally put more limits to the pixels that you want to select for colorization.
Finally, you may want to offer the user the ability to draw a "lasso selection" so that specific parts of the picture can be left out of the colorization. This is how you could tell the app that you want the body of the green truck, but not the green wheel.
Once you know which pixels you want to modify it's time to alter their color.
The easiest way to colorize the pixels is to just change the Hue, leaving the Saturation and Lightness from the original pixel. So for example, if you want to make green pixels magenta you will be adding 180 degrees to all the Hue values of the selected pixels (making sure you use modulo 360 math).
If you want to get more sophisticated, you can also apply changes to Saturation, and that will give you a wider range of tones to work with. I think the Lightness is better left alone; you may be able to make small adjustments and the image will still look good, but if you go too far from the original you may start seeing hard edges where the processed pixels border the background pixels.
Once you have the colorized HSL pixel you just convert it back to RGB and write it back to the image.
I hope this helps. A final comment I should make is that Hue values in code are typically recorded in the 0-255 range, but many applications show them as a color wheel with a range of 0 to 360 degrees. Keep that in mind!
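The per-pixel rule described above is language-agnostic; here is a minimal sketch in Dart, to match the Flutter code elsewhere on this page, using Flutter's HSLColor class (recolorPixel and its parameter names are hypothetical):
import 'package:flutter/painting.dart';

/// Shifts the hue of [pixel] by [shiftDegrees] only if its hue lies within
/// [toleranceDegrees] of [targetHue] (all hues in 0-360 degrees).
/// Saturation and lightness are left untouched, as suggested above.
Color recolorPixel(Color pixel, double targetHue, double toleranceDegrees, double shiftDegrees) {
  final hsl = HSLColor.fromColor(pixel);
  final diff = (hsl.hue - targetHue).abs();
  final distance = diff > 180 ? 360 - diff : diff; // shortest way around the color wheel
  if (distance > toleranceDegrees) return pixel;   // outside the selected hue range
  return hsl.withHue((hsl.hue + shiftDegrees) % 360).toColor();
}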
Can I suggest you look into using OpenCV? It's an open source image manipulation library, and it's got an iOS port too. There are plenty of blog posts about how to use it and set it up.
It has a whole heap of functions that will help you do a good job of what you're attempting. You could do it just using CoreGraphics, but the end result isn't going to look nearly as good as OpenCV would.
It was originally developed at Intel, and it does a pretty good job at things like edge detection and object tracking. I remember reading a blog about how to separate a certain color from a picture with OpenCV - the examples showed a pretty good result. See here for an example. From there I can't imagine it would be a massive job to actually change the separated color to something else.
I don't know of a CoreGraphics operation for this, and I don't see a suitable CoreImage filter for this. If that's correct, then here's a push in the right direction:
Assuming you have a CGImage (or a uiImage.CGImage):
Begin by creating a new CGBitmapContext
Draw the source image to the bitmap context
Get a handle to the bitmap's pixel data
Learn how the buffer is structured so you could properly populate a 2D array of pixel values which have the form:
typedef struct t_pixel {
    uint8_t r, g, b, a;
} t_pixel;
Then create the color to locate:
const t_pixel ColorToLocate = { 0,0,0,255 }; // << black, opaque
And its substitution value:
const t_pixel SubstitutionColor = { 255,255,255,255 }; // << white, opaque
Iterate over the bitmap context's pixel buffer, creating t_pixels.
When you find a pixel which matches ColorToLocate, replace the source values with the values in SubstitutionColor.
Create a new CGImage from the CGBitmapContext.
That's the easy part! All that does is take a CGImage, replace exact color matches, and produce a new CGImage.
What you want is more sophisticated. For this task, you will want a good edge detection algorithm.
I've not used this app you have linked. If it's limited to a few colors, then they may simply be swapping channel values, paired with edge detection (keep in mind that buffers may also be represented in multiple color models - not just RGBA).
If (in the app you linked) the user can choose arbitrary colors, values, and edge thresholds, then you will have to use real blending and edge detection. If you need to see how this is accomplished, you may want to check out a package such as GIMP (an open-source image editor); it has the algorithms to detect edges and select by color.
