Smoothen Edges of Chroma Key - CoreImage - Swift

I'm using the following code to remove the green background from an image. But the edges of the image have a green tint to them, and some pixels are damaged. How can I smooth this out and make the cut-out perfect?
func chromaKeyFilter(fromHue: CGFloat, toHue: CGFloat) -> CIFilter? {
    // 1: allocate a 64x64x64 color cube
    let size = 64
    var cubeRGB = [Float]()
    // 2: walk every cell of the cube
    for z in 0 ..< size {
        let blue = CGFloat(z) / CGFloat(size - 1)
        for y in 0 ..< size {
            let green = CGFloat(y) / CGFloat(size - 1)
            for x in 0 ..< size {
                let red = CGFloat(x) / CGFloat(size - 1)
                // 3: key out any hue inside [fromHue, toHue]
                let hue = getHue(red: red, green: green, blue: blue)
                let alpha: CGFloat = (hue >= fromHue && hue <= toHue) ? 0 : 1
                // 4: store premultiplied RGBA
                cubeRGB.append(Float(red * alpha))
                cubeRGB.append(Float(green * alpha))
                cubeRGB.append(Float(blue * alpha))
                cubeRGB.append(Float(alpha))
            }
        }
    }
    // 5: wrap the cube data in a CIColorCube filter (this closing step follows
    // Apple's chroma key sample, which this code is based on)
    let data = Data(bytes: cubeRGB, count: cubeRGB.count * MemoryLayout<Float>.size)
    return CIFilter(name: "CIColorCube",
                    parameters: ["inputCubeDimension": size, "inputCubeData": data])
}
@IBAction func clicked(_ sender: Any) {
    let a = URL(fileURLWithPath: "green.png")
    let b = URL(fileURLWithPath: "back.jpg")
    let image1 = CIImage(contentsOf: a)
    let image2 = CIImage(contentsOf: b)
    let chromaCIFilter = self.chromaKeyFilter(fromHue: 0.3, toHue: 0.4)
    chromaCIFilter?.setValue(image1, forKey: kCIInputImageKey)
    let sourceCIImageWithoutBackground = chromaCIFilter?.outputImage
    /*let compositor = CIFilter(name: "CISourceOverCompositing")
    compositor?.setValue(sourceCIImageWithoutBackground, forKey: kCIInputImageKey)
    compositor?.setValue(image2, forKey: kCIInputBackgroundImageKey)
    let compositedCIImage = compositor?.outputImage*/
    let rep = NSCIImageRep(ciImage: sourceCIImageWithoutBackground!)
    let nsImage = NSImage(size: rep.size)
    nsImage.addRepresentation(rep)
    let url = URL(fileURLWithPath: "file.png")
    nsImage.pngWrite(to: url) // pngWrite is a custom NSImage extension (not shown)
}
Input: [green screen source image]
Output: [keyed result showing the green fringe and damaged edge pixels]
Update / Update 2 / Update 3: [follow-up screenshots]

Professional tools for chroma keying usually include what's called a spill suppressor. A spill suppressor finds pixels that contain small amounts of the chroma key color and shifts the color in the opposite direction. So green pixels will move towards magenta. This reduces the green fringing you often see around keyed footage.
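To illustrate the idea (this sketch is mine, not taken from any particular tool): one very simple spill-suppression rule clamps the green channel to the maximum of red and blue, which pulls green-tinted fringe pixels toward neutral/magenta.
// Minimal green spill suppression: if green exceeds both red and blue,
// clamp it to their maximum. Run per pixel after keying.
func suppressGreenSpill(r: Float, g: Float, b: Float) -> (r: Float, g: Float, b: Float) {
    let limit = max(r, b)
    return (r, min(g, limit), b)
}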
The pixels you call damaged are just pixels that had some level of the chroma color in them and are being picked up by your keyer function. Rather than choosing a hard 0 or 1, you might consider a function that returns a value between 0 and 1 based on the color of the pixel. For example, you could find the angular distance of the current pixel's hue to the fromHue and toHue and maybe do something like this:
// Get the distance from the edges of the range, and convert to be between 0 and 1
var distance: CGFloat
if (fromHue <= hue) && (hue <= toHue) {
    distance = min(abs(hue - fromHue), abs(hue - toHue)) / ((toHue - fromHue) / 2.0)
} else {
    distance = 0.0
}
distance = 1.0 - distance
let alpha = sin(.pi * distance - .pi / 2.0) * 0.5 + 0.5
That will give you a smooth variation from the edges of the range to the center of the range: the falloff is a smooth S-curve, fully opaque at the edges of the range and fully transparent at its center. (Note that I've left off dealing with the fact that hue wraps around at 360°. That's something you'll have to handle.)
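If it helps, here is a minimal sketch of a circular hue distance (my own illustration, with hues normalized to 0...1, not code from the answer):
// Hypothetical helper: the smallest angular distance between two hues in
// 0...1, accounting for the wrap at 1.0 (i.e. 360°).
func hueDistance(_ a: CGFloat, _ b: CGFloat) -> CGFloat {
    let d = abs(a - b)
    return min(d, 1.0 - d)
}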
Another thing you can do is limit the keying to only affect pixels where the saturation is above some threshold and the value is above some threshold. For very dark and/or unsaturated colors, you probably don't want to key it out. I think that would help with the issues you're seeing with the model's jacket, for example.
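As a minimal sketch of that gating (my own illustration; getHSV is a hypothetical variant of getHue that also returns saturation and value, and the thresholds are untuned placeholders):
// Inside the cube loop, replacing the hard hue test.
// getHSV is assumed to return (hue, saturation, value), each in 0...1;
// 0.3 and 0.2 are illustrative thresholds you would tune by eye.
let (hue, saturation, value) = getHSV(red: red, green: green, blue: blue)
let inHueRange = (hue >= fromHue && hue <= toHue)
let keyed = inHueRange && saturation > 0.3 && value > 0.2
let alpha: CGFloat = keyed ? 0 : 1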

My (live) keyer works like this (with the enhancements user1118321 describes) and using its analyser I quickly noticed this is most likely not a true green screen image. It's one of many fake ones where the green screen seems to have been replaced with a saturated monochrome green. Though this may look nice, it introduces artefacts where the keyed subject (with fringes of the originally used green) meets the monochrome green.
You can see a single green was used by looking at the histogram. Real green screens always have (in)visible shades of green. I was able to get a decent key but had to manually tweak some settings. With an NLE you can probably get much better results, but they will also be a lot more complex.
So to get back to your issue: your code probably works as it is now (update #3); you just have to use a proper, real-life green screen image.

Related

ChromaKey filter not filtering desired color CIImage

I am trying to make some colors of the image transparent. Below are the images that I have.
Let's say that I want to remove the bold red color from the image and make it transparent. I am viewing my image as a PDF, so where the color is removed, the pink background on the side should show through. I am using the code from the Apple documentation, which I slightly modified in the following way:
// inside 3rd loop
let hue = getHue(red: red, green: green, blue: blue)
let wantedHue = getHue(red: myPixel.redComponent, green: myPixel.greenComponent, blue: myPixel.blueComponent)
let isHueInRange = hue >= wantedHue - 0.1 && hue <= wantedHue + 0.1
let alpha: CGFloat = isHueInRange ? 0 : 1
Here is the result I get. As you can see, there is some color left and the background is not fully transparent. I made these modifications because I need to be able to dynamically remove the background color of the image. (My images won't have any humans or other complex objects in them. It will most likely be text and some rectangles. No color mixing, just solid colors.)
So what I do is find the first pixel of the image and get its color. Once I have the color I get its hue, but I manually set the allowed range to 0.2. I am assuming that the image won't contain any color similar to the one I am removing.
EDIT:
The original color is: rgb(200, 39, 39) - hsv(200, 80.5, 78.4)
The residue color is: rgb(246, 215, 210) - hsv(352, 14.6, 96.5)
The image I have: [image]
The image I get after applying the filter: [image]
To remove red, alpha should be zero when the hue is between roughly 0.9 and 0.1 - that is, on either side of the point where the hue value wraps around.
Use the following and it will work.
let hue = getHue(red: red, green: green, blue: blue)
var alpha: CGFloat = 1.0
if (hue < 0.1 && hue >= 0.0) || (hue > 0.9 && hue <= 1.0) {
    alpha = 0.0
}
I think the problem with your code is that it never considers the range 0.9 to 1.0. It always considers some range from 0.0xxx to 0.1xxx.
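Building on that, here is a minimal sketch (my generalization, not part of the original answer) that handles the wrap-around for any sampled wantedHue instead of hard-coding red:
// Inside the third loop of the cube, reusing the OP's getHue and wantedHue.
// The circular distance treats 0.95 and 0.05 as only 0.1 apart, so hues
// straddling the wrap are matched; 0.1 is the OP's original tolerance.
let hue = getHue(red: red, green: green, blue: blue)
let delta = abs(hue - wantedHue)
let circularDelta = min(delta, 1.0 - delta)
let alpha: CGFloat = circularDelta <= 0.1 ? 0.0 : 1.0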

Swift, SpriteKit: Low FPS with a huge for-loop in update method

Is it normal to have very low FPS (~7fps to ~10fps) with Sprite Kit using the code below?
Use case:
I'm just drawing vertical lines from bottom to top (1024 * 64 of them). A delta value determines the end position of every line for each frame. These lines make up my CGPath, which is assigned to the SKShapeNode every frame. Nothing else. I'm wondering about the performance of SpriteKit (or maybe of Swift).
Do you have any suggestions to improve the performance?
Screen: [screenshot]
Code:
import UIKit
import SpriteKit

class SKViewController: UIViewController {
    @IBOutlet weak var skView: SKView!
    var scene: SKScene!
    var lines: SKShapeNode!
    let N: Int = 1024 * 64
    var delta: Int = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        scene = SKScene(size: skView.bounds.size)
        scene.delegate = self
        skView.showsFPS = true
        skView.showsDrawCount = true
        skView.presentScene(scene)
        lines = SKShapeNode()
        lines.lineWidth = 1
        lines.strokeColor = .white
        scene.addChild(lines)
    }
}
extension SKViewController: SKSceneDelegate {
    func update(_ currentTime: TimeInterval, for scene: SKScene) {
        let w: CGFloat = scene.size.width
        let offset: CGFloat = w / CGFloat(N)
        let path = UIBezierPath()
        for i in 0 ..< N { // N -> 1024 * 64 -> 65536
            let x1: CGFloat = CGFloat(i) * offset
            let x2: CGFloat = x1
            let y1: CGFloat = 0
            let y2: CGFloat = CGFloat(delta)
            path.move(to: CGPoint(x: x1, y: y1))
            path.addLine(to: CGPoint(x: x2, y: y2))
        }
        lines.path = path.cgPath

        // Updating delta to simulate the changes
        if delta > 100 {
            delta = 0
        }
        delta += 1
    }
}
Thanks and Best regards,
Aiba ^_^
CPU
65536 is a rather large number. Asking the CPU to run that many loop iterations every frame will always result in slowness. For example, even if I make a test Command Line project that only measures the time it takes to run an empty loop:
while true {
    let date = Date().timeIntervalSince1970
    for _ in 1...65536 {}
    let date2 = Date().timeIntervalSince1970
    print(1 / (date2 - date))
}
It will result in ~17 fps. I haven't even applied the CGPath, and it's already appreciably slow.
Dispatch Queue
If you want to keep your game at 60 fps even though the rendering of your CGPath is still slow, you can use a DispatchQueue.
var rendering: Bool = false // remember to make this one an instance value
while true {
    let date = Date().timeIntervalSince1970
    if !rendering {
        rendering = true
        let foo = DispatchQueue(label: "Run The Loop")
        foo.async {
            for _ in 1...65536 {}
            let date2 = Date().timeIntervalSince1970
            print("Render", 1 / (date2 - date))
            rendering = false // reset the flag only after the work finishes
        }
    }
}
This retains a natural 60fps experience, and you can update other objects, however, the rendering of your SKShapeNode object is still quite slow.
GPU
If you'd like to speed up the rendering, I would recommend looking into running it on the GPU instead of the CPU. The GPU (Graphics Processing Unit) is much better suited for this, and can handle huge loops without disturbing gameplay. This may require you to program it as an SKShader, for which there are tutorials available.
Check the number of subdivisions
No iOS device has a screen width over 3000 pixels or 1500 points. (Retina screens have logical points and physical pixels, where a point is equivalent to 2 or 3 pixels depending on the scale factor; iOS works with points, but you have to keep pixels in mind too.) The ones that even come close are those with the biggest screens (iPad Pro 12.9 and iPhone Pro Max) in landscape mode.
A typical device in portrait orientation will be less than 500 points and 1500 pixels wide.
You are dividing this width into 65536 parts, and will end up with pixel (not even point) coordinates like 0.00, 0.05, 0.10, 0.15, ..., 0.85, which will actually refer to the same pixel twenty times (my result, rounded up, in an iPhone simulator).
Your code draws twenty to sixty lines in the exact same physical position, on top of each other! Why do that? If you set N to w and use 1.0 for offset, you'll have the same visible result at 60 FPS.
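A minimal sketch of that reduction (my rewrite of the loop in update(_:for:), assuming one line per point of width):
// Draw one vertical line per point of scene width instead of 65536;
// visually identical, since the extra lines landed on the same pixels anyway.
let width = Int(scene.size.width)
let path = UIBezierPath()
for i in 0 ..< width {
    let x = CGFloat(i) // offset is now 1.0 point
    path.move(to: CGPoint(x: x, y: 0))
    path.addLine(to: CGPoint(x: x, y: CGFloat(delta)))
}
lines.path = path.cgPath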
Reconsider the approach
The implementation will still have some drawbacks, though, even if you greatly reduce the amount of work to be done per frame. It's not recommended to advance animation frames in update(_:) since you get no guarantees on the FPS, and you usually want your animation to follow a set schedule, i.e. complete in 1 second rather than 60 frames. (Should the FPS drop to, say, 10, a 60-frame animation would complete in 6 seconds, whereas a 1-second animation would still finish in 1 second, but at a lower frame rate, i.e. skipping frames.)
Visibly, what your animation does is draw a rectangle on the screen whose width fills the screen, and whose height increases from 0 to 100 points. I'd say, a more "standard" way of achieving this would be something like this:
let sprite = SKSpriteNode(color: .white, size: CGSize(width: scene.size.width, height: 100.0))
sprite.yScale = 0.0
scene.addChild(sprite)
sprite.run(SKAction.repeatForever(SKAction.sequence([
    SKAction.scaleY(to: 1.0, duration: 2),
    SKAction.scaleY(to: 0.0, duration: 0.0)
])))
Note that I used SKSpriteNode because SKShapeNode is said to suffer from bugs and performance issues, although people have reported some improvements in the past few years.
But if you do insist on redrawing the entire texture of your sprite every frame due to some specific need, that may indeed be something for custom shaders… But those require learning a whole new approach, not to mention a new programming language.
Your shader would be executed on the GPU for each pixel. I repeat: the code would be executed for each single pixel – a radical departure from the world of SpriteKit.
The shader has access to a bunch of values to work with, such as a normalized set of coordinates (between (0.0, 0.0) and (1.0, 1.0), in a variable called v_tex_coord) and a system time (seconds elapsed since the shader has been running) in u_time. It needs to determine what color value the pixel in question should be, and set it by storing the value in the variable gl_FragColor.
It could be something like this:
void main() {
    // default to a black color, or a three-dimensional vector vec3(0.0, 0.0, 0.0):
    vec3 color = vec3(0.0);

    // take the fraction part of the time in seconds;
    // this value will go from 0.0 to 0.9999… every second, then drop back to 0.0.
    // use this to determine the relative height of the area we want to paint white:
    float height = fract(u_time);

    // check if the current pixel is below the height of the white area:
    if (v_tex_coord.y < height) {
        // if so, set it to white (a three-dimensional vector vec3(1.0, 1.0, 1.0)):
        color = vec3(1.0);
    }

    gl_FragColor = vec4(color, 1.0); // the fourth dimension is the alpha
}
Put this in a file called shader.fsh, create a full-screen sprite mySprite, and assign the shader to it:
mySprite.shader = SKShader.init(fileNamed: "shader.fsh")
Once you display the sprite, its shader will take care of all of the rendering. Note, however, that your sprite will lose some SpriteKit functionalities as a result.

Flickering SCNTube with transparency

UPDATED
I create some SCNTubes with the following materials:
let radius: CGFloat = 0.3
let height: CGFloat = 0.5
let color = UIColor(red: 0.17, green: 0.62, blue: 0.76, alpha: 1.0)
let geometryTube = SCNTube(innerRadius: radius - 0.1, outerRadius: radius, height: height)

let matOuter = SCNMaterial()
matOuter.diffuse.contents = color
matOuter.transparency = 0.1

let matInner = SCNMaterial()
matInner.diffuse.contents = color
matInner.transparency = 0.6

let matTop = SCNMaterial()
matTop.diffuse.contents = color

geometryTube.materials = [matOuter, matInner, matTop]

for index in 0...20 {
    let node = SCNNode(geometry: geometryTube)
    node.position = SCNVector3(1.0 + Double(index), -1.0, 0.0)
    sceneView.scene.rootNode.addChildNode(node)
}
Depending on the angle the tubes are flickering.
The materials property is designed to have one material for each geometry - whereas you are assigning three materials to a single geometry.
These are all overlapping, and SceneKit has to decide what to render at each point.
EDIT: My answer (above) was wrong - rather than delete it I'll add this to highlight why I was wrong:
If a geometry has the same number of materials as it has geometry elements, the material index corresponds to the element index. For geometries with fewer materials than elements, SceneKit determines the material index for each element by calculating the index of that element modulo the number of materials. For example, in a geometry with six elements and three materials, SceneKit renders the element at index 5 using the material at index 5 % 3 = 2.
Apologies to OP for not being helpful with your question.
You might want to look at how your geometry is lit. There are several different lightingModel values you can use. Try, for example, setting each material's .lightingModel property to .constant, e.g.:
matOuter.lightingModel = .constant
Another option: you may also want to explore SCNTransparencyMode, as sketched below...
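As a hedged sketch (these are starting points to experiment with, not a confirmed fix for this exact scene), reusing the materials and the node/index from the loop above:
// Constant lighting ignores lights and normals, which can hide shading seams.
matOuter.lightingModel = .constant
matInner.lightingModel = .constant

// Dual-layer transparency draws back faces before front faces, which helps
// with self-overlapping transparent geometry such as a tube (iOS 11+).
matOuter.transparencyMode = .dualLayer
matInner.transparencyMode = .dualLayer

// Alternatively, keep transparent surfaces from fighting over the depth
// buffer, and give the nodes an explicit draw order.
matOuter.writesToDepthBuffer = false
node.renderingOrder = index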

Can I change a color within an image in Swift [duplicate]

My question: if I have a lion image, I want to change the color of the lion alone, not the background color. For that I referred to this SO question, but it turns the color of the whole image, and the result doesn't look great. I need the color change to be like Photoshop. Is it possible to do this in Core Graphics, or do I have to use another library?
EDIT: I need the color change to be like the iQuikColor app
This took quite a while to figure out, mainly because I wanted to get it up and running in Swift using Core Image and CIColorCube.
@Miguel's explanation is spot on about the way you need to replace a "Hue angle range" with another "Hue angle range". You can read his post below for details on what a Hue Angle Range is.
I made a quick app that replaces the default blue of the truck below with whatever you choose on the Hue slider.
You can slide the slider to tell the app what color Hue you want to replace the blue with.
I'm hardcoding the Hue range to be 60 degrees, which typically seems to encompass most of a particular color but you can edit that if you need to.
Notice that it does not color the tires or the tail lights because that's outside of the 60 degree range of the truck's default blue hue, but it does handle shading appropriately.
First you need code to convert RGB to HSV (Hue value):
func RGBtoHSV(r: Float, g: Float, b: Float) -> (h: Float, s: Float, v: Float) {
    var h: CGFloat = 0
    var s: CGFloat = 0
    var v: CGFloat = 0
    let col = UIColor(red: CGFloat(r), green: CGFloat(g), blue: CGFloat(b), alpha: 1.0)
    col.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
    return (Float(h), Float(s), Float(v))
}
Then you need to convert HSV back to RGB. You'll use this when you find a hue that's in your desired hue range (i.e., a color with the same blue hue as the default truck), to save off any adjustments you make.
func HSVtoRGB(h: Float, s: Float, v: Float) -> (r: Float, g: Float, b: Float) {
    var r: Float = 0
    var g: Float = 0
    var b: Float = 0
    let C = s * v
    let HS = h * 6.0
    let X = C * (1.0 - fabsf(fmodf(HS, 2.0) - 1.0))
    if HS >= 0 && HS < 1 {
        r = C; g = X; b = 0
    } else if HS >= 1 && HS < 2 {
        r = X; g = C; b = 0
    } else if HS >= 2 && HS < 3 {
        r = 0; g = C; b = X
    } else if HS >= 3 && HS < 4 {
        r = 0; g = X; b = C
    } else if HS >= 4 && HS < 5 {
        r = X; g = 0; b = C
    } else if HS >= 5 && HS < 6 {
        r = C; g = 0; b = X
    }
    let m = v - C
    r += m
    g += m
    b += m
    return (r, g, b)
}
Now you simply loop through a full RGBA color cube and "adjust" any colors in the "default blue" hue range with those from your newly desired hue. Then use Core Image and the CIColorCube filter to apply your adjusted color cube to the image.
func render() {
    let centerHueAngle: Float = 214.0/360.0 // default hue of the truck body blue
    let destCenterHueAngle: Float = slider.value
    let minHueAngle: Float = (214.0 - 60.0/2.0) / 360 // 60 degree range = +30 -30
    let maxHueAngle: Float = (214.0 + 60.0/2.0) / 360
    let hueAdjustment = centerHueAngle - destCenterHueAngle
    let size = 64
    var cubeData = [Float](repeating: 0, count: size * size * size * 4)
    var rgb: [Float] = [0, 0, 0]
    var hsv: (h: Float, s: Float, v: Float)
    var newRGB: (r: Float, g: Float, b: Float)
    var offset = 0
    for z in 0 ..< size {
        rgb[2] = Float(z) / Float(size) // blue value
        for y in 0 ..< size {
            rgb[1] = Float(y) / Float(size) // green value
            for x in 0 ..< size {
                rgb[0] = Float(x) / Float(size) // red value
                hsv = RGBtoHSV(r: rgb[0], g: rgb[1], b: rgb[2])
                if hsv.h < minHueAngle || hsv.h > maxHueAngle {
                    // outside the keyed hue range: pass the color through unchanged
                    newRGB = (rgb[0], rgb[1], rgb[2])
                } else {
                    hsv.h = destCenterHueAngle == 1 ? 0 : hsv.h - hueAdjustment // force red if slider angle is 360
                    newRGB = HSVtoRGB(h: hsv.h, s: hsv.s, v: hsv.v)
                }
                cubeData[offset] = newRGB.r
                cubeData[offset+1] = newRGB.g
                cubeData[offset+2] = newRGB.b
                cubeData[offset+3] = 1.0
                offset += 4
            }
        }
    }
    let data = Data(bytes: cubeData, count: cubeData.count * MemoryLayout<Float>.size)
    let colorCube = CIFilter(name: "CIColorCube")!
    colorCube.setValue(size, forKey: "inputCubeDimension")
    colorCube.setValue(data, forKey: "inputCubeData")
    colorCube.setValue(ciImage, forKey: kCIInputImageKey)
    if let outImage = colorCube.outputImage {
        let context = CIContext(options: nil)
        if let outputImageRef = context.createCGImage(outImage, from: outImage.extent) {
            imageView.image = UIImage(cgImage: outputImageRef)
        }
    }
}
You can download the sample project here.
See answers below instead. Mine doesn't provide a complete solution.
Here is a sketch of a possible solution using OpenCV:
1. Convert the image from RGB to HSV using cvCvtColor (we only want to change the hue).
2. Isolate a color with cvThreshold, specifying a certain tolerance (you want a range of colors, not one flat color).
3. Discard areas of color below a minimum size using a blob detection library like cvBlobsLib. This will get rid of dots of similar color in the scene.
4. Mask the color with cvInRangeS and use the resulting mask to apply the new hue.
5. cvMerge the new image with the new hue with an image composed of the saturation and brightness channels that you saved in step one.
There are several OpenCV iOS ports on the net, e.g. http://www.eosgarden.com/en/opensource/opencv-ios/overview/ I haven't tried this myself, but it seems a good research direction.
I'm going to make the assumption that you know how to perform these basic operations, so these won't be included in my solution:
- load an image
- get the RGB value of a given pixel of the loaded image
- set the RGB value of a given pixel
- display a loaded image, and/or save it back to disk.
First of all, let's consider how you can describe the source and destination colors. Clearly you can't specify these as exact RGB values, since a photo will have slight variations in color. For example, the green pixels in the truck picture you posted are not all exactly the same shade of green. The RGB color model isn't very good at expressing basic color characteristics, so you will get much better results if you convert the pixels to HSL. Here are C functions to convert RGB to HSL and back.
The HSL color model describes three aspects of a color:
Hue - the main perceived color - i.e. red, green, orange, etc.
Saturation - how "full" the color is - i.e. from full color to no color at all
Lightness - how bright the color is
So for example, if you wanted to find all the green pixels in a picture, you will convert each pixel from RGB to HSL, then look for H values that correspond to green, with some tolerance for "near green" colors. Below is a Hue chart, from Wikipedia: [hue chart image]
So in your case you will be looking at pixels that have a Hue of 120 degrees +/- some amount. The bigger the range the more colors will get selected. If you make your range too wide you will start seeing yellow and cyan pixels getting selected, so you'll have to find the right range, and you may even want to offer the user of your app controls to select this range.
In addition to selecting by Hue, you may want to allow ranges for Saturation and Lightness, so that you can optionally put more limits to the pixels that you want to select for colorization.
Finally, you may want to offer the user the ability to draw a "lasso selection" so that specific parts of the picture can be left out of the colorization. This is how you could tell the app that you want the body of the green truck, but not the green wheel.
Once you know which pixels you want to modify it's time to alter their color.
The easiest way to colorize the pixels is to just change the Hue, leaving the Saturation and Lightness from the original pixel. So for example, if you want to make green pixels magenta you will be adding 180 degrees to all the Hue values of the selected pixels (making sure you use modulo 360 math).
If you wanted to get more sophisticated, you can also apply changes to Saturation and that will give you a wider range of tones you can go to. I think the Lightness is better left alone, you may be able to make small adjustments and the image will still look good, but if you go too far away from the original you may start seeing hard edges where the process pixels border with background pixels.
Once you have the colorized HSL pixel you just convert it back to RGB and write it back to the image.
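As a rough sketch of those two steps (my own illustration, reusing the RGBtoHSV/HSVtoRGB helpers from the earlier answer; the tolerance is arbitrary):
// Shift green pixels (hue ≈ 120°, i.e. 0.333 on a 0...1 scale) by 180°
// to magenta, leaving saturation and value untouched. 0.08 is an untuned,
// illustrative tolerance.
func colorizeGreenToMagenta(r: Float, g: Float, b: Float) -> (r: Float, g: Float, b: Float) {
    let (h0, s, v) = RGBtoHSV(r: r, g: g, b: b)
    var h = h0
    let target: Float = 120.0 / 360.0
    let d = abs(h - target)
    if min(d, 1.0 - d) < 0.08 { // circular hue match
        h = (h + 0.5).truncatingRemainder(dividingBy: 1.0) // +180° modulo 360°
    }
    return HSVtoRGB(h: h, s: s, v: v)
}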
I hope this helps. A final comment I should make is that Hue values in code are typically recorded in the 0-255 range, but many applications show them as a color wheel with a range of 0 to 360 degrees. Keep that in mind!
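For instance (illustrative numbers only):
// Converting between the two conventions: a 0-255 hue byte vs. degrees.
let hueByte: UInt8 = 85                           // green on the 0-255 scale
let degrees = Float(hueByte) / 255.0 * 360.0      // = 120 degrees
let backToByte = UInt8((degrees / 360.0) * 255.0) // back to 85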
Can I suggest you look into using OpenCV? It's an open source image manipulation library, and it's got an iOS port too. There are plenty of blog posts about how to use it and set it up.
It has a whole heap of functions that will help you do a good job of what you're attempting. You could do it just using CoreGraphics, but the end result isn't going to look nearly as good as OpenCV would.
It was originally developed at Intel, so as you might expect it does a pretty good job at things like edge detection and object tracking. I remember reading a blog about how to separate a certain color from a picture with OpenCV - the examples showed a pretty good result. See here for an example. From there I can't imagine it would be a massive job to actually change the separated color to something else.
I don't know of a CoreGraphics operation for this, and I don't see a suitable CoreImage filter for this. If that's correct, then here's a push in the right direction:
Assuming you have a CGImage (or a UIImage's cgImage):
Begin by creating a new CGBitmapContext
Draw the source image to the bitmap context
Get a handle to the bitmap's pixel data
Learn how the buffer is structured so you could properly populate a 2D array of pixel values which have the form:
typedef struct t_pixel {
    uint8_t r, g, b, a;
} t_pixel;
Then create the color to locate:
const t_pixel ColorToLocate = { 0,0,0,255 }; // << black, opaque
And its substitution value:
const t_pixel SubstitutionColor = { 255,255,255,255 }; // << white, opaque
Iterate over the bitmap context's pixel buffer, creating t_pixels.
When you find a pixel which matches ColorToLocate, replace the source values with the values in SubstitutionColor.
Create a new CGImage from the CGBitmapContext.
That's the easy part! All that does is take a CGImage, replace exact color matches, and produce a new CGImage.
What you want is more sophisticated. For this task, you will want a good edge detection algorithm.
I've not used the app you have linked. If it's limited to a few colors, then they may simply be swapping channel values, paired with edge detection (keep in mind that buffers may also be represented in multiple color models - not just RGBA).
If (in the app you linked) the user can choose arbitrary colors, values, and edge thresholds, then you will have to use real blending and edge detection. If you need to see how this is accomplished, you may want to check out a package such as GIMP (an open-source image editor) - it has the algorithms to detect edges and select by color.
