I am working on replacing a certain color in an image with the user's selected color. I am using OpenCV for the color replacement.
Below I have briefly described where I took help from and what I got.
How to change a particular color in an image?
I followed the steps / took the basic idea from the accepted answer of the link above, in which the author says you only need to change the hue for colour replacement.
After that I ran into an issue similar to
color replacement in image for iphone application (i.e. it's good code for colour replacement for complete beginners),
and from that issue I got the idea that I also need to change the "Saturation".
Now I am running into an issue like this:
"When my source image is too light (i.e. has high brightness) and I replace a colour with some dark colour, the colours look light in the replaced image instead of dark. Because of that, the replaced colour does not seem to match the colour we used for the replacement."
This happens because I am not considering brightness in the replacement. Here is where I am stuck: what is the formula or idea for changing the brightness?
Suppose I replace the brightness of the image with the brightness of the destination colour; then it would look like a flat replacement and the image would lose its actual shadows and edges.
Edit:
When I take the brightness of the source (i.e. the pixel being processed) into account in the replacement, I face another issue. Let me explain with a scenario from my application.
For example, I change the colour of a car (like whiteAngl explained) and afterwards erase a few portions of the newly coloured car. Then I recolour the erased portion, but now the colour applied after erasing and the colour applied before erasing don't match, because each time I get a different lightness: the pixel being processed has changed in between, and with it the lightness of the colour in the output. How can I overcome this issue?
Any help will be appreciated.
Without seeing the code you have tried, it's not easy to guess what you have done wrong. To show you with a concrete example how this is done, let's change the ugly blue color of this car:
This short Python script shows how we can change the color using the HSV color space:
import cv2

orig = cv2.imread("original.jpg")
hsv = cv2.cvtColor(orig, cv2.COLOR_BGR2HSV)
# OpenCV stores hue in [0, 179] for 8-bit images, so wrap around 180
# instead of letting the uint8 addition overflow at 256
hsv[:, :, 0] = (hsv[:, :, 0].astype(int) + 100) % 180
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imwrite('changed.jpg', bgr)
and you get:
On Wikipedia you can see that hue ranges from 0 to 360 degrees, but for the values OpenCV uses (0 to 179 for 8-bit images) see the documentation. You can see I added 100 to the hue of every pixel in the image. I guess you want to change the color of only a portion of your image, but you probably get the idea from the script above.
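If you do only want to recolor a portion, a minimal sketch is to build a mask first and shift the hue only where the mask is set; the inRange bounds below are assumptions you would tune for your own picture:
import cv2
import numpy as np

orig = cv2.imread("original.jpg")
hsv = cv2.cvtColor(orig, cv2.COLOR_BGR2HSV)
# keep only blue-ish pixels; tune these bounds for your own image
mask = cv2.inRange(hsv, (100, 80, 40), (130, 255, 255))
shifted = (hsv[:, :, 0].astype(int) + 100) % 180
hsv[:, :, 0] = np.where(mask > 0, shifted, hsv[:, :, 0]).astype(np.uint8)
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imwrite('changed_masked.jpg', bgr)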
Here is how to get the requested dark red car. First we get the red one:
And the dark red one, in which I tried to keep the metallic feeling:
As I said, the equation you use to shift the lightness of the color depends on the material you want the object to have. Here I came up with a quick and dirty equation to keep the metallic look of the car. This script produces the dark red car image above from the first light blue car image:
import cv2

orig = cv2.imread("original.jpg")
hls = cv2.cvtColor(orig, cv2.COLOR_BGR2HLS)
# change color from blue to red; as above, OpenCV hue is in [0, 179]
# for 8-bit images, so wrap around 180 instead of overflowing the uint8
hls[:, :, 0] = (hls[:, :, 0].astype(int) + 80) % 180
for i in range(1, 50):  # reduce lightness up to 49 times
    # select pixels whose lightness is greater than 0 (black) and less than very bright;
    # 220-i*2 is there to reduce the lightness of bright pixels a fewer number of times,
    # so in the first iteration we don't touch pixels with lightness >= 218,
    # in the second iteration we don't touch pixels with lightness >= 216, and so on
    ind = (hls[:, :, 1] > 0) & (hls[:, :, 1] < (220 - i * 2))
    # subtract 1 from the lightness of the selected pixels, using the trick True=1 False=0,
    # so the selected pixels get darker
    hls[:, :, 1] -= ind.astype('uint8')
bgr = cv2.cvtColor(hls, cv2.COLOR_HLS2BGR)
cv2.imwrite('changed.jpg', bgr)
You are right: changing only the hue will not change the brightness at all (or only very weakly, due to some perceptual effects), and what you want is to change the brightness as well. And as you mentioned, setting the brightness to the target brightness will lose all pixel variation (you will only see changes in saturation). So what's next?
What you can do is change the pixel's hue and additionally try to match the average lightness. To do that, just compute the average brightness B of all the pixels to be processed, and then multiply all your brightness values by Bt/B, where Bt is the brightness of your target color.
Doing that will match the hue (due to the first step) and the brightness (due to the second step), while preserving the edges (because you only modified the average brightness).
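A minimal sketch of that mean-matching idea, in the spirit of the HLS script above (the mask, target hue and target lightness here are placeholders, not values from the question):
import cv2
import numpy as np

orig = cv2.imread("original.jpg")
hls = cv2.cvtColor(orig, cv2.COLOR_BGR2HLS).astype(np.float32)

mask = np.ones(orig.shape[:2], dtype=bool)  # placeholder: process all pixels
target_hue = 90          # OpenCV hue of the target color, range [0, 179]
target_lightness = 60.0  # brightness Bt of the target color, range [0, 255]

# step 1: move the selected pixels to the target hue
hls[..., 0][mask] = target_hue
# step 2: scale lightness by Bt/B so the average matches the target
mean_lightness = hls[..., 1][mask].mean()
hls[..., 1][mask] *= target_lightness / mean_lightness

bgr = cv2.cvtColor(np.clip(hls, 0, 255).astype(np.uint8), cv2.COLOR_HLS2BGR)
cv2.imwrite("changed.jpg", bgr)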
This is a special case of histogram matching, where your target histogram has a single value (the target color), so only the mean can be matched in a reasonable way.
And if you're looking for a "credible source" as stated in your bounty request: I am a postdoc at Harvard and will be presenting a paper on color histogram matching at SIGGRAPH this year ;).
Related
I'm trying to put together a simple shader with Unity's Shader Graph. The material should appear white, yellow and blue, and static. However, in the sample gradient it only displays yellow. If I change the time input to Sine, the gradient blends through the colours. What is going wrong?
Plug an "UV" node in the gradient sampler's "time" input instead of a "Time" node.
First thing to know is that a Gradient goes from 0 to 1.
The 0 is on the left (in white) and the 1 is on the right (in yellow).
I guess that the blue is between approx. 0.25 and 0.3.
On the other hand, you are using Time, which has a constantly growing value far above 1. Therefore it is always at the max, meaning plain yellow.
If you use SinTime, the value oscillates between 0 and 1 as a sinusoid, making the color go from left to right, then from right to left.
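As a rough numeric sketch of the difference (assuming the gradient clamps its input to [0, 1], and simplifying SinTime to a plain sine):
import math

# Time grows without bound, so a clamping gradient always samples its
# right edge; a sine-based input keeps sweeping back through [0, 1]
for t in (0.5, 2.0, 5.0, 10.0):
    clamped_time = min(max(t, 0.0), 1.0)
    clamped_sine = min(max(math.sin(t), 0.0), 1.0)
    print(f"t={t:>4}: Time -> {clamped_time:.2f}, sine -> {clamped_sine:.2f}")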
Additional information: as of today (2019.1.12) there seems to be a bug in Shader Graph when you use a Gradient on a system where the decimal separator is ",": the generated shader is written with "," instead of "." in function calls, messing up the arguments.
I've been trying to solve this for a while. I have to check whether a given pixel (x,y) is fully transparent.
1. How do I extract the alpha channel of a given pixel? Does an alpha value of 127 mean that the pixel is transparent?
2. I have tested the following code on a transparent pixel and it produces an RGB combination of a really dark (almost black) colour. I could use this as an indicator, but I need a more accurate way.
# open the PNG and keep its alpha channel
my $myImage = GD::Image->newFromPng($path);
$myImage->saveAlpha(1);
# getPixel returns the packed colour value (or palette index) at (x, y)
my $index = $myImage->getPixel($x, $y);
my ($red, $green, $blue) = $myImage->rgb($index);
I found a solution that seems to work properly:
my $index = $myImage->getPixel($x, $y);
will return a colour value. Its range depends on the mode the image was opened in. For a TrueColor image (24-bit RGB, 16,777,216 colours, which is the maximum practically used), the maximum colour value is 16,777,215. When the function is called on a "transparent" pixel, the number returned is over 2 billion, which is not a valid 24-bit RGB colour. So one simple check:
if ($index >= 1<<24) {
#The pixel is transparent
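# for a truecolor image you can also read the alpha directly out of the
# packed value GD returns (libgd packs it as 0xAARRGGBB with alpha 0-127):
#   my $alpha = ($index >> 24) & 0x7F;   # 127 means fully transparent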
}
did the trick for me.
I have this image:
I want to convert this to black and white in small increments. The strange thing is that it just disappears after one increment.
For this line
bw_normal = im2bw(img, 0.33);
I get this:
But for this line:
bw_normal = im2bw(img, 0.32);
The word disappears entirely; this shouldn't happen, right? It only happens with this image; any other image will continue to show up down to 0.1.
This is what I get at 0.32
Just white space. Can anyone please explain this?
im2bw converts the image to a binary (black/white) image. It does this by comparing each pixel's luminance component to the threshold value you provide as the second argument. If the pixel is brighter, it is made white; if it's darker, it is made black.
In your case, the image has only one color (pretty much). This color has a luminance component between 0.32 and 0.33, so if you use 0.33 as threshold, most of the colored portion of the image will be below the threshold and be made black. If you use 0.32, however, most if not all of the image will be above the threshold and thus be made white.
What you experience is expected behavior since your image is basically white background and a single color for the foreground. Once your "increment" reaches that color's luminance, your image is gone.
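As a rough illustration of that thresholding behaviour (a Python/OpenCV sketch, not MATLAB's exact implementation; the file name is a placeholder):
import cv2

img = cv2.imread("text.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
level = 0.32  # im2bw-style threshold in [0, 1]
# pixels brighter than the threshold become white, the rest black
_, bw = cv2.threshold(gray, int(level * 255), 255, cv2.THRESH_BINARY)
cv2.imwrite("bw.png", bw)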
I've converted a colored photo to black and white and bolded the edges. Now I need to convert it back to its original color with the bolded edges. Is there any function in MATLAB which allows me to do so?
Once you remove the colour from an image, there is no way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256; on average, each shade of grey corresponds to 65,536 possible colours, and without the original image there's no way to guess which one it was.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
If what you are trying to do is to apply some filter over the B/W image and then combine that with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example L*a*b* if you need the lightness to be uniformly distributed with respect to human perception of differences) and apply your filter only to the lightness channel.
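For example, a minimal Python/OpenCV sketch of that idea (the file names are placeholders, and the edited B/W image is assumed to have the same size as the original):
import cv2

orig = cv2.imread("original.jpg")
edited_bw = cv2.imread("edited_bw.png", cv2.IMREAD_GRAYSCALE)

lab = cv2.cvtColor(orig, cv2.COLOR_BGR2LAB)
lab[:, :, 0] = edited_bw  # replace only the lightness channel, keep the a/b colour
out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
cv2.imwrite("recolored.jpg", out)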
For image scanning purposes, I'd like a pixel (which I can get from a UIImage) to match a pre-set color to a certain percentage.
Say pink: when I scan the image for pixels that are pink, I want a function to return a percentage of how closely the pixel's RGB value matches my pre-set RGB value. This way I'd like all (well, most) pink pixels to become 'visible' to me, not just exact matches.
Is anyone familiar with such an approach? How would you do something like this?
Thanks in advance.
UPDATE: thank you all for your answers so far. I accepted the answer from Damien Pollet because it helped me further, and I came to the conclusion that calculating the vector difference between two RGB colors works perfectly for me (at the moment). It might need some tweaking over time, but for now I use the following (in Objective-C):
float difference = sqrtf( powf(red1 - red2, 2) + powf(green1 - green2, 2) + powf(blue1 - blue2, 2) );
If this difference is below 85, I accept the color as my target color. Since my algorithm needs no precision, I'm OK with this solution :)
UPDATE 2: on my search for more I found the following URL which might be quite (understatement) useful for you if you are looking for something similar.
http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
I would say just compute the vector difference to your target color, and check that its norm is less than some threshold. I suspect some color spaces are better than others at this, maybe HSL or L*a*b*, since they separate the brightness from the hue itself, and so might represent a small perceptual difference by a smaller color vector.
Also, see this related question
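A minimal sketch of that idea in Python (numpy rather than the question's Objective-C; the 85 threshold comes from the question's update, and the mapping to a percentage is just one possible choice):
import numpy as np

def match_percent(pixel_rgb, target_rgb, max_dist=85.0):
    # Euclidean distance between the two RGB vectors
    diff = np.linalg.norm(np.asarray(pixel_rgb, float) - np.asarray(target_rgb, float))
    # map distance 0 -> 100% and distance >= max_dist -> 0%
    return max(0.0, 1.0 - diff / max_dist) * 100.0

print(match_percent((250, 190, 200), (255, 192, 203)))  # two similar pinks, ~93%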
Scientific answer: you should convert both colors to the LAB color space and calculate the Euclidean distance there. That value is also called deltaE.
The LAB space was developed (using test subjects) for exactly that reason: color pairs with equal distances in this space correspond to equal perceived color differences.
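A small sketch of that computation with OpenCV (this is the classic CIE76 deltaE, i.e. plain Euclidean distance in L*a*b*; later refinements such as CIEDE2000 exist):
import cv2
import numpy as np

def delta_e(rgb1, rgb2):
    # float32 RGB in [0, 1] gives the unscaled L*a*b* conversion in OpenCV
    pair = np.array([[rgb1, rgb2]], dtype=np.float32) / 255.0
    lab = cv2.cvtColor(pair, cv2.COLOR_RGB2LAB)
    return float(np.linalg.norm(lab[0, 0] - lab[0, 1]))

print(delta_e((255, 192, 203), (250, 190, 200)))  # deltaE of two similar pinks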
However, it sounds like you are not looking to match a specific color, but rather a color range (let's say all skin tones). That might require more user input than just a reference color + a deltaE tolerance:
a reference color with 3 tolerances, for hue, saturation and brightness
a cloud of reference color samples
...