Let's say I've got the colour FF0000, which is red. Finding a darker colour is easy: I just type maybe CC instead of the FF. But let's say I've got the colour AE83FC, which is a more complicated colour; how the heck would I find a lighter or darker version of it automatically?
I figured the easy way to do this is to convert my RGB to HSB [Hue, Saturation, Brightness].
How would I do that in Objective-C?
Let's say I've got an RGB value of 1.0, 0.0, 0.0. That's red.
CGFloat r = 1.0;
CGFloat g = 0.0;
CGFloat b = 0.0;
How would I convert that to HSB, transform the colours, and then convert it back to RGB so I can use CGContextSetRGBFillColor?
Are there any HSB functions?
Please help. :)
First, please be aware that three numbers don't describe a color, three numbers together with a colorspace do. RGB isn't a colorspace, it's what's called a color model. There are lots of colorspaces with the RGB model. So ((1,0,0), sRGB) is a different color than ((1,0,0), Adobe RGB). Are you aware of how string encodings work, where a bunch of bytes by itself is not a string? It's a lot like that. It's also similar in that you're kind of asking for trouble whenever you want to look at the component values, because it's an opportunity for messing up the colorspace handling.
Sorry, cannot help myself. Anyway, I'll answer the question as if your original color was ((1,0,0), Generic RGB).
NSColor *color = [NSColor colorWithCalibratedRed:1 green:0 blue:0 alpha:1];
NSLog(@"hue: %g saturation: %g brightness: %g", [color hueComponent], [color saturationComponent], [color brightnessComponent]);
and on the other hand,
NSColor *color = [NSColor colorWithCalibratedHue:h saturation:s brightness:b alpha:1];
NSLog(@"red: %g green: %g blue: %g", [color redComponent], [color greenComponent], [color blueComponent]);
These fooComponent methods do not work with all colors, only those in two specific colorspaces; see the docs. You already know you're good if you created the colors yourself with the methods above. If you have a color of unknown provenance, you can (attempt to) convert it to a colorspace in which you can use those component methods with -[NSColor colorUsingColorSpaceName:].
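Putting those pieces together, here is a minimal sketch of darkening a color by scaling its brightness component (the 0.7 factor is an arbitrary choice for illustration):
NSColor *color = [NSColor colorWithCalibratedRed:1 green:0 blue:0 alpha:1];
// Scale the brightness down to get a darker variant of the same hue.
NSColor *darker = [NSColor colorWithCalibratedHue:[color hueComponent]
                                       saturation:[color saturationComponent]
                                       brightness:[color brightnessComponent] * 0.7
                                            alpha:[color alphaComponent]];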
There is no need to go all the way to HSB for this. If you want to find a darker version of a color, just multiply each R, G and B component by a number between 0 and 1, and you will get the same color, but darker.
Example: half intensity of AE83FC:
{0xAE, 0x83, 0xFC} * 0.5 =
{174, 131, 252} * 0.5 =
{87, 65, 126} =
{0x57, 0x41, 0x7E} => 57417E
In the same way, you can obtain brighter versions by multiplying by anything >1. The value of each component can't be larger than 255, so when that happens, you need to limit it to 255. That means that the color won't be exactly a brighter version of the same color, but probably close enough for your purposes.
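A minimal sketch of that scale-and-clamp step in C (the helper name and the 1.3 factor are just for illustration):
// Scale one 0-255 component by `factor`, clamping the result to 255.
static UInt8 scaleComponent(UInt8 component, CGFloat factor) {
    CGFloat scaled = component * factor;
    return (UInt8)(scaled > 255.0 ? 255.0 : scaled);
}
// A 30% brighter version of AE83FC:
UInt8 r = scaleComponent(0xAE, 1.3);  // 226
UInt8 g = scaleComponent(0x83, 1.3);  // 170
UInt8 b = scaleComponent(0xFC, 1.3);  // 252 * 1.3 clamps to 255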
I want to change the green pixels to gold in the peppers.png image in MATLAB.
How can I do this? Thanks very much for your help.
Introduction
Using the HSV colorspace makes it more intuitive to detect a certain color hue and manipulate it. For further information, read the following answer.
Solution
Given an image in HSV format, each color resides in a certain range. In the peppers image, the hue channel of the green peppers lies roughly in the range [40/360, 180/360]. The color gold can be characterized by a hue value of 0.125 and a 'V' value of 0.8. Therefore, a good way to change green to gold in a given picture is as follows:
Transform the image to HSV.
Locate green colors by finding hue values in the range [40/360, 180/360].
Change the first channel of those pixels to 0.125 and their third channel to 0.8.
Transform back to RGB.
Note: instead of fully changing the third channel of the green pixels to 0.8, it is better to average 0.8 with the originally stored value, to get a more natural effect (see code below).
Code
%reads the image. converts it to hsv.
I = imread('peppers.png');
hsv = rgb2hsv(I);
%locate pixels with green color
GREEN_RANGE = [40,180]/360;
greenAreasMask = hsv(:,:,1)>GREEN_RANGE(1) & hsv(:,:,1) < GREEN_RANGE(2);
%change their hue value to 0.125
HUE_FOR_GOLD = 0.125;
V_FOR_GOLD = 0.8;
goldHsv1 = hsv(:,:,1);
goldHsv1(greenAreasMask)=HUE_FOR_GOLD;
goldHsv3 = hsv(:,:,3);
goldHsv3(greenAreasMask)=V_FOR_GOLD;
newHsv = hsv;
newHsv(:,:,1) = goldHsv1;
%average the original V values with the gold V value for a natural effect
newHsv(:,:,3) = (newHsv(:,:,3) + goldHsv3) / 2;
%transform back to RGB
res = hsv2rgb(newHsv);
Result
As you can see, the green pixels became more goldish.
There is room for improvement, but I think this would be a good start for you. To improve the result you can modify the green and gold HSV values, and use morphological operations on greenAreasMask.
I need to change a white-skinned face to a dark-skinned face...
For example, an American white face to an African face (i.e. the color tone)...
I picked the color value of a pixel with a digital color meter; it gives the RGB value [red=101, green=63, blue=43] for dark skin, and for white skin it gives the RGB value [red=253, green=210, blue=176]...
Then I set those values in my code, but it gives the wrong result...
Here is my code...
-(UIImage *)customBlackFilterOriginal
{
    CGImageRef imgSource = self.duplicateImage.image.CGImage;
    CFDataRef m_DataRef1 = CGDataProviderCopyData(CGImageGetDataProvider(imgSource));
    UInt8 *dataOriginal = (UInt8 *)CFDataGetBytePtr(m_DataRef1);
    double lengthSource = CFDataGetLength(m_DataRef1);
    NSLog(@"length::%f", lengthSource);
    int redPixel;
    int greenPixel;
    int bluePixel;
    for (int index = 0; index < lengthSource; index += 4)
    {
        dataOriginal[index] = dataOriginal[index];
        dataOriginal[index+1] = 101;
        dataOriginal[index+2] = 63;
        dataOriginal[index+3] = 43;
    }
    NSUInteger width = CGImageGetWidth(imgSource);
    size_t height = CGImageGetHeight(imgSource);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imgSource);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imgSource);
    size_t bytesPerRow = CGImageGetBytesPerRow(imgSource);
    NSLog(@"the w:%u H:%lu", width, height);
    CGColorSpaceRef colorspace = CGImageGetColorSpace(imgSource);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imgSource);
    CFDataRef newData = CFDataCreate(NULL, dataOriginal, lengthSource);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);
    CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
    return [UIImage imageWithCGImage:newImg];
}
Please share any ideas about the above color change...
What mistake did I make in the code?
I am not an iPhone programmer so I can't test anything, but some things look odd in your code:
Pixel size
When reading your data, you seem to assume you have a 32-bit ARGB picture; did you validate that this is the case?
CFDataGetBytePtr
According to the docs, it "returns a read-only pointer to the bytes of a CFData object". Are you sure you're not looking for CFDataGetBytes, which "copies the byte contents of a CFData object to an external buffer"? In that case you'll have to allocate your buffer to hold width * height * bpp bytes. Once you have this copy, you can manipulate it any way you want to create the new picture.
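A minimal sketch of that copy (untested; the variable names follow the question's code):
// Copy the pixel bytes into a writable buffer instead of
// mutating the read-only pointer from CFDataGetBytePtr.
CFIndex length = CFDataGetLength(m_DataRef1);
UInt8 *buffer = (UInt8 *)malloc(length);
CFDataGetBytes(m_DataRef1, CFRangeMake(0, length), buffer);
// ...modify buffer here, then hand it to CFDataCreate / CGImageCreate...
free(buffer);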
Pixel Selection
According to your question, I understand that you want to change skin color from white to black. Your current code iterates over every pixel and changes its color. You should instead evaluate the "distance" between the pixel color and the color you're looking for, and process the pixel only if that distance is below a certain threshold. It might be easier to perform the operation in HSV than by dealing with RGB colors.
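For illustration, a rough sketch of such a distance test in C, keeping the question's assumption that bytes index+1 to index+3 hold R, G, B; the threshold of 60.0 is a made-up starting point you would need to tune:
// Euclidean distance in RGB between a pixel and a reference color.
static BOOL isCloseToColor(UInt8 r, UInt8 g, UInt8 b,
                           UInt8 refR, UInt8 refG, UInt8 refB)
{
    CGFloat dr = (CGFloat)r - refR;
    CGFloat dg = (CGFloat)g - refG;
    CGFloat db = (CGFloat)b - refB;
    return sqrt(dr * dr + dg * dg + db * db) < 60.0; // assumed threshold
}
// Inside the pixel loop: only recolor pixels near the white skin tone.
if (isCloseToColor(dataOriginal[index+1], dataOriginal[index+2], dataOriginal[index+3],
                   253, 210, 176))
{
    dataOriginal[index+1] = 101;
    dataOriginal[index+2] = 63;
    dataOriginal[index+3] = 43;
}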
// Make half-transparent grey, the background color for the layer
UIColor *Light_Grey = [UIColor colorWithRed:110/255.0
                                      green:110/255.0
                                       blue:110/255.0
                                      alpha:0.5];
// Get a CGColor object with the same color values
CGColorRef cgLight_Grey = [Light_Grey CGColor];
[boxLayer setBackgroundColor:cgLight_Grey];
// Create a UIImage
UIImage *layerImage = [UIImage imageNamed:@"Test.png"];
// Get the underlying CGImage
CGImageRef image = [layerImage CGImage];
// Put the CGImage on the layer
[boxLayer setContents:(id)image];
Consider the above sample code segment.
UIColor *Light_Grey is set with an alpha value of 0.5. My question is: is there any way I can set the alpha value of CGImageRef image?
The reason I ask is that even though the alpha value of boxLayer is 0.5, any images set on top of boxLayer seem to have a default alpha value of 1, which covers up anything lying directly underneath them.
Hope somebody knowledgeable about this can help.
It looks like you can make a copy using CGImageCreate and use the decode array to rescale the alpha (e.g. 0.0-0.5):
decode
The decode array for the image. If you do not want to allow remapping of the image's color values, pass NULL for the decode array. For each color component in the image's color space (including the alpha component), a decode array provides a pair of values denoting the upper and lower limits of a range. For example, the decode array for a source image in the RGB color space would contain six entries total, consisting of one pair each for red, green, and blue. When the image is rendered, Quartz uses a linear transform to map the original component value into a relative number within your designated range that is appropriate for the destination color space.
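A sketch of what that might look like with the question's `image` and `boxLayer` (untested; it assumes the source image's components are ordered RGBA, with alpha last):
// Decode array: keep RGB untouched, remap alpha into 0.0-0.5.
CGFloat decode[] = { 0.0, 1.0,    // red
                     0.0, 1.0,    // green
                     0.0, 1.0,    // blue
                     0.0, 0.5 };  // alpha
CGImageRef fadedImage = CGImageCreate(CGImageGetWidth(image),
                                      CGImageGetHeight(image),
                                      CGImageGetBitsPerComponent(image),
                                      CGImageGetBitsPerPixel(image),
                                      CGImageGetBytesPerRow(image),
                                      CGImageGetColorSpace(image),
                                      CGImageGetBitmapInfo(image),
                                      CGImageGetDataProvider(image),
                                      decode,
                                      true,
                                      kCGRenderingIntentDefault);
[boxLayer setContents:(id)fadedImage];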
I have an array of CGPoints, and I'd like to fill the whole screen with colours, the colour of each pixel depending on the total distance to each of the points in the array. The natural way to do this is to, for each pixel, compute the total distance, and turn that into a colour. Questions follow:
1) How can I colour a single pixel in Quartz? I've been thinking of making 1 by 1 rectangles.
2) Are there better, more efficient ways to achieve this effect?
You don't need to draw it pixel by pixel. You can use radial gradients:
CGPoint points[count];
/* set the points */
CGContextSaveGState(context);
CGContextBeginTransparencyLayer(context, NULL);
CGContextSetAlpha(context, 0.5);
CGContextSetBlendMode(context, kCGBlendModeXOR);
CGContextClipToRect(context, myFrame);
CGFloat radius = myFrame.size.height + myFrame.size.width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CFArrayRef colors;
const CGFloat *locations;
/* create the colors and locations for the gradient */
for (NSUInteger i = 0; i < count; i++) {
    CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, colors, locations);
    CGContextDrawRadialGradient(context, gradient, points[i], 0.0, points[i], radius, 0);
    CGGradientRelease(gradient); /* avoid leaking one gradient per point */
}
CGColorSpaceRelease(colorSpace);
CGContextSetAlpha(context, 1.0);
CGContextEndTransparencyLayer(context);
CGContextRestoreGState(context);
Most of the code is clear, but here are some points:
kCGBlendModeXOR basically adds the values of the background and foreground if both have the same alpha and the alpha is not 1.0. You might also be able to use kCGBlendModeColorBurn without needing to play with transparency. Check the reference.
radius is big enough to cover the whole frame. You can set a different value.
Note that locations values should be between 0.0 and 1.0. You need to calibrate your color values depending on the radius.
This has been asked before:
How do I draw a point using Core Graphics?
In Quartz, a 1x1 rectangle would do what you want, but it is certainly not very efficient.
You are better off creating a memory buffer, calculating your point distances, and writing into the array directly within your processing loop. Then to display the result, simply create a CGImage which you can then render into your screen context.
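A rough sketch of that approach (the 320x480 size and the distance-to-gray scaling are placeholder choices; `points` and `count` come from your array):
// Fill an RGBA buffer pixel by pixel from the total distance to all points.
size_t width = 320, height = 480;   // assumed screen size
size_t bytesPerRow = width * 4;
uint8_t *buffer = calloc(height, bytesPerRow);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        CGFloat total = 0;
        for (NSUInteger i = 0; i < count; i++) {
            CGFloat dx = x - points[i].x, dy = y - points[i].y;
            total += sqrt(dx * dx + dy * dy);
        }
        // Map the total distance to a gray value; the scaling is arbitrary.
        uint8_t v = (uint8_t)fmin(total / (CGFloat)count, 255.0);
        uint8_t *px = buffer + y * bytesPerRow + x * 4;
        px[0] = v; px[1] = v; px[2] = v; px[3] = 255;
    }
}
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                            space, kCGImageAlphaPremultipliedLast);
CGImageRef result = CGBitmapContextCreateImage(bitmap);
// Draw `result` into your screen context, then release the CG objects
// and free(buffer).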
I'm looking for a straightforward way to take a color in RGB, grabbed from a tool like Photoshop, and convert it to a UIColor. Since UIColor uses a normalized range of 0.0 to 1.0 for each component, I'm not sure how this is done.
Thanks for the solution.
Your values are between 0 and 255. Use them to create a UIColor:
// r, g, b, a hold your Photoshop values, each between 0 and 255
float r, g, b, a;
UIColor *color = [UIColor colorWithRed:r / 255.f
                                 green:g / 255.f
                                  blue:b / 255.f
                                 alpha:a / 255.f];
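For example, the colour AE83FC from the first question becomes [UIColor colorWithRed:0xAE/255.f green:0x83/255.f blue:0xFC/255.f alpha:1.0].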