A question about the alpha values of images - iPhone

// Make half-transparent grey, the background color for the layer
UIColor *Light_Grey = [UIColor colorWithRed:110/255.0
                                      green:110/255.0
                                       blue:110/255.0
                                      alpha:0.5];
// Get a CGColor object with the same color values
CGColorRef cgLight_Grey = [Light_Grey CGColor];
[boxLayer setBackgroundColor:cgLight_Grey];
// Create a UIImage
UIImage *layerImage = [UIImage imageNamed:@"Test.png"];
// Get the underlying CGImage
CGImageRef image = [layerImage CGImage];
// Put the CGImage on the layer
[boxLayer setContents:(id)image];
Consider the above sample code segment.
UIColor *Light_Grey is set with an alpha value of 0.5. My question is: is there any way I can set the alpha value of CGImageRef image?
The reason I ask is that even though the alpha value of boxLayer is 0.5, any image set on top of boxLayer seems to have a default alpha value of 1, which covers up anything lying directly underneath it.
Hope somebody knowledgeable about this can help.

It looks like you can make a copy using CGImageCreate and use the decode array to rescale the alpha (e.g. to the range 0.0-0.5). From the documentation of the decode parameter:
decode
The decode array for the image. If you do not want to allow remapping of the image's color values, pass NULL for the decode array. For each color component in the image's color space (including the alpha component), a decode array provides a pair of values denoting the upper and lower limits of a range. For example, the decode array for a source image in the RGB color space would contain six entries total, consisting of one pair each for red, green, and blue. When the image is rendered, Quartz uses a linear transform to map the original component value into a relative number within your designated range that is appropriate for the destination color space.

Related

How to apply binary mask to remove background from skin lesion colour image

The figure output just displays the binary mask image; however, I am trying to get just the foreground of the colour image, with the background being black.
original = imread('originalImage.jpg');
binaryImage = imread('binaryImage.png');
mask = cat(3,binaryImage, binaryImage, binaryImage);
output = mask.*original;
figure,imshow(output);
the binary mask
The original image
The most likely issue is that binaryImage is an image with values of 0 for background and 255 for foreground. Multiplying a color image with values in the range [0,255] by such a mask leads to overflow. Since the input images are uint8, overflow saturates to 255. Thus, everywhere the mask is white, you get white colors.
The solution is to convert the images to double:
output = double(mask)/255 .* double(original)/255;
or to truly binarize the mask image:
output = (mask>0) .* original;

Transforming a Stroked CAShapeLayer

I have a CAShapeLayer which contains a CGMutablePath with a stroke drawn around it. In my app, I transform this CAShapeLayer to increase / decrease its size at certain times. I'm noticing that when I transform the CAShapeLayer, the stroke gets transformed as well. Ideally I'd like to keep the lineWidth of the stroke at 3 at all times, even when the CAShapeLayer is transformed.
I tried shutting off the stroke before transforming and re-adding it afterwards, but it didn't work:
subLayerShapeLayer.lineWidth = 0;
subLayerShapeLayer.strokeColor = nil;
self.layer.sublayerTransform = CATransform3DScale(self.layer.sublayerTransform, graphicSize.width / self.graphic.size.width, graphicSize.height / self.graphic.size.height, 1);
shapeLayer.strokeColor = [UIColor colorWithRed:0 green:0 blue:0 alpha:1].CGColor;
shapeLayer.lineWidth = 3;
Does anyone know how I might be able to accomplish this task? Seems as though it should be able to redraw the stroke after transforming somehow.
Transform the CGPath itself and not its drawn representation (the CAShapeLayer).
Have a close look at CGPathCreateMutableCopyByTransformingPath in the CGPath Reference:
CGPathCreateMutableCopyByTransformingPath
Creates a mutable copy of a graphics path transformed by a transformation matrix.
CGMutablePathRef CGPathCreateMutableCopyByTransformingPath(
    CGPathRef path,
    const CGAffineTransform *transform
);
Parameters
path
The path to copy.
transform
A pointer to an affine transformation matrix, or NULL if no transformation is needed. If specified, Quartz applies the transformation to all elements of the new path.
Return Value
A new, mutable copy of the specified path transformed by the transform parameter. You are responsible for releasing this object.
Availability
Available in iOS 5.0 and later.
Declared In
CGPath.h

How change the white skin face to dark skin face in iOS?

I need to change a white skin face to a dark skin face...
For example, an American white face to an African face (i.e. the color tone)...
I picked the color value of a pixel with the Digital Color Meter; it gives the RGB value [red=101, green=63, blue=43] for dark skin, and for white skin it gives [red=253, green=210, blue=176]...
Then I set those values in my code, but it gives the wrong result...
Here is my code...
- (UIImage *)customBlackFilterOriginal
{
    CGImageRef imgSource = self.duplicateImage.image.CGImage;
    CFDataRef m_DataRef1 = CGDataProviderCopyData(CGImageGetDataProvider(imgSource));
    UInt8 *dataOriginal = (UInt8 *)CFDataGetBytePtr(m_DataRef1);
    double lengthSource = CFDataGetLength(m_DataRef1);
    NSLog(@"length::%f", lengthSource);
    for (int index = 0; index < lengthSource; index += 4)
    {
        dataOriginal[index] = dataOriginal[index];
        dataOriginal[index + 1] = 101;
        dataOriginal[index + 2] = 63;
        dataOriginal[index + 3] = 43;
    }
    NSUInteger width = CGImageGetWidth(imgSource);
    size_t height = CGImageGetHeight(imgSource);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imgSource);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imgSource);
    size_t bytesPerRow = CGImageGetBytesPerRow(imgSource);
    NSLog(@"the w:%lu H:%lu", (unsigned long)width, (unsigned long)height);
    CGColorSpaceRef colorspace = CGImageGetColorSpace(imgSource);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imgSource);
    CFDataRef newData = CFDataCreate(NULL, dataOriginal, lengthSource);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);
    CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
    return [UIImage imageWithCGImage:newImg];
}
Please share any ideas about the above color change....
What mistake did I make in the code?..
I am not an iPhone programmer so I can't test anything, but some things are odd in your code:
Pixel size
When reading your data, you seem to assume you have a 32-bit ARGB picture; did you validate that this is the case?
CFDataGetBytePtr
According to the docs, it "Returns a read-only pointer to the bytes of a CFData object." Are you sure you're not looking for CFDataGetBytes, which "Copies the byte contents of a CFData object to an external buffer"? In that case you'll have to allocate your buffer to contain width * height * bpp. Once you have this copy, you can manipulate it any way you want to create the new picture.
Pixel Selection
According to your question, you want to change skin color from white to black. Your current code iterates over every pixel and changes its color unconditionally. You should instead evaluate the "distance" between each pixel's color and the tone you're looking for, and only process the pixel if that distance is below a certain threshold. It might be easier to perform the operation in HSV than by dealing with RGB colors.

Drawing single pixel in Quartz

I have an array of CGPoints, and I'd like to fill the whole screen with colours, the colour of each pixel depending on the total distance to each of the points in the array. The natural way to do this is to, for each pixel, compute the total distance, and turn that into a colour. Questions follow:
1) How can I colour a single pixel in Quartz? I've been thinking of making 1 by 1 rectangles.
2) Are there better, more efficient ways to achieve this effect?
You don't need to draw it pixel by pixel. You can use radial gradients:
CGPoint points[count];
/* set the points */
CGContextSaveGState(context);
CGContextBeginTransparencyLayer(context, NULL);
CGContextSetAlpha(context, 0.5);
CGContextSetBlendMode(context, kCGBlendModeXOR);
CGContextClipToRect(context, myFrame);
CGFloat radius = myFrame.size.height + myFrame.size.width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CFArrayRef colors;
const CGFloat *locations;
/* create the colors and locations for the gradient */
for (NSUInteger i = 0; i < count; i++) {
    CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, colors, locations);
    CGContextDrawRadialGradient(context, gradient, points[i], 0.0, points[i], radius, 0);
    CGGradientRelease(gradient);
}
CGColorSpaceRelease(colorSpace);
CGContextSetAlpha(context, 1.0);
CGContextEndTransparencyLayer(context);
CGContextRestoreGState(context);
Most of the code is clear, but here are some points:
kCGBlendModeXOR basically adds the values of background and foreground if both have the same alpha and the alpha is not 1.0. You might also be able to use kCGBlendModeColorBurn without the need to play with transparency. Check the reference.
radius is big enough to cover the whole frame. You can set a different value.
Note that locations values should be between 0.0 and 1.0. You need to calibrate your color values depending on the radius.
This has been asked before:
How do I draw a point using Core Graphics?
From Quartz, a 1x1 rectangle would do what you want. But it is certainly not very efficient.
You are better off creating a memory buffer, calculating your point distances, and writing into the array directly within your processing loop. Then to display the result, simply create a CGImage which you can then render into your screen context.

Objective-C RGB to HSB

Let's say I've got the colour FF0000, which is red. Finding a darker colour is easy, I just type maybe CC instead of the FF, but let's say I've got the colour AE83FC, which is a complicated colour, how the heck would I find a lighter or darker version of it automatically?
I figured the easy way to do this is to convert my RGB to HSB [Hue, Saturation, Brightness]
How would I do that in Objective-C?
Let's say I've got a RGB which is: 1.0, 0.0, 0.0. That's red.
CGFloat r = 1.0;
CGFloat g = 0.0;
CGFloat b = 0.0;
How would I convert that to HSB, transform the colors, and then go back to RGB so I can use CGContextSetRGBFillColor?
Are there any HSB functions?
Please help. :)
First, please be aware that three numbers don't describe a color, three numbers together with a colorspace do. RGB isn't a colorspace, it's what's called a color model. There are lots of colorspaces with the RGB model. So ((1,0,0), sRGB) is a different color than ((1,0,0), Adobe RGB). Are you aware of how string encodings work, where a bunch of bytes by itself is not a string? It's a lot like that. It's also similar in that you're kind of asking for trouble whenever you want to look at the component values, because it's an opportunity for messing up the colorspace handling.
Sorry, cannot help myself. Anyway, I'll answer the question as if your original color was ((1,0,0), Generic RGB).
NSColor *color = [NSColor colorWithCalibratedRed:1 green:0 blue:0 alpha:1];
NSLog(#"hue! %g saturation! %g brightness! %g, [color hueComponent], [color saturationComponent], [color brightnessComponent]);
and on the other hand,
NSColor *color = [NSColor colorWithCalibratedHue:h saturation:s brightness:b alpha:1];
NSLog(#"red! %g green! %g blue! %g, [color redComponent], [color greenComponent], [color blueComponent]);
These fooComponent methods do not work with all colors, only those in two specific colorspaces; see the docs. You already know you're good if you created the colors yourself with the methods above. If you have a color of unknown provenance, you can (attempt to) convert it to a colorspace in which you can use those component methods with -[NSColor colorUsingColorSpaceName:].
There is no need to go all the way to HSB for this. If you want a darker version of a color, just multiply each of the R, G, and B components by a number between 0 and 1, and you will get the same color, but darker.
Example: half intensity of AE83FC:
{0xAE, 0x83, 0xFC} * 0.5 =
{174, 131, 252} * 0.5 =
{87, 65, 126} =
{0x57, 0x41, 0x7E} => 57417E
In the same way, you can obtain brighter versions by multiplying by anything > 1. The value of each component can't be larger than 255, so when that happens you need to clamp it to 255. That means the color won't be exactly a brighter version of the same color, but probably close enough for your purposes.