CImg: what kind of values are required by this constructor?

In the documentation (http://cimg.eu/reference/structcimg__library_1_1CImg.html#a24f3b43daa4444b94a973c2c3fff82c5), you can read that constructor #7 requires an array of values to fill the image:
values = Pointer to the input memory buffer.
Note that I'm working with a 2D RGB image (so, a "normal"/"common" image).
Thus, I filled a vector (more or less an array) with N×3 values, where N is the number of pixels and 3 accounts for red, green, and blue. I set the first value to 0, the second to 0, the third to 255, and repeated these three assignments N times. That's why my vector, named w, looks like this: {0, 0, 255, 0, 0, 255, ...}.
I wrote this constructor call: cimg_library::CImg<unsigned char>(&w[0], width, height, 2, 3); to say that there are 3 channels and a depth of 2 (since I use 2D), and to pass my values (width, height and pixels).
I should obtain an entirely blue image, but it's yellow. Why? Did I use the vector incorrectly?

Unlike most formats, which are stored "band-interleaved by pixel", i.e. RGBRGBRGB..., the data in a CImg are stored "band-interleaved by plane", i.e. all the red components come first, then all the green components, then all the blue ones, so it looks like RRRGGGBBB. Also note that in this constructor the fourth argument is the depth (the z-dimension, which is 1 for a 2D image) and the fifth is the number of channels, so you want a depth of 1, not 2. This is described in the CImg documentation.
So, your code would need to be like this:
#include <vector>
#include "CImg.h"

using namespace std;
using namespace cimg_library;

int main()
{
    const int width  = 3;
    const int height = 2;

    // 1st row - red, green, blue
    // 2nd row - cyan, magenta, yellow
    // 6 pixels
    // Red plane first  - red, green, blue, cyan, magenta, yellow
    //   255,0,0,0,255,255
    // Green plane next - red, green, blue, cyan, magenta, yellow
    //   0,255,0,255,0,255
    // Blue plane last  - red, green, blue, cyan, magenta, yellow
    //   0,0,255,255,255,0
    vector<unsigned char> w{
        255, 0, 0, 0, 255, 255,
        0, 255, 0, 255, 0, 255,
        0, 0, 255, 255, 255, 0
    };

    // Note: the depth is 1 (not 2) for a 2D image; the fifth argument
    // is the number of channels
    CImg<unsigned char> image(w.data(), width, height, 1, 3);
    image.save_pnm("result.pnm");
}
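Alternatively, if you would rather keep your vector interleaved as RGBRGB... (as in the question), a commonly used CImg idiom is to load the buffer with the channel index as the fastest-varying axis and then rearrange it with permute_axes(). A sketch of that idea, reusing the questioner's original interleaved vector w:
// Interpret the interleaved buffer as a 3 x width x height volume...
CImg<unsigned char> interleaved(w.data(), 3, width, height, 1, false);
// ...then move the channel axis to the spectrum position
interleaved.permute_axes("yzcx");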
Or, if you simply want a solid blue image, the easiest way is probably to instantiate a simple 1x1 blue image using an initialiser for the one pixel, then to resize it:
// Instantiate a 1x1 RGB image initialised to blue (last three values)
CImg<unsigned char> blue(1,1,1,3,0,0,255);
// Resize to larger image
blue.resize(width,height);
Another method might be:
// Create RGB image and fill with Blue
CImg<unsigned char> image(width,height,1,3);
image.get_shared_channel(0).fill(0);
image.get_shared_channel(1).fill(0);
image.get_shared_channel(2).fill(255);
Another method might be:
CImg<unsigned char> image(256,256,1,3);
// for all pixels x,y in image
cimg_forXY(image,x,y) {
    image(x,y,0,0) = 0;
    image(x,y,0,1) = 0;
    image(x,y,0,2) = 255;
}
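A further variation might be to paint the whole canvas with draw_rectangle() (a sketch, assuming the width and height variables from above):
// Create an RGB image initialised to 0, then paint every pixel blue
const unsigned char blue[] = { 0, 0, 255 };
CImg<unsigned char> image(width, height, 1, 3, 0);
image.draw_rectangle(0, 0, width - 1, height - 1, blue);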

Related

How to replace colors with textures via Shaders in Unity 3D

I have a problem. I tried to search but I can't find what I want.
I have a texture. This texture has blue, green, and black colors. They are masks; it is actually a face texture. I want to replace them, e.g. the blue color will be replaced with my eye texture in Unity, and the green color with a face texture. How can I write this shader? I searched, but I only found color-changing shaders :( Thanks.
The way I interpret your question is that you want a shader where one texture serves as a mask, blending between three other textures. I'm assuming that this is for character customization, to stitch different pieces of a face together.
In the fragment (or surf) function, sample your 3 textures and the mask:
fixed4 face = tex2D(_FaceTex, i.uv); // Green channel of the mask
fixed4 eyes = tex2D(_EyeTex, i.uv); // Blue channel of the mask
fixed4 mouth = tex2D(_MouthTex, i.uv); // No mask value (black)
fixed4 mask = tex2D(_MaskTex, i.uv);
Then, you need to blend them together using the mask. Let's assume that black in the mask represents the background color, and then we interpolate the other textures in on top.
fixed4 col = lerp(mouth, eyes, mask.b);
Then we can interpolate between the resulting color and our third value:
col = lerp(col, face, mask.g);
You could repeat this once more with the red channel, for a fourth texture. Of course, this assumes that you use pure red, green, or blue in the mask. There are ways to use a more specific color as a key too; for instance, you can use the distance between the mask color and some reference color:
fixed4 eyeMaskColor = fixed4(0.5, 0.5, 1, 1);
half t = 1 - saturate(length(mask - eyeMaskColor));
In this case, t is the lerp factor you use to blend in the texture. The saturate function clamps the value to the range [0, 1]. If the mask color is the same as eyeMaskColor, the length of the vector between them is 0 and the expression evaluates to 1.
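To see the same arithmetic outside shader code, here is a minimal self-contained C++ sketch of the keying computation on one pixel (Color, lerp, saturate and length are stand-ins for the HLSL built-ins, not part of any Unity API):
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Color { float r, g, b, a; };

static float saturate(float x) { return std::min(std::max(x, 0.0f), 1.0f); }

static Color lerp(const Color &x, const Color &y, float t) {
    return { x.r + (y.r - x.r) * t, x.g + (y.g - x.g) * t,
             x.b + (y.b - x.b) * t, x.a + (y.a - x.a) * t };
}

static float length(const Color &v) {
    return std::sqrt(v.r * v.r + v.g * v.g + v.b * v.b + v.a * v.a);
}

int main() {
    // sample colors standing in for the texture reads
    Color mouth = {1, 0, 0, 1}, eyes = {0, 1, 0, 1}, face = {0, 0, 1, 1};
    Color mask  = {0, 0, 1, 1};            // pure blue in the mask -> eyes

    Color col = lerp(mouth, eyes, mask.b); // blend by the blue channel
    col = lerp(col, face, mask.g);         // then by the green channel

    // distance-based keying against a specific reference color
    Color key = {0.5f, 0.5f, 1, 1};
    Color d = { mask.r - key.r, mask.g - key.g, mask.b - key.b, mask.a - key.a };
    float t = 1 - saturate(length(d));
    std::printf("col = (%g, %g, %g), t = %g\n", col.r, col.g, col.b, t);
    return 0;
}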

Accurately get a color from pixel on screen and convert its color space

I need to get a color from a pixel on the screen and convert its color space. The problem I have is that the color values are not the same when comparing the values against the Digital Color Meter app.
// create a 1x1 image at the mouse position
if let image: CGImage = CGDisplayCreateImage(disID, rect: CGRect(x: x, y: y, width: 1, height: 1))
{
    let bitmap = NSBitmapImageRep(cgImage: image)
    // get the color from the bitmap and convert its colorspace to sRGB
    var color = bitmap.colorAt(x: 0, y: 0)!
    color = color.usingColorSpace(.sRGB)!
    // print the RGB values
    let red = color.redComponent, green = color.greenComponent, blue = color.blueComponent
    print("r:", Int(red * 255), " g:", Int(green * 255), " b:", Int(blue * 255))
}
My code (converted to sRGB): 255, 38, 0
Digital Color Meter (sRGB): 255, 4, 0
How do you get a color from a pixel on the screen with the correct color space values?
Update:
If you don’t convert the color’s colorspace to anything (or convert it to calibratedRGB), the values match the Digital Color Meter’s values when it’s set to “Display native values”.
My code (not converted): 255, 0, 1
Digital Color Meter (set to: Display native values): 255, 0, 1
So why, when the color’s values match the native values in the DCM app, does converting the color to sRGB and comparing it to the DCM’s values (in sRGB) not match? I also tried converting to other colorspaces, and they’re always different from the DCM.
OK, I can tell you how to fix it/match DCM, you'll have to decide if this is correct/a bug/etc.
It seems the color returned by colorAt() has the same component values as the bitmap's pixel but a different color space - rather than the original device color space it is a generic RGB one. We can "correct" this by building a color in the bitmap's space:
let color = bitmap.colorAt(x: 0, y: 0)!

// need a pointer to a C-style array of CGFloat
let compCount = color.numberOfComponents
let comps = UnsafeMutablePointer<CGFloat>.allocate(capacity: compCount)
defer { comps.deallocate() }

// get the components
color.getComponents(comps)

// construct a new color in the device/bitmap space with the same components
let correctedColor = NSColor(colorSpace: bitmap.colorSpace,
                             components: comps,
                             count: compCount)

// convert to sRGB
let sRGBcolor = correctedColor.usingColorSpace(.sRGB)!
I think you'll find that the values of correctedColor track DCM's native values, and those of sRGBcolor track DCM's sRGB values.
Note that we are constructing a color in the device space, not converting a color to the device space.
HTH

How to change green pixels to gold color in peppers image with matlab?

I want to change the green pixels to a gold color in the peppers.png image in Matlab.
How can I do this task? Thanks very much for your help.
Introduction
Using the HSV colorspace gives better intuition for detecting a certain color hue and manipulating it. For further information, read the following answer.
Solution
Given an image in HSV format, each color resides in a certain range. In the peppers image, the hue channel of the green peppers lies in the range [40/360, 180/360] (more or less). The color gold can be characterized by a hue value of 0.125 and a 'V' value of 0.8. Therefore, a good way to change green to gold in a picture is as follows:
1. transform the image to HSV.
2. locate green colors by finding hue values in the range [40/360, 180/360].
3. change their first (hue) channel to 0.125 and their third ('V') channel to 0.8.
4. transform back to RGB.
*comment: instead of fully changing the third channel of the green pixels to 0.8, it is better to average 0.8 with the originally stored value, to get a more natural effect (see code below).
Code
% read the image and convert it to HSV
I = imread('peppers.png');
hsv = rgb2hsv(I);

% locate pixels with green color
GREEN_RANGE = [40,180]/360;
greenAreasMask = hsv(:,:,1) > GREEN_RANGE(1) & hsv(:,:,1) < GREEN_RANGE(2);

% change their hue value to 0.125 and prepare a 'V' plane for gold
HUE_FOR_GOLD = 0.125;
V_FOR_GOLD = 0.8;
goldHsv1 = hsv(:,:,1);
goldHsv1(greenAreasMask) = HUE_FOR_GOLD;
goldHsv3 = hsv(:,:,3);
goldHsv3(greenAreasMask) = V_FOR_GOLD;
newHsv = hsv;
newHsv(:,:,1) = goldHsv1;
% average the original 'V' values with the gold 'V' plane for a more natural effect
newHsv(:,:,3) = (newHsv(:,:,3) + goldHsv3) / 2;

% transform back to RGB
res = hsv2rgb(newHsv);
Result
As you can see, the green pixels became more goldish.
There is room for improvement, but I think this is a good start for you. To improve the result, you can tune the green and gold HSV values and apply morphological operations to greenAreasMask.
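For comparison, here is a rough C++ sketch of the same idea using CImg (whose HSV conversion stores hue in degrees, so the green range becomes [40, 180]), assuming peppers.png is available locally:
#include "CImg.h"
using namespace cimg_library;

int main()
{
    // load as float so S and V can hold fractional values after conversion
    CImg<float> img = CImg<float>("peppers.png").RGBtoHSV();
    cimg_forXY(img, x, y) {
        const float hue = img(x, y, 0, 0);         // degrees, [0, 360)
        if (hue >= 40 && hue <= 180) {             // the "green" range
            img(x, y, 0, 0) = 0.125f * 360;        // gold hue
            // average V with 0.8 for a more natural effect
            img(x, y, 0, 2) = (img(x, y, 0, 2) + 0.8f) / 2;
        }
    }
    CImg<unsigned char> res = img.HSVtoRGB();
    res.save_pnm("gold_peppers.pnm");
    return 0;
}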

GreyScale buffer with CImg

my goal is to create an unsigned char buffer filled with 0-255 greyscale values.
each cell in the buffer is 0-255 (no RGB).
i would like to extract from a greyscale pic only one parameter (the 0-255 grey level).
how can I do that with CImg?
thanks,
jose.
You can compute the luminance of your input sRGB image, like this, in CImg:
CImg<unsigned char> luminance = RGB.get_RGBtoYCbCr().channel(0);
where RGB is the name of your RGB image.
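As a complete, minimal sketch (the filenames here are only placeholders):
#include "CImg.h"
using namespace cimg_library;

int main()
{
    // load an RGB image (the filename is illustrative)
    CImg<unsigned char> RGB("input.png");
    // luminance is the Y channel of YCbCr: one 0-255 value per pixel
    CImg<unsigned char> luminance = RGB.get_RGBtoYCbCr().channel(0);
    // luminance.data() now points to a width*height buffer of 0-255 values
    luminance.save_pnm("luminance.pnm");
    return 0;
}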

How to get the real RGBA or ARGB color values without premultiplied alpha?

I'm creating a bitmap context using CGBitmapContextCreate with the kCGImageAlphaPremultipliedFirst option.
I made a 5 x 5 test image with some primary colors (pure red, green, blue, white, black) and some mixed colors (i.e. purple), combined with some alpha variations. Every time the alpha component is not 255, the color value is wrong.
I found that I could re-calculate the color when I do something like:
almostCorrectRed = wrongRed * (255 / alphaValue);
almostCorrectGreen = wrongGreen * (255 / alphaValue);
almostCorrectBlue = wrongBlue * (255 / alphaValue);
But the problem is that my calculations are sometimes off by 3 or even more. For example, I get a value of 242 instead of 245 for green, and I am 100% sure that it must be exactly 245. Alpha is 128.
Then, for the exact same color just with different alpha opacity in the PNG bitmap, I get alpha = 255 and green = 245 as it should be.
If alpha is 0, then red, green and blue are also 0. Here all data is lost and I can't figure out the color of the pixel.
How can I avoid or undo this alpha premultiplication altogether so that I can modify pixels in my image based on the true R, G, B pixel values as they were when the image was created in Photoshop? How can I recover the original values for R, G, B and A?
Background info (probably not necessary for this question):
What I'm doing is this: I take a UIImage and draw it to a bitmap context in order to perform some simple image manipulation algorithms on it, shifting the color of each pixel depending on what color it was before. Nothing really special. But my code needs the real colors. When a pixel is transparent (meaning it has alpha less than 255), my algorithm shouldn't care about this; it should just modify R, G, B as needed while alpha remains at whatever it is. Sometimes it will shift alpha up or down too, but I see them as two separate things: alpha controls transparency, while R, G, B control the color.
This is a fundamental problem with premultiplication in an integral type:
245 * (128/255) = 122.98
122.98 truncated to an integer = 122
122 * (255/128) = 243.046875
I'm not sure why you're getting 242 instead of 243, but this problem remains either way, and it gets worse the lower the alpha goes.
The solution is to use floating-point components instead. The Quartz 2D Programming Guide gives the full details of the format you'll need to use.
Important point: You'd need to use floating-point from the creation of the original image (and I don't think it's even possible to save such an image as PNG; you might have to use TIFF). An image that was already premultiplied in an integral type has already lost that precision; there is no getting it back.
The zero-alpha case is the extreme version of this, to such an extent that even floating-point cannot help you. Anything times zero (alpha) is zero, and there is no recovering the original unpremultiplied value from that point.
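For illustration, here is a minimal C++ sketch of this round-trip loss with 8-bit components (the integer truncation mirrors the arithmetic above; the exact rounding of any given graphics library may differ):
#include <cstdio>

int main()
{
    const int original = 245;                 // true, unpremultiplied component
    for (int alpha = 255; alpha > 0; alpha /= 2) {
        const int premult   = original * alpha / 255;   // stored (truncated)
        const int recovered = premult * 255 / alpha;    // naive un-premultiply
        std::printf("alpha=%3d premult=%3d recovered=%3d (error %d)\n",
                    alpha, premult, recovered, original - recovered);
    }
    return 0;
}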
Pre-multiplying alpha with an integer color type is a lossy operation. Data is destroyed during the quantization process (rounding to 8 bits).
Since some data is destroyed (by rounding), there is no way to recover the exact original pixel color (except for some lucky values). You have to save the colors of your Photoshop image before you draw it into a bitmap context, and use that original color data, not the premultiplied color data from the bitmap.
I ran into this same problem when trying to read image data, render it to another image with CoreGraphics, and then save the result as non-premultiplied data. The solution that worked for me was to save a table containing the exact mapping CoreGraphics uses to map non-premultiplied data to premultiplied data. Then, estimate what the original unpremultiplied value would be with a multiply and floor() call. If the estimate, run back through the table, does not match, check the value just below and just above the estimate in the table for the exact match.
// Execute premultiply logic on RGBA components split into components.
// For example, a pixel RGB (128, 0, 0) with A = 128
// would return (255, 0, 0) with A = 128
static inline
uint32_t premultiply_bgra_inline(uint32_t red, uint32_t green, uint32_t blue, uint32_t alpha)
{
    const uint8_t* const restrict alphaTable = &extern_alphaTablesPtr[alpha * PREMULT_TABLEMAX];
    uint32_t result = (alpha << 24) | (alphaTable[red] << 16) | (alphaTable[green] << 8) | alphaTable[blue];
    return result;
}

static inline
int unpremultiply(const uint32_t premultRGBComponent, const float alphaMult, const uint32_t alpha)
{
    float multVal = premultRGBComponent * alphaMult;
    float floorVal = floor(multVal);
    uint32_t unpremultRGBComponent = (uint32_t)floorVal;
    assert(unpremultRGBComponent >= 0);
    if (unpremultRGBComponent > 255) {
        unpremultRGBComponent = 255;
    }

    // Pass the unpremultiplied estimated value through the
    // premultiply table again to verify that the result
    // maps back to the same rgb component value that was
    // passed in. It is possible that the result of the
    // multiplication is smaller or larger than the
    // original value, so this will either add or remove
    // one int value to the result rgb component to account
    // for the error possibility.
    uint32_t premultPixel = premultiply_bgra_inline(unpremultRGBComponent, 0, 0, alpha);
    uint32_t premultActualRGBComponent = (premultPixel >> 16) & 0xFF;
    if (premultRGBComponent != premultActualRGBComponent) {
        if ((premultActualRGBComponent < premultRGBComponent) && (unpremultRGBComponent < 255)) {
            unpremultRGBComponent += 1;
        } else if ((premultActualRGBComponent > premultRGBComponent) && (unpremultRGBComponent > 0)) {
            unpremultRGBComponent -= 1;
        } else {
            // This should never happen
            assert(0);
        }
    }
    return unpremultRGBComponent;
}
You can find the complete static table of values at this github link.
Note that this approach will not recover information "lost" when the original unpremultiplied pixel was premultiplied. But it does return the smallest unpremultiplied value that maps back to the same premultiplied pixel once run through the premultiply logic again. This is useful when the graphics subsystem only accepts premultiplied pixels (like CoreGraphics on OS X). If the graphics subsystem only accepts premultiplied pixels, then you are better off storing only the premultiplied pixels, since they consume less space than also keeping the unpremultiplied pixels.