How to convert a hex color to RGB color values using Flutter/Dart - flutter

I want to convert hex color values to RGB color values. How do I do that? Sorry if this question is a bit short, but I haven't seen an answer to it anywhere.
For example, HEX color value = "0xff4f6872" converts to RGB color value = (R:79 G:104 B:114).
I look forward to your answers, thank you.

The built-in Material Color class has properties that hold the value of each color channel. You can use them to find the RGB (red, green, blue) or even the RGBA (red, green, blue, alpha) values of a color.
You first need to create a Color object for your hex color by passing the value to the Color() constructor, adding '0xff' (the alpha byte) before the hex value.
Color myColor = Color(0xffe64a6f);
You can then access any of the properties and use/display them however you want:
Column(
  children: [
    Text('red value: ${myColor.red}'),
    Text('green value: ${myColor.green}'),
    Text('blue value: ${myColor.blue}'),
    Text('alpha value: ${myColor.alpha}'),
  ],
)
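Note: on newer Flutter SDKs (3.27 and later, if I recall the version correctly) the integer red/green/blue/alpha getters are deprecated in favor of floating-point r/g/b/a components in [0, 1]. A minimal sketch of the equivalent lookup, assuming one of those newer SDKs:
// Assumes Flutter 3.27+, where Color exposes double a/r/g/b in [0, 1].
final int red = (myColor.r * 255.0).round();
final int green = (myColor.g * 255.0).round();
final int blue = (myColor.b * 255.0).round();
final int alpha = (myColor.a * 255.0).round();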

Whether 0xff4f6872 is (R:79 G:104 B:114) depends on the byte order you assume.
In CSS, a hexadecimal color is specified as #RRGGBB; to add transparency, two additional digits between 00 and FF are appended (#RRGGBBAA). Read that way:
ff = 255 (R)
4f = 79 (G)
68 = 104 (B)
72 = 114, meaning alpha ≈ 0.45 (114/255)
Flutter's Color constructor, however, takes 0xAARRGGBB, so in Dart 0xff4f6872 really is (R:79 G:104 B:114) with alpha ff = 255.
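A quick Dart sketch of the two readings (my example, not from the thread):
void main() {
  // CSS-style #RRGGBBAA value: the alpha byte comes last.
  final css = int.parse('ff4f6872', radix: 16);
  print('CSS red: ${(css >> 24) & 0xff}'); // 255

  // Flutter-style 0xAARRGGBB literal: the alpha byte comes first.
  const argb = 0xff4f6872;
  print('Flutter red: ${(argb >> 16) & 0xff}'); // 79
}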

One straightforward way to convert a hexadecimal color string to RGB values in Flutter/Dart is to parse the string to an int and mask out each byte:
String hexColor = "0xff4f6872";
int intColor = int.parse(hexColor);
int red = (intColor >> 16) & 0xff;
int green = (intColor >> 8) & 0xff;
int blue = (intColor >> 0) & 0xff;
So, you can get:
red = 79, green = 104, blue = 114.
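If you also need the alpha channel, the same masking works one byte higher (a small extension, not part of the original answer):
int alpha = (intColor >> 24) & 0xff; // 0xff = 255, fully opaque
From there you can rebuild a Flutter color with Color.fromARGB(alpha, red, green, blue).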

Related

Flutter: Convert color temperature to RGB

I am looking for a way to convert a color temperature to its approximate RGB color in Flutter.
If you have a look at this website, a color temperature of 1800 corresponds to some kind of orange.
Is there a convenient way to do this in Flutter? The website I found seems to have hardcoded the colors. Providing me with a formula would also be appreciated.
This blog has a formula available in several languages. Below is my port to Dart/Flutter.
import 'dart:math'; // for pow() and log()
import 'package:flutter/material.dart'; // for Color

Color colorTempToRGB(double colorTemp) {
  final temp = colorTemp / 100;
  final red = temp <= 66
      ? 255
      : (pow(temp - 60, -0.1332047592) * 329.698727446)
          .round()
          .clamp(0, 255)
          .toInt();
  final green = temp <= 66
      ? (99.4708025861 * log(temp) - 161.1195681661)
          .round()
          .clamp(0, 255)
          .toInt()
      : (pow(temp - 60, -0.0755148492) * 288.1221695283)
          .round()
          .clamp(0, 255)
          .toInt();
  final blue = temp >= 66
      ? 255
      : temp <= 19
          ? 0
          : (138.5177312231 * log(temp - 10) - 305.0447927307)
              .round()
              .clamp(0, 255)
              .toInt();
  return Color.fromARGB(255, red, green, blue);
}
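As a rough sanity check (my own arithmetic, not from the original answer): for 1800 K, temp = 18, so red = 255, green = (99.4708025861 * log(18) - 161.1195681661).round() = 126, and blue = 0 because temp <= 19. So:
final orange = colorTempToRGB(1800); // ≈ RGB(255, 126, 0), the orange from the question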

How to calculate brightness from a list of unsigned 8-bit integers representing an image in Dart?

I wanted to calculate the brightness of a Uint8List image. The images I used are picked from my phone (using the image_picker plugin in Flutter). I tried a for loop on every value of this list and did this:
int r = 0, b = 0, g = 0, count = 0;
for (int value in imageBytesList) {
  /// The red channel of this color in an 8 bit value.
  int red = (0x00ff0000 & value) >> 16;
  /// The blue channel of this color in an 8 bit value.
  int blue = (0x0000ff00 & value) >> 8;
  /// The green channel of this color in an 8 bit value.
  int green = (0x000000ff & value) >> 0;
  r += red;
  b += blue;
  g += green;
  count++;
}
double result = (r + b + g) / (count * 3);
I know that the result should represent a brightness level between 0 and 255, where 0 = totally black and 255 = totally bright, but what I get are really weird values like 0.0016887266175341332. What calculation mistakes am I making? (I know my method is gravely wrong, but I wasn't able to find another way.)
The Flutter Image widget does convert this Uint8List from memory to an image with the correct height and width using the Image.memory() constructor. What is the logic behind it?
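A likely cause, offered here as a sketch rather than a confirmed answer: image_picker hands back the encoded file bytes (PNG/JPEG), not raw pixels, so masking those bytes as ARGB ints is meaningless. Image.memory() works because the engine runs the bytes through its image codec, decoding them to pixels with the stored width and height. Assuming the bytes have already been decoded to raw RGBA (for example with the image package; the decode step is omitted here), the averaging could look like:
import 'dart:typed_data';

/// Average brightness in [0, 255] of raw RGBA pixel data (4 bytes per pixel).
double averageBrightness(Uint8List rgbaBytes) {
  final pixelCount = rgbaBytes.length ~/ 4;
  if (pixelCount == 0) return 0;
  var sum = 0;
  for (var i = 0; i < pixelCount * 4; i += 4) {
    // Bytes are laid out R, G, B, A; alpha is ignored for brightness.
    sum += rgbaBytes[i] + rgbaBytes[i + 1] + rgbaBytes[i + 2];
  }
  return sum / (pixelCount * 3);
}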

Accurately get a color from a pixel on the screen and convert its color space

I need to get a color from a pixel on the screen and convert its color space. The problem I have is that the color values are not the same when comparing the values against the Digital Color Meter app.
// create a 1x1 image at the mouse position
if let image: CGImage = CGDisplayCreateImage(disID, rect: CGRect(x: x, y: y, width: 1, height: 1)) {
    let bitmap = NSBitmapImageRep(cgImage: image)
    // get the color from the bitmap and convert its colorspace to sRGB
    var color = bitmap.colorAt(x: 0, y: 0)!
    color = color.usingColorSpace(.sRGB)!
    // print the RGB values
    let red = color.redComponent, green = color.greenComponent, blue = color.blueComponent
    print("r:", Int(red * 255), " g:", Int(green * 255), " b:", Int(blue * 255))
}
My code (converted to sRGB): 255, 38, 0
Digital Color Meter (sRGB): 255, 4, 0
How do you get a color from a pixel on the screen with the correct color space values?
Update:
If you don't convert the color's colorspace to anything (or convert it to calibratedRGB), the values match the Digital Color Meter's values when it's set to "Display native values".
My code (not converted): 255, 0, 1
Digital Color Meter (set to: Display native values): 255, 0, 1
So why, when the color's values match the native values in the DCM app, does converting the color to sRGB and comparing it to the DCM's values (in sRGB) not match? I also tried converting to other colorspaces, and they're always different from the DCM.
OK, I can tell you how to fix it/match DCM; you'll have to decide whether this is correct/a bug/etc.
It seems the color returned by colorAt() has the same component values as the bitmap's pixel but a different color space: rather than the original device color space, it is a generic RGB one. We can "correct" this by building a color in the bitmap's space:
let color = bitmap.colorAt(x: 0, y: 0)!

// need a pointer to a C-style array of CGFloat
let compCount = color.numberOfComponents
let comps = UnsafeMutablePointer<CGFloat>.allocate(capacity: compCount)
defer { comps.deallocate() } // free the allocation when done

// get the components
color.getComponents(comps)

// construct a new color in the device/bitmap space with the same components
let correctedColor = NSColor(colorSpace: bitmap.colorSpace,
                             components: comps,
                             count: compCount)

// convert to sRGB
let sRGBcolor = correctedColor.usingColorSpace(.sRGB)!
I think you'll find that the values of correctedColor track DCM's native values, and those of sRGBcolor track DCM's sRGB values.
Note that we are constructing a color in the device space, not converting a color to the device space.
HTH

Extracting color components in Lumia Imaging SDK - custom filter

Can someone explain the calculation being used to extract color components on the right side of the following statements using bit shift operators?
uint alpha = (currentPixel & 0xff000000) >> 24; // alpha component
uint red = (currentPixel & 0x00ff0000) >> 16; // red color component
uint green = (currentPixel & 0x0000ff00) >> 8; // green color component
uint blue = currentPixel & 0x000000ff; // blue color component
The Lumia Imaging SDK exposes the color value using the ARGB color format. It uses 8 bits to encode each color component, but for simplicity/efficiency it stores and exposes all four of them in a single uint32.
That means each color component is "laid out" in the int in the order you saw: 8 bits for alpha, 8 bits for red, 8 bits for green and 8 bits for blue: ARGB.
To extract an individual component you need two bitwise operations on the int. First you do an AND operation (the & operator) to single out the bits you are interested in, then a bitwise right shift (the >> operator) to move those bits down into the [0, 255] range, as in the worked example below.
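As a worked example (using the hex value from the first question above, not from this answer), the same masks in Dart give:
const currentPixel = 0xFF4F6872;
final alpha = (currentPixel & 0xff000000) >> 24; // 0xFF = 255
final red = (currentPixel & 0x00ff0000) >> 16;   // 0x4F = 79
final green = (currentPixel & 0x0000ff00) >> 8;  // 0x68 = 104
final blue = currentPixel & 0x000000ff;          // 0x72 = 114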

How to get the real RGBA or ARGB color values without premultiplied alpha?

I'm creating a bitmap context using CGBitmapContextCreate with the kCGImageAlphaPremultipliedFirst option.
I made a 5 x 5 test image with some major colors (pure red, green, blue, white, black) and some mixed colors (i.e. purple), combined with some alpha variations. Whenever the alpha component is not 255, the color value is wrong.
I found that I could re-calculate the color when I do something like:
almostCorrectRed = wrongRed * (255 / alphaValue);
almostCorrectGreen = wrongGreen * (255 / alphaValue);
almostCorrectBlue = wrongBlue * (255 / alphaValue);
But the problem is that my calculations are sometimes off by 3 or even more. For example, I get a value of 242 instead of 245 for green, and I am 100% sure that it must be exactly 245 (alpha is 128).
Then, for the exact same color just with different alpha opacity in the PNG bitmap, I get alpha = 255 and green = 245 as it should be.
If alpha is 0, then red, green and blue are also 0. Here all data is lost and I can't figure out the color of the pixel.
How can I avoid or undo this alpha premultiplication altogether so that I can modify pixels in my image based on the true R, G, B pixel values as they were when the image was created in Photoshop? How can I recover the original values for R, G, B and A?
Background info (probably not necessary for this question):
What I'm doing is this: I take a UIImage and draw it to a bitmap context in order to perform some simple image manipulation algorithms on it, shifting the color of each pixel depending on what color it was before. Nothing really special. But my code needs the real colors. When a pixel is transparent (meaning it has alpha less than 255) my algorithm shouldn't care about this; it should just modify R, G, B as needed while alpha remains whatever it is. Sometimes, though, it will shift alpha up or down too. But I see them as two separate things: alpha controls transparency, while R, G, B control the color.
This is a fundamental problem with premultiplication in an integral type:
245 * (128/255) = 122.98
122.98 truncated to an integer = 122
122 * (255/128) = 243.046875
I'm not sure why you're getting 242 instead of 243, but this problem remains either way, and it gets worse the lower the alpha goes.
The solution is to use floating-point components instead. The Quartz 2D Programming Guide gives the full details of the format you'll need to use.
Important point: You'd need to use floating-point from the creation of the original image (and I don't think it's even possible to save such an image as PNG; you might have to use TIFF). An image that was already premultiplied in an integral type has already lost that precision; there is no getting it back.
The zero-alpha case is the extreme version of this, to such an extent that even floating-point cannot help you. Anything times zero (alpha) is zero, and there is no recovering the original unpremultiplied value from that point.
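The same truncation, written out as a runnable Dart check (my sketch; the numbers match the ones above):
const original = 245, alpha = 128;
final premult = (original * alpha) ~/ 255;  // 122 (122.98 truncated)
final recovered = (premult * 255) ~/ alpha; // 243, not 245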
Pre-multiplying alpha with an integer color type is a lossy operation: data is destroyed during the quantization (rounding to 8 bits).
Since some data is destroyed by that rounding, there is no way to recover the exact original pixel color (except for some lucky values). You have to save the colors of your Photoshop image before you draw it into a bitmap context, and use that original color data, not the premultiplied color data from the bitmap.
I ran into this same problem when trying to read image data, render it to another image with CoreGraphics, and then save the result as non-premultiplied data. The solution that worked for me was to save a table containing the exact mapping CoreGraphics uses to turn non-premultiplied data into premultiplied data. Then, estimate the original unpremultiplied value with a multiply and a floor() call; if the estimate does not map back to the premultiplied input via the table, check the value just below and just above the estimate in the table for the exact match.
// Execute premultiply logic on an RGBA pixel split into components.
// For example, a pixel RGB (255, 0, 0) with A = 128
// would return (128, 0, 0) with A = 128
static inline
uint32_t premultiply_bgra_inline(uint32_t red, uint32_t green, uint32_t blue, uint32_t alpha)
{
  const uint8_t* const restrict alphaTable = &extern_alphaTablesPtr[alpha * PREMULT_TABLEMAX];
  uint32_t result = (alpha << 24) | (alphaTable[red] << 16) | (alphaTable[green] << 8) | alphaTable[blue];
  return result;
}

static inline
int unpremultiply(const uint32_t premultRGBComponent, const float alphaMult, const uint32_t alpha)
{
  float multVal = premultRGBComponent * alphaMult;
  float floorVal = floor(multVal);
  uint32_t unpremultRGBComponent = (uint32_t)floorVal;
  assert(unpremultRGBComponent >= 0);
  if (unpremultRGBComponent > 255) {
    unpremultRGBComponent = 255;
  }
  // Pass the unpremultiplied estimated value through the
  // premultiply table again to verify that the result
  // maps back to the same rgb component value that was
  // passed in. It is possible that the result of the
  // multiplication is smaller or larger than the
  // original value, so this will either add or remove
  // one int value to the result rgb component to account
  // for the error possibility.
  uint32_t premultPixel = premultiply_bgra_inline(unpremultRGBComponent, 0, 0, alpha);
  uint32_t premultActualRGBComponent = (premultPixel >> 16) & 0xFF;
  if (premultRGBComponent != premultActualRGBComponent) {
    if ((premultActualRGBComponent < premultRGBComponent) && (unpremultRGBComponent < 255)) {
      unpremultRGBComponent += 1;
    } else if ((premultActualRGBComponent > premultRGBComponent) && (unpremultRGBComponent > 0)) {
      unpremultRGBComponent -= 1;
    } else {
      // This should never happen
      assert(0);
    }
  }
  return unpremultRGBComponent;
}
You can find the complete static table of values at this GitHub link.
Note that this approach will not recover information "lost" when the original unpremultiplied pixel was premultiplied. But it does return the smallest unpremultiplied value that maps back to the given premultiplied pixel once run through the premultiply logic again. This is useful when the graphics subsystem only accepts premultiplied pixels (like CoreGraphics on OS X). If the graphics subsystem only accepts premultiplied pixels, then you are better off storing only the premultiplied pixels, since that consumes less space than keeping the unpremultiplied pixels around as well.