Flutter - How to change color of marker based on magnitude of an int?

I want to create markers at each GPS point with a color on a scale from white to red that corresponds to how slow or fast the GPS data indicates the speed is at that point. However, since Color.fromRGBO() requires const arguments, I'm having trouble getting the speed to change the color. How could I circumvent the Color class or find a better way to draw such a gradient than with Marker objects?

Welcome to Stack Overflow, Onur!
You can indeed use Color.fromARGB (https://api.flutter.dev/flutter/dart-ui/Color/Color.fromARGB.html):
const Color.fromARGB(int a, int r, int g, int b)
with a = 255 (fully opaque; a = 0 would make the marker invisible), g = 0 and b = 0, and use your scale (you need to know the maximum value, let's call it r_max, to normalize the range). So whenever you have an intensity value, let's call it r_intensity, you'd simply compute
r = 255 * (r_intensity / r_max)
to get your color intensity. The result of r_intensity / r_max is always in the range [0, 1]; multiplying by 255 conforms with
r is red, from 0 to 255.
from the docs.
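If it helps, here is that mapping as a small sketch (written in Python for brevity; the Dart version is the same arithmetic). The clamp and the example numbers are my own additions:
def speed_to_red(r_intensity, r_max):
    # Map a speed reading to a red channel value in [0, 255].
    fraction = max(0.0, min(1.0, r_intensity / r_max))  # clamp to [0, 1]
    return round(255 * fraction)

print(speed_to_red(30, 120))  # -> 64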
Regarding the const issue: it sounds like you want to (heavily?) animate the color. Have a look at AnimationController (https://api.flutter.dev/flutter/animation/AnimationController-class.html) and write the red value to your widget's state.
Other than that, it should work to either:
have a method that returns the new red value (and does the aforementioned computation), or
simply use var r = ... instead of const r = ... and pass it to Color; a value computed at runtime can never be const.
EDIT: looking at your image, the issue might stem from your int being null. Please make sure that your Color widget is not wrapped in any widget prefixed with const! That is likely the case here and would produce the result you are seeing. I tested it on my side to confirm.

Related

How to use geoserver SLD style to serve single channel elevation raster ("gray" channel) as Mapbox Terrain-RGB tiles

I have an elevation raster layer in my GeoServer with a single channel ("gray").
The "gray" values is elevations values (signed int16).
I have 2 clients:
The first one uses the elevation values as-is.
The second one expects to get the [Mapbox Terrain-RGB format][1].
I do not want to convert the grayscale format to Mapbox Terrain-RGB format and hold duplicate data in GeoServer.
I was thinking of using an SLD style with ColorMap elements to map the elevation values to the appropriate RGB values (with gradient interpolation between discrete values).
For example:
<ColorMap>
  <ColorMapEntry color="#000000" quantity="-10000" />
  <ColorMapEntry color="#FFFFFF" quantity="1667721.5" />
</ColorMap>
It turns out that the above example does not span the full range of colors but rather produces gray values only.
That is, it seems to interpolate each channel (red, green, blue) independently of the others.
Any idea how to make it interpolate values like this: #000000, #000001, #000002, ..., #0000FF, #000100, ..., #0001FF, ..., #FFFFFF?
Tx.
[1]: https://docs.mapbox.com/data/tilesets/reference/mapbox-terrain-rgb-v1/
I'm trying to do the same with no luck, and I think it can't be done... Check this example. It's a "gradient" [-10000, -5000, -1000, -500 ... 100000000000000000, 5000000000000000000, 1000000000000000000] with the Mapbox color codification. The color progression/interpolation is anything but linear, so I think it can't be emulated in an SLD.
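If the SLD route really is a dead end, one workaround is to pre-encode the elevations into Terrain-RGB bands in a preprocessing step. Here is a minimal NumPy sketch of the encoding documented at the Mapbox link above, i.e. height = -10000 + 0.1 * (R*65536 + G*256 + B); treat it as an illustration, not a drop-in tool:
import numpy as np

def encode_terrain_rgb(elevation):
    # Invert height = -10000 + 0.1 * (R*65536 + G*256 + B).
    v = np.round((elevation + 10000.0) * 10.0).astype(np.uint32)
    r = ((v >> 16) & 0xFF).astype(np.uint8)
    g = ((v >> 8) & 0xFF).astype(np.uint8)
    b = (v & 0xFF).astype(np.uint8)
    return r, g, b

# Example: a 2x2 patch of elevations in meters
r, g, b = encode_terrain_rgb(np.array([[0.0, 15.3], [-12.0, 842.7]]))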
If you have the elevation data in the format you desire then that is the easiest option: it just works. However, if you want a more customized solution, here's what I've done for a project using the Mapbox Terrain-RGB format:
I have a scale of colors from dark blue to light blue to white.
I want to be able to specify how many steps are used from light blue to white (default is 10).
This code uses the GDAL Python bindings. The following code snippet was used for testing;
it just writes the color mapping out to a GeoTIFF file.
To get values between 0 and 1, normalize the data with value = (value - min) / (max - min).
You can then use that value to index into the lookup table to get an RGB value.
If you're only interested in outputting the colors, you can ignore everything involving gdal_translate. The colors will automatically be stored in a single-band GeoTIFF. If you do want to re-use those colors, note that this version ignores alpha values (if present); you can use gdal_translate to add those. That code snippet is also available at my gist here.
import numpy as np
from osgeo import gdal

num_steps = 10  # Number of color steps (the default mentioned above)

def get_color_map(num_steps):
    colors = np.zeros((num_steps, 3), dtype=np.uint8)
    colors[:, 0] = np.linspace(0, 255, num_steps, dtype=np.uint8)
    colors[:, 1] = colors[::-1, 0]
    return colors

ds = gdal.Open('/Users/myusername/Desktop/raster.tif')
band = ds.GetRasterBand(1)  # Assuming single band raster
arr = band.ReadAsArray()
arr = arr.astype(np.float32)
arr = (arr - arr.min()) / (arr.max() - arr.min())  # Normalize values to 0..1
idx = (arr * (num_steps - 1)).astype(int)  # Integer indices into the lookup table
colors = get_color_map(num_steps)  # Create color lookup table
colors[0] = [0, 0, 0]  # Set black for no data so it doesn't show up as gray in the final product.
color_arr = colors[idx]  # RGB triple for every pixel, shape (rows, cols, 3)
# Create new GeoTIFF with colors included (transparent alpha channel if possible).
# If you don't care about including the colors in the GeoTIFF, skip this.
cols = ds.RasterXSize
rows = ds.RasterYSize
out_ds = gdal.GetDriverByName('GTiff').Create('/Users/myusername/Desktop/raster_color.tif', cols, rows, 4)
out_ds.SetGeoTransform(ds.GetGeoTransform())
out_ds.SetProjection(ds.GetProjection())
for i in range(3):  # Write the R, G and B bands
    out_ds.GetRasterBand(i + 1).WriteArray(color_arr[:, :, i])
# Alpha channel to make "no data" pixels transparent (assuming 0 is "no data"
# and non-zero is data). You can remove this if 0 is a valid elevation value.
alpha = np.zeros((rows, cols), dtype=np.uint8)
alpha[idx != 0] = 255  # Opaque wherever there is data
out_ds.GetRasterBand(4).WriteArray(alpha)
out_ds.FlushCache()
This issue is also present in Rasterio when using a palette with multiple values.
However, if your raster is n-dimensional or a masked array, the flip operation can be tricky. Here's a solution based on one of the answers to this Stack Overflow question: How to vertically flip a 2D NumPy array?
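For reference, a small sketch of such a flip: np.flip just reverses a view along the chosen axis, so it works for any number of dimensions and preserves the mask of a masked array:
import numpy as np

arr = np.ma.masked_less(np.arange(12).reshape(3, 4), 2)  # small masked 2-D example
flipped = np.flip(arr, axis=0)  # reverse the rows; works the same for n-D input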

Flutter/Dart : How to get single colors from Colorgradient?

I am trying to build a widget that changes its background color from green to red over time, but in a fluid way. So similar to a gradient background:
But not as a fixed gradient; rather a full green background at the beginning, and as the time counts down the background should slowly transition to yellow, then orange, then red at the end.
I want the color to update every 10 milliseconds so it's not an abrupt change. I am handling the change with a timer which calculates how much time has passed relative to the start time and updates the color accordingly.
Now - is there any way to get a concrete color from a gradient-like object? Like I could just extract the color of a gradient at a specific point (which is the fraction time_passed/max_time) to use it as the full background?
Or should I actually use color codes and just increase the value of the color code every 10 ms? That seems not very graceful.
OK, I actually found two solutions:
1. Use the Color.lerp(Color? a, Color? b, double t) function
What this does is create a gradient between a and b and return the color that sits at point t (a value between 0 and 1) on that scale.
This is basically what I wanted, but there is a problem: it doesn't necessarily interpolate the two colors through the colors you'd want in between; the middle is more of a brownish mix of the two.
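To see why, here is a quick sketch of plain per-channel interpolation (shown in Python; Color.lerp does the same arithmetic channel by channel). Halfway between green and red, the channels average to a dull olive rather than passing through a vivid yellow:
def lerp_rgb(a, b, t):
    # Channel-wise linear interpolation between two RGB triples.
    return tuple(round(ca + (cb - ca) * t) for ca, cb in zip(a, b))

print(lerp_rgb((0, 255, 0), (255, 0, 0), 0.5))  # -> (128, 128, 0), a muddy olive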
2. Install the rainbow_color package
This was exactly what I needed: you can insert multiple custom colors, specify the range of values you use (in my case 0.0 to 1.0, but it can also be integers), and it returns a gradient between only these colors:
var rb = Rainbow(
  spectrum: [
    Color(0xff1fff00),
    Color(0xffd0ff00),
    Color(0xffffaa00),
    Color(0xffffaa00),
    Color(0xffff6600)
  ],
  rangeStart: 1.0,
  rangeEnd: 0.0
);
return rb[remainingFraction];

Extract black objects from color background

It is easy for human eyes to tell black from other colors. But how about computers?
I printed some color blocks on normal A4 paper. Since three kinds of ink compose a color image - cyan, magenta and yellow - I set the color of the blocks to C=20%, C=30%, C=40% and C=50%, with the other two components at 0. That is the first column of my source image. So far, no black ink (the K of CMYK) should be printed. After that, I set each dot to K=100% with the other components at 0 to print black dots.
You may find my image weird and awful. In fact, it is magnified 30 times, and you can clearly see how the ink cheats our eyes. The color strips make it hard to recognize the black dots (each dot is printed as just one pixel at 800 dpi). Without the color background, I used to blur the image and run a Canny edge detector to extract the edges. However, with the color background, simply converting to grayscale and running the edge detector no longer gives good results because of the strips. What would my eyes do to solve such a problem?
I decided to check the brightness of the source image. I referred to this article and its formula:
brightness = sqrt( 0.299*R*R + 0.587*G*G + 0.114*B*B )
This brightness is closer to human perception, and it works very well on the yellow background because yellow has the highest brightness compared with cyan and magenta. But how do I make the cyan and magenta strips as bright as possible? The expected result is that all the strips disappear.
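For what it's worth, that formula applied to a whole image with NumPy (assuming an 8-bit RGB array) looks like this:
import numpy as np

def perceived_brightness(rgb):
    # Per-pixel brightness = sqrt(0.299*R^2 + 0.587*G^2 + 0.114*B^2)
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b)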
More complicated image:
C=40%, M=40%
C=40%, Y=40%
Y=40%, M=40%
FFT result of C=40%, Y=40% brightness image
Anyone can give me some hints to remove the color strips?
@natan I tried the FFT method you suggested, but I had no luck getting peaks on both the x and y axes. In order to plot the frequency as you did, I resized my image to a square.
I would convert the image to the HSV colour space and then use the Value channel. This basically separates colour and brightness information.
This is the 50% cyan image
Then you can just do a simple threshold to isolate the dots.
I just did this very quickly and I'm sure you could get better results. Maybe find contours in the image and then remove any contours with a small area, to filter out any remaining noise.
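A rough sketch of that pipeline with the OpenCV Python bindings (the file name, threshold and minimum area are placeholders to tune; findContours is shown with its OpenCV 4 signature):
import cv2
import numpy as np

img = cv2.imread('dots.png')  # hypothetical input image
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]  # Value channel only
_, mask = cv2.threshold(v, 60, 255, cv2.THRESH_BINARY_INV)  # dots are dark
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
clean = np.zeros_like(mask)
for c in contours:
    if cv2.contourArea(c) >= 3:  # drop tiny specks; tune the minimum area
        cv2.drawContours(clean, [c], -1, 255, cv2.FILLED)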
After inspecting the images, I decided that a robust threshold would be simpler than anything fancy. For example, looking at the C=40%, M=40% photo, I first inverted the intensities, so black (the signal) becomes white, using just
im = abs(255 - im);
We can inspect its RGB histograms using this:
hist(reshape(single(im),[],3), min(single(im(:))):max(single(im(:))));
colormap([1 0 0; 0 1 0; 0 0 1]);
We see a large contribution at some middle intensity, whereas the "signal", which is now white, mostly sits at higher values. I then applied a simple threshold as follows:
thr = @(d) max([min(max(d,[],1)) min(max(d,[],2))]);
for n = 1:size(im,3)
    imt(:,:,n) = im(:,:,n) .* uint8(im(:,:,n) > 1.1*thr(im(:,:,n)));
end
imt = rgb2gray(imt);
and got rid of objects smaller than some typical area size
min_dot_area = 20;
bw = bwareaopen(imt > 0, min_dot_area);
imagesc(bw);
colormap(flipud(bone));
here's the result together with the original image:
The origin of this threshold is some code I wrote that assumed sparse signals in the form of 2-D peaks or blobs on a noisy background. By sparse I mean that the peaks don't pile up. In that case, projecting the image maximum onto the x or y axis (via max(im,[],1) or max(im,[],2)) gives a good measure of the background, because you take the minimal intensity of the max(im) vector.
If you want to look at this differently, consider the histogram of the image intensities. The background is supposed to be a roughly normal distribution around some intensity, and the signal should be higher than that intensity but with a much lower number of occurrences. By taking max(im) along one of the axes (x or y) you discover the maximal noise level.
You'll see that the threshold picks the point in the histogram where there is still some noise above it, but ALL of the signal is above it too; that's why I adjusted it to 1.1*thr. Lastly, there are many fancier ways to obtain a robust threshold; this is a quick and dirty way that in my view is good enough...
Thanks to everyone for posting an answer! After some searching and experimenting, I also came up with an adaptive method to extract the black dots from the color background. It seems that considering only the brightness cannot solve the problem perfectly; natan's method, which calculates and analyzes the RGB histogram, is more robust. Unfortunately, I still cannot obtain a robust threshold for the black dots in other color samples, because things get more and more unpredictable when we add deeper colors (e.g. cyan > 60%) or mix two colors together (e.g. cyan = 50%, magenta = 50%).
One day, I googled "extract color" and TinEye's color extraction and Color Thief inspired me. Both are very cool applications, and the images processed by the former are exactly what I want. So I decided to implement something similar on my own. The algorithm I use here is k-means clustering. Some other related keywords to search for are color palette, color quantization, and dominant color.
First I apply a Gaussian filter to smooth the image:
GaussianBlur(img, img, Size(5, 5), 0, 0);
OpenCV has a kmeans function, which saves me a lot of coding time. I modified this code:
// Input data should be float32
Mat samples(img.rows * img.cols, 3, CV_32F);
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        for (int z = 0; z < 3; z++) {
            samples.at<float>(i + j * img.rows, z) = img.at<Vec3b>(i, j)[z];
        }
    }
}
// Select the number of clusters
int clusterCount = 4;
Mat labels;
int attempts = 1;
Mat centers;
kmeans(samples, clusterCount, labels,
       TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10, 0.1),
       attempts, KMEANS_PP_CENTERS, centers);
// Draw the clustered result
Mat cluster(img.size(), img.type());
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        int cluster_idx = labels.at<int>(i + j * img.rows, 0);
        cluster.at<Vec3b>(i, j)[0] = centers.at<float>(cluster_idx, 0);
        cluster.at<Vec3b>(i, j)[1] = centers.at<float>(cluster_idx, 1);
        cluster.at<Vec3b>(i, j)[2] = centers.at<float>(cluster_idx, 2);
    }
}
imshow("clustered image", cluster);
// Check the centers' RGB values
cout << centers;
After clustering, I convert the result to grayscale and find the darkest cluster, which is most likely the color of the black dots.
// Find the minimum value
cvtColor(cluster, cluster, CV_RGB2GRAY);
Mat dot = Mat::zeros(img.size(), CV_8UC1);
cluster.copyTo(dot);
int minVal = (int)dot.at<uchar>(dot.rows / 2, dot.cols / 2); // at() takes (row, col)
for (int i = 0; i < dot.rows; i += 3) {
    for (int j = 0; j < dot.cols; j += 3) {
        if ((int)dot.at<uchar>(i, j) < minVal) {
            minVal = (int)dot.at<uchar>(i, j);
        }
    }
}
inRange(dot, minVal - 5, minVal + 5, dot);
imshow("dot", dot);
Let's test two images.
(clusterCount = 4)
(clusterCount = 5)
One shortcoming of k-means clustering is that one fixed clusterCount cannot be applied to every image. Clustering is also not that fast for larger images; that issue annoys me a lot. My dirty trick for better real-time performance (on iPhone) is to crop 1/16 of the image and cluster that smaller area. Then I compare every pixel in the original image with each cluster center and pick the pixels nearest to the "black" center. I simply calculate the Euclidean distance between the two RGB colors.
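In NumPy terms the shortcut looks roughly like this (centers would come from k-means on the cropped patch; every pixel of the full image is then labeled by its nearest RGB center):
import numpy as np

def nearest_center_labels(img, centers):
    # Assign each pixel of an HxWx3 image to the closest RGB cluster center.
    pixels = img.reshape(-1, 3).astype(np.float32)
    # Euclidean distance from every pixel to every center, shape (N, K)
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(img.shape[:2])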
A simple method is to just threshold all the pixels. Here is this idea expressed in pseudo code.
for each pixel in image
    if brightness < THRESHOLD
        pixel = BLACK
    else
        pixel = WHITE
Or if you're always dealing with cyan, magenta and yellow backgrounds then maybe you might get better results with the criteria
if pixel.r < THRESHOLD and pixel.g < THRESHOLD and pixel.b < THRESHOLD
This method will only give good results for easy images where nothing except the black dots is too dark.
You can experiment with the value of THRESHOLD to find a good value for your images.
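If it's useful, here is the same idea vectorized with NumPy; the threshold of 80 is only a placeholder default:
import numpy as np

def binarize(brightness, threshold=80):
    # Map every pixel to black (0) or white (255) by its brightness.
    return np.where(brightness < threshold, 0, 255).astype(np.uint8)

def dark_in_all_channels(rgb, threshold=80):
    # The alternative criterion: r, g and b all below the threshold.
    return np.all(rgb < threshold, axis=-1)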
I suggest converting to a chroma-based color space, like LCh, and adjusting simultaneous thresholds on lightness and chroma. Here is the resulting mask for L < 50 & C < 25 on the input image:
Seems like you need adaptive thresholds since different values work best for different areas of the image.
You may also use HSV or HSL as the color space, but they are less perceptually uniform than LCh, which is derived from Lab.
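A sketch of those simultaneous thresholds with scikit-image, using the fact that LCh chroma is sqrt(a^2 + b^2) in Lab (the 50 and 25 cut-offs are the ones quoted above):
import numpy as np
from skimage import color

def black_dot_mask(rgb):
    # Mask of pixels with L < 50 and chroma C < 25.
    lab = color.rgb2lab(rgb)
    L = lab[..., 0]
    C = np.hypot(lab[..., 1], lab[..., 2])  # chroma from the a and b channels
    return (L < 50) & (C < 25)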

iPhone colour Image analysis

I am looking for ideas about an approach that will let me analyze an image and determine how greenISH or brownISH or whiteISH it is... I am emphasizing ISH here because I am interested in capturing ALL the shades of these colours. So far, I have done the following:
I have my UIImage, I have a CGImageRef, and I actually have the colour of each pixel (its RGB and alpha). What I don't know is how to quantify this and determine all the green shades, blues, browns, yellows, purples, etc... So, I can process each and every pixel and determine its basic RGB, but I need some help quantifying the colours over a whole image.
Thanks for your ideas...
Alex.
One fairly good solution is to switch from RGB colour space to one of the Y colour spaces, such as YUV, YCrCb or any of those. In all cases the Y channel represents brightness and the other two channels together represent colour, relative to brightness. You probably want to factor brightness out, possibly with the caveat that all colours below a certain darkness are to be excluded, so getting Y separately is a helpful first step in itself.
Converting from RGB to YUV is achieved with a simple linear combination. Straight from Wikipedia and a thousand other sources:
y = 0.299*r + 0.587*g + 0.114*b;
u = -0.14713*r - 0.28886*g + 0.436*b;
v = 0.615*r - 0.51499*g - 0.10001*b;
Assuming you're keeping r, g and b in the range [0, 1], your first test might be:
if(y < 0.05)
{
// this colour is very dark, so it's considered to be as
// far as we allow from any colour we're interested in
}
To decide how close your colour then is to, say, green, work out the u and v components of the green you're interested in, as a proportion of the y:
r = b = 0;
g = 1;
y = 0.299*r + 0.587*g + 0.114*b = 0.587;
u = -0.14713*r - 0.28886*g + 0.436*b = -0.28886;
v = 0.615*r - 0.51499*g - 0.10001*b = -0.51499;
proportionOfU = u / y = -0.4921;
proportionOfV = v / y = -0.8773;
Subsequently, work out the proportions of U and V for incoming colours and compare them (e.g. with 2-D planar distance) to those you've computed for the colour you're matching against. Closer values are more similar. How you scale and use that metric depends on your application.
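As a sketch of that comparison (the reference proportions come from the green example above, and the minimum-y guard anticipates the next paragraph):
import math

def yuv(r, g, b):
    # RGB in [0, 1] -> YUV via the linear combination above.
    y = 0.299*r + 0.587*g + 0.114*b
    u = -0.14713*r - 0.28886*g + 0.436*b
    v = 0.615*r - 0.51499*g - 0.10001*b
    return y, u, v

def distance_to_green(r, g, b, min_y=0.05):
    # 2-D planar distance of (u/y, v/y) from pure green's proportions.
    y, u, v = yuv(r, g, b)
    if y < min_y:
        return float('inf')  # too dark to classify sensibly
    ref_u, ref_v = -0.28886 / 0.587, -0.51499 / 0.587
    return math.hypot(u/y - ref_u, v/y - ref_v)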
Notice that as y goes toward 0, the computed proportions become increasingly less precise because of the limited range of the input data, and are undefined when y is 0. Conceptually that's because all colours look exactly the same when there's no light on them. Checking that y is above at least a certain minimum value is the pragmatic way of working around this issue. This also means that you're not going to get sensible results if you try to say "how black is this picture?", though again that's because of the ambiguity between a surface that doesn't reflect any light and a surface that doesn't have any light falling upon it.

How to get the real RGBA or ARGB color values without premultiplied alpha?

I'm creating a bitmap context using CGBitmapContextCreate with the kCGImageAlphaPremultipliedFirst option.
I made a 5 x 5 test image with some primary colors (pure red, green, blue, white, black) and some mixed colors (e.g. purple), combined with some alpha variations. Every time the alpha component is not 255, the color value is wrong.
I found that I could re-calculate the color when I do something like:
almostCorrectRed = wrongRed * (255 / alphaValue);
almostCorrectGreen = wrongGreen * (255 / alphaValue);
almostCorrectBlue = wrongBlue * (255 / alphaValue);
But the problem is that my calculations are sometimes off by 3 or even more. For example, I get a value of 242 instead of 245 for green, and I am 100% sure that it must be exactly 245. Alpha is 128.
Then, for the exact same color, just with a different alpha opacity in the PNG bitmap, I get alpha = 255 and green = 245 as it should be.
If alpha is 0, then red, green and blue are also 0. Here all data is lost and I can't figure out the color of the pixel.
How can I avoid or undo this alpha premultiplication altogether so that I can modify pixels in my image based on the true RGB values they had when the image was created in Photoshop? How can I recover the original values for R, G, B and A?
Background info (probably not necessary for this question):
What I'm doing is this: I take a UIImage and draw it to a bitmap context in order to perform some simple image manipulation algorithms on it, shifting the color of each pixel depending on what color it was before. Nothing really special. But my code needs the real colors. When a pixel is transparent (meaning its alpha is less than 255) my algorithm shouldn't care; it should just modify R, G, B as needed while alpha remains whatever it is. Sometimes it will shift alpha up or down too. But I see them as two separate things: alpha controls transparency, while R, G, B control the color.
This is a fundamental problem with premultiplication in an integral type:
245 * (128/255) = 122.98
122.98 truncated to an integer = 122
122 * (255/128) = 243.046875
I'm not sure why you're getting 242 instead of 243, but this problem remains either way, and it gets worse the lower the alpha goes.
The solution is to use floating-point components instead. The Quartz 2D Programming Guide gives the full details of the format you'll need to use.
Important point: You'd need to use floating-point from the creation of the original image (and I don't think it's even possible to save such an image as PNG; you might have to use TIFF). An image that was already premultiplied in an integral type has already lost that precision; there is no getting it back.
The zero-alpha case is the extreme version of this, to such an extent that even floating-point cannot help you. Anything times zero (alpha) is zero, and there is no recovering the original unpremultiplied value from that point.
Pre-multiplying alpha with an integer color type is a lossy operation: data is destroyed during the quantization (rounding to 8 bits).
Since some data is destroyed by that rounding, there is no way to recover the exact original pixel color (except for some lucky values). You have to save the colors of your Photoshop image before you draw it into a bitmap context, and use that original color data rather than the premultiplied color data from the bitmap.
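A quick sketch mirroring the 8-bit arithmetic from the earlier answer, showing the round trip losing the original value:
def premultiply(c, a):
    # 8-bit premultiply, truncating the way an integral type does.
    return (c * a) // 255

def unpremultiply(c, a):
    # Naive recovery; undefined when a == 0, so return 0 there.
    return min(255, (c * 255) // a) if a else 0

print(unpremultiply(premultiply(245, 128), 128))  # prints 243, not 245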
I ran into this same problem when trying to read image data, render it to another image with CoreGraphics, and then save the result as non-premultiplied data. The solution that worked for me was to save a table containing the exact mapping CoreGraphics uses to map non-premultiplied data to premultiplied data. Then, estimate what the original unpremultiplied value would be with a multiply and floor() call. If the estimate and the result from the table lookup do not match, just check the value below the estimate and the one above it in the table for the exact match.
// Execute premultiply logic on a pixel split into RGBA components.
// For example, a pixel RGB (255, 0, 0) with A = 128
// would return (128, 0, 0) with A = 128.
static inline
uint32_t premultiply_bgra_inline(uint32_t red, uint32_t green, uint32_t blue, uint32_t alpha)
{
    const uint8_t* const restrict alphaTable = &extern_alphaTablesPtr[alpha * PREMULT_TABLEMAX];
    uint32_t result = (alpha << 24) | (alphaTable[red] << 16) | (alphaTable[green] << 8) | alphaTable[blue];
    return result;
}

static inline
int unpremultiply(const uint32_t premultRGBComponent, const float alphaMult, const uint32_t alpha)
{
    float multVal = premultRGBComponent * alphaMult;
    float floorVal = floor(multVal);
    uint32_t unpremultRGBComponent = (uint32_t)floorVal;
    assert(unpremultRGBComponent >= 0);
    if (unpremultRGBComponent > 255) {
        unpremultRGBComponent = 255;
    }
    // Pass the unpremultiplied estimated value through the
    // premultiply table again to verify that the result
    // maps back to the same rgb component value that was
    // passed in. It is possible that the result of the
    // multiplication is smaller or larger than the
    // original value, so this will either add or remove
    // one int value to the result rgb component to account
    // for the error possibility.
    uint32_t premultPixel = premultiply_bgra_inline(unpremultRGBComponent, 0, 0, alpha);
    uint32_t premultActualRGBComponent = (premultPixel >> 16) & 0xFF;
    if (premultRGBComponent != premultActualRGBComponent) {
        if ((premultActualRGBComponent < premultRGBComponent) && (unpremultRGBComponent < 255)) {
            unpremultRGBComponent += 1;
        } else if ((premultActualRGBComponent > premultRGBComponent) && (unpremultRGBComponent > 0)) {
            unpremultRGBComponent -= 1;
        } else {
            // This should never happen
            assert(0);
        }
    }
    return unpremultRGBComponent;
}
You can find the complete static table of values at this github link.
Note that this approach will not recover information "lost" when the original unpremultiplied pixel was premultiplied. But it does return the smallest unpremultiplied pixel that will become the given premultiplied pixel once run through the premultiply logic again. This is useful when the graphics subsystem only accepts premultiplied pixels (like CoreGraphics on OS X). If the graphics subsystem only accepts premultiplied pixels, then you are better off storing only the premultiplied pixels, since less space is consumed compared to also storing the unpremultiplied pixels.