Flutter/Dart: How to get a single color from a color gradient?

I am trying to build a widget that changes its background color from green to red over time, but in a fluid way, similar to a gradient background:
Not as a fixed gradient, though: the background should be fully green at the beginning, and as the time counts down it should slowly transition to yellow, then orange, then red at the end.
I want the color to update every 10 milliseconds so the change is not abrupt. I am handling the updates with a timer that calculates how much time has passed relative to the start time and updates the color accordingly.
Now, is there any way to get a concrete color from a gradient-like object? Ideally I could just extract the color of a gradient at a specific point (the fraction time_passed/max_time) and use it as the full background.
Or should I actually use color codes and just increase the value of the color code every 10 ms? That does not seem very graceful.

OK, I actually found two solutions:
1. Use the Color.lerp(Color a, Color b, double t) function
It creates a gradient between a and b and returns the color that sits at point t (a value between 0 and 1) on that scale.
This is basically what I wanted, but there is a problem: it doesn't necessarily pass through the colors you want in between; the midpoint is more of a brown-ish mix of the two.
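To see why, here is a minimal sketch (plain Python, not Flutter code) of the componentwise interpolation that a straight RGB lerp performs:

import_free_note = None  # no imports needed

def lerp_rgb(a, b, t):
    # Componentwise linear interpolation between two RGB tuples, t in [0, 1].
    return tuple(round(ca + (cb - ca) * t) for ca, cb in zip(a, b))

green, red = (0, 255, 0), (255, 0, 0)
print(lerp_rgb(green, red, 0.5))  # (128, 128, 0): a muddy olive, not a bright yellow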
2. Install the rainbow_color package
This was exactly what I needed: you can insert multiple custom colors, specify the range of values that you use (in my case 0.0 to 1.0, but it can also be integers), and it returns a gradient between only these colors:
var rb = Rainbow(
  spectrum: [
    Color(0xff1fff00),
    Color(0xffd0ff00),
    Color(0xffffaa00),
    Color(0xffffaa00),
    Color(0xffff6600)
  ],
  rangeStart: 1.0,
  rangeEnd: 0.0
);
return rb[remainingFraction];
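Conceptually, this kind of spectrum lookup is just piecewise linear interpolation between adjacent stops. A rough sketch of the idea (plain Python, not the package's actual implementation; the stops are the answer's hex colors as RGB tuples):

def spectrum_color(stops, t):
    # Map t in [0, 1] across the list of RGB stops, then lerp within the segment.
    if t <= 0.0:
        return stops[0]
    if t >= 1.0:
        return stops[-1]
    scaled = t * (len(stops) - 1)
    i = int(scaled)
    frac = scaled - i
    a, b = stops[i], stops[i + 1]
    return tuple(round(ca + (cb - ca) * frac) for ca, cb in zip(a, b))

stops = [(31, 255, 0), (208, 255, 0), (255, 170, 0), (255, 102, 0)]
print(spectrum_color(stops, 0.5))  # a yellow-orange midway through the spectrum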

Related

Flutter - How to change the color of a marker based on the magnitude of an int?

I want to create markers at each GPS point, colored on a scale from white to red that corresponds to how slow or fast the GPS data indicates the speed is at that point. However, since Color.fromRGBO() requires const arguments, I'm having trouble getting the speed to change the color. How could I circumvent the Color class, or is there a better way to draw such a gradient than with Marker objects?
Welcome to Stack Overflow, Onur!
You can indeed use https://api.flutter.dev/flutter/dart-ui/Color/Color.fromARGB.html
const Color.fromARGB(
  int a,
  int r,
  int g,
  int b
)
with g = 0 and b = 0 (and a = 255, so the color is opaque) and use your scale (you need to know the maximum value, let's call it r_max, to normalize the range). So whenever you have an intensity value, let's call it r_intensity, you'd simply compute
const r = 255 * (r_intensity / r_max)
to get your color intensity. The result of r_intensity / r_max is always in the range [0, 1]; multiplying by 255 conforms with
r is red, from 0 to 255.
from the docs.
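A quick sketch of that computation (plain Python; intensity_color is a hypothetical helper, and the input is clamped for safety):

def intensity_color(r_intensity, r_max):
    # Map an intensity in [0, r_max] to (a, r, g, b) per the scheme above.
    r = round(255 * min(max(r_intensity, 0), r_max) / r_max)
    return (255, r, 0, 0)  # opaque; only the red channel scales with intensity

print(intensity_color(5.0, 10.0))  # (255, 128, 0, 0)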
Regarding the const issue: it sounds like you want to (heavily?) animate the color. Look at https://api.flutter.dev/flutter/animation/AnimationController-class.html and set the red color value in your widget's state.
Other than that, it should work to either:
- have a method that returns the new red value (and does the aforementioned computation), or
- simply use var r = ... instead of const r = ... and pass it to Color.
EDIT: looking at your image, the issue might stem from your int being null. Please make sure that your Color widget is not wrapped in any widget prefixed with const! This is likely the case here and produces the result you are seeing. Tested it on my side to confirm.

How to use GeoServer SLD style to serve a single-channel elevation raster ("gray" channel) as Mapbox Terrain-RGB tiles

I have an elevation raster layer in my GeoServer with a single channel ("gray").
The "gray" values is elevations values (signed int16).
I have 2 clients:
The first one is using that elevation values as is.
The second one expect to get [Mapbox Terrain-RGB format][1]
I do not want to convert the "gray scale" format to Mapbox Terrain-RGB format and hold duplicate data in the GeoServer.
I was thinking to use the SLD style and elements to map the elevation value to the appropriate RGB value (with gradient interpolation between discrete values).
For example:
<ColorMap>
  <ColorMapEntry color="#000000" quantity="-10000" />
  <ColorMapEntry color="#FFFFFF" quantity="1667721.5" />
</ColorMap>
It turns out that the above example does not span the full range of colors but rather creates gray values only.
That is, it seems to interpolate each channel (red, green, blue) independently of the others.
Any idea how to make it interpolate values like this: #000000, #000001, #000002, ..., #0000FF, #000100, ..., #0001FF, ..., #FFFFFF?
Tx.
[1]: https://docs.mapbox.com/data/tilesets/reference/mapbox-terrain-rgb-v1/
I'm trying to do the same with no luck, and I think it can't be done... Check this example. It's a "gradient" [-10000, -5000, -1000, -500 ... 100000000000000000, 5000000000000000000, 1000000000000000000] with the Mapbox color codification. The color progression/interpolation is anything but linear, so I think it can't be emulated in an SLD.
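For reference, Terrain-RGB packs elevation into the three bytes in a way that is non-linear from a ColorMap's point of view. A small Python sketch of the documented encoding (the inverse of elevation = -10000 + 0.1 * (R * 256^2 + G * 256 + B)):

def terrain_rgb(elevation_m):
    # Encode an elevation in meters into Mapbox Terrain-RGB channel values.
    v = round((elevation_m + 10000) / 0.1)
    return (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF

for e in (-10000, 0, 0.1, 500):
    print(e, terrain_rgb(e))
# Each 0.1 m step increments B, which wraps from 255 back to 0 every 25.6 m,
# so the per-channel progression is a sawtooth that linear interpolation
# between ColorMapEntry stops cannot reproduce.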
If you have the elevation data in the format you desire then that is the easiest option: it just works. However, if you want a more customized solution, here's what I've done for a project using the Mapbox Terrain-RGB format:
I have a scale of colors from dark blue to light blue to white.
I want to be able to specify how many steps are used from light blue to white (default is 10).
This code uses GDAL Python bindings. The following code snippet was used for testing.
It just outputs the color mapping to a GeoTIFF file.
To get values between 0 and 1, normalize the array; the snippet below subtracts the minimum and divides by the maximum.
You can use that value in the lookup table to get an RGB value.
If you're only interested in outputting the colors, you can ignore everything involving gdal_translate. The colors will automatically be stored in a single-band GeoTIFF. If you do want to re-use those colors, note that this version ignores alpha values (if present). You can use gdal_translate to add those. That code snippet is also available at my gist here.
import numpy as np
from osgeo import gdal

num_steps = 10  # number of color steps (the default mentioned above)

def get_color_map(num_steps):
    # Build the lookup table: one RGB row per step.
    colors = np.zeros((num_steps, 3), dtype=np.uint8)
    colors[:, 0] = np.linspace(0, 255, num_steps, dtype=np.uint8)
    colors[:, 1] = colors[::-1, 0]
    return colors

ds = gdal.Open('/Users/myusername/Desktop/raster.tif')
band = ds.GetRasterBand(1)  # assuming a single-band raster
arr = band.ReadAsArray().astype(np.float32)
arr -= arr.min()
arr /= arr.max()  # normalize values to the range 0 to 1
idx = np.clip((arr * (num_steps - 1)).astype(int), 0, num_steps - 1)  # integer indices into the lookup table

colors = get_color_map(num_steps)  # create the color lookup table
colors[0] = [0, 0, 0]  # set black for no data so it doesn't show up as gray in the final product

# Create a new GeoTIFF with the colors included. If you don't care about
# including the colors in the GeoTIFF, skip this.
cols = ds.RasterXSize
rows = ds.RasterYSize
out_ds = gdal.GetDriverByName('GTiff').Create('/Users/myusername/Desktop/raster_color.tif', cols, rows, 4)
out_ds.SetGeoTransform(ds.GetGeoTransform())
out_ds.SetProjection(ds.GetProjection())
for b in range(3):  # write the R, G and B planes from the lookup table
    out_ds.GetRasterBand(b + 1).WriteArray(colors[idx, b])
# Alpha channel to make "no data" pixels transparent (assuming 0 is "no data"
# and non-zero is data). You can remove this if your elevation values are never 0.
alpha = np.zeros((rows, cols), dtype=np.uint8)
alpha[arr != 0] = 255
out_ds.GetRasterBand(4).WriteArray(alpha)
out_ds.FlushCache()
This issue is also present in Rasterio when using a palette with multiple values. Here is an example.
However, if your raster has n dimensions or is a masked array, the flip operation can be tricky. Here's a solution based on one of the answers to this Stack Overflow question: How to vertically flip a 2D NumPy array?
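A minimal sketch of that flip; np.flip also works for masked arrays because it is implemented with slicing, which preserves the mask:

import numpy as np

arr = np.arange(6).reshape(2, 3)
flipped = np.flipud(arr)                  # vertical flip of a plain 2-D array
masked = np.ma.masked_equal(arr, 0)
flipped_masked = np.flip(masked, axis=0)  # same flip, mask preserved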

Extract black objects from color background

It is easy for human eyes to tell black from other colors. But how about computers?
I printed some color blocks on normal A4 paper. Since three kinds of ink compose a color image, cyan, magenta and yellow, I set the color of each block to C=20%, C=30%, C=40%, C=50%, with the other two colors at 0. That is the first column of my source image. So far, no black ink (the K of CMYK) is supposed to be printed. After that, I set each dot to K=100% with the other colors at 0 to print black dots.
You may find my image weird and awful. In fact, the image is magnified 30 times, and you can clearly see how the ink cheats our eyes. The color strips hamper my ability to recognize these black dots (each dot is printed as just one pixel at 800 dpi). Without the color background, I used to blur the image and run a Canny edge detector to extract the edges. However, with the color background added, simply converting to grayscale and running an edge detector cannot get good results because of the strips. How do my eyes manage to solve such problems?
I decided to check the brightness of the source image. I referred to this article and formula:
brightness = sqrt(0.299*R*R + 0.587*G*G + 0.114*B*B)
This brightness is closer to human perception, and it works very well on the yellow background because the brightness of yellow is the highest compared with cyan and magenta. But how can the cyan and magenta strips be made as bright as possible? The expected result is that all the strips disappear.
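A quick NumPy sketch of that formula, assuming an 8-bit RGB image array:

import numpy as np

def perceived_brightness(rgb):
    # rgb has shape (..., 3); compute per-pixel perceived brightness.
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    return np.sqrt(0.299 * r**2 + 0.587 * g**2 + 0.114 * b**2)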
[Images: more complicated samples for C=40% M=40%, C=40% Y=40%, Y=40% M=40%, and the FFT of the C=40% Y=40% brightness image]
Can anyone give me some hints to remove the color strips?
@natan I tried the FFT method you suggested, but I had no luck getting peaks on both the x and y axes. In order to plot the frequency as you did, I resized my image to a square.
I would convert the image to the HSV colour space and then use the Value channel. This basically separates colour and brightness information.
This is the 50% cyan image
Then you can just do a simple threshold to isolate the dots.
I just did this very quickly and I'm sure you could get better results. Maybe find contours in the image and then remove any contours with a small area, to filter out any remaining noise.
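A rough OpenCV sketch of that approach (the filename, the threshold of 80 and the minimum area of 2 are placeholders to tune):

import cv2

img = cv2.imread('dots.png')                 # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
v = hsv[:, :, 2]                             # Value channel: brightness only
# The dots are dark, so keep pixels whose Value falls below the threshold.
_, mask = cv2.threshold(v, 80, 255, cv2.THRESH_BINARY_INV)
# Optionally drop tiny specks by filtering contours on area.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
dots = [c for c in contours if cv2.contourArea(c) >= 2]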
After inspecting the images, I decided that a robust threshold would be simpler than anything else. For example, looking at the C=40%, M=40% photo, I first inverted the intensities so that black (the signal) becomes white, using just
im=(abs(255-im));
We can inspect its RGB histograms using this:
hist(reshape(single(im),[],3),min(single(im(:))):max(single(im(:))));
colormap([1 0 0; 0 1 0; 0 0 1]);
We see a large contribution at some middle intensity, whereas the "signal", which is now white, mostly sits at higher values. I then applied a simple threshold as follows:
thr = @(d) (max([min(max(d,[],1)) min(max(d,[],2))]));
for n=1:size(im,3)
    imt(:,:,n)=im(:,:,n).*uint8(im(:,:,n)>1.1*thr(im(:,:,n)));
end
imt=rgb2gray(imt);
and got rid of objects smaller than some typical area size
min_dot_area=20;
bw=bwareaopen(imt>0,min_dot_area);
imagesc(bw);
colormap(flipud(bone));
here's the result together with the original image:
The origin of this threshold is code I wrote that assumed sparse signals in the form of 2-D peaks or blobs on a noisy background. By sparse I mean that there's no pile-up of peaks. In that case, when projecting max(image) on the x or y axis (by max(im,[],1) or max(im,[],2)) you get a good measure of the background, because you take the minimal intensity of the max(im) vector.
If you want to look at this differently, consider the histogram of the image intensities. The background is supposed to be roughly a normal distribution around some intensity, and the signal should be higher than that intensity but with a much lower number of occurrences. By taking max(im) along one of the axes (x or y) you discover the maximal noise level.
You'll see that the threshold picks the point in the histogram where there is still some noise above it, but ALL the signal is above it too. That's why I adjusted it to 1.1*thr. Lastly, there are many fancier ways to obtain a robust threshold; this is a quick and dirty way that in my view is good enough...
Thanks to everyone for posting answers! After some searching and experimenting, I also came up with an adaptive method to extract these black dots from the color background. It seems that considering only the brightness cannot solve the problem perfectly; natan's method, which calculates and analyzes the RGB histogram, is more robust. Unfortunately, I still cannot obtain a robust threshold to extract the black dots in other color samples, because things get more and more unpredictable when we add deeper color (e.g. Cyan > 60) or mix two colors together (e.g. Cyan = 50, Magenta = 50).
One day, I googled "extract color" and TinEye's color extraction and Color Thief inspired me. Both are very cool applications, and the images processed by the former website are exactly what I want. So I decided to implement something similar on my own. The algorithm I used here is k-means clustering. Some other related keywords to search for are color palette, color quantization and dominant color.
First I apply a Gaussian filter to smooth the image.
GaussianBlur(img, img, Size(5, 5), 0, 0);
OpenCV has a kmeans function, which saved me a lot of coding time. I modified this code.
// Input data should be float32
Mat samples(img.rows * img.cols, 3, CV_32F);
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        for (int z = 0; z < 3; z++) {
            samples.at<float>(i + j * img.rows, z) = img.at<Vec3b>(i, j)[z];
        }
    }
}

// Select the number of clusters
int clusterCount = 4;
Mat labels;
int attempts = 1;
Mat centers;
kmeans(samples, clusterCount, labels,
       TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10, 0.1),
       attempts, KMEANS_PP_CENTERS, centers);

// Draw clustered result
Mat cluster(img.size(), img.type());
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        int cluster_idx = labels.at<int>(i + j * img.rows, 0);
        cluster.at<Vec3b>(i, j)[0] = centers.at<float>(cluster_idx, 0);
        cluster.at<Vec3b>(i, j)[1] = centers.at<float>(cluster_idx, 1);
        cluster.at<Vec3b>(i, j)[2] = centers.at<float>(cluster_idx, 2);
    }
}
imshow("clustered image", cluster);

// Check centers' RGB values
cout << centers;
After clustering, I convert the result to grayscale and find the darkest color, which is most likely the color of the black dots.
// Find the minimum value
cvtColor(cluster, cluster, CV_RGB2GRAY);
Mat dot = Mat::zeros(img.size(), CV_8UC1);
cluster.copyTo(dot);
int minVal = (int)dot.at<uchar>(dot.rows / 2, dot.cols / 2);  // at(row, col)
for (int i = 0; i < dot.rows; i += 3) {
    for (int j = 0; j < dot.cols; j += 3) {
        if ((int)dot.at<uchar>(i, j) < minVal) {
            minVal = (int)dot.at<uchar>(i, j);
        }
    }
}
inRange(dot, minVal - 5, minVal + 5, dot);
imshow("dot", dot);
Let's test two images:
[result image with clusterCount = 4]
[result image with clusterCount = 5]
One shortcoming of k-means clustering is that a single fixed clusterCount cannot be applied to every image. Clustering is also not very fast for larger images, which annoys me a lot. My dirty method for better real-time performance (on iPhone) is to crop 1/16 of the image and cluster that smaller area. Then I compare every pixel in the original image with each cluster center and pick the pixels that are nearest to the "black" center, as shown in the sketch below. I simply calculate the Euclidean distance between the two RGB colors.
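A small NumPy sketch of that nearest-center trick (centers would come from clustering the cropped region; the names are illustrative):

import numpy as np

def nearest_center_mask(img, centers, black_idx):
    # img: (H, W, 3) float array; centers: (K, 3); black_idx: index of the darkest center.
    # Euclidean RGB distance from every pixel to every center, shape (H, W, K).
    d = np.linalg.norm(img[:, :, None, :] - centers[None, None, :, :], axis=-1)
    # True where the nearest center is the dark one.
    return d.argmin(axis=-1) == black_idx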
A simple method is to just threshold all the pixels. Here is this idea expressed in pseudo code.
for each pixel in image
    if brightness < THRESHOLD
        pixel = BLACK
    else
        pixel = WHITE
Or, if you're always dealing with cyan, magenta and yellow backgrounds, then you might get better results with the criterion
if pixel.r < THRESHOLD and pixel.g < THRESHOLD and pixel.b < THRESHOLD
This method will only give good results for easy images where nothing except the black dots is too dark.
You can experiment with the value of THRESHOLD to find a good value for your images.
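The same idea sketched in NumPy, with both criteria side by side (THRESHOLD = 60 is just a starting point to tune):

import numpy as np

THRESHOLD = 60  # tune this for your images

def black_masks(rgb):
    # rgb has shape (H, W, 3), 8-bit.
    by_brightness = rgb.mean(axis=-1) < THRESHOLD       # first criterion: overall brightness
    by_channels = np.all(rgb < THRESHOLD, axis=-1)      # second criterion: every channel dark
    return by_brightness, by_channels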
I suggest converting to a chroma-based color space, like LCH, and adjusting simultaneous thresholds on lightness and chroma. Here is the result mask for L < 50 & C < 25 for the input image:
It seems you need adaptive thresholds, since different values work best for different areas of the image.
You may also use HSV or HSL as a color space, but they are less perceptually uniform than LCH, which is derived from Lab.
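A sketch of that mask with scikit-image (the filename is a placeholder; thresholds as in the answer):

from skimage import color, io

rgb = io.imread('dots.png')[..., :3]        # hypothetical input; drop alpha if present
lch = color.lab2lch(color.rgb2lab(rgb))     # channels: L (lightness), C (chroma), H (hue)
L, C = lch[..., 0], lch[..., 1]
mask = (L < 50) & (C < 25)                  # dark AND low-chroma, i.e. "black-ish"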

UIColor automatically becomes black

I'm having some trouble with UIColor.
I have a background color that is white.
The user can choose a background color in the view by clicking some buttons, and there is another button that goes to a color picker view.
When he comes back to the view, if he clicked the right button he should see his picked color.
And if he clicks on this button he should also see this color.
But he can't see it. There's still a black color showing.
When the user clicks on the picked color, the float values go from (for red, for example) r: 1.0, g: 0.0, b: 0.0 to r: 0.0, g: 0.0, b: 0.0.
I don't know what to do to keep the correct float values.
I hope I was clear enough.
Thanks :)
You must retain your UIColor *color holding the r, g, b values; otherwise it is autoreleased a bit later.
Color components (r, g, b) must be specified in the range 0.0 - 1.0.
Make sure that full red finally calculates to r = 1.0, g = 0, b = 0.
Store the current color in a float[4] array (r, g, b, alpha), or float[3] for RGB only.

iPhone colour image analysis

I am looking for some ideas about an approach that will let me analyze an image and determine how greenISH or brownISH or whiteISH it is... I am emphasizing ISH here because I am interested in capturing ALL the shades of these colours. So far, I have done the following:
I have my UIImage, I have a CGImageRef, and I actually have the colour of each pixel itself (its RGB and alpha); what I don't know is how to quantify this and determine all the green shades, blues, browns, yellows, purples, etc. So, I can process each and every pixel and determine its basic RGB, but I need some help in quantifying the colours over a whole image.
Thanks for your ideas...
Alex.
One fairly good solution is to switch from RGB colour space to one of the Y colour spaces, such as YUV, YCrCb or any of those. In all cases the Y channel represents brightness and the other two channels together represent colour, relative to brightness. You probably want to factor brightness out, possibly with the caveat that all colours below a certain darkness are to be excluded, so getting Y separately is a helpful first step in itself.
Converting from RGB to YUV is achieved with a simple linear combination. Straight from Wikipedia and a thousand other sources:
y = 0.299*r + 0.587*g + 0.114*b;
u = -0.14713*r - 0.28886*g + 0.436*b;
v = 0.615*r - 0.51499*g - 0.10001*b;
Assuming you're keeping r, g and b in the range [0, 1], your first test might be:
if(y < 0.05)
{
    // this colour is very dark, so it's considered to be as
    // far as we allow from any colour we're interested in
}
To decide how close your colour then is to, say, green, work out the u and v components of the green you're interested in, as a proportion of the y:
r = b = 0;
g = 1;
y = 0.299*r + 0.587*g + 0.114*b = 0.587;
u = -0.14713*r - 0.28886*g + 0.436*b = -0.28886;
v = 0.615*r - 0.51499*g - 0.10001*b = -0.51499;
proportionOfU = u / y = -0.4921;
proportionOfV = v / y = -0.8773;
Subsequently, work out the proportions of U and V for incoming colours and compare them (e.g. with 2D planar distance) to those you've computed for the reference colour. Closer values are more similar. How you scale and use that metric depends on your application.
Notice that as y goes toward 0, the computed proportions become increasingly less precise because of the limited range of the input data, and are undefined when y is 0. Conceptually that's because all colours look exactly the same when there's no light on them. Checking that y is above at least a certain minimum value is the pragmatic way of working around this issue. This also means that you're not going to get sensible results if you try to say "how black is this picture?", though again that's because of the ambiguity between a surface that doesn't reflect any light and a surface that doesn't have any light falling upon it.
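Putting the recipe together, a small Python sketch (pure green as the reference; the minimum-y cutoff is the pragmatic check described above):

import math

def yuv(r, g, b):
    # RGB in [0, 1] -> (y, u, v) with the coefficients above.
    y = 0.299*r + 0.587*g + 0.114*b
    u = -0.14713*r - 0.28886*g + 0.436*b
    v = 0.615*r - 0.51499*g - 0.10001*b
    return y, u, v

def distance_to_green(r, g, b, min_y=0.05):
    # 2-D planar distance between (u/y, v/y) of this colour and of pure green.
    y, u, v = yuv(r, g, b)
    if y < min_y:
        return float('inf')  # too dark to classify sensibly
    gy, gu, gv = yuv(0.0, 1.0, 0.0)
    return math.hypot(u/y - gu/gy, v/y - gv/gy)

print(distance_to_green(0.1, 0.8, 0.1))  # small: a strong green
print(distance_to_green(0.8, 0.1, 0.1))  # large: red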