Which color-key to use in a heat-map with a small range of values?

My data covers a small range, but I would still like to make the small differences between the data points visible in a heat-map. Which color-key is best to maximize color intensity (without generating a greyish map), and how do I set the range in pheatmap?

You didn't give enough information for an exact code example for your sample data, but something like the code below is one way to approach the problem. In terms of which actual colours to use, I recommend you play around to see what looks best; I have just substituted red and blue as placeholders.
pheatmap(yourdata, color = colorRampPalette(c("red", "blue"))(length(-12:12) - 1), breaks = -12:12)
The call to colorRampPalette(...)(n) generates the n colours, while breaks tells pheatmap where to cut the data; pheatmap expects one more break than colours. So let's say you wanted breaks every 0.2 from 0 to 1, you would modify it to be:
breaks=c(0,0.2,0.4,0.6,0.8,1.0)
You can play around with the break gradients to get something that works for your dataset.
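For instance, if your data only spans a narrow interval, a sketch like the one below spreads the whole palette across exactly that interval, so small differences stay visible instead of everything looking greyish. The data, range and colours here are just placeholders to illustrate the idea:
library(pheatmap)

# hypothetical small-range data; replace with your own matrix
yourdata <- matrix(runif(100, min = 4.8, max = 5.6), nrow = 10)

# 100 fine-grained breaks covering only the observed range,
# with one fewer colour than breaks, as pheatmap expects
my_breaks <- seq(min(yourdata), max(yourdata), length.out = 101)
my_cols   <- colorRampPalette(c("navy", "white", "firebrick"))(length(my_breaks) - 1)

pheatmap(yourdata, color = my_cols, breaks = my_breaks)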
This is my first attempt at answering a question on here, so please let me know if something above doesn't work for you, or if you are confused by what I've written.

Related

Calculating transform-origin of two overlapping elements

I have an image (represented as green) overlaying a box (represented as blue), and the image is going to be transform: scale()ing in size. When this happens I need all edges of the image to complete their transformation at the same time.
To do this I need to calculate the transform-origin based on where the image is located overtop of the bounding box, using JavaScript. Assume I know all the coordinates that getBoundingClientRect() provides, for both elements.
In the six examples below I’ve placed a red dot where the transform-origin percentages should intersect.
I just can’t figure out the math to get there. The closest I've come to finding an answer is with this question, but it's a little vague and I'm not sure I fully understand the answer itself. I would greatly appreciate help with this, and will happily provide more details if I'm missing something.
After fiddling around, I figured out the formula is:
(
(box.left - image.left) /
(image.width - box.width)
) * 100
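Put together as a rough JavaScript sketch (the selectors are placeholders, and the vertical formula is my assumption that the same relationship holds for top/height):
// hypothetical elements; replace the selectors with your own
const box   = document.querySelector('.box').getBoundingClientRect();
const image = document.querySelector('.image').getBoundingClientRect();

// horizontal origin, per the formula above
const originX = ((box.left - image.left) / (image.width - box.width)) * 100;
// vertical origin, assuming the analogous relationship for top/height
const originY = ((box.top - image.top) / (image.height - box.height)) * 100;

document.querySelector('.image').style.transformOrigin = `${originX}% ${originY}%`;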

Performance of changing the colour of a big GeoJSON using data-driven styling (compared to using setPaintProperty)

I have been rendering a FeatureCollection of polygons onto the map (in one GeoJSONLayer). Each polygon's GeoJSON is big (5 MB, 10 MB). With user interactions, the colours of the polygons are re-calculated and changed constantly. We are using the data-driven styling method and keeping the data in the properties of each feature, so the GeoJSONLayer has to call .setData(geojson) every time the data and colours change (they are kept in properties).
I find that this approach leads to performance issues, since the GeoJSON is big and calling .setData() is expensive.
I'm thinking that separating the GeoJSON source from the data, styling, and colouring, and calling setPaintProperty directly whenever the colours change, would be better.
Someone told me that .setData and .setPaintProperty do the same thing: both of them trigger re-rendering of the whole set of polygons.
Any advice on this would be appreciated.
Thanks a lot!
If I understand you correctly, you're asking which of these two is faster:
map.getSource(mysource).setData(mygeojson)
map.setPaintProperty(mylayer, 'fill-color', mydatadrivenproperty)
I haven't tested, but I'd assume the second is faster, because the first one has to:
Parse the GeoJSON
Convert it to vector tiles
Repaint
whereas the second just has to parse the property and repaint. Try them both out to see.
You may also consider a third way: keep a second layer that acts as a highlight, and update it by calling map.setFilter(mylayer, ...).
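As a rough sketch of that third option in Mapbox GL JS (the layer id, source id, the 'id' property and selectedIds are all made up for illustration):
// the base layer draws every polygon once; this extra layer re-draws only the
// "highlighted" features on top, so user interaction only has to change its filter
map.addLayer({
  id: 'polygons-highlight',            // hypothetical layer id
  type: 'fill',
  source: 'polygons',                  // hypothetical source id
  paint: { 'fill-color': '#ff0000' },
  filter: ['in', 'id', '']             // start with nothing highlighted
});

// later, when an interaction selects some feature ids:
map.setFilter('polygons-highlight', ['in', 'id', ...selectedIds]);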

Slider settings for GPUImage

I'm making an app which allows the user to apply GPUImage filters to still photos using a UISlider. I'd like for the slider to initially start at the zero point for each filter (i.e. the value at which none of the filter has been applied yet) and I'm wondering how this can be determined? I've used some of the values that are listed in the GPUImage documentation and for certain sliders they start at 0, but others it's hard to determine (and for some, the min and max values are way off for me). The values for something like GPUImagePosterizeFilter seem to be way off for me (set min to 1, max to 128 and initial to 1). I've also checked some of the values in the FilterShowcase test project which are different than the documentation, but still don't always start at 0. Am I just completely missing the point here? Or is there some setting I maybe have to turn on to be in line with the slider values?
Nope, there is no setting for this. All I can really recommend is that, to make this as efficient as possible, you use a switch statement in a single method and decide what to do based on the index of the currently selected filter.
From there, I would leave the min/max values of the slider the same, so that you don't have to animate from one calculated point to another when the filter changes, and mathematically convert the slider's value into units that the current filter understands, e.g. 0-1 --> 1-128.
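A minimal sketch of that switch-statement idea in Swift; the filter indices and value ranges below are illustrative assumptions, not GPUImage's real defaults:
// Map the slider's fixed 0-1 range into whatever units the currently
// selected filter expects.
func filterValue(forSlider sliderValue: Float, filterIndex: Int) -> Float {
    switch filterIndex {
    case 0:  // e.g. a posterize-style filter: 1...128 levels (assumed range)
        return 1.0 + sliderValue * 127.0
    case 1:  // e.g. a brightness-style filter: -1...1, where 0 means "no effect"
        return sliderValue * 2.0 - 1.0
    default: // filters that already use a 0...1 range
        return sliderValue
    }
}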
I think I might have been approaching this the wrong way. Rather than looking for a "zero point" on a filter, I think I should focus on applying the filter only when the user applies it, and trying to find a good starting point that is close to how the image looks without the filter for the initial value on the slider.

MATLAB: better histogram representation

This question is related to enter link description here.
I have this histogram, but as you can see it is very difficult to compare the bars. Is there any method to better represent the information for easier visual comparison? Thanks.
Perhaps you want a logarithmic Y- or X-axis. This is possible using a workaround that is explained here:
Why does my histogram become incorrect when I change the y-axis scaling to 'log'?
You cannot just use set(gca, 'YScale', 'log'), because the bars are then displayed incorrectly; they either become lines or disappear entirely.
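As a simple sketch of the general idea (not necessarily the exact workaround from the linked answer), you can compute the counts yourself and plot their logarithm; the data here is made up:
data = randn(1, 1000).^2;                  % hypothetical data; replace with your own
edges = linspace(min(data), max(data), 21);
counts = histcounts(data, edges);          % compute the histogram counts yourself
bar(edges(1:end-1), log10(counts + 1));    % plot log-counts so small bars stay comparable
ylabel('log_{10}(count + 1)');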

Pixel color matching estimate

For image scanning purposes, I'd like a pixel (which I can get from a UIImage) to match (for a certain percentage) to a pre-set color.
Say pink. When I scan the image for pixels that are pink, I want a function to return a percentage of how much the RGB value in the pixel looks like my pre-set RGB value. This way I'd like all (well, most) pink pixels to become 'visible' to me, not just exact matches.
Is anyone familiar with such an approach? How would you do something like this?
Thanks in advance.
UPDATE: thank you all for your answers so far. I accepted the answer from Damien Pollet because it helped me further, and I came to the conclusion that calculating the vector difference between two RGB colors does it perfectly for me (at this moment). It might need some tweaking over time, but for now I use the following (in Objective-C):
float difference = pow( pow((red1 - red2), 2) + pow((green1 - green2), 2) + pow((blue1 - blue2), 2), 0.5 );
If this difference is below 85, I accept the color as my target color. Since my algorithm needs no precision, I'm ok with this solution :)
UPDATE 2: in my search for more, I found the following URL, which might be quite (understatement) useful for you if you are looking for something similar.
http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
I would say just compute the vector difference to your target color, and check that its norm is less than some threshold. I suspect some color spaces are better suited than others, maybe HSL or L*a*b*, since they separate the brightness from the color hue itself, and so might represent a small perceptual difference by a smaller color vector...
Also, see this related question
Scientific answer: you should convert both colors to the LAB color space and calculate the Euclidean distance there. That value is also called deltaE.
The LAB space was developed (using test persons) for exactly that reason: so that different color pairs with equal distances in this space correspond to equal perceived color differences.
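A rough C sketch of that deltaE (CIE76) computation, assuming 8-bit sRGB inputs and the D65 white point; the constants follow the standard sRGB -> XYZ -> L*a*b* formulas, but double-check them against a reference before relying on this:
#include <math.h>

// Convert one 8-bit sRGB channel to linear light.
static double srgbToLinear(double c) {
    c /= 255.0;
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

static double labF(double t) {
    const double d = 6.0 / 29.0;
    return (t > d * d * d) ? cbrt(t) : t / (3.0 * d * d) + 4.0 / 29.0;
}

// sRGB (0-255) -> CIE L*a*b*, assuming the D65 reference white.
static void rgbToLab(double r8, double g8, double b8, double *L, double *a, double *b) {
    double r = srgbToLinear(r8), g = srgbToLinear(g8), bl = srgbToLinear(b8);
    double x = 0.4124 * r + 0.3576 * g + 0.1805 * bl;
    double y = 0.2126 * r + 0.7152 * g + 0.0722 * bl;
    double z = 0.0193 * r + 0.1192 * g + 0.9505 * bl;
    double fx = labF(x / 0.95047), fy = labF(y / 1.0), fz = labF(z / 1.08883);
    *L = 116.0 * fy - 16.0;
    *a = 500.0 * (fx - fy);
    *b = 200.0 * (fy - fz);
}

// CIE76 deltaE: Euclidean distance between two colors in L*a*b*.
double deltaE76(double r1, double g1, double b1, double r2, double g2, double b2) {
    double L1, a1, bb1, L2, a2, bb2;
    rgbToLab(r1, g1, b1, &L1, &a1, &bb1);
    rgbToLab(r2, g2, b2, &L2, &a2, &bb2);
    return sqrt((L1 - L2) * (L1 - L2) + (a1 - a2) * (a1 - a2) + (bb1 - bb2) * (bb1 - bb2));
}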
However, it sounds like you are not looking to match a specific color, but rather a color range (let's say all skin tones). That might require more user input than just a reference color + a deltaE tolerance:
a reference color with 3 tolerances for hue, saturation and brightness
a cloud of reference color samples
...