SceneKit pass uniform vector to shader modifiers - Swift

I'm trying to pass a GLKVector4 to a shader that should receive it as a vec4. I'm using a fragment shader modifier:
material.shaderModifiers = [ SCNShaderModifierEntryPoint.fragment: shaderModifier ]
where shaderModifier is:
// color changes
uniform float colorModifier;
uniform vec4 colorOffset;
vec4 color = _output.color;
color = color + colorOffset;
color = color + vec4(0.0, colorModifier, 0.0, 0.0);
_output.color = color;
(I'm simply adding a color offset.) I've tried:
material.setValue(GLKVector4(v: (250.0, 0.0, 0.0, 0.0)), forKey: "colorOffset")
which doesn't work (no offset is added and the shader uses the default value, which is (0, 0, 0, 0)). The same happens if I replace GLKVector4 with SCNVector4.
Following this I've also tried:
let points: [float2] = [float2(250.0), float2(0.0), float2(0.0), float2(0.0)]
material.setValue(NSData(bytes: points, length: points.count * sizeof(float2)), forKey: "colorOffset")
However, I can pass a float value to the uniform colorModifier easily by doing:
material.setValue(250.0, forKey: "colorModifier")
and that will increase the green channel as expected.

You have to use NSValue, which has a convenience initializer for SCNVector4, so:
let v = SCNVector4(x: 250.0, y: 0.0, z: 0.0, w: 0.0)
material.setValue(NSValue(scnVector4: v), forKey: "colorOffset")
It'd be too good if SceneKit could handle its own types directly...
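For completeness, here is a minimal sketch of the whole round trip in current Swift syntax; material is assumed to be an SCNMaterial already attached to your geometry:
import SceneKit

// Fragment shader modifier: adds a uniform offset plus a green boost.
let shaderModifier = """
uniform float colorModifier;
uniform vec4 colorOffset;
_output.color = _output.color + colorOffset + vec4(0.0, colorModifier, 0.0, 0.0);
"""
material.shaderModifiers = [SCNShaderModifierEntryPoint.fragment: shaderModifier]

// Scalars can be passed as-is; vectors must be boxed in NSValue.
material.setValue(250.0, forKey: "colorModifier")
material.setValue(NSValue(scnVector4: SCNVector4(x: 250.0, y: 0.0, z: 0.0, w: 0.0)),
                  forKey: "colorOffset")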

How to set the saturation level of an entire color channel in Unity

I would like to set the saturation of an entire color channel in my main camera. The closest option I've found was the Hue vs. Sat(uration) grading curve. In the background of the scene is a palm tree that is colored teal. I want the green level of the tree to still show. Same with the tops of the grass in the foreground: they're closer to yellow than green, but I'd still want to see the little bit of green value they have.
I have been searching the Unity documentation and the Asset Store for a possible 3rd-party shader for weeks, but have come up empty-handed. My current result is the best I could come up with; any help would be greatly appreciated. Thank you.
SOLVED by the check-marked answer. Just wanted to share what the results look like for anyone who stumbles across this issue in the future. Compare the original screenshot, where the palm tree in the background and the grass tops in the foreground are just black and white, to the after screenshot. Full control of RGB saturation in the scene!
Below is a postprocessing shader intended to let you set the saturation of each color channel.
It first converts the original pixel color to hue, saturation, and luminance. The hue is then expanded back to its fully saturated, neutral-luminance RGB form. That RGB is multiplied channel-wise by the per-channel saturation factors to compute the RGB of the new hue. The magnitude of that RGB, multiplied by the original saturation, gives the new saturation. The new hue and saturation are recombined with the original luminance to compute the new color.
Shader "Custom/ChannelSaturation" {
    Properties {
        _MainTex("Base", 2D) = "white" {}
        _rSat("Red Saturation", Range(0, 1)) = 1
        _gSat("Green Saturation", Range(0, 1)) = 1
        _bSat("Blue Saturation", Range(0, 1)) = 1
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform sampler2D _MainTex;
            float _rSat;
            float _gSat;
            float _bSat;

            /*
               source: modified version of https://www.shadertoy.com/view/MsKGRW
               written # https://gist.github.com/hiroakioishi/c4eda57c29ae7b2912c4809087d5ffd0
            */
            float3 rgb2hsl(float3 c) {
                float epsilon = 0.00000001;
                float cmin = min(c.r, min(c.g, c.b));
                float cmax = max(c.r, max(c.g, c.b));
                float cd = cmax - cmin;
                float3 hsl = float3(0.0, 0.0, 0.0);
                hsl.z = (cmax + cmin) / 2.0;
                hsl.y = lerp(cd / (cmax + cmin + epsilon),
                             cd / (epsilon + 2.0 - (cmax + cmin)),
                             step(0.5, hsl.z));
                float3 a = float3(1.0 - step(epsilon, abs(cmax - c)));
                a = lerp(float3(a.x, 0.0, a.z), a, step(0.5, 2.0 - a.x - a.y));
                a = lerp(float3(a.x, a.y, 0.0), a, step(0.5, 2.0 - a.x - a.z));
                a = lerp(float3(a.x, a.y, 0.0), a, step(0.5, 2.0 - a.y - a.z));
                hsl.x = dot(float3(0.0, 2.0, 4.0) + ((c.gbr - c.brg) / (epsilon + cd)), a);
                hsl.x = (hsl.x + (1.0 - step(0.0, hsl.x)) * 6.0) / 6.0;
                return hsl;
            }

            /*
               source: modified version of https://stackoverflow.com/a/42261473/1092820
            */
            float3 hsl2rgb(float3 c) {
                float3 rgb = clamp(abs(fmod(c.x * 6.0 + float3(0.0, 4.0, 2.0), 6.0) - 3.0) - 1.0,
                                   0.0, 1.0);
                return c.z + c.y * (rgb - 0.5) * (1.0 - abs(2.0 * c.z - 1.0));
            }

            float4 frag(v2f_img i) : COLOR {
                float3 sat = float3(_rSat, _gSat, _bSat);
                float4 c = tex2D(_MainTex, i.uv);
                float3 hslOrig = rgb2hsl(c.rgb);
                float3 rgbFullSat = hsl2rgb(float3(hslOrig.x, 1, .5));
                float3 diminishedrgb = rgbFullSat * sat;
                float diminishedHue = rgb2hsl(diminishedrgb).x;
                float diminishedSat = hslOrig.y * length(diminishedrgb);
                float3 mix = float3(diminishedHue, diminishedSat, hslOrig.z);
                float3 newc = hsl2rgb(mix);
                float4 result = c;
                result.rgb = newc;
                return result;
            }
            ENDCG
        }
    }
}
If you're using URP (Universal Render Pipeline), which is recommended, you can create a new forward renderer asset, assign the shader to that asset, and configure it appropriately. Further information, including diagrams, can be found in the official Unity tutorial on custom render passes with URP.
If you aren't using URP, you have other options. You could attach the shader to specific materials, or add the script below (from Wikibooks) to the camera's GameObject to apply a material using the above shader as a postprocessing effect:
using System;
using UnityEngine;

[RequireComponent(typeof(Camera))]
[ExecuteInEditMode]
public class PostProcessingEffectScript : MonoBehaviour {
    public Material material;

    void OnEnable()
    {
        if (null == material || null == material.shader ||
            !material.shader.isSupported)
        {
            enabled = false;
        }
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, material);
    }
}
If you use the postprocessing effect, you will want to render the things you want to exclude from the effect with a different camera, then put everything together. However, this is a bit out of scope for this answer.
My best guess would be to use a custom shader or camera FX that would give you control over each channel.
Hope that helped ;)

How to render individual pixels for one layer of a 3DTexture in a framebuffer?

I have a 4x4x4 3DTexture which I am initializing and showing correctly to color my 4x4x4 grid of vertices (see the attached red grid with one white pixel at 0,0,0).
However, when I render the 4 layers in a framebuffer (all four at one time using gl.COLOR_ATTACHMENT0 --> gl.COLOR_ATTACHMENT3), only four of the sixteen pixels on a layer are successfully rendered by my fragment shader (to be turned green).
When I only do one layer, with gl.COLOR_ATTACHMENT0, the same 4 pixels show up correctly altered for that one layer, and the other 3 layers keep their original color unchanged. When I change gl.viewport(0, 0, size, size) (size = 4 in this example) to something else, like the whole screen or sizes other than 4, then different pixels are written, but never more than 4. My goal is to individually specify all 16 pixels of each layer precisely. I'm using colors for now, as a learning experience, but the texture is really for position and velocity information for each vertex for a physics simulation. I'm assuming (faulty assumption?) that with 64 points/vertices, I'm running the vertex shader and the fragment shader 64 times each, coloring one pixel per invocation.
I've removed all but the vital code from the shaders. I've left the JavaScript unaltered. I suspect my problem is initializing and passing the array of vertex positions incorrectly.
// Set x,y position coordinates to be used to extract data from one plane of our data cube.
// Remember, we handle z as one layer of our cube, which is composed of a stack of x-y planes.
const oneLayerVertices = new Float32Array(size * size * 2);
count = 0;
for (var j = 0; j < size; j++) {
  for (var i = 0; i < size; i++) {
    oneLayerVertices[count] = i;
    count++;
    oneLayerVertices[count] = j;
    count++;
    //oneLayerVertices[count] = 0;
    //count++;
    //oneLayerVertices[count] = 0;
    //count++;
  }
}

const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  position: {
    numComponents: 2,
    data: oneLayerVertices,
  },
});
And then I'm using the bufferInfo as follows:
gl.useProgram(computeProgramInfo.program);
twgl.setBuffersAndAttributes(gl, computeProgramInfo, bufferInfo);
gl.viewport(0, 0, size, size); // remember size = 4

outFramebuffers.forEach((fb, ndx) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.drawBuffers([
    gl.COLOR_ATTACHMENT0,
    gl.COLOR_ATTACHMENT1,
    gl.COLOR_ATTACHMENT2,
    gl.COLOR_ATTACHMENT3
  ]);

  const baseLayerTexCoord = (ndx * numLayersPerFramebuffer);
  console.log("My baseLayerTexCoord is " + baseLayerTexCoord);
  twgl.setUniforms(computeProgramInfo, {
    baseLayerTexCoord,
    u_kernel: [
      0, 0, 0,
      0, 0, 0,
      0, 0, 0,

      0, 0, 1,
      0, 0, 0,
      0, 0, 0,

      0, 0, 0,
      0, 0, 0,
      0, 0, 0,
    ],
    u_position: inPos,
    u_velocity: inVel,
    loopCounter: loopCounter,
    numLayersPerFramebuffer: numLayersPerFramebuffer
  });
  gl.drawArrays(gl.POINTS, 0, 16);
});
VERTEX SHADER:
calc_vertex:
const compute_vs = `#version 300 es
precision highp float;

in vec4 position;

void main() {
  gl_Position = position;
}
`;
FRAGMENT SHADER:
calc_fragment:
const compute_fs = `#version 300 es
precision highp float;

out vec4 ourOutput[4];

void main() {
  ourOutput[0] = vec4(0,1,0,1);
  ourOutput[1] = vec4(0,1,0,1);
  ourOutput[2] = vec4(0,1,0,1);
  ourOutput[3] = vec4(0,1,0,1);
}
`;
I’m not sure what you’re trying to do and what you think the positions will do.
You have 2 options for GPU simulation in WebGL2
use transform feedback.
In this case you pass in attributes and generate data in buffers. Effectively you have in attributes and out attributes, and generally you only run the vertex shader. To put it another way, your varyings, the output of your vertex shader, get written to a buffer. So you have at least 2 sets of buffers, currentState and nextState, and your vertex shader reads attributes from currentState and writes them to nextState.
There is an example of writing to buffers via transform feedback here, though that example only uses transform feedback at the start to fill buffers once.
use textures attached to framebuffers
In this case, you similarly have 2 textures, currentState and nextState. You set nextState to be your render target and read from currentState to generate the next state.
The difficulty is that you can only render to textures by outputting primitives in the vertex shader. If currentState and nextState are 2D textures, that's trivial: just output a -1.0 to +1.0 quad from the vertex shader and all pixels in nextState will be rendered to.
If you're using a 3D texture, it's the same thing except you can only render to 4 layers at a time (well, gl.getParameter(gl.MAX_DRAW_BUFFERS) layers), so you'd have to do something like
for (let layer = 0; layer < numLayers; layer += 4) {
  // setup framebuffer to use these 4 layers
  gl.drawXXX(...);  // draw to 4 layers
}
or better
// at init time
const fbs = [];
for (let layer = 0; layer < numLayers; layer += 4) {
  fbs.push(createFramebufferForThese4Layers(layer));
}

// at draw time
fbs.forEach((fb, ndx) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.drawXXX(...);  // draw to 4 layers
});
I'm guessing multiple draw calls are slower than one draw call, so another solution is to instead treat a 2D texture as a 3D array and calculate texture coordinates appropriately.
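To illustrate the coordinate math (the language is incidental here; only the arithmetic matters), below is a tiny Swift sketch of one possible tiling, with the z slices of a size^3 volume laid side by side in a (size * size) x size 2D texture; the function name is hypothetical:
// Hypothetical layout: slice z occupies columns [z*size, z*size + size)
// of a (size*size) x size 2D texture.
func texelForVoxel(x: Int, y: Int, z: Int, size: Int) -> (u: Int, v: Int) {
    return (u: z * size + x, v: y)
}

// Example: in a 4x4x4 volume, voxel (1, 2, 3) lands at texel (13, 2).
let t = texelForVoxel(x: 1, y: 2, z: 3, size: 4)  // (u: 13, v: 2)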
I don't know which is better. If you're simulating particles and each one only needs to look at its own currentState, then transform feedback is easier. If you need each particle to be able to look at the state of other particles, in other words you need random access to all the data, then your only option is to store the data in textures.
As for positions, I don't understand your code. Positions define primitives, either POINTS, LINES, or TRIANGLES, so how does passing integer X, Y values into your vertex shader help you define POINTS, LINES, or TRIANGLES?
It looks like you're trying to use POINTS, in which case you need to set gl_PointSize to the size of the point you want to draw (1.0), and you need to convert those positions into clip space:
gl_Position = vec4((position.xy + 0.5) / resolution, 0, 1);
where resolution is the size of the texture.
But doing it this way will be slow. It's much better to just draw a full-size (-1 to +1) clip space quad. For every pixel in the destination, the fragment shader will be called. gl_FragCoord.xy will be the location of the center of the pixel currently being rendered, so for the first pixel in the bottom left corner, gl_FragCoord.xy will be (0.5, 0.5). The pixel to the right of that will be (1.5, 0.5), the next (2.5, 0.5), and so on. You can use that value to calculate how to access currentState. Assuming a 1:1 mapping, the easiest way would be:
int n = numberOfLayerThatsAttachedToCOLOR_ATTACHMENT0;
vec4 currentStateValueForLayerN = texelFetch(
    currentStateTexture, ivec3(gl_FragCoord.xy, n + 0), 0);
vec4 currentStateValueForLayerNPlus1 = texelFetch(
    currentStateTexture, ivec3(gl_FragCoord.xy, n + 1), 0);
vec4 currentStateValueForLayerNPlus2 = texelFetch(
    currentStateTexture, ivec3(gl_FragCoord.xy, n + 2), 0);
...
vec4 nextStateForLayerN = computeNextStateFromCurrentState(currentStateValueForLayerN);
vec4 nextStateForLayerNPlus1 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus1);
vec4 nextStateForLayerNPlus2 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus2);
...
outColor[0] = nextStateForLayerN;
outColor[1] = nextStateForLayerNPlus1;
outColor[2] = nextStateForLayerNPlus2;
...
I don't know if you needed this, but just to test, here's a simple example that renders a different color to every pixel of a 4x4x4 texture and then displays them.
const pointVS = `#version 300 es
uniform int size;
uniform highp sampler3D tex;
out vec4 v_color;

void main() {
  int x = gl_VertexID % size;
  int y = (gl_VertexID / size) % size;
  int z = gl_VertexID / (size * size);
  v_color = texelFetch(tex, ivec3(x, y, z), 0);
  gl_PointSize = 8.0;
  vec3 normPos = vec3(x, y, z) / float(size);
  gl_Position = vec4(
      mix(-0.9, 0.6, normPos.x) + mix(0.0, 0.3, normPos.y),
      mix(-0.6, 0.9, normPos.z) + mix(0.0, -0.3, normPos.y),
      0,
      1);
}
`;

const pointFS = `#version 300 es
precision highp float;
in vec4 v_color;
out vec4 outColor;

void main() {
  outColor = v_color;
}
`;

const rtVS = `#version 300 es
in vec4 position;

void main() {
  gl_Position = position;
}
`;

const rtFS = `#version 300 es
precision highp float;
uniform vec2 resolution;
out vec4 outColor[4];

void main() {
  vec2 xy = gl_FragCoord.xy / resolution;
  outColor[0] = vec4(1, 0, xy.x, 1);
  outColor[1] = vec4(0.5, xy.yx, 1);
  outColor[2] = vec4(xy, 0, 1);
  outColor[3] = vec4(1, vec2(1) - xy, 1);
}
`;
function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');
  if (!gl) {
    return alert('need webgl2');
  }

  const pointProgramInfo = twgl.createProgramInfo(gl, [pointVS, pointFS]);
  const rtProgramInfo = twgl.createProgramInfo(gl, [rtVS, rtFS]);

  const size = 4;
  const numPoints = size * size * size;
  const tex = twgl.createTexture(gl, {
    target: gl.TEXTURE_3D,
    width: size,
    height: size,
    depth: size,
  });

  const clipspaceFullSizeQuadBufferInfo = twgl.createBufferInfoFromArrays(gl, {
    position: {
      data: [
        -1, -1,
         1, -1,
        -1,  1,
        -1,  1,
         1, -1,
         1,  1,
      ],
      numComponents: 2,
    },
  });

  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  for (let i = 0; i < 4; ++i) {
    gl.framebufferTextureLayer(
        gl.FRAMEBUFFER,
        gl.COLOR_ATTACHMENT0 + i,
        tex,
        0,  // mip level
        i,  // layer
    );
  }
  gl.drawBuffers([
    gl.COLOR_ATTACHMENT0,
    gl.COLOR_ATTACHMENT1,
    gl.COLOR_ATTACHMENT2,
    gl.COLOR_ATTACHMENT3,
  ]);
  gl.viewport(0, 0, size, size);
  gl.useProgram(rtProgramInfo.program);
  twgl.setBuffersAndAttributes(
      gl,
      rtProgramInfo,
      clipspaceFullSizeQuadBufferInfo);
  twgl.setUniforms(rtProgramInfo, {
    resolution: [size, size],
  });
  twgl.drawBufferInfo(gl, clipspaceFullSizeQuadBufferInfo);

  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.drawBuffers([gl.BACK]);
  gl.useProgram(pointProgramInfo.program);
  twgl.setUniforms(pointProgramInfo, {
    tex,
    size,
  });
  gl.drawArrays(gl.POINTS, 0, numPoints);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>

Multiple camera orbits in Paraview

So I need the camera to orbit my data multiple times. I thought this would be quite easy, but I could not figure it out. Double-clicking the camera sequence in the Animation View allowed me to add another path, but it added a default path that was different from the orbit. Manually (and painfully) copying the path parameters over did not work either. Any ideas on how to do this?
I know this is quite some time later, but I was stuck on this same need. I ended up tracing to see how a follow-path camera cue worked in Python code, and then I stacked a few of those keyframes in a for loop, making sure I updated the KeyTime of each frame. This gave me an animation that orbited the set focal point for as many orbits as I did loop iterations.
https://discourse.paraview.org/t/issues-with-multiple-orbit-laps-in-single-animation/11371
from paraview.simple import *

anim = GetAnimationScene()
renderView1 = GetActiveViewOrCreate("RenderView")

cameraAnimationCue1 = CameraAnimationCue()
#cameraAnimationCue1 = GetCameraTrack(view=rv)
cameraAnimationCue1.Mode = 'Path-based'
cameraAnimationCue1.AnimatedProxy = renderView1

# create a new key frame for each orbit
n = 3
for i in range(n):
    keyFrameN = CameraKeyFrame()
    keyFrameN.Position = [-6.6921304299024635, 0.0, 0.0]
    keyFrameN.FocalPoint = [1e-20, 0.0, 0.0]
    keyFrameN.ViewUp = [0.0, 0.0, 1.0]
    keyFrameN.ParallelScale = 1.7320508075688772
    keyFrameN.PositionPathPoints = [
        0.0, -5.0, 0.0,
        2.938926261462365, -4.045084971874736, 0.0,
        4.755282581475766, -1.545084971874737, 0.0,
        4.755282581475766, 1.5450849718747361, 0.0,
        2.938926261462365, 4.045084971874735, 0.0,
        1.3322676295501878e-15, 4.9999999999999964, 0.0,
        -2.9389262614623624, 4.045084971874735, 0.0,
        -4.755282581475763, 1.5450849718747368, 0.0,
        -4.755282581475763, -1.5450849718747341, 0.0,
        -2.9389262614623632, -4.045084971874731, 0.0]
    keyFrameN.FocalPathPoints = [0.0, 0.0, 0.0]
    keyFrameN.ClosedPositionPath = 1
    keyFrameN.KeyTime = i / n
    cameraAnimationCue1.KeyFrames.append(keyFrameN)

# ending scale
keyFrame9333 = CameraKeyFrame()
keyFrame9333.KeyTime = 1.0
keyFrame9333.Position = [-6.6921304299024635, 0.0, 0.0]
keyFrame9333.FocalPoint = [1e-20, 0.0, 0.0]
keyFrame9333.ViewUp = [0.0, 0.0, 1.0]
keyFrame9333.ParallelScale = 1.7320508075688772

# initialize the animation track
cameraAnimationCue1.KeyFrames.append(keyFrame9333)
anim.Cues.append(cameraAnimationCue1)

Metal shader not working as expected

If I run the following vertex shader in Metal/Swift I get a nice rectangle on the screen:
vertex Vertex vertexShader(uint k [[ vertex_id ]],
                           device float2* position [[buffer(1)]]) {
    Vertex output;
    float2 pos = position[k];
    output.position = float4(pos, 0, 1);
    return output;
}
//position [0.0, 0.0, 0.5, 0.0, 0.0, 0.5, 0.5, 0.5]
//indexList [0, 1, 2, 2, 1, 3]
Now if I run the following I get a blank screen:
vertex Vertex vertexShader(uint k [[ vertex_id ]],
                           device float3* position [[buffer(1)]]) {
    Vertex output;
    float3 pos = position[k];
    output.position = float4(pos, 1);
    return output;
}
//position [0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.0]
//indexList [0, 1, 2, 2, 1, 3]
It seems to me these should produce identical results. What am I missing?
How exactly are you filling the buffer associated with index 1 in your app code?
I suspect you're just supplying an array of floats. Well, float3 is not packed. Its layout is not the same as 3 floats. There's padding. Its size is actually the same as float4 or 4 floats.
Probably, the simplest fix is to declare position as a pointer to packed_float3.
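To make the padding concrete, here is a small Swift-side sketch (an illustration under the answer's assumption, not the asker's actual setup code); SIMD3<Float> in Swift has the same 16-byte stride as Metal's float3, while a flat [Float] array is tightly packed:
import simd

let floatStride  = MemoryLayout<Float>.stride         // 4 bytes
let float3Stride = MemoryLayout<SIMD3<Float>>.stride  // 16 bytes, not 12: one float of padding

// A flat [Float] array (12 floats = 48 bytes) only covers three 16-byte
// float3 slots, so every vertex after the first reads shifted data.
let flat: [Float] = [0.0, 0.0, 0.0,  0.5, 0.0, 0.0,
                     0.0, 0.5, 0.0,  0.5, 0.5, 0.0]

// Either declare the shader parameter as `device packed_float3*`
// (12-byte stride, which matches `flat`), or build the buffer from
// SIMD3<Float> so the CPU side matches float3's padded layout:
let padded: [SIMD3<Float>] = [
    SIMD3(0.0, 0.0, 0.0), SIMD3(0.5, 0.0, 0.0),
    SIMD3(0.0, 0.5, 0.0), SIMD3(0.5, 0.5, 0.0),
]
// device.makeBuffer(bytes: padded,
//                   length: padded.count * MemoryLayout<SIMD3<Float>>.stride,
//                   options: [])
With packed_float3 in the shader signature, the original flat array works unchanged.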

Incorrect colors implementation OpenGL ES2 - iOS

I have a very simple OpenGL ES example similar to NeHe's: http://nehe.gamedev.net/tutorial/ios_lesson_02__first_triangle/50001/
As shown there, the triangle should be filled with three colors - red, blue, green.
Instead, in my app I always get a triangle almost completely filled with black; only a small area around the top vertex is filled with green and a small area around the bottom right is filled with red ... and there is no blue at all.
The first question is: why do the colors not interpolate in the middle of my triangle, and why is the blue color not visible at all?
Any changes to my colors array have no effect, e.g. when I try to make the triangle white the colors do not change ... meanwhile, if I change the Z coordinate in the positions array then I can see the blue color.
The second question is: why do changes to colors do nothing, while changes to positions change the color instead?
It seems like I made one stupid mistake somewhere here, but I can't catch it.
These are the vertex/color arrays:
const float colors[] = { // this does not work, triangle still black-green-red
    1.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0
};
const float positions[] = { // if I change the 3rd index to 1.0 then I will see the blue color
    -0.5, -0.5, 0.0, 1.0,
     0.0,  0.5, 0.0, 1.0,
     0.5, -0.5, 0.0, 1.0
};
This is the VBO setup:
- (BOOL)setupVBO
{
    BOOL success = YES;

    glGenBuffers(1, &_positionBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _positionBuffer);
    glBufferData(
        GL_ARRAY_BUFFER,
        sizeof(positions) * sizeof(float),
        &positions[0],
        GL_STATIC_DRAW);

    glGenBuffers(1, &_colorBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _colorBuffer);
    glBufferData(
        GL_ARRAY_BUFFER,
        sizeof(colors) * sizeof(float),
        &colors[0],
        GL_STATIC_DRAW);

    return success;
}
Render:
- (void)render:(CADisplayLink*)displayLink
{
    glClearColor(0.5, 0.5, 0.5, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);

    glBindBuffer(GL_ARRAY_BUFFER, _positionBuffer);
    glVertexAttribPointer(_positionSlot, 4, GL_FLOAT, GL_FALSE, 0, NULL);

    glBindBuffer(GL_ARRAY_BUFFER, _colorRenderBuffer);
    glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, 0, NULL);

    glDrawArrays(GL_TRIANGLES, 0, 3);
    [_glContext presentRenderbuffer:GL_RENDERBUFFER];
}
Thanks for any advice ...
OK, I found the issue :)
As a beginner in OpenGL, I just copy-pasted the code from an example and renamed some variables ... and did not catch that I mixed up the ColorBuffer variable (the color VBO, i.e. the actual color data of the drawn object) with the ColorRenderBuffer variable (the place in memory where GL renders the output). The fix is to bind _colorBuffer, not _colorRenderBuffer, before the glVertexAttribPointer call for the color slot in the render method.
A stupid mistake, and I hope nobody else makes the same one :)