Getting Tango's camera stream data - unity3d

I'm trying to get Tango's camera stream in order to combine a homemade AR kit with Tango.
I'm stuck at a point where everything works as intended in Tango's editor emulation, but not in the app pushed to the tablet.
The code I'm using is the following:
YUVTexture yuvTexture = m_tangoApplication.GetVideoOverlayTextureYUV();
Texture2D yTexture = yuvTexture.m_videoOverlayTextureY;
// m_videoOverlayTextureCr is not used by Tango yet for some reason
Texture2D uvTexture = yuvTexture.m_videoOverlayTextureCb;
// convert from YV12 to RGB
for (int i = 0; i < yTexture.height; ++i)
{
    for (int j = 0; j < yTexture.width; ++j)
    {
        Color yPixel = yTexture.GetPixel(j, i);
        Color uvPixel = uvTexture.GetPixel(j, i);
        m_texture.SetPixel(4 * j + 0, yTexture.height - i - 1, YUV2Color(yPixel.r, uvPixel.r, uvPixel.g));
        m_texture.SetPixel(4 * j + 1, yTexture.height - i - 1, YUV2Color(yPixel.g, uvPixel.r, uvPixel.g));
        m_texture.SetPixel(4 * j + 2, yTexture.height - i - 1, YUV2Color(yPixel.b, uvPixel.b, uvPixel.a));
        m_texture.SetPixel(4 * j + 3, yTexture.height - i - 1, YUV2Color(yPixel.a, uvPixel.b, uvPixel.a));
    }
}
YUV2Color (extracted from Tango's YUV2RGB Shader):
public static Color YUV2Color(float y_value, float u_value, float v_value)
{
    float r = y_value + 1.370705f * (v_value - 0.5f);
    float g = y_value - 0.698001f * (v_value - 0.5f) - (0.337633f * (u_value - 0.5f));
    float b = y_value + 1.732446f * (u_value - 0.5f);
    return new Color(r, g, b, 1f);
}
Has someone already solved this problem? I've seen a lot of posts related to it from when ITangoVideoOverlay was mostly used, but nothing with the current IExperimentalTangoVideoOverlay.
I've experimented with a lot of things; so far this is the closest I've got to what I expected... Any help would be highly appreciated.

You are using the Texture ID method to get the YUV texture color, which is not very common. An easier path would be to use the Raw Byte buffer method to get the color camera image. To do that:
On the TangoManager prefab, enable video overlay and select the Raw Byte method from the drop-down box.
Register for the ITangoVideoOverlay interface.
Convert the image buffer data from YUV to RGB. This part is essentially the YUV2Color function above, but fed with the bytes from TangoImageData.data (see the sketch below).
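A rough sketch of those three steps, reusing the YUV2Color helper from the question, might look like the following. Treat it as an outline rather than drop-in code: the class name is a placeholder, and the exact callback signature, field names and plane layout (here an NV21-style buffer with a full-resolution Y plane followed by interleaved V/U samples) may differ between Tango SDK versions.
// Sketch only -- Tango type/field names (namespace Tango, TangoApplication,
// TangoUnityImageData, NV21-style layout) are from memory and may need adjusting.
using Tango;
using UnityEngine;

public class ColorCameraListener : MonoBehaviour, ITangoVideoOverlay
{
    public Texture2D m_texture;                   // RGB output, sized width x height
    private TangoApplication m_tangoApplication;

    void Start()
    {
        m_tangoApplication = FindObjectOfType<TangoApplication>();
        m_tangoApplication.Register(this);        // subscribe to ITangoVideoOverlay callbacks
    }

    public void OnTangoImageAvailableEventHandler(TangoEnums.TangoCameraId cameraId,
                                                  TangoUnityImageData imageBuffer)
    {
        int width = (int)imageBuffer.width;
        int height = (int)imageBuffer.height;
        int uvOffset = width * height;            // start of the interleaved V/U plane

        for (int i = 0; i < height; ++i)
        {
            for (int j = 0; j < width; ++j)
            {
                // If the SDK exposes a stride field, use it here instead of width.
                float y = imageBuffer.data[i * width + j] / 255f;
                // One V/U pair covers a 2x2 block of Y samples (NV21 layout assumed).
                int uvIndex = uvOffset + (i / 2) * width + (j / 2) * 2;
                float v = imageBuffer.data[uvIndex] / 255f;
                float u = imageBuffer.data[uvIndex + 1] / 255f;
                // YUV2Color is the helper from the question, assumed accessible here.
                m_texture.SetPixel(j, height - i - 1, YUV2Color(y, u, v));
            }
        }
        m_texture.Apply();
    }
}
Per-pixel SetPixel is slow, so once the conversion looks right you would typically write into a Color32[] and upload it with a single SetPixels32 call instead.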

Related

SetPixel faster on mouse drag

Hi, I'm creating a cleaning game but have encountered a problem: when I draw a straight line quickly the line is broken, but when I draw a straight line slowly it works fine.
Below is my code:
private void Update()
{
    if (Input.GetMouseButton(0))
    {
        if (Physics.Raycast(Camera.main.ScreenPointToRay(Input.mousePosition), out RaycastHit hit))
        {
            Vector2 textureCoord = hit.textureCoord;
            int pixelX = (int)(textureCoord.x * _templateDirtMask.width);
            int pixelY = (int)(textureCoord.y * _templateDirtMask.height);
            Vector2Int paintPixelPosition = new Vector2Int(pixelX, pixelY);
            int paintPixelDistance = Mathf.Abs(paintPixelPosition.x - lastPaintPixelPosition.x) + Mathf.Abs(paintPixelPosition.y - lastPaintPixelPosition.y);
            int maxPaintDistance = 7;
            if (paintPixelDistance < maxPaintDistance)
            {
                return;
            }
            lastPaintPixelPosition = paintPixelPosition;
            int pixelXOffset = pixelX - (_brush.width / 2);
            int pixelYOffset = pixelY - (_brush.height / 2);
            for (int x = 0; x < _brush.width; x++)
            {
                for (int y = 0; y < _brush.height; y++)
                {
                    Color pixelDirt = _brush.GetPixel(x, y);
                    Color pixelDirtMask = _templateDirtMask.GetPixel(pixelXOffset + x, pixelYOffset + y);
                    float removedAmount = pixelDirtMask.g - (pixelDirtMask.g * pixelDirt.g);
                    dirtAmount -= removedAmount;
                    _templateDirtMask.SetPixel(
                        pixelXOffset + x,
                        pixelYOffset + y,
                        new Color(0, pixelDirtMask.g * pixelDirt.g, 0)
                    );
                }
            }
            _templateDirtMask.Apply();
        }
    }
}
Start Paint and, using the pen, try to draw circles as fast as you can, then look at the result:
Obviously, you didn't draw such straight lines with such clean direction changes.
So, how is Paint able to cope with such huge delta changes?
Interpolation
Some pseudocode:
on mouse down
    get current mouse position
    if last mouse position has been set
        draw all the positions between last and current
        (use the Bresenham line algorithm, for instance)
    save current mouse position to last mouse position
You could/should make your algorithm aware of the pen size; with some simple math you can figure out the necessary step size when evaluating points along the interpolation.
And don't use SetPixel: keep a copy of the texture pixels with GetPixels32, update that copy, and then upload it all at once using SetPixels32. A sketch combining both ideas follows.
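To make that concrete, here is a rough sketch that reuses the field names from the question (_templateDirtMask, _brush, lastPaintPixelPosition); the dirtAmount bookkeeping is left out for brevity, and the fixed step along the segment is a simple stand-in for a full Bresenham walk:
// Sketch: interpolate brush stamps between the last and current hit points,
// painting into a cached Color32 buffer that is uploaded once per call.
private Color32[] _maskPixels;   // cached copy of _templateDirtMask's pixels

private void Start()
{
    _maskPixels = _templateDirtMask.GetPixels32();
}

private void PaintLine(Vector2Int from, Vector2Int to)
{
    // Step roughly every half brush width so consecutive stamps overlap.
    float distance = Vector2Int.Distance(from, to);
    int steps = Mathf.Max(1, Mathf.CeilToInt(distance / (_brush.width * 0.5f)));
    for (int s = 0; s <= steps; s++)
    {
        Vector2Int p = Vector2Int.RoundToInt(Vector2.Lerp(from, to, s / (float)steps));
        Stamp(p.x - _brush.width / 2, p.y - _brush.height / 2);
    }
    _templateDirtMask.SetPixels32(_maskPixels);  // upload everything at once
    _templateDirtMask.Apply();
}

private void Stamp(int offsetX, int offsetY)
{
    int maskWidth = _templateDirtMask.width;
    int maskHeight = _templateDirtMask.height;
    for (int x = 0; x < _brush.width; x++)
    {
        for (int y = 0; y < _brush.height; y++)
        {
            int px = offsetX + x;
            int py = offsetY + y;
            if (px < 0 || py < 0 || px >= maskWidth || py >= maskHeight) continue;
            Color brushPixel = _brush.GetPixel(x, y);   // could also be cached once
            int index = py * maskWidth + px;
            _maskPixels[index].g = (byte)(_maskPixels[index].g * brushPixel.g);
        }
    }
}
In Update you would compute paintPixelPosition exactly as in the question, call PaintLine(lastPaintPixelPosition, paintPixelPosition) while the button is held (after seeding lastPaintPixelPosition on the initial press), and then store the new position.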

SKShader to create parallax background

A parallax background with a fixed camera is easy to do, but since I'm making a top-down 2D space exploration game, I figured it would be easier to have a single SKSpriteNode that fills the screen, is a child of my SKCameraNode, and uses an SKShader to draw a parallax starfield.
I went on Shadertoy and found this simple-looking shader. I adapted it successfully on Shadertoy to accept a vec2() for the velocity of the movement, which I want to pass as an SKAttribute so it can follow the movement of my ship.
Here is the original source:
https://www.shadertoy.com/view/XtjSDh
I managed to convert the original code so it compiles without any errors, but nothing shows up on the screen. I tried the individual functions and they do work to generate a fixed image.
Any pointers to make it work?
Thanks!
This isn't really an answer, but it's a lot more info than a comment, and highlights some of the oddness and appropriateness of how SK does particles:
There are a couple of weird things about particles in SceneKit that might apply to SpriteKit.
When you move the particle system, you can have the particles move with it. This is the default behaviour:
From the docs:
When the emitter creates particles, they are rendered as children of the emitter node. This means that they inherit the characteristics of the emitter node, just like nodes do. For example, if you rotate the emitter node, the positions of all of the spawned particles are rotated also. Depending on what effect you are simulating with the emitter, this may not be the correct behavior.
For most applications, this is in fact the wrong behaviour. But for what you're wanting to do, it is ideal. You can position new SKEmitterNodes offscreen where the ship is heading, fix them to "space" so they rotate in conjunction with the directional changes of the player's ship, and the particles will do exactly what you want/need to create the feeling of moving through space.
SpriteKit has a pre-build, or populate, ability in the form of advancing the simulation: https://developer.apple.com/reference/spritekit/skemitternode/1398027-advancesimulationtime
This means you can have stars ready to show wherever the ship is heading through space, as the SKEmitterNodes come on screen. There's no need for a loading delay to build stars; this does it immediately.
As near as I can figure, you'd need 3 particle emitters to pull this off, each the size of the screen of the device. Burst the particles out, then release each layer you want for parallax to a target node at the right "depth" from the camera, and carry on by moving these targets as per the screen movement.
A bit messy, but probably quicker, easier, and with much more potential for playful effects than creating your own system.
Maybe... I could be wrong.
EDIT: The code is clean and working now. I've set up a GitHub repo for this.
I guess I didn't explain what I wanted properly. I needed a starfield background that follows the camera, like you could find in Subspace (back in the day).
The result is pretty cool and convincing! I'll probably come back to this later when the node quantity becomes a bottleneck. I'm still convinced that the proper way to do that is with shaders!
Here is a link to my code on GitHub. I hope it can be useful to someone. It's still a work in progress but it works well. Included in the repo is the source from SKTUtils (a library by Ray Wenderlich that is already freely available on GitHub) and from my own extension to Ray's tools that I called nuts-n-bolts. These are just extensions for common types that everyone should find useful. You also, of course, get the source for the StarfieldNode and the InteractiveCameraNode, along with a small demo project.
https://github.com/sonoblaise/StarfieldDemo
The short answer is: in SpriteKit you use the fragment coordinates directly, without needing to scale against the viewport resolution (iResolution in Shadertoy land), so the line:
vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(0.0, iTime * 0.01);
can be changed to omit the scaling:
vec2 samplePosition = fragCoord.xy + vec2(0.0, iTime * 0.01);
This is likely the root cause (hard to know for sure without seeing your version of the shader code) of why you're only seeing black from the shader.
For a full answer for an implementation of a SpriteKit shader making a star field, let's take the original shader and simplify it so there's only one star field, no "fog" (just to keep things simple), and add a variable to control the velocity vector of the movement of the stars:
(this is still in shadertoy code)
float Hash(in vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}

vec2 Hash2D(in vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    float h2 = dot(p, vec2(37.271, 377.632));
    return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}

float Noise(in vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
               mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}

vec3 Voronoi(in vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 mg, mr;
    float md = 8.0;
    for(int j = -1; j <= 1; ++j)
    {
        for(int i = -1; i <= 1; ++i)
        {
            vec2 g = vec2(float(i), float(j));
            vec2 o = Hash2D(n + g);
            vec2 r = g + o - f;
            float d = dot(r, r);
            if(d < md)
            {
                md = d;
                mr = r;
                mg = g;
            }
        }
    }
    return vec3(md, mr);
}

vec3 AddStarField(vec2 samplePosition, float threshold)
{
    vec3 starValue = Voronoi(samplePosition);
    if(starValue.x < threshold)
    {
        float power = 1.0 - (starValue.x / threshold);
        return vec3(power * power * power);
    }
    return vec3(0.0);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float maxResolution = max(iResolution.x, iResolution.y);
    vec2 velocity = vec2(0.01, 0.01);
    vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(iTime * velocity.x, iTime * velocity.y);
    vec3 finalColor = AddStarField(samplePosition * 16.0, 0.00125);
    fragColor = vec4(finalColor, 1.0);
}
If you paste that into a new Shadertoy window and run it, you should see a monochrome star field moving towards the bottom left.
Adjusting it for SpriteKit is fairly simple. We need to remove the "in"s from the function parameters, change the names of some constants (there's a decent blog post about the Shadertoy-to-SpriteKit changes that are needed), and use an attribute for the velocity vector so we can change the direction of the stars for each SKSpriteNode the shader is applied to, and over time, as needed.
Here's the full SpriteKit shader source, with a_velocity as a needed attribute defining the star field movement:
float Hash(vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}

vec2 Hash2D(vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    float h2 = dot(p, vec2(37.271, 377.632));
    return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}

float Noise(vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
               mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}

vec3 Voronoi(vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 mg, mr;
    float md = 8.0;
    for(int j = -1; j <= 1; ++j)
    {
        for(int i = -1; i <= 1; ++i)
        {
            vec2 g = vec2(float(i), float(j));
            vec2 o = Hash2D(n + g);
            vec2 r = g + o - f;
            float d = dot(r, r);
            if(d < md)
            {
                md = d;
                mr = r;
                mg = g;
            }
        }
    }
    return vec3(md, mr);
}

vec3 AddStarField(vec2 samplePosition, float threshold)
{
    vec3 starValue = Voronoi(samplePosition);
    if (starValue.x < threshold)
    {
        float power = 1.0 - (starValue.x / threshold);
        return vec3(power * power * power);
    }
    return vec3(0.0);
}

void main()
{
    vec2 samplePosition = v_tex_coord.xy + vec2(u_time * a_velocity.x, u_time * a_velocity.y);
    vec3 finalColor = AddStarField(samplePosition * 20.0, 0.00125);
    gl_FragColor = vec4(finalColor, 1.0);
}
(Worth noting: this is simply a modified version of the original shader.)

Read float values from RGBAFloat texture in Unity 3D

It seems people aren't discussing floating-point textures much. I use them to do some computations and then forward the result to another surface shader (to obtain some specific deformations), and that works fine as long as I digest the results in a shader. This time, though, I need to get those values CPU-side as a float[] array of results (just after the Graphics.Blit call that fills the floating-point texture). How can this be achieved?
On a side note: the only person I've seen using this method so far is Keijiro, for example in his Kvant Wall; if you have other sources I'd be grateful if you let me know.
Incidentally, I know there are compute shaders and OpenCL and CUDA. This is the method I need now.
So I came up with this solution.
float[] DecodeFloatTexture()
{
    Texture2D decTex = new Texture2D(resultBuffer.width, resultBuffer.height, TextureFormat.RGBAFloat, false);
    RenderTexture.active = resultBuffer;
    decTex.ReadPixels(new Rect(0, 0, resultBuffer.width, resultBuffer.height), 0, 0);
    decTex.Apply();
    RenderTexture.active = null;
    Color[] colors = decTex.GetPixels();
    // HERE YOU CAN GET ALL 4 FLOATS OUT OR JUST THOSE YOU NEED.
    // IN MY CASE ALL 4 VALUES HAVE A MEANING SO I'M GETTING THEM ALL.
    float[] results = new float[colors.Length * 4];
    for (int i = 0; i < colors.Length; i++)
    {
        results[i * 4] = colors[i].r;
        results[i * 4 + 1] = colors[i].g;
        results[i * 4 + 2] = colors[i].b;
        results[i * 4 + 3] = colors[i].a;
    }
    return results;
}
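For context, here is a rough sketch of how this might be driven from the same script. RunComputation and computeMaterial are placeholder names; the essential point is that resultBuffer has to be created with a floating-point format such as RenderTextureFormat.ARGBFloat before Graphics.Blit fills it:
// Sketch: fill a float render texture on the GPU, then read it back with
// DecodeFloatTexture() above. "computeMaterial" is a placeholder material
// whose shader writes the values you want into the color channels.
RenderTexture resultBuffer;

void RunComputation(Material computeMaterial, int width, int height)
{
    resultBuffer = new RenderTexture(width, height, 0, RenderTextureFormat.ARGBFloat);
    resultBuffer.Create();

    // Run the computation pass into the float texture.
    Graphics.Blit(null, resultBuffer, computeMaterial);

    // Read the results back on the CPU: 4 floats (r, g, b, a) per pixel.
    float[] values = DecodeFloatTexture();
    Debug.Log("First pixel, red channel: " + values[0]);
}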
Alternatively, if what we need is not a float, GetRawTextureData can be used and the bytes then converted to the target type with System.BitConverter, which gives some flexibility in the data you pass from the shader (for example, if your fragment shader outputs half4). If you need floats, though, the first method is better.
float[] DecodeFloatTexture()
{
    Texture2D decTex = new Texture2D(resultBuffer.width, resultBuffer.height, TextureFormat.RGBAFloat, false);
    RenderTexture.active = resultBuffer;
    decTex.ReadPixels(new Rect(0, 0, resultBuffer.width, resultBuffer.height), 0, 0);
    decTex.Apply();
    RenderTexture.active = null;
    byte[] bytes = decTex.GetRawTextureData();
    // RGBAFloat stores 16 bytes (4 floats) per pixel, so every 4 raw bytes form one value.
    float[] results = new float[bytes.Length / 4];
    for (int i = 0; i < results.Length; i++)
    {
        results[i] = System.BitConverter.ToSingle(bytes, i * 4); // converts 4 bytes to a float
    }
    return results;
}

How to draw anti-aliased circle with iPhone OpenGL ES

There are three main ways I know of to draw a simple circle in OpenGL ES, as provided by the iPhone. They are all based on a simple algorithm (the VBO version is below).
void circleBufferData(GLenum target, float radius, GLsizei count, GLenum usage) {
    const int segments = count - 2;
    const float coefficient = 2.0f * (float) M_PI / segments;
    float *vertices = new float[2 * (segments + 2)];
    vertices[0] = 0;
    vertices[1] = 0;
    for (int i = 0; i <= segments; ++i) {
        float radians = i * coefficient;
        float j = radius * cosf(radians);
        float k = radius * sinf(radians);
        vertices[(i + 1) * 2] = j;
        vertices[(i + 1) * 2 + 1] = k;
    }
    glBufferData(target, sizeof(float) * 2 * (segments + 2), vertices, usage);
    glVertexPointer(2, GL_FLOAT, 0, 0);
    delete[] vertices;
}
The three ways that I know of to draw a simple circle are: using glDrawArrays on an array of vertices held by the application; using glDrawArrays with a vertex buffer; and drawing to a texture on initialization and drawing that texture when rendering is requested. The first two methods I know fairly well (though I have not been able to get anti-aliasing to work). What code is involved for the last option (I am very new to OpenGL as a whole, so a detailed explanation would be very helpful)? Which is most efficient?
Antialiasing in the iOS OpenGL ES implementation is severely limited. You won't be able to draw antialiased circles using traditional methods.
However, if the circles you're drawing aren't that large, and are filled, you could take a look at using GL_POINT_SMOOTH. It's what I used for my game, Pizarro, which involves a lot of circles. Here's a detailed writeup of my experience with drawing antialiased circles on iOS:
http://sveinbjorn.org/drawing_antialiased_circles_opengl_iphone

Looking for some help working with premultiplied alpha

I am trying to update a source image with the contents of multiple destination images. From what I can tell, using premultiplied alpha is the way to go with this, but I think I am doing something wrong (function below). The image I am starting with is initialized with all ARGB values set to 0. When I run the function once, the resulting image looks great, but when I start compositing any others, all the pixels that have alpha information get really messed up. Does anyone know if I am doing something glaringly wrong, or if there is something extra I need to do to modify the color values?
void CompositeImage(unsigned char *src, unsigned char *dest, int srcW, int srcH){
    int w = srcW;
    int h = srcH;
    int px0;
    int px1;
    int px2;
    int px3;
    int inverseAlpha;
    int r;
    int g;
    int b;
    int a;
    int y;
    int x;
    for (y = 0; y < h; y++) {
        for (x = 0; x < w*4; x += 4) {
            // pixel number
            px0 = (y*w*4) + x;
            px1 = (y*w*4) + (x+1);
            px2 = (y*w*4) + (x+2);
            px3 = (y*w*4) + (x+3);
            inverseAlpha = 1 - src[px3];
            // create new values
            r = src[px0] + inverseAlpha * dest[px0];
            g = src[px1] + inverseAlpha * dest[px1];
            b = src[px2] + inverseAlpha * dest[px2];
            a = src[px3] + inverseAlpha * dest[px3];
            // update destination image
            dest[px0] = r;
            dest[px1] = g;
            dest[px2] = b;
            dest[px3] = a;
        }
    }
}
I'm not clear on what data you are working with. Do your source images already have the alpha values pre-multiplied as they are stored? If not, then pre-multiplied alpha does not apply here and you would need to do normal alpha blending.
Anyway, the big problem in your code is that you're not keeping track of the value ranges that you're dealing with.
inverseAlpha = 1 - src[px3];
This needs to be changed to:
inverseAlpha = 255 - src[px3];
You have all integral value types here, so the normal incoming 0..255 value range will result in an inverseAlpha range of -254..1, which will give you some truly wacky results.
After changing the 1 to 255, you also need to divide your results for each channel by 255 to scale them back down to the appropriate range. The alternative is to do the intermediate calculations using floats instead of integers and divide the initial channel values by 255.0 (instead of these other changes) to get values in the 0..1 range.
If your source data really does already have pre-multiplied alpha, then your result lines should look like this.
r = src[px0] + inverseAlpha * dest[px0] / 255;
If your source data does not have pre-multiplied alpha, then it should be:
r = src[px0] * src[px3] / 255 + inverseAlpha * dest[px0] / 255;
There's nothing special about blending the alpha channel. Use the same calculation as for r, g, and b.