GLSL 1.2 Geometry shader varying in vec4 is illegal? - osx-lion

I'm trying to figure out whether something is buggy in my graphics card drivers or just in my code. Is the following illegal?
#version 120
#extension GL_EXT_gpu_shader4 : enable
#extension GL_EXT_geometry_shader4 : enable
varying in vec4 something; // <------- this
void main()
{
    for(int i = 0; i < gl_VerticesIn; ++i)
    {
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();
}
It's crashing on my OS X 10.7 (NVIDIA 9400M/9600M) laptop, and I'm curious: A) is this actually illegal in GLSL 1.2, or is it just my implementation? B) Is there a flag of some kind to enable passing a vec4 to a geometry shader in GLSL 1.2?
For the record: yes, I know this is waaaay easier in 1.3+, but I'm stuck with 1.2 at the moment. Thanks!

Inputs to the geometry shader are arrays, not single values. This is because the GS takes in a primitive, which can be composed of multiple vertices. Just as gl_PositionIn is an array, so too should your user-defined inputs be.
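For instance, the question's shader compiles once the input is declared as an array (a minimal sketch under GL_EXT_geometry_shader4; an unsized array is sized automatically by the input primitive type):

varying in vec4 something[]; // one element per input vertex, like gl_PositionIn

void main()
{
    for(int i = 0; i < gl_VerticesIn; ++i)
    {
        vec4 value = something[i]; // per-vertex inputs are accessed by index
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();
}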

Related

Unity3d shader e-f convert to glsl

I am trying to convert a shader in Unity3D to a normal GLSL shader.
The code is:
Out = lerp(To, In, saturate((Distance - Range) / max(Fuzziness, e-f)));
I know lerp needs to be converted to mix, and saturate to clamp(xxx, 0.0, 1.0).
But I don't know how to convert the e - f part in the code above.
Any suggestions will be appreciated, thanks :)
You can see the code generated by your graph: right-click on any node -> Show Generated Code. For this node, the generated function is:
void Unity_ReplaceColor_float(float3 In, float3 From, float3 To, float Range, out float3 Out, float Fuzziness)
{
    float Distance = distance(From, In);
    Out = lerp(To, In, saturate((Distance - Range) / max(Fuzziness, 1e-5f)));
}
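Applying the substitutions from the question (lerp -> mix, saturate -> clamp), a GLSL version of that function might look like the following sketch (in GLSL, float3 becomes vec3 and the constant is written 1e-5 without the f suffix):

void Unity_ReplaceColor_float(vec3 In, vec3 From, vec3 To, float Range, out vec3 Out, float Fuzziness)
{
    float Distance = distance(From, In);
    // max(Fuzziness, 1e-5) guards against division by zero
    Out = mix(To, In, clamp((Distance - Range) / max(Fuzziness, 1e-5), 0.0, 1.0));
}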

Does GLKit limit me to two attributes?

I've been working with some GLKit code for the past few days that has a color attribute and a position attribute, but when I try to add a normal attribute it crashes every time.
Vertex Shader:
attribute vec4 SourceColor;
attribute vec4 aVertexPosition;
attribute vec4 aVertexNormal;
varying vec4 DestinationColor;
uniform mat4 uPMatrix; /* perspectiveView matrix */
uniform mat4 uVMatrix; /* view matrix */
uniform mat4 uOMatrix; /* object matrix */
uniform mat4 Projection;
uniform mat4 ModelView;
uniform float u_time;
void main(void) {
    DestinationColor = SourceColor;
    gl_Position = aVertexPosition * Projection;
}
Code:
self.colorSlot = glGetAttribLocation(programHandle, "SourceColor")
self.positionSlot = glGetAttribLocation(programHandle, "aVertexPosition")
self.normalSlot = glGetAttribLocation(programHandle, "aVertexNormal")
glEnableVertexAttribArray(GLuint(self.positionSlot))
glEnableVertexAttribArray(GLuint(self.colorSlot))
glEnableVertexAttribArray(GLuint(self.normalSlot)) // <- crashes here
As found through the comments, the reason for this crash is that self.normalSlot was -1, which is what glGetAttribLocation returns when it fails to find an attribute with the specified name. The -1 was then cast with GLuint(self.normalSlot), which produces a very large value that is not a valid index for enabling a vertex attribute array, causing the crash. So before using an attribute or uniform location, you should check that it was actually retrieved: valid locations are non-negative, so check location >= 0.
Still, the attribute was present in the shader source, yet its location was not retrieved. The likely reason is that the attribute was never used in the shader, so the compiler optimized it away. In any case, you cannot force the attribute to stay active simply by declaring it in the vertex shader; you need to actually use it, as the sketch below shows. There is another way to force a location, glBindAttribLocation, but I would not expect it to guarantee the existence of the attribute, so you should still check for GL errors after using it to avoid additional issues.
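For example, a minimal way to keep aVertexNormal alive in the question's vertex shader is to let it affect an output (a sketch; the directional shade below is a placeholder, not the poster's intended lighting):

attribute vec4 SourceColor;
attribute vec4 aVertexPosition;
attribute vec4 aVertexNormal;
varying vec4 DestinationColor;
uniform mat4 Projection;

void main(void) {
    // Using the normal in real work prevents the compiler from optimizing
    // the attribute away, so glGetAttribLocation can find it.
    float shade = max(dot(normalize(aVertexNormal.xyz), vec3(0.0, 0.0, 1.0)), 0.0);
    DestinationColor = vec4(SourceColor.rgb * shade, SourceColor.a);
    gl_Position = aVertexPosition * Projection;
}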
Note:
If you are using glBindAttribLocation, make sure you fully understand its documentation. It is very easy to lose track of these indices, so have a consistent system for how you assign them.
Absolutely agree with Matik, but the limit on the number of attributes really does exist. It can be queried via GL_MAX_VERTEX_ATTRIBS:
int max;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &max);
NSLog(#"GL_MAX_VERTEX_ATTRIBS=%#", #(max));
I got GL_MAX_VERTEX_ATTRIBS=16 on my iPod 5, which is much more than two.

frac() function not returning correct value in unity/cg shader

I've been playing with a noise function as part of a shader I'm using and encountered something I don't understand with regard to the frac() function: specifically, it is returning the originally passed-in number rather than just the fractional part.
To debug the issue I created a very simple shader:
Shader "Custom/FracTest" {
SubShader {
CGPROGRAM
#pragma surface surf Lambert
#pragma target 3.0
#include "UnityCG.cginc"
struct Input {
half3 viewDir;
};
void surf(Input IN, inout SurfaceOutput o) {
float4 col = 0;
float n = 1;
float sinn = sin(n); // ~0.84147
float hash = sinn * 43758.5453; // ~36821.54621
float floorHash = floor(hash); // 36821
float fracHash = frac(hash); // should be ~0.54621
float manualFrac = hash - floorHash;
col.r = fracHash == hash ? 1 : 0;
col.g = floorHash == hash ? 1 : 0;
col.b = manualFrac == fracHash ? 1 : 0;
o.Albedo = col.rgb;
}
ENDCG
}
Fallback "Diffuse"
}
As you can see, this is a Unity surface shader, but I don't think Unity is at fault here: looking at the generated fragment shader, it hasn't messed with any of the values or datatypes around this bit of code. The values in the comments are what I expect the variables to contain.
When this shader runs it simply colours the whole object based on some tests against the results of the functions. In this case my object appears red, meaning fracHash is still equal to the original hash value, floorHash is not equal to it, and manualFrac is also not equal to fracHash.
I can't see what I'm doing wrong here. The documentation states there is a float variant of the frac() function, but I can only assume I'm hitting some precision limit inside frac(): if I use smaller numbers it seems to return sensible results. Given that it works when I manually extract the fraction, perhaps I'm using higher-precision numbers?
Any ideas? Am I making a wrong assumption somewhere, or am I just being an idiot? :P
For now I'm progressing by manually calculating the fraction, but I'd like to understand what I've missed :)
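For reference, the manual workaround amounts to a one-line helper. A sketch in GLSL terms (Cg's frac corresponds to GLSL's fract, and the same body works in Cg):

// Hypothetical helper mirroring the hash - floorHash workaround above.
float manualFrac(float x)
{
    return x - floor(x);
}

For example, float fracHash = manualFrac(sinn * 43758.5453); reproduces the value the built-in should have returned.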

iOS: Get pixel-by-pixel data from camera

I'm aware of AVFoundation and its capture support (not too familiar though). However, I don't see any readily-accessible API to get pixel-by-pixel data (RGB-per-pixel or similar). I do recall reading in the docs that this is possible, but I don't really see how. So:
Can this be done? If so, how?
Would I be getting raw image data, or data that's been JPEG-compressed?
AV Foundation can give you back the raw bytes for an image captured by either the video or still camera. You need to set up an AVCaptureSession with an appropriate AVCaptureDevice, a corresponding AVCaptureDeviceInput, and an AVCaptureOutput subclass (AVCaptureVideoDataOutput or AVCaptureStillImageOutput). Apple has some examples of this process in their documentation, and it requires some boilerplate code to configure.
Once you have your capture session configured and you are capturing data from the camera, you will set up a -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, where one of the parameters will be a CMSampleBufferRef. That will have a CVImageBufferRef within it that you access via CMSampleBufferGetImageBuffer(). Using CVPixelBufferGetBaseAddress() on that pixel buffer will return the base address of the byte array for the raw pixel data representing your camera frame. This can be in a few different formats, but the most common are BGRA and planar YUV.
I have an example application that uses this here, but I'd recommend that you also take a look at my open source framework which wraps the standard AV Foundation boilerplate and makes it easy to perform image processing on the GPU. Depending on what you want to do with these raw camera bytes, I may already have something you can use there or a means of doing it much faster than with on-CPU processing.
// Fragment shader snippet: keep the image inside a central window and
// output transparent black elsewhere.
const mediump vec3 W = vec3(0.2125, 0.7154, 0.0721); // luminance weights (W was not declared in the original snippet)
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
mediump float luminance = dot(textureColor.rgb, W); // computed but unused below
mediump vec2 p = textureCoordinate;
if (p.x > 0.2 && p.x < 0.6 && p.y > 0.4 && p.y < 0.6) { // original tested p.x == 0.2; a range test (>) was presumably intended
    gl_FragColor = textureColor;
} else {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
}

GLSL 'texture2D' : no matching overloaded function found OpenGL ES2 on iPhone

I'm experimenting with shaders with GLSL but I get a funny error when I try to take data from a texture to try a simple contrast enhancement algorithm.
'texture2D' : no matching overloaded function found
It happens with this code, where "final" is the vec4 variable holding the colour that is being worked on. The idea here is to push the pixel's colour further from the surrounding ones (an experimental idea). I'll mark the lines in the code which have the error.
highp vec4 tex = texture2D(tex,vec2(texcoord.x+1.0,texcoord.y));
highp float total = tex.r + tex.g + tex.b;
tex = texture2D(tex,vec2(texcoord.x-1.0,texcoord.y)); // <---- this one, as well as the next similar lines
total += tex.r + tex.g + tex.b;
tex = texture2D(tex,vec2(texcoord.x,texcoord.y+1.0));
total += tex.r + tex.g + tex.b;
tex = texture2D(tex,vec2(texcoord.x,texcoord.y-1.0));
total += tex.r + tex.g + tex.b;
highp float di = 12.0;
highp vec4 close_av = total/di;
final = (final - close_av)*1.3+close_av;
Why won't it work? Thank you.
Assuming that tex was originally declared as a uniform sampler2D at the top of your shader source, it is being redeclared as a local variable by the first line of your snippet, which hides the original definition. Changing either variable to keep their names distinct should fix your compilation issues.
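For example, renaming the local variable clears the error (a sketch of the first few lines; texColor is a hypothetical replacement name, and the sampler and coordinate declarations are assumed from the rest of the shader):

uniform sampler2D tex; // the sampler keeps its original name
varying highp vec2 texcoord;

highp vec4 texColor = texture2D(tex, vec2(texcoord.x + 1.0, texcoord.y));
highp float total = texColor.r + texColor.g + texColor.b;
texColor = texture2D(tex, vec2(texcoord.x - 1.0, texcoord.y)); // now matches the sampler2D overload
total += texColor.r + texColor.g + texColor.b;
// ... remaining taps and the contrast arithmetic unchanged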