WEBGL_depth_texture precision is too low on iPhone

Recently, I attempted to make a deferred renderer in WebGL for mobile browsers.
For the deferred renderer, I need to render the depth of the scene objects into a texture first.
I know I can pack depth values into a UBYTE RGBA texture when rendering the depth of the scene objects.
However, I learned that iPhone supports WEBGL_depth_texture, so I am now trying to use that extension instead of packing into UBYTE RGBA.
That works great when I debug the renderer in a PC browser, but the depth texture has far too little precision in the iPhone browser.
These are the depth textures rendered into UBYTE RGBA from the rendered depth texture.
Correct depth texture image (PC browser)
Incorrect depth texture image (iPhone Safari/Chrome)
Also, the iOS Simulator on a Mac produced the same image as the PC browsers.
This is the code I use to fetch depth texture pixels and render them into the UBYTE RGBA texture.
vec3 packUNorm24(const highp float value){
    const vec3 bitSh = vec3(256.0*256.0, 256.0, 1.0);
    const vec3 bitMsk = vec3(0.0, 1.0/256.0, 1.0/256.0);
    highp vec3 res = fract(value * bitSh);
    res -= res.xxy * bitMsk;
    return res;
}

vec3 packRanged24(const highp float value, const highp float minimum, const highp float maximum){
    return packUNorm24((value - minimum)/(maximum - minimum));
}

uniform sampler2D _depthBuffer;
varying vec2 uv;

void main(void)
{
    gl_FragColor.rgb = packRanged24(texture2D(_depthBuffer, uv).r, -1., 1.);
    gl_FragColor.a = 1.;
}
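For reference, the matching unpack on the sampling side is just a dot product with the reciprocal shifts (a sketch added for completeness; the unpackUNorm24 name is invented to mirror packUNorm24 above):
highp float unpackUNorm24(const highp vec3 enc){
    // undo packUNorm24: the low bits live in .x, the high bits in .z
    const highp vec3 bitSh = vec3(1.0/(256.0*256.0), 1.0/256.0, 1.0);
    return dot(enc, bitSh);
}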
Why is the precision so low?

I think the WEBGL_depth_texture spec (https://www.khronos.org/registry/webgl/extensions/WEBGL_depth_texture/) allows the implementation to decide between 16-, 24- or 32-bit depth values.
While the precision might be too low for your needs, I don't think it's incorrect if the iPhone implementation chooses 16-bit while the desktop implementation chooses 24-bit or greater.
As per the OpenGL ES spec, there is no guarantee that the OpenGL ES implementation will use the texture type to determine how to store the depth texture internally; it may choose to downsample the 32-bit depth values to 24-bit or even 16-bit.
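If you want to see which precision you are actually getting, one quick check is to exaggerate the quantization so that individual depth steps become visible. A minimal sketch, reusing the question's _depthBuffer sampler and uv varying:
precision highp float;
uniform sampler2D _depthBuffer;
varying vec2 uv;
void main(void)
{
    float d = texture2D(_depthBuffer, uv).r;
    // fract(d * 256.0) spreads each 1/256 slice of depth across the full
    // grey range: 16-bit storage shows coarse banding, 24-bit looks smooth.
    gl_FragColor = vec4(vec3(fract(d * 256.0)), 1.0);
}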

Related

Why is my simplest shader taking up the most processing power

So I ran a frame capture to see the performance. To my surprise, it was my full-screen rendering that was to blame. Take a look:
Here are the two hogging functions. I have disabled the texture lookup on the full-screen texture to illustrate how ridiculous this is!
Program #3
Vert:
precision highp float;
attribute vec2 position;
uniform mat4 matrix;
void main()
{
    gl_Position = matrix * vec4(position.xy, 0.0, 1.0);
}
Frag:
precision highp float;
uniform float alpha;
void main()
{
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0 - alpha);
}
Context:
//**Set up data
glUseProgram(shade_black.progId)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), black_buffer) //Bind the coordinates
//**Pass in coordinates
let aTexCoordLoc = GLuint(black_attribute_position)
glEnableVertexAttribArray(aTexCoordLoc)
glVertexAttribPointer(aTexCoordLoc, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, BUFFER_OFFSET(0)) //Send to shader
//**Pass in uniforms
glUniformMatrix4fv(black_uniform_ortho, 1, GLboolean(GL_FALSE), &orthographicMatrix) //Pass matrix
glUniform1f(black_uniform_alpha, 0.95) //Pass alpha
counter += timedo
//**Draw (instanced)
//The number 3 is actually variable, but for this purpose I set it flat out
glDrawArraysInstanced(GLenum(GL_TRIANGLE_STRIP), 0, 4, 3) // GLsizei(timedo) //Draw it
//**Clean up
glBindBuffer(GLenum(GL_ARRAY_BUFFER), 0) //Clean up
Program #2
Vert:
precision highp float;
attribute vec4 data;
uniform mat4 matrix;
uniform float alpha;
varying vec2 v_texcoord;
varying float o_alpha;
void main()
{
    gl_Position = matrix * vec4(data.xy, 0.0, 1.0);
    v_texcoord = data.zw;
    o_alpha = alpha;
}
Frag:
precision highp float;
uniform sampler2D s_texture;
varying float o_alpha;
varying vec2 v_texcoord;
void main()
{
    //vec4 color = texture2D(s_texture, v_texcoord);
    gl_FragColor = vec4(1.0);
    //This line below is what it should be, but I wanted to isolate the issue; the picture results are from setting it to white.
    //gl_FragColor = vec4(color.rgb, step(0.4, color.a) * (color.a - o_alpha));
}
Context:
func drawTexture(texture: FBO, alpha: GLfloat)
{
    //**Start up
    //DONE EARLIER
    //**Pass in vertices
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), textures_buffer)
    let aTexCoordLoc = GLuint(textures_attribute_data)
    glEnableVertexAttribArray(aTexCoordLoc)
    glVertexAttribPointer(aTexCoordLoc, 4, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, BUFFER_OFFSET(0)) //Tell the GPU where
    //**Pass in uniforms
    glUniform1i(textures_uniform_texture, 0)
    glUniformMatrix4fv(textures_uniform_matrix, 1, GLboolean(GL_FALSE), &orthographicMatrix)
    glUniform1f(textures_uniform_alpha, alpha)
    //**Texture
    glBindTexture(GLenum(GL_TEXTURE_2D), texture.texture)
    //**Draw
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
    //**Clean up
    glBindTexture(GLenum(GL_TEXTURE_2D), 0)
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), 0)
}
For the others you can at least see their draw calls, but they aren't causing much damage.
What on earth is going on to make the most complicated shaders responsible for less than 1% of the latency?
NOTE: Both of these shaders use a VBO that is created and filled at the start of the app.
It does look kind of surprising. Here's how I'd try to make sense of those figures (assuming those times are GPU timings for those render calls):
Fill-rate is everything on mobile. Even running a simple pixel shader over 3 million pixels or so (iPad retina) is going to be an expensive task, and you shouldn't be too surprised that it's more expensive than a large number of much smaller particles. Your percentages are going to add up to 100%, so if all your other stuff is just a few hundred vertices and fills a few thousand pixels, you shouldn't be surprised if the full-screen stuff is huge relative to that.
It also says '5ms', which is tempting to think of as an absolute figure, but bear in mind that the CPU and GPU automatically start running slower when there's not much work to do, so even a millisecond timing can be very misleading when the device is mostly idle.
Do you have a glClear at the start of the frame? If not, then you can pay a pretty high price because the first thing the GPU must do when it processes a tile is load in the old contents. With a glClear at the start of your rendering, it knows it needn't bother loading old contents. Maybe you're seeing that price on your first full-screen pass if you don't have a glClear.

Real-Time glow shader confusion

So I have a rather simple real-time 2D game that I am trying to add some nice glow to. In its most basic form, it is simply circles and lines drawn on a black surface. And if you consider the scene from an HSV color-space perspective, all colors (except for black) have a "V" value of 100%.
Currently I have a sort of "accumulation" buffer where the current frame is joined with the previous frame. It works by using two off-screen buffers and a black texture.
1. Buffer one activated
2. Lines and dots drawn
3. Buffer one deactivated
4. Buffer two activated
5. Buffer two contents drawn as a full-screen quad
6. Black texture drawn with slight transparency over the full screen
7. Buffer one contents drawn
8. Buffer two deactivated
9. On-screen buffer activated
10. Buffer two's contents drawn to screen
Right now all "lag" by far comes from latency on the CPU; the GPU handles all of this really well.
So I was thinking of maybe trying to spice things up a bit by adding a glow effect. I was thinking that perhaps for step 10, instead of using a regular texture shader, I could use one that draws the texture except with glow!
Unfortunately, I am a bit confused about how to do this. Here are some reasons:
Blur stuff. Mostly that some people claim a Gaussian blur can be done in real-time while others say it shouldn't be attempted. Also, people mention another type of blur called a "focus" blur that I don't know anything about.
Most of the examples I can find use XNA. I need one written in a shader language like OpenGL ES 2.0's.
Some people call it glow; others call it bloom.
Different blending modes(?) can be used to add the glow to the original texture.
How to combine vertical and horizontal blur? Perhaps in one draw call?
Anyway, the process as I understand it for rendering glow is thus:
1. Cut the dark data out of the scene
2. Blur the light data (using Gaussian?)
3. Blend the light data on top of the original (screen blending?)
So far I have gotten to the point where I have a shader that draws a texture. What does my next step look like?
//Vertex
precision highp float;
attribute vec2 positionCoords;
attribute vec2 textureCoords;
uniform mat4 matrix;
uniform float alpha;
varying vec2 v_texcoord;
varying float o_alpha;
void main()
{
    gl_Position = matrix * vec4(positionCoords, 0.0, 1.0);
    v_texcoord = textureCoords.xy;
    o_alpha = alpha;
}
//Fragment
precision mediump float; //ES 2.0 fragment shaders have no default float precision
varying vec2 v_texcoord;
uniform sampler2D s_texture;
varying float o_alpha;
void main()
{
    vec4 color = texture2D(s_texture, v_texcoord);
    gl_FragColor = vec4(color.r, color.g, color.b, color.a - o_alpha);
}
Also, is this a feasible thing to do in real-time?
Edit: I probably want a blur of 5px or less.
To address your initial confusion items:
Any kind of blur filter will effectively spread each pixel into a blob based on its original position, and accumulate this result additively for all pixels. The difference between filters is the shape of the blob.
For a Gaussian blur, this blob should be a smooth gradient, feathering gradually to zero around the edges. You probably want a Gaussian blur.
A "focus" blur would be an attempt to emulate an out-of-focus camera: rather than fading gradually to zero, its blob would spread each pixel over a hard-edged circle, giving a subtly different effect.
For a straightforward, one-pass effect, the computational cost is proportional to the width of the blur. This means that a narrow (e.g. 5px or less) blur is likely to be feasible as a real-time one-pass effect. (It is possible to achieve a wide Gaussian blur in real-time by using multiple passes and a multi-resolution pyramid, but I'd recommend trying something simpler first...)
You could reasonably call the effect either "glow" or "bloom". However, to me, "glow" connotes a narrow blur leading to a neon-like effect, while "bloom" connotes using a wide blur to emulate the visual effect of bright objects in a high-dynamic-range visual environment.
The blend mode determines how what you draw is combined with the existing colors in the target buffer. In OpenGL, activate blending with glEnable(GL_BLEND) and set the mode with glBlendFunc().
For a narrow blur, you should be able to do horizontal and vertical filtering in one pass.
To do fast one-pass full-screen sampling, you will need to determine the pixel increment in your source texture. It is fastest to determine this statically, so that your fragment shader doesn't need to compute it at run-time:
float dx = 1.0 / x_resolution_drawn_over;
float dy = 1.0 / y_resolution_drawn_over;
You can do a 3-pixel (1,2,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 4 samples from source texture t as follows:
float dx2 = 0.5*dx; float dy2 = 0.5*dy; // filter steps
[...]
vec2 a1 = vec2(x+dx2, y+dy2);
vec2 a2 = vec2(x+dx2, y-dy2);
vec2 b1 = vec2(x-dx2, y+dy2);
vec2 b2 = vec2(x-dx2, y-dy2);
result = 0.25*(texture2D(t,a1) + texture2D(t,a2) + texture2D(t,b1) + texture2D(t,b2));
You can do a 5-pixel (1,4,6,4,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 9 samples from source texture t as follows:
float dx12 = 1.2*dx; float dy12 = 1.2*dy; // filter steps
const float k0 = 0.375; const float k1 = 0.3125; // filter constants
// "filter" is a reserved word in some GLSL versions, hence the name filter3
vec4 filter3(vec4 a, vec4 b, vec4 c) {
    return k1*a + k0*b + k1*c;
}
[...]
vec2 a1 = vec2(x+dx12, y+dy12);
vec2 a2 = vec2(x,      y+dy12);
vec2 a3 = vec2(x-dx12, y+dy12);
vec4 a = filter3(texture2D(t,a1), texture2D(t,a2), texture2D(t,a3));
vec2 b1 = vec2(x+dx12, y);
vec2 b2 = vec2(x,      y);
vec2 b3 = vec2(x-dx12, y);
vec4 b = filter3(texture2D(t,b1), texture2D(t,b2), texture2D(t,b3));
vec2 c1 = vec2(x+dx12, y-dy12);
vec2 c2 = vec2(x,      y-dy12);
vec2 c3 = vec2(x-dx12, y-dy12);
vec4 c = filter3(texture2D(t,c1), texture2D(t,c2), texture2D(t,c3));
result = filter3(a, b, c);
I can't tell you if these filters will be real-time feasible on your platform; 9 samples/pixel at full resolution could be slow.
Any wider Gaussian would make separate horizontal and vertical passes advantageous; a substantially wider Gaussian would require multi-resolution techniques for real-time performance. (Note that, unlike the Gaussian, filters such as the "focus" blur are not separable, which means they cannot be split into horizontal and vertical passes...)
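For concreteness, here is the 4-sample version assembled into a complete ES 2.0 fragment shader. This is a sketch: u_texelSize is an assumed uniform holding 1/resolution in each axis, and the varying/sampler names follow the question's shaders.
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D s_texture;
uniform vec2 u_texelSize; // 1.0 / source resolution, set from the app
void main()
{
    // half-texel offsets; GL_LINEAR filtering performs the (1,2,1) weighting
    vec2 o = 0.5 * u_texelSize;
    gl_FragColor = 0.25 * (texture2D(s_texture, v_texcoord + vec2( o.x,  o.y))
                         + texture2D(s_texture, v_texcoord + vec2( o.x, -o.y))
                         + texture2D(s_texture, v_texcoord + vec2(-o.x,  o.y))
                         + texture2D(s_texture, v_texcoord + vec2(-o.x, -o.y)));
}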
Everything that #comingstorm has said is true, but there's a much easier way: don't write the blur or glow yourself. Since you're on iOS, why not use Core Image, which has a number of interesting filters to choose from that already work in real time? For example, it has a Bloom filter, which will likely produce the results you want. Also of interest might be the Gloom filter.
Chaining together Core Image filters is much easier than writing shaders. You can create a CIImage from an OpenGL texture via [+CIImage imageWithTexture:size:flipped:colorSpace:].

OpenGL ES 2.0 shader examples for image processing?

I am learning shader programming and looking for examples, specifically for image processing. I'd like to apply some Photoshop effects to my photos, e.g. Curves, Levels, Hue/Saturation adjustments, etc.
I'll assume you have a simple uncontroversial vertex shader, as it's not really relevant to the question, such as:
attribute vec4 position;
attribute vec2 texCoord0;
uniform mat4 modelviewProjectionMatrix;
uniform mat4 textureMatrix;
varying mediump vec2 texCoordVarying;
void main()
{
    gl_Position = modelviewProjectionMatrix * position;
    texCoordVarying = vec2(textureMatrix * vec4(texCoord0, 0.0, 1.0));
}
So that does much the same as ES 1.x would if lighting was disabled, including the texture matrix that hardly anyone ever uses.
I'm not a Photoshop expert, so please forgive my statements of what I think the various tools do — especially if I'm wrong.
I think I'm right to say that the levels tool effectively stretches (and clips) the brightness histogram? In that case an example shader could be:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

const mediump mat4 rgbToYuv = mat4( 0.257,  0.439, -0.148, 0.06,
                                    0.504, -0.368, -0.291, 0.5,
                                    0.098, -0.071,  0.439, 0.5,
                                    0.0,    0.0,    0.0,   1.0);

const mediump mat4 yuvToRgb = mat4( 1.164,  1.164,  1.164, -0.07884,
                                    2.018, -0.391,  0.0,    1.153216,
                                    0.0,   -0.813,  1.596,  0.53866,
                                    0.0,    0.0,    0.0,    1.0);

uniform mediump float centre, range;

void main()
{
    lowp vec4 srcPixel = texture2D(tex2D, texCoordVarying);
    lowp vec4 yuvPixel = rgbToYuv * srcPixel;
    // divide by range: letting through half the input range (range = 0.5)
    // stretches it to fill the whole output range
    yuvPixel.r = ((yuvPixel.r - centre) / range) + 0.5;
    gl_FragColor = yuvToRgb * yuvPixel;
}
You'd control that by setting the centre of the range you want to let through (which will be moved to the centre of the output range) and the total range you want to let through (1.0 for the entire range, 0.5 for half the range, etc); for example, centre = 0.5 and range = 0.5 stretches the middle half of the input across the entire output.
One thing of interest is that I switch from the RGB input space to a YUV colour space for the intermediate adjustment. I do that using a matrix multiplication, then adjust the brightness channel and apply another matrix that transforms back from YUV to RGB. To me it made the most sense to work in a luma/chroma colour space, and from there I picked YUV fairly arbitrarily, though it has the big advantage for ES purposes of being a simple linear transform of RGB space.
I am under the understanding that the curves tool also remaps the brightness, but according to some function f(x) = y which is monotonically increasing (so it will intersect any horizontal or vertical line exactly once) and is set in the interface as a curve from bottom left to top right somehow.
Because GL ES isn't fantastic with data structures and branching is to be avoided where possible, I'd suggest the best way to implement that is to upload a 256x1 luminance texture where the value at 'x' is f(x). Then you can just map through the secondary texture, e.g. with:
... same as before down to ...
lowp vec4 yuvPixel = rgbToYuv * srcPixel;
yuvPixel.r = texture2D(lookupTexture, vec2(yuvPixel.r, 0.0)).r;
... and as above to convert back to RGB, etc ...
You're using a spare texture unit to index a lookup table, effectively. On iOS devices that support ES 2.0 you get at least eight texture units so you'll hopefully have one spare.
Hue/saturation adjustments are more painful to show because the mapping from RGB to HSV involves a lot of conditionals, but the process is basically the same — map from RGB to HSV, perform the modifications you want on H and S, map back to RGB and output.
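That said, if all you need is a saturation slider, you can sidestep the HSV round trip entirely by interpolating between the pixel and its luma. A sketch in the same style as the levels shader above (the saturation uniform is an invented name):
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;
uniform mediump float saturation; // 0.0 = greyscale, 1.0 = unchanged, >1.0 boosts
void main()
{
    lowp vec4 srcPixel = texture2D(tex2D, texCoordVarying);
    // Rec. 601 luma weights
    lowp float luma = dot(srcPixel.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(mix(vec3(luma), srcPixel.rgb, saturation), srcPixel.a);
}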
Based on a quick Google search, this site offers some downloadable code that includes some Photoshop functions (though not curves or levels as far as I can see) and, significantly, supplies example implementations of the functions RGBToHSL and HSLToRGB. It's for desktop GLSL, which has more predefined variables, types and functions, but you shouldn't have any big problems working around that. Just remember to add precision modifiers and supply your own replacements for the absent min and max functions.
For curves, Photoshop uses bicubic spline interpolation. For a given set of control points you can precalculate all 256 values for each channel and for the master curve. I found it easiest to store the results as a 256x1 texture, pass it to the shader, and then change the value of each component:
uniform sampler2D curvesTexture;

vec3 RGBCurvesAdjustment(vec3 color)
{
    // sample at the vertical texel centre of the 256x1 lookup texture
    return vec3(texture2D(curvesTexture, vec2(color.r, 0.5)).r,
                texture2D(curvesTexture, vec2(color.g, 0.5)).g,
                texture2D(curvesTexture, vec2(color.b, 0.5)).b);
}

Multi-textured Point Sprites in OpenGL ES2.0 on iOS?

I am trying to make a multi-textured point sprite for an iPhone application using OpenGL ES 2.0. I can't find any examples of this on the web, and it doesn't seem to be working. Is there some built-in limitation where gl_PointCoord can't be used with multiple textures when using GL_POINTS mode for point sprites?
uniform sampler2D tex;
uniform sampler2D blur_tex;
vec4 texPixel = texture2D( tex, gl_PointCoord );
vec4 blurPixel = texture2D( blur_tex, gl_PointCoord );
I'm sure I am passing in the textures properly, as I can do multi-texturing just fine in TRIANGLE_STRIP mode, but I am hoping to speed things up using point sprites.
If it is possible, a link to an example of working code would be super helpful. Thanks!
EDIT:
Here's how I'm passing the textures to my shader. This lets me do multi-texturing when I am in TRIANGLE or TRIANGLE_STRIP mode.
//pass in position and tex_coord attributes...
//normal tex
glActiveTexture(0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(SAMPLER_0_UNIFORM, 0);
//blur tex
glActiveTexture(1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex1);
glUniform1i(SAMPLER_1_UNIFORM, 1);
//draw arrays...
However, if I am using POINTS mode, then I never see the second texture. That is, referring to the shader code above, whether I do
gl_FragColor = texPixel;
OR
gl_FragColor = blurPixel;
I see the same texture. Which seems strange. My guess is that you CAN'T do multi-texturing on a point sprite and that somehow having two active textures or two calls to gl_PointCoord causes a problem. But I'm hoping I'm wrong. So if someone has a simple example of multi-texturing working with point sprites in OpenGL ES 2.0, I would be happy to look at that code!
EDIT 2:
vertex shader:
attribute vec4 position;
void main() {
    gl_PointSize = 15.0;
    gl_Position = position;
}
fragment shader:
precision mediump float;
uniform sampler2D tex;
uniform sampler2D blur_tex;
void main() {
    vec4 texPixel = texture2D( tex, gl_PointCoord );
    vec4 blurPixel = texture2D( blur_tex, gl_PointCoord );
    //these both do the same thing even though I am passing in two different textures?!?!?!?
    //gl_FragColor = texPixel;
    gl_FragColor = blurPixel;
}
There is a typo in your main program.
The right parameters to pass to glActiveTexture are GL_TEXTURE0, GL_TEXTURE1, ...
Note that GL_TEXTURE0 and GL_TEXTURE1 do not have the values 0 and 1.
Since you are passing an invalid value to glActiveTexture, the call fails, so the active texture unit always stays at its default; all your changes are going to the texture at position 0.
In my case, where the points are blended, the problem was missing texture parameters:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
I think it may be too late to post this, though.
There are two problems in your code. One is the one that Satyakam has pointed out. The other is that you should NOT use glUniform1f; the right one is glUniform1i. The difference is the f or i on the tail, which means float or integer.

Translating square with shaders on iPhone

I'm trying to update my (little) knowledge of OpenGL ES 1.1 to 2.0 on the iPhone. The default OpenGL ES Application template for the iPhone draws a square and makes it translate up and down, and it works fine. Their implementation does the math for the Y-value changes in the shader itself, which is pretty much useless. So I've changed the vertex shader to:
uniform mat4 mvpMatrix;
attribute vec4 position;
attribute vec4 color;
varying vec4 colorVarying;
void main()
{
    gl_Position = position * mvpMatrix;
    colorVarying = color;
}
Which seems to be correct and common (from what I've seen in my research). Obviously, I made the necessary changes to the code, like binding the uniform, and, to help with the math, I got the sources for the esUtil.h code. In the drawing method, my code looks like this:
transY += 0.075f;
ESMatrix mvp, model, view;
esMatrixLoadIdentity(&view);
esPerspective(&view, 60.0, 320.0/480.0, 1.0, -1.0);
esMatrixLoadIdentity(&model);
esTranslate(&model, sinf(transY), 0.0f, 0.0f);
esMatrixLoadIdentity(&mvp);
esMatrixMultiply(&mvp, &model, &view);
glUniformMatrix4fv(uniforms[UNIFORM_MVPMATRIX], 1, GL_FALSE, (GLfloat *)&mvp);
And that should be working but, unfortunately, what I get is quite different from a simple translation.
I've restarted the template a few times, but I can't figure out what I'm doing wrong here... Rotating seems to work as expected, I believe...
Any help would be appreciated.
I think you want to reverse the order of your position transform, as your matrix library is probably working in column-major order.
gl_Position = position * mvpMatrix;
=>
gl_Position = mvpMatrix * position;
Unknowingly, you have made a camera-position change. In OpenGL ES, camera (global) and object (local) transforms are just inverses of each other: multiplying with the vector on the left is equivalent to multiplying by the transposed matrix, and for the rotation part the transpose is the inverse, so it behaves like moving the camera rather than the object.