I have an iPhone application that models the planet Earth! I would like to make it realistic: there is a sphere object with a nightside and a dayside texture and shader, but it doesn't work!
My Sphere object's draw method:
- (bool)execute:(GLuint)texture
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindVertexArrayOES(m_VertexArrayName);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, m_NumVertices);
    glBindTexture(GL_TEXTURE_2D, 0);
    return true;
}
My ViewController's draw method:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
    glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);

    glUseProgram(m_NightsideProgram);
    [m_Sphere setBlendMode:0];
    [m_Sphere execute:m_EarthNightTexture.name];

    glUseProgram(m_DaysideProgram);
    [m_Sphere setBlendMode:1];
    [m_Sphere execute:m_EarthDayTexture.name];

    glCullFace(GL_FRONT);
    glEnable(GL_CULL_FACE);
    glFrontFace(GL_CW);
}
Blend modes:
0: glBlendFunc(GL_SRC_COLOR, GL_DST_COLOR);
1: glBlendFunc(GL_ONE, GL_CONSTANT_COLOR); // The constant is a mid blue, to lighten it a bit
Nightside fragment shader:
precision mediump float;

varying lowp vec4 colorVarying;
varying vec2 v_texCoord;

uniform sampler2D s_texture;

void main() {
    vec4 newColor;
    newColor = 1.0 - colorVarying;
    gl_FragColor = texture2D(s_texture, v_texCoord) * newColor;
}
Dayside fragment shader:
precision mediump float;

varying lowp vec4 colorVarying;
varying lowp vec4 specularColorVarying;
varying vec2 v_texCoord;

uniform sampler2D s_texture;

void main() {
    vec4 finalSpecular = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 surfaceColor;
    float halfBlue;

    surfaceColor = texture2D(s_texture, v_texCoord);
    halfBlue = 0.5 * surfaceColor[2];
    if (halfBlue > 1.0)
        halfBlue = 1.0;
    if ((surfaceColor[0] < halfBlue) && (surfaceColor[1] < halfBlue))
        finalSpecular = specularColorVarying;
    gl_FragColor = surfaceColor * colorVarying + colorVarying * finalSpecular;
}
If I use only one of the shaders, it seems to be fine, but they won't work together!
For a glUniform... call to take effect, there has to be a valid program bound/used, and even then it only changes the uniform value for that specific program (identified by the uniform location). So you have to call your glUniform... functions for each program after the respective glUseProgram.
This is why it works with only one shader program: you never bind any other program. But it is still conceptually wrong, because in that case you are relying on a specific program already being bound, which is always a source of errors (like when adding a second program), since OpenGL is a state machine.
On the other hand, a uniform variable keeps its value even when its corresponding program gets unbound (by glUseProgram(0) or binding any other program).
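To make the state-machine behavior concrete, here is a toy Python model of per-program uniform storage (the class and names are illustrative, not a real GL binding): each program object owns its own uniform values, glUniform* writes only to the currently bound program, and values persist while a program is unbound.

```python
# Toy model of OpenGL's per-program uniform state. Not real GL:
# it only illustrates why glUniform* must follow glUseProgram.
class ToyGL:
    def __init__(self):
        self.programs = {}   # program id -> {uniform name: value}
        self.current = None  # currently bound program id

    def create_program(self, pid):
        self.programs[pid] = {}

    def use_program(self, pid):
        self.current = pid

    def uniform(self, name, value):
        # glUniform* affects ONLY the program bound right now
        self.programs[self.current][name] = value

gl = ToyGL()
gl.create_program(1)  # e.g. the nightside program
gl.create_program(2)  # e.g. the dayside program

gl.use_program(1)
gl.uniform("mvpMatrix", "M1")  # lands on program 1 only
gl.use_program(2)              # program 2 never received the uniform

assert gl.programs[1]["mvpMatrix"] == "M1"   # value persists while unbound
assert "mvpMatrix" not in gl.programs[2]     # the other program is untouched
```

This is exactly the bug in the draw method above: the matrices are uploaded before either glUseProgram call, so at most one program (whichever happened to be bound) ever sees them.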
Related
I have tried to draw some 3D squares (with OpenGL on iPhone) and rotate them around, so that together they look like a sphere.
http://i618.photobucket.com/albums/tt265/LoyalMoral/Post/ScreenShot2013-05-15at23249PM.png
But each square is flat (the first one in the image below), and I want to flex it:
http://i618.photobucket.com/albums/tt265/LoyalMoral/Post/Untitled-1.jpg
Someone told me that I have to use GLSL, but I don't know the shading language.
These are my vertex and fragment shaders (following Ray Wenderlich's tutorial):
// Vertex.glsl
attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec2 TexCoordIn;

varying vec4 DestinationColor;
varying vec2 TexCoordOut;

uniform mat4 Projection;
uniform mat4 Modelview;

void main(void) {
    DestinationColor = SourceColor;
    gl_Position = Projection * Modelview * Position;
    TexCoordOut = TexCoordIn;
}
// Fragment.glsl
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut;

uniform sampler2D Texture;

void main(void) {
    gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut);
}
could somebody help me? :)
Instead of using a quad (a pair of triangles) for the square, use a grid of vertices. That way you can position each grid vertex manually, producing the shape you want.
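A minimal sketch of that suggestion (my own illustration, not code from the question): instead of 4 quad corners, generate an (n+1) × (n+1) grid of vertex positions spanning [-1, 1] × [-1, 1], which you can later displace (e.g. push each vertex out onto a sphere) to curve the surface.

```python
# Build an (n+1) x (n+1) grid of 2D vertex positions covering [-1, 1]^2.
# Each vertex can then be displaced individually to bend the surface.
def make_grid(n):
    step = 2.0 / n
    return [(-1.0 + x * step, -1.0 + y * step)
            for y in range(n + 1) for x in range(n + 1)]

grid = make_grid(4)              # a grid of 4x4 cells -> 5x5 = 25 vertices
assert len(grid) == 25
assert grid[0] == (-1.0, -1.0)   # first corner
assert grid[-1] == (1.0, 1.0)    # opposite corner
```

The finer the grid, the smoother the curved result; the bending itself can then happen on the CPU or in the vertex shader.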
I am trying to learn shaders to implement something in my iPhone app. So far I have understood easy examples like converting a color image to grayscale, thresholding, etc. Most of the examples involve simple operations in which processing the input image pixel I(x,y) results in a simple modification of the same pixel's color.
But what about convolutions? For example, the easiest example would be the Gaussian filter,
where the output pixel O(x,y) depends not only on I(x,y) but also on the surrounding 8 pixels:
O(x,y) = (I(x,y) + surrounding 8 pixel values) / 9;
Normally this cannot be done with a single image buffer, or the input pixels would change as the filter is performed. How can I do this with shaders? Also, should I handle the borders myself, or is there a built-in function or something that checks for invalid pixel access like I(-1,-1)?
Thanks in advance
PS: I will be generous (read: give a lot of points) ;)
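For reference, the averaging formula above can be sketched numerically on the CPU. This toy version (my own illustration) clamps out-of-range coordinates to the edge, which mirrors what the GL_CLAMP_TO_EDGE texture wrap mode gives you for free in a shader:

```python
# 3x3 box blur with clamp-to-edge border handling, mirroring
# GL_CLAMP_TO_EDGE: out-of-range reads repeat the border pixel.
def box_blur_3x3(img):
    h, w = len(img), len(img[0])
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    total += img[clamp(y + dy, 0, h - 1)][clamp(x + dx, 0, w - 1)]
            out[y][x] = total / 9.0  # O(x,y) = average of the 3x3 neighborhood
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
blurred = box_blur_3x3(img)
# the center pixel becomes the neighborhood average: 9/9 = 1.0
```

The output is written into a separate buffer, which is exactly what a shader pass does: it reads the source texture (which never changes during the pass) and writes into a different render target.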
A highly optimized shader-based approach for performing a nine-tap Gaussian blur was presented by Daniel Rákos. His process uses the underlying interpolation provided by texture filtering in hardware to perform a nine-tap filter using only five texture reads per pass. This is also split into separate horizontal and vertical passes to further reduce the number of texture reads required.
I rolled an implementation of this, tuned for OpenGL ES and the iOS GPUs, into my image processing framework (under the GPUImageFastBlurFilter class). In my tests, it can perform a single blur pass of a 640x480 frame in 2.0 ms on an iPhone 4, which is pretty fast.
I used the following vertex shader:
attribute vec4 position;
attribute vec2 inputTextureCoordinate;

uniform mediump float texelWidthOffset;
uniform mediump float texelHeightOffset;

varying mediump vec2 centerTextureCoordinate;
varying mediump vec2 oneStepLeftTextureCoordinate;
varying mediump vec2 twoStepsLeftTextureCoordinate;
varying mediump vec2 oneStepRightTextureCoordinate;
varying mediump vec2 twoStepsRightTextureCoordinate;

void main()
{
    gl_Position = position;

    vec2 firstOffset = vec2(1.3846153846 * texelWidthOffset, 1.3846153846 * texelHeightOffset);
    vec2 secondOffset = vec2(3.2307692308 * texelWidthOffset, 3.2307692308 * texelHeightOffset);

    centerTextureCoordinate = inputTextureCoordinate;
    oneStepLeftTextureCoordinate = inputTextureCoordinate - firstOffset;
    twoStepsLeftTextureCoordinate = inputTextureCoordinate - secondOffset;
    oneStepRightTextureCoordinate = inputTextureCoordinate + firstOffset;
    twoStepsRightTextureCoordinate = inputTextureCoordinate + secondOffset;
}
and the following fragment shader:
precision highp float;

uniform sampler2D inputImageTexture;

varying mediump vec2 centerTextureCoordinate;
varying mediump vec2 oneStepLeftTextureCoordinate;
varying mediump vec2 twoStepsLeftTextureCoordinate;
varying mediump vec2 oneStepRightTextureCoordinate;
varying mediump vec2 twoStepsRightTextureCoordinate;

// const float weight[3] = float[]( 0.2270270270, 0.3162162162, 0.0702702703 );

void main()
{
    lowp vec3 fragmentColor = texture2D(inputImageTexture, centerTextureCoordinate).rgb * 0.2270270270;
    fragmentColor += texture2D(inputImageTexture, oneStepLeftTextureCoordinate).rgb * 0.3162162162;
    fragmentColor += texture2D(inputImageTexture, oneStepRightTextureCoordinate).rgb * 0.3162162162;
    fragmentColor += texture2D(inputImageTexture, twoStepsLeftTextureCoordinate).rgb * 0.0702702703;
    fragmentColor += texture2D(inputImageTexture, twoStepsRightTextureCoordinate).rgb * 0.0702702703;

    gl_FragColor = vec4(fragmentColor, 1.0);
}
The two passes are achieved by sending a 0 value for texelWidthOffset (for the vertical pass), and then feeding that result into a second run where you give a 0 value for texelHeightOffset (for the horizontal pass).
I also have some more advanced examples of convolutions in the above-linked framework, including Sobel edge detection.
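For the curious, those magic constants are not arbitrary. They fall out of a 9-tap binomial approximation of the Gaussian, with pairs of adjacent taps merged into single bilinear reads. A small Python reconstruction of the derivation (my own sketch, not part of the framework):

```python
# Derive Rakos's blur constants: a 9-tap binomial Gaussian approximation
# collapsed to 5 texture reads via bilinear filtering.
from math import comb, isclose

# Row 12 of Pascal's triangle, dropping the two outermost coefficients on
# each side, gives the 9 discrete taps; normalize by the remaining sum.
coeffs = [comb(12, k) for k in range(2, 7)]      # [66, 220, 495, 792, 924]
total = 4096 - 2 * (comb(12, 0) + comb(12, 1))   # 2^12 minus dropped taps = 4070
w4, w3, w2, w1, w0 = [c / total for c in coeffs]

assert isclose(w0, 0.2270270270, abs_tol=1e-9)   # the center-tap weight

# Bilinear filtering lets one read sample two adjacent texels: place the
# tap at the weighted midpoint and use the pair's summed weight.
offset1 = (1 * w1 + 2 * w2) / (w1 + w2)
offset2 = (3 * w3 + 4 * w4) / (w3 + w4)
assert isclose(offset1, 1.3846153846, abs_tol=1e-9)  # firstOffset factor
assert isclose(offset2, 3.2307692308, abs_tol=1e-9)  # secondOffset factor
assert isclose(w1 + w2, 0.3162162162, abs_tol=1e-9)  # one-step weight
assert isclose(w3 + w4, 0.0702702703, abs_tol=1e-9)  # two-steps weight
```

The five per-read weights in the fragment shader above, plus the two offset factors in the vertex shader, are exactly these values.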
Horizontal blur taking advantage of bilinear interpolation. The vertical pass is analogous. Unroll the loop to optimize. (Sketch only: v_texCoord and u_texelWidth are assumed to be supplied by the vertex shader and a uniform, and array initializers require GLSL ES 3.00.)
// 5 sample points cover 10 texels, thanks to bilinear interpolation
float offset[5] = float[](-4.0, -2.0, 0.0, 2.0, 4.0); // in texels
// binomial weights 1 4 6 4 1 (sum = 16), pre-divided by 16:
float weightInverse[5] = float[](0.0625, 0.25, 0.375, 0.25, 0.0625);
vec4 finalColor = vec4(0.0);
for (int i = 0; i < 5; i++)
    finalColor += texture2D(inputImage, v_texCoord + vec2(offset[i] * u_texelWidth, 0.0)) * weightInverse[i];
I have used the GPUImage framework for a blur effect similar to that of the Instagram application, where I have made a view for getting a picture from the photo library and then I put an effect on it.
One of the effects is a selective blur in which only a small part of the image is sharp; the rest is blurred. The GPUImageGaussianSelectiveBlurFilter chooses a circular part of the image to leave unblurred.
How can I alter this to make the sharp region be rectangular in shape instead?
Because Gill's answer isn't exactly correct, and since this seems to be getting asked over and over, I'll clarify my comment above.
The fragment shader for the selective blur by default has the following code:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;

uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;

void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);

    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    highp float distanceFromCenter = distance(excludeCirclePoint, textureCoordinateToUse);

    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
This fragment shader takes in a pixel color value from both the original sharp image and a Gaussian blurred version of the image. It then blends between these based on the logic of the last three lines.
The first two of these lines calculate the distance from the center coordinate you specify ((0.5, 0.5) in normalized coordinates by default, the dead center of the image) to the current pixel's coordinate. The last line uses the smoothstep() GLSL function to smoothly interpolate between 0 and 1 as the distance from the center crosses between two thresholds: the inner clear circle and the outer, fully blurred circle. The mix() function then takes the output of smoothstep() and fades between the blurred and sharp pixel colors to produce the appropriate output.
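To see the blend logic in isolation, here is a numeric Python sketch of the last line, using the standard GLSL definitions of smoothstep() and mix() (my own illustration, with made-up radius values):

```python
# GLSL smoothstep: clamp, then a cubic Hermite ramp from 0 to 1.
def smoothstep(edge0, edge1, x):
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

# GLSL mix: linear interpolation between a and b by t.
def mix(a, b, t):
    return a * (1.0 - t) + b * t

radius, blur_size = 0.4, 0.1  # hypothetical excludeCircleRadius / excludeBlurSize
inside  = smoothstep(radius - blur_size, radius, 0.1)  # well inside the circle
outside = smoothstep(radius - blur_size, radius, 0.5)  # well outside the circle
assert inside == 0.0 and outside == 1.0

# mix(sharp, blurred, t): inside keeps the sharp color, outside the blurred one
assert mix(1.0, 0.0, inside) == 1.0   # sharp pixel value kept
assert mix(1.0, 0.0, outside) == 0.0  # fully blurred
```

Pixels whose distance falls between the two thresholds get an intermediate t, producing the soft edge of the clear region.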
If you just want to modify this to produce a square shape instead of the circular one, you need to adjust the two center lines in the fragment shader to base the distance on linear X or Y coordinates, not a Pythagorean distance from the center point. To do this, change the shader to read:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;

uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;

void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);

    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse);
    highp float distanceFromCenter = max(textureCoordinateToUse.x, textureCoordinateToUse.y);

    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
The lines that Gill mentions are just input parameters for the filter, and don't control its circularity at all.
I leave modifying this further to produce a generic rectangular shape as an exercise for the reader, but this should provide a basis for how you could do this and a bit more explanation of what the lines in this shader do.
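As a side note on why max(abs(dx), abs(dy)) yields a square: it is the Chebyshev distance, whose iso-contours are axis-aligned squares, where the Euclidean distance() gives circles. A quick numeric check (my own illustration):

```python
# Chebyshev distance: iso-contours are axis-aligned squares around the center.
def chebyshev(cx, cy, x, y):
    return max(abs(cx - x), abs(cy - y))

center = (0.5, 0.5)
corner = chebyshev(*center, 0.25, 0.25)  # a corner of the square contour
edge   = chebyshev(*center, 0.5, 0.75)   # the midpoint of one edge
assert corner == edge == 0.25            # same distance -> same square contour
```

So feeding this distance into the unchanged smoothstep()/mix() line turns the clear circle into a clear square of half-width excludeCircleRadius.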
Did it ... the code for the rectangular effect is just in these 2 lines
blurFilter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCircleRadius:80.0/320.0];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCirclePoint:CGPointMake(0.5f, 0.5f)];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setBlurSize:0.0f];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setAspectRatio:0.0f];
I am trying to make a multi-textured point sprite for an iPhone application using OpenGL ES 2.0. I can't find any examples of this on the web, and it doesn't seem to be working. Is there some built-in limitation where gl_PointCoord can't be used with multiple textures when using GL_POINTS mode for point sprites?
uniform sampler2D tex;
uniform sampler2D blur_tex;
vec4 texPixel = texture2D( tex, gl_PointCoord );
vec4 blurPixel = texture2D( blur_tex, gl_PointCoord );
I'm sure I am passing in the textures properly, as I can do multi-texturing just fine in TRIANGLE_STRIP mode, but I am hoping to speed things up using point sprites.
If it is possible, a link to an example of working code would super helpful. Thanks!
EDIT:
Here's how I'm passing in the textures to my shader. This lets me do multi-texturing when I am in TRIANGLE or TRIANGLE_STRIP mode.
//pass in position and tex_coord attributes...
//normal tex
glActiveTexture(0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(SAMPLER_0_UNIFORM, 0);
//blur tex
glActiveTexture(1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex1);
glUniform1i(SAMPLER_1_UNIFORM, 1);
//draw arrays...
However if I am using POINTS mode then I never see the second texture. That is, referring to the shader code above, whether I do
gl_FragColor = texPixel;
OR
gl_FragColor = blurPixel;
I see the same texture. Which seems strange. My guess is that you CAN'T do multi-texturing on a point sprite and somehow having two active textures or two calls to gl_PointCoord causes a problem. But I'm hoping I'm wrong. So if someone has a simple example of multi-texturing working with point sprites in OpenGL ES 2.0 I would be happy to look at that code!
EDIT 2:
vertex shader:
attribute vec4 position;

void main() {
    gl_PointSize = 15.0;
    gl_Position = position;
}
fragment shader:
precision mediump float;

uniform sampler2D tex;
uniform sampler2D blur_tex;

void main() {
    vec4 texPixel = texture2D(tex, gl_PointCoord);
    vec4 blurPixel = texture2D(blur_tex, gl_PointCoord);

    // These both do the same thing, even though I am passing in two different textures?!
    //gl_FragColor = texPixel;
    gl_FragColor = blurPixel;
}
There is a typo in your main program.
The correct values to pass to glActiveTexture are GL_TEXTURE0, GL_TEXTURE1, and so on.
Note that GL_TEXTURE0, GL_TEXTURE1, etc. do not have the values 0, 1, etc.
Since you are passing an invalid value to glActiveTexture, the call fails, so the active texture unit always stays at its default (unit 0) and all your changes go to the texture bound at unit 0.
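Concretely, GL_TEXTURE0 is defined as 0x84C0 in the OpenGL ES 2.0 headers, with the other units numbered consecutively after it. A tiny Python sketch of the index-to-enum mapping (illustrative; the constant value is from the gl2.h header):

```python
# GL_TEXTURE0's value from the OpenGL ES 2.0 headers (gl2.h)
GL_TEXTURE0 = 0x84C0

def texture_unit_enum(i):
    """Convert a texture unit index to the enum glActiveTexture expects."""
    return GL_TEXTURE0 + i

assert texture_unit_enum(0) == 0x84C0  # GL_TEXTURE0
assert texture_unit_enum(1) == 0x84C1  # GL_TEXTURE1
assert texture_unit_enum(0) != 0       # a bare 0 is NOT GL_TEXTURE0
```

So in the code above, `glActiveTexture(1)` should be `glActiveTexture(GL_TEXTURE1)` (and likewise for unit 0); the common idiom in C is `glActiveTexture(GL_TEXTURE0 + i)`.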
In my case there was blending for points. The problem was missing texture parameters:
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
It may be too late to post this, though.
There are two problems in your code. One is the one that Satyakam has pointed out. The other is that you should NOT use glUniform1f; the correct call is glUniform1i. The difference is the f or i suffix, meaning float or integer.
What's the best way to do alpha blending in a fragment shader?
I tried to do this naïvely with a fragment shader that looks like this:
varying lowp vec4 colorVarying;

void main()
{
    lowp vec4 alpha = colorVarying.wwww;
    const lowp vec4 one = vec4(1.0, 1.0, 1.0, 1.0);
    lowp vec4 oneMinusAlpha = one - alpha;

    gl_FragColor = gl_FragColor * oneMinusAlpha + colorVarying * alpha;
    gl_FragColor.w = 1.0;
}
But this doesn't work, because it seems gl_FragColor does not contain anything meaningful before the shader runs.
What's the correct approach?
Alpha blending is done for you. On shader exit, gl_FragColor should hold the alpha value in its w component, and you set the blending mode with the normal API, just as if there were no shader at all. For example, gl_FragColor = vec4(0.0, 1.0, 0.0, 0.5) will result in a green, 50% transparent fragment.
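A numeric sketch of what the fixed-function blend stage computes with the usual glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) setup (my own illustration): the hardware combines the shader's output (src) with the value already in the framebuffer (dst), which is exactly what the shader above cannot do by reading gl_FragColor.

```python
# Fixed-function alpha blending: out = src * srcAlpha + dst * (1 - srcAlpha),
# i.e. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
def blend(src_rgb, src_a, dst_rgb):
    return tuple(s * src_a + d * (1.0 - src_a)
                 for s, d in zip(src_rgb, dst_rgb))

# gl_FragColor = vec4(0, 1, 0, 0.5) drawn over a red framebuffer pixel:
result = blend((0.0, 1.0, 0.0), 0.5, (1.0, 0.0, 0.0))
assert result == (0.5, 0.5, 0.0)  # half green, half red
```

In C you would enable this once before drawing with glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); the fragment shader only needs to output the correct alpha.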