Using multiple textures when rendering causes low framerate - iPhone

I tried to use different textures to render the ground of a game. First, I create some textures and upload them to the GPU.
void initialize(float width, float height)
{
    for (int i = 0; i < 10; i++)
    {
        glActiveTexture(GL_TEXTURE1 + i);
        GLuint gridTexture;
        glGenTextures(1, &gridTexture);
        glBindTexture(GL_TEXTURE_2D, gridTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        setPngTexture(gameMap->texture);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
}
Then I use the textures when rendering every frame.
void render() const
{
    for (int i = 0; i < map->maps.size(); i++)
    {
        glUniform1i(uniforms.Sampler, i);
        glBindBuffer(GL_ARRAY_BUFFER, gameMap->verticesbuffer);
        glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, stride, 0);
        glVertexAttribPointer(normal, 3, GL_FLOAT, GL_FALSE, stride, offset);
        glVertexAttribPointer(texture, 2, GL_FLOAT, GL_FALSE, stride, texCoordOffset);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gameMap->indicesbuffer);
        glDrawElements(GL_TRIANGLES, gameMap->indexCount, GL_UNSIGNED_SHORT, 0);
    }
}
The code works properly on the iPhone simulator, but when I test the app on a device it gives me a very low framerate: 2 fps. If I change the line glUniform1i(uniforms.Sampler, i); to glUniform1i(uniforms.Sampler, 0);, the app works fine!
I'm new to OpenGL ES and I'm not sure I'm using multiple textures correctly. Could someone kindly tell me the reason for this problem and how to correct it? Thanks a lot!

There are many things that could be wrong with this, and you don't provide enough code (for example, the shader source) to really diagnose the issue. I'd start by moving glUniform1i(uniforms.Sampler, i); outside of your loop. It makes no sense the way you are currently using it: you're effectively re-pointing uniforms.Sampler at the i-th texture unit for each of map.size() draws. I'm pretty sure this is throwing lots of OpenGL errors. Re-read the documentation on glUniform1i.
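For reference, the conventional approach is to keep the sampler uniform pointing at a single texture unit and rebind the texture per draw call. A minimal sketch (not your actual code; mapTextures, mapCount and program are assumed names for things stored during initialize()):
// Sketch: sample from texture unit 0 and swap the bound texture per draw.
// mapTextures[], mapCount and program are placeholder names.
glUseProgram(program);
glUniform1i(uniforms.Sampler, 0);   // set once: the sampler reads unit 0
glActiveTexture(GL_TEXTURE0);

for (int i = 0; i < mapCount; i++)
{
    glBindTexture(GL_TEXTURE_2D, mapTextures[i]);  // texture for this map tile
    // ... set up vertex attributes exactly as in render() above ...
    glDrawElements(GL_TRIANGLES, gameMap->indexCount, GL_UNSIGNED_SHORT, 0);
}
Note also that your init binds texture i to unit GL_TEXTURE1 + i, so passing i to the sampler points it one unit below the texture you meant; for i = 0 the sampler then reads a unit with no texture bound at all.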

Related

When we bind a texture created in Unity to OpenGL, the internal format is modified

I want to manipulate a texture that has been created in Unity directly with OpenGL.
I create the texture in Unity with these parameters:
_renderTexture = new RenderTexture(_sizeTexture, _sizeTexture, 0,
    RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear)
{
    useMipMap = false,
    autoGenerateMips = false,
    anisoLevel = 6,
    filterMode = FilterMode.Trilinear,
    wrapMode = TextureWrapMode.Clamp,
    enableRandomWrite = true
};
Then I send the texture pointer to a native rendering plugin with the GetNativeTexturePtr() method. In the native rendering plugin I bind the texture with glBindTexture(GL_TEXTURE_2D, gltex); where gltex is the pointer to my Unity texture.
Finally, I check the internal format of my texture with:
GLint format;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
I get format = GL_RGBA8 even though I defined the texture in Unity with the format RenderTextureFormat.ARGB32. You can reproduce this by using Unity's native rendering plugin example and replacing the function RenderAPI_OpenGLCoreES::EndModifyTexture in the file RenderAPI_OpenGLCoreES.cpp with:
void RenderAPI_OpenGLCoreES::EndModifyTexture(void* textureHandle, int textureWidth, int textureHeight, int rowPitch, void* dataPtr)
{
    GLuint gltex = (GLuint)(size_t)(textureHandle);
    // Update texture data, and free the memory buffer
    glBindTexture(GL_TEXTURE_2D, gltex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, textureWidth, textureHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight, GL_RGBA, GL_UNSIGNED_BYTE, dataPtr);
    GLint format;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
    delete[](unsigned char*)dataPtr;
}
Why did the internal format change after binding the texture to OpenGL? And is it possible to "impose" the GL_RGBA32F format on my OpenGL texture?
I tried to use glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); after my binding, but I get the following error: OpenGL error 0x0502 (GL_INVALID_OPERATION).
Sorry if the question is simple; I am new to OpenGL!
I just misread the documentation; moreover, Unity and OpenGL do not use the same naming convention. I thought RenderTextureFormat.ARGB32 was a texture with 32 bits per channel, but it is 32 bits in total, and therefore 8 bits per channel, which corresponds to GL_RGBA8.
There is a summary table of Unity formats here: https://docs.unity3d.com/Manual/class-ComputeShader.html.
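For anyone hitting the same confusion, a short sketch of the matching GL calls (illustrative only; width, height and the pixel pointers are placeholder names). ARGB32 is 8 bits per channel, so the existing storage expects byte data; a true 32-bit-per-channel texture (Unity's RenderTextureFormat.ARGBFloat) would correspond to GL_RGBA32F and float data:
// Matches the GL_RGBA8 storage Unity created for ARGB32 (8 bits per channel):
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, bytePixels);

// A float texture would need both float storage and float client data:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, floatPixels);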

Having issues with drawing to a framebuffer texture. It draws blank

I am using OpenGL ES 2.0 with GLKit, trying to render to iOS devices.
Basically, my goal is to draw to a texture instead of the main buffer, then render that texture to the screen. I have been trying to follow another topic on SO. Unfortunately, they mention something about power of two (I'm assuming with regard to the resolution), but I don't know how to fix it. Anyway, here is my Swift interpretation of the code from that topic.
import Foundation
import GLKit
import OpenGLES

class RenderTexture {
    var framebuffer: GLuint = 0
    var tex: GLuint = 0
    var old_fbo: GLint = 0

    init(width: GLsizei, height: GLsizei)
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glGenFramebuffers(1, &framebuffer)
        glGenTextures(1, &tex)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)
        glClearColor(0, 0.1, 0, 1)
        glClear(GLenum(GL_COLOR_BUFFER_BIT))
        let status = glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        if (status != GLenum(GL_FRAMEBUFFER_COMPLETE))
        {
            print("DIDNT GO WELL WITH", width, " ", height)
            print(status)
        }
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }

    func begin()
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    }

    func end()
    {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }
}
Then, as far as rendering, I have a few things going on.
First, code that theoretically renders any texture full screen. This has been tested with two manually loaded PNGs (using no buffer changes) and works great.
func drawTriangle(texture: GLuint)
{
    loadBuffers()
    //glViewport(0, 0, width, height)
    //glClearColor(0, 0.0, 0, 1.0)
    //glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))
    glEnable(GLenum(GL_TEXTURE_2D))
    glActiveTexture(GLenum(GL_TEXTURE0))
    glUseProgram(texShader)
    let loc1 = glGetUniformLocation(texShader, "s_texture")
    glUniform1i(loc1, 0)
    let loc3 = glGetUniformLocation(texShader, "matrix")
    if (loc3 != -1)
    {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_FALSE), &matrix)
    }
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 6)
    glDisable(GLenum(GL_TEXTURE_2D))
    destroyBuffers()
}
I also have a function that draws a couple of dots on the screen. You don't really need to see its methods, but it works. This is how I will know that OpenGL is drawing from the buffer texture and NOT a preloaded texture.
Finally here is the gist of the code I am trying to do.
func initialize()
{
    nfbo = RenderTexture(width: width, height: height)
}

func draw()
{
    glViewport(0, 0, GLsizei(width * 2), GLsizei(height * 2)) // why do I have to multiply by 2 to get it to work?????
    nfbo.begin()
    drawDots() // Draws the dots
    nfbo.end()
    reset()
    drawTriangle(nfbo.tex)
}
At the end of all this, all that is drawn is a blank screen. If there is any more code that would help you figure things out, let me know. I tried to trim it to make it less annoying for you.
Note: Considering the whole power-of-two thing, I have tried passing the FBO class 512 x 512, just in case being a power of two would make things work. Unfortunately, it didn't.
Another note: Everything I am doing is going to be 2D, so I don't need depth buffers, right?
Yesterday I saw exactly the same issue. After struggling for hours, I found out why.
The trick is configuring your texture map with the following:
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE);
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE);
Otherwise, you won't draw anything on the texture map.
The reason seems to be that while iOS supports texture maps that are not a power of two, it requires GL_CLAMP_TO_EDGE; otherwise it won't work.
It should really report an incomplete framebuffer. It took me quite a long time to debug this problem!
Here is a related discussion:
Rendering to non-power-of-two texture on iPhone
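For comparison, here is the same idea written out as plain C for the texture that backs the FBO; a sketch only, reusing the tex/width/height names from the question's init and assuming a non-power-of-two size (so no mipmaps, linear filtering, clamped wrap modes):
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // no mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // required for NPOT in ES 2.0
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);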

OpenGL ES 2d rendering into image

I need to write an OpenGL ES two-dimensional renderer on iOS. It should draw primitives such as lines and polygons into a 2D image (it will be rendering a vector map). Which way is best for getting an image from the OpenGL context for that task? I mean, should I render these primitives into a texture and then get the image from it, or what? Also, it would be great if someone could give examples or tutorials that look like what I need (2D GL rendering into an image). Thanks in advance!
If you need to render an OpenGL ES 2-D scene, then extract an image of that scene to use outside of OpenGL ES, you have two main options.
The first is to simply render your scene and use glReadPixels() to grab RGBA data for the scene and place it in a byte array, like in the following:
GLubyte *rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
// Do something with the image
free(rawImagePixels);
The second, and much faster, way of doing this is to render your scene to a texture-backed framebuffer object (FBO), where the texture has been provided by iOS 5.0's texture caches. I describe this approach in this answer, although I don't show the code for raw data access there.
You do the following to set up the texture cache and bind the FBO texture:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &rawDataTextureCache);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d");
}

// Code originally sourced from http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)imageSize.width,
                    (int)imageSize.height,
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &renderTarget);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             rawDataTextureCache, renderTarget,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)imageSize.width,
                                             (int)imageSize.height,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
CFRelease(attrs);
CFRelease(empty);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
and then you can just read directly from the bytes that back this texture (in BGRA format, not the RGBA of glReadPixels()) using something like:
CVPixelBufferLockBaseAddress(renderTarget, 0);
_rawBytesForImage = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// Do something with the bytes
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
However, if you just want to reuse your image within OpenGL ES, you just need to render your scene to a texture-backed FBO and then use that texture in your second level of rendering.
I show an example of rendering to a texture, and then performing some processing on it, within the CubeExample sample application within my open source GPUImage framework, if you want to see this in action.
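As a rough outline of that two-pass flow in plain C (names such as sceneFBO, sceneTexture, defaultFBO, drawScene() and drawFullscreenQuad() are placeholders, not GPUImage API):
// Pass 1: render the 2-D scene into the texture-backed FBO.
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glViewport(0, 0, texWidth, texHeight);
drawScene();

// Pass 2: switch back to the default framebuffer and sample that texture.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
glViewport(0, 0, screenWidth, screenHeight);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sceneTexture);
glUniform1i(samplerUniform, 0);
drawFullscreenQuad();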

Font not rendered well with FreeType and OpenGL ES 2 (iPhone device)

I'm using FreeType to draw text on an iPhone device, and the results are odd. As you can see in the image, the same characters (like 'b', 'n' or 'u') aren't rendered consistently.
The texture is always the same. Any idea where the problem is or what's going on?
http://craneossgd.files.wordpress.com/2011/10/image_freetype1.png
Code:
FT_New_Face(getFTLibrary(), _filename, 0, &(_face));
FT_Select_Charmap(_face, FT_ENCODING_UNICODE);
FT_Set_Char_Size(_face, _pt << 6, _pt << 6, _dpi, _dpi);
for (i = 0; i < 255; i++)
{
    if (!createGlyphTexture(i))
    {
        clean();
    }
}
.....
createGlyphTexture(unsigned char ch){
    ....
    FT_Load_Glyph(_face, FT_Get_Char_Index(_face, ch), FT_LOAD_DEFAULT);
    FT_Get_Glyph(_face->glyph, &glyph);
    ....
    // *** Transform glyph to bitmap
    FT_Glyph_To_Bitmap(&glyph, FT_RENDER_MODE_NORMAL, 0, 1);
    // *** Get bitmap and glyph data
    bitmap_glyph = (FT_BitmapGlyph)glyph;
    bitmap = bitmap_glyph->bitmap;
    // ***
    width = pow2(bitmap.width);
    height = pow2(bitmap.rows);
    // *** Alloc memory for texture (luminance + alpha per texel)
    expanded_data = (GLubyte *)malloc(sizeof(GLubyte) * 2 * width * height);
    for (j = 0; j < height; j++)
    {
        for (i = 0; i < width; i++)
        {
            if ((i >= (CRuint)bitmap.width) || (j >= (CRuint)bitmap.rows)) {
                expanded_data[2 * (i + j * width)] = 0;
                expanded_data[2 * (i + j * width) + 1] = 0;
            } else {
                expanded_data[2 * (i + j * width)] = bitmap.buffer[i + bitmap.width * j];
                expanded_data[2 * (i + j * width) + 1] = bitmap.buffer[i + bitmap.width * j];
            }
        }
    }
    // *** Load texture into memory
    glBindTexture(GL_TEXTURE_2D, _textures[ch]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, width, height, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, expanded_data);
    ....
}
Thanks!!
From the FreeType reference:
http://www.freetype.org/freetype2/docs/reference/ft2-base_interface.html
FT_LOAD_TARGET_LIGHT.
A lighter hinting algorithm for non-monochrome modes. Many generated glyphs are more fuzzy but better resemble its original shape. A bit like rendering on Mac OS X. As a special exception, this target implies FT_LOAD_FORCE_AUTOHINT.
FT_RENDER_MODE_LIGHT.This is equivalent to FT_RENDER_MODE_NORMAL. It is only defined as a separate value because render modes are also used indirectly to define hinting algorithm selectors. See FT_LOAD_TARGET_XXX for details.
It works using the following configuration:
FT_Load_Glyph(_face, FT_Get_Char_Index(_face,ch), FT_LOAD_TARGET_LIGHT)
...
FT_Glyph_To_Bitmap(&glyph, FT_RENDER_MODE_NORMAL, 0, 1);

OpenGL ES 2.0 texturing

I'm trying to render a simple textured quad in OpenGL ES 2.0 on an iPhone. The geometry is fine and I get the expected quad if I use a solid color in my shader:
gl_FragColor = vec4 (1.0, 0.0, 0.0, 1.0);
And I get the expected gradients if I render the texture coordinates directly:
gl_FragColor = vec4 (texCoord.x, texCoord.y, 0.0, 1.0);
The image data is loaded from a UIImage, scaled to fit within 1024x1024, and loaded into a texture like so:
glGenTextures (1, &_texture);
glBindTexture (GL_TEXTURE_2D, _texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, data);
width, height, and the contents of data are all correct, as examined in the debugger.
When I change my fragment shader to use the texture:
gl_FragColor = texture2D (tex, texCoord);
... and bind the texture and render like so:
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, _texture);
// this is unnecessary, as it defaults to 0, but for completeness...
GLuint texLoc = glGetUniformLocation(_program, "tex");
glUniform1i(texLoc, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
... I get nothing. A black quad. glGetError() doesn't return an error and glIsTexture(_texture) returns true.
What am I doing wrong here? I've been over and over every example I could find online, but everybody is doing it exactly as I am, and the debugger shows my parameters to the various GL functions are what I expect them to be.
After glTexImage2D, set the MIN/MAG filters with glTexParameteri; the default minification filter uses mipmaps, so with that code the texture is incomplete.
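For example (a sketch, to go right after the glTexImage2D call above):
// Either use filters that don't need mipmaps ...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// ... or keep the mipmapping default and actually generate the mipmaps:
// glGenerateMipmap(GL_TEXTURE_2D);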
I was experiencing the same issue (black quad) and could not find an answer until a response by jfcalvo from this question led me to the cause. Basically make sure you are not loading the texture in a different thread.
Make sure you set the texture wrap parameters to GL_CLAMP_TO_EDGE in both the S and T directions. Without this (for non-power-of-two sizes), the texture is incomplete and will appear black.
Make sure that you are calling glTexImage2D with the right formats (constants), and make sure that you free the image resources after glTexImage2D.
This is how I do it on Android:
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
mTextureID = textures[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
        GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
        GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
        GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
        GLES20.GL_REPEAT);

InputStream is = mContext.getResources().openRawResource(R.drawable.ywemmo2);
Bitmap bitmap;
try {
    bitmap = BitmapFactory.decodeStream(is);
} finally {
    try {
        is.close();
    } catch (IOException e) {
        // Ignore.
    }
}

GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
Maybe you forgot to call
glEnable(GL_TEXTURE_2D);
In such a case, texture2D in the shader would return black, which seems to be what the OP is experiencing.