I just updated to iOS 5 and now get a GL_INVALID_OPERATION error in glRenderbufferStorage.
Code:
RenderingEngine::RenderingEngine(...)
{
    glGenRenderbuffers(1, &m_hColorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, m_hColorRenderBuffer);
    ...
}

void RenderingEngine::Initialize(...)
{
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, s_ScreenWidth, s_ScreenHeight);
    ...
}
Instruments says:
Responsible Command:
GL_INVALID_OPERATION <- glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, 640, 960)
Recommendation:
The specified operation is invalid for the current OpenGL state. Due to this error, the function call has no effect.
http://www.khronos.org/opengles/sdk/docs/man/xhtml/glRenderbufferStorage.xml says:
GL_INVALID_OPERATION is generated if the reserved renderbuffer object name 0 is bound.
I already confirmed that m_hColorRenderBuffer is 1. I also tried calling glBindRenderbuffer again in Initialize(); the result is the same.
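For reference, this is roughly how I check the binding just before the failing call (a minimal sketch; the logging is only illustrative and not part of the engine):

GLint boundRenderbuffer = 0;
glGetIntegerv(GL_RENDERBUFFER_BINDING, &boundRenderbuffer);
printf("renderbuffer bound before glRenderbufferStorage: %d\n", boundRenderbuffer);

// Re-bind defensively on the current context, then allocate storage and check the error state.
glBindRenderbuffer(GL_RENDERBUFFER, m_hColorRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, s_ScreenWidth, s_ScreenHeight);
printf("glGetError() after glRenderbufferStorage: 0x%x\n", glGetError());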
How do I fix this?
I am using OpenGL ES 2.0 with GLKit, trying to render to iOS devices.
Basically my goal is, instead of drawing to the main buffer, to draw to a texture and then render that texture to the screen. I have been trying to follow another topic on SO. Unfortunately they mention something about power of two (I'm assuming with regard to resolution), but I don't know how to fix it. Anyway, here is my Swift interpretation of the code from that topic.
import Foundation
import GLKit
import OpenGLES
class RenderTexture {
    var framebuffer: GLuint = 0
    var tex: GLuint = 0
    var old_fbo: GLint = 0

    init(width: GLsizei, height: GLsizei)
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)

        glGenFramebuffers(1, &framebuffer)
        glGenTextures(1, &tex)

        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)

        glClearColor(0, 0.1, 0, 1)
        glClear(GLenum(GL_COLOR_BUFFER_BIT))

        let status = glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        if (status != GLenum(GL_FRAMEBUFFER_COMPLETE))
        {
            print("DIDNT GO WELL WITH", width, " ", height)
            print(status)
        }

        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }

    func begin()
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    }

    func end()
    {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }
}
As for rendering, I have a few things going on.
First, code that theoretically renders any texture full screen. It has been tested with two manually loaded PNGs (using no buffer changes) and works great.
func drawTriangle(texture: GLuint)
{
    loadBuffers()

    //glViewport(0, 0, width, height)
    //glClearColor(0, 0.0, 0, 1.0)
    //glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))

    glEnable(GLenum(GL_TEXTURE_2D))
    glActiveTexture(GLenum(GL_TEXTURE0))

    glUseProgram(texShader)

    let loc1 = glGetUniformLocation(texShader, "s_texture")
    glUniform1i(loc1, 0)

    let loc3 = glGetUniformLocation(texShader, "matrix")
    if (loc3 != -1)
    {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_FALSE), &matrix)
    }

    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 6)

    glDisable(GLenum(GL_TEXTURE_2D))

    destroyBuffers()
}
I also have a function that draws a couple of dots on the screen. You don't really need to see its methods, but it works. This is how I will know that OpenGL is drawing from the buffer texture and NOT a preloaded texture.
Finally, here is the gist of the code I am trying to run.
func initialize()
{
    nfbo = RenderTexture(width: width, height: height)
}

func draw()
{
    glViewport(0, 0, GLsizei(width * 2), GLsizei(height * 2)) // why do I have to multiply by 2 to get it to work?????
    nfbo.begin()
    drawDots() // Draws the dots
    nfbo.end()
    reset()
    drawTriangle(nfbo.tex)
}
At the end of all this, all that is drawn is a blank screen. If there is any more code that would help you figure things out, let me know; I tried to trim it down to make it less annoying for you.
Note: Considering the whole power-of-two thing, I have tried passing the FBO class 512 x 512, just in case being a power of two would make things work. Unfortunately it didn't.
Another note: Everything I am doing is going to be 2D, so I don't need a depth buffer, right?
Yesterday I saw exactly the same issue.
After struggling for hours, I found out why.
The trick is configuring your texture map with the following:
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE);
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE);
Otherwise, you won't draw anything onto the texture map.
The reason seems to be that while iOS supports texture maps that are not a power of two, it requires GL_CLAMP_TO_EDGE; otherwise it won't work.
It should really report an incomplete framebuffer. It took me quite a long time to debug this problem!
Here is a related discussion:
Rendering to non-power-of-two texture on iPhone
I recently found that glDrawArrays is allocating and releasing huge amounts of memory on every frame.
I suspect that it's related to the "Shaders compiled outside of initialization" issue reported by the OpenGL profiler. It occurs on every frame! Shouldn't it happen only once and then, after the shaders are compiled, disappear?
EDIT: I also double-checked that my vertices are properly aligned, so I'm really confused about what memory the driver needs to allocate on every frame.
EDIT #2: I'm using VBOs and degenerate triangle strips to render sprites. I'm passing the geometry on every frame (GL_STREAM_DRAW).
EDIT #3:
I think I'm close to the issue but still unable to solve it. The problem disappears if I pass the same texture id value to the shader (see the source code comment). Somehow this issue is related to the fragment shader, I think.
In my sprite batch I have a list of sprites, and I render them by texture id in a FIFO queue.
Here's the source code of my sprite batch class:
void spriteBatch::renderInRange(shader& prog, int start, int count){
    int curTexture = textures[start];
    int startFrom = start;

    // Looping through all vertexes and rendering them by texture id's
    for(int i = start; i < start + count; ++i){
        if(textures[i] != curTexture || i == (start + count) - 1){
            // Problem occurs after uncommenting this line
            // prog.setUniform("texture", curTexture - 1);
            prog.setUniform("texture", 0); // if I pass the same texture id everything is OK

            int startVertex = startFrom * vertexesPerSprite;
            int cnt = ((i - startFrom) * vertexesPerSprite);

            // If the last one has the same texture we just add it
            // to the last render call
            if(i == (start + count) - 1 && textures[i] == curTexture)
                cnt = ((i + 1) - startFrom) * vertexesPerSprite;

            render(vbo, GL_TRIANGLE_STRIP, startVertex + 1, cnt - 1);

            // if the last element has a different texture
            // we need to render it separately
            if(i == (start + count) - 1 && textures[i] != curTexture){
                // prog.setUniform("texture", textures[i] - 1);
                render(vbo, GL_TRIANGLE_STRIP, (i * vertexesPerSprite) + 1, 5);
            }

            curTexture = textures[i];
            startFrom = i;
        }
    }
}
inline GLint getUniformLocation(GLuint shaderID, const string& name) {
    GLint iLocation = glGetUniformLocation(shaderID, name.data());
    if(iLocation == -1){ // shader variable not found
        stringstream errorText;
        errorText << "Uniform \"" << name << "\" was not found!";
        throw logic_error(errorText.str());
    }
    return iLocation;
}

void shader::setUniform(const string& name, const matrix& value) {
    GLint location = getUniformLocation(this->programID, name.data());
    glUniformMatrix4fv(location, 1, GL_FALSE, &(value[0]));
}

void shader::setUniform(const string& name, int value) {
    GLint iLocation = getUniformLocation(this->programID, name.data());
    //GLenum error = glGetError();
    glUniform1i(iLocation, value);
    //error = glGetError();
}
EDIT #4: I tried to profile the app on iOS 6 and an iPhone 5, and the allocations are much bigger. But the methods are different in this case. I'm attaching a new screenshot.
The issue is resolved by creating a separate shader for each texture.
It looks like a bug in the driver implementation that happens on all iOS devices (I tested on iOS 5/6). However, on higher iPhone models it's not that noticeable.
On the iPhone 4 the performance hit was very significant: from 60 FPS down to 38!
More code would help, but have you checked to see if the amount of memory involved is comparable to the amount of geometry you're updating? (although that would seem like a lot of geometry!) It looks like GL is holding your update until glDrawArrays, releasing it when it can be pulled into internal GL state.
If you can run the code in a Mac OS X app, the OpenGL Profiler tool may be able to further isolate the condition. (Look in the Xcode documentation for more info if you're not familiar with this tool.) I'd also suggest looking at texture use, given the amount of memory involved.
The easiest thing to do might be to conditionally break on malloc() for a large allocation, note the address, and examine what's been loaded there.
Try to query the texture uniform just once (in initialization) and cache it. Calling glGetUniformLocation too often in one frame will hammer performance (depending on the sprite count).
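For example, something like this (just a sketch; the cached helper and the map are not part of your code):

#include <string>
#include <unordered_map>

// Lazily caches uniform locations, keyed on program id + name so it also works
// with more than one shader program. Illustrative only.
inline GLint getUniformLocationCached(GLuint shaderID, const std::string& name) {
    static std::unordered_map<std::string, GLint> cache;
    const std::string key = std::to_string(shaderID) + ":" + name;

    auto it = cache.find(key);
    if (it != cache.end())
        return it->second;

    GLint location = glGetUniformLocation(shaderID, name.c_str());
    cache[key] = location; // a location of -1 is cached too, so a missing uniform is only looked up once
    return location;
}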
I'm in the process of moving my code from GLKit to OpenGL ES 2.0 because GLKBaseEffect leaks.
I'm making progress, but I hit a problem when I link the context and the drawable like this:
[_currentContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
This line comes from my shapes (NSObject). Shapes are allocated by a UIView which sets _eaglLayer to self.layer.
The UIView contains the following method:
+ (Class)layerClass {
    return [CAEAGLLayer class];
}
Here is my debug output:
2012-08-25 16:04:21.111 P3gameApp[11035:24903] Be layer in setup <P3BoardScene: 0xa072f70; frame = (0 0; 1024 768); layer = <CAEAGLLayer: 0xa06bef0>>
-[EAGLContext renderbufferStorage:fromDrawable:]: invalid drawable
2012-08-25 16:04:21.111 P3gameApp[11035:24903] Af layer in setup <P3BoardScene: 0xa072f70; frame = (0 0; 1024 768); layer = <CAEAGLLayer: 0xa06bef0>>
(Answered in a question edit. Converted to a community wiki answer. See What is the appropriate action when the answer to a question is added to the question itself? )
The OP wrote:
SOLVED: Error in my controller when trying to set drawableProperties. The error comes from my view:
[P3BoardScene setDrawableProperties:]: unrecognized selector sent to instance 0xdb99090
The error comes from my view again:
[P3Scene setEnableSetNeedsDisplay:]: unrecognized selector sent to instance 0x495d40
I've been trying to figure out a method to cause a GtkTreeView to redraw after I update the bound GtkListStore from a background thread created with pthreads.
Generally, the widget does not update until something obscures an existing row (even the mouse cursor).
Most of my searches for this problem turn up "your tree model doesn't/isn't generating the correct signals"...
I'm running an old Red Hat 9 with gtk+ 2.0.0, for industrial embedded applications. Most of the data comes from IPC/sockets/pipes and gets displayed by a GTK app. Unfortunately, so do CRITICAL alarms, which have a habit of not showing when they should. We will (one day) move to a current kernel, but I need to get something working with the existing software.
I've tried emitting the "row-changed" signal, tried calling gtk_widget_queue_draw, and also tried connecting to the "expose-event", where I've tried various things that don't work or segfault.
server.c

bool Server::Start()
{
    // ....
    // pthread_t _id;
    //
    pthread_create( & _id, NULL, &StaticServerThread, this );
    // ....
}

viewer.c

bool Viewer::ReadFinished( SocketArgs * args )
{
    gdk_threads_enter();

    // Populate the buffer and message
    //
    // GtkListStore *_outputStore;
    // gchar *buffer;
    // gchar *message;

    GtkTreeIter iter;
    gtk_list_store_insert_with_values( _outputStore, &iter, 0,
        0, buffer, 1, message, -1 );

    // ....

    gdk_threads_leave();
}
You can perform the updates to the list store in the main thread. For example, you can use g_idle_add() in the worker thread.
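A rough sketch of what that could look like (the RowUpdate struct and the callback name are made up for this example):

#include <gtk/gtk.h>

// Carries one row's worth of data from the worker thread to the main loop.
struct RowUpdate {
    GtkListStore *store;
    gchar *buffer;
    gchar *message;
};

// Runs in the GTK main loop, so no gdk_threads_enter()/leave() is needed here.
static gboolean insert_row_idle(gpointer data)
{
    RowUpdate *update = static_cast<RowUpdate *>(data);
    GtkTreeIter iter;

    gtk_list_store_insert_with_values(update->store, &iter, 0,
        0, update->buffer, 1, update->message, -1);

    g_free(update->buffer);
    g_free(update->message);
    delete update;

    return FALSE; // one-shot: remove the idle source
}

// In the worker thread (e.g. inside Viewer::ReadFinished), instead of touching
// the store directly:
//
//   RowUpdate *update = new RowUpdate;
//   update->store = _outputStore;
//   update->buffer = g_strdup(buffer);
//   update->message = g_strdup(message);
//   g_idle_add(insert_row_idle, update);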
I am in the process of adding drag and drop support to an existing Mono/C#/GTK# application. I was wondering whether it was possible to use RGBA transparency on the icons that appear under the mouse pointer when I start dragging an object.
So far, I have found the following:
I can set the bitmap in question by calling the Gtk.Drag.SourceSetIconPixbuf() method. However, no luck with alpha transparency: pixels that are not fully opaque become 100% transparent this way.
I also tried calling RenderPixmapAndMask() on the GdkPixbuf so that I could use Gtk.Drag.SourceSetIcon() with an RGBA colormap of my Screen. It didn't work either: whenever I started dragging, I got the following error:
[Gdk] IA__gdk_window_set_back_pixmap: assertion 'pixmap == NULL || gdk_drawable_get_depth (window) == gdk_drawable_get_depth (pixmap)' failed.
This way, the pixmap doesn't even get copied; only a white shape (presumably set by the mask argument of SourceSetIcon()) shows up when dragging.
I'd like to ask if there's a way to make these icons have alpha transparency, despite the fact that I failed to do so. In case it's impossible, answers discussing the reasons for the lack of this feature would also be helpful. Thank you.
(Compositing is - of course - enabled on my desktop (Ubuntu/10.10, Compiz/0.8.6-0ubuntu9).)
OK, I finally solved it. You should create a new Gtk.Window of POPUP type, set its Colormap to your screen's RGBA colormap, have the background erased by Cairo to a transparent color, draw whatever you'd like on it, and finally pass it on to Gtk.Drag.SetIconWidget().
Sample code (presumably you'll want to use this inside OnDragBegin, or at a point where you have a valid drag context to be passed to SetIconWidget()):
Gtk.Window window = new Gtk.Window (Gtk.WindowType.Popup);
window.Colormap = window.Screen.RgbaColormap;
window.AppPaintable = true;
window.Decorated = false;
window.Resize (/* specify width, height */);

/* The cairo context can only be created when the window is being drawn by the
 * window manager, so wrap drawing code into an ExposeEvent delegate. */
window.ExposeEvent += delegate {
    Context ctx = Gdk.CairoHelper.Create (window.GdkWindow);

    /* Erase the background */
    ctx.SetSourceRGBA (0, 0, 0, 0);
    ctx.Operator = Operator.Source;
    ctx.Paint ();

    /* Draw whatever you'd like to here, and then clean up by calling
       Dispose() on the context's target. */

    (ctx.Target as IDisposable).Dispose ();
};

Gtk.Drag.SetIconWidget(drag_context, window, 10, 10);