I am developing an OpenGL ES application for the iPhone, a port of a 2D-oriented application from another platform. I chose to render the graphics with OpenGL ES for performance reasons. However, the main application runs on a separate thread (due to the original application's design), so from within my app delegate I do this:
- (void) applicationDidFinishLaunching:(UIApplication *)application {
    CGRect rect = [[UIScreen mainScreen] bounds];
    glView = [[EAGLView alloc] initWithFrame:rect];
    [window addSubview:glView];

    // launch main application in separate thread
    [NSThread detachNewThreadSelector:@selector(applicationMainThread) toTarget:self withObject:nil];
}
However, I notice that any calls within applicationMainThread that try to render something to the screen do not display anything until that thread terminates.
I set up the actual OpenGL ES context on the child application thread, not the UI thread. If I do this:
- (void) applicationMainThread {
    CGRect rect = [[UIScreen mainScreen] bounds];

    [glView createContext]; // creates the OpenGL ES context

    // Initialize OpenGL states
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glMatrixMode(GL_PROJECTION);
    glOrthof(0, rect.size.width, 0, rect.size.height, -1, 1);
    glMatrixMode(GL_MODELVIEW);

    Texture2D *tex = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"iphone_default.png"]];
    glBindTexture(GL_TEXTURE_2D, [tex name]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glDisable(GL_BLEND);
    [tex drawInRect:[glView bounds]];
    glEnable(GL_BLEND);

    [tex release];
    [glView drawView];
}
Then the texture is updated to the screen pretty much immediately, as I would expect.
However, if after the [glView drawView] call I add this one line:
[NSThread sleepForTimeInterval:5.0]; // sleep for 5 seconds
Then the screen is only updated after the 5-second delay completes. This leads me to believe that the screen only updates when the thread itself terminates (I need to do more testing to confirm). This means that when I substitute the actual application code, which does multiple screen updates, none of the updates actually happen (leaving a white screen) until the application thread exits. Not exactly what I wanted!
So - is there any way I can get around this, or have I done something obviously wrong?
You have to be doing something obviously wrong, as multithreaded OpenGL rendering works just fine on iPhone. I can’t tell you what’s wrong with your code, but I can show you how we do it. It took me several iterations to get there, because the sample OpenGL code from Apple mashes everything together.
In the end I came up with three classes: Stage, Framebuffer and GLView. The Stage contains the game rendering logic and knows how to render itself into a framebuffer. The Framebuffer class is a wrapper around the OpenGL framebuffer and renders into a renderbuffer or an EAGLDrawable. GLView is the drawable the framebuffer is rendered to; it contains all the OpenGL setup. In the application entry point I create an OpenGL context, a GLView, a Framebuffer that renders to this GLView and a Stage that renders using the framebuffer. The Stage update method runs in a separate thread and looks a bit like this:
- (void) mainLoop
{
    [fbuffer bind];
    [currentScene visit];
    [[EAGLContext currentContext]
        presentRenderbuffer:GL_RENDERBUFFER_OES];
    [fbuffer unbind];
}
In plain English: it binds the framebuffer, walks the game object graph (i.e. renders the scene), presents the framebuffer contents on the screen and unbinds the framebuffer. The presentRenderbuffer call is a bit misplaced; it belongs somewhere higher in the design, since the Stage should just render into the framebuffer and let you do whatever you want with it afterwards. But I could not find the right place, so I just left the call there.
Otherwise I am pretty much content with the design: all the classes are simple, coherent and testable. There’s also a Scheduler class that creates the thread and calls Stage’s mainLoop as fast as possible:
- (void) loop
{
    [EAGLContext setCurrentContext:context];
    while (running)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        @synchronized (context)
        {
            [stage mainLoop];
        }
        [fpsCounter update];
        [pool release];
    }
}

- (void) run
{
    NSAssert(!running, @"Scheduler already running.");
    running = YES;
    [fpsCounter reset];
    context = [EAGLContext currentContext];
    [NSThread detachNewThreadSelector:@selector(loop)
                             toTarget:self withObject:nil];
}
The game update thread is synchronized using the OpenGL context so that we can be sure that we don’t corrupt the context in the main thread. (Simple rule: All drawing has to be done in the game update loop or synchronized by the GL context.)
Hope that helps.
Seems that I missed the bleeding obvious...
glViewport(0, 0, rect.size.width, rect.size.height);
glScissor(0, 0, rect.size.width, rect.size.height);
... and the updates appear as they should. I think what happened is this: the viewport and scissor were set on the parent thread's context, but not on the child thread's context (which used a sharegroup), so only when the child thread exited did the view update with the proper viewport, finally displaying my updates. Or something like that! (I'm still an OpenGL ES newbie!)
Related
I'm using a GLKViewController with a GLKView. On iOS 5 it works fine, but after the iOS 6 update glReadPixels stopped working and returns only black pixels.
I read something about preserveBackBuffer, but no success yet.
My setup of GLKView:
_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (!_context) {
    DLog(@"Failed to create ES context");
}
GLKView *view = (GLKView *)self.view;
view.context = _context;
A possible path to a solution? (I tried this, but it still doesn't work.)
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)view.layer;
eaglLayer.drawableProperties = @{kEAGLDrawablePropertyRetainedBacking : @(YES)};
I'm using glReadPixels to record the camera after shader processing, from within:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
Any help is appreciated. Thanks!
According to the docs, you should use GLKView's snapshot method, not glReadPixels, if you need to read the pixels. From the docs on snapshot:
Discussion:
When this method is called, the view sets up a drawing environment and calls your drawing method. However, instead of presenting the view’s contents on screen, they are returned to your application as an image instead. This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.
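For reference, the call itself is a one-liner. A minimal sketch, assuming glkView is your GLKView instance:

```objc
// Renders the view's contents and returns them as a UIImage,
// without reading the underlying framebuffer directly.
UIImage *image = [glkView snapshot];
```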
For my non-app store app, I've been using the private framework Core Surface to draw directly to the screen of the iPhone. However, this can be rather slow on older devices because it heavily uses the CPU to do its drawing. To fix this, I've decided to try to use OpenGLES to render the pixels to the screen.
Currently (and I have no way of changing this), I have a reference to an unsigned short * variable called BaseAddress, and essential third-party code accesses BaseAddress and updates it with the new pixel data.
I've set up a GLKViewController, and implemented the viewDidLoad as follows:
- (void)viewDidLoad {
    [super viewDidLoad];

    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }
    [EAGLContext setCurrentContext:self.context];

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;

    glGenBuffers(1, &screenBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, screenBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(BaseAddress), BaseAddress, GL_DYNAMIC_DRAW);
}
where screenBuffer is an instance variable. In the glkView:drawInRect: method I have the following:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glDrawElements(GL_ARRAY_BUFFER, sizeof(BaseAddress)/sizeof(BaseAddress[0]), GL_UNSIGNED_SHORT, BaseAddress);
}
Unfortunately, only a black screen appears when I run the app. If I go back to using Core Surface, the app works fine. So basically, how can I draw the pixels to the screen using OpenGLES?
I think it might be best to use a texture, and for your case I'd try to find an older ES1 template for iOS devices. Basically what you need is a framebuffer and a color buffer made from your UIView's layer:
glGenFramebuffers(1, &viewFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);

glGenRenderbuffers(1, &viewColorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewColorBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewColorBuffer);

glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
As for the projection matrix, I suggest glOrthof(0.0f, backingWidth, backingHeight, 0.0f, -1.0f, 1.0f); that makes your GL coordinates the same as your view coordinates.
Next, during initialization, generate a texture, bind it, give it power-of-two dimensions (textureWidth = 1; while (textureWidth < backingWidth) textureWidth <<= 1;) and pass NULL as the data pointer (all in glTexImage2D).
Then generate a vertex array for a quad the same size as the texture, from (0, 0) to (textureWidth, textureHeight), with texture coordinates from (0, 0) to (1, 1).
When the data arrives at your pointer and is ready to be pushed to the texture, use glTexSubImage2D to update it. You can update only a segment of the texture if you only received data for that part, or update the whole screen with the rect (0, 0, screenWidth, screenHeight).
Now just draw those two triangles with your texture.
Note that there is a limit on texture size: 2048 (1<<11) on most active iOS devices, 4096 (1<<12) on the iPad 3.
Do not forget to set texture parameters (glTexParameteri) when creating the texture.
Also check for a Retina display and set the contentsScale of the CAEAGLLayer to 2 if needed.
I am using CATiledLayer as the backing layer for my UIView, which I have put inside a UIScrollView. In the init method of my view I create a CGPathRef that draws a simple line. When I try to draw this path inside drawLayer:inContext: it occasionally (rarely) crashes with EXC_BAD_ACCESS while I am scrolling / zooming.
The code is very simple, I am using only standard CG* functions:
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
        tiledLayer.levelsOfDetail = 10;
        tiledLayer.levelsOfDetailBias = 5;
        tiledLayer.tileSize = CGSizeMake(512.0, 512.0);

        CGMutablePathRef mutablePath = CGPathCreateMutable();
        CGPathMoveToPoint(mutablePath, NULL, 0, 0);
        CGPathAddLineToPoint(mutablePath, NULL, 700, 700);
        path = CGPathCreateCopy(mutablePath);
        CGPathRelease(mutablePath);
    }
    return self;
}

+ (Class) layerClass {
    return [CATiledLayer class];
}

- (void) drawRect:(CGRect)rect {
}

- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGContextSetRGBFillColor(ctx, 1, 1, 1, 1);
    CGContextFillRect(ctx, self.bounds);
    CGContextSetLineWidth(ctx, 5);
    CGContextAddPath(ctx, path);
    CGContextDrawPath(ctx, kCGPathStroke);
}

- (void)dealloc {
    CGPathRelease(path); // balance CGPathCreateCopy from init
    [super dealloc];
}
UPDATE:
I have noticed that this problem exists only on iOS 5; it works fine on 4.3.
I ran into a similar issue when attempting to draw cached CGPath objects on a custom MKOverlayView.
The crash may occur because a CGPath can't safely be drawn on multiple threads simultaneously: it's an opaque class which (as specified in the documentation) contains a pointer to the current point in its points array. Two or more threads iterating over this array simultaneously while drawing it could lead to undefined behavior and a crash.
I worked around this by copying the CGPath object in each drawing thread (guarded by a mutex to prevent an incomplete copy):
//lock the object's cached data
pthread_mutex_lock(&cachedPathMutex);
//get a handle on the previously-generated CGPath (myObject exists on the main thread)
CGPathRef myPath = CGPathCreateCopy(myObject.cachedPath);
//unlock the mutex once the copy finishes
pthread_mutex_unlock(&cachedPathMutex);
// all drawing code here
CGContextAddPath(context, myPath);
...
...
CGPathRelease(myPath);
If you're concerned about the memory overhead of doing a copy on each thread, you can also work directly on the cached CGPath objects, but the mutex will have to remain locked during the whole drawing process (which kind of defeats the purpose of threaded drawing):
//lock the object's cached data
pthread_mutex_lock(&cachedPathMutex);
//get a handle on the previously-generated CGPath (myObject exists on the main thread)
CGPathRef myPath = myObject.cachedPath;
// draw the path in the current context
CGContextAddPath(context, myPath);
...
...
//and unlock the mutex
pthread_mutex_unlock(&cachedPathMutex);
I'll qualify my answer by saying that I'm not an expert on multithreaded drawing with Quartz, only that this approach solved the crashes in my scenario. Good luck!
UPDATE:
I revisited this code now that iOS 5.1.0 is out and it looks like the root cause of the issue may have actually been a bug in Quartz in iOS 5.0.x. When testing on iOS 5.1.0 with the CGPathCreateCopy() and mutex calls removed, I'm seeing none of the crashes experienced on iOS 5.0.x.
//get a handle on the previously-generated CGPath (myObject exists on the main thread)
CGPathRef myPath = myObject.cachedPath;
// all drawing code here
CGContextAddPath(context, myPath);
...
...
//drawing finished
Since chances are we'll be supporting iOS 5.0.x for a while, it won't hurt to keep the mutex in your code (other than a slight performance hit), or simply run a version check before drawing.
I have a problem with OpenGL ES 1.1 on the iPhone. I have written a C++ engine that does all the OpenGL work, and a view that shows the rendered content. The problem is that sometimes it works fine and sometimes (most of the time) it shows a messed-up view. By messed up I mean that objects that do not move appear in different locations, rotated or stretched; other parts of the scene are okay, or invisible. There is no user interaction and no FPS to speak of (it's just one frame when it breaks). I thought it might be because my depth buffer is misconfigured, but the overall buffer setup may be bad. Anyway, these are the relevant parts of my code.
I have a view that initializes like this:
self = [super initWithFrame:frame];
if (self) {
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *) super.layer;
    eaglLayer.opaque = YES;

    m_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    if (!m_context || ![EAGLContext setCurrentContext:m_context]) {
        [self release];
        return nil;
    }

    cplusplusEngine = CreateRenderer();
    [m_context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:eaglLayer];
    cplusplusEngine->Initialize(CGRectGetWidth(frame), CGRectGetHeight(frame));

    //[self drawView: nil];
    //m_timestamp = CACurrentMediaTime();

    CADisplayLink *displayLink;
    displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(drawView:)];
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    // NB: frameInterval takes an integer >= 1; the original 1/45 is integer
    // division and evaluates to 0, which is invalid.
    [displayLink setFrameInterval:1];

    [self loadUpTextures];
}
return self;
The drawView method looks like this:
GLint a = cplusplusEngine->Render();
[m_context presentRenderbuffer:GL_RENDERBUFFER_OES];
Now I create the buffer and present it. I also create buffers in the engine like this:
int widthB, heightB;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &widthB);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &heightB);
glViewport(0, 0, widthB, heightB);

// Create a depth buffer that has the same size as the color buffer.
glGenRenderbuffersOES(1, &m_depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, widthB, heightB);

// Create the framebuffer object.
GLuint framebuffer;
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, m_colorRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, m_depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);
Every frame I clear the color and depth buffers. Instruments gives me a message for the presentRenderbuffer line in drawView: "OpenGL ES performed an unnecessary logical buffer store operation. This is typically caused by not clearing buffers at the start of the rendering loop and not discarding buffers at the end of the rendering loop. If your application clears the depth buffer at the beginning of each frame, it should discard the depth buffer at the end of each frame. Please see the EXT_discard_framebuffer extension for more details." I am trying hard to solve this but cannot find the solution. There may be a few places in my texture handling where this could be happening. It would help to at least find out why OpenGL might draw garbage.
P.S. I load textures in that view and set them in the engine like this: engineTexture[index] = viewsTextureValueAt[index]; That just copies the GLuint texture name from the view to the engine. Can I do that? It works, but I don't know whether this is the cause. I get errors even if I comment out all texture usage, though.
I managed to work this out myself. My buffers and textures were all fine. The error was a common newbie mistake. I use quite a few variables to position and align my scene. In Objective-C, leaving them undefined happened to work because instance variables are zero-initialized. In my C++ engine, however, uninitialized variables get garbage values, which made my application break in random ways. For example, my button alignment array was set only for the last four buttons; the first one (position 0) was left undefined, which is why that button flew off somewhere every time I launched. One run the value was over 1,700,000, another run around -0.000056, and so on.
I have an app that uses OpenGL-ES and an EAGLContext within a UIView - very much like Apple's GLPaint sample code app.
It might be significant that I see this bug on my iPhone 4 but not on my iPad.
Mostly, this works very well. However, I am getting GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_OES from glCheckFramebufferStatusOES() within the createFrameBuffer method. The reason is that the backingWidth and backingHeight are both 0.
I am trying to understand the relation between self.layer and its size (which is not (0, 0)) and the values for backingWidth and backingHeight. My UIView and its CALayer both have the 'correct' size, while glGetRenderbufferParameterivOES() returns 0 for GL_RENDERBUFFER_WIDTH_OES and GL_RENDERBUFFER_HEIGHT_OES.
Here is my createFrameBuffer method - which works much of the time.
- (BOOL)createFramebuffer
{
    // Generate IDs for a framebuffer object and a color renderbuffer
    glGenFramebuffersOES(1, &viewFramebuffer);
    glGenRenderbuffersOES(1, &viewRenderbuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // This call associates the storage for the current render buffer with the
    // EAGLDrawable (our CAEAGLLayer), allowing us to draw into a buffer that
    // will later be rendered to screen wherever the layer is (which
    // corresponds with our view).
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);

    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    DLog(@" backing size = (%d, %d)", backingWidth, backingHeight);

    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        DLog(@"Error. glError: 0x%04X", err);

    // For this sample, we also need a depth buffer, so we'll create and
    // attach one via another renderbuffer.
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    {
        NSLog(@"failed to make complete framebuffer object 0x%X", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }

    return YES;
}
When backingWidth and backingHeight are non-zero, then there is no error returned from glCheckFramebufferStatusOES().
I had this same problem. For me the fix was this: in Apple's OpenGL sample code from last year, the renderbuffer is rebuilt on every layoutSubviews call. If you create an iPhone OpenGL template project now, you will see that layoutSubviews only destroys the renderbuffer; then on every draw, if the renderbuffer is nil, it is created. This is better because, by the time you are about to draw, all the CALayers and so on should be set up and ready to go.
I think the renderbuffer in my case was being built while the EAGLView's layer was not serviceable, i.e. in some torn-down state. In any case, when I changed my code to match, it worked.
Also there are fewer calls to this code, which is likely faster. On startup there is a lot of scene loading and moving about, which generates half a dozen layoutSubviews calls in my app.
Since the comments in Apple's code tend to be few and far between, the fact that there is one in layoutSubviews is significant:
// The framebuffer will be re-created at the beginning of the next
// setFramebuffer method call.
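The lazy-creation pattern described above might look like this; a rough sketch with assumed method and ivar names (drawView, createFramebuffer, viewFramebuffer), not the actual template code:

```objc
// Sketch: create the framebuffer lazily at draw time, after
// layoutSubviews has destroyed it (names are assumptions).
- (void)drawView
{
    [EAGLContext setCurrentContext:context];
    if (viewFramebuffer == 0) {
        [self createFramebuffer];   // layer is now ready to provide storage
    }
    // ... bind, render, and presentRenderbuffer as usual ...
}
```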
--Tom
I had this same problem too, using Apple's OpenGL ES sample code that does destroyFramebuffer, createFramebuffer and then drawView from the layoutSubviews function.
What you want to do is create the framebuffer in the drawView call as Tom says above; additionally, you also want to defer the call to drawView until the layoutSubviews function returns. The way I did this was:
- (void) layoutSubviews
{
    [EAGLContext setCurrentContext:context];
    [self destroyFramebuffer];

    // Create the framebuffer in drawView instead as needed, as before create
    // would occasionally happen when the view wasn't serviceable (?) and
    // cause a crash. Additionally, send the drawView call to the main UI
    // thread (a la PostMessage in Win32) so that it is deferred until this
    // function returns and the message loop has a chance to process other
    // stuff, etc., so the EAGLView will be ready to use when
    // createFramebuffer is finally called and the
    // glGetRenderbufferParameterivOES calls to get the backing width and
    // height for the render buffer will always work (occasionally I would
    // see them come back as zero on my old first-gen phone, and this
    // crashes OpenGL.)
    //
    // Also, using the original method, I would see memory warnings in the
    // debugger console window with my iPad when rotating (not all the time,
    // but pretty frequently.) These seem to have gone away using this new
    // deferred method...
    [self performSelectorOnMainThread:@selector(drawView)
                           withObject:nil
                        waitUntilDone:NO];
}
Ross