How big is the difference between the description language of Quartz 2D and that of OpenGL ES?
They seem similar in descriptive power... except that Quartz is mostly 2D and OpenGL is 3D out of the box (though it can be made 2D-focused).
Are the mappings from 2D Quartz to 2D OpenGL ES really that different? I'm sure there must be specific features that are handled differently on one versus the other... but differently enough to rule out writing a translator?
Does anyone with experience in both OpenGL and Quartz 2D have some insights?
Quartz and OpenGL ES are two completely different animals. While they both have a C-based API that deals with a state machine and that draws into a context, their purposes are dissimilar. In Quartz you specify lines, Bezier and quadratic curves, arcs, or rectangles, as well as fills, gradients, and shadows / glows. In OpenGL ES, you provide vertices, raster textures, and lighting information, from which a scene is generated.
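To make the contrast concrete, here is a rough sketch of drawing the same filled rectangle with each API (illustrative only; it assumes you already have a valid CGContextRef named context and a configured OpenGL ES 1.1 context):

    /* Quartz: describe the shape; the framework rasterizes it */
    CGContextSetRGBFillColor(context, 1.0f, 0.0f, 0.0f, 1.0f);
    CGContextFillRect(context, CGRectMake(10.0f, 10.0f, 100.0f, 100.0f));

    /* OpenGL ES 1.1: supply vertices; the GPU rasterizes triangles */
    const GLfloat vertices[] = {
        10.0f,  10.0f,
        110.0f, 10.0f,
        10.0f,  110.0f,
        110.0f, 110.0f,
    };
    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);

Even in this trivial case the mental models differ: one describes geometry to be painted, the other streams primitives to a rasterizer.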
They are both useful in particular cases. You might draw a 2-D static element using Quartz, into a view, layer, or texture, and then place and move that view or layer in 3-D space using Core Animation or do the same for a texture using OpenGL ES.
Rather than try to overlay one API on the other, use whichever is more appropriate for what you are doing, or look to a framework like cocos2d, which lets you build and animate 2-D scenes, or Core Animation, where you can do Quartz drawing into a layer but still use a nicely abstracted API for moving those layers around.
Related
I'm currently experimenting with OpenGL ES 1.1 on the iPhone and trying to get my head around some of the basics. So far I've managed to draw a grid of objects which are lit with one GL_LIGHT. Here is a screenshot of the current output (question to follow)...
So you can see that my test consists of a grid of about 140 cubes, some slightly elevated so I can see how the shaded areas work. Each cube is built from this model (from Blender) and has normals / texture coordinates...
What's puzzling me is why I don't get 'uniform' lighting across the entire surface. Each cube seems to be lit individually, and I can kind of understand why that would be... but is it not possible to have the light transition 'normally', like it would if you arranged this model out of real blocks and shone a light across it? I'd expect not to see a dark edge on each individual cube, but rather a smooth transition across the whole area.
(I'm still inwardly chuffed that I managed to get this far!)
Any help or explanations would be awesome.
Thanks,
Simon
The reason you don't get 'uniform' lighting is, I presume, that you are using per-vertex lighting. That is, the lighting is calculated at each vertex and then interpolated over each triangle making up the model. Since your cube has a pretty low polygon count, the transition of light across the model won't look smooth.
With OpenGL ES 1.1 there are two solutions to this: use higher-polygon-count models, or implement per-pixel (DOT3) lighting. I've not implemented the latter myself, but I have come across this problem before (my solution was to switch to OpenGL ES 2.0 and use shaders to perform per-pixel lighting).
Here is a link, which may be of use: What is DOT3 lighting?
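For reference, per-pixel DOT3 lighting in OpenGL ES 1.1 is typically set up with texture combiners. A minimal sketch, assuming you already have a tangent-space normal map bound as normalMapTexture and a normalized light vector (lightX, lightY, lightZ); I haven't shipped this myself, so treat it as a starting point:

    /* Bind the (assumed) tangent-space normal map to unit 0 */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, normalMapTexture);

    /* Encode the light direction in the primary color, remapped from [-1,1] to [0,1] */
    glColor4f(lightX * 0.5f + 0.5f, lightY * 0.5f + 0.5f, lightZ * 0.5f + 0.5f, 1.0f);

    /* Combiner: per-pixel dot product of the normal map and the primary color */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);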
All the best!
I'm not looking for a library or even open source code. I want to learn how to do this on my own.
Where do I start looking for an online tutorial, a book chapter, or other educational material on generating a polygonal model of a 3D sphere suitable for feeding to OpenGL ES on an iPhone, and then mapping the polygons to some sort of 2D map data so I can texture-map the sphere? Is there a software tool (Blender? Maya?) with a tutorial on how to generate this data? Where is the best place to start?
How about these articles?
Procedural Spheres in OpenGL ES
OpenGL ES From the Ground Up, Part 6: Textures and Texture Mapping
I've heard good stuff about "iPhone 3D Programming". Jeff LaMarche also recommends it here.
Hope this helps!
While not OpenGL ES, I once tried porting across the examples from this chapter in the Red Book where they show how to create an icosahedron and subdivide it to produce smooth spheres. I only got as far as using a simple icosahedron to crudely represent a sphere in the code for my Molecules application. Perhaps you could extend that.
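The heart of that subdivision is small. A sketch of the idea (my own names, not the Red Book's): push each edge midpoint out to the unit sphere and replace every triangle with four smaller ones:

    #include <OpenGLES/ES1/gl.h>
    #include <math.h>

    /* Assumed to exist: appends one triangle to your vertex buffer */
    void drawTriangle(const GLfloat a[3], const GLfloat b[3], const GLfloat c[3]);

    /* Push a point out onto the unit sphere */
    static void normalizeVertex(GLfloat v[3]) {
        GLfloat len = sqrtf(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        v[0] /= len; v[1] /= len; v[2] /= len;
    }

    /* Recursively split one icosahedron face into four spherical triangles */
    static void subdivide(const GLfloat a[3], const GLfloat b[3],
                          const GLfloat c[3], int depth) {
        if (depth == 0) {
            drawTriangle(a, b, c);
            return;
        }
        GLfloat ab[3], bc[3], ca[3];
        for (int i = 0; i < 3; i++) {
            ab[i] = (a[i] + b[i]) * 0.5f;
            bc[i] = (b[i] + c[i]) * 0.5f;
            ca[i] = (c[i] + a[i]) * 0.5f;
        }
        normalizeVertex(ab); normalizeVertex(bc); normalizeVertex(ca);
        subdivide(a, ab, ca, depth - 1);
        subdivide(ab, b, bc, depth - 1);
        subdivide(ca, bc, c, depth - 1);
        subdivide(ab, bc, ca, depth - 1);
    }

Call subdivide() once per icosahedron face; a depth of 2 or 3 already looks quite round.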
Apple has a Mac sample application, GLSLShowpiece, that textures a sphere in a couple of places, but they use gluSphere() to generate the sphere vertices, which is unavailable in OpenGL ES.
To be honest, I'm in the process of replacing the sphere rendering code in Molecules with a 2-D billboarding approach that uses shaders to generate the sphere coloring. This should allow for far smoother spheres without having to resort to massive amounts of geometry. See this paper for the kind of results you can produce this way.
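The idea is to draw a screen-aligned quad per sphere and let the fragment shader reconstruct the normal, discarding fragments outside the circle. A hedged sketch of such a fragment shader (not the actual Molecules code; the varying and uniform names are mine):

    /* GLSL ES fragment shader for a sphere impostor, stored as a C string */
    static const char *sphereFragmentShader =
        "precision mediump float;\n"
        "varying vec2 impostorCoord;   // in [-1, 1] across the quad\n"
        "uniform vec3 lightDirection;  // normalized, eye space\n"
        "uniform vec3 sphereColor;\n"
        "void main() {\n"
        "    float d = dot(impostorCoord, impostorCoord);\n"
        "    if (d > 1.0) discard;     // outside the sphere's silhouette\n"
        "    vec3 normal = vec3(impostorCoord, sqrt(1.0 - d));\n"
        "    float diffuse = max(dot(normal, lightDirection), 0.0);\n"
        "    gl_FragColor = vec4(sphereColor * diffuse, 1.0);\n"
        "}\n";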
I need help setting up multi-pass rendering with OpenGL ES 2.0 on the iPhone. I haven't been able to find an example which implements both rendering to a texture and multi-pass shading.
I'm looking for some instructions and sample code which implement:
First stage: Render to a texture
Second stage: Input that texture and render to screen
I have referenced Apple's OpenGL ES Programming Guide, OpenGL Shading Language (Orange Book), and O'Reilly's iPhone 3D Programming Book.
The Orange Book discusses deferred shading and provides two shader programs for first-pass and second-pass rendering, but it doesn't provide example code to set up that application or show how to communicate data between the two shaders.
Questions:
How do I render to a texture (using glDrawElements)?
How do I feed that texture into the next pass?
How do I set up two shader programs?
How do I alternate between the first-pass and second-pass programs? Do I need to attach, detach, and call glUseProgram for each pass?
How do I implement multi-pass shading overall?
I wrote a short example of doing just this (multiple render-to-texture passes on the iPhone using OpenGL ES 2.0) a few weeks ago: http://www.mat.ucsb.edu/a.forbes/blog/?p=245
Edit: this post is a bit old, and it has moved here:
http://blog.angusforbes.com/openglglsl-render-to-texture/
OK, first of all: I'm no expert on OpenGL ES 2.0. I was in much the same situation, wanting to do a multi-pass render setup in one of my first OpenGL ES applications.
I also used the Orange Book. Check Chapter 12 (Framebuffer Objects), Examples section. The first example demonstrates how to use a framebuffer to render to a texture, and then draws that texture to the screen.
Basically, using that example, I created an application that renders some geometry to a texture with one effect shader, then renders that texture to the screen, layered with some other content, using a different shader.
I'm not sure if this is the best approach, but it works for my purposes. My setup (a code sketch follows the steps below):
I create two framebuffers, the default one and an offscreen one, and the same for the renderbuffers.
I create a texture that the app will render to.
I bind the offscreen framebuffer and attach the texture to it using glFramebufferTexture2D.
My rendering:
Bind the offscreen framebuffer.
Use my first shader program.
Draw my geometry.
Bind the default framebuffer.
Use my second shader program.
Draw a fullscreen quad textured with the texture from the first pass.
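In rough OpenGL ES 2.0 calls, that setup and render loop looks something like the sketch below. Error handling is omitted, and width, height, defaultFramebuffer, the two program objects, and the draw functions are assumed to exist elsewhere:

    /* --- One-time setup: offscreen framebuffer rendering into a texture --- */
    GLuint offscreenFramebuffer, renderTexture;

    glGenTextures(1, &renderTexture);
    glBindTexture(GL_TEXTURE_2D, renderTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &offscreenFramebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, renderTexture, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle incomplete framebuffer */
    }

    /* --- Per frame --- */
    /* Pass 1: render geometry into the texture */
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(firstProgram);       /* assumed: compiled and linked elsewhere */
    drawGeometry();                   /* assumed: your glDrawElements calls */

    /* Pass 2: sample that texture while drawing a fullscreen quad to screen */
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer); /* on iOS, the
                                         framebuffer backed by your CAEAGLLayer */
    glViewport(0, 0, screenWidth, screenHeight);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(secondProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, renderTexture);
    glUniform1i(textureUniform, 0);   /* assumed: from glGetUniformLocation */
    drawFullscreenQuad();             /* assumed: quad with positions + texcoords */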
I'm working on an app that basically revolves around 2D shapes (mostly simple polygons) being dynamically drawn and animated.
I'm looking for a way to easily time my animations. It's basically just moving a vertex to a specified point in a specified time, so just interpolating floats, with all the usual easing parameters. I come from a Flash/ActionScript 3 environment, so if you're familiar with that, think Tween Classes.
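(For anyone unfamiliar with Flash, this is the kind of thing I mean; a hand-rolled sketch, with the function names made up:)

    /* Smoothstep ease-in-out: t in [0, 1] -> eased fraction in [0, 1] */
    static float easeInOut(float t) {
        return t * t * (3.0f - 2.0f * t);
    }

    /* Interpolate one float over a fixed duration; call once per frame */
    static float tween(float from, float to, float elapsed, float duration) {
        float t = elapsed / duration;
        if (t > 1.0f) t = 1.0f;
        return from + (to - from) * easeInOut(t);
    }

    /* e.g. vertexX = tween(startX, targetX, timeSinceStart, 0.5f); */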
I probably could easily be doing this with Core Animation (CABasicAnimation, etc.), but I will have up to a hundred gradient-filled shapes with varying opacity being animated dynamically, and I need good performance (60 fps would be great). So I went for OpenGL ES. Plus, I'm totally for investing time into learning something that I'll be able to reuse cross-platform.
So I know OpenGL is only for graphics rendering, and I'm not going to find any 2D animation methods built in. And I've heard that using Core Animation alongside OpenGL (if feasible at all) is not a good idea performance-wise.
But before I look deeper into interpolation algorithms to increment my vertex coordinates every frame, I just wanted to make sure I wasn't totally missing out on something much easier!?
Thanks!
I would look into the popular cocos2d library. It looks really nice; supports animation and uses OpenGL ES behind the scenes.
I've seen a lot of bandying about over what's better for 2D gaming, Quartz or OpenGL ES. Setting aside libraries like Cocos2D, I'm curious whether anyone can point to resources that teach using OpenGL ES as a 2D platform. I mean, are we really saying that learning 3D programming is worth a slight speed increase... or can OpenGL ES be learned from a purely 2D perspective?
GL is likely to give you better performance, with less CPU usage, battery drain, and so on. 2D drawing with GL is just like 3D drawing with GL, you just don't change the Z coordinate.
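For example, a minimal 2D setup in OpenGL ES 1.1 is just an orthographic projection with everything drawn at z = 0 (the 320x480 viewport here is illustrative):

    /* Map GL coordinates 1:1 onto screen points, origin at the top left */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0.0f, 320.0f, 480.0f, 0.0f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* ...then draw triangles and quads with 2D vertices; z never changes */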
That being said, it's easier to write 2D drawing code with Quartz, so you have to decide the trade-off.
Cribbed from a similar answer I provided here:
You probably mean Core Animation when you say Quartz. Quartz handles static 2-D drawing within views or layers. On the iPhone, all Quartz drawing for display is done to a Core Animation layer, either directly or through a layer-backed view. Each time this drawing is performed, the layer is sent to the GPU to be cached. This re-caching is an expensive operation, so attempting to animate something by redrawing it each frame using Quartz results in terrible performance.
However, if you can split your graphics into sprites whose content doesn't change frequently, you can achieve very good performance using Core Animation. Each one of those sprites would be hosted in a Core Animation CALayer or UIKit UIView, and then animated about the screen. Because the layers are cached on the GPU, basically as textures, they can be moved around very smoothly. I've been able to move 50 translucent layers simultaneously at 60 FPS (100 at 30 FPS) on the original iPhone (not 3G S).
You can even do some rudimentary 3-D layout and animation using Core Animation, as I show in this sample application. However, you are limited to working with flat, rectangular structures (the layers).
If you need to do true 3-D work, or want to squeeze the last bit of performance out of the device, you'll want to look at OpenGL ES. However, OpenGL ES is nowhere near as easy to work with as Core Animation, so my recommendation has been to try Core Animation first and switch to OpenGL ES only if you can't do what you want. I've used both in my applications, and I greatly prefer working with Core Animation.