I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it inside a historic building or while visiting an archaeological dig. There's no need for a Doom-like first-person view: just a skybox. The application will have a list of points of interest (POIs), and every POI will have its own skybox.
I thought I could use OpenGL ES to create textured skyboxes that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where should I go to learn how to develop this? Do I need to buy a book? If so, which one?
I have only moderate Objective-C and Cocoa Touch skills, since I've built just one application, for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to get two-axis rotation (first-person shooter or Google Street View style) of the scene according to member variables, and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane (there's a sketch of both of these just after this list)
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters so that only one level of detail is supplied for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
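To make the projection and model-view items concrete, here's a minimal sketch of the kind of setup involved under OpenGL ES 1.x. The field of view, clip distances and the pitchDegrees/yawDegrees member variables are illustrative choices of mine, not values copied from the project:

    - (void)setupCamera
    {
        // Perspective projection with the view's aspect ratio. ES 1.x has no
        // gluPerspective, so derive the frustum from a vertical field of view.
        CGFloat aspect = self.view.bounds.size.width / self.view.bounds.size.height;
        GLfloat fovY = 60.0f * M_PI / 180.0f;
        GLfloat near = 0.1f, far = 10.0f;
        GLfloat top = near * tanf(fovY * 0.5f);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustumf(-top * aspect, top * aspect, -top, top, near, far);

        // Two-axis, street-view style rotation driven by member variables the
        // touch handler sets; keep the cube big enough to clear the near plane.
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f);
        glRotatef(yawDegrees,   0.0f, 1.0f, 0.0f);
    }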
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded in viewDidLoad and released in viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
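As a sketch, assuming a hypothetical loadTextures method and a textureNames array holding the six texture names:

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        [self loadTextures];   // create the six skybox textures
    }

    - (void)viewDidUnload
    {
        glDeleteTextures(6, textureNames);   // hand the GL resources back
        [super viewDidUnload];
    }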
The main observations are that, beyond knowing the Objective-C messaging mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being comfortable in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious, but it's all just reading the docs to figure out how each type of thing relates to the next; none of it is really confusing. The essential chain is: create a data provider for the PNG data, create an image from that data provider, create a bitmap context with memory that you've allocated yourself, draw the image into the context, and then release everything except the memory you allocated yourself, which is left holding the result. That result can be uploaded directly to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods for pushing things into OpenGL.
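A condensed sketch of that route, with error handling omitted and the method name my own invention; it assumes the PNG lives in the application bundle:

    - (GLubyte *)pixelDataForPNG:(NSString *)name width:(size_t *)w height:(size_t *)h
    {
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];

        // Data provider -> image, exactly as described above.
        CGDataProviderRef provider = CGDataProviderCreateWithFilename([path fileSystemRepresentation]);
        CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, NO, kCGRenderingIntentDefault);

        *w = CGImageGetWidth(image);
        *h = CGImageGetHeight(image);

        // Allocate the RGBA storage ourselves so it survives the cleanup below.
        GLubyte *pixels = (GLubyte *)malloc(*w * *h * 4);
        CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixels, *w, *h, 8, *w * 4,
                                   colourSpace, kCGImageAlphaPremultipliedLast);

        // Draw the image into our buffer, then release everything but the buffer.
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, *w, *h), image);
        CGContextRelease(context);
        CGColorSpaceRelease(colourSpace);
        CGImageRelease(image);
        CGDataProviderRelease(provider);

        return pixels;   // caller frees; ready for glTexImage2D as GL_RGBA
    }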
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than making an intuitive leap.
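For illustration, that amounts to something like the following for each face, assuming 512x512 RGBA pixel data like that produced above:

    GLuint textureName;
    glGenTextures(1, &textureName);
    glBindTexture(GL_TEXTURE_2D, textureName);

    // Only one mip level is supplied, so the minification filter must not
    // expect a mipmap chain; clamping stops the face edges from bleeding.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);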
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3D cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
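As a sketch of those three arrays, here's what a single face might look like; the full skybox repeats the pattern for all six faces:

    static const GLfloat vertices[] = {   // one face, four corners
        -1.0f, -1.0f, -1.0f,
         1.0f, -1.0f, -1.0f,
         1.0f,  1.0f, -1.0f,
        -1.0f,  1.0f, -1.0f,
    };
    static const GLfloat texCoords[] = {  // one texture coordinate per vertex
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 0.0f,
    };
    static const GLubyte indices[] = {    // two triangles per face
        0, 1, 2,
        0, 2, 3,
    };

    // With the face's texture bound, drawing is then:
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);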
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
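The change itself is tiny: where the template tries to create an ES 2.x context first and falls back, you just ask for ES 1.x directly:

    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    [EAGLContext setCurrentContext:context];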
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
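A sketch of that touch handling, with degreesPerPoint, yawVelocity and pitchVelocity as hypothetical names for the member variables involved:

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        UITouch *touch = [touches anyObject];
        CGPoint now = [touch locationInView:self.view];
        CGPoint before = [touch previousLocationInView:self.view];

        // Convert the finger movement into angular velocity.
        yawVelocity   = (now.x - before.x) * degreesPerPoint;
        pitchVelocity = (now.y - before.y) * degreesPerPoint;
    }

    - (void)update   // called once per frame by a timer or display link
    {
        yawDegrees   += yawVelocity;
        pitchDegrees += pitchVelocity;

        // Exponential decay after the finger lifts provides the inertia.
        yawVelocity   *= 0.95f;
        pitchVelocity *= 0.95f;
    }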
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skyboxes (and much more) out of the box, and should be pretty straightforward to use.
I think you can do this. Here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems: look at Stack Overflow questions 2859722 and 2297564 (I would post direct links, but Stack Overflow won't let me).
Some programs and tips to help make the textures: Spacescape.
There are some great OpenGL tutorials at nehe.gamedev.net. They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported them to the phone as well; I just can't find those ports now.
I'm completely new to OpenGL, so sorry if this is a silly question. Also, I have no idea if it makes a difference, but just in case: I'm using OpenGL ES 1.1.
Currently I'm drawing sprites in order of texture, as I've read it's better for performance (which makes sense). But now I'm wondering whether that was the right approach, because I need certain sprites to be in front of others regardless of texture.
As far as I'm aware, my options for z-ordering would be either to enable the depth buffer and use that, or to switch the drawing order so the sprites are drawn in the order of a z value.
I've read that the depth buffer can be a performance hit, but so would changing the order. Which should I do?
The short answer is, sort the sprites.
It sounds like you're creating something that's really 2D-based. While a z-buffer can be a very useful tool, it can also be an impressive performance hit if the hardware doesn't support it well, and if you're not actually using 3D objects that may intersect one another, it doesn't make a lot of sense to me.
In addition, if you have any sprites that are partially transparent (pixels with an alpha value that isn't 0 or 255, or 0.0 or 1.0 if using floating point), then you have to sort anyway.
As a side note, I believe the performance lost when changing sprites only occurs when switching textures, and only rarely. One way to mitigate this is to pack as many different sprites as you can into one image, on a grid, and use small pieces of that image as your sprites.
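To make the suggestion concrete, here's a minimal sketch of the sort order: back-to-front by z first, then by texture within equal z values so that texture switches stay rare. The Sprite struct and field names are hypothetical:

    #include <stdlib.h>

    typedef struct {
        float  z;        // depth used for back-to-front ordering
        GLuint texture;  // OpenGL texture name
    } Sprite;

    static int compareSprites(const void *a, const void *b)
    {
        const Sprite *sa = a, *sb = b;
        if (sa->z != sb->z)
            return (sa->z < sb->z) ? -1 : 1;             // back to front
        if (sa->texture != sb->texture)
            return (sa->texture < sb->texture) ? -1 : 1; // group by texture
        return 0;
    }

    // Before drawing each frame (or whenever z values change):
    // qsort(sprites, spriteCount, sizeof(Sprite), compareSprites);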
If I were interested in getting into game development on the iPhone/iPad, are there any suggestions on which technologies to start looking into? In fact, just a simple bullet point on each of these technologies highlighting how it fits into typical game development would be great, as the reason for this question is that I don't understand how they fit together. So, roughly, where do these fit in:
UIKit
Core Animation
QuartzCore
OpenGL
cocos2D
I was getting the impression that cocos2D would be the way to go, and it seems to be a simpler wrapper around OpenGL? Not sure where that leaves UIKit vs Core Animation vs QuartzCore, then.
OK, I'll try to make a rather lengthy answer short: it depends.
Now for a few longer thoughts and explanations - this is long. If you won't read it to the end, make sure you read the last few sentences.
First I need to clarify that the "technologies" or APIs you have listed are mostly concerned solely with graphics (except UIKit, which also handles input), and that is fine as long as you understand that graphics is only a part (in my experience, a minor one) of a whole game.
Besides the graphics, there will be quite a bit more to handle when writing a real game. In order not to get away from the question too much, let me just throw in a few buzzwords: game logic, AI, resource management, physics, interaction, networking (possibly), sound, etc.
My recommendation is to get a good book on the topic and dive in [1,2].
Now that I skimmed that, back to your question:
Basically, OpenGL and QuartzCore are the core technologies for getting something drawn on the screen. All the other technologies are built on top of them. So in theory, if you want to go that way, you can implement everything related to graphics output with OpenGL or QuartzCore alone (though you will have to account for the fact that QuartzCore is 2D drawing only, whereas OpenGL also supports 3D, with 2D really just a special case of 3D). I actually believe that Quartz is itself built on top of OpenGL, but I'm not sure about that.
UIKit is built on top of those, and using it does basically two things: a) avoids re-inventing the wheel, and b) makes creating user interfaces easier, faster, and more robust. Of course, you could draw every button yourself by specifying the coordinates of a quad and sending that to OpenGL for processing; then you would have to test every user input against your button's bounds and call an associated handler. However, just creating a UIButton instance (maybe even in IB) and specifying an on-click handler removes quite a lot of work.
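For comparison, the UIKit version of that button really is just a few lines; startTapped: here is a hypothetical handler:

    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    button.frame = CGRectMake(20.0f, 20.0f, 120.0f, 44.0f);
    [button setTitle:@"Start" forState:UIControlStateNormal];
    [button addTarget:self action:@selector(startTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];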
Bottom line: UIKit is perfectly suited to creating user interfaces. Plus, it can be used together with OpenGL and QuartzCore. I use it for the user interface on iOS even in games.
Regarding Cocos2D: this is a "game engine" built on top of OpenGL. As such, it includes quite a lot of the stuff you would otherwise have to handle yourself when rolling your own OpenGL-based engine: scene management, sprites, texturing, meshes, resources, etc. I believe it also has hooks into physics and collision-detection libraries. Understanding what actually happens under the hood, however, requires learning how computer graphics and games work [1,2,3].
Bottom line: you would use Cocos2D for your (2D) game graphics and UIKit to do the user interface.
Core Animation, then, is for animating Core Graphics content. It provides an easy interface, e.g. for smoothly sliding or rotating things around, encapsulating all the required calculations (interpolation, redrawing, etc.). It will not work with OpenGL directly, but I think Cocos2D has a wrapper for that sort of thing too.
Bottom line: Use for fancy user interface animations.
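For example, spinning a view around once takes only a handful of lines, with Core Animation doing all the interpolation and redrawing itself (myView is a placeholder):

    CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
    spin.fromValue = [NSNumber numberWithFloat:0.0f];
    spin.toValue   = [NSNumber numberWithFloat:2.0f * M_PI];
    spin.duration  = 1.0;
    [myView.layer addAnimation:spin forKey:@"spin"];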
Having said all that, and having merely scratched the surface, I don't want to frighten you off.
For your particular game (and every game is special) you might well get away with just UIKit and Core Animation for the graphics, and it might well turn out a blockbuster. They allow a lot of drawing through an easy-to-use interface, they don't require too much background knowledge, and they are sufficiently performant for small projects.
If you have a great game idea, use the simplest graphics solution that will satisfy you and your players, and realize it. Once you've got it going, you can still improve it if necessary.
[1] http://www.mcshaffry.com/GameCode/
[2] http://www.gameenginebook.com/
[3] http://www.opengl.org/documentation/red_book/
I would like to know what kind of advantages I get from using Core Graphics instead of OpenGL ES. My main question is based on these points:
Creating simple View animations.
Creating some visual appealing objects (Graphics like Core Plot for instance, Animated Objects, etc).
Time consuming (both learning and implementing)
Simple 2D Games
Complex 2D Games
3D Games
Code maintenance and also cleaner code.
Easier integration with other UI elements.
Thanks.
First, I want to clear up a little terminology here. When people talk about Core Graphics, they generally are referring to Quartz 2D drawing, which is a 2-D vector-based drawing API. It is used to draw out vector elements either to the screen or to offscreen contexts like PDFs. Core Animation is responsible for animation, layout, and some limited 3-D effects involving rectangular layers and UI elements. OpenGL ES is a lower-level API for talking with the graphics hardware on iOS devices for both 2-D and 3-D drawing.
You're asking a lot in your question, and the judgment on what's best in each scenario is subjective and completely up to the developer and their particular needs. I can, however, provide a few general tips.
In general, a recommendation you'll see in Apple's documentation and in presentations by engineers is that you're best off using the highest level of abstraction that solves your particular problem.
If you need to just draw a 2-D user interface, the first thing you should try is to implement this using Apple's provided UIKit elements. If they don't have the capability you need, make custom UIViews. If you are designing Mac-iOS cross-platform code (like in the Core Plot framework), you might drop down to using custom Core Animation CALayers. Each step down in this process requires you to write more code to handle things that the level above did for you.
You can do a surprising amount of stuff with Core Animation, with pretty good performance. This isn't just limited to 2-D animations, but can extend into some simple 3-D work as well.
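As a small example of that simple 3-D work: a perspective tilt can be applied to an ordinary layer with no OpenGL involved at all (myView is a placeholder):

    CATransform3D tilt = CATransform3DIdentity;
    tilt.m34 = -1.0f / 500.0f;                     // adds the perspective term
    tilt = CATransform3DRotate(tilt, M_PI / 6.0f,  // 30 degrees about the y axis
                               0.0f, 1.0f, 0.0f);
    myView.layer.transform = tilt;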
OpenGL ES is underneath the drawing of everything you see on the screen for an iOS device, although this is not exposed to you. As such, it provides the least abstraction for onscreen rendering, and requires you to write the most code to get something done. However, it can be necessary in situations where you want to extract the most performance from 2-D display (say, in an action game) or to render true 3-D objects and environments.
Again, I tend to recommend that people start at the highest level of abstraction when writing an application, and only drop down when they find that they cannot do something or the performance is not within the specification they are trying to hit. Fewer lines of code makes applications easier to write, debug, and maintain.
That said, there are some nice frameworks that have developed around abstracting away OpenGL ES, such as cocos2D and Unity 3D, which might make working with OpenGL ES easier in many situations. For each case, you'll need to evaluate what makes sense for the particular needs of your application.
Basically, use OpenGL if you are making a game. Otherwise, use CoreGraphics. CoreGraphics lets you do simple things embedded in your normal UI code.
Creating simple View animations.
-> CG
Creating some visual appealing objects (Graphics like Core Plot for instance, Animated Objects, etc).
-> CG
Time consuming (both learning and implementing)
-> OpenGL and CG are both kind of tough at first.
Simple 2D Games
-> OpenGL
Complex 2D Games
-> OpenGL
3D Games
-> OpenGL
Code maintenance and also cleaner code.
-> Irrelevant
I have a 3D CAD file of a set of products. I want to create a viewer so that the user can freely rotate the object in 3D.
How would I best go about this?
1) I had thought about exporting a series of 360-degree images, one every 30 degrees around the object, but that would be around 360 images per product. Then I'd write the code to handle the matrix required to rotate the object. Seems very excessive, but doable.
2) OpenGL - I have never done any 3d animation using this, though.
We are using LightWave 3D, if that helps.
I'd recommend going with the 3-D rendering route, even though it might require more upfront work than the multiple sliced images approach. It will provide much greater flexibility over the long run, and I think you'll be able to generate a more pleasing experience in the end (small application binary size, smoother rotation, etc.). Also, once you have the display code done, you'll be able to pull in arbitrary models to add on to the ones you started with, and make tweaks to those models more easily.
This question points out a number of ways that you might be able to import LightWave models into formats usable by an OpenGL ES application. It looks like you'll probably need to pass through Blender or another intermediary to accomplish this.
Once you have the model in a form that you can work with, you can build off of several open source 3-D rendering applications for the iPhone / iPad, such as my Molecules application. My application is built for displaying 3-D molecular structures, but people have modified it to support rendering other models for their own needs, so I know that's possible. I go into detail on how this application works in the video for the OpenGL ES session of my class on iTunes U.
OpenGL ES may seem intimidating at first, but it only took me three weeks of nights-and-weekends development to build the initial version of Molecules, and I had no real OpenGL experience before starting that project. There are many great resources out there now, so it's easier than ever to get started.
I'm working on an app that basically revolves around 2D shapes (mostly simple polygons) being dynamically drawn and animated.
I'm looking for a way to easily time my animations. It's basically just moving a vertex to a specified point in a specified time, so just interpolating floats, with all the usual easing parameters. I come from a Flash/ActionScript 3 environment, so if you're familiar with that, think Tween Classes.
I probably could do this easily with Core Animation (CABasicAnimation etc.), but I will have up to a hundred gradient-filled shapes with varying opacity being animated dynamically, and I need good performance (60 fps would be great). So I went for OpenGL ES. Plus, I'm all for investing time into learning something that I'll be able to reuse cross-platform.
So I know OpenGL is only for graphics rendering, and I'm not going to find any built-in 2D animation methods. And I've heard that using Core Animation with OpenGL (if feasible at all) is not a good idea performance-wise.
But before I look deeper into interpolation algorithms to increment my vertex coordinates every frame, I just wanted to make sure I'm not totally missing out on something much easier!
Thanks!
I would look into the popular cocos2d library. It looks really nice, supports animation, and uses OpenGL ES behind the scenes.
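If you do end up rolling your own tweening instead, the per-frame work really is as small as the question implies. As a sketch, a quadratic ease-in-out of a single float, with t running from 0 to 1 over the life of the animation:

    // Quadratic ease-in-out: accelerate for the first half of the
    // animation, decelerate for the second. t is normalised time in [0, 1].
    static float easeInOut(float from, float to, float t)
    {
        float eased = (t < 0.5f) ? 2.0f * t * t
                                 : 1.0f - 2.0f * (1.0f - t) * (1.0f - t);
        return from + (to - from) * eased;
    }

Call it once per frame for each animated value, with t computed as the elapsed time divided by the tween's duration.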