OpenGL ES 2.0 vs. OpenGL ES 1.1: which is faster? - iPhone

I've written an app using OpenGL ES 1.1, but am wondering if there are speed gains to be found by switching to 2.0. Has anyone done any tests with large polygon-count models? I only want to render triangles that have different colors, nothing fancy. However, I want to render about 1 million triangles for my comparison test.

OpenGL ES 1.1 and 2.0 provide two very different ways of doing 3-D graphics, so I don't know that direct performance comparisons make much sense. You're probably going to see identical performance using both if you create 2.0 shaders that just simulate OpenGL ES 1.1's fixed function pipeline. This is backed by Apple's documentation on the PowerVR SGX, which says:
The graphics driver for the PowerVR SGX also implements OpenGL ES 1.1
by efficiently implementing the fixed-function pipeline using shaders.
For rendering basic, flat-colored triangles, I'd suggest going with OpenGL ES 1.1 simply because you'll need to write a lot less code. If you are able to get by with the built-in functionality in 1.1, it's usually easier to target that version. You also have a slightly larger market by being able to target the (now) minority of iOS device owners with hardware that doesn't support 2.0.
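To give a sense of how little code the 1.1 path needs for flat, per-vertex-colored triangles, here is a minimal sketch (the vertex and color data are placeholders; a model the size you describe would live in a VBO rather than client-side arrays, but the fixed-function setup stays this small):

```c
// Minimal OpenGL ES 1.1 sketch: per-vertex-colored triangles, no lighting,
// no textures. The vertex and color arrays are placeholder data.
#include <OpenGLES/ES1/gl.h>

static const GLfloat vertices[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};
static const GLubyte colors[] = {
    255,   0,   0, 255,
      0, 255,   0, 255,
      0,   0, 255, 255,
};

void drawColoredTriangles(void)
{
    glDisable(GL_LIGHTING);                 // plain colors, nothing fancy
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```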
However, OpenGL ES 2.0 lets you do a whole lot more using its vertex and fragment shaders than 1.1 does, so some of the things that you might do with extensive geometry can instead be handled by shaders. This can make for better-looking, faster effects.
For example, I'm finishing an update to my molecular renderer using 2.0 shaders that will significantly increase the resolution of the visualized structures. I'm doing this with custom shaders that generate raytraced impostors for the spheres and cylinders in these structures. These objects look perfectly round and smooth at any magnification. Doing this in OpenGL ES 1.1 with pure geometry would be all but impossible, because the number of triangles required would be ridiculous (billboards also wouldn't work well for my cylinders, and the intersections of these shapes wouldn't be handled right in that case).
A million triangles might be a bit much for these devices. In my benchmarks, the old iPhone 3G did around 500,000 triangles per second, and the first-generation iPad about 2,000,000. I haven't fully benchmarked the much faster iPad 2, but my early tests show it at about 8,000,000 - 10,000,000 triangles per second. Even on the fastest device out there, you're only going to get roughly 10 FPS on a million-triangle scene. Odds are, you don't need that much geometry, so I'd do what I could to reduce it first.

The performance gains in ES 2.0 don't come from rendering single VBOs, but from:
1) performance tweaks in custom shaders that do only the bare minimum required, rather than the more general fixed-function path
2) rendering lots of objects cheaply, thanks to the streamlined matrix pipeline, the removal of the matrix stack and the fixed-function state machine (which has to work out new internal shaders on state changes), and the removal of the need for multipass rendering for some effects.
This lets the CPU, for example, do all dynamic matrix transformations on a separate thread, leave static matrices alone, and avoid unneeded CPU-to-GPU transfers. With shaders there's also no need to continually rebuild camera matrices when switching between 2D and 3D rendering.
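As a minimal sketch of that idea (the SceneObject struct, the matrix helper, and the uniform/attribute locations below are illustrative, not from the question), each object's model-view-projection matrix is built on the CPU and handed to the shader as a single uniform:

```c
// OpenGL ES 2.0 sketch: compute each object's MVP on the CPU (column-major,
// as GL expects) and upload it once per draw. SceneObject is illustrative;
// its modelMatrix could be updated on a worker thread before the frame.
#include <OpenGLES/ES2/gl.h>

typedef struct {
    GLuint  vbo;
    GLsizei vertexCount;
    GLfloat modelMatrix[16];
} SceneObject;

static void mat4_multiply(GLfloat out[16], const GLfloat a[16], const GLfloat b[16])
{
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            out[col * 4 + row] = 0.0f;
            for (int k = 0; k < 4; k++)
                out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
        }
}

void drawScene(GLuint program, GLint mvpLocation, GLint positionAttrib,
               const GLfloat viewProjection[16],
               const SceneObject *objects, int objectCount)
{
    glUseProgram(program);
    glEnableVertexAttribArray(positionAttrib);

    for (int i = 0; i < objectCount; i++) {
        GLfloat mvp[16];
        mat4_multiply(mvp, viewProjection, objects[i].modelMatrix);

        glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvp);   // no matrix stack
        glBindBuffer(GL_ARRAY_BUFFER, objects[i].vbo);
        glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glDrawArrays(GL_TRIANGLES, 0, objects[i].vertexCount);
    }
}
```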

Related

Is there a double floating point data type in Godot shaders?

I'm trying to use shaders in Godot and I need a really precise calculation (more than float). Is it possible to have a double in Godot shaders? I searched the documentation but I found nothing...
Edit: I've made a Mandelbrot set explorer, and with floats the image gets all pixelated after some zooming because the precision limit is reached. I think that with doubles I would be able to zoom further without losing quality. You can check out my code here, btw.
Godot doesn't support FP64 in shaders, since OpenGL 3.3 / OpenGL ES 3.0 doesn't mandate GPU support for it. Doubles are expensive and often crippled on consumer GPUs anyway, making them a bad fit for most real-time applications (especially games that need to run at high framerates).

Best practice to increase frames per second for my Daydream VR application [duplicate]

I have been building a game for VR using Unity3D. It has only low-poly models and the file size is less than 40 MB, yet the game still lags when played on mobile. Please suggest how to improve the performance.
Thank you in advance.
In order to improve performance in VR on mobile you have to optimize everything as best you can. Keep some of these variables in mind:
Graphics Side
- Number of polygons in the scene
- How many sources of lighting you have
Programming Side
- How much work your code is doing, and whether it is doing that work efficiently
The programming side can include problems in the physics system, as well as logic problems that decrease overall performance through extra computation.
My advice is to learn about the Profiler that Unity offers; with it you can observe how much work your code is doing and exactly where your bottleneck is. This video can also be useful.
Of course, a solution could be to implement your code following design standards, like design patterns and software architecture (depending on the size of the project).
I hope this is useful for you!
What I found from developing and launching a VR game is some of the issues below.
The number of polygons is usually the first thing to check, even though your models are low poly. For example, I looked at Synty models in the Unity store and some of them were over 1k polygons for a bag and 7k for a character model. This seriously reduces the number of objects you can show if you want to target a maximum of around 50,000 polygons per eye.
With some models, you can use Blender and the Decimate tool to reduce the polygon count pretty easily. From there I would use LODs to reduce their count further based on distance.
Use occlusion culling (pro version only)
Set your camera's draw distance to maybe 100 instead of the default
Use mobile shaders and be careful using some of the standard shaders, as they are expensive. Transparent shaders also become expensive because they cause overdraw.
Batch your textures and make objects static where possible
Don't use dynamic shadows on your lights; bake your lighting instead
Try to avoid using physics, as it becomes expensive; instead use raycasts to trigger events or handle weapon fire
Run profiler often and check for any bottlenecks (pro version only)
Reduce the number of particle effects and their emission values
Character bones can also cause issues, so remove as many as possible
There is also your code to look at, as mentioned by Manujamming
Set the quality setting to Low in the inspector to get the best performance.
Could you provide a screenshot of your game scene?
I hope this makes sense.
Best of luck!

Best way to render 2D animated sprites in OpenGL ES

I am writing a 2D game on Android and I am targeting phones that have at minimum OpenGL ES 1.1 support.
I am currently looking at creating my animated sprite class, which is basically a quad that has a changing texture on it to provide animation.
I want to stick to OpenGL ES 1.1, so I am avoiding shaders and was wondering how other people have approached the implementation of animated sprites.
My thoughts initially were to either:
Have a single vertex buffer object with one texture coordinate set, then use lots of pre-loaded textures that are swapped at runtime in the correct order.
Have just one sprite-sheet texture and modify the texture coordinates at runtime to display the correct subsection of the sprite sheet.
Is there a more clever or more efficient way to do this without shaders?
Thanks
Choose #2 if you have only the two options.
However, I recommend building and caching the full quad vertex set for each sprite frame in a vertex buffer in the memory closest to the GPU. Alternatively, just generate the sprite's quad vertices fresh and specify them for each draw. Caching is a performance-versus-memory trade-off; think about how much memory the vertices for a single frame consume.
Changing internal GPU state is an expensive operation, and that includes swapping texture objects. Avoid this as much as possible.
This is the reason huge texture atlases are used in traditional game development.
Transferring resources (including vertices) to VRAM (the memory closest to the GPU) can be expensive because they need to be copied over a slower bus. It's similar to a server-client situation: GPU+VRAM is the server, CPU+RAM is the client, connected through a PCI-bus "network". However, this can vary by system architecture and memory/bus model.
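As a rough sketch of option #2 in C against the GLES headers (the equivalent GL10 calls in Java map one-to-one), only the texture coordinates change per frame while the sprite-sheet texture stays bound; the 4x4 grid layout is an assumption for illustration:

```c
// OpenGL ES 1.1 sketch: one sprite-sheet texture, per-frame texture
// coordinates selecting a sub-rectangle. The 4x4 grid layout is assumed.
#include <GLES/gl.h>

#define SHEET_COLS 4
#define SHEET_ROWS 4

void drawSpriteFrame(GLuint sheetTexture, int frame)
{
    frame %= SHEET_COLS * SHEET_ROWS;
    const GLfloat frameW = 1.0f / SHEET_COLS;
    const GLfloat frameH = 1.0f / SHEET_ROWS;
    const GLfloat u0 = (frame % SHEET_COLS) * frameW;
    const GLfloat v0 = (frame / SHEET_COLS) * frameH;

    const GLfloat quad[] = {          // unit quad as a two-triangle strip
        0.0f, 0.0f,   1.0f, 0.0f,
        0.0f, 1.0f,   1.0f, 1.0f,
    };
    const GLfloat uv[] = {            // this frame's sub-rectangle of the sheet
        u0,          v0 + frameH,
        u0 + frameW, v0 + frameH,
        u0,          v0,
        u0 + frameW, v0,
    };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, sheetTexture);   // stays bound across frames
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, quad);
    glTexCoordPointer(2, GL_FLOAT, 0, uv);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```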

Skybox OpenGL ES iPhone and iPad

I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when he's inside a historic building or visiting an archaeological dig. No need for a Doom-like first-person point of view: just a skybox. The application will have a list of points of interest (POIs), and every POI will have its own skybox.
I thought that I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to get two-axis rotation (first-person shooter or Google Street View style) of the scene according to member variables, and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane (both sketched in the code after this list)
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters so that only one level of detail is supplied for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
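Since the projection and model-view items above are the least obvious, here's roughly what that setup looks like (the field of view, clip planes and rotation variable names are illustrative, not lifted from my project):

```c
// OpenGL ES 1.1 sketch of the PROJECTION and MODELVIEW setup for a skybox.
// The field of view, clip planes and member-variable names are illustrative.
#include <OpenGLES/ES1/gl.h>
#include <math.h>

void setupProjection(GLfloat viewWidth, GLfloat viewHeight)
{
    const GLfloat fovY   = 75.0f * (GLfloat)M_PI / 180.0f;
    const GLfloat aspect = viewWidth / viewHeight;
    const GLfloat zNear  = 0.1f;   // closer than the cube faces, so no clipping
    const GLfloat zFar   = 10.0f;
    const GLfloat top    = zNear * tanf(fovY * 0.5f);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-top * aspect, top * aspect, -top, top, zNear, zFar);
}

void setupModelView(GLfloat pitchDegrees, GLfloat yawDegrees)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Two-axis, first-person style rotation driven by the member variables
    // that the touch handling updates. The camera never translates, so a
    // unit cube centred on the origin stays clear of the near clip plane.
    glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f);
    glRotatef(yawDegrees,   0.0f, 1.0f, 0.0f);
}
```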
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
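For reference, here's a condensed sketch of that loading path (not the actual 122-line class; error handling and power-of-two checks are omitted):

```c
// Sketch: decode a PNG with CoreGraphics into RGBA bytes we allocate
// ourselves, then upload the pixels to OpenGL ES. No error handling.
#include <OpenGLES/ES1/gl.h>
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

GLuint textureFromPNG(const char *path)
{
    CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                        kCGRenderingIntentDefault);
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    // Draw the image into a bitmap context backed by memory we own.
    void *pixels = calloc(width * height, 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                 width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    // Release everything except the pixel memory OpenGL will copy from.
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(image);
    CGDataProviderRelease(provider);

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // One level of detail only: no mipmaps, plain linear filtering.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    free(pixels);
    return texture;
}
```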
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than managing an intuitive leap.
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
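To give a flavour of those arrays, here's one face of the cube written out (the coordinates, texture orientation and winding here are illustrative, not copied from my project):

```c
// Sketch of the three arrays for a single face of the skybox cube (+Z face);
// the other five faces follow the same pattern, for 24 vertices in total.
#include <OpenGLES/ES1/gl.h>

static const GLfloat frontFaceVertices[] = {
    -0.5f, -0.5f, 0.5f,     // 0: bottom left
     0.5f, -0.5f, 0.5f,     // 1: bottom right
    -0.5f,  0.5f, 0.5f,     // 2: top left
     0.5f,  0.5f, 0.5f,     // 3: top right
};

static const GLfloat frontFaceTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};

// Two triangles per face, referring back to the vertex positions by index.
static const GLubyte frontFaceIndices[] = {
    0, 1, 2,
    2, 1, 3,
};

void drawFrontFace(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, frontFaceVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, frontFaceTexCoords);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, frontFaceIndices);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```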
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
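That "tiny bit of mechanics" amounts to something like this sketch (the constants and variable names are made up): the touch handler accumulates two angles and remembers the last motion, and the per-frame update keeps applying a decaying copy of that motion after the finger lifts.

```c
// Sketch of two-axis touch rotation with simple inertia. Hook touchMoved up
// to the touch-move callback and stepInertia to the per-frame render loop.
static float yaw, pitch;                  // consumed by the model-view setup
static float yawVelocity, pitchVelocity;

void touchMoved(float deltaX, float deltaY)
{
    const float degreesPerPoint = 0.5f;   // drag sensitivity (made up)
    yaw   += deltaX * degreesPerPoint;
    pitch += deltaY * degreesPerPoint;
    yawVelocity   = deltaX * degreesPerPoint;   // remember the last motion
    pitchVelocity = deltaY * degreesPerPoint;
}

void stepInertia(void)                    // call only while no finger is down
{
    yaw   += yawVelocity;
    pitch += pitchVelocity;
    yawVelocity   *= 0.95f;               // bleed the spin off gradually
    pitchVelocity *= 0.95f;
}
```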
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skyboxes (and much more) out of the box, and it should be pretty straightforward to do.
I think you can do this. Here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems (I would post direct links, but Stack Overflow won't let me):
look at Stack Overflow questions 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.

Is there a good rule of thumb to help decide when it is appropriate to use Quartz 2D v. Core Animation v. OpenGL in an iPhone/iPad app?

What I'm looking for is a heuristic for determining which of the primary graphics methods for iPhone/iPad development would be the most appropriate solution for a given problem.
Simple answer:
Quartz 2D
Use Quartz 2D for custom interface elements to give a stylized look to your application.
Core Animation
Easier to use, though best suited to cases where performance is not critical. Great for quick animation routines and a lot easier to do effects in. Works with UIViews as well. Can be used to create simple games like Pong or card games.
OpenGL ES
Great for performance-critical games. A bit more complex, but once you get your head around the tutorials available and the frameworks provided, you can create high-performance games. You can also port them to devices other than the iPhone fairly easily, which is quite cool.
Long Answer:
Quartz 2D
Quartz 2D is the powerful 2D graphics API for iOS. It offers professional-strength 2D graphics features such as Bézier curves, transformations, and gradients. Use Quartz 2D for custom interface elements to give a stylized look to your application.
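As a rough illustration of the kind of stylized custom element Quartz 2D is good at (the shape and colours below are made up), a Bézier-edged panel filled with a gradient is only a handful of C calls against a CGContextRef obtained from a view's drawing method:

```c
// Sketch: a custom element with a cubic Bézier edge, clipped and filled with
// a linear gradient. The geometry and colours are illustrative only.
#include <CoreGraphics/CoreGraphics.h>

void drawStylizedElement(CGContextRef context, CGRect bounds)
{
    // Build a wave-like outline using one cubic Bézier curve.
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, CGRectGetMinX(bounds), CGRectGetMaxY(bounds));
    CGContextAddCurveToPoint(context,
                             CGRectGetMidX(bounds), CGRectGetMinY(bounds),
                             CGRectGetMidX(bounds), CGRectGetMaxY(bounds),
                             CGRectGetMaxX(bounds), CGRectGetMinY(bounds));
    CGContextAddLineToPoint(context, CGRectGetMaxX(bounds), CGRectGetMaxY(bounds));
    CGContextClosePath(context);

    // Clip to the outline and fill with a top-to-bottom linear gradient.
    CGContextSaveGState(context);
    CGContextClip(context);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    const CGFloat components[] = { 0.20f, 0.50f, 0.90f, 1.0f,    // top colour
                                   0.05f, 0.15f, 0.40f, 1.0f };  // bottom colour
    CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace,
                                                                 components,
                                                                 NULL, 2);
    CGContextDrawLinearGradient(context, gradient,
                                CGPointMake(CGRectGetMidX(bounds), CGRectGetMinY(bounds)),
                                CGPointMake(CGRectGetMidX(bounds), CGRectGetMaxY(bounds)),
                                0);
    CGGradientRelease(gradient);
    CGColorSpaceRelease(colorSpace);
    CGContextRestoreGState(context);
}
```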
Core Animation
Core Animation is likely the appropriate choice for games where performance is not critical, such as Simon-says-type games, card games, and trivia games. Some might argue that OpenGL ES is easier to use, and it likely is if you've studied, say, DirectX, but Core Animation (and Quartz 2D for that matter) is much easier to do simple effects in, and can be used with existing UIViews.
Core Animation is fine for games where performance is not critical, and for new programmers it will likely be easy to use; OpenGL is needed for anything else.
Core Animation utilizes OpenGL ES under the hood; it is high-level, and in my testing it works fine even in situations where performance is critical.
OpenGL ES
is your choice for performance-critical games, which is essentially anything but the simple, mostly static games mentioned above: first-person shooters, flight simulators, and the like. You also get the added benefit of potentially being able to port your games to devices other than the iPhone, and there is a lot of existing OpenGL game code that can be converted the other way. OpenGL ES is an open standard used on a growing number of devices from a wide variety of companies, and because Core Animation is a higher-level framework built atop OpenGL ES, it cannot provide nearly the same performance.
http://maniacdev.com/2009/07/iphone-game-programming-coreanimation-vs-opengl-es/
http://developer.apple.com/technologies/ios/graphics-and-animation.html
PK