How to update VBO vertex data directly? - iPhone

I have a vertex buffer and an index buffer to render a polygon mesh.
I would like to manipulate the positions of N vertices (move them around independently of their neighboring vertices).
How can I go about doing this?
And I certainly hope I don't have to go back to using glDrawArrays (instead of glDrawElements). It took me forever just to figure out vertex/index buffer rendering.

You may get slightly better performance if you update the data using glBufferSubData, especially if you can avoid rewriting the whole buffer and only update the small part that changed. Unless you move your vertex animation into the vertex shader, you need to update the vertex buffer each time a vertex is moved (by your user), and glBuffer(Sub)Data is your best bet.
EDIT: Create the VBO with DYNAMIC usage, and if you make changes very often, create two buffers and use a double-buffering approach to avoid a stall: that way you can write data into one buffer while the GPU is still using the other one for rendering.
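As a rough illustration (not code from the original answer), a partial update with glBufferSubData on the iPhone could look like the following; the Vertex struct, buffer name, and movedStart/movedCount bookkeeping are placeholders, and the ES 1.1 header is an assumption (use OpenGLES/ES2/gl.h for ES 2.0):
#include <OpenGLES/ES1/gl.h>   /* iPhone OpenGL ES 1.1 header; assumption for this sketch */

#define VERTEX_COUNT 1024       /* illustrative mesh size */

typedef struct { GLfloat x, y, z; } Vertex;

static GLuint meshVbo;
static Vertex vertices[VERTEX_COUNT];   /* CPU-side copy of the mesh */

void initVertexBuffer(void) {
    glGenBuffers(1, &meshVbo);
    glBindBuffer(GL_ARRAY_BUFFER, meshVbo);
    /* DYNAMIC usage tells the driver the contents will be updated repeatedly. */
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
}

/* Called after the user has moved vertices [movedStart, movedStart + movedCount). */
void uploadChangedVertices(int movedStart, int movedCount) {
    glBindBuffer(GL_ARRAY_BUFFER, meshVbo);
    /* Re-upload only the changed range, not the whole buffer. */
    glBufferSubData(GL_ARRAY_BUFFER,
                    movedStart * sizeof(Vertex),
                    movedCount * sizeof(Vertex),
                    &vertices[movedStart]);
}
The index buffer and the glDrawElements call stay exactly as they are; only the positions inside the vertex buffer change.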

Related

Metal/OpenGL: How to set vertex buffer only once?

I have gone through https://www.raywenderlich.com/146414/metal-tutorial-swift-3-part-1-getting-started. Every frame,
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, at: 0)
renderEncoder.setFragmentTexture(texture, at: 0)
are called. But the vertex and texture data never change; only the uniform matrices change. The object being rendered contains 8*4*4*4*4 triangles (yep, it's a sphere). I only get 4 FPS, and I am skeptical about setting the vertexBuffer every frame.
It's done similarly in OpenGL tutorials: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/
In OpenGL I could pull the vertex/texture buffer binding out of the render loop. But in Metal, the MTLRenderCommandEncoder needs a CAMetalDrawable, which is fetched every frame.
You would typically use a new render command encoder for each frame. Anything you did with the previous render command encoder, like setting vertex buffers or fragment textures, is "lost" when that encoder is ended and you drop any references to it. So, yes, you need to set buffers and textures again.
However, that should not be expensive. Both of those methods just put a reference to the buffer or texture into a table. It's cheap. If you haven't modified their contents on the CPU, no data has to be copied. It shouldn't cause any state compilation, either. (Apple has said a design goal of Metal is to avoid any implicit state compilation. It's all explicit, such as when creating a render pipeline state object from a render pipeline descriptor.)
You need to profile your app to figure out what's limiting your frame rate, rather than guessing.

Drawing a 3D tree structure in WebGL

I am working on drawing large directed acyclic graphs in WebGL using the gwt-g3d library as per the technique shown here: http://www-graphics.stanford.edu/papers/h3/
At this point, I have a simple two-level graph rendering.
Performance is terrible -- it takes about 1.5-2 seconds to render this thing. I'm not an OpenGL expert, so here is the general approach I am taking. Maybe somebody can point out some optimizations that will get this rendering quicker.
I am astonished how long it takes to push the MODELVIEW matrix and buffers to the graphics card. This is where the lion's share of the time is wasted. Should I instead be doing MODELVIEW transformations in the vertex shader?
The discussion at https://gamedev.stackexchange.com/questions/27042/translate-the-modelview-matrix-or-change-vertex-coordinates leads me to believe that manipulating the MODELVIEW matrix and pushing it once for each node shouldn't be bad practice, but the timings don't lie.
Group nodes into larger chunks instead of rendering them separately. Cache in the background all geometry (with its transformations already applied) that will most likely not be modified, store it in one buffer, and render it in one call (a sketch of this follows at the end of this answer).
Another solution: store the nodes (box + line) in one buffer (you can store more than you currently need) and their transformations in a texture, then apply the transformations in the vertex shader based on the node index (texture coordinates). That should be drastically faster.
To test for vertex-texture support you can use a WebGL capability report site; I have MAX_VERTEX_TEXTURE_IMAGE_UNITS = 4.
The best solution would be geometry instancing, but it currently isn't supported in WebGL.
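For illustration, here is a C/OpenGL-flavored sketch of the "bake the transforms and batch everything into one buffer" idea; WebGL and gwt-g3d expose the same calls under slightly different spellings, and the Node struct, attribute location, and scratch buffer are placeholders:
#include <GLES2/gl2.h>   /* C-flavored sketch; WebGL / gwt-g3d offer equivalent calls */

/* Hypothetical per-node data: a model matrix plus node-local geometry. */
typedef struct {
    float model[16];          /* column-major 4x4 transform of this node */
    const float *localVerts;  /* xyz triples in node-local space */
    int vertCount;
} Node;

/* Multiply a column-major 4x4 matrix by a point (w assumed to be 1). */
static void transformPoint(const float m[16], const float *in, float *out) {
    for (int r = 0; r < 3; ++r)
        out[r] = m[r] * in[0] + m[4 + r] * in[1] + m[8 + r] * in[2] + m[12 + r];
}

/* Bake every node's transform into one big vertex array and upload it once.
 * Re-run only when the graph layout actually changes. Returns the total
 * number of vertices in the batch. */
int buildBatchedBuffer(const Node *nodes, int nodeCount, float *scratch, GLuint vbo) {
    int total = 0;
    for (int n = 0; n < nodeCount; ++n) {
        for (int v = 0; v < nodes[n].vertCount; ++v)
            transformPoint(nodes[n].model, &nodes[n].localVerts[3 * v],
                           &scratch[3 * (total + v)]);
        total += nodes[n].vertCount;
    }
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, total * 3 * sizeof(float), scratch, GL_STATIC_DRAW);
    return total;
}

/* Every frame: a single bind and a single draw call for the whole graph. */
void drawBatched(GLuint vbo, int totalVerts) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);                        /* assumes position = location 0 */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_LINES, 0, totalVerts);               /* or GL_TRIANGLES for the boxes */
}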

Is it possible to persistently change the values of a VBO on the iPhone OpenGL ES 2.0 inside a vertex shader?

I am an Opengl ES 2.0 newbie (and GLSL newbie) so forgive me if this is an obvious question.
If I have a VBO that I initialize once on the CPU at the start of my program, is it possible to then use vertex shaders to update it each frame without doing calculations on the CPU and then re-uploading it to the GPU? I'm not referring to sending a uniform and manipulating the data based on that. Instead I mean causing a persistent change in the VBO on the GPU itself.
So the simplest example I can think of would be adding 1 to the x, y, and z components of gl_Position in the vertex shader every time the frame is rendered. This would mean that if I had only one vertex and its initial position was set on the CPU to be (0, 0, 0, 1), then after 30 frames it would be (30, 30, 30, 1).
If this is possible what would it look like in code?
On modern desktop hardware (GL3/DX10) you can use transform feedback to write back the output of the vertex or geometry shader into a buffer, but I really doubt that the transform_feedback extension is supported on the iPhone (or in ES in general).
If PBOs are supported (which I also doubt), you can at least do it with some GPU-to-GPU copies. Just copy the vertex buffer into a texture (by binding it as a PBO), then render a textured fullscreen quad and perform the update in the fragment shader. After that you copy the framebuffer (which now contains the updated vertex data) back into the vertex buffer (again by binding it as a PBO). But this way you have to do two copies (although they should both happen completely on the GPU), and if the vertex data is floating point, you will need floating-point render targets and framebuffer objects to be supported, too.
I think in ES the best solution would really be to do the computation on the CPU. Just keep a CPU copy (so you at least have no unnecessary GPU-to-CPU readback) and update the buffer data every frame (using GL_DYNAMIC_DRAW or even GL_STREAM_DRAW as the buffer usage).
Maybe you can also avoid the persistent update entirely by making the changes depend on some simpler data. In your example you could just use a uniform for the frame number and add it to the coordinates in the vertex shader every frame, but I don't know how complex your update function really is.
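A minimal ES 2.0-style sketch of that last idea (the shader source, uniform names, and attribute name are made up for illustration; u_mvp and the a_position attribute binding are assumed to be set up elsewhere):
#include <OpenGLES/ES2/gl.h>   /* iPhone OpenGL ES 2.0 header */

/* The VBO is never modified: the offset is recomputed every frame from a
 * single uniform instead of being persisted into the buffer. */
static const char *vertexShaderSrc =
    "attribute vec4 a_position;                  \n"
    "uniform float u_frame;                      \n"
    "uniform mat4  u_mvp;                        \n"
    "void main() {                               \n"
    "    vec4 p = a_position;                    \n"
    "    p.xyz += vec3(u_frame);                 \n"  /* same effect as adding 1 per frame */
    "    gl_Position = u_mvp * p;                \n"
    "}                                           \n";

/* Per frame, on the CPU: bump one float instead of re-uploading any vertices. */
void drawFrame(GLuint program, GLsizei vertexCount, float frameNumber) {
    glUseProgram(program);
    glUniform1f(glGetUniformLocation(program, "u_frame"), frameNumber);
    glDrawArrays(GL_POINTS, 0, vertexCount);     /* buffer/attribute setup done elsewhere */
}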

Easiest way to visualize 10,000 shaded boxes in 3D

I have a simple task: I have 10,000 3D boxes, each with an x, y, z, width, height, depth, rotation, and color. I want to throw them into 3D space, visualize it, and let the user fly through it using the mouse. Is there an easy way to put this together?
One easy way of doing this using recent (v 3.2) OpenGL would be:
make an array with 8 vertices (the corners of a cube), give them coordinates on the unit cube, that is from (-1, -1, -1) to (1, 1, 1)
create a vertex buffer object
use glBufferData to get your array into the vertex buffer
bind the vertex buffer
create, set up, and bind any textures that you may want to use (skip this if you don't use textures)
create a vertex shader which applies a transform matrix that is read from "some source" (see below) according to the value of gl_InstanceID
compile the shader, link the program, bind the program
set up the instance transform data (see below) for all cube instances
depending on what method you use to communicate the transform data, you may draw everything in one batch, or use several batches
call glDrawElementsInstanced N times, with the instance count set to as many cubes as will fit into one batch (see the sketch at the end of this answer)
if you use several batches, update the transform data in between
the vertex shader applies the transform in addition to the normal MVP stuff
To communicate the per-cube transform data, you have several alternatives, among them are:
uniform buffer objects: you have a guaranteed minimum of 4096 values, i.e. 256 4x4 matrices, but you can query the actual limit
texture buffer objects: again, you have a guaranteed minimum of 65536 values, i.e. 4096 4x4 matrices (but usually much more; my elderly card can do 128,000,000 values, so query the actual limit)
manually set uniforms for each batch: this does not need any "buffer" stuff, but is most probably somewhat slower
Alternatively: use pseudo-instancing, which will work even on hardware that does not support instancing directly. It is not as elegant and is very slightly slower, but it does the job.
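A rough sketch of the batched instanced draw described above, with the per-cube matrices passed through a uniform block indexed by gl_InstanceID; the block name, binding point, batch size, and index count are illustrative, and shader compilation plus the cube VAO are assumed to be set up elsewhere:
#include <GL/glcorearb.h>    /* or whatever header your GL 3.2 function loader provides */

#define BATCH_SIZE 256       /* 256 mat4s fit into the guaranteed-minimum UBO size */

/* GLSL the sketch assumes in the vertex shader:
 *   layout(std140) uniform InstanceData { mat4 model[256]; };
 *   gl_Position = viewProj * model[gl_InstanceID] * vec4(position, 1.0);
 */

void drawCubes(GLuint program, GLuint instanceUbo,
               const float *modelMatrices,   /* 16 floats per cube */
               int cubeCount) {
    glUseProgram(program);
    GLuint blockIndex = glGetUniformBlockIndex(program, "InstanceData");
    glUniformBlockBinding(program, blockIndex, 0);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, instanceUbo);

    for (int first = 0; first < cubeCount; first += BATCH_SIZE) {
        int count = cubeCount - first;
        if (count > BATCH_SIZE) count = BATCH_SIZE;

        /* Update the transform data in between batches. */
        glBindBuffer(GL_UNIFORM_BUFFER, instanceUbo);
        glBufferSubData(GL_UNIFORM_BUFFER, 0,
                        count * 16 * sizeof(float),
                        modelMatrices + first * 16);

        /* 36 indices per cube (12 triangles); the unit-cube VAO is bound. */
        glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0, count);
    }
}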

Optimizing OpenGL ES application. Should I avoid calling glVertexPointer when possible?

I'm developing a game for iPhone in OpenGL ES 1.1; I have a lot of textured quads in a data structure where each node has a list of child nodes. So I traverse the structure from the root, render each quad, then its children, and so on.
The thing is, for each quad I'm calling glVertexPointer to set the vertices.
Should I avoid calling it for each quad? Would calling it just once improve performance, for example?
Does glVertexPointer copy the vertices to GPU memory, or does it just save the pointer?
Trying to minimize the number of calls will not be easy, since each node may have a different quad. I have a lot of identical sprites with the same vertex data, but I'm not necessarily rendering them one after another, since I may draw a different sprite in between.
Thanks.
glVertexPointer keeps just the pointer, but it incurs a state change in the OpenGL driver and an explicit synchronisation, so it costs quite a lot. Normally when you say 'here's my data, please draw', the GPU starts drawing and continues to do so in parallel to whatever is going on on the CPU for as long as it can. When you change rendering state, it needs to finish whatever it was doing in the old state. So by changing once per quad, you're effectively forcing what could be concurrent processing to be sequential. Hence, avoiding glVertexPointer (and, presumably, a glDrawArrays or glDrawElements?) per quad should give you a significant benefit.
An immediate optimisation is simply to keep a count of the number of quads in total in the data structure, allocate a single target buffer for vertices that is at least that size and have all quads copy their geometry into the target buffer rather than calling glVertexPointer each time. Then call glVertexPointer and your drawing calls (condensed to just one call also, hopefully) with the one big array at the end. It's a bit more costly on the CPU side but the parallelism and lack of repeated GPU/CPU synchronisations should save you a lot.
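A minimal ES 1.1-flavored sketch of that batching idea (the Quad struct, the traversal hook, and MAX_QUADS are placeholders; texture coordinates and color are left out to keep it short):
#include <OpenGLES/ES1/gl.h>

#define MAX_QUADS 1024

typedef struct { GLfloat x0, y0, x1, y1; } Quad;   /* axis-aligned quad for the sketch */

static GLfloat batchedVerts[MAX_QUADS * 12];       /* 6 vertices * 2 floats per quad */
static int quadCount = 0;

/* Called while traversing the tree: expand the quad into two triangles and
 * copy it into the big array instead of drawing it immediately. */
void appendQuad(const Quad *q) {
    GLfloat *v = &batchedVerts[quadCount * 12];
    v[0]  = q->x0; v[1]  = q->y0;
    v[2]  = q->x1; v[3]  = q->y0;
    v[4]  = q->x1; v[5]  = q->y1;
    v[6]  = q->x0; v[7]  = q->y0;
    v[8]  = q->x1; v[9]  = q->y1;
    v[10] = q->x0; v[11] = q->y1;
    quadCount++;
}

/* After the whole tree has been traversed: one glVertexPointer, one draw call.
 * (Texture coordinates would be batched the same way with glTexCoordPointer.) */
void flushBatch(void) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, batchedVerts);
    glDrawArrays(GL_TRIANGLES, 0, quadCount * 6);
    quadCount = 0;
}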
While tiptoeing around topics currently under NDA, I strongly suggest you look at the Xcode 4 beta. Amongst other features Apple have stated publicly to be present is an OpenGL ES profiler. So you can easily compare approaches.
To copy data to the GPU, you need to use a vertex buffer object. That means creating a buffer with glGenBuffers, pushing data to it with glBufferData, and then calling glVertexPointer with an offset of e.g. 0 if the first byte in the data you uploaded is the first byte of your vertices. In ES 1.x, you can upload data as GL_DYNAMIC_DRAW to flag that you intend to update it quite often and draw from it quite often. It's probably worth doing if you can get into a position where you're drawing more often than you're uploading.
If you ever switch to ES 2.x, there's also GL_STREAM_DRAW, which may be worth investigating but isn't directly relevant to your question. I mention it because it'll likely come up if you Google for vertex buffer objects, since it's available in desktop OpenGL. The only options in ES 1.x are GL_STATIC_DRAW and GL_DYNAMIC_DRAW.
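For reference, a small sketch of that VBO path in ES 1.1; the buffer name and 2D vertex layout are illustrative:
#include <OpenGLES/ES1/gl.h>

static GLuint quadVbo;

void createQuadVbo(const GLfloat *verts, GLsizeiptr byteSize) {
    glGenBuffers(1, &quadVbo);
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    /* GL_DYNAMIC_DRAW: we intend to update this buffer fairly often. */
    glBufferData(GL_ARRAY_BUFFER, byteSize, verts, GL_DYNAMIC_DRAW);
}

void drawFromVbo(int vertexCount) {
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    /* With a VBO bound, the "pointer" argument is a byte offset into the
     * buffer, so 0 means the vertices start at the first uploaded byte. */
    glVertexPointer(2, GL_FLOAT, 0, (const GLvoid *)0);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}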
I've just recently worked on an iPad ES 1.x application with objects that change every frame but are drawn twice per the rendering pipeline in use. There are only five such objects on screen, each 40 vertices, but switching from the initial implementation to the VBO implementation cut 20% off my total processing time.