Fast software rendering on iOS (iPhone)

Bad news everyone!
I'm trying to port an old Windows game to the iPad, and I have a problem: most of the graphics code was written without hardware acceleration. On Windows it calls LockRect() on an IDirect3DSurface8 to get a pointer to the backbuffer's color buffer and writes pixel data into it, then calls UnlockRect() to send that pixel data to video memory. OpenGL ES has no such functionality, so I'm trying to emulate it: I have an array that I draw every game tick by uploading it with glTexImage2D() and then calling glDrawTexfOES(0, 0, 0, 1024, 768).
But creating a texture from the array every tick is too slow. How can I do this much faster? Thanks in advance.

Try rendering a textured quad and using glTexSubImage2D() for the per-frame texture re-uploads. That way the texture storage is allocated once instead of being recreated every tick.
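A minimal sketch of that approach (OpenGL ES 1.1 in C; the 1024x768 RGBA buffer named pixels and the function names are illustrative, and error checking is omitted):

    GLuint tex;

    void init_texture(void) {
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        /* Allocate storage once. ES 1.1 requires power-of-two sizes, so
           allocate 1024x1024 and update/draw only the 1024x768 region. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    }

    void draw_frame(const GLubyte *pixels) {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Re-upload the pixel data only; no per-frame reallocation. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 768,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        /* glDrawTexfOES needs a crop rect to know which texels to draw. */
        GLint crop[4] = { 0, 0, 1024, 768 };
        glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
        glDrawTexfOES(0, 0, 0, 1024, 768);
    }

If upload bandwidth is still the bottleneck, switching to a 16-bit format such as GL_RGB/GL_UNSIGNED_SHORT_5_6_5 may help by halving the per-frame transfer.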

Capturing the frames online without slowing the pace of the game

How can I capture frames in the background while a Unity game is running?
I already know about
    cam.Render();
    Texture2D image = new Texture2D(cam.targetTexture.width, cam.targetTexture.height);
    image.ReadPixels(new Rect(0, 0, cam.targetTexture.width, cam.targetTexture.height), 0, 0);
and then
converting the texture into an image using EncodeToPNG()/EncodeToJPG().
But from what I have seen, the extra cam.Render() call and the PNG encoding slow the operation down drastically; capturing frames this way takes a huge amount of time.
How can I get the textures or frames directly while the game is running, perhaps from the OpenGL calls the GPU is executing?
Does anyone have any idea how to achieve this?
Have you tried using Unity's VideoCapture API to record gameplay? There are tutorials and guides you can find, but the Unity documentation is a good place to start:
https://docs.unity3d.com/Manual/windowsholographic-videocapture.html
It's flexibly implemented, so you can work within whatever requirements you have in terms of frame rate, resolution, etc., based on your application and how it's set up.

What is the difference between CoreGraphics and CoreAnimation?

I am developing an iPhone game using CoreGraphics, but it is very slow; the game is not playable. So I googled a lot, and I kept running into these names:
CoreGraphics, CoreAnimation, OpenGL ES, CALayer, Quartz 2D
I am confused about them. Someone told me CoreGraphics does not use the GPU; someone else told me it does. Some say CoreGraphics is best, some say OpenGL is best, some say CALayer is better. What is the difference between them, and which ones use the GPU? Which one is best for making a game? I have many images to draw.
Please let me know. Thanks in advance.
The iOS graphics APIs are layered. Even though some portion of the final render might go through the GPU, most of the CoreGraphics drawing functions do not.
CoreAnimation does use the GPU, but the set of graphics operations in its API (mostly transforms of existing image data) is limited.
OpenGL ES uses the GPU, but (re)compiling any changes to the rendering pipeline is reported to be quite CPU intensive.
And anything that uploads new bitmaps, images or textures to the display pipeline appears to be both CPU and GPU intensive.
If you are writing a game, you may wish to look at SpriteKit for 2D or Metal for 3D. CoreGraphics is for rendering high-quality still content; it wasn't intended as a game interface running at a high frame rate, unless your game resembles common app UI more than a typical game.

glFramebufferTexture2D performance

I'm doing heavy computation using the GPU, which involves a lot of render-to-texture operations. It's an iterative computation, so there's a lot of rendering to a texture, then rendering that texture to another texture, then rendering the second texture back to the first texture and so on, passing the texture through a shader each time.
My question is: is it better to have a separate FBO for each texture I want to render into, or should I rather have one FBO and bind the target texture using glFramebufferTexture2D each time I want to change render target?
My platform is OpenGL ES 2.0 on the iPhone.
On the iPhone implementation, changing the attachment is inexpensive, assuming the old and new textures have the same dimensions, format, and so on. Otherwise, the driver has to do additional work to reconfigure the framebuffer.
AFAIK, better performance is achieved by using only one FBO, and changing the texture attachments as necessary.
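A minimal ping-pong sketch of that approach (ES 2.0; texA, texB, numPasses, and drawFullScreenQuad() are placeholders, and the two textures are assumed to have identical dimensions and format):

    void run_iterations(GLuint texA, GLuint texB, int numPasses) {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        GLuint src = texA, dst = texB;
        for (int i = 0; i < numPasses; i++) {
            /* Retarget the single FBO; cheap when sizes/formats match. */
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, dst, 0);
            glBindTexture(GL_TEXTURE_2D, src);  /* sample the previous result */
            drawFullScreenQuad();               /* placeholder: runs the iteration shader */
            GLuint tmp = src; src = dst; dst = tmp;  /* swap roles */
        }
    }

Note that a texture must never be attached to the framebuffer and sampled at the same time, which is why the two roles alternate each pass.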
The best way to find out is to benchmark.

how to set/get pixel on a texture in OpenGL ES on iPhone?

I've been googling for what's mentioned in the title, but somehow I can't find it. This should not be that hard, should it?
What I am looking for is a way to gain access to an OpenGL ES texture on iPhone, and a way to get/set pixel with it. What are the OpenGL ES functions I am looking for?
Before OpenGL ES can see your texture, you must already have loaded it into memory, generated a texture name (glGenTextures), and bound it (glBindTexture). Your texture data is just a big array in memory.
Therefore, should you wish to change a single texel, you can modify the data in memory and upload it again. This approach is commonly used for procedural texture generation. There are many resources on the net about it, for instance: http://www.blumtnwerx.com/blog/2009/06/opengl-es-texture-mapping-for-iphone-oolong-powervr/
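For a single texel, a sub-image upload avoids re-sending the whole texture. A sketch (set_texel is a made-up helper; tex names an already-created texture):

    /* Overwrite one RGBA texel at (x, y) in an existing texture. */
    void set_texel(GLuint tex, GLint x, GLint y) {
        GLubyte texel[4] = { 255, 0, 0, 255 };  /* opaque red */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, texel);
    }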
While glReadPixels is available, there are very few situations where you'd need it in an interactive application (screen capture comes to mind). It absolutely destroys performance, and it still won't give you back the original textures; instead it returns a block of the framebuffer.
I have no idea what kind of effect you are looking for. However, if you are targeting a device that supports pixel shaders, perhaps a custom pixel shader can do what you want.
Of course, I am working under the assumption you didn't mean pixel as in screen coordinates.
I don't know about setting an individual pixel, but glReadPixels can read a block of pixels from the framebuffer (http://www.khronos.org/opengles/sdk/docs/man/glReadPixels.xml). Your trouble googling may be because texture pixels are usually called 'texels'.
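A sketch of such a block read (read_block is a made-up helper name):

    #include <stdlib.h>  /* malloc */

    /* Read a w x h block of RGBA pixels from the currently bound
       framebuffer. glReadPixels is synchronous and stalls the GPU
       pipeline, so call it sparingly. */
    GLubyte *read_block(GLint x, GLint y, GLsizei w, GLsizei h) {
        GLubyte *buf = malloc((size_t)w * h * 4);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* tightly packed rows */
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf);
        return buf;  /* caller frees */
    }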

Is it possible to draw multiple, different textures in OpenGL for iPhone with a single draw call?

I have this game with a variety of textures. Whenever I draw a texture, I bind it and draw it.
Is it possible to draw all the elements of my game in just one draw call, e.g. with interleaved arrays? How can I do it?
Will the performance of my game increase by doing this?
I believe you would benefit from using a texture atlas: one giant texture containing all of your smaller ones. For OpenGL ES on the iPhone, Apple recommends placing all of the vertices that you can draw at one time in a vertex buffer object (VBO), and interleaving the vertex, normal, color, and texture information within that buffer, in that order (see the sketch after this answer). Note that the VBO itself doesn't give you a significant performance boost over a standard array on the original iPhones, but the 3GS has hardware VBO support.
Whether grouping the data into a VBO or a plain array, I've seen significant performance improvements in my application when I reduced the number of draw calls.
Another area you might want to look at is reducing the size of your geometry. By going from GLfloat to GLshort for my vertex and normal coordinate data, I saw an over 30% improvement in OpenGL ES rendering speed on the iPhone 3G.
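A sketch of that interleaved layout (ES 1.1 fixed-function style; the Vertex struct and function name are illustrative, and the matching glEnableClientState calls are assumed to have been made):

    #include <stddef.h>  /* offsetof */

    typedef struct {
        GLfloat position[3];
        GLfloat normal[3];
        GLubyte color[4];
        GLshort texcoord[2];
    } Vertex;

    void draw_interleaved(const Vertex *vertices, GLsizei vertexCount) {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * vertexCount,
                     vertices, GL_STATIC_DRAW);

        /* Interleaved attributes: the stride is the struct size and
           each offset selects one field within every vertex. */
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                        (const GLvoid *)offsetof(Vertex, position));
        glNormalPointer(GL_FLOAT, sizeof(Vertex),
                        (const GLvoid *)offsetof(Vertex, normal));
        glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex),
                       (const GLvoid *)offsetof(Vertex, color));
        glTexCoordPointer(2, GL_SHORT, sizeof(Vertex),
                          (const GLvoid *)offsetof(Vertex, texcoord));
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }

With a texture atlas bound, every element that samples from it can go into one such buffer and be drawn with a single glDrawArrays call.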
TBH you are better off doing it with separate draw calls. Performance is unlikely to be noticeably affected.
There "may" be ways to do it by indexing into slices of a volume texture (or texture array), but you'd need the 3GS for that and it wouldn't work on the older iPhones. I also doubt you'd get any noticeable performance improvement from doing it.