as3 starling rendertexture vs meshbatch [how to make a choice] - starling-framework

I'm new to Starling and to game development in general. As far as I understand, the two optimised rendering techniques on mobile are "RenderTexture" and "MeshBatch".
- At an architectural level, how should we choose between the two?
- Is it also possible to use both simultaneously? (e.g. drawing a MeshBatch inside a RenderTexture)

Those are two orthogonal concepts; you can use both simultaneously in your projects. Furthermore, this applies to any platform, not just mobile.
You don't have to make a choice; use both.
Render textures
This is not a direct optimization; it is an object used to implement particular effects or rendering techniques.
However, it can be used to optimize away some draw calls if you draw a complex object to the render texture once and then draw that texture directly to the screen in subsequent frames.
Starling implements this optimization for filters.
Mesh batching
This is an actual optimization technique. Draw call overhead can be high, so combining several meshes into one can give some performance benefit if implemented correctly.
Starling does this automatically for its display objects.

Related

New UI Builder/Toolkit & VR World Space

Is it possible to use the new UI Builder and UI Toolkit for world-space UI in virtual reality?
I have seen ways of doing it with render textures, but not only does it not seem to be the same as a world-space Canvas (which I did expect, but it's not even close), I also can't find a way of interacting with it using the VR raycast method.
This isn't officially supported yet but is certainly possible for those willing to implement it themselves.
As the OP mentioned, rendering the UI is straightforward enough: the main idea is to set panelSettings.targetTexture to some RenderTexture, which you can then apply to a quad like any other texture.
Note that if you have multiple UIs you will need multiple instances of PanelSettings.
For raycasting there is a method, panelSettings.SetScreenToPanelSpaceFunction, which can be used to translate 2D coordinates into panel coordinates; here is an Official Unity Sample demonstrating how it can be implemented in camera space. The function is called every update, so it can be hijacked to use a raycast from a controller instead of screen coordinates, although I've had mixed results with this approach.
Check out this repo for an example implementation in XR; it is a more sophisticated solution that makes extensive use of the input system.

Unity XR Single Pass Instanced rendering and UI

I was wondering if anyone has recommendations regarding the use of the Unity Canvas-based UI system (UGUI) along with the Single Pass Instanced rendering mode for XR applications (?)
My concerns are whether the UI elements will render as Single Pass Instanced or if they are actually just rendered twice - potentially causing performance issues.
As far as I can see from the default UI shader (Unity 2019.4.21 built-in shaders for the built-in render pipeline), it doesn't appear to support GPU Instancing (correct me if I am wrong). I can of course create my own shader with support for GPU Instancing in accordance with the guidelines here, but I don't know whether the UI rendering system will actually respect that; I suspect there might be a reason why it is not implemented in the default UI shader...
And if the UI rendering does indeed not support GPU Instancing, does it then have some other optimized way of rendering that makes up for the lack of GPU Instancing?
I am sorry for these slightly fuzzy questions. I am just trying to figure out which path to take with my project - whether to go the UI (UGUI) way or not.
Best regards, Jakob
I am migrating a big VR project to Unity 2021 and Single Pass Instanced.
I have had no issues with UGUI. I had some issues with some of our shaders and with third-party shaders; in those cases the rendering was visible in only one eye or differed in some way between the two eyes.
I did not check the draw calls specifically for UGUI, but for me, if it looks the same in both eyes, it is rendered once.
I have both screen-space and world-space GUI.
Alex

Advantages of using Core Graphics

I would like to know what kind of advantages I get from using Core Graphics instead of OpenGL ES. My main question is based on these points:
Creating simple View animations.
Creating some visually appealing objects (graphics like Core Plot for instance, animated objects, etc.).
Time consuming (both learning and implementing)
Simple 2D Games
Complex 2D Games
3D Games
Code maintenance and also cleaner code.
Easier integration with other UI elements.
Thanks.
First, I want to clear up a little terminology here. When people talk about Core Graphics, they generally are referring to Quartz 2D drawing, which is a 2-D vector-based drawing API. It is used to draw out vector elements either to the screen or to offscreen contexts like PDFs. Core Animation is responsible for animation, layout, and some limited 3-D effects involving rectangular layers and UI elements. OpenGL ES is a lower-level API for talking with the graphics hardware on iOS devices for both 2-D and 3-D drawing.
You're asking a lot in your question, and the judgment on what's best in each scenario is subjective and completely up to the developer and their particular needs. I can, however, provide a few general tips.
In general, a recommendation you'll see in Apple's documentation and in presentations by engineers is that you're best off using the highest level of abstraction that solves your particular problem.
If you need to just draw a 2-D user interface, the first thing you should try is to implement this using Apple's provided UIKit elements. If they don't have the capability you need, make custom UIViews. If you are designing Mac-iOS cross-platform code (like in the Core Plot framework), you might drop down to using custom Core Animation CALayers. Each step down in this process requires you to write more code to handle things that the level above did for you.
You can do a surprising amount of stuff with Core Animation, with pretty good performance. This isn't just limited to 2-D animations, but can extend into some simple 3-D work as well.
OpenGL ES is underneath the drawing of everything you see on the screen for an iOS device, although this is not exposed to you. As such, it provides the least abstraction for onscreen rendering, and requires you to write the most code to get something done. However, it can be necessary in situations where you want to extract the most performance from 2-D display (say, in an action game) or to render true 3-D objects and environments.
Again, I tend to recommend that people start at the highest level of abstraction when writing an application, and only drop down when they find that they cannot do something or the performance is not within the specification they are trying to hit. Fewer lines of code makes applications easier to write, debug, and maintain.
That said, there are some nice frameworks that have developed around abstracting away OpenGL ES, such as cocos2D and Unity 3D, which might make working with OpenGL ES easier in many situations. For each case, you'll need to evaluate what makes sense for the particular needs of your application.
Basically, use OpenGL if you are making a game. Otherwise, use CoreGraphics. CoreGraphics lets you do simple things embedded in your normal UI code.
- Creating simple View animations -> CG
- Creating some visually appealing objects (graphics like Core Plot, animated objects, etc.) -> CG
- Time consuming (both learning and implementing) -> OpenGL and CG are both kind of tough at first.
- Simple 2D games -> OpenGL
- Complex 2D games -> OpenGL
- 3D games -> OpenGL
- Code maintenance and also cleaner code -> Irrelevant
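To give a feel for what "simple things embedded in your normal UI code" can look like, here is a minimal sketch in Swift (the era of this answer was Objective-C, and the GaugeView class is made up purely for illustration): a custom view that draws itself with Core Graphics and can be animated with an ordinary UIView animation.

```swift
import UIKit

// Hypothetical sketch: a custom view that does its drawing with Core Graphics
// and sits in normal UIKit code, no OpenGL involved.
final class GaugeView: UIView {
    var progress: CGFloat = 0.5 { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        // Background track
        ctx.setStrokeColor(UIColor.lightGray.cgColor)
        ctx.setLineWidth(6)
        ctx.addEllipse(in: bounds.insetBy(dx: 10, dy: 10))
        ctx.strokePath()
        // Progress arc
        ctx.setStrokeColor(UIColor.systemBlue.cgColor)
        ctx.addArc(center: CGPoint(x: bounds.midX, y: bounds.midY),
                   radius: bounds.width / 2 - 10,
                   startAngle: -.pi / 2,
                   endAngle: -.pi / 2 + 2 * .pi * progress,
                   clockwise: false)
        ctx.strokePath()
    }
}

// A "simple view animation" in the same everyday UIKit code, e.g.:
// UIView.animate(withDuration: 0.3) { gauge.alpha = 1.0 }
```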

Using vector graphics in iPhone games

I'm a Flash/AS3 developer and I'm wondering how some iPhone developers use vector assets in their games.
For example, "Lil' Pirates": this game looks vector-based and it zooms in and out smoothly, but I can't find any information about using vector assets on iOS.
Quartz 2D is a pretty lightweight framework for vector based graphics. It's very well documented...
Quartz Documentation
In particular, I'd pay close attention to layering and performance...
Quartz Layering and Performance
If performance is a worry, I'd also have a read through the Core Animation documentation. Core Animation uses CALayers to cache vectors drawn with Quartz as in-memory bitmaps. These CALayers can then be transformed and translated through the animation APIs. If you intend to perform a lot of drawing, this is the route I would recommend.
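As a rough illustration of that pattern, here is a hedged Swift sketch (someView, BadgeLayer and the specific values are placeholders of mine, not anything from the question): the vectors are drawn with Quartz once in the layer's draw(in:), Core Animation keeps the result in the layer's backing store, and the animation APIs then transform that cached bitmap rather than redrawing the path.

```swift
import UIKit

// Hypothetical sketch: Quartz drawing happens once in draw(in:); Core Animation
// caches the result in the layer's backing store and then transforms that bitmap.
final class BadgeLayer: CALayer {
    override func draw(in ctx: CGContext) {
        ctx.setFillColor(UIColor.systemTeal.cgColor)
        ctx.addEllipse(in: bounds.insetBy(dx: 4, dy: 4))
        ctx.fillPath()
    }
}

let badge = BadgeLayer()
badge.frame = CGRect(x: 40, y: 40, width: 120, height: 120)
badge.contentsScale = UIScreen.main.scale
badge.setNeedsDisplay()                      // triggers one Quartz draw into the cache
someView.layer.addSublayer(badge)            // `someView` is whatever view hosts the art

// Zooming happens through the animation APIs; the cached bitmap is transformed,
// not redrawn.
let zoom = CABasicAnimation(keyPath: "transform.scale")
zoom.fromValue = 1.0
zoom.toValue = 2.0
zoom.duration = 0.4
badge.add(zoom, forKey: "zoom")
badge.setAffineTransform(CGAffineTransform(scaleX: 2, y: 2))   // keep the final state
```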

Skybox OpenGL ES iPhone and iPad

I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when inside a historic building or when visiting an archaeological dig. No need for a Doom-like subjective point of view: just a skybox. The application will have a list of points of interest (POIs). Every POI will have its own skybox.
I thought that I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. Textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have only moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to ensure two-axis rotation (first person shooter or Google StreetView style) of the scene according to member variables and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters so as to supply only one level of detail for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious, but it's all just reading the docs to figure out how each type of thing relates to the next; none of it is really confusing. Just get into your head that you need a data provider for the PNG data; you can create an image from that data provider, then create a bitmap context with memory that you've allocated yourself, draw the image into the context, and then release everything except the memory you allocated yourself to be left with the result. That result can be uploaded directly to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
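To make that flow concrete, here is a hedged Swift sketch of the same steps (the original project is Objective-C; the function name and the PNG-in-the-bundle assumption are mine): data provider, CGImage, a bitmap context backed by memory you allocate yourself, and the raw bytes left over for OpenGL.

```swift
import UIKit

// Hypothetical sketch of the flow described above: data provider -> CGImage ->
// self-allocated bitmap context -> raw RGBA bytes that OpenGL can accept.
// The caller owns (and must eventually deallocate) the returned buffer.
func loadTexturePixels(named name: String) -> (pixels: UnsafeMutableRawPointer, width: Int, height: Int)? {
    guard
        let url = Bundle.main.url(forResource: name, withExtension: "png"),
        let provider = CGDataProvider(url: url as CFURL),
        let image = CGImage(pngDataProviderSource: provider, decode: nil,
                            shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }

    let width = image.width, height = image.height
    // Memory we allocate ourselves; this is what survives after everything else goes away.
    let pixels = UnsafeMutableRawPointer.allocate(byteCount: width * height * 4, alignment: 4)
    guard let ctx = CGContext(data: pixels, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: width * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { pixels.deallocate(); return nil }

    // Draw the PNG into our context; the CGImage and the context can then be released.
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    return (pixels, width, height)
}

// The bytes can then be handed straight to OpenGL, along the lines of:
// glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(w), GLsizei(h),
//              0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
```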
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than managing an intuitive leap.
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
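As a rough idea of what those three arrays look like, here is a sketch of a single face in Swift (the values and layout are illustrative, not the author's actual numbers); the real project repeats the pattern for all six faces.

```swift
// Hypothetical sketch of the three arrays for one cube face (+Z, facing the viewer);
// repeat the pattern for the other five faces to reach the 24 vertices described above.
let vertices: [Float] = [      // x, y, z per vertex
    -1.0, -1.0,  1.0,          // 0: bottom-left
     1.0, -1.0,  1.0,          // 1: bottom-right
     1.0,  1.0,  1.0,          // 2: top-right
    -1.0,  1.0,  1.0,          // 3: top-left
]
let texCoords: [Float] = [     // u, v for the matching vertex
    0.0, 1.0,
    1.0, 1.0,
    1.0, 0.0,
    0.0, 0.0,
]
let indices: [UInt16] = [      // two triangles per face
    0, 1, 2,
    0, 2, 3,
]
```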
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
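For flavour, here is a hedged Swift sketch of that touch bookkeeping (the class and property names are made up, and the inertia is only hinted at in a comment): accumulate yaw and pitch from the touch deltas and let the renderer apply them with its rotation calls.

```swift
import UIKit

// Hypothetical sketch: accumulate yaw/pitch from touch deltas; the renderer applies
// them each frame (e.g. with two glRotatef calls on the MODELVIEW stack).
final class SkyboxViewController: UIViewController {
    private var yaw: Float = 0                  // rotation about the vertical axis
    private var pitch: Float = 0                // rotation about the horizontal axis
    private let degreesPerPoint: Float = 0.5    // assumed touch sensitivity

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let current = touch.location(in: view)
        let previous = touch.previousLocation(in: view)
        yaw   += Float(current.x - previous.x) * degreesPerPoint
        pitch += Float(current.y - previous.y) * degreesPerPoint
        pitch = max(-90, min(90, pitch))        // keep the camera from flipping over
        // On touch end, feed the last delta into a decaying velocity for inertial rotation.
    }
}
```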
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has a skybox (and much more) out of the box, and it should be pretty straightforward to do.
I think you can do this; here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems:
(I would post direct links, but Stack Overflow won't let me.)
Look at Stack Overflow questions no. 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.