Is there a double floating point data type in Godot shaders?

I'm trying to use shaders in Godot and I need a really precise calculation (more than float). Is it possible to have a double in Godot shaders? I searched the documentation but found nothing...
Edit: I've made a Mandelbrot set explorer, and with floats the image gets all pixelated after some zooming because the precision limit is reached. I think that with doubles I would be able to zoom further without losing quality. You can check out my code here, by the way.

Godot doesn't support 64-bit floats (FP64) in shaders, since OpenGL 3.3 and OpenGL ES 3.0 don't mandate GPU support for them. Double precision on the GPU is expensive and often heavily throttled on consumer GPUs anyway, making it a poor fit for most real-time applications (especially games that need to run at high framerates).
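As a side note, a workaround people commonly reach for in exactly this Mandelbrot situation is to emulate extra precision inside the shader with a pair of floats ("double-single" or "double-float" arithmetic). This is not something Godot provides; the core error-free addition is sketched below in plain C rather than shader code, just to show the idea:

#include <stdio.h>

/* A higher-precision value stored as an unevaluated sum hi + lo. */
typedef struct { float hi, lo; } df;

/* Knuth's error-free two-sum: the pair (s, e) equals a + b exactly.
   Requires the compiler not to reorder float maths (no -ffast-math). */
static df two_sum(float a, float b) {
    float s = a + b;
    float v = s - a;
    float e = (a - (s - v)) + (b - v);
    df r = { s, e };
    return r;
}

/* Add two double-single numbers. */
static df df_add(df a, df b) {
    df s = two_sum(a.hi, b.hi);
    s.lo += a.lo + b.lo;
    return two_sum(s.hi, s.lo);
}

int main(void) {
    df x = { 1.0f, 1e-9f };   /* 1 + 1e-9 cannot be stored in one float */
    df y = { 1.0f, 0.0f };
    df z = df_add(x, y);
    printf("hi = %.9g, lo = %.12g\n", z.hi, z.lo);
    return 0;
}

The same few operations translate almost line for line into shader code, at the cost of several extra instructions per arithmetic operation.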

Related

Best Practice to Increase Frames Per Second for my Daydream VR application [duplicate]

I have been building a game for VR using Unity3D. It has only low-poly models and the file size is less than 40 MB, yet the game still lags when played on mobile. Please suggest how to improve the performance.
Thank you in advance.
To improve performance in VR on mobile you have to optimize everything as well as you can. Keep some of these variables in mind:
Graphics Side
- Number of polygons in the scene
- How many light sources you have
Programming Side
- How much work your code is doing, and whether it is doing it efficiently
The programming side can include problems within the physics system, as well as logic problems that drag down overall performance through extra computation.
My advice is to learn about the Profiler that Unity offers: with it you can observe how much work your code is doing and exactly where your bottleneck is. This video can also be useful.
Of course, another part of the solution is to implement your code following design standards, like design patterns and a sound software architecture (depending on the size of the project).
I hope this is useful for you!
What I found from developing and launching a VR game includes the issues below.
Polygon count is usually the first thing to check, even though your models are low poly. For example, I looked at Synty models in the Unity Asset Store and some of them were over 1k triangles for a bag and 7k for a character model. That seriously reduces the number of objects you can show if you want to target a maximum of around 50,000 polygons per eye.
With some models you can use Blender and the Decimate tool to reduce the polygon count pretty easily. From there I would use LODs to reduce the count further based on distance.
- Use occlusion culling (Pro version only)
- Set your camera's far clip distance to maybe 100 instead of the default
- Use mobile shaders and be careful with some of the Standard shaders, as they are expensive. Transparent shaders also become expensive because they cause overdraw.
- Batch your textures and mark objects as static if possible
- Don't use dynamic shadows on your lights; bake your lighting instead
- Try to avoid physics, as it gets expensive; instead, raycast to trigger events or shoot weapons
- Run the Profiler often and check for any bottlenecks (Pro version only)
- Reduce the number of particle effects and their emission values
- Character bones can also cause issues, so remove as many as possible
- There is also your code to look at, as mentioned by Manujamming
- Set the quality setting to Low to get the best performance
Could you provide a screenshot of your game scene?
I hope this makes sense.
Best of luck!

Which is the better way for rendering & performance?

I'm making maps for my game. I designed a forest map using many tree and stone images (inserting each image into the Unity scene and arranging it). My game runs well on Android but cannot run on iOS (iPhone 4S); it hits a memory problem. I want to ask everyone:
If I design the forest as a single image in Photoshop instead of assembling it in Unity, is that better than my current approach?
Thanks all
It depends on what you are trying to do.
But in any case, there is a common way to approach memory-usage problems: fire up the Profiler!
http://docs.unity3d.com/Manual/Profiler.html
There you can see what is eating your precious memory, and then decide whether it's best to combine everything into one image or keep the images separate. You may also find that the memory problem is related to other assets you use.
You need to look at Instruments in Xcode, and you can also use the Profiler that comes with Unity. Another way to save memory is to reduce the graphics strain on the screen: check your draw calls, verts and tris under the Stats window. Keep draw calls under 50-60, and keep verts and tris down. Look at OpenGL graphics benchmarks; the iPhone 4S is an older device and, depending on your Android phone, it may be substantially slower. The iPhone 4S has 512 MB of RAM, I think, which should be enough to handle a pretty big memory load. Check your OnGUI() objects and calls; you want to mitigate those as much as possible. Also try to use culling to your advantage! Fog and camera filters take a substantial load as well. Stay away from literal types too, if you can.
Also use the Vertex Lit rendering path instead of the Forward rendering path, and use the "auto best performance" resolution setting. That will make everything render at 0.75x resolution instead of the full Retina 960x640 (or whatever it is for the 4S). You can also tweak the resolution in Xcode; depending on the size of your controls, you could make it 0.6.
Under DeviceSettings.mm in your Xcode project:
case deviceiPhone4S: resMult = 0.6f; break;
or in your MonoDevelop UnityScript (depending on orientation):
Screen.SetResolution (Screen.width * 0.6f, Screen.height * 0.6f, true);

Why is the texture size changed?

I've made a small PNG (100x30) and added it as a texture in Unity. I've also referenced it from a script so I can poll it, but if I print out its width and height it now reports 128x32. What happened?
I've tried adjusting the camera, but that does not seem to have anything to do with it. Any clue?
Generally textures are scaled up to a power of two on both sides for performance reasons.
For example, multiplication and division are faster if the GPU can assume that you are working with power-of-two (POT) textures. Even the mipmap generation process relies on it, because halving the dimensions never produces a remainder.
Older GPUs (and probably even several mobile GPUs these days) strictly require POT textures.
Unity by default will try to scale the texture to a power of two. You can disable or tweak that behaviour by switching the texture import settings to "Advanced".
For more details, check out the docs.
Note that you generally want NPOT textures only for GUI, where you need exact control over the on-screen resolution and mipmaps aren't used. For everything in the 3D scene, power-of-two textures perform better.
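As a quick illustration of that rounding (a standalone C sketch, not Unity code), this is all the importer is effectively doing to the dimensions:

#include <stdio.h>

/* Round a dimension up to the next power of two, which is what the
   default import does to the 100x30 texture from the question. */
static unsigned next_pow2(unsigned v) {
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

int main(void) {
    printf("%ux%u -> %ux%u\n", 100u, 30u, next_pow2(100), next_pow2(30));
    /* prints: 100x30 -> 128x32 */
    return 0;
}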

OpenGL-ES 2.0 VS OpenGL-ES 1.1, which is faster?

I've written an app using OpenGL ES 1.1, but am wondering if there are speed gains to be found by switching to 2.0. Has anyone done any tests with large polygon-count models? I only want to render triangles that have different colors, nothing fancy. However, I want to render about 1 million triangles for my comparison test.
OpenGL ES 1.1 and 2.0 provide two very different ways of doing 3-D graphics, so I don't know that direct performance comparisons make much sense. You're probably going to see identical performance using both if you create 2.0 shaders that just simulate OpenGL ES 1.1's fixed function pipeline. This is backed by Apple's documentation on the PowerVR SGX, which says:
The graphics driver for the PowerVR SGX also implements OpenGL ES 1.1
by efficiently implementing the fixed-function pipeline using shaders.
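To make "implementing the fixed-function pipeline using shaders" concrete, a 2.0 pair that matches 1.1 for flat-coloured triangles is essentially a pass-through. This is a sketch; the attribute and uniform names are mine, not anything from Apple's driver:

/* GLSL ES source held in C string constants, as it typically is in an
   iOS project: transform the position, pass the colour through. */
static const char *kVertexShader =
    "attribute vec4 position;\n"
    "attribute vec4 color;\n"
    "uniform mat4 modelViewProjection;\n"
    "varying vec4 vColor;\n"
    "void main() {\n"
    "    vColor = color;\n"
    "    gl_Position = modelViewProjection * position;\n"
    "}\n";

static const char *kFragmentShader =
    "varying lowp vec4 vColor;\n"
    "void main() {\n"
    "    gl_FragColor = vColor;\n"
    "}\n";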
For rendering basic, flat-colored triangles, I'd suggest going with OpenGL ES 1.1 simply because you'll need to write a lot less code. If you are able to get by with the built-in functionality in 1.1, it's usually easier to target that version. You also have a slightly larger market by being able to target the (now) minority of iOS device owners with hardware that doesn't support 2.0.
However, OpenGL ES 2.0 lets you do a whole lot more using its vertex and fragment shaders than 1.1 does, so some of the things that you might do with extensive geometry can instead be handled by shaders. This can make for better-looking, faster effects.
For example, I'm finishing an update to my molecular renderer using 2.0 shaders that will significantly increase resolution of the visualized structures. I'm doing this by using custom shaders that generate raytraced impostors for the spheres and cylinders in these structures. These objects look perfectly round and smooth at any magnification. Doing this in OpenGL ES 1.1 with pure geometry would be all but impossible, because the number of triangles required would be ridiculous (also, billboards wouldn't work well for my cylinders, and the intersection of these shapes wouldn't be handled right in that case).
A million triangles might be a bit much for these devices. In my benchmarks, the old iPhone 3G did around 500,000 triangles per second, and the first-generation iPad about 2,000,000. I haven't fully benchmarked the much faster iPad 2, but my early tests show it at about 8,000,000-10,000,000 triangles per second. Even on the fastest device out there, you're only going to get roughly 10 FPS on a million-triangle scene. Odds are you don't need that much geometry, so I'd do what I could to reduce it first.
The performance gains in ES 2.0 don't come from rendering a single VBO faster, but from:
1) performance tweaks in custom shaders that do only the bare minimum required, rather than the more general fixed-function work
2) rendering lots of objects more cheaply, thanks to the streamlined matrix pipeline, the removal of the matrix stack and of the fixed-function state machine (which has to figure out new internal shaders on state changes), and the removal of the need for multipass rendering for some effects.
This allows, for example, the CPU to do all dynamic matrix transformations in a separate thread while ignoring static matrices, avoiding unneeded transfers between the CPU and GPU. There's no need to continually rebuild camera matrices between 2D and 3D state changes in the shader version.

Skybox OpenGL ES iPhone and iPad

I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when inside a historic building or when visiting an archaeological dig. No need for a Doom-like first-person point of view: just a skybox. The application will have a list of points of interest (POIs), and every POI will have its own skybox.
I thought I could use OpenGL ES to create textured skyboxes that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
- the CoreGraphics means for getting pixel data from a PNG
- how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio (the matrix calls are sketched in code just after this list)
- how to manipulate the MODELVIEW stack to get two-axis rotation (first-person shooter or Google Street View style) of the scene according to member variables, and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane
- how to specify vertex locations and texture coordinates to OpenGL
- how to specify the triangles OpenGL should construct between vertices
- how to set the OpenGL texture parameters so as to supply only one level of detail for the texture
- how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
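For the two matrix-stack items, the ES 1.1 calls involved look roughly like the following. This is a sketch under my own naming, not the code from the project; angleX and angleY stand in for the touch-driven member variables:

#include <OpenGLES/ES1/gl.h>
#include <math.h>

void apply_camera(float aspect, float angleX, float angleY)
{
    /* PROJECTION: perspective projection with the view's aspect ratio. */
    const float nearZ = 0.1f, farZ = 10.0f;
    const float fovY  = 60.0f * 3.14159265f / 180.0f;
    const float top   = nearZ * tanf(fovY * 0.5f);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-top * aspect, top * aspect, -top, top, nearZ, farZ);

    /* MODELVIEW: two-axis, first-person-style rotation of the cube
       around the viewer, driven by the touch-tracked member variables. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(angleX, 1.0f, 0.0f, 0.0f);
    glRotatef(angleY, 0.0f, 1.0f, 0.0f);
}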
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
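Condensed into one hypothetical helper (not the actual class from the project, and with error handling omitted), that PNG-to-texture path looks roughly like this:

#include <CoreGraphics/CoreGraphics.h>
#include <OpenGLES/ES1/gl.h>
#include <stdlib.h>

GLuint texture_from_png(const char *path, size_t width, size_t height)
{
    /* Data provider -> image -> bitmap context backed by our own memory. */
    CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                        kCGRenderingIntentDefault);

    void *pixels = calloc(width * height, 4);          /* RGBA, 8 bits per channel */
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                 width * 4, colourSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    /* Upload straight to OpenGL; only one level of detail is supplied,
       so the min filter must not expect mipmaps. */
    GLuint name;
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Release everything; the pixel memory can go once the upload is done. */
    CGContextRelease(context);
    CGColorSpaceRelease(colourSpace);
    CGImageRelease(image);
    CGDataProviderRelease(provider);
    free(pixels);

    return name;
}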
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. It's routine stuff that's more about knowing the right functions than making an intuitive leap.
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
- the vertex positions
- the texture coordinates that go with each vertex location
- a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
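For a sense of what those arrays look like, here is one face as a sketch (my own illustrative values, not the arrays from the project): four vertices shared by two triangles through the index list, with the other five faces following the same pattern:

#include <OpenGLES/ES1/gl.h>

static const GLfloat frontVertices[] = {
    -1.0f, -1.0f, -1.0f,    /* bottom left  */
     1.0f, -1.0f, -1.0f,    /* bottom right */
     1.0f,  1.0f, -1.0f,    /* top right    */
    -1.0f,  1.0f, -1.0f,    /* top left     */
};

static const GLfloat frontTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,
};

/* Two triangles per face. */
static const GLubyte frontIndices[] = { 0, 1, 2,   0, 2, 3 };

void draw_front_face(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, frontVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, frontTexCoords);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, frontIndices);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}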
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
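As a rough illustration of how simple that is (again a sketch with made-up constants, not the project's code), the inertial part of the touch tracking boils down to a per-frame update like:

#include <math.h>

/* While a finger is down, the touch handler sets "velocity" from the
   drag delta; after it lifts, the angle keeps integrating while the
   velocity decays, giving the usual iPhone inertial feel. */
void update_inertia(float *angle, float *velocity, float dt)
{
    *angle    += *velocity * dt;      /* integrate rotation (degrees)      */
    *velocity *= powf(0.05f, dt);     /* lose ~95% of the speed per second */
}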
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skybox support (and much more) out of the box, and it should be pretty straightforward to do.
I think you can do this, here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems (I would post direct links but Stack Overflow won't let me): look at Stack Overflow questions 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials at nehe.gamedev.net. They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.