FPS drops when scaling an object. How to fix? (Unity, Android)

There is an empty scene with one standard cube. If I change its scale to (5, 5, 1), the fps does not drop.
But if I change it to (5, 10, 1), the fps drops to ~30.
If I move the camera away from the cube with scale (5, 10, 1), the fps is back at 60.
Maybe I have the wrong camera settings, or it's something else.
How can I achieve high fps without moving the camera away?
P.S. The fps does not drop in the editor, only after launching on Android.
Unity version 2020.3.18f1. I tried another version; same problem.
[Screenshot: cube with scale (5, 5, 1)]
[Screenshot: cube with scale (5, 10, 1)]
[Screenshot: cube with scale (5, 10, 1), camera moved away]

The problem may be that more pixels are rendered when the object is scaled up (the fragment shader runs for every one of them). The other hint is that when you move the camera away from the object, the frame rate increases because the rendered object covers fewer pixels.
Since you mentioned the program runs on Android, switching from a regular shader to one of Unity's Mobile shaders may improve performance.
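If the fragment stage is the suspect, one quick test is to swap the cube's material over to a cheaper built-in shader at runtime. A minimal sketch, assuming the built-in render pipeline; "Mobile/Diffuse" ships with Unity, but the component name here is made up:

using UnityEngine;

// Hypothetical test component: attach to the cube to swap its material
// to Unity's built-in "Mobile/Diffuse" shader at startup.
public class UseMobileShader : MonoBehaviour
{
    void Start()
    {
        Shader mobile = Shader.Find("Mobile/Diffuse"); // cheap built-in shader
        if (mobile != null)
            GetComponent<Renderer>().material.shader = mobile;
    }
}

If the fps recovers with the cheaper shader, the cost is per-pixel shading (fill rate) rather than the scale itself.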

From Unity's documentation on Transforms:
Performance Issues and Limitations with Non-Uniform Scaling
Non-uniform scaling is when the Scale in a Transform has different values for x, y, and z; for example (2, 4, 2). In contrast, uniform scaling has the same value for x, y, and z; for example (3, 3, 3). Non-uniform scaling can be useful in a few select cases but should be avoided whenever possible.
Non-uniform scaling has a negative impact on rendering performance. In order to transform vertex normals correctly, we transform the mesh on the CPU and create an extra copy of the data. Normally we can keep the mesh shared between instances in graphics memory, but in this case you pay both a CPU and memory cost per instance.
I'm not certain whether your Z scale matters in this case, because you're only rendering the x-y plane. I also can't say for certain why the performance hit shrinks as you increase camera distance; I suspect Unity does some intelligent vertex manipulation to simplify rendering of distant objects, saving you the CPU cost.
That being said, try to avoid non-uniform scaling. Primitives should typically only be used as placeholders.
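If you do need the stretched cube, one workaround is to bake the non-uniform scale into a copy of the mesh once, so the Transform itself stays uniform at render time. A sketch of the general technique, not an official Unity recipe; the component name is made up:

using UnityEngine;

// Hypothetical component: bakes a non-uniform scale into a mesh copy once,
// so the Transform can be reset to a uniform (1, 1, 1) scale.
public class BakeScale : MonoBehaviour
{
    void Start()
    {
        var mf = GetComponent<MeshFilter>();
        Mesh baked = Instantiate(mf.sharedMesh); // copy, so the shared cube mesh stays intact
        Vector3 s = transform.localScale;        // e.g. (5, 10, 1)

        Vector3[] verts = baked.vertices;
        for (int i = 0; i < verts.Length; i++)
            verts[i] = Vector3.Scale(verts[i], s);
        baked.vertices = verts;

        baked.RecalculateNormals();              // normals must match the new shape
        baked.RecalculateBounds();
        mf.mesh = baked;

        transform.localScale = Vector3.one;      // scale is now uniform
    }
}

The renderer then sees a uniformly scaled mesh, so the per-instance CPU transform described in the docs should no longer apply.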

Related

Poor performance when downscaling surfaces

I am using PyCairo to write a chess game; each piece is a 512x512 ImageSurface. The pieces need to scale up and down. When the scale (x or y) is less than 1.0, the app is painfully slow, with only 32 pieces. When the scale is equal to or greater than 1.0, it is blisteringly fast.
I have tried canvas.set_antialias(cairo.ANTIALIAS_NONE), with no good results. I have tried both cr.set_scale() and surface.set_device_scale(), with the same poor performance. Is there any way to scale down faster, possibly with lower quality? That would be acceptable.
I thought of recreating the surfaces every time the chess board is resized, and using a scale of 1.0 in that case. However, that would choke as the user resizes the window.
After you set the surface pattern as a source on the cairo context, call cairo_pattern_set_filter(cairo_get_source(cr), CAIRO_FILTER_FAST); in PyCairo that should correspond to cr.get_source().set_filter(cairo.FILTER_FAST). I think this means that 'nearest' (or a similarly cheap filter) is used as the scaling algorithm.

Scaling Object turns the textures white (Unity3D)

I'm trying to figure out why my object's textures keep turning white once I scale the object down to 1% (or less) of its normal size.
I can manipulate the objects in real time with my fingers, and there is a threshold where all the textures (except a few) turn completely ghost white, as shown below:
https://imgur.com/wMykeFw
Any input on how to fix this is appreciated!
One potential cause of this issue is that certain shaders can miscalculate how to render textures when scale values are set very low.
To render this asset so small while using the same shader, re-import the mesh with a smaller scale factor (in the mesh import settings); that may fix it.
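If you'd rather set that from code than click through the Inspector, an editor script can adjust the same import setting. A sketch; the menu path and asset path are placeholders:

using UnityEditor;

// Editor-only sketch: sets the mesh import scale factor in code.
// "Assets/Models/MyObject.fbx" is a hypothetical path; substitute your own model.
public static class ShrinkImportScale
{
    [MenuItem("Tools/Shrink Import Scale")]
    static void Shrink()
    {
        var importer = (ModelImporter)AssetImporter.GetAtPath("Assets/Models/MyObject.fbx");
        importer.globalScale = 0.01f; // bake the 1% scale into the imported mesh
        importer.SaveAndReimport();
    }
}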
Select the ARCamera, then the Camera. In the Inspector, select the camera's clipping planes and increase the far plane (you want to find the minimum clipping distance that still works, to save on memory, so start at 20000 and work your way backwards until it stops working, then back up a notch).
Next (still in the camera's Inspector), set the Rendering Path to Legacy Vertex Lit.
This should clear it up for you.
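Both settings can also be applied from code if that is easier to experiment with. A sketch, assuming the built-in render pipeline; RenderingPath.VertexLit is Unity's enum value for Legacy Vertex Lit:

using UnityEngine;

// Sketch: applies the far clip plane and rendering path from a script on the camera.
public class CameraFix : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.farClipPlane = 20000f;                   // start high, then tune downwards
        cam.renderingPath = RenderingPath.VertexLit; // Legacy Vertex Lit
    }
}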

Does changing Physics.defaultContactOffset have an important impact on performance?

As usual, the documentation is missing some information that we have to gather somewhere else: Physics.defaultContactOffset.
Physics.defaultContactOffset is used by the collision detection system to predictively enforce the contact constraint.
Unity explains that you should use 1 unit = 1 meter for physics simulation.
I needed a lot of small spheres and cubes: 10 cm wide, thus 0.1 units.
What they don't say is that when you're working at a small scale (I'm using objects 0.1 m = 10 cm wide), you have to change Physics.defaultContactOffset to a smaller value than the default.
Hence my question: is Physics.defaultContactOffset important for the calculations? That is, if I change it to a very small value, does it have a negative impact on performance?
I have to change it from 0.001 to 0.00001 to get an acceptable collision detection system, and I'm worried about a negative impact on performance.
From the Unity3D documentation on Default Contact Offset:
Use this to set the distance the collision detection system uses to generate collision contacts. The value must be positive, and if set too close to zero, it can cause jitter. This is set to 0.01 by default. Colliders only generate collision contacts if their distance is less than the sum of their contact offset values.
So we can assume the physics engine calculates distances between colliders and checks whether each distance counts as a collision. I don't think it matters much for performance, as the calculation is done either way.
With all that said, Unity3D's physics engine doesn't really do well with tiny objects, so it's better to scale the spheres up to 1 unit and scale everything else to compensate. You will most likely run into issues with these tiny colliders otherwise.
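If you do stay at the small scale, the offset can also be set once from a startup script instead of the project settings. A minimal sketch; the value is the asker's, not a recommendation, and (an assumption worth verifying) it may only affect colliders created after it is set, since each collider carries its own Collider.contactOffset:

using UnityEngine;

// Sketch: set the global contact offset at startup to suit ~0.1 unit colliders.
public class PhysicsSetup : MonoBehaviour
{
    void Awake()
    {
        Physics.defaultContactOffset = 0.00001f; // asker's value for 10 cm objects
    }
}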

Speeding up rendering in SceneKit

So, I am using SceneKit to render a collection of parametric surfaces (the sum of which makes up an object). To put these on screen, I am creating custom geometries by sampling the points and creating triangles. Here is a quick overview of how I do it.
Loop through the collection of surfaces
Generate a random color C
For each surface, calculate an N x N grid of points (both positions and normals)
Assign all vertices of that surface the color C
Add groups of 3 vertices from this surface to the face index list
And that seems to work. After I have all this data, I put it into the proper structures (SCNGeometrySource and SCNGeometryElement) and make an SCNGeometry like so:
SCNGeometry(sources: [vertexSource, normalSource, colorSource], elements: [element])
This works and displays my surfaces on screen fine, as one single geometry element. My problem is that I have some really complicated objects, and moving the camera around while looking at them is just really slow. Rendering takes around 500 ms, which makes my frame rate and experience awful.
So the question is: what steps can I take to speed up SceneKit's performance? I did this same project in WebGL using Three.js with the same amount of data and could use an orbiting camera fine, so I can't believe SceneKit couldn't at least compete with that. What features can I tweak or turn off to speed things up? I am using the triangle primitive type, allowsCameraControl = true for the orbiting camera, and Metal for the SCNView.
For those curious, the model I am struggling with generates 231,900 vertices and 347,850 face indices (11.1312 MB of vertex data (positions and normals) and 1.3914 MB of face data (essentially just the index positions of vertices, in order, for the triangles)).
1) If you are "standing" at the center of your generated surface, your problem may be that you are drawing a lot offscreen (no frustum culling). You need to split your surface (a single node) into subsurfaces (child nodes), so that only the nodes visible in the camera's view space are drawn.
That being said, 231,900 vertices is really not much; I draw several million at 60 fps with SceneKit's Metal renderer (about 20% faster than the OpenGL renderer) on OS X.
2) If you are looking at your surfaces from a distance and getting bad performance, check what bytesPerComponent: you are feeding in when creating the SCNGeometrySource. I experienced a big performance drop when using CGFloat (double) instead of plain float on a GeForce GTX (while it was fine on integrated Intel graphics).

I'm having an issue using GLshort to represent vertices and normals

As my project gets close to the optimization stage, I've noticed that reducing vertex metadata could vastly improve 3D rendering performance.
Eventually, I searched around and found the following advice on Stack Overflow:
Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array
How do you represent a normal or texture coordinate using GLshorts?
Advice on speeding up OpenGL ES 1.1 on the iPhone
Simple experiments show that switching from FLOAT to SHORT for vertices and normals isn't tough, but what troubles me is that when you scale the vertices back to their original size (with glScalef), the normals get multiplied by the reciprocal of the scale. The natural remedy is to multiply the normals by the scale before you submit them to the GPU. But then my short normals almost become 0, because the scale factor is usually smaller than 1. Duh!
How do you use shorts for both vertices and normals at the same time? I've been trying this and that for about a full day, but so far I've only managed "float vertices with byte normals" or "short vertices with float normals".
Your help would be truly appreciated.
Can't you just normalise your normals by calling this?
glEnable( GL_NORMALIZE );
It's not ideal, because normalising will likely hit the GPU a bit, but it really depends on whether your bottleneck is passing data to the GPU or the GPU doing too much work. As with any optimisation, you need to figure out which gives the better speed-up. I'd suspect you are slowed down by passing the vertex data, so you WILL get a speed-up.
Possible things to try:
try to counter the scaling by fiddling with the normal matrix
use a vertex shader to perform the calculations as you wish
don't scale your vertex data down, enlarge your camera matrices instead