Do TextureAtlas sprite indices reflect the order in which they were added? - bevy

I'm using TextureAtlasBuilder to produce a TextureAtlas. I'm adding multiple textures to the atlas using the add_texture method. I'm then using that texture atlas as part of a sprite entity created from a SpriteSheetComponents bundle.
When changing the sprite's index, the resulting texture is not what I'm expecting. I'm assuming that the textures in the texture atlas reflect the order in which they were added while building it. Is that an incorrect assumption?

The order in which you call add_texture does not determine the order in which the textures are stored in the textures Vec. This is because the current implementation (as of bevy 0.3.1) uses a HashMap to store the placements of the rectangles, which does not preserve insertion order.
Since the project is in its infancy, you might consider creating an issue in the project to have this changed.
For the time being, you could combine the sprites into a single sheet outside of Bevy and then load the TextureAtlas directly from that asset, as in the sprite sheet example.

Related

How to find a covered element in an image using Vuforia?

I'm trying to make an augmented reality application about chemistry using Vuforia and Unity3D. I will physically have a big image of the periodic table of elements and some small spherical objects, and I don't know how to determine which element is covered by a sphere when I put it on the periodic table. Does anyone have an idea, or has anyone done this already? I will then associate that chemical element with the sphere.
I think your best bet would be to track not only the position of the printed periodic table as a Vuforia image target, but also the positions of the 'small spherical objects' as Vuforia model targets. Whether or not that works depends on the exact characteristics of those spherical objects and the degree to which they are suitable for tracking as model targets. Otherwise, consider replacing the spheres with alternative objects, possibly with trackable stickers on them.

How can I find, for every pixel on the screen, which object it belongs to?

Each frame, Unity generates an image. I want it to also create an additional array of ints, and every time it decides to write a new color to the generated image, to write the ID of the object at the corresponding position in that int array.
In OpenGL I know this is pretty common and I found a lot of tutorials for this kind of thing; basically, based on the depth map, you decide which ID should be written at each pixel of the helper array. But in Unity I am using a given shader and I didn't find a proper way to do just that. I think there should be some built-in function for this kind of common problem.
My goal is to know, for every pixel on the screen, which object it belongs to.
Thanks.
In forward rendering, if you don't use it for another purpose, you could store the ID in the alpha channel of the back buffer (it would only be valid for opaque objects), giving up to 256 IDs without HDR. In deferred rendering, you could potentially use an unused channel of the g-buffer.
That is if you want to minimize overhead; otherwise you could build a more generic system that re-renders specific objects into a screenspace texture, with a very simple shader that just outputs the ID, into whatever format you need, using command buffers.
You'll want to make a custom shader that renders the default textures and colors to the main camera and renders an ID color to a RenderTexture through another camera.
Here's an example of how it works: Implementing Watering in my Farming Game!
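Below is a rough C# sketch of that two-camera approach, assuming an unlit replacement shader asset (called idShader here) whose only output is a per-renderer _IdColor property; the ObjectIdPicker class, the _IdColor name, and the ID packing scheme are illustrative assumptions, not Unity built-ins.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ObjectIdPicker : MonoBehaviour
{
    public Camera mainCamera;
    public Shader idShader; // assumed: unlit shader that outputs _IdColor

    private Camera idCamera;
    private RenderTexture idTexture;
    private Texture2D readback;
    private readonly Dictionary<int, GameObject> byId = new Dictionary<int, GameObject>();

    void Start()
    {
        idTexture = new RenderTexture(Screen.width, Screen.height, 24);
        readback = new Texture2D(1, 1, TextureFormat.RGBA32, false);

        idCamera = new GameObject("IdCamera").AddComponent<Camera>();
        idCamera.enabled = false; // rendered manually, on demand

        // Give every renderer a unique color that encodes its ID.
        int next = 1; // 0 is reserved for "no object"
        foreach (Renderer r in FindObjectsOfType<Renderer>())
        {
            byId[next] = r.gameObject;
            var block = new MaterialPropertyBlock();
            // Pack 24 bits of the ID into RGB.
            block.SetColor("_IdColor", new Color32(
                (byte)(next & 0xFF), (byte)((next >> 8) & 0xFF),
                (byte)((next >> 16) & 0xFF), 255));
            r.SetPropertyBlock(block);
            next++;
        }
    }

    // Screen coordinates with origin at the bottom left, like Input.mousePosition.
    public GameObject ObjectAt(int x, int y)
    {
        idCamera.CopyFrom(mainCamera);
        idCamera.targetTexture = idTexture;
        idCamera.clearFlags = CameraClearFlags.SolidColor;
        idCamera.backgroundColor = Color.clear; // background decodes to ID 0
        idCamera.RenderWithShader(idShader, ""); // every material is replaced

        // Read a single pixel back to the CPU (slow; fine for occasional picking).
        RenderTexture.active = idTexture;
        readback.ReadPixels(new Rect(x, y, 1, 1), 0, 0);
        readback.Apply();
        RenderTexture.active = null;

        Color32 c = readback.GetPixel(0, 0);
        int id = c.r | (c.g << 8) | (c.b << 16);
        GameObject hit;
        return byId.TryGetValue(id, out hit) ? hit : null;
    }
}
```

Reading the whole RenderTexture back each frame would give the full per-pixel int array the asker describes, but GPU readbacks stall the pipeline, so query as few pixels as you can get away with.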

Why do 3D games need to separate a material into so many textures for a static object?

Perhaps the question is not phrased quite right; the textures are, say, a kind of channel, although I know they will be mixed in the shader in the end.
I know that knowledge of the various textures is very important, but it is also a bit hard to understand completely.
From my understanding:
diffuse - the 'real' color of an object without light involved.
light - for static objects; lighting is baked into a texture beforehand.
specular - the areas that have direct reflection.
ao - how much ambient light each area of an object receives (ambient occlusion).
alpha - to 'shape' the object.
emissive - self-illumination.
normal - per-pixel normal vectors used when computing lighting.
bump - (I don't know the exact difference from a normal map.)
height - stores Z-range values, used to generate terrain, displace vertices, etc.
And the items below should be related to PBR material which I'm not familiar with:
translucency / cavity / metalness / roughness etc...
Please correct me if there are misunderstandings in there.
In any case, my question is: why do we need to separate these textures for a material, rather than just rendering them all together into the diffuse map directly for a static object?
Some examples (especially for PBR) would be appreciated, and thank you very much.
I can bake everything into the diffuse map beforehand and apply it to my mesh, so why do I need to apply so many different textures?
Re-usability:
Most games re-use textures to reduce the size of the game; you can't do that if you combine them. For example, when you have two similar objects but want to randomize their looks (an aging effect), you can make them share the same color (albedo) map but use different ao maps. This becomes important when there are hundreds of objects: you can use different combinations of texture maps on similar objects to create unique objects. If you had combined everything into one map, it would be impossible to share it with other similar objects that should look slightly different.
Customizability:
If you separate them, you'll be able to change how strongly each texture affects the object. For example, there is a slider on the metallic slot of the Standard shader; more of these sliders exist on other map slots, but they only appear once you plug a texture into the slot. You can't do this when you combine the textures into one; see the sketch below.
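As a small illustration, here is a sketch using the Unity Standard shader's documented property names (the MaterialTuning class and the occlusionMap field are illustrative placeholders, not anything built in): the per-slot controls stay tweakable from script precisely because the maps are separate.

```csharp
using UnityEngine;

public class MaterialTuning : MonoBehaviour
{
    public Texture2D occlusionMap; // a separate AO map, assigned in the Inspector

    void Start()
    {
        // renderer.material gives this object its own material instance.
        Material mat = GetComponent<Renderer>().material;

        mat.SetFloat("_Metallic", 0.25f);         // the metallic slider
        mat.SetFloat("_Glossiness", 0.6f);        // the smoothness slider
        mat.SetTexture("_OcclusionMap", occlusionMap);
        mat.SetFloat("_OcclusionStrength", 0.8f); // the slider that appears once a map is plugged in

        // None of these could be tuned independently if AO, metalness and
        // smoothness were already baked into the albedo texture.
    }
}
```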
Shader:
The Standard shader can't do this, so you would have to learn to write shaders: with the Standard shader you can't use one combined image to get the effects you would get from all those texture maps. A custom shader is required, along with a way to read the per-map information out of the combined texture.
This seems like a reasonable place to start:
https://en.wikipedia.org/wiki/Texture_mapping
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3d model formats or material definitions, and assembled into resource bundles.
I would add to this that the shape or polygon doesn't have to belong to a 3D object as one might imagine it. If you render two triangles as a rectangle, you can run all sorts of computations and store the results in a "live" texture.
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
What this detail represents is either some agreed-upon format for representing some property (say, "roughness" within some BRDF model), which you would encounter if you are using some kind of engine, or whatever you decide that detail to be, if you are writing your own engine. You can decide to store whatever you want, however you want it.
You'll notice on the link that different "mapping" techniques are mentioned, each with its own page. Each is the result of some person or group doing research and publishing a paper detailing the technique; other people adopt it, and that's how these techniques find their way into engines.
There is no rule saying these can't be combined.

Duplicating a GameObject the right way in Unity 5

I have a game object named Bolt with, for example, a white texture. When I duplicate it, rename the duplicate to EnemyBolt, and change its texture to orange, the Bolt texture changes to orange too. How can I copy or duplicate a GameObject so that it doesn't share its references?
When you duplicate a GameObject, you also duplicate all the components that are attached to it, so the new object gets its own set of component instances. This happens in the Hierarchy panel, which represents the content of the scene.
The Project panel represents the assets to be used in the different scenes. When you drag a material onto a Renderer, that renderer instance points to a material asset that will be instantiated at runtime.
To optimise the rendering process, if a material is shared among many objects, only one material instance is created; this reduces draw calls.
Now for your solution: you need to duplicate your material and assign the new one to the new object. Then, when you change it on one side, it does not affect the other, since they are using different materials. On the other hand, you have increased the number of draw calls.
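A minimal sketch of that fix from script (the EnemyBoltColor class name is illustrative): Unity's Renderer exposes both behaviours, sharedMaterial for the shared asset and material for a per-object clone.

```csharp
using UnityEngine;

public class EnemyBoltColor : MonoBehaviour
{
    void Start()
    {
        Renderer r = GetComponent<Renderer>();

        // r.sharedMaterial is the material asset itself: changing its color
        // would turn the original Bolt orange as well.
        // r.material clones the material the first time it is accessed, so
        // this change affects only this GameObject, at the cost of an extra
        // material instance (and therefore an extra draw call).
        r.material.color = new Color(1f, 0.5f, 0f); // orange
    }
}
```

Duplicating the material asset in the Project panel and assigning it to EnemyBolt by hand achieves the same thing without any code.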

How can I create larger worlds/levels in Unity without adding lag?

How can I scale up the size of my world/level to include more gameobjects without causing lag for the player?
I am creating an asset for the asset store. It is a random procedural world generator. There is only one major problem: world size.
I can't figure out how to scale up the worlds to have more objects/tiles.
I have generated worlds up to 2000x500 tiles, but it lags very badly.
The maximum-sized world that will not affect the speed of the game is around 500x200 tiles.
I have generated worlds of the same size with smaller blocks: 1/4th the size (it doesn't affect how many tiles you can spawn)
I would like to create a world at least the size of 4200x1200 blocks without lag spikes.
I have looked at object pooling (it doesn't seem like it can help me that much).
I have looked at LoadLevelAsync (I don't really know how to use this, and rumor is that you need Unity Pro, which I do not have).
I have tried setting chunks active or inactive based on player position (this caused more lag than just leaving the blocks alone).
Additional Information:
The terrain is split up into chunks. It is 2D, and I have box colliders on all solid tiles/blocks. Players can dig/place blocks. I am not worried about the amount of time it takes for the level to load initially, but rather about the smoothness of the game while playing it: no lag spikes.
question on Unity Forums
If you're storing each tile as an individual GameObject, don't. Use a texture atlas and 'tile data' to generate the look of each chunk whenever it is dug into or a tile is placed on it.
Also make sure to disable, or potentially even delete, any chunks not within the visible range of the player. Object pooling will help significantly here if you can work out the maximum number of chunks that will ever be needed at once, and just recycle chunks as they go off screen; a sketch of such a pool follows.
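For the pooling half of this, a minimal sketch (ChunkPool and chunkPrefab are illustrative names) might look like:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Recycle chunk GameObjects instead of destroying and re-instantiating them.
public class ChunkPool : MonoBehaviour
{
    public GameObject chunkPrefab;
    private readonly Queue<GameObject> free = new Queue<GameObject>();

    public GameObject Acquire(Vector3 position)
    {
        GameObject chunk = free.Count > 0 ? free.Dequeue()
                                          : Instantiate(chunkPrefab);
        chunk.transform.position = position;
        chunk.SetActive(true); // its mesh still needs a rebuild for the new tiles
        return chunk;
    }

    public void Release(GameObject chunk)
    {
        chunk.SetActive(false); // keep it around for reuse instead of Destroy()
        free.Enqueue(chunk);
    }
}
```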
DETAILS:
There is a lot to talk about for optimal generation, so I'm going to post this link (http://studentgamedev.blogspot.co.uk/2013/08/unity-voxel-tutorial-part-1-generating.html). It shows you how to do it in 3D space, but the principles are essentially the same, if not a little easier, for 2D space. The following is just a rough outline of what might be involved; going down this path will result in huge benefits, but will require a lot of work to get there. I've included all the benefits at the bottom of the answer.
Each tile can be made a simple struct with fields like int id, Vector2 texturePos, and bool visible in its simplest form, as sketched below. You can then store these tiles in a two-dimensional array within each chunk, though to make them even more memory-efficient, you could store each texturePos once elsewhere in the program and write a method to look a texturePos up by id.
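A hedged sketch of that tile representation (type and member names are illustrative; Unity's Vector2 stands in for "vector2d"):

```csharp
using UnityEngine;

// One tile: plain data, no GameObject and no MonoBehaviour overhead.
public struct Tile
{
    public int id;             // which kind of tile this is
    public Vector2 texturePos; // UV offset of this tile type in the atlas
    public bool visible;       // false for empty/air tiles
}

// The memory-saving variant: store each texturePos once, looked up by id.
public static class TileAtlas
{
    private static readonly Vector2[] texturePosById =
    {
        new Vector2(0.00f, 0f), // id 0: e.g. dirt (example values)
        new Vector2(0.25f, 0f), // id 1: e.g. stone
    };

    public static Vector2 GetTexturePos(int id)
    {
        return texturePosById[id];
    }
}
```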
When you make a change to this two-dimensional array, representing either the addition or removal of a tile, you update the chunk, which is the actual GameObject used to represent the tiles. By iterating over the tile data stored in the chunk, you can generate a mesh of vertices based on the position of each tile in the two-dimensional array. If visible is false, simply don't generate any vertices for it.
This mesh alone could be used as a collider, but won't look like anything. You will also need to generate UV coordinates, which happen to be the texturePos. When Unity then displays the mesh, it will display the specific points of the texture atlas defined by the UV coordinates of the mesh. A rough sketch of such a rebuild follows.
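Here is a rough sketch of that rebuild step, reusing the hypothetical Tile struct above (the 4x4 atlas layout, one-unit quad size, and winding are assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class ChunkMesh : MonoBehaviour
{
    public float tileUvSize = 0.25f; // assuming a 4x4 texture atlas

    public void Rebuild(Tile[,] tiles)
    {
        var verts = new List<Vector3>();
        var uvs = new List<Vector2>();
        var tris = new List<int>();

        for (int x = 0; x < tiles.GetLength(0); x++)
        for (int y = 0; y < tiles.GetLength(1); y++)
        {
            if (!tiles[x, y].visible) continue; // no vertices for empty tiles

            // One quad per visible tile, two triangles per quad.
            int v = verts.Count;
            verts.Add(new Vector3(x, y));
            verts.Add(new Vector3(x + 1, y));
            verts.Add(new Vector3(x + 1, y + 1));
            verts.Add(new Vector3(x, y + 1));
            tris.AddRange(new[] { v, v + 2, v + 1, v, v + 3, v + 2 });

            // UVs pick out this tile type's cell of the atlas.
            Vector2 t = tiles[x, y].texturePos;
            uvs.Add(t);
            uvs.Add(t + new Vector2(tileUvSize, 0f));
            uvs.Add(t + new Vector2(tileUvSize, tileUvSize));
            uvs.Add(t + new Vector2(0f, tileUvSize));
        }

        var mesh = new Mesh();
        mesh.SetVertices(verts);
        mesh.SetUVs(0, uvs);
        mesh.SetTriangles(tris, 0);
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```

The same vertex data (or a simplified outline of it) can feed a collider, so colliders also drop from one per tile to one per chunk.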
This has the benefit of significantly fewer GameObjects, better texture batching for Unity, less memory usage, and faster random access for any tile, since tiles carry no MonoBehaviour overhead, plus a genuine plethora of additional benefits.