How to avoid creating a new Texture and TextureRegion for every sprite in AndEngine

I am developing a game using AndEngine. For every sprite I need to create a Texture and a TextureRegion:
Texture bgTexture;
TextureRegion bgTextureRegion;
this.bgTexture = new Texture(512, 1024, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
this.bgTextureRegion = TextureRegionFactory.createFromAsset(
        this.bgTexture, this, "gfx/bg.png", 0, 0);
Sprite bgSprite = new Sprite(0, 0, this.bgTextureRegion);
If I use a scoreSprite, lifeSprite, etc., I need to create a Texture and TextureRegion again and again for each sprite.
This increases the loading time of the game. Is there any solution to fix this?

In fact, you need an atlas and a texture region for every sprite type, not for every single sprite instance.
To avoid too many atlas instances and too much loading from disk, you can use tiled PNGs for your graphics (many images packed onto one larger PNG, up to 1024x1024 per PNG). Then you can get the texture region for a sprite type with:
TextureRegion textureRegion = new TextureRegion(atlas, tiledX, tiledY, TILED_WIDTH, TILED_HEIGHT);
Other hints and tricks are mentioned here:
http://www.matim-dev.com/tips-and-tricks---how-to-improve-performance.html
especially pooling, SpriteBatch and SpriteGroup, culling, and avoiding a theme background.

Related

How can I batch draw meshes in my Unity scene?

I am using Unity3D to show a dock scene
where the containers are updated based on real-time messages. I am optimizing the draw calls for this scene and found that the containers are drawn one by one, each with its own draw mesh call. What I do in the code is load a container from a prefab and set its color:
instance = (GameObject)Instantiate(Resources.Load("Prefabs/box1"));
Material material = instance.GetComponent<MeshRenderer>().material;
material.color = new Color(r / 255f, g / 255f, b / 255f, 1f);
Then the GameObjects are added to the scene one by one. Is there any way to batch the GameObjects and draw them in a single call?
UPDATE:
I made some changes to my container prefab to enable GPU Instancing. The draw calls went down from 6k to 2k thanks to dynamic batching, but this causes another problem: all of the containers end up the same color, since I use
gameObject.GetComponent<MeshRenderer>().sharedMaterial.color = ContainerColor
to set the container color. Is there any way to solve this?
You can also use Graphics.DrawMeshInstanced, which draws meshes with GPU instancing. However, you can only draw a maximum of 1023 instances per call, so you need to split the draws into custom batches.
https://docs.unity3d.com/2018.1/Documentation/ScriptReference/Graphics.DrawMeshInstanced.html
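Not from the original answer, but here is a minimal sketch of the "custom batches" idea: split the instance transforms into groups of at most 1023 and issue one DrawMeshInstanced call per group. The component name, mesh, material, and matrix list are hypothetical placeholders, and the material is assumed to have "Enable GPU Instancing" checked.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical component: draws all container instances in batches of at most 1023,
// which is the per-call limit of Graphics.DrawMeshInstanced.
public class ContainerInstancedRenderer : MonoBehaviour
{
    public Mesh containerMesh;          // the container mesh (assumed)
    public Material instancedMaterial;  // material with "Enable GPU Instancing" checked (assumed)
    public List<Matrix4x4> instanceMatrices = new List<Matrix4x4>(); // one transform per container

    const int MaxInstancesPerCall = 1023;
    readonly Matrix4x4[] batch = new Matrix4x4[MaxInstancesPerCall];

    void Update()
    {
        // DrawMeshInstanced only draws for the current frame, so issue the calls every frame.
        for (int start = 0; start < instanceMatrices.Count; start += MaxInstancesPerCall)
        {
            int count = Mathf.Min(MaxInstancesPerCall, instanceMatrices.Count - start);
            instanceMatrices.CopyTo(start, batch, 0, count);
            Graphics.DrawMeshInstanced(containerMesh, 0, instancedMaterial, batch, count);
        }
    }
}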
It depends on what you need to do with the containers. If they are static, you should look at StaticBatchingUtility.Combine (https://docs.unity3d.com/ScriptReference/StaticBatchingUtility.Combine.html).
However, if you need them to be dynamic, you should use one material for all your meshes and change only the color in the shader; this will reduce the number of SetPass calls.
You used .material, which creates a new instance of the material, so don't do that.
Use .sharedMaterial instead. To have different colors, create one material per color and cache it.
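As a rough sketch of that last suggestion (not from the original answer; the class and field names are hypothetical): keep one material per color in a cache and assign those materials via sharedMaterial, so containers with the same color can still batch.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: one shared material per color, created once and reused,
// instead of letting renderer.material clone a material for every container.
public static class ColorMaterialCache
{
    static readonly Dictionary<Color, Material> cache = new Dictionary<Color, Material>();

    public static Material Get(Material baseMaterial, Color color)
    {
        Material mat;
        if (!cache.TryGetValue(color, out mat))
        {
            mat = new Material(baseMaterial) { color = color }; // created only once per color
            cache.Add(color, mat);
        }
        return mat;
    }
}

// Usage sketch: assign via sharedMaterial so no per-object material copy is created.
// instance.GetComponent<MeshRenderer>().sharedMaterial =
//     ColorMaterialCache.Get(baseContainerMaterial, containerColor);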

Unity: spawning lots of objects at runtime runs slowly

I've created a simple project where approximately 7000 cubes are created in the scene (with a VR camera), but the problem is that when I move the camera to see all the cubes, the FPS becomes very bad, something like 5-6 frames per second. My PC is an i7 with a GTX 1070, and I thought it should be able to draw hundreds of thousands of cubes without any problems. After all, Minecraft seems to have no trouble drawing cubes.
So the question is: is it possible to optimize the scene so that all cubes are drawn in one call, or otherwise get good performance?
I've actually made all the cubes static, and there are no textures, only the Standard material...
Here is how it looks now:
I'm using the default Directional Light, and I would prefer not to change the Standard shader because it handles lighting so well.
Here is how I'm generating the cubes:
private void AddCube(Vector3 coords)
{
var particle = (Transform)MonoBehaviour.Instantiate(prototype.transform, holder.transform);
SetScale(particle);
SetPosition(particle, coords);
cubes.Add(particle.gameObject);
particle.gameObject.isStatic = true;
}
private void SetScale(Transform particle)
{
particle.localScale = new Vector3(Scale, Scale, Scale);
}
private void SetPosition(Transform particle, Vector3 coords)
{
particle.position = coords;
}
This is the screenshot of Stats:
I have 41 fps here because I moved the camera away from the cubes to get a clean background for the stats panel. Actually, after making the cubes static, the FPS depends on whether the cubes are visible on screen or not.
The problem is most likely caused by the number of individual objects you are instantiating. If the cubes don't change their transforms after generation, you should be able to use StaticBatchingUtility to combine them into one batch.
Call this after generating the cubes, passing the common parent of all cubes:
StaticBatchingUtility.Combine(cubesRoot);
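For context, a minimal sketch of where that call could go, reusing the "holder" parent object and the AddCube(...) method from the question's code as assumptions (this fragment is meant to live in the same class):

// Generate every cube first, then combine all children of "holder" into one static batch.
private void GenerateCubes(List<Vector3> allCoords)
{
    foreach (var coords in allCoords)
        AddCube(coords); // each cube is parented under "holder" and marked isStatic

    // Combine only after all cubes exist and are in their final positions;
    // they must not move afterwards.
    StaticBatchingUtility.Combine(holder);
}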

Animate an SKSpriteNode with textures that have a size different from the original

I want to animate an SKSpriteNode using textures from an SKTextureAtlas, using SKAction.animateWithTextures(textures, timePerFrame, resize, restore). However, the textures in the atlas have a size that is slightly larger than the original texture (it's basically a character moving). When the action is run, the textures are either compressed to fit the original size of the sprite, or recentered when I set resize to false, which changes the position of the character. What I want, though, is for the textures to be anchored at the lower-left corner (or lower-right, depending on the direction) so that the position of the character doesn't change apart from the extra part of the texture.
I've tried changing the anchor point of the sprite prior to running the action, but obviously that applies to the original texture as well. Also, I guess changing the size of the original texture would have an impact on the physics behaviour, which I want to avoid.
Does anyone have a suggestion about how to do this?
Thanks!
David
This would work:
Edit all the textures to match the size of the largest texture.
Just give the smaller textures some padding using an alpha channel so they get a transparent background.
E.g. notice how the first texture has lots of negative space (example image from CartoonSmart.com).
Create the physics body with a certain size in mind. E.g. you can load the texture without the padding and get its size, then position the body as needed on the new, padded texture. So after you create the sprite as normal with the resized textures, you can then:
/// load a texture to be a template for the size
let imageTextureSizeTemplate = SKTexture(imageNamed: textureWithoutPadding)
let bodySize = imageTextureSizeTemplate.size()
/// position template texture physics body on texture that will be used
let bodyCenter = CGPointMake(0.5, 0.5)
// create physics body
let body: SKPhysicsBody = SKPhysicsBody(rectangleOfSize: bodySize, center: bodyCenter)
self.physicsBody = body
Set resize: false when you animate the textures.

Build a big tiled texture from other textures

I am making a Unity 2D RTS game and I thought of using one big texture for the tiled map (instead of a lot of textures, for memory reasons).
The tiled map is supposed to be generated randomly at runtime, so I can't simply save a texture ahead of time and load it. I want the map to be generated and then built from a set of texture resources.
So I have little tile textures of grass/forest/hills etc., and after I generate the map randomly, I need to draw those little textures onto my big map texture so I can use it as my map.
How can I draw a texture from my resources onto another texture? I saw there are only Get/SetPixel functions... so I could use them to copy all the pixels one by one to the big texture, but is there something easier?
Is my solution for the map OK? (Is it better than just creating a lot of texture tiles side by side? Is there a better solution?)
The correct way to create a large tiled map would be to compose it from smaller, approximately-screen-sized chunks. Unity will correctly not draw the chunks that are off the screen.
As for your question about copying to a texture: I have not done this before in Unity, but this process is called Blitting, and there just happens to be a method in Unity called Graphics.Blit(). It takes a source texture and copies it into a destination texture, which sounds like exactly what you're looking for. However, it requires Unity Pro :(
There is also SetPixels(), but it sounds like this function does the processing on the CPU rather than the GPU, so it's going to be extremely slow/resource-intensive.
Well, after more searching I discovered GetPixels/SetPixels:
Texture2D sourceTex = ...; // get it from somewhere
var pix = sourceTex.GetPixels(x, y, width, height); // get the block of pixels
var destTex = new Texture2D(width, height); // create a new texture to copy the pixels into
destTex.SetPixels(pix);
destTex.Apply(); // important: saves the changes
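Not part of the original answer, but here is a sketch of how that block copy could be generalized to stamp every tile into the big map texture. The tile array, map array, and tile size are illustrative assumptions, and the tile textures need to be readable (Read/Write Enabled in their import settings).

using UnityEngine;

// Hypothetical helper: builds one big map texture by copying each small tile's
// pixels into its block of the map with GetPixels/SetPixels.
public static class MapTextureBuilder
{
    // tiles: one readable Texture2D per tile type, all tileSize x tileSize
    // map:   map[x, y] is the index of the tile type at that map cell
    public static Texture2D Build(Texture2D[] tiles, int[,] map, int tileSize)
    {
        int mapWidth = map.GetLength(0);
        int mapHeight = map.GetLength(1);
        var mapTex = new Texture2D(mapWidth * tileSize, mapHeight * tileSize);

        for (int x = 0; x < mapWidth; x++)
        {
            for (int y = 0; y < mapHeight; y++)
            {
                // Copy the chosen tile's pixels into its block of the big texture.
                Color[] pix = tiles[map[x, y]].GetPixels(0, 0, tileSize, tileSize);
                mapTex.SetPixels(x * tileSize, y * tileSize, tileSize, tileSize, pix);
            }
        }

        mapTex.Apply(); // upload all the changes to the GPU once, at the end
        return mapTex;
    }
}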

Tiling an image that is part of a texture atlas

I'm using Cocos2D. What is the most efficient way to tile an image when it's part of a texture atlas that's been generated using Texture Packer? I have an image that is 10 x 320 and I want to tile it to fill the screen.
I've used this code before for tiling images
bgHolder = [CCSprite spriteWithFile:@"bg.png" rect:CGRectMake(0, 0, 700, 300*155)];
ccTexParams params = {GL_LINEAR,GL_LINEAR,GL_REPEAT,GL_REPEAT};
[bgHolder.texture setTexParameters:&params];
[self addChild:bgHolder];
but I don't think I can use this approach when the image I want to tile isn't square and is only a small part of the overall texture.
Chaining a bunch of CCSprites seems pretty inefficient to me, so I'm hoping there is a better way.
Use one sprite per tile. That's the way to do it. You should use sprite batching to keep the number of draw calls to 1. Rendering 48 sprites is not much worse than rendering one 480x320 sprite when using sprite batching.