Unity 2D Tilemap, and ever increasing Total Object Count - unity3d

I'm creating a game that uses an 80x40 2D Tilemap with procedural terrain generation. Creating the map works flawlessly, as does rebuilding it with a button press (as many times as I want), and graphical performance is great (~800 FPS).
However, every time I rebuild the map, the Total Object Count in the Profiler's Memory section increases by around 2500 to 2700 objects. That's not quite the full tile count (3200), but it's close enough that I suspect the sprite drawing is the source of the leak.
Reading online suggests there is potentially a memory leak with the Material renderer. I'm not sure whether the method I'm using to redraw the map falls into this category. Here's how I'm doing it.
I have a bunch of sprite atlases for various auto-tiling terrain types. Essentially, I do the following:
Vector3Int v3 = new Vector3Int(0, 0, 0);
TerrainTile theTile = ScriptableObject.CreateInstance<TerrainTile>();
for (int x = 0; x < theWorld.Width; x++)
{
    for (int y = 0; y < theWorld.Height; y++)
    {
        v3 = new Vector3Int(x, y, 0); // target the current cell
        int nValue = theWorld.squares[x, y].tN_value[tType];
        theTile.sprite = Terrain_atr.GetAutoTileSprite(nValue);
        theTilemap.SetTile(v3, (TileBase)theTile);
    }
}
GetAutoTileSprite just grabs a sprite from a sprite atlas based on an auto-tile rule.
So, does the above method of painting sprites fall into the Material renderer memory leak?
I can't find any other source of objects in my code, as I simply reuse all variables every time I rebuild the map... I'm not (as far as I can see) creating anything new.
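One pattern that would rule out per-rebuild allocations entirely is to create each tile once per auto-tile value and reuse it across rebuilds. This is only a sketch, not your code: the cache dictionary and GetCachedTile are assumptions, while TerrainTile and Terrain_atr.GetAutoTileSprite are taken from the snippet above.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Tilemaps;

// Sketch: build each TerrainTile once, keyed by its auto-tile value,
// so repeated map rebuilds reuse the same tile instances instead of
// allocating new objects every time.
// Assumed to live in the same class that rebuilds the map.
Dictionary<int, TerrainTile> tileCache = new Dictionary<int, TerrainTile>();

TerrainTile GetCachedTile(int nValue)
{
    TerrainTile tile;
    if (!tileCache.TryGetValue(nValue, out tile))
    {
        tile = ScriptableObject.CreateInstance<TerrainTile>();
        tile.sprite = Terrain_atr.GetAutoTileSprite(nValue);
        tileCache[nValue] = tile;
    }
    return tile;
}
```

With the cache in place, the rebuild loop just calls theTilemap.SetTile(new Vector3Int(x, y, 0), GetCachedTile(nValue)). If the Total Object Count stops growing, the leak was coming from tiles/sprites created during the rebuild; if it still grows, calling Resources.UnloadUnusedAssets() after a rebuild can help narrow down which objects are being retained.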
Thanks for any insight.

Related

Unity resolution problem, mesh has holes when zooming out

I have a server which renders a mesh using OpenGL with a custom frame buffer. In my fragment shader I write the gl_primitiveID into an output variable. Afterwards I call glReadPixels to read the IDs out. Then I know which triangles were rendered (and are therefore visible) and I send all those triangles to a client which runs on Unity. On this client I add the vertex and index data to a GameObject and it renders it without a problem. I get the exact same rendering result in Unity as I got with OpenGL, unless I start to zoom out.
Here are pictures of the mesh rendered with Unity:
My first thought was that I had different resolutions, but this is not the case: I have 1920*1080 on both server and client. I also use the same view and projection matrix from the client on my server, so this shouldn't be the problem either. What could be the cause of this error?
In case you need it, here is some of the code I wrote.
Here is my vertex shader code:
#version 330 core

layout (location = 0) in vec3 position;

uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * vec4(position, 1.0f);
}
Here is my fragment shader code:
#version 330 core

layout(location = 0) out int primitiveID;

void main(void)
{
    primitiveID = gl_PrimitiveID + 1; // +1 because the first primitive is 0
}
and here is my getVisibleTriangles method:
std::set<int> getVisibleTriangles() {
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RED_INTEGER, GL_INT, &pixels[0]);
    std::set<int> visibleTriangles;
    for (int i = 0; i < pixelsLength; i += 4) {
        int id = *(int*)&pixels[i];
        if (id != 0) {
            visibleTriangles.insert(id - 1); // id 0 is NO_PRIMITIVE
        }
    }
    return visibleTriangles;
}
Oh my god, I can't believe I made such a stupid mistake.
After all it WAS a resolution problem.
I didn't call glViewport (only when resizing the window). When creating a window with glfwCreateWindow, GLFW treats the requested size as a hint rather than a hard constraint (as stated in the GLFW reference), so the parameters may not be fulfilled exactly. I requested 1920*1080 (which is also my resolution), but the drawing area did not actually get that size because some space is also needed for the menu etc. So the server rendered at a lower resolution (1920*1061, to be exact), which results in missing triangles on the client.
Before getting into the shader details, are you sure the problem doesn't lie with the way the zoom-out functionality is implemented in Unity? It's just a hunch, since I've seen it in older projects, but if zooming in/out works by actually moving the camera, the movement of the clipping planes will create those "holes" when the mesh surfaces go outside their range. The placement of some of the holes in the shared image makes me doubt that this is the case, but you never know.
If this is how the zoom function works, you can confirm it in the editor view while zooming out: it displays the position of the camera's clipping planes in relation to the mesh.

How to set tile in tilemap dynamically?

I'm using Isometric Tilemap to make my game map.
I'm using unity 2018 3.5f version.
But every guide just says to use the palette, while my game's tilemap is somewhat dynamic: I need to add, change and delete tiles in the tilemap at runtime.
I also have to read the tile data from map XML files, so I need to add the tiles programmatically.
The reference has a 'SetTile()' method, but there is no proper example of how to use it.
Should I first create a tile GameObject and drag it to the prefabs folder to make a tile prefab, and then use it like this?
SetTile(position, TilePrefab.Instantiate());
May I get an example of how to use SetTile to add tiles programmatically?
I'm also a Unity newbie, so any other tips or advice about tilemaps would be welcome.
This does not seem like a "newbie" question at all. It seems fairly clear that you just want to lay down tiles programmatically rather than via the editor.
I ran into the same question because I wanted to populate my tiles based on a terrain generator. It's surprising to me that the documentation is so lacking in this regard, and every tutorial out there seems to assume you are hand-authoring your tile layouts.
Here's what worked for me:
This first snippet does not answer your question, but here's my test code to lay down alternating tiles in a checkerboard pattern (I just have two tiles, "water" and "land")
for (int x = 0; x < 10; x++)
{
    for (int y = 0; y < 10; y++)
    {
        Vector3Int p = new Vector3Int(x, y, 0);
        bool odd = (x + y) % 2 == 1;
        Tile tile = odd ? water : land;
        tilemap.SetTile(p, tile);
    }
}
In order to get an instance of a Tile, I found that two different methods worked:
Method 1: Hook up Tiles to properties using Editor
In the MonoBehaviour class where you are writing the code to programmatically generate the tile layout, expose some public properties:
public Tile water;
public Tile land;
Then these fields will appear in the Unity Inspector. If you press the little bullseye next to one of these properties, it opens a "Select Tile" window where you should be able to see any tiles you have previously added to the Tile Palette.
Method 2: Programmatically Create Tile from Texture in Resources folder
If you have a lot of different tile types, manually hooking up properties as described above may become tedious and annoying.
Also if you have some intelligent naming scheme for your tiles which you want to take advantage of in code, it will probably make more sense to use this second method. (e.g. maybe you have four variants of water and four of land that are called water_01, water_02, water_03, water_04, land_01, etc., then you could easily write some code to load all textures "water_"+n for n from 1 to numVariants).
Here's what worked for me to load and create a Tile from a 128x128 texture saved as Assets/Resources/Textures/water.png:
// Tile derives from ScriptableObject, so create it with CreateInstance rather than new
Tile water = ScriptableObject.CreateInstance<Tile>();
Texture2D texture = Resources.Load<Texture2D>("Textures/water");
water.sprite = Sprite.Create(texture,
    new Rect(0, 0, 128, 128),  // section of texture to use
    new Vector2(0.5f, 0.5f),   // pivot in centre
    128,                       // pixels per unity tile grid unit
    1,                         // extrude edges by 1 pixel
    SpriteMeshType.Tight,
    Vector4.zero);             // border
I stumbled across this question a year later and had to make tweaks to uglycoyote's answer to get it to work with RuleTiles, so I thought I would post the change I had to make just in case someone else finds this answer.
The good news is that you can also implement RuleTiles this way! Instead of creating Tile properties, I had to create TileBase properties instead, like so:
public TileBase water;
public TileBase grass;
Also, in this same class, create a Tilemap property and make it public like so:
public Tilemap tileMap;
Now, go into the inspector and drag your tiles, whether they be RuleTiles or normal tiles, into the exposed tile properties, and drag the tilemap from the hierarchy view into the exposed tileMap property. This gives you a reference to that tileMap object from within this class.
From here, it's as simple as creating a for loop to populate the tilemap:
for (int x = 0; x < 10; x++) {
    for (int y = 0; y < 10; y++) {
        tileMap.SetTile(new Vector3Int(x, y, 0), grass);
    }
}
For instance, the above will create a 10x10 map of just grass tiles, but you can do whatever you want inside the nested for loops.

Access Terrain data more efficiently

I'm developing an editor tool for terrains. I want to do some calculation every time the terrain is modified by the user.
Currently I get heightmap with these:
TerrainData tData = SketchTerrain.terrainData;
int resolution = tData.heightmapResolution - 1;
float[,] heights = tData.GetHeights(0, 0, resolution, resolution);
Then I have to check every float in the array to see if there are any changes. This works, but it's slow, it creates garbage, and I have to do it several times per second (about 4 times). I also have to do the same for splats. This puts too much pressure on the CPU and results in frame-rate drops.
So is there an alternative?
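One possible alternative, assuming a Unity version that has the TerrainCallbacks API (2019.1 or later; this is an assumption about your setup, not part of the question), is to be notified of the modified region instead of polling and diffing the whole heightmap:

```csharp
using UnityEngine;

// Sketch: subscribe to terrain-edit callbacks so only the changed
// rectangle needs to be read back, instead of diffing every float.
public static class TerrainChangeWatcher
{
    public static void Enable()
    {
        TerrainCallbacks.heightmapChanged += OnHeightmapChanged;
        TerrainCallbacks.textureChanged += OnTextureChanged; // splats/alphamaps
    }

    static void OnHeightmapChanged(Terrain terrain, RectInt region, bool synched)
    {
        // Read back only the edited rectangle.
        float[,] heights = terrain.terrainData.GetHeights(
            region.x, region.y, region.width, region.height);
        // ... run the calculation on this sub-region only ...
    }

    static void OnTextureChanged(Terrain terrain, string textureName,
                                 RectInt texelRegion, bool synched)
    {
        // Same idea for splat-map edits.
    }
}
```

This removes both the polling cost and most of the garbage, since only small sub-arrays are allocated per edit.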

Array of CCSprite's VS CCSpriteBatchNode and an NSMutable array?

I have a little archer game I'm working on. Previously, I put every arrow sprite into a CCSprite[7] array, and inside ccTime I would update the x/y coordinates and do some math to make the arrows move smoothly. So all the math/angles/movement works.
Later, while trying to implement collision detection, I couldn't use a data type that would have made my life much easier (I think it was CGRect) to get the bounds of a sprite and check whether it intersects another sprite. The error said I had to use members of an NSMutableArray or something like that, which is fine, because it's better for memory anyway to put my sprites inside a batch node and an NSMutableArray. But there is a problem.
In every tutorial I've seen, projectiles move based on an action sequence with a predetermined time. Just an example of an action sequence (not in my code)
id action = [Sequence actions:
    [ScaleTo actionWithDuration:.3 scale:0.7f],
    [ScaleTo actionWithDuration:.3 scale:1.0f],
    nil];
Everywhere I've seen, this is how sprites move around, but I can't do that because my arrow velocities change depending on how long the touch is held, plus angles that make the arrow look realistic, and so on.
So in my code in touchesBegan:
- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    ...
    ...
    self.nextProjectile = [CCSprite spriteWithFile:@"arrow.png"];
    _nextProjectile.rotation = vector2 - 90; // the angle of where the user touched the screen

    // add projectile to the batch node and array
    _nextProjectile.tag = arrowTracker;
    [_batchNode addChild:_nextProjectile z:1];
    [_projectiles addObject:_nextProjectile];

    // Release? If I don't have this the arrow fails to move at all... and it was in the tutorial
    if (_nextProjectile.tag == 1) {
        [_nextProjectile release];
        _nextProjectile = nil;
    }
}
Every time I touch, the first arrow doesn't shoot out (this is NOT the problem; I can fix it easily), and the arrows shoot out perfectly; the movement is exactly the same as when I was using the CCSprite array. The only problem is that every time ccTouchesBegan is called while the previous arrow is mid-flight, that arrow stops all actions and just sits there, mid-air. So my problem is a logic error; obviously I'm doing something wrong in ccTouchesBegan because it terminates the flight of the previous arrow!
So my questions are:
How do I fix this?
Should I just stick with the CCSprite[7] array? Instead of using the image's bounds, I could find the endpoint of the arrow and check whether it intersects another image, but that would take much more work/math/memory. (I'm not too sure exactly how memory works in programming in general, but I'm pretty sure the CCSprite array takes more memory.)
EDIT:
This is where the arrow's position is updated.
- (void)callEveryFrame:(ccTime)dt {
    ...
    ...
    // move selected arrows
    for (int xe = 0; xe < 7; xe++) {
        float x = _theArrowArray[xe].position.x;
        float y = _theArrowArray[xe].position.y;
        vyArray[xe] += gravity; // vyArray holds the y-axis velocities; I'm just adding gravity
        x += vxArray[xe] * dt;  // dt is from the (ccTime) parameter in the method definition
        y += vyArray[xe] * dt;
        CGPoint newLocation = CGPointMake(x, y);
        _theArrowArray[xe].position = newLocation;

        // The code above moves the arrows inside the CCSprite array, not the batch/NSMutableArray.
        // The code below is a copy-and-paste with a little change, for the batch node/NSMutableArray.
        float x2 = _nextProjectile.position.x; // nextProjectile was declared earlier in my code
        float y2 = _nextProjectile.position.y;
        vyArray[xe] += gravity;
        x2 += vxArray[xe] * dt * 1.2; // with dt*1.2 both arrows are shot out, but this one has more gravity, so you can tell which arrow is which and see that both are working
        y2 += vyArray[xe] * dt * 1.2;
        CGPoint newLocation2 = CGPointMake(x2, y2);
        _nextProjectile.position = newLocation2;
    }
}
Don't release the projectile unless the nextProjectile property retains it. CCSprite spriteWithFile returns an autoreleased object, which is retained by the batchNode and the projectiles array.
Weird thing is, the projectile is never set to have a tag == 1 so the code that releases the projectiles is probably going to be skipped.
My guess regarding #1 is that the projectile is stopped but not removed, because it's still added to the node hierarchy. It would be helpful to see the code that actually removes the projectiles.
As for your second question I don't understand your concern. You have 7 projectiles. Whether they use 7 Bytes, 700 Bytes or 7 Kilobytes simply doesn't matter. This amount of memory is still negligible compared to even the smallest of textures.
Do yourself a favor and use the regular Foundation collections like NSMutableArray to store your objects. For one, they will retain added objects and release them when removed. You also get errors in case your code has a bug that causes the array to overrun. C style arrays may be a bit faster and may take less memory, but they're also inherently unsafe and need to be handled with much greater care.

Why do I get poor performance when rendering different images with UIImage on iPhone

I'm using UIImage to render a game map from 32x32 blocks. The code is as follows
for (int x = 0; x < rwidth; ++x)
{
    for (int y = 0; y < rheight; ++y)
    {
        int bindex = [current_map GetTileIndex:x :y];
        CGPoint p;
        p.x = x * tile_size_x;
        p.y = y * tile_size_x;
        [img_list[bindex] drawAtPoint:p];
    }
}
This ends up rendering about 150 tiles. My problem is that when I run this code, my frame rate drops to 2-3 frames per second.
I thought the iPhone was fill-bound; however, if I force bindex to 1 (i.e. render the same block 150 times), I get full framerate.
I can't believe it is that expensive to render from different UIImages.
Could someone please tell me what I'm doing wrong ...
Oh, I forgot to mention that my image list is created from a bigger texture page using CGImageCreateWithImageInRect.
Thanks
Rich
Crack open Instruments and profile your code. It looks like you are drawing into a UIView, and those are not really optimized for games. Since UIViews are layer-backed, drawAtPoint: needs to draw the image several times (to offscreen buffers) before it appears on screen.
That being said, UIImage isn't really designed for games. There's a lot of overhead with regards to being part of UIKit. You're not really meant to have hundreds of them.
In approx order of speed (slowest to fastest):
UIImage (usually used if you have a few large images)
CALayers (if you need fine animation control or you have roughly 20 to 100 tiles/images)
OpenGL textures (what most games should be using)