Hexagon Unity Tiles - Slow Performance - unity3d

When implementing a large hexagonal grid (256x256) of tiles in a Unity game, the game becomes very slow and is hardly able to function. The hexagons are in a prefab. A script controls the size of the grid and the spacing between each hexagon. How does one go about rendering a grid of this size (or even 1024x1024) of Unity objects?
The game is still quite slow even when built as a standalone Win64 player.
Here is an image of the rendered hexagons:
http://i.imgur.com/UbA6USt.png

Try making the grid elements static and make sure static batching is turned ON in Player Settings. This will optimize their rendering significantly. You should probably even go as far as combining them all into a single mesh (see tools like this one for that purpose).
If you can show us the actual scene hierarchy and the structure of your grid nodes, we can help even more.
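For illustration, here is a minimal sketch of that combine-into-one-mesh idea using Unity's built-in Mesh.CombineMeshes; the class name and setup are hypothetical, not taken from the asker's project:

```csharp
using UnityEngine;

// Attach to the grid's parent object: at startup, every child hexagon is
// merged into one mesh so the grid renders in a handful of draw calls
// instead of thousands.
public class HexGridCombiner : MonoBehaviour
{
    void Start()
    {
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
        var combine = new CombineInstance[filters.Length];

        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            combine[i].transform = filters[i].transform.localToWorldMatrix;
            filters[i].gameObject.SetActive(false); // hide the originals
        }

        // Note: one mesh is limited to ~65k vertices, so a 256x256 grid
        // has to be combined in chunks rather than all at once.
        var combined = new Mesh();
        combined.CombineMeshes(combine);

        gameObject.AddComponent<MeshFilter>().sharedMesh = combined;
        gameObject.AddComponent<MeshRenderer>().sharedMaterial =
            filters[0].GetComponent<MeshRenderer>().sharedMaterial;
    }
}
```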

Because of how Unity works, non-static objects have a tendency to get heavy - each one carries its own Transform and gets processed every frame, even when it contributes nothing on screen.
It's the reason you don't see more Minecraft clones coming out of Unity.
If you can't set the hexagons to static for some reason (e.g. procedurally generated levels), you'll perhaps have to simulate the hexagons through creative shader manipulation (such as packing every hexagon into one vertex array, with a second array tracking which hexagon each vertex belongs to) or by writing a script that adds the vertices and faces of every hexagon to a single mesh on a single game object (sketched below).
You may also speed up the scene by creating smaller levels and loading/unloading them as the player moves towards them. See: Application.LoadLevelAdditive (superseded by SceneManager.LoadScene with LoadSceneMode.Additive in later Unity versions).
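A minimal sketch of the single-mesh suggestion above; the class name and grid math are illustrative, and a real grid this large would be split into chunks to stay under the per-mesh vertex limit:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Builds the whole hex grid as ONE mesh on ONE game object instead of
// instantiating thousands of prefabs.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class HexGridMesh : MonoBehaviour
{
    public int width = 64;
    public int height = 64;
    public float radius = 0.5f;

    void Start()
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                // Pointy-top layout: odd rows are offset by half a hex.
                float cx = x * radius * 1.732f + (y % 2) * radius * 0.866f;
                float cz = y * radius * 1.5f;

                int center = vertices.Count;
                vertices.Add(new Vector3(cx, 0f, cz));
                for (int i = 0; i < 6; i++)
                {
                    float a = Mathf.Deg2Rad * (60f * i + 30f);
                    vertices.Add(new Vector3(cx + radius * Mathf.Cos(a), 0f,
                                             cz + radius * Mathf.Sin(a)));
                }
                // Six triangles per hexagon, wound to face upward (+Y).
                for (int i = 0; i < 6; i++)
                {
                    triangles.Add(center);
                    triangles.Add(center + 1 + (i + 1) % 6);
                    triangles.Add(center + 1 + i);
                }
            }
        }

        var mesh = new Mesh();
        mesh.vertices = vertices.ToArray();
        mesh.triangles = triangles.ToArray();
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```

A 64x64 grid built this way is one draw call and roughly 28k vertices; four such chunks cover a 128x128 area while staying under the 16-bit index limit.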

Related

Recommendations for clipping an entire scene in Unity

I'm looking for ways to clip an entire Unity scene to a set of 4 planes. This is for an AR game, where I want to be able to zoom into a terrain, yet still have it only take up a given amount of space on a table (i.e. not extend over the edges of the table).
Thus far I've got clipping working as I want for the terrain and a water effect:
The above shows a much larger terrain being clipped to the size of the table. The other scene objects aren't clipped, since they use unmodified standard shaders.
Here's a pic showing the terrain clipping in the editor.
You can see the clipping planes around the visible part of the terrain, and that other objects (trees etc) are not clipped and appear off the edge of the table.
The way I've done it involves adding parameters to each shader to define the clipping planes. This means customizing every shader I want to clip, which was fine when I was considering just terrain.
While this works, I'm not sure it's a great approach for hundreds of scene objects. I would need to modify whatever shaders I'm using, and then I'd have to be setting additional shader parameters every update for every object.
Not being an expert in Unity, I'm wondering if there are other approaches that are not "per shader" based that I might investigate?
The end goal is to render a scene within the bounds of some plane.
One easy way would be to use Box Colliders as triggers on each side of your plane. You could then turn off the Renderers of objects sitting in a trigger with OnTriggerEnter/OnTriggerStay and turn them back on with OnTriggerExit.
You can also use Bounds.Contains.
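A minimal sketch of the trigger variant (the class name is hypothetical); the Bounds.Contains alternative would instead poll each object's position against the table's Bounds every frame:

```csharp
using UnityEngine;

// Place a few of these (with BoxColliders marked "Is Trigger") around the
// outside edges of the table volume. Objects that wander into an edge
// trigger have their Renderers disabled; leaving re-enables them.
// The moving objects need a Rigidbody for trigger events to fire.
public class EdgeClipTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        SetVisible(other, false);
    }

    void OnTriggerExit(Collider other)
    {
        SetVisible(other, true);
    }

    static void SetVisible(Collider c, bool visible)
    {
        foreach (Renderer r in c.GetComponentsInChildren<Renderer>())
            r.enabled = visible;
    }
}
```

Note this hides whole objects rather than clipping them mid-mesh, so it suits the trees and props; the terrain itself would still need the shader-based clip.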

Is it possible to combine textures in Maya into some sort of atlas?

I'm starting to use Maya to do some bone animations of a 2D character, which is composed of several different parts (legs, body, head, weapon, etc.). I have each part in a separate .PNG image file. Right now I have a polygon for each part, with its own material and texture:
I was wondering if there's a way to automatically combine the textures into an atlas, make all the polygons use the same material with the atlas, and correct their UV maps so they still point to the right part of the atlas. Right now, I can do it manually in reverse: I can make the atlas outside Maya with other tools, then use the atlas on a material and manually correct the UV maps of each polygon. But it's a very long process and if I need to change a texture, I have to do it all over again. So I was wondering if there's a way to automate it.
The reason why I'm trying to do this is to save draw calls in Unity. From what I understand, Unity can batch objects as long as they share the same material. So instead of having a draw call for each polygon in the character, I'd like to have a single draw call for the whole character. I'm pretty new to Maya, so any help would be greatly appreciated!
If you want to do the atlassing in Maya, you can do it by duplicating your mesh and using the Transfer Maps tool to bake all of the different meshes onto the duplicate as a single model. The steps would be:
1) Duplicate the mesh
2) Use UV layout to make sure that the duplicated mesh has no overlapping UVs (or only has them where appropriate, like mirrored pieces).
3) Use the Transfer Maps... tool to project the original mesh onto the new one, using the "Use topology" option to ensure that the projection is clean.
The end result should be that the new model has the same geometry and appearance as the original, but with all of its textures combined onto a single sheet attached to a single material.
The limitation of this method is that some kinds of mesh (particularly meshes that self-intersect) may not project properly, leaving you to manually touch up the atlassed texture. As with any atlassing solution you will probably see some softening in the textures, since the atlas texture is not a pixel-for-pixel copy but rather a projection, and thus a resampling.
You may find it more flexible to reprocess the character in Unity with a script or an AssetPostprocessor. Unity has a native texture atlassing function, documented here. Unity comes with a script for combining static meshes, but for skinned meshes you'd need to implement your own; there's an example on the Unity wiki that probably does what you want: http://wiki.unity3d.com/index.php?title=SkinnedMeshCombiner (Caveat: we do something similar to this at work, but I can't share it; I have not used the one in this link). FWIW Unity's native atlassing works only in rectangles, so it's not as memory-efficient as something you could do for yourself.
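The native function referred to is, as far as I know, Texture2D.PackTextures. A rough sketch of using it to atlas a character's parts; the AtlasBuilder name and the assumption that part i was textured with source i are mine, and the source textures must be import-flagged readable:

```csharp
using UnityEngine;

// Packs a set of source textures into one atlas, then remaps each part's
// UVs into the sub-rectangle PackTextures assigned to its texture.
public static class AtlasBuilder
{
    public static void Build(Texture2D[] sources, MeshFilter[] parts,
                             Material shared)
    {
        var atlas = new Texture2D(2048, 2048);
        // Returns one normalized UV rect per source texture.
        Rect[] rects = atlas.PackTextures(sources, 2, 2048);
        shared.mainTexture = atlas;

        for (int i = 0; i < parts.Length; i++)
        {
            Mesh mesh = parts[i].mesh;
            Rect r = rects[i]; // assumes parts[i] used sources[i]
            Vector2[] uv = mesh.uv;
            for (int j = 0; j < uv.Length; j++)
                uv[j] = new Vector2(r.x + uv[j].x * r.width,
                                    r.y + uv[j].y * r.height);
            mesh.uv = uv;
            parts[i].GetComponent<Renderer>().sharedMaterial = shared;
        }
    }
}
```

With every part on the one shared material, Unity can batch the character down to a single draw call, which is exactly what the question is after.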

Quartz 2D / OpenGL / Cocos2D image distortion on iPhone by moving vertices for a 2.5D iPhone game

We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, 5 walls and a chair in the middle = 6 PNG files layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted or moved, the walls skew/scale/translate. However, the problem is that since we are using 6 separate PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
Instead of applying skew/scale transformations, we are thinking that if we were given the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations - the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.
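To make the let-the-hardware-do-it suggestion concrete, here is a rough sketch in Unity C# (the asker wanted "a real engine", and the other threads on this page are Unity-centric); the class name, layout numbers and texture array are invented:

```csharp
using UnityEngine;

// One PNG per quad: a back wall and one side wall are shown; shared edges
// stay aligned automatically because the geometry actually meets in 3D,
// and the perspective camera does all the warping.
public class RoomBuilder : MonoBehaviour
{
    public Texture2D[] wallTextures;

    void Start()
    {
        Input.gyro.enabled = true;

        CreateWall(wallTextures[0], new Vector3(0f, 0f, 5f),
                   Quaternion.identity);                   // back wall
        CreateWall(wallTextures[1], new Vector3(-2.5f, 0f, 2.5f),
                   Quaternion.Euler(0f, 90f, 0f));         // left wall
    }

    void CreateWall(Texture2D tex, Vector3 pos, Quaternion rot)
    {
        var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.transform.position = pos;
        quad.transform.rotation = rot;
        quad.transform.localScale = new Vector3(5f, 5f, 1f);
        quad.GetComponent<Renderer>().material.mainTexture = tex;
    }

    void Update()
    {
        // Tilting the device turns the camera. (A real app must also
        // convert the gyro's right-handed frame to Unity's left-handed one.)
        if (SystemInfo.supportsGyroscope)
            Camera.main.transform.localRotation = Input.gyro.attitude;
    }
}
```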

How to deform an image?

Hi friends,
I want to make a simple gaming application in which the user hits a car and the car breaks at that point, i.e. the image gets slightly deformed where the user hit it. I know it would be possible using lots of images that get swapped when the user hits the car image, but I don't want to use so many images.
Is there any solution for this? How can I deform the image? Sorry for my English, but here I paste a link to the Flash game that shows exactly what I want:
http://www.playgecogames.com/file.php?f=657&a=popup
Thanks
You don't say if this is in 2D or 3D, or what techniques you're going to use.
If you're implementing the game using OpenGL, it's fairly straightforward. The object can be made up of a regular mesh, with the image as a texture mapped to the mesh. When the user hits the object, you just deform the mesh.
A simple method would be to take a vector in the direction of the hit, displace the nearest vertex by an amount proportional to the force of the strike, and then fan outward to deform the rest of the mesh by decreasing amounts. By deforming the mesh, the image texture will be rendered with all the dents or deformations you like.
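The same displace-with-falloff idea, sketched in Unity C# rather than raw OpenGL since the technique carries over directly; the class and its parameters are illustrative:

```csharp
using UnityEngine;

// Deforms a textured mesh at the hit point: the nearest vertices get the
// full displacement, their neighbours less, falling off with distance.
[RequireComponent(typeof(MeshFilter))]
public class DentOnHit : MonoBehaviour
{
    public float dentRadius = 0.5f;
    public float dentDepth = 0.1f;

    public void ApplyDent(Vector3 hitPoint, Vector3 hitDirection)
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3 localHit = transform.InverseTransformPoint(hitPoint);
        Vector3 localDir =
            transform.InverseTransformDirection(hitDirection.normalized);

        Vector3[] verts = mesh.vertices;
        for (int i = 0; i < verts.Length; i++)
        {
            float d = Vector3.Distance(verts[i], localHit);
            if (d < dentRadius)
            {
                // Linear falloff: full depth at the hit, zero at the edge.
                float falloff = 1f - d / dentRadius;
                verts[i] += localDir * dentDepth * falloff;
            }
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}
```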
If you want to do this without OpenGL and just straight images, you could use image resampling to simulate the effect. You have your original pristine image, which is 'filtered' to produce the displayed image. At first there are no deformations, so you copy the original image verbatim. Each time the user hits the object, you add a deformation using a filter or transform within a local region of interest. This function resamples the source image in a distorted manner, making it look like the object is damaged.
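A sketch of that resampling idea, again in Unity C# for consistency; the warp function is a made-up example of a "dent" filter, and the source texture must be readable:

```csharp
using UnityEngine;

public static class ImageDenter
{
    // Copies 'source' into a new texture, pulling pixels within 'radius'
    // of the hit point (cx, cy) toward it to fake a dent in the surface.
    public static Texture2D Dent(Texture2D source, int cx, int cy,
                                 float radius, float strength)
    {
        int w = source.width, h = source.height;
        var result = new Texture2D(w, h);
        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float dx = x - cx, dy = y - cy;
            float dist = Mathf.Sqrt(dx * dx + dy * dy);
            float sx = x, sy = y;
            if (dist < radius && dist > 0f)
            {
                // Sample from further out than we draw, so pixels crowd
                // inward around the hit and read as a dent.
                float push = strength * (1f - dist / radius);
                sx = cx + dx * (1f + push);
                sy = cy + dy * (1f + push);
            }
            result.SetPixel(x, y, source.GetPixelBilinear(sx / w, sy / h));
        }
        result.Apply();
        return result;
    }
}
```

Keeping the pristine original and re-filtering it per hit (rather than denting the dented copy) avoids accumulating resampling blur.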
If you look up some good books on game development, you'll find a great range of approaches to object collisions, deformations and so on.
If you know a bit about image processing techniques, here is the documentation for accessing the pixels of an image:
Apple Reference
You also have libraries for this, such as this one:
simple-iphone-image-processing
But for what you want to do, this might not be the easiest way. What I would suggest is that you divide the car into several images, depending on which areas can be impacted. Then you just swap in the image corresponding to the damaged zone each time the car is hit.
I think you should use the cocos2d effects (http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide%3aeffects) plus multiple images, because there are many parts which drop off after the player kicks the car. For example, when the user kicks the side mirror, you should swap the car image for one without the side mirror.
The person who made that Flash game used around 4 images to display the car. If you want the game to be in 2D, the easiest way is to draw the car and cut it into about 4 pieces: left side, right side (a duplicate of the left side), hood and roof.
If you want to "really" deform the car you'll have to use a 3D renderer such as OpenGL ES.
I'd really suggest doing it in 2D :)
I suggest having a look at the cocos2d game engine. You can modify images with effects, which are applied using a virtual grid. Have a look at the effects page in their programming guide.

How do I create a floor for a game?

I'm attempting to build a Lunar Lander style game on the iPhone. I've got Cocos2D and I'm going to use Box2D. I'm wondering what the best way is to build the floor for the game. I need to be able to create both the visual aspect of the floor and the data for the physics engine.
Oh, did I mention I'm terrible at graphics editing?
I haven't used Box2D before (but I have used other 2D physics engines), so I can give you a general answer but not a Box2D-specific answer. You can easily just use a single static (stationary) box if you want a flat plane as the floor. If you want a more complicated lunar surface (lots of craters, the Sea of Tranquility, whatever), you can construct it from a variety of different physics objects - boxes will almost always do the trick. You just want to make sure that all your boxes are static. If you do that, they won't move at all (which is what you want, of course) and they can overlap without any problems (to simulate a single surface).
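The same static-boxes idea sketched with Unity's 2D physics, which is itself built on Box2D, so the concepts map one-to-one; the class name and the terrain numbers are invented:

```csharp
using UnityEngine;

// Builds a rough lunar surface out of overlapping static boxes. A collider
// with no Rigidbody2D is static: it never moves, and overlaps are fine.
public class FloorBuilder : MonoBehaviour
{
    public int segments = 20;
    public float segmentWidth = 2f;

    void Start()
    {
        float height = 0f;
        for (int i = 0; i < segments; i++)
        {
            // Random-walk the surface height to get craters and ridges.
            height = Mathf.Clamp(height + Random.Range(-0.5f, 0.5f), 0f, 3f);

            var box = new GameObject("FloorSegment" + i);
            box.transform.position =
                new Vector3(i * segmentWidth, height - 5f, 0f);
            var col = box.AddComponent<BoxCollider2D>();
            col.size = new Vector2(segmentWidth * 1.2f, 2f); // overlap neighbours
        }
    }
}
```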
Making an image to match your collision data is also easy. Effectively, you just draw a single image that more or less matches where you placed the boxes. Leave any spots that don't have boxes transparent in your image, then draw it at the bottom of the screen. No problem.
The method I ended up going with (you can see from my other questions) is to dynamically create the floor at runtime and then draw it to the screen.