The project uses an isometric orthographic camera in a 3D space with 2D sprites.
We are billboarding sprites onto 3D colliders to achieve the 3D feeling.
The problem is that we don't really believe the way we are doing it is optimal. We are also having trouble introducing elevated areas, because we need to replicate each sprite's shape, in isometric perspective, as colliders.
Because we are using a 3D world, the tilemap tools conflict with the other vertical sprites.
We cannot use a single billboarded 2D sprite for the entire floor, because that would mean a huge vertical sprite in front of the camera, hiding all the others.
We are looking for a solution before switching to a 2D world.
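For reference, the billboarding setup described above usually boils down to a small script like this (a minimal sketch; the component name is ours, and it assumes screen-aligned sprites under a single main camera):

```csharp
using UnityEngine;

// Minimal billboard sketch: the sprite copies the camera's rotation every
// frame so it always faces the screen, while any 3D collider on the same
// object stays fixed in the world.
public class SpriteBillboard : MonoBehaviour
{
    Camera cam;

    void Start()
    {
        cam = Camera.main;
    }

    void LateUpdate()
    {
        // Screen-align the sprite by matching the camera's orientation.
        transform.rotation = cam.transform.rotation;
    }
}
```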
If you plan on sticking with isometric in 3D, get rid of the tilemaps entirely. They are just going to give you a headache and make your game lag itself to death. If you want to convert to entirely 2D isometric, you can stick with them, as they would work fine. Now, a few comparisons between the 2D and 3D approaches, and how best to handle each. This is a jumbled list of drawbacks/advantages of each type, so it's more of a ramble after this point than an answer, but I couldn't be more specific without knowing more about your project's overall requirements and specifications.
Unity recently added Isometric Tilemapping as a dedicated feature. So, if you choose to fake it with 2D, your life will be a lot easier.
Controls are a lot easier in 3D, as the physics won't ever have to be faked.
3D allows foreground objects to automatically cover up background objects without having to add an arbitrary sorting system to achieve the same effect (see the 2D sorting sketch after this list).
2D is generally cheaper to render and simulate than 3D, and if you're aiming for mobile, that's going to be very important to your project's success.
3D allows you to rotate your camera if you design it right. (Check out Don't Starve Together for an example of this design).
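As mentioned above, here is what that "arbitrary sorting system" for 2D draw order usually looks like: a minimal sketch that sorts sprites by their Y position, assuming lower-on-screen means closer to the camera in your isometric view (the component name and scale factor are ours):

```csharp
using UnityEngine;

// Y-sorting sketch: sprites lower on screen get a higher sorting order,
// so they draw in front of sprites behind them.
[RequireComponent(typeof(SpriteRenderer))]
public class YSort : MonoBehaviour
{
    SpriteRenderer sr;

    void Awake()
    {
        sr = GetComponent<SpriteRenderer>();
    }

    void LateUpdate()
    {
        // Multiply by 100 just to keep sub-unit Y differences distinct
        // after the cast to int.
        sr.sortingOrder = -(int)(transform.position.y * 100f);
    }
}
```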
I'm creating an isometric 3D brawler in Unity. I'm trying to draw a telegraphing "attack area" effect on the ground (meshes, not Unity terrain) from an arbitrary polygon (including curved lines), based on Vector3 points.
I figure I need to use the Unity decal system, but I'm not sure how to generate the area texture procedurally, especially since it needs to match points in 3D space. Here are a few examples of the effect I'm looking for.
Thanks for taking your time to read this :)
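One possible direction, sketched under heavy assumptions (the class is hypothetical, and a triangle fan only handles convex outlines, so curved or concave shapes would need real triangulation such as ear clipping): instead of a decal, build a thin mesh on the ground directly from the Vector3 points.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: turn a convex ground-space outline into a flat mesh
// that can be rendered with a translucent "attack area" material.
public static class AttackAreaMesh
{
    public static Mesh Build(IList<Vector3> outline)
    {
        var verts = new Vector3[outline.Count];
        var uvs = new Vector2[outline.Count];
        for (int i = 0; i < outline.Count; i++)
        {
            // Lift slightly above the ground to avoid z-fighting.
            verts[i] = outline[i] + Vector3.up * 0.01f;
            // Simple planar UVs from world XZ; adjust to taste.
            uvs[i] = new Vector2(outline[i].x, outline[i].z);
        }

        // Triangle fan from vertex 0. Assumes the outline is convex and
        // wound clockwise when viewed from above (Unity's front-face order).
        var tris = new List<int>();
        for (int i = 1; i < outline.Count - 1; i++)
        {
            tris.Add(0);
            tris.Add(i);
            tris.Add(i + 1);
        }

        var mesh = new Mesh();
        mesh.vertices = verts;
        mesh.uv = uvs;
        mesh.triangles = tris.ToArray();
        mesh.RecalculateNormals();
        return mesh;
    }
}
```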
I've been using Unity3D for a while now, and I've also had experience coding 2D games using libGDX.
In the past, I used to get my sprites off the net or make my own. However, that wasn't really the best way to do things, since I'm more of a programmer and would sometimes need very specific things, so I've started to learn Blender, and I'm actually enjoying it at the moment.
What I want to know is: how much of an overhead is there if you're using 3D models for a 2D game? Especially if you want to port it to mobile?
The rendering overhead is significant: a basic sprite is a quad of 4 vertices (2 tris, 6 indices), while a 3D model can have hundreds of thousands of vertices.
On the other hand, 2D animations are made of sprite frames, so your texture count and size may increase, while a 3D animation is just keyframe data and therefore fairly light.
Physics is simpler in 2D, since you can do surface collision, while 3D requires volume collision, and checking an extra dimension is obviously more expensive.
There are probably other considerations, but those are the first that come to mind.
Now, the choice of 3D over 2D should simply be based on what you are trying to achieve. Side-scrolling games like Angry Birds do not need 3D. Games like Taichi Panda are better in 3D despite playing like a 2D game (only x and z camera movement, I think).
An FPS game should only be done in 3D, or it will look like Duke Nukem.
First of all, I am using Unity3d.
What is the most efficient (in terms of memory) way to create a group of 2D tiles with 2D colliders using a texture atlas (and tile data)?
Background Info:
I am working on a 2D terrain generation asset. It is a very similar generation style to Terraria's random generation. Currently, each tile is being instantiated as a separate GameObject. As I now know, this is extremely inefficient, and I should use a texture atlas and tile data instead. Here is a link to a tutorial I have been following that deals with this in 3D: http://studentgamedev.blogspot.co.uk/2013/08/unity-voxel-tutorial-part-1-generating.html
The problem is that mesh colliders are 3D colliders; 3D colliders CANNOT collide with 2D colliders. Currently in Unity, there are no 2D colliders (that I am aware of) that have the properties of a mesh; I need to dynamically change the 2D collider to adjust to the positions that contain tiles. How am I supposed to develop an efficient 2D tile system using 2D colliders?
Here are some of my ideas of techniques that may work:
Add a BoxCollider2D component to the chunk GameObject for each tile in the chunk (see the sketch after this list).
Somehow dynamically use a polygon collider 2D to stretch over all solid tiles.
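A minimal sketch of the first idea, with a hypothetical `solid` array standing in for your tile data:

```csharp
using UnityEngine;

// Idea 1 as code: one BoxCollider2D per solid tile, all attached to the
// chunk object instead of one GameObject per tile.
public class ChunkColliders : MonoBehaviour
{
    public bool[,] solid;   // hypothetical: true where a tile blocks movement

    public void Rebuild()
    {
        // Clear old colliders before rebuilding for the current tile data.
        foreach (var old in GetComponents<BoxCollider2D>())
            Destroy(old);

        int w = solid.GetLength(0), h = solid.GetLength(1);
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (!solid[x, y]) continue;
            var col = gameObject.AddComponent<BoxCollider2D>();
            col.offset = new Vector2(x + 0.5f, y + 0.5f); // tile center, chunk-local
            col.size = Vector2.one;                        // one tile = one unit
        }
    }
}
```

Merging runs of adjacent solid tiles into fewer, larger boxes would cut the collider count considerably; this sketch keeps one box per tile for clarity.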
I have read through several threads and cannot find a good approach for this problem.
I am mostly looking for a proven technique/approach to this issue, but I am open to any suggestions or techniques. I am happy to provide clarity as needed. Thanks for any answers! I appreciate the time you put into answering my question - it helps a ton!
As you rightly said, 2D colliders do not work with 3D colliders. The correct approach would be to pick one or the other.
If you are mixing 2D and 3D objects, go with 3D. You can restrict the axes in which your objects are affected by physics as explained in this answer.
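For example, a minimal sketch of that axis restriction using Unity's Rigidbody constraints (the component name is ours):

```csharp
using UnityEngine;

// Lock a 3D Rigidbody to a 2D plane: no movement along Z, no tipping over.
public class Lock2DPhysics : MonoBehaviour
{
    void Start()
    {
        var rb = GetComponent<Rigidbody>();
        rb.constraints = RigidbodyConstraints.FreezePositionZ
                       | RigidbodyConstraints.FreezeRotationX
                       | RigidbodyConstraints.FreezeRotationY;
    }
}
```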
I'm making an RTS project in Unity3D.
I created terrain with Unity's standard Terrain tool and added textures of grass, mud, etc. to it. Then, to create the "man-made" parts of the terrain (roads, sidewalks, road curbs, etc.), I created these objects as separate assets and placed them on the terrain.
And I have one issue with this solution. When moving the camera away from the terrain, the terrain's texture (e.g. mud) flickers under the roads, sidewalks, and other objects. As far as I know, this bug is caused by insufficient floating-point precision in Unity's coordinates (?).
Now, as I understand it, my approach to creating terrain objects is not correct. Should I instead create one mesh containing the terrain and all man-made objects in 3D modeling software, and then create a UV map for texturing all of it? If so, is there any special approach for modeling and texturing such a large and complex object as terrain?
I had the same issue a while ago, and I solved it by increasing the camera's near plane value. I had many objects on the plane that flickered when moving the camera, and it was due to having the near plane value at 0.01. I changed it to 0.5 and none of the objects flickered anymore. (This is z-fighting: the depth buffer has limited precision, and a very small near plane spends almost all of that precision right in front of the camera, so distant surfaces at nearly the same depth fight over which one is drawn.)
I hope this helps!
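In code, the fix is a one-liner (the values are the ones from my case; tune them for your scene's scale):

```csharp
using UnityEngine;

public class NearPlaneFix : MonoBehaviour
{
    void Start()
    {
        // Was 0.01; 0.5 leaves far more depth precision for distant surfaces.
        Camera.main.nearClipPlane = 0.5f;
    }
}
```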
We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files would warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room with 5 walls and a chair in the middle = 6 PNG files layered.
We have accomplished this using native functions like skew and scale: by applying transformations to the various walls and the chair as the device is tilted and moved, the walls skew/scale/translate. However, the problem is that since we are using 6 separate PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
We are thinking that, instead of applying skew/scale transformations, if we had the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.
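To make "mapping the textures to 3D polygons" concrete, here is a sketch in Unity (chosen only as an example; any 3D engine has the equivalent, and the class and field names are ours): each wall PNG becomes a textured quad, and the perspective camera does the warping that skew/scale was approximating.

```csharp
using UnityEngine;

// One wall of the room as a textured quad. Position/rotate six of these to
// form the room; a perspective camera then keeps every edge aligned for free.
public class WallQuad : MonoBehaviour
{
    public Texture2D wallTexture;  // one of the six PNGs, assigned in the editor

    void Start()
    {
        var mesh = new Mesh();
        mesh.vertices = new[]
        {
            new Vector3(-0.5f, -0.5f, 0f),
            new Vector3( 0.5f, -0.5f, 0f),
            new Vector3( 0.5f,  0.5f, 0f),
            new Vector3(-0.5f,  0.5f, 0f),
        };
        mesh.uv = new[]
        {
            new Vector2(0f, 0f), new Vector2(1f, 0f),
            new Vector2(1f, 1f), new Vector2(0f, 1f),
        };
        mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
        mesh.RecalculateNormals();

        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        var mr = gameObject.AddComponent<MeshRenderer>();
        mr.material = new Material(Shader.Find("Unlit/Texture"));
        mr.material.mainTexture = wallTexture;
    }
}
```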