Let me show you what I mean:
Suppose we have a puzzle game with colored square tiles/blocks falling, and they stack like this:
My question is: instead of each tile/block sprite staying visually separate from the others, what technique can be used to make each tile aware of its neighbors when it stops falling, so that it (and its neighbors) can change sprites and appear visually "glued" together, like this:
I can't seem to come up with a simple and efficient solution for this. Any ideas?
Here are two options that come to mind:
Easier approach - Use a tilemap; each tile needs a variant for every color and for every combination of connected directions (see the sketch after this list)
Harder approach - Build your meshes in real time and calculate the actual sizes yourself. I think this option could be more robust, but it's much more complex (especially if you haven't done something of the sort before)
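To make the easier approach concrete, here is a minimal sketch of neighbor-mask autotiling in C# (the grid layout, `TileColor` enum and bit layout are placeholders, not anything from your project): each settled tile builds a 4-bit mask from the neighbors that share its color, and that mask (0-15) selects the matching "glued" sprite variant for that color.

```csharp
// Minimal sketch of 4-bit neighbor-mask autotiling (names are placeholders).
// For each settled tile, build a mask from the four neighbors that share its
// color; the mask (0-15) indexes the "glued" sprite variant for that color.
public enum TileColor { None, Red, Green, Blue }

public static class AutoTiler
{
    // Bit layout: 1 = up, 2 = right, 4 = down, 8 = left.
    public static int NeighborMask(TileColor[,] grid, int x, int y)
    {
        TileColor c = grid[x, y];
        int mask = 0;
        if (y + 1 < grid.GetLength(1) && grid[x, y + 1] == c) mask |= 1;
        if (x + 1 < grid.GetLength(0) && grid[x + 1, y] == c) mask |= 2;
        if (y - 1 >= 0 && grid[x, y - 1] == c) mask |= 4;
        if (x - 1 >= 0 && grid[x - 1, y] == c) mask |= 8;
        return mask;
    }
}
```

When a piece lands, you only need to recompute the mask for the cell it occupies and its four neighbors and swap those sprites; everything else stays as it was.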
For now, I use a 3D array to represent my voxels in different chunks. I want to render only the voxels that are visible to the player, but the way I do it is quite inefficient:
I iterate over the whole 10*10*10 chunk and check, for every voxel, whether any neighbor is Air. Then I render each potentially visible face separately. So I end up checking every voxel 6 times, and I do this for all chunks.
Is there a better way to proceed, or an algorithm to reduce the iteration?
I basically don't know whether it is better to work with a 3D array or an octree...
Thanks.
I've been thinking through this problem recently, and since nobody has answered you I thought I'd mention some of the ideas I've come across.
Firstly, it's worth noting that you only need to calculate which faces to render once, since that only changes when you remove or add a voxel, and then you only need to recalculate the voxels immediately around the place where you made the change. Just use a flag to mark faces for rendering and cache that until something changes. If you aren't already doing this, it will give you a big performance boost over recalculating every frame.
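Here is a rough C# sketch of that caching idea (chunk size from the question; the `Air` value and face encoding are assumptions). It rebuilds the face list only when the chunk is flagged dirty, instead of every frame:

```csharp
using System.Collections.Generic;

// Sketch: cache the list of visible faces per chunk and only rebuild it when
// a voxel changes, instead of re-checking all neighbors every frame.
public class Chunk
{
    const int Size = 10;                 // 10x10x10, as in the question
    const byte Air = 0;                  // assumed "empty" value
    public byte[,,] Voxels = new byte[Size, Size, Size];

    bool dirty = true;                   // set whenever a voxel is added/removed
    readonly List<(int x, int y, int z, int face)> visibleFaces =
        new List<(int x, int y, int z, int face)>();

    static readonly (int dx, int dy, int dz)[] Dirs =
        { (1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1) };

    public void SetVoxel(int x, int y, int z, byte value)
    {
        Voxels[x, y, z] = value;
        dirty = true;                    // cheap: just mark, rebuild lazily
    }

    public IReadOnlyList<(int x, int y, int z, int face)> GetVisibleFaces()
    {
        if (dirty) { RebuildFaceList(); dirty = false; }
        return visibleFaces;             // cached between edits
    }

    void RebuildFaceList()
    {
        visibleFaces.Clear();
        for (int x = 0; x < Size; x++)
        for (int y = 0; y < Size; y++)
        for (int z = 0; z < Size; z++)
        {
            if (Voxels[x, y, z] == Air) continue;
            for (int f = 0; f < 6; f++)
            {
                int nx = x + Dirs[f].dx, ny = y + Dirs[f].dy, nz = z + Dirs[f].dz;
                bool outside = nx < 0 || ny < 0 || nz < 0 ||
                               nx >= Size || ny >= Size || nz >= Size;
                if (outside || Voxels[nx, ny, nz] == Air)
                    visibleFaces.Add((x, y, z, f));
            }
        }
    }
}
```

A finer-grained version would only re-examine the edited voxel and its six neighbors, as described above, but even this coarse dirty flag avoids doing the work every frame.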
I also recommend looking into this extremely fast raycasting algorithm:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf
You can use it for fast collision testing, and also for cull-testing. You can cast at grid nodes to see if any part of a face is visible.
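If it helps, here is a hedged C# sketch of that style of grid traversal (unit-sized cells, ray origin assumed to start inside a cubic grid; this is my reading of the linked paper, not code taken from it):

```csharp
using System;
using System.Collections.Generic;

public static class VoxelRay
{
    // Yields the grid cells a ray passes through, in order, until it exits
    // the grid. Sketch in the spirit of the traversal described in the paper.
    public static IEnumerable<(int x, int y, int z)> Traverse(
        float ox, float oy, float oz, float dx, float dy, float dz, int size)
    {
        int x = (int)Math.Floor(ox), y = (int)Math.Floor(oy), z = (int)Math.Floor(oz);
        int stepX = Math.Sign(dx), stepY = Math.Sign(dy), stepZ = Math.Sign(dz);

        // Distance along the ray to the first cell boundary on each axis.
        float tMaxX = Boundary(ox, dx), tMaxY = Boundary(oy, dy), tMaxZ = Boundary(oz, dz);
        // Distance along the ray between successive boundaries on each axis.
        float tDeltaX = dx != 0 ? Math.Abs(1f / dx) : float.PositiveInfinity;
        float tDeltaY = dy != 0 ? Math.Abs(1f / dy) : float.PositiveInfinity;
        float tDeltaZ = dz != 0 ? Math.Abs(1f / dz) : float.PositiveInfinity;

        while (x >= 0 && y >= 0 && z >= 0 && x < size && y < size && z < size)
        {
            yield return (x, y, z);
            // Step into whichever neighboring cell the ray reaches first.
            if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
            else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
            else                                { z += stepZ; tMaxZ += tDeltaZ; }
        }
    }

    // Parametric distance from the origin to the next cell boundary on one axis.
    static float Boundary(float o, float d)
    {
        if (d > 0) return ((float)Math.Floor(o) + 1f - o) / d;
        if (d < 0) return (o - (float)Math.Floor(o)) / -d;
        return float.PositiveInfinity;
    }
}
```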
I'm working on a 2D physics game that involves collisions between objects which at the moment use complex polygon colliders.
When the objects collide, I get the normal of the contact point using otherObject.contacts[0].normal, and knock the objects apart in equal-but-opposite directions using Rigidbody2D.AddForceAtPosition, with the force being the normal multiplied by a constant.
Most of the time this works flawlessly, but I've found that when the collision occurs at a concave section (i.e. an inward "dip") in the polygon collider, the normal is flipped, and the objects get pushed towards each other instead.
Alternatively, are there any other ways I could go about solving this?
The blue circled area is an example of a problematic concave section
"Alternatively, are there any other ways I could go about solving this?"
Yes. The standard thing in video games is that most objects have many small, simple colliders, rather than one large, complex collider.
This is a basic of game engineering.
(It can be very surprising to hobbyists and folks new to the field.)
So, imagine a car in any ordinary 3D game. You'd have a collider for the rear bumper, one for the front, maybe one for "left doors" and so on. Very typically, each has to react in a different way, and you need to know which area was touched.
In your case, if the 2D polygon has, say, 12 edges, just make 12 "small" colliders, one for each edge.
We know nothing about your setup since there's no screenshot, but that could possibly work.
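As a hedged illustration of that idea in Unity (the component name is made up), you could generate one EdgeCollider2D per polygon edge at startup, so every contact normal comes from a single flat edge:

```csharp
using UnityEngine;

// Hedged sketch: replace one concave PolygonCollider2D with one small
// EdgeCollider2D per edge, so each contact comes from a simple, flat collider.
public class SplitIntoEdgeColliders : MonoBehaviour
{
    void Start()
    {
        var poly = GetComponent<PolygonCollider2D>();
        Vector2[] pts = poly.GetPath(0);            // assumes a single path

        for (int i = 0; i < pts.Length; i++)
        {
            Vector2 a = pts[i];
            Vector2 b = pts[(i + 1) % pts.Length];  // wrap around to close the outline

            var child = new GameObject("Edge " + i);
            child.transform.SetParent(transform, false);   // identity local transform

            var edge = child.AddComponent<EdgeCollider2D>();
            edge.points = new[] { a, b };           // same local space as the polygon's points
        }

        poly.enabled = false;                        // keep the original around, but inactive
    }
}
```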
Note, however, that Unity's 2D polygon collider actually already knows to slice the object into smaller triangle-like shapes if it is concave - I'm surprised it didn't work for you.
Further: now that we can see your image.
In any video game on Earth, the way you'd do that helmet is with a square collider as in orange:
If (for some reason - why? for what purpose?) you were making the most precise video game ever created by humans, you'd maybe add two extra colliders to cover the horns. But nobody would ever notice the difference.
I appreciate you may be doing something exceptionally unusual, like a "close up game" ("you're an atom in medieval Scandinavia, bouncing off helmets" or whatever), in which case there'd be some other solution.
The very short answer is that you've stumbled on one of the most surprising things about game technology: we use crappy, simple colliders. You've been tricked all your life, in every title you play!
Is it possible to make some % of my mesh transparent?
For example, imagine I have a mesh that is a house. At first the mesh is transparent. As a person clicks on the house, it becomes opaque along the Y-axis so it looks like it's being built up.
Any ideas how to approach this problem?
"a house. At first the mesh is transparent. As a person clicks on the house, it becomes opaque along the Y-axis so it looks like it's being built up"
Literally in answer to your question, in general:
I would approach this by making a shader that is sensitive to the global (world-space) Y value of the point in question. It would use that value, over time, to decide the alpha at each point.
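The shader itself is beyond this answer, but as a rough sketch of the script side: assume the material's shader exposes a float, say `_FadeHeight` (a made-up property name), and renders fragments below that world-space Y as opaque and fragments above it as transparent. A hypothetical driver could then sweep that value from the bottom of the house to the top:

```csharp
using UnityEngine;

// Hypothetical driver for a height-based fade. Assumes the material's shader
// exposes a "_FadeHeight" float and makes fragments below that world-space Y
// opaque and fragments above it transparent (the property name is made up).
public class BuildUpFade : MonoBehaviour
{
    public Renderer houseRenderer;
    public float buildDuration = 3f;

    float bottomY, topY;

    void Start()
    {
        Bounds b = houseRenderer.bounds;       // world-space bounds of the mesh
        bottomY = b.min.y;
        topY = b.max.y;
        houseRenderer.material.SetFloat("_FadeHeight", bottomY);  // start fully transparent
    }

    void OnMouseDown()                          // click on the house (needs a collider)
    {
        StartCoroutine(BuildUp());
    }

    System.Collections.IEnumerator BuildUp()
    {
        for (float t = 0f; t < buildDuration; t += Time.deltaTime)
        {
            float y = Mathf.Lerp(bottomY, topY, t / buildDuration);
            houseRenderer.material.SetFloat("_FadeHeight", y);    // raise the opaque region
            yield return null;
        }
        houseRenderer.material.SetFloat("_FadeHeight", topY);     // fully built
    }
}
```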
Alternately:
Imagine a second texture of the house - call it GUIDE - which is a monochrome version of the house: black at the ground, fading smoothly to pure white at the top. Additionally, you could paint it any way you want; for example, the window frames and quoining could be black, and so on. The shader would then use the GUIDE texture as a key to know at what time each area should fade in.
That would actually look quite incredible and offer amazing control - you could fade in different parts in whatever order you wish.
It would be beyond the scope of an answer here to actually engineer this, but the key point is that, unfortunately, what you describe really is all done in the shader, I'd say.
Note that if you just want "a clean hole", look into approaches using a depth mask shader - see for example https://www.youtube.com/watch?v=s3RKGAj9Uzk
For 2D, consider this: http://answers.unity3d.com/questions/449034/see-through-hole-via-shaders-on-a-2d-plane.html
In other cases you may literally want to cut a sharp hole in the mesh, which is a "whole" different technology: https://gamedev.stackexchange.com/questions/72978/shader-that-cuts-hole-through-all-geometry
If you want this effect, http://answers.unity3d.com/questions/622089/how-can-i-render-a-semi-transparent-texture-with-a.html (see the Mario image), that's totally different again - it's nothing more than a gray image with a hole!
I'm new to using XNA, and I want to make my player collide with multiple walls from the same class. From looking around, I understand that the usual way to do this is to keep a list of the walls' IDs, loop over them all, and return the objects that collide.
My question is whether there is a faster, more efficient way of doing that. I mean, if I have something like 10,000 objects, that loop can get very expensive.
Thx in advance
Option 1) If these 10,000 objects are the walls of a level, then you should probably use some sort of grid, like this very old example: https://en.wikipedia.org/wiki/The_Legend_of_Zelda#mediaviewer/File:Legend_of_Zelda_NES.PNG (see the sketch after these options)
With a grid you only have to check collision with adjacent objects, or only with objects that are nearby.
Option 2) If these 10000 objects are enemies or bullets that move more freely, then you could also calculate the distance first and only check for collision if the objects are nearby.
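To make option 1 concrete, here is a minimal sketch of a uniform grid broad phase in C#/XNA (the cell size, class and method names are arbitrary; it assumes coordinates start at (0, 0) and fit inside the given world size):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;   // for Rectangle, since the question uses XNA

// Sketch of a uniform grid broad phase: walls are bucketed by cell, and the
// player only tests collision against walls in the cells its bounds overlap.
public class SpatialGrid
{
    readonly int cellSize;
    readonly List<Rectangle>[,] cells;

    public SpatialGrid(int worldWidth, int worldHeight, int cellSize)
    {
        this.cellSize = cellSize;
        cells = new List<Rectangle>[worldWidth / cellSize + 1, worldHeight / cellSize + 1];
        for (int x = 0; x < cells.GetLength(0); x++)
            for (int y = 0; y < cells.GetLength(1); y++)
                cells[x, y] = new List<Rectangle>();
    }

    public void Add(Rectangle wall)
    {
        // A wall may span several cells; register it in each one it touches.
        for (int x = wall.Left / cellSize; x <= wall.Right / cellSize; x++)
            for (int y = wall.Top / cellSize; y <= wall.Bottom / cellSize; y++)
                cells[x, y].Add(wall);
    }

    public List<Rectangle> Query(Rectangle player)
    {
        var hits = new List<Rectangle>();
        for (int x = player.Left / cellSize; x <= player.Right / cellSize; x++)
            for (int y = player.Top / cellSize; y <= player.Bottom / cellSize; y++)
                foreach (var wall in cells[x, y])
                    if (wall.Intersects(player) && !hits.Contains(wall))
                        hits.Add(wall);
        return hits;
    }
}
```

You fill the grid once when the level loads, and each frame the player only tests against `Query(playerBounds)` instead of all 10,000 walls.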
But may I ask why you are using XNA? I used to work with XNA 4.x, but as far as I understand it is pretty much dead (http://www.computerandvideogames.com/389018/microsoft-email-confirms-plan-to-cease-xna-support). If you're new to XNA, I would advise using other software to make games, like Unity3D. In Unity3D the hard part of collision detection is done for you (it has standard functions for collision detection), and Unity3D also works with C#, like XNA.
You always want to do the least amount of processing to get the job done. For a tiled 2D game you usually have a two-dimensional grid. When the player wants to walk onto a certain tile, you can check whether walking is allowed there - in that case you only have to check a single tile. If you have a lot of NPCs, you could divide your map into sections and keep track of which sections the NPCs are in. Then you only have to do collision detection against the enemies within your section.
When you need expensive collision detection - pixel-perfect or polygon collision - you should first check whether an object is even close, with a simple radius float or BoundingSphere; only then do you go on with the more expensive check.
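For example, a tiny sketch of that cheap pre-check (2D version using XNA's Vector2; the names are placeholders):

```csharp
using Microsoft.Xna.Framework;

// Sketch: cheap distance check first, expensive (e.g. pixel-perfect) check
// only when the two objects are close enough to possibly touch.
public static class BroadPhase
{
    public static bool MightCollide(Vector2 posA, float radiusA, Vector2 posB, float radiusB)
    {
        float r = radiusA + radiusB;
        // Compare squared distances to avoid the square root entirely.
        return Vector2.DistanceSquared(posA, posB) <= r * r;
    }
}

// Usage: only run the expensive test when the cheap one passes.
// if (BroadPhase.MightCollide(player.Position, player.Radius, enemy.Position, enemy.Radius))
//     DoPixelPerfectCollision(player, enemy);
```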
The same goes for pretty much anything: if you have a 100x100 tilemap but only need to draw 20x10 for the screen, then you should calculate and render just that portion. In Unreal, mappers create invisible boxes; when the player is inside one of these boxes, the engine only draws a certain part of the map and only checks collision within it. Game development is all about tricks to make things run smoothly.
I would like to track (if that is the right word for this) the movement of a point on an object and return the coordinates of the point in each frame to arrays for plotting. How would you go about doing this?
The point on the video is a certain color, so my first effort was to eliminate all other colors, changing the part I wish to follow to black and everything else to white. Doing this left me with some areas in the background which are the same color; I wish to ignore them and just focus on the moving point. I do not know where to even begin with this, or whether what I've been trying so far is even the right approach.
Any help would be greatly appreciated! :)
Try searching for terms like 'tracking', 'morphological', 'computer vision', and 'MATLAB'.
Here's a project that I found that will probably get you started:
http://www.mathworks.com/matlabcentral/fileexchange/28757-tracking-red-color-objects-using-matlab
If your object of interest is a specific color, you can always apply a color filter. To give you a bit of background: I was trying to track not a point on an object, but a moving object in one of my videos (it was a ping-pong video, and my goal was to track the ping-pong ball). My algorithm was simple and fast, as I did not want any of my filters to require heavy computation on a single frame. The basic idea was to apply a color filter. As with other shape filters, if your target is highly similar to the filter, the response will be distinctive enough for you to notice. In other words, if you subtract two signals that are extremely similar, you get roughly 0; otherwise, the difference will be far greater than 0.
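As a language-agnostic sketch of that idea (shown in C# here rather than MATLAB, with an assumed raw RGB frame layout): threshold each pixel by its distance to the target color, then take the centroid of the matching pixels as the tracked point for that frame.

```csharp
// Sketch of the color-filter idea: mark pixels whose color is close to the
// target, then take the centroid of the marked pixels as the tracked point.
// (The frame is assumed to be an RGB byte array, row-major, 3 bytes per pixel.)
public static class ColorTracker
{
    public static (float x, float y)? FindPoint(
        byte[] rgb, int width, int height,
        byte targetR, byte targetG, byte targetB, int tolerance)
    {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int i = (y * width + x) * 3;
                int dr = rgb[i] - targetR, dg = rgb[i + 1] - targetG, db = rgb[i + 2] - targetB;
                // "Subtracting" the target color: a near-zero difference means a match.
                if (dr * dr + dg * dg + db * db <= tolerance * tolerance)
                {
                    sumX += x; sumY += y; count++;
                }
            }
        }
        if (count == 0) return null;              // nothing matched in this frame
        return ((float)sumX / count, (float)sumY / count);
    }
}
```

To ignore the same-colored areas in the background, you can restrict the search to a small window around the point found in the previous frame.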