I'm trying to make an augmented reality application about chemistry using Vuforia and Unity3D. I will physically have a big printed image of the periodic table of elements and some small spherical objects, and I don't know how to determine which element is covered by a sphere when I put it on the periodic table. Does anyone have an idea, or has anyone done this already? I will then associate that chemical element with the sphere.
I think your best bet would be to track not only the position of the printed periodic table as a Vuforia image target, but also the positions of the small spherical objects as Vuforia model targets. Whether that works depends on the exact characteristics of those spherical objects and the degree to which they are suitable for tracking as model targets. Otherwise, consider replacing the spheres with alternative objects, possibly with trackable stickers on them.
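Once both targets are tracked, determining the covered element can be as simple as converting the sphere's position into the table target's local space and snapping it to a grid cell. A minimal sketch, assuming a Unity setup where the image target lies in its local XZ plane (Vuforia's default orientation); all names, grid dimensions, and the table size are assumptions:

    using UnityEngine;

    public class ElementLookup : MonoBehaviour
    {
        public Transform tableTarget;   // the periodic-table image target
        public Transform sphere;        // the tracked spherical object
        public int columns = 18, rows = 9;
        public Vector2 tableSize = new Vector2(1.0f, 0.5f); // printed width/height in target units

        // Returns the (column, row) cell of the table the sphere sits on.
        public Vector2Int GetCellUnderSphere()
        {
            // Sphere position in the table's local frame.
            Vector3 local = tableTarget.InverseTransformPoint(sphere.position);
            // Shift the origin to the top-left corner and normalize to [0, 1].
            float u = local.x / tableSize.x + 0.5f;
            float v = 0.5f - local.z / tableSize.y;
            return new Vector2Int(Mathf.FloorToInt(u * columns), Mathf.FloorToInt(v * rows));
        }
    }

A lookup table keyed by (column, row) would then map the cell to the element symbol.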
I need a 3D equivalent to Collider2D.GetContacts for ground detection in my platformer, but I can't see how to do this neatly. In theory the physics engine should be keeping track of these contact points anyway, so this should be possible without any extra processing, but I can't figure out how. A 3D equivalent to this function simply doesn't seem to exist, so what is the best alternative?
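One standard alternative, sketched below rather than taken from the question: in 3D, the Collision object delivered to OnCollisionStay does expose GetContacts, so contact normals gathered there can drive ground detection. The class name, buffer size, and normal threshold are assumptions:

    using UnityEngine;

    public class GroundDetector : MonoBehaviour
    {
        // Reused buffer to avoid per-frame allocations.
        static readonly ContactPoint[] contactBuffer = new ContactPoint[16];

        public bool IsGrounded { get; private set; }

        void FixedUpdate()
        {
            // Reset before the physics step; OnCollisionStay re-sets it while touching.
            IsGrounded = false;
        }

        // Requires a non-kinematic Rigidbody on this object or the one it touches.
        void OnCollisionStay(Collision collision)
        {
            int count = Mathf.Min(collision.GetContacts(contactBuffer), contactBuffer.Length);
            for (int i = 0; i < count; i++)
            {
                // Treat contacts whose normal points mostly upward as ground.
                if (contactBuffer[i].normal.y > 0.7f)
                {
                    IsGrounded = true;
                    return;
                }
            }
        }
    }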
I'm a complete noob in gamedev, but I've watched a number of videos on generating a 2D array to set up grid-based combat (pathing, obstacles, etc.), and I don't find the purely programmatic approach intuitive or visually friendly.
Is it possible to set up such a level, with obstacles, using multiple tilemaps?
The 1st tilemap would include the whole level zone (I named it "General Tilemap").
The 2nd tilemap would only contain tiles that are marked as collision when read (I named it "Collision Tilemap"); the player wouldn't be able to move onto them.
My logic would be to read the adjacent tiles around the player (sketched in code after this list) and check:
If a tile exists on the General tilemap but not on the Collision tilemap, the player can click it and move there.
If a tile exists on both tilemaps, it is marked as collision and cannot be clicked.
If a tile doesn't exist on either, it is out of bounds and cannot be clicked.
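In code, I imagine the check looking roughly like this (a sketch with hypothetical names; the cell coordinate would come from the click):

    using UnityEngine;
    using UnityEngine.Tilemaps;

    public class TileClickRules : MonoBehaviour
    {
        public Tilemap generalTilemap;   // "General Tilemap": the whole level zone
        public Tilemap collisionTilemap; // "Collision Tilemap": blocked cells

        public bool IsWalkable(Vector3Int cell)
        {
            bool inLevel = generalTilemap.HasTile(cell);   // rule 3: must exist at all
            bool blocked = collisionTilemap.HasTile(cell); // rule 2: must not be collision
            return inLevel && !blocked;                    // rule 1: clickable and movable
        }
    }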
Could you please let me know if this is a valid approach (for smaller levels at least; I won't be making anything large, so scalability is not an issue), or have I gone completely off course and there's a better way to do this properly?
I'm currently stuck at the very first step: reading whether the tile at a coordinate (next to the player) exists or is null, for both tilemaps. Doing my best to figure it out, though.
Thanks!
Managed to check if a tilemap contains a tile at given xy coordinates in the Start function, by finding the relevant Tilemap and using HasTile to read whether it has a value or not. This returns a boolean.

    // Requires: using UnityEngine.Tilemaps;
    Tilemap generalTilemap = GameObject.Find("General Tilemap").GetComponent<Tilemap>();
    hasGTile = generalTilemap.HasTile(playerTileCoord); // playerTileCoord is a Vector3Int cell coordinate
Still not sure if this approach will work for me long-term, especially when I get to the pathfinding algorithm, but we'll see!
I am currently developing an asteroid mining/exploration game with fully deformable, smooth voxel terrain using marching cubes in Unity3D. I want to implement an "element ID" system similar to Minecraft's, in that each type of material has a unique integer ID. This is simple enough to generate, but right now I am trying to figure out a way to render it, so that each individual face represents the element its voxel is assigned to. I am currently using a triplanar shader with a texture array, and I have gotten it set up to work with preset texture IDs. However, I need to be able to pass the element IDs for the entire asteroid into this shader, and this is where my limited shader knowledge runs out. So, I have two main questions:
How do I get data from a 3D array in an active script to my shader, or how else can I sample points from this array?
Is there a better/more efficient way to do this? I thought about creating an array with only the surface vertices and their corresponding IDs, but then I would have trouble sampling them correctly. I also thought about bundling an extra variable in with the vertices themselves, but I don't know if that is even possible. I appreciate any ideas, thanks.
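One way to approach the first question, sketched under assumptions (none of these names come from the asker's project), is to pack the IDs into a Texture3D and let the shader sample it by position:

    using UnityEngine;

    public class ElementIdUploader : MonoBehaviour
    {
        public Material asteroidMaterial; // the triplanar material

        // Packs per-voxel element IDs (0-255) into a 3D texture the shader can sample.
        public void Upload(byte[,,] elementIds)
        {
            int w = elementIds.GetLength(0), h = elementIds.GetLength(1), d = elementIds.GetLength(2);
            var tex = new Texture3D(w, h, d, TextureFormat.R8, false);
            tex.filterMode = FilterMode.Point; // IDs must not be interpolated

            var data = new byte[w * h * d];
            for (int z = 0; z < d; z++)
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        data[x + y * w + z * w * h] = elementIds[x, y, z];

            tex.SetPixelData(data, 0);
            tex.Apply(updateMipmaps: false);
            asteroidMaterial.SetTexture("_ElementIds", tex);
        }
    }

On the shader side, declare a sampler3D, sample it with the fragment's object-space position remapped into [0, 1], and use the result to index the texture array. As for the second question: bundling the ID with the vertices is indeed possible; a spare UV channel written via Mesh.SetUVs (or vertex colors) can carry one ID per vertex, which avoids the 3D texture entirely, though vertices shared between voxels of different materials may then need to be split.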
This question is (mostly) game-engine independent, but I have been unable to find a good answer.
I'm creating a turn-based tile game in 3D space using Unity. The levels will have slopes, occasional non-planar geometry, depressions, tunnels, stairs, etc. Each level is static/handcrafted, so tiles should never move. I need a good way to keep track of tile-specific variables for static levels, and I'd like to verify whether my approaches make sense.
My ideas are:
Create 2 meshes: one is the complex game world, the second is a reference overlay with minimal geometry; it will not be rendered and will only be used for the tiles. I would then overlay the two and use the 2nd mesh as a grid reference.
Hard-code the tiles for each level. While tedious, it will work as a brute-force approach. I would, however, like to avoid this since it's not very easy to deal with visually.
Workaround approach: convert the 3D level to 2D textures and use only 1 mesh.
"Project" a plane down onto the level and record height/slope to minimize complexity. Also not ideal.
Create individual tile objects for each tile manually (non-rendered). The easiest solution I could think of.
Now for the Unity3D-specific question:
Does Unity allow selecting individual verts/triangles/squares of a mesh and adding components, scripts, or variables to those selections; for example, selecting 1 square in the 10x10 Unity plane and telling Unity that square now has a boolean attached to it? This question mostly refers to idea #1 above, where I would use a reference mesh for positional and variable information directly assigned to the mesh. I have a feeling that if I do choose to have a reference mesh, I'd need to have the tiles be individual objects, snap them into place using the reference, then attach the relevant scripts to those tiles.
I have found a ton of excellent resources (like http://www-cs-students.stanford.edu/~amitp/gameprog.html) on tile generation (mostly procedural), but I'm a bit stuck on the basics due to being new to Unity, and I'm not looking for procedural design.
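For what it's worth, Unity cannot attach components to individual vertices or triangles, so per-tile data usually lives in a lookup structure keyed by grid coordinate, in the spirit of idea #5. A minimal sketch (all names and the 1-unit grid are assumptions):

    using System.Collections.Generic;
    using UnityEngine;

    [System.Serializable]
    public class TileData
    {
        public bool walkable = true;
        public float height;       // sampled from the level geometry
    }

    public class TileRegistry : MonoBehaviour
    {
        readonly Dictionary<Vector3Int, TileData> tiles = new Dictionary<Vector3Int, TileData>();

        // Assumes a 1-unit grid; adjust for your tile size.
        public Vector3Int WorldToCell(Vector3 p) =>
            new Vector3Int(Mathf.RoundToInt(p.x), Mathf.RoundToInt(p.y), Mathf.RoundToInt(p.z));

        public TileData GetTile(Vector3 worldPos) =>
            tiles.TryGetValue(WorldToCell(worldPos), out var t) ? t : null;

        public void SetTile(Vector3 worldPos, TileData data) =>
            tiles[WorldToCell(worldPos)] = data;
    }

Snapped, non-rendered tile objects (idea #5) could populate such a registry at level load, keeping the visual editing workflow while the game logic only ever queries the dictionary.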
I'm developing an image warping iOS app with OpenGL ES 2.0.
I have a good grasp on the setup, the pipeline, etc., and am now moving along to the math.
Since my experience with image warping is nil, I'm reaching out for some algorithm suggestions.
Currently, I'm setting the initial vertices at points in a grid-like fashion, which equally divides the image into squares. Then, I place an additional vertex in the middle of each of those squares. When I draw the indices, each square contains four triangles in the shape of an X. See the image below:
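Generating that layout is mechanical; here is a sketch of the vertex and index construction (in C# for consistency with the other snippets in this thread, though the app itself is OpenGL ES on iOS; `n` and all names are assumptions):

    using System.Collections.Generic;

    static class WarpGrid
    {
        public static void Build(int n, out List<(float x, float y)> verts, out List<int> indices)
        {
            verts = new List<(float, float)>();
            // Shared corner vertices: (n + 1) x (n + 1), in [0, 1] texture space.
            for (int y = 0; y <= n; y++)
                for (int x = 0; x <= n; x++)
                    verts.Add((x / (float)n, y / (float)n));

            int centersStart = verts.Count;
            // One extra vertex in the middle of each square.
            for (int y = 0; y < n; y++)
                for (int x = 0; x < n; x++)
                    verts.Add(((x + 0.5f) / n, (y + 0.5f) / n));

            indices = new List<int>();
            for (int y = 0; y < n; y++)
                for (int x = 0; x < n; x++)
                {
                    int bl = y * (n + 1) + x, br = bl + 1;
                    int tl = bl + n + 1, tr = tl + 1;
                    int c = centersStart + y * n + x;
                    // Four counter-clockwise triangles meeting at the center form the X.
                    indices.AddRange(new[] { bl, br, c, br, tr, c, tr, tl, c, tl, bl, c });
                }
        }
    }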
After playing with Photoshop a little, I noticed Adobe uses a slightly more complicated algorithm for its Puppet Warp, but a much simpler algorithm for its standard warp. Which do you think is best for me to apply here, or is it personal preference?
Secondly, when I move a vertex, I'd like to apply a weighted transformation to all the other vertices to smooth out the edges (instead of what I have below, where only the selected vertex is transformed). What sort of algorithm should I apply here?
As each vertex is processed independently by the vertex shader, it is not easy to have vertices influence each other's positions. However, because there are not that many vertices, it should be fine to do the work on the CPU and dynamically update your vertex attributes each frame.
Since what you are looking for is for your surface to act like a rubber sheet as parts of it are pulled, how about going ahead and implementing a dynamic simulation of a rubber sheet? There are plenty of good articles on cloth simulation in full 3D, such as Jeff Lander's; your application could be a simplification of these techniques. I have previously implemented a simulation like this in 3D. I required a force attracting my generated vertices to their original grid locations. You could have a similar force attracting vertices to the pixels at which they are generated before the simulation begins. This would make them spring back to their default state when left alone and would progressively reduce the influence of your dragging at more distant vertices.
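A minimal sketch of that idea (again in C# rather than the app's Objective-C; every constant and name here is an assumption): each vertex is pulled toward its rest position by a damped spring, and the drag force is weighted by distance from the grabbed vertex so its influence falls off smoothly:

    using System;
    using System.Numerics;

    class RubberSheet
    {
        readonly Vector2[] rest;  // original grid positions
        readonly Vector2[] pos;   // current positions (upload these as vertex attributes)
        readonly Vector2[] vel;   // per-vertex velocities

        const float Stiffness = 30f;   // spring pulling back toward rest
        const float Damping   = 5f;    // velocity damping
        const float Radius    = 0.25f; // drag influence radius, in texture coordinates

        public RubberSheet(Vector2[] restPositions)
        {
            rest = (Vector2[])restPositions.Clone();
            pos  = (Vector2[])restPositions.Clone();
            vel  = new Vector2[restPositions.Length];
        }

        public void Step(float dt, int grabbed, Vector2 dragTarget)
        {
            for (int i = 0; i < pos.Length; i++)
            {
                // Spring force back toward the rest position, plus damping.
                Vector2 force = (rest[i] - pos[i]) * Stiffness - vel[i] * Damping;

                // Drag influence with a smooth falloff around the grabbed vertex.
                float d = Vector2.Distance(rest[i], rest[grabbed]);
                float w = MathF.Max(0f, 1f - d / Radius);
                force += (dragTarget - pos[i]) * (Stiffness * w * w);

                vel[i] += force * dt;
                pos[i] += vel[i] * dt;
            }
        }
    }

Each frame you would call Step with the current drag target and re-upload pos as the vertex attribute buffer; once the drag is released, the spring term alone returns the sheet to its rest grid.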