How to handle domino draw game tile placement in Unity?

I'm planning to make a domino draw game, but I'm not quite sure how to handle the tile placement so that each tile snaps into place against a matching value when a player sets down a domino. Could anyone give me an idea of how to achieve this?
[Example image]

This question is probably too broad. You really need to make an attempt at implementing a solution yourself and then come back if you run into a specific problem. However, if you really have no idea where to begin, you broadly need to do a few things:
Create GameObjects for your dominoes and attach scripts to them
which define their numbers, set their corresponding textures, etc.
Create an invisible play surface which is made up of a grid representing places where tiles can be put down.
Add code to handle picking up, moving and putting down your dominoes.
In the code that handles moving and/or putting down dominoes:
check whether the grid location where the domino is going to be placed is valid (e.g. adjacent to another domino),
then check the values of the dominoes in adjacent grid cells:
for each adjacent domino, check its orientation,
depending on its orientation relative to the domino you are placing, check the values at the nearest end or both ends of that domino,
if an adjacent domino's value matches a value on the domino being moved, allow it to be placed; otherwise don't allow it to be placed (a minimal sketch of this check follows at the end of this answer).
In the above example, "placing" a domino would simply mean moving it to a point on the play surface grid in either a vertical or horizontal orientation.
This is a very broad overview and there are plenty of gotchas that I haven't covered which may or may not give you trouble.
Edit: You could also do this without using a grid but it would be a little trickier when it comes to finding and inspecting adjacent dominoes.
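Here is a minimal sketch of the grid check described above, in Unity C#. All names are hypothetical, each domino is simplified to two grid cells with one value per cell, and real rules (open ends only, doubles, etc.) are left out:

    using System.Collections.Generic;
    using UnityEngine;

    // The board is a dictionary from grid cells to placed half-tiles. A tile
    // half may go on an empty cell only if a neighbouring cell exposes a
    // matching value.
    public class DominoBoard : MonoBehaviour
    {
        readonly Dictionary<Vector2Int, int> placedValues = new Dictionary<Vector2Int, int>();

        static readonly Vector2Int[] neighbours =
        {
            Vector2Int.up, Vector2Int.down, Vector2Int.left, Vector2Int.right,
        };

        public bool CanPlaceHalf(Vector2Int cell, int value)
        {
            if (placedValues.ContainsKey(cell))
                return false; // cell already occupied

            foreach (var offset in neighbours)
            {
                if (placedValues.TryGetValue(cell + offset, out int adjacent)
                    && adjacent == value)
                    return true; // an adjacent half-tile exposes a matching value
            }
            return false;
        }

        public void PlaceHalf(Vector2Int cell, int value) => placedValues[cell] = value;
    }

Snapping then just means converting the grid cell back to a world position (e.g. cell coordinates times the cell size) and moving the domino's transform there.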

Related

Displaying ARKit nodes in relation to real objects

I am trying to draw a box that helps someone understand the dimensions of an item, but I keep hitting the issue that, because I first need to recognize a plane and my physical item sits on top of that plane, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not having to first detect a plane; it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might be "solving" your original problem—help someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit) you can add an invisible 3D object that has the same dimensions as the real-world object, so the render engine will "know" that some parts of your box frame are behind this (to the user invisible) 3D object. Therefore, you can tell it not to draw those parts of your box frame, which will give the illusion (borrowing from Apple here) that your soda can has the box around it.
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: After detecting the desk surface, place a semi-transparent 3D object over the soda can and then resize it (gestures/buttons, your choice) until it roughly matches the dimensions of the soda can. Confirm when you're done, and don't draw a texture on it at all; just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
Good luck; there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
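For reference, here is a minimal sketch of the invisible-occluder idea from Option 1. It's written in C# to match the other code in this thread, assuming Xamarin.iOS-style SceneKit bindings (in native Swift/ObjC the key call is the material's colorBufferWriteMask):

    using SceneKit;

    static class Occluder
    {
        // Builds a box that writes depth but no color: it is invisible to the
        // user, yet anything rendered behind it (the green box frame) is culled.
        public static SCNNode MakeOccluder(float width, float height, float length)
        {
            var geometry = SCNBox.Create(width, height, length, 0);
            var material = new SCNMaterial();
            material.ColorBufferWriteMask = SCNColorMask.None; // assumed binding for colorBufferWriteMask = []
            geometry.FirstMaterial = material;

            var node = SCNNode.FromGeometry(geometry);
            node.RenderingOrder = -1; // draw before other content so its depth is written first
            return node;
        }
    }

Scale and position the occluder to match the soda can (Option 1's resize step) and add it to the scene alongside the box frame.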
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.

At some positions the character cannot see the whole mesh in Unity

I want to show a transparent building in our project. I did that by setting the material of the mesh to "Transparent/Diffuse". However, there is a visibility problem with the mesh of the building: from some positions I can only see two or three sides of the cuboid (the transparent block, i.e. the building), and if I adjust my character's position I can see the whole cuboid. I googled for similar questions; someone mentioned the camera's view frustum, suggesting the object has to be inside the camera's frustum for the whole mesh to be visible.
Can anyone give me some suggestions? I feel like it might be something about the way I build my mesh for the building, but from some positions I can see the whole cuboid.
I've solved this problem. It is just about the way you construct the mesh. Basically, for the cuboid, I reconstructed the mesh (for the front face) this way:
triangles[0] = topleft;     // first triangle: topleft, topright, bottomright
triangles[1] = topright;
triangles[2] = bottomright;
triangles[3] = bottomright; // second triangle: bottomright, bottomleft, topleft
triangles[4] = bottomleft;
triangles[5] = topleft;
Note: this is just the front side; the other sides should be constructed in the same way.
Besides, in order to show the mesh when the user enters the block, you also have to construct the inside of the block in the same way, with the winding reversed so the faces are visible from within.
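A minimal Unity C# sketch of one face built this way, with a reversed-winding copy so it is also visible from inside (vertex positions are just an example):

    using UnityEngine;

    [RequireComponent(typeof(MeshFilter))]
    public class QuadBuilder : MonoBehaviour
    {
        void Start()
        {
            var mesh = new Mesh();
            mesh.vertices = new[]
            {
                new Vector3(-0.5f,  0.5f, 0f), // 0: top-left
                new Vector3( 0.5f,  0.5f, 0f), // 1: top-right
                new Vector3( 0.5f, -0.5f, 0f), // 2: bottom-right
                new Vector3(-0.5f, -0.5f, 0f), // 3: bottom-left
            };
            mesh.triangles = new[]
            {
                0, 1, 2,  2, 3, 0, // front face: visible from outside
                2, 1, 0,  0, 3, 2, // reversed winding: visible from inside
            };
            mesh.RecalculateNormals();
            GetComponent<MeshFilter>().mesh = mesh;
        }
    }

The reversed triangles are what keep the face visible once the camera is inside the block, since Unity culls back faces by default.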

Objective-C: Drawing a free-form shape on top of an image

I am working on an iOS 5 iPhone project where users can choose an image on their device, then trace an object inside the picture (say, trace an apple out of a picture of a fruit basket). The picture then needs to be uploaded with the "tagged" object so it can be pulled down later. Other people will then pull down the image and try to find where the tagged object is in the picture (think "Where's Waldo?").
I have been trying to figure out the best way of tracing the object. Before, I had a user press the top left, top right, bottom left, and bottom right points around the object and create a square view around the object. The info for that view was uploaded and then pulled down for the user to find the object. The downside is that all objects are obviously not squares/rectangles so I need to do a free form shape.
I was thinking of allowing the user to draw over the object, and then somehow I need to be able to tell what is inside of the trace (for example, inside a circle that they traced). One problem I foresee is making sure the trace they made is closed so I can fill in the shape (which is a whole other problem).
Any advice welcome on the best way of starting this.
Thanks!
UIBezierPath might be a very useful friend here. It allows you to create any shape you need, and it supports both drawing and hit-detection. I recently did an implementation for a storybook where the user could trace out a shape with their finger, freeze it, and then use the shape for tap detection.
The basic idea is this:
When your finger touches the screen, start recording positions. Discard any positions that are too close to the previous position (e.g., only record a point if it is more than some minimum distance from the last recorded point). While doing this, you can draw the UIBezierPath so you can see what you are tracing out. Modify the UIBezierPath by adding points to it, instead of recreating it every time.* When you lift your finger, close the bezier path. Quite simple.
Now, this will result in a polygon (ie, straight edges). If your min-distance is low enough or if you are using it for hit-detection (as you say), it won't really matter. However, if you want to smooth the path, you have to use the curve-to methods, which slightly complicate it - but should you wish to follow up on this more, read up on splines and spline generation from a point series.
*note: otherwise you'll get lag while drawing large shapes because recreating a bezier path from an increasingly large series of points gets expensive. Modifying the existing path makes it much, much, much faster.
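The answer above is about UIKit's UIBezierPath; for illustration, here is the same record-and-hit-test idea as a self-contained C# sketch (matching the other code in this thread, all names hypothetical):

    using System.Collections.Generic;

    public class TraceRecorder
    {
        readonly List<(float x, float y)> points = new List<(float x, float y)>();
        const float MinDistance = 10f; // tune: larger = fewer points, coarser shape

        // Record a sample only if it is far enough from the last recorded point.
        public void AddSample(float x, float y)
        {
            if (points.Count > 0)
            {
                var (lx, ly) = points[points.Count - 1];
                float dx = x - lx, dy = y - ly;
                if (dx * dx + dy * dy < MinDistance * MinDistance)
                    return; // too close; discard
            }
            points.Add((x, y));
        }

        // Even-odd point-in-polygon test against the closed trace,
        // the equivalent of UIBezierPath's containsPoint:.
        public bool Contains(float x, float y)
        {
            bool inside = false;
            for (int i = 0, j = points.Count - 1; i < points.Count; j = i++)
            {
                var (xi, yi) = points[i];
                var (xj, yj) = points[j];
                if ((yi > y) != (yj > y) &&
                    x < (xj - xi) * (y - yi) / (yj - yi) + xi)
                    inside = !inside;
            }
            return inside;
        }
    }

Closing the path (the lift-finger step) is implicit here: the polygon test treats the last recorded point as joined back to the first.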

Implementing network (graph) visualization in iPhone SDK

I am starting a new project with the iPhone SDK to represent and manipulate graphs (in the sense of networks, not bar/pie charts).
I want to be able to interactively add/move/delete nodes and arcs between them, as well as to pan/zoom the whole representation.
As far as I can see, I have two major options:
First option: A single view handles it all.
Second option: Each element (node or arc) has its own associated view.
I think that pros/cons of each option are:
The first option makes the drawing easy (you just populate the data structure and draw the Quartz 2D paths). However, detection of touches is problematic, since you have to decide whether a given touch point hits a node (this is easy, but you have to poll all nodes) or an edge (this is not so easy; see the sketch below, it essentially comes down to a point-to-segment distance test).
The second option makes it easy to implement the detection and handling of touches. However, the drawing part could be problematic, since you have to rotate and resize all your views; even worse, if you drag a node you have to tell all the views holding an edge incident to that node to resize/rotate themselves.
Which of the two options would you recommend?
Thanks in advance,
Biel
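Regarding the "not so easy" edge test in the first option: it usually comes down to the distance from the touch point to the segment between two node centers. A minimal C# sketch (tolerance and names are illustrative):

    using System;
    using System.Numerics;

    static class EdgeHitTest
    {
        // True if point p lies within `tolerance` of the segment from a to b.
        public static bool HitsEdge(Vector2 p, Vector2 a, Vector2 b, float tolerance)
        {
            Vector2 ab = b - a;
            float lengthSq = ab.LengthSquared();
            if (lengthSq == 0f) // degenerate edge: both endpoints coincide
                return Vector2.Distance(p, a) <= tolerance;

            // Project p onto the segment and clamp to its endpoints.
            float t = Math.Clamp(Vector2.Dot(p - a, ab) / lengthSq, 0f, 1f);
            Vector2 closest = a + t * ab;
            return Vector2.Distance(p, closest) <= tolerance;
        }
    }

With that, the single-view approach's touch handling is a linear scan over nodes (point-in-circle) and edges (the test above), which is usually fast enough for interactively sized graphs.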

How does one interact with OBJ-based 3D models on iPhone?

I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's "The Start of a WaveFront OBJ File Loader Class". However, I need some means of detecting which coordinates I have selected within a displayed model. Usually one model is displayed at a time, but sometimes there will be two or more on the screen, and I want to set up an NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone: A Simple Tutorial" and would like to model the behavior of my program after his.
This is my current line of logic:
Set up a means to detect where a user has touched the screen
Have those coordinates compared with the current coordinates of an OBJ-based model
If they match, indicate the touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. Currently the model can be scaled, so I will most likely need to follow this scaling.
Also note, I don't need to move the model around on the screen. Just detect when it's been touched whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available on iPhone by default, but you can look up implementations of it; http://www.mesa3d.org/ has an open-source implementation.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
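Once gluUnProject has given you the touch point on the near and far planes, picking reduces to a ray test against each model's bounding volume. A minimal C# sketch of a ray-versus-bounding-sphere test (sphere center and radius assumed to come from your OBJ loader, scaled with the model):

    using System.Numerics;

    static class Picking
    {
        // nearPoint/farPoint: the touch unprojected at window z = 0 and z = 1.
        public static bool RayHitsSphere(Vector3 nearPoint, Vector3 farPoint,
                                         Vector3 center, float radius)
        {
            Vector3 dir = Vector3.Normalize(farPoint - nearPoint);
            Vector3 toCenter = center - nearPoint;

            float t = Vector3.Dot(toCenter, dir); // distance along the ray to the closest approach
            if (t < 0f)
                return false; // sphere is behind the ray origin

            Vector3 closest = nearPoint + t * dir;
            return (center - closest).LengthSquared() <= radius * radius;
        }
    }

Run this against each displayed model and pick the hit with the smallest t; that is the object to report through your NSNotificationCenter.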