HoloLens Spatial Mapping Collision - Mixed Reality Toolkit - unity3d

I want to put a hologram poster in AR, on the HoloLens 1, on a wall using the Mixed Reality Toolkit. But every time I try to place a hologram onto the spatial mapping it slides right through it and vanishes behind the spatial mapping mesh.
What did I miss?

My understanding is that you are implementing tap-to-place, which lets users move objects and place them on real-world surfaces, and that your issue is that when you place the object, it does not land on the surface as expected.
In MRTK v2, a Solver is usually a good way to implement tap-to-place.
The SurfaceMagnetism Solver casts rays against surfaces in the world and aligns the object to the surface it hits.
So you can add a SurfaceMagnetism component to the object and write a script with two methods: one that turns the solver on when the object is selected, and one that turns it off when the object is deselected. When the solver is disabled, the object stays at its last position.
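One thing to check for the "slides right through the wall" symptom: the SurfaceMagnetism component's Magnetic Surfaces layer mask must include the Spatial Awareness layer, otherwise its raycasts never hit the spatial mapping mesh; the poster's own collider should also not be on one of those layers, or the solver can hit the poster itself. Below is a minimal sketch of the select/deselect toggle, assuming MRTK v2 pointer events and placeholder names; the poster needs a collider to receive the tap, and Unity adds the required SolverHandler automatically when SurfaceMagnetism is added.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

// Sketch only: toggles the SurfaceMagnetism solver each time the poster is tapped.
// Assumes the poster has a collider so it can receive MRTK pointer events.
[RequireComponent(typeof(SurfaceMagnetism))]
public class TogglePlacement : MonoBehaviour, IMixedRealityPointerHandler
{
    private SurfaceMagnetism surfaceMagnetism;
    private bool isPlacing;

    private void Awake()
    {
        surfaceMagnetism = GetComponent<SurfaceMagnetism>();
        surfaceMagnetism.enabled = false; // start in the "placed" state
    }

    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        // First tap: start following the ray onto the spatial mapping mesh.
        // Second tap: disable the solver so the poster stays where it is.
        isPlacing = !isPlacing;
        surfaceMagnetism.enabled = isPlacing;
    }

    public void OnPointerDown(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
}
```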

Related

How to find a covered element in an image using Vuforia?

I'm trying to make an augmented reality application about chemistry using Vuforia and Unity3D. I will physically have a big printed image of the periodic table of elements and some small spherical objects, and I don't know how to determine which element is covered by a sphere when I put it on the periodic table. Does anyone have an idea, or has anyone done this already? I will then associate that chemical element with the sphere.
I think your best bet would be to track not only the position of the printed periodic table as a Vuforia image target, but also the positions of the 'small spherical objects' as Vuforia model targets. Whether or not that would work depends on the exact characteristics of those spherical objects and the degree to which they are suitable for tracking as model targets. Otherwise, consider replacing the spherical objects with alternative objects, possibly with trackable stickers on them.
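Once both targets are tracked, one way to find the covered element is simple bookkeeping in Unity: convert the sphere's world position into the image target's local space and map it to a grid cell. This is only a hedged sketch; the grid dimensions, printed size, and the assumption that the image target lies in its local X/Z plane are placeholders you would need to adjust.

```csharp
using UnityEngine;

// Hypothetical helper: maps the tracked sphere's position onto a grid cell of
// the tracked periodic-table image target. All sizes/counts are placeholders.
public class ElementLookup : MonoBehaviour
{
    public Transform periodicTableTarget;  // Vuforia image target transform
    public Transform sphereTarget;         // tracked sphere (model target) transform
    public int columns = 18;
    public int rows = 9;
    public Vector2 tableSizeMeters = new Vector2(1.0f, 0.5f); // printed width/height

    public Vector2Int GetCoveredCell()
    {
        // Sphere position expressed in the image target's local space.
        Vector3 local = periodicTableTarget.InverseTransformPoint(sphereTarget.position);

        // Image targets are centered, so shift the origin to the top-left corner
        // (assumes the image lies in the target's local X/Z plane).
        float x = local.x + tableSizeMeters.x * 0.5f;
        float y = (tableSizeMeters.y * 0.5f) - local.z;

        int column = Mathf.Clamp(Mathf.FloorToInt(x / (tableSizeMeters.x / columns)), 0, columns - 1);
        int row = Mathf.Clamp(Mathf.FloorToInt(y / (tableSizeMeters.y / rows)), 0, rows - 1);
        return new Vector2Int(column, row);
    }
}
```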

How can I change hand-rays in my MRTK v2 project for HoloLens 2 to parabolic instead of linear?

My HoloLens 2 project has content arranged such that I cannot target its colliders with the existing hand-rays. I used to target my content with the head-gaze, but with hand-rays originating lower on the body it is more difficult to reach the content that I want to select. I believe I would benefit from a parabolic selection ray, similar to those used when teleporting in Mixed Reality, to reach surfaces above the participant.
The primary method of interacting with my content would be via a parabolic ray. There are instances within my application where I might change modality to focus on a menu system from close or far, and when I am far I'd like to change to a linear ray. So, having this capability to change the type of ray exposed via code would be preferred.
My project is employing the MRTK v2, and the standard linear hand-rays are functioning.
I would like to be able to change the type of ray being used in the Unity inspector, and to be able to change the style via code during run-time. I'd like to have control over the arc of the ray, as the scale of my content may impact the need for a different arc and min/max distance.
You can modify the DefaultControllerPointer prefab to use a Physical Parabolic Line Data Provider instead of a Bezier Line Data Provider. This will change the line used by the pointer to be parabolic.
In the original answer, before/after screenshots of the prefab's inspector showed the change: the Bezier Line Data Provider components (marked in pink) were removed and the Physical Parabolic Line Data Provider components (marked in green) were added.
You will also want to increase the line cast resolution of the pointer from 2 to something larger; this means the ray used to query what you have hit will be sampled at a higher resolution.
And you may want to increase the resolution of the MR Line Renderer itself.
(The original answer included an animated demo of the parabolic hand pointer.)

How to implement Spring joint in Unity in the following manner?

I need help regarding the spring joint. I am trying to make a game where, when I drag back and release, the gameObject should fly like in Angry Birds (but slightly different). The difference is that I don't want the gameObject connected to the spring joint to move when I drag back; but when I release, the gameObject should move as if I had dragged it back.
You could create a custom spring. Spring forces are proportional to the displacement from the resting point, so your force would be roughly:
// pseudocode
Vector3 displacement = restingPointPosition - mousePosition;
rigidbodyOfObject.AddForce(springCoefficient * displacement);
So, to avoid moving the object: calculate the mouse position relative to your object (you'll probably need to raycast against a plane during the mouse drag) and compute the force as if the mouse position were the object's position, but don't actually move the object. Then, on release, apply the spring force to the actual object.
You might need to add a coefficient to scale the force, which would represent the tension of the spring.
Then you'd just need a script to turn off the spring force a short time after releasing it.
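Here is a rough sketch of how that could look as a MonoBehaviour. All names and values are placeholders, it applies the spring force once as an impulse on release (instead of turning a continuous force off after a short delay), and OnMouseDrag/OnMouseUp require the object to have a collider.

```csharp
using UnityEngine;

// Sketch of the "invisible slingshot" described above: the object never moves
// while dragging; on release it gets a force proportional to the drag distance.
[RequireComponent(typeof(Rigidbody))]
public class VirtualSpringLauncher : MonoBehaviour
{
    public float springCoefficient = 10f;   // "tension" of the spring
    public Transform restingPoint;          // anchor the spring pulls toward

    private Rigidbody body;
    private Vector3 dragPosition;
    private bool dragging;

    private void Awake()
    {
        body = GetComponent<Rigidbody>();
    }

    private void OnMouseDrag()
    {
        // Project the mouse onto a camera-facing plane through the resting point
        // so the drag position lives in world space; the object itself never moves.
        Plane plane = new Plane(-Camera.main.transform.forward, restingPoint.position);
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (plane.Raycast(ray, out float distance))
        {
            dragPosition = ray.GetPoint(distance);
            dragging = true;
        }
    }

    private void OnMouseUp()
    {
        if (!dragging) return;
        dragging = false;

        // Spring force: proportional to how far the mouse was pulled away from
        // the resting point, pointing back toward the resting point.
        Vector3 displacement = restingPoint.position - dragPosition;
        body.AddForce(springCoefficient * displacement, ForceMode.Impulse);
    }
}
```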

Tile Grid Data storage for 3D Space in Unity

This question is (mostly) game engine independent but I have been unable to find a good answer.
I'm creating a turn-based tile game in 3D space using Unity. The levels will have slopes, occasional non-planar geometry, depressions, tunnels, stairs, etc. Each level is static/handcrafted, so tiles should never move. I need a good way to keep track of tile-specific variables for static levels, and I'd like to verify whether my approaches make sense.
My ideas are:
1. Create 2 meshes - one is the complex game world, the second is a reference mesh overlay with minimal geometry; it would not be rendered and would only be used for the tiles. I would then overlay the two and use the second mesh as a grid reference.
2. Hard-code the tiles for each level. While tedious, it will work as a brute-force approach. I would, however, like to avoid this since it's not very easy to deal with visually.
3. Workaround approach - convert the 3D level to 2D textures and only use one mesh.
4. "Project" a plane down onto the level and record height/slope to minimize complexity. Also not ideal.
5. Create individual (non-rendered) tile objects for each tile manually. The easiest solution I could think of.
Now for the Unity3D specific question:
Does Unity allow selecting individual verts/triangles/quads of a mesh and attaching components, scripts, or variables to those selections? For example, selecting one square of the default 10x10 Unity plane and telling Unity that square now has a boolean attached to it? This mostly refers to idea #1 above, where I would use a reference mesh to hold positional information and variables assigned directly to the mesh. I have a feeling that if I do choose to have a reference mesh, I'd need to make the tiles individual objects, snap them into place using the reference, and then attach the relevant scripts to those tiles.
I have found a ton of excellent resources (like http://www-cs-students.stanford.edu/~amitp/gameprog.html) on tile generation (mostly procedural), but I'm a bit stuck on the basics since I'm new to Unity, and I'm not looking for procedural design.
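On the direct Unity question: Unity does not let you attach components, scripts, or variables to individual vertices or triangles of a mesh; components only live on GameObjects. So per-tile data either goes on separate (possibly non-rendered) tile objects, as in idea #5, or into a plain data structure keyed by a grid coordinate. A rough sketch of the latter, with placeholder names and fields:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: per-tile variables stored in a dictionary keyed by a grid coordinate,
// instead of being attached to mesh sub-elements (which Unity does not support).
[System.Serializable]
public class TileData
{
    public bool walkable = true;
    public float height;
    public int movementCost = 1;
}

public class TileGrid : MonoBehaviour
{
    public float cellSize = 1f;

    private readonly Dictionary<Vector3Int, TileData> tiles =
        new Dictionary<Vector3Int, TileData>();

    // Convert a world position (e.g. a handcrafted tile marker) to a grid key.
    public Vector3Int ToCell(Vector3 worldPosition)
    {
        return new Vector3Int(
            Mathf.RoundToInt(worldPosition.x / cellSize),
            Mathf.RoundToInt(worldPosition.y / cellSize),
            Mathf.RoundToInt(worldPosition.z / cellSize));
    }

    public TileData GetOrCreateTile(Vector3Int cell)
    {
        if (!tiles.TryGetValue(cell, out TileData data))
        {
            data = new TileData();
            tiles[cell] = data;
        }
        return data;
    }
}
```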

Not able to calibrate camera view to 3D Model

I am developing an app which uses LK for tracking and POSIT for pose estimation. I can obtain the rotation matrix and projection matrix and the tracking works well, but my problem is that I cannot position the 3D object properly: it does not fit into the place where it should.
Can someone help me with this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
You must also check whether the intrinsic camera parameters are correct. Even a small error in the estimated field of view can cause trouble when trying to reconstruct 3D space, and from your description it seems that the problem is bad fov (field of view) angles.
You can try to measure them, or try feeding half or double the value to your algorithm.
There are two conventions for fov: half-angle (from the image center to the top or to the left) and full-angle (from bottom to top, or from left to right). Maybe you just mixed them up, using the full angle where a half angle is expected, or vice versa.
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function returns the object pose in camera coordinates, i.e. it finds the object's pose in a 3D space where the camera sits at (0, 0, 0). In most cases you need to invert that transform to get the camera pose in the world: R' = R^T, T' = -R^T * T.
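In case it helps to make the R/T bookkeeping concrete, here is a small, hypothetical Unity-side sketch that builds the 4x4 object-to-camera matrix from R and T and inverts it to get the camera pose. Names and types are placeholders, and it ignores the handedness difference between OpenCV's and Unity's coordinate systems.

```csharp
using UnityEngine;

// Hypothetical helper: assemble [R|T] from a 3x3 rotation and a translation,
// then invert it to obtain the camera pose in world/object coordinates.
public static class PoseUtils
{
    public static Matrix4x4 BuildObjectToCamera(float[,] R, Vector3 T)
    {
        var m = Matrix4x4.identity;
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                m[row, col] = R[row, col];
        m[0, 3] = T.x;
        m[1, 3] = T.y;
        m[2, 3] = T.z;
        return m;
    }

    public static Matrix4x4 CameraInWorld(float[,] R, Vector3 T)
    {
        // The inverse of [R|T] is [R^T | -R^T * T].
        return BuildObjectToCamera(R, T).inverse;
    }
}
```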