How to make holograms move relative to the user's head in HoloLens 2? - unity3d

I am creating a simple app on HoloLens 2 using Unity. I create two game objects and want them to move based on the user's head movement, i.e. they do not stay still at one place in space but move relative to the user's head. However, I am not sure how to enable this behavior. Can someone please help?

The Orbital Solver provided in MRTK can implement this idea without even writing any code. It can lock the object to a specified position and offset it from the player. It is recommended to refer to SolverExamples.unity, located at /MRTK/Examples/Demos/Solvers/Scenes, to get started with the Solver components.
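If you do want to set this up from code instead, below is a minimal sketch assuming MRTK v2's solver API (SolverHandler plus Orbital); property names can differ between MRTK versions, and the offset values are only illustrative:

```csharp
using UnityEngine;
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;

// Attach to a hologram to keep it at a fixed offset from the user's head.
public class FollowHead : MonoBehaviour
{
    void Start()
    {
        // The SolverHandler tells the solver what to track (the head/camera here).
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.Head;

        // Orbital keeps the object at LocalOffset relative to the tracked head.
        var orbital = gameObject.AddComponent<Orbital>();
        orbital.LocalOffset = new Vector3(0.2f, 0.1f, 1.0f); // right, up, forward (meters)
        orbital.OrientationType = SolverOrientationType.FollowTrackedObject;
    }
}
```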

If I understand you correctly, you want to have an object at a specific distance from the HoloLens 2 at any given time,
so that (for example) a cube is always in the upper right corner of the user's view.
If that is the case, you can position the desired object as a child of the Main Camera (located under the MixedRealityPlayspace) in the Hierarchy view.
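If you prefer to do the parenting from a script, here is a small sketch (the offset values are only illustrative):

```csharp
using UnityEngine;

public class AttachToCamera : MonoBehaviour
{
    void Start()
    {
        // Parent to the main camera so the object moves with the user's head.
        transform.SetParent(Camera.main.transform, worldPositionStays: false);
        // Illustrative offset: right, up, and in front of the camera (meters).
        transform.localPosition = new Vector3(0.15f, 0.1f, 1.0f);
    }
}
```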

Related

HoloLens companion map

I am implementing a "companion map" for a HoloLens application using Unity and Visual Studio. My vision is for a small rectangular map to be affixed to the bottom right of the HoloLens view, and to follow the HoloLens user as they move about, much like the display of a video game.
At the moment my "map" is a .jpeg made into a material and put on an upright plane. Is there a way for me to affix the plane such that it is always in the bottom right of the user's view, as opposed to being fixed in the 3D space that the user moves through?
The Orbital Solver in MRTK can implement this idea without even writing any code. It can lock the map to a specified position and offset it from the player.
To use it, what you need to do is:
Add the Orbital script component to your companion map.
Modify the Local Offset and World Offset properties to keep the map in the bottom right of the user's view.
Set the Orientation Type to Face Tracked Object.
Besides, the SolverExamples scene provided by the MRTK v2 SDK is an excellent starting point to become familiar with the Solver components.
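As a rough code equivalent of the steps above (again a sketch assuming MRTK v2's Orbital API; the exact offsets for "bottom right" depend on your scene):

```csharp
using UnityEngine;
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;

// Attach to the companion map to pin it to the lower right of the view.
public class CompanionMapSetup : MonoBehaviour
{
    void Start()
    {
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.Head;

        var orbital = gameObject.AddComponent<Orbital>();
        // Illustrative offset: down and to the right of the view center.
        orbital.LocalOffset = new Vector3(0.3f, -0.25f, 1.0f);
        // Keep the map facing the user, matching "Face Tracked Object".
        orbital.OrientationType = SolverOrientationType.FaceTrackedObject;
    }
}
```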

How to get the distance from a Daydream controller to a pointed game object in Unity?

Following the google-vr sample, I managed to add a camera and controller to my scene.
The next thing I need is to get the distance between my controller and any pointed game object in the scene.
After searching for a while, I cannot find any tutorial or information on how to get the distance.
So, is there any up-to-date working tutorial on how to do this? (Many tutorials on the internet are outdated since Google updates its API so frequently.)
Or is it actually a simple task, i.e. can I get the value from GvrPointerInputModule.Pointer / GvrLaserPointer / some other GVR class?
Thanks in advance~
You need to do a raycast from the controller and measure the distance between the hit location and the origin of the raycast. I think Unity raycasts can return this distance built in.
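For example, with a plain physics raycast (a generic Unity sketch, not Daydream-specific; it assumes this script sits on the controller object):

```csharp
using UnityEngine;

// Casts a ray forward from the controller's transform each frame.
public class ControllerRaycastDistance : MonoBehaviour
{
    void Update()
    {
        // RaycastHit.distance is the built-in origin-to-hit distance.
        if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit))
        {
            Debug.Log($"Hit {hit.collider.name} at {hit.distance:F2} m");
        }
    }
}
```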
Just as I suspected, GvrLaserPointer is the answer.
If its CurrentRaycastResult.gameObject is not null, the laser is intersecting with something. We can then get the intersection point from CurrentRaycastResult.worldPosition.
Using this point, we can easily calculate the distance.
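Putting that together, here is a small sketch (assuming the GoogleVR SDK and the GvrPointerInputModule.Pointer accessor mentioned in the question):

```csharp
using UnityEngine;

// Reads the pointer's current raycast result and measures the distance
// from the pointer to the intersection point.
public class PointerDistance : MonoBehaviour
{
    void Update()
    {
        GvrBasePointer pointer = GvrPointerInputModule.Pointer;
        if (pointer == null) return;

        var result = pointer.CurrentRaycastResult;
        if (result.gameObject != null) // the laser is intersecting something
        {
            float distance = Vector3.Distance(
                pointer.transform.position, result.worldPosition);
            Debug.Log($"Pointing at {result.gameObject.name}, {distance:F2} m away");
        }
    }
}
```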
Note: just in case anyone is failing with this method like I did before, check your raycasting setup. Make sure that the Raycaster Event Mask in GvrPointerPhysicsRaycaster includes only the desired layers. And if you have any canvas in screen space, check its Blocking Mask in the Graphic Raycaster. It is set to Everything by default, so your pointer may keep intersecting with the canvas, resulting in a "weird" intersection point. This was the cause of my problem; to fix it, I selected Nothing for the Blocking Mask, and voilà.

How do I force orbital movement around a point in space, while maintaining forward/backward capability on a player?

Long-winded question out of the way, I'll provide a diagram of what I am going for:
The red square represents the character, the blue rectangle represents the camera, the green dot represents the center of the "stage", and the black circle is the stage itself.
What I desire is to essentially lock the player's movement around the "center" of the stage, so that anytime you move left or right you are more or less rotating around said center. However, I also want the player to be able to move forwards and backwards to/from the center as well. Keep in mind I want the camera to always stay directly behind the player. I have tried many different methods, and the latest is the following:
I took a default actor, attached a spring arm, attached a child actor to that (which gets possessed to become the playable character), attached another spring arm, and finally attached the camera to that. I then added Blueprint code to the first spring arm so that it was the one controlled by the left/right controls. However, upon hitting Play, the only thing that moves is the camera, and it can only move forwards and backwards.
I'm admittedly pretty new to Unreal Blueprints, so any help would be appreciated.
Alright, I figured it out.
Here's the setup needed if anyone else wants something similar.
For the player themselves, you'll need something like this:
The important thing is to center the root mesh where you want to rotate around. The spring arm's Target Arm Length is what drives the player mesh's movement, giving the illusion that you are physically moving the character. The second spring arm isn't necessary unless you want finer control over the camera-to-player distance.
For the rotation Blueprint, you'll need this:
The target is whatever you named the root mesh (mine was called Center). Drag and drop it from the hierarchy.
For the forward/backward movement, you'll need this:
The target is whatever you named the spring arm (I left mine as the default "SpringArm"). Again, just drag and drop it from the hierarchy.
Adjustments in Project Settings:
Yes, my inputs are backwards from what you'd expect. I felt it was quicker to reverse the inputs than to track down whatever was causing the movement to be backwards in the first place (it's probably just the sphere orientation). Also, you'll notice I have the W and S inputs set to 5 and -5 instead of 1 and -1; the movement was too slow otherwise. I'm sure there's a fix that doesn't involve changing the input axis scale, but honestly I won't have a reason to alter the values at any point in my project. If I ever do need to, I'm sure there's a way to change the values from within Blueprints anyway.
End result:
End Result Video
If I remember correctly, child actor components are a bit different from other components in that they are transformation independent; that is, they do not update their transformation when their parent component moves around.
I find it a bit strange that you would separate your player actor and the camera component. Normally, the player "pawn" contains the mesh and camera components for one player.
I would suggest you do the following:
Create a player actor (e.g. a "pawn" or "character" class)
Create the following component hierarchy:
Root Scene -> Spring Arm -> Skeletal or Static Mesh -> Spring Arm -> Camera
Your root scene is the green center in your drawing. You can then basically use the blueprint you already have to rotate and move your player.

How to set dynamic hotspots for a 360 image with Unity 3D

I am trying to build a visitor tour with Unity 3D. I have panoramic pictures of bedrooms within a hotel and I would like to add points (hotspots) to my pictures that lead to another picture.
The problem is that I want to add these points dynamically via a backend, and I can't find a way to achieve that in Unity.
I will try to answer this question.
Unity has an XYZ coordinate system that can be mapped to the real world. I would measure the real distances to these points (from the center where you took your picture) in your location/room and send these coordinates via the backend to the Unity3D client.
In Unity you can create Vector3 positions or directions based on the coordinates you sent. Use these positions/directions to instantiate 'hotspot' prefabs in the right positions and directions. It might be necessary to adjust the scale/units to get the right result.
Once you have your 'hotspot' objects in place, add a script to them that loads a new scene (on click) with another location/image, and repeat the process.
This is a very brief suggestion on how to do it; the code would be quite simple.
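To illustrate the flow, here is a hedged sketch that fetches hotspot coordinates as JSON from a hypothetical backend endpoint and spawns clickable hotspot prefabs. The URL, JSON shape, and field names are all assumptions, and UnityWebRequest.Result assumes Unity 2020.1 or newer:

```csharp
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.SceneManagement;

// Assumed backend response shape:
// {"hotspots":[{"x":1.5,"y":0.0,"z":2.0,"targetScene":"Bedroom2"}]}
[Serializable] public class Hotspot { public float x, y, z; public string targetScene; }
[Serializable] public class HotspotList { public Hotspot[] hotspots; }

public class HotspotLoader : MonoBehaviour
{
    public string backendUrl = "https://example.com/api/hotspots"; // placeholder URL
    public GameObject hotspotPrefab; // needs a Collider for click detection

    IEnumerator Start()
    {
        using (var request = UnityWebRequest.Get(backendUrl))
        {
            yield return request.SendWebRequest();
            if (request.result != UnityWebRequest.Result.Success) yield break;

            var list = JsonUtility.FromJson<HotspotList>(request.downloadHandler.text);
            foreach (var h in list.hotspots)
            {
                // Place the hotspot at the measured real-world offset
                // from the point where the picture was taken.
                var go = Instantiate(hotspotPrefab,
                    new Vector3(h.x, h.y, h.z), Quaternion.identity);
                go.AddComponent<HotspotClick>().targetScene = h.targetScene;
            }
        }
    }
}

// Loads the next panorama scene when the hotspot is clicked.
public class HotspotClick : MonoBehaviour
{
    public string targetScene;
    void OnMouseDown() => SceneManager.LoadScene(targetScene);
}
```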

Keep object in place when AR track is found

I'm pretty new with Vuforia and Unity. What I'm looking for is to have a full-screen/in-place image/object (without movement) shown when the target is found, and to keep it on screen until the target is lost.
I was thinking of making the image target a child of the AR camera (so it keeps its relative position to the camera) and putting my image on top of it (a sample cube for now).
This gives me the correct positional result (the image is not moving), but when I run it the object blinks, which I guess is a rendering issue.
Any comments and suggestions would be really helpful.
The way Vuforia works is that the camera is manipulated so that it has the same relative position to the target as your device has to the physical target in the real world.* Making one a child of the other won't allow this to happen. The whole point is to replace the "marker" with the 3D object so that it looks like it actually exists in the real world, moving and staying fixed in place as the user moves their device around.
If you want something to be displayed relative to the screen, you should have it check whether the target is detected and enabled, and enable or disable itself depending on that state. This object can then be a child of the camera (so it doesn't move). A sketch of this pattern follows the footnote below.
*Alternatively it moves the object around relative to the camera.
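Here is a minimal sketch of that enable/disable pattern, assuming Vuforia's older TrackableBehaviour / ITrackableEventHandler API (newer Vuforia versions replaced these with ObserverBehaviour, so the exact names may differ):

```csharp
using UnityEngine;
using Vuforia;

// Toggles a screen-locked object (parented to the AR camera)
// based on the image target's tracking state.
public class ScreenLockedOnTrack : MonoBehaviour, ITrackableEventHandler
{
    public TrackableBehaviour imageTarget;   // the image target to watch
    public GameObject screenLockedObject;    // child of the AR camera

    void Start()
    {
        imageTarget.RegisterTrackableEventHandler(this);
        screenLockedObject.SetActive(false); // hidden until the target is found
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        // Show the object while the target is detected or tracked.
        bool found = newStatus == TrackableBehaviour.Status.DETECTED
                  || newStatus == TrackableBehaviour.Status.TRACKED
                  || newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;
        screenLockedObject.SetActive(found);
    }
}
```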