How to set dynamic hotspots for a 360° image with Unity 3D

I am trying to build a visitors' tour with Unity 3D. I have panoramic pictures of bedrooms in a hotel, and I would like to add points (hotspots) to my pictures that lead to another picture.
The problem is that I want to add these points dynamically via a backend, and I can't find a way to achieve that in Unity.

I will try to answer this question.
Unity has an XYZ coordinate system that can be mapped to the real world. I would measure the real distances to these points (from the center where you took your picture) in your location/room and send these coordinates to the Unity3D client via the backend.
In Unity, you can create Vector3 positions or directions from the coordinates you sent. Use these positions/directions to instantiate 'hotspot' prefabs at the right positions and orientations. It might be necessary to adjust the scale/units to get the right result.
Once you have your 'hotspot' objects in place, add a script to them that loads a new scene (on click) with another location/image, and repeat the process.
This is a very brief suggestion on how to do it. The code would be quite simple.
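A minimal sketch of the spawning and scene-switching pieces, assuming the backend sends a position and a target scene name per hotspot; the data shape, field names, and prefab reference here are all assumptions, not part of the original question:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical data shape: field names are assumptions about what the
// backend might return.
[System.Serializable]
public class HotspotData
{
    public Vector3 position;   // measured offset from the capture point
    public string targetScene; // scene to load when the hotspot is clicked
}

public class HotspotSpawner : MonoBehaviour
{
    public GameObject hotspotPrefab; // assign in the Inspector

    // Call this once the backend response has been parsed
    // (e.g. after fetching it with UnityWebRequest).
    public void Spawn(HotspotData[] hotspots)
    {
        foreach (var data in hotspots)
        {
            var go = Instantiate(hotspotPrefab, data.position, Quaternion.identity);
            go.GetComponent<Hotspot>().targetScene = data.targetScene;
        }
    }
}

// Attach to the hotspot prefab; requires a Collider for OnMouseDown to fire.
public class Hotspot : MonoBehaviour
{
    public string targetScene;

    void OnMouseDown()
    {
        SceneManager.LoadScene(targetScene);
    }
}
```

Remember that every scene you load this way must be added to the Build Settings.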

Related

How to put markers in the real world with Mapbox and Unity when making an augmented reality app

I'm making an augmented reality app with Unity and Mapbox for both iOS and Android. I have data sets that I am using to create markers in the real world when someone uses the app. I collected JSON files, converted them to GeoJSON files, and then made a custom map in Mapbox Studio with these 4 different GeoJSON files. Basically, I want the markers from the datasets I collected to show up in the real world. I am not sure how to get these markers to show up in the real world rather than on building prefabs. Here is an example of my custom map made in Mapbox Studio: each color shows a different category of markers, and there are four categories.
Here is an example of what I am referring to.
In this image skeletons can show up in the real world.
Here is an example of what I am not referring to.
In this image, droids are placed on a map, but it is not the real world. It is like Pokemon Go, where the map is generated from your location but you don't actually see the real world while you are playing.
I already have my Unity project set up, and this is the final step, but I am having issues getting the markers to show up in the real world. So far, the tutorials I've found only show how to make something like Pokemon Go.
You will have one scene with a stationary Camera. Your code will monitor the Mapbox data in Update(), constantly passing the current GPS position and receiving your list of markers/points of interest. Once you detect that the user's GPS position is within a certain distance of the center of a point of interest, you can simply spawn skeletons at random positions in a sphere around the Camera's transform position (see https://docs.unity3d.com/ScriptReference/Random-insideUnitSphere.html). Keep track of that list, destroy the skeletons once the user leaves the area, and have some way of making sure you only spawn them once for that area.
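Here is a rough sketch of that spawn/despawn logic; the prefab reference, radii, counts, and the way you obtain the distance to the point of interest are all assumptions about your setup:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class PoiSpawner : MonoBehaviour
{
    public GameObject skeletonPrefab;  // assumed prefab reference
    public float triggerRadius = 30f;  // metres from the POI centre (illustrative)
    public float spawnRadius = 5f;     // scatter radius around the camera
    public int count = 5;              // skeletons per area (illustrative)

    private readonly List<GameObject> spawned = new List<GameObject>();
    private bool spawnedOnce;          // ensures this area only ever spawns once

    // Call this from Update() with the user's current distance to the point
    // of interest; how you compute that from the Mapbox location provider is
    // up to your setup.
    public void UpdateDistance(float distanceToPoi)
    {
        if (!spawnedOnce && distanceToPoi < triggerRadius)
        {
            for (int i = 0; i < count; i++)
            {
                // Random point in a sphere around the camera.
                Vector3 offset = Random.insideUnitSphere * spawnRadius;
                spawned.Add(Instantiate(skeletonPrefab,
                    Camera.main.transform.position + offset,
                    Quaternion.identity));
            }
            spawnedOnce = true;
        }
        else if (spawned.Count > 0 && distanceToPoi > triggerRadius)
        {
            // Destroy the skeletons once the user leaves the area.
            foreach (var go in spawned) Destroy(go);
            spawned.Clear();
        }
    }
}
```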
Your skeletons should have a NavMeshAgent, and you should generate a NavMesh on the ARFoundation plane for them to walk on. In this case the plane is created dynamically, so you may need the dynamic NavMesh components: https://github.com/Unity-Technologies/NavMeshComponents. If you tell the NavMeshAgent to go to a specific point, it will walk to the closest reachable point, so even though you get a random 3D position in the sphere, the skeleton will move or spawn onto the nearest point on the plane, and there is no need to figure out how to convert it to the 2D plane space.
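As a small illustration of that last point, you could also project the random point onto the NavMesh yourself with NavMesh.SamplePosition before handing it to the agent; a sketch, not required if SetDestination alone works for you:

```csharp
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class SkeletonMover : MonoBehaviour
{
    public void MoveTo(Vector3 randomPoint)
    {
        // Find the nearest NavMesh position within 10 units of the random
        // 3D point, then let the agent path to it.
        if (NavMesh.SamplePosition(randomPoint, out NavMeshHit hit, 10f, NavMesh.AllAreas))
        {
            GetComponent<NavMeshAgent>().SetDestination(hit.position);
        }
    }
}
```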
Your AR view, both the tracking of the camera position/angle and the generation of a plane representing the ground, will be generated by ARFoundation, and it is simple to add the basic functionality. It provides a prefab that already includes the camera and generates the plane for you. You can get ARFoundation via the Unity Package Manager, and it works with many different types of devices.
You should start with a cheap Android phone or tablet, even if you own an iPhone, because it's easier to load the APK and debug/develop your app via an Android build.
This is a simplification. I recommend using Singletons, ScriptableObjects, Object Pooling, and other Unity paradigms that would help you here, but as another user pointed out, you may want to spend time learning Unity, ARFoundation, and Mapbox, and ask more specific programming questions when you are ready.

How to add 3D elements into the HoloLens 2 field of view

I'm trying to build a Remote Assistance solution using the HoloLens 2 for university. I have already set up the MRTK WebRTC example with Unity. Now I want to add the ability for the desktop counterpart to place annotations in the field of view of the HoloLens to support the remote guidance, but I have no idea how to achieve that. I was considering Azure Spatial Anchors, but I haven't found a good example of adding 3D elements to the remote field of view of the HoloLens from a 2D desktop environment. Also, I'm not sure Spatial Anchors is the right framework, as it is mostly for persistent markers in the AR environment, and I'm rather looking for a temporary visual indicator.
Has anyone already worked on such a solution who can give me a few frameworks/hints on where to start?
To find the actual world location of a point from a 2D image, you can refer to this answer: https://stackoverflow.com/a/63225342/11502506
In short, the cameraToWorldMatrix and projectionMatrix transforms define, for each pixel, a ray in 3D space representing the path taken by the photons that produced that pixel. But anything along a given ray shows up on the same pixel, so to find the actual world location of a point you'll need to use the Physics.Raycast method to calculate the impact point in world space where the ray hits the spatial mapping mesh.
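A minimal sketch of that raycast step, assuming you have already built the ray from the pixel as described in the linked answer, and that the spatial mapping mesh has colliders on a layer named "SpatialMapping" (both are assumptions about your setup):

```csharp
using UnityEngine;

public class AnnotationPlacer : MonoBehaviour
{
    public GameObject markerPrefab; // assumed prefab for the temporary visual indicator

    // Place a marker where the ray through the chosen pixel hits the spatial mesh.
    public void PlaceMarker(Ray ray)
    {
        int layerMask = LayerMask.GetMask("SpatialMapping");
        if (Physics.Raycast(ray, out RaycastHit hit, 10f, layerMask))
        {
            // Orient the marker along the surface normal at the impact point.
            Instantiate(markerPrefab, hit.point, Quaternion.LookRotation(hit.normal));
        }
    }
}
```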

How to make a terrain that acts like the globe?

I want to make a terrain where the ending point is also the starting point, so that, as on Earth, you could just keep walking straight and after some time you would reach the point where you started.
Thanks for your help!
Unity's Terrain system can only create square regions of terrain. So this can't be done as such.
However, you can approximate it, and I'll tell you how I've done it in my project to some success.
Figure out how much terrain you need to cover the "globe"; we'll say it takes an NxN block of terrain chunks, which we'll call a "tile".
What you do next is make 9 of those NxN tiles and arrange them in a 3x3 grid. Put the camera in the center tile of the grid, and whenever the camera leaves that tile, determine where it is on the tile it has entered, then move it to the corresponding position on the center tile.
This will give you a "toroidal" world. I found this was the easiest solution for letting the player see things on the other "corner" of the world map and then cross into it without graphical issues.
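For the camera wrap itself, a minimal sketch, assuming square tiles of size tileSize whose center tile spans [0, tileSize) on the X and Z axes (the field name and value are illustrative):

```csharp
using UnityEngine;

public class ToroidalWrap : MonoBehaviour
{
    public float tileSize = 1000f; // size of one NxN tile in world units (assumption)

    void LateUpdate()
    {
        Vector3 p = transform.position;
        // Mathf.Repeat maps the coordinate back into [0, tileSize), so walking
        // off one edge of the center tile re-enters from the opposite edge.
        p.x = Mathf.Repeat(p.x, tileSize);
        p.z = Mathf.Repeat(p.z, tileSize);
        transform.position = p;
    }
}
```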
If you have other objects residing in the world, that presents some additional challenges. One thing you can start with is duplicating them 9x and placing one copy at the same relative position on each tile. If they only interact with the player, that should be fine: whenever the player interacts with one of them, the other 8 do whatever that one does.
If the other residents of the globe have to interact with each other, you'll need a way to figure out how to make all 9 copies of everything behave consistently, but that's too broad of a question to address here.

Getting 3D point coordinates of a person as the person is walking in Unity

For a research project, I need to find the coordinates of 3D points on the surface of a person's body as the person is walking straight ahead. I know that Unity renders an object using a mesh based on 3D point coordinates.
I know very little about Unity. I wonder if it is possible to use Unity to create a person character, make him walk, get the 3D points of that person every 50 ms or 1 s, etc., and save them to a file, so that I could later read the point coordinates using either C# or Python and run my simulation. How easy is that? Is there any sample code, example, or ready-made character that I could use in a relatively short time?
If there is any suggestion for a tool or software with which I could achieve that, it would be great.
Thanks
The easiest thing to do, in my opinion, would be to use either Kinect or photogrammetry to create your model as a point cloud, which will have vertices on the surface only. This is one of the reasons I am suggesting a point cloud: this way you do not have to work out which vertices of a mesh lie on the surface.
Then import it into Unity using a Point Cloud Viewer.
Finally, in Unity you can easily log all the global positions of the model over time using transform.TransformPoint(meshVert).
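A rough sketch of such a logger, assuming an animated character with a SkinnedMeshRenderer; BakeMesh is used here to snapshot the current pose before applying TransformPoint as suggested above, and the field names, sampling interval, and CSV format are illustrative:

```csharp
using System.IO;
using UnityEngine;

public class VertexLogger : MonoBehaviour
{
    public SkinnedMeshRenderer body; // the character's skinned mesh (assumed reference)
    public float interval = 0.05f;   // 50 ms sampling, as mentioned in the question

    private StreamWriter writer;
    private Mesh baked;

    void Start()
    {
        baked = new Mesh();
        writer = new StreamWriter(Path.Combine(Application.persistentDataPath, "points.csv"));
        InvokeRepeating(nameof(LogVertices), 0f, interval);
    }

    void LogVertices()
    {
        // BakeMesh snapshots the current animated pose; TransformPoint then
        // converts each vertex from local space to world space.
        body.BakeMesh(baked);
        foreach (Vector3 v in baked.vertices)
        {
            Vector3 world = body.transform.TransformPoint(v);
            writer.WriteLine($"{Time.time},{world.x},{world.y},{world.z}");
        }
    }

    void OnDestroy() => writer?.Dispose();
}
```

The resulting CSV (time, x, y, z per vertex) can then be read back in C# or Python for the simulation.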

Tango Predefined Objects

I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen some people doing an AR-type example where they have custom objects in an area to interact with; another example would be directions, where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting that data to place it within Unity in the correct location? I have an area set up, and the AR demo seems promising, but I'm not placing objects with the tap of a finger. What I am looking for is that when people walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in, so I place things in the (single-room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° around the Y axis because of how I save the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity content matches up with the Tango world space.
If you want to place objects programmatically, you can place them in scripts using Instantiate.
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects precisely... but having a good mesh loaded into your scene is really the nicest way I've found.
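To illustrate that record-and-replay workflow, here is a sketch that reads previously recorded positions from a simple CSV file and instantiates a prefab at each one; the file name, format, and prefab are all assumptions:

```csharp
using System.IO;
using UnityEngine;

public class SavedMarkerLoader : MonoBehaviour
{
    public GameObject markerPrefab;     // assumed prefab
    public string file = "markers.csv"; // one "x,y,z" line per marker (assumed format)

    void Start()
    {
        string path = Path.Combine(Application.persistentDataPath, file);
        foreach (string line in File.ReadAllLines(path))
        {
            string[] c = line.Split(',');
            var pos = new Vector3(float.Parse(c[0]), float.Parse(c[1]), float.Parse(c[2]));
            // Positions were recorded in the ADF-localised world space,
            // so they line up once Tango localises.
            Instantiate(markerPrefab, pos, Quaternion.identity);
        }
    }
}
```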