Is it possible to synchronize coordinate systems between two devices in Unity with ARCore?

I work with ARCore in Unity and would like to know how I can synchronize the coordinate systems between two devices with the help of the Network Manager. Maybe somebody knows whether it is possible. Thanks.

Using Cloud Anchors is probably the most reliable way of coordinating multiple viewers in the same AR scene.
The Cloud Anchors sample uses Unity networking to share information.
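In case it helps, here is a rough sketch of the host/resolve flow, loosely based on the XPSession API from the ARCore SDK for Unity's CloudAnchors sample; treat it as an outline rather than a drop-in script, and note that sending the cloud ID over the network (e.g. via a NetworkManager message) is left as a comment:

```csharp
using GoogleARCore;
using GoogleARCore.CrossPlatform;
using UnityEngine;

public class CloudAnchorSync : MonoBehaviour
{
    // Host: device A creates a local anchor and uploads it to the
    // Cloud Anchor service, then shares the returned ID with device B.
    public void HostAnchor(Anchor localAnchor)
    {
        XPSession.CreateCloudAnchor(localAnchor).ThenAction(result =>
        {
            if (result.Response != CloudServiceResponse.Success)
            {
                Debug.LogError("Hosting failed: " + result.Response);
                return;
            }
            string cloudId = result.Anchor.CloudId;
            // Send cloudId to the other device over the network here.
        });
    }

    // Resolve: device B turns the shared ID back into an anchor whose
    // pose is expressed in its own tracking coordinate system. Parenting
    // shared content under this anchor aligns both devices' worlds.
    public void ResolveAnchor(string cloudId)
    {
        XPSession.ResolveCloudAnchor(cloudId).ThenAction(result =>
        {
            if (result.Response != CloudServiceResponse.Success)
            {
                Debug.LogError("Resolving failed: " + result.Response);
                return;
            }
            XPAnchor sharedOrigin = result.Anchor;
            // Instantiate networked content as children of sharedOrigin.
        });
    }
}
```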

Related

HoloLens - using Azure Anchors with Vuforia

We are using Vuforia for image tracking with HoloLens and the Unity engine. Vuforia works fine. We are also using Azure Spatial Anchors to fix the location of objects. However, the anchors do not seem to work with Vuforia. It appears that Vuforia captures camera events and does not pass them on to Azure Spatial Anchors, maybe?
Is there a way to get both technologies working at the same time?
The major issue is that Vuforia occupies the camera pipeline.
You could stop Vuforia, switch to ASA, and then switch back.
Alternatively, you could use captured pictures plus timestamps with ASA.
This page may help you get the camera frame:
https://library.vuforia.com/platform-support/working-camera-unity
You could then transfer the frames to a service hosted on a Linux server, using the Azure Spatial Anchors ROS wrapper: https://github.com/microsoft/azure_spatial_anchors_ros
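As a rough illustration of the "stop Vuforia, run ASA, switch back" idea in Unity, assuming the ASA Unity SDK's SpatialAnchorManager component is wired up (the asaManager field and the hand-off timing here are assumptions, not something tested against your setup):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.SpatialAnchors.Unity; // ASA Unity SDK
using UnityEngine;
using Vuforia;

public class VuforiaAsaHandoff : MonoBehaviour
{
    [SerializeField] private SpatialAnchorManager asaManager; // assumed wiring

    // Temporarily release the camera from Vuforia, run an ASA session
    // to create/locate anchors, then hand the camera back.
    public async Task RunAsaPassAsync()
    {
        VuforiaBehaviour.Instance.enabled = false;   // frees the camera pipeline

        await asaManager.StartSessionAsync();
        // ... create or locate CloudSpatialAnchors here ...
        asaManager.StopSession();
        asaManager.DestroySession();

        VuforiaBehaviour.Instance.enabled = true;    // Vuforia resumes tracking
    }
}
```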

Can anyone explain the technicalities of the zSpace AR-VR product and how it works?

I just came to know about an AR-VR company making educational interactive content. I know about augmented reality apps, which can be developed using the Unity framework, and I know about virtual reality too.
But can anyone explain how they are doing it? Any idea or direction would be helpful.
Can we use an existing Google Cardboard and some tool to interact with the 3D object? Like this: DIY hand tracking VR controller.
Thanks in advance, and let me know if you have more questions.
After a quick look at the official documentation: it looks like the zSpace system is a 3D display (working like NVIDIA 3D Vision or some 3D television sets) with head tracking (to render the correct perspective) and a 3D-tracked stylus for interaction.
TL;DR: It's a 3D VR-like portal through a laptop screen.
Cardboard controllers won't work with it and would be completely redundant because of the stylus.

ARCore in Unity vs Sceneform: features/use cases?

The way I understand it is that there are several environments that support ARCore, and Unity and the Sceneform SDK are two of the options.
I was wondering how they differ from each other, besides one being in Java and the other in C#? Why would someone choose one over the other, aside from language preference?
Thank you
Sceneform empowers Android developers to work with ARCore without learning 3D graphics and OpenGL. It includes a high-level scene graph API, a realistic physically based renderer, an Android Studio plugin for importing, viewing, and building 3D assets, and easy integration into ARCore that makes it straightforward to build AR apps. See this video from Google I/O '18.
Whereas ARCore in Unity uses three key capabilities to integrate virtual content with the real world as seen through your phone's camera:
Motion tracking allows the phone to understand and track its position relative to the world.
Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical, and angled surfaces like the ground, a coffee table, or walls.
Light estimation allows the phone to estimate the environment's current lighting conditions.
ARCore is Google’s platform for building augmented reality experiences. Using different APIs, ARCore enables your phone to sense its environment, understand the world and interact with information. Some of the APIs are available across Android and iOS to enable shared AR experiences.
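To give a feel for the Unity side, here is a minimal sketch against the GoogleARCore namespace from the ARCore SDK for Unity that touches the environmental-understanding and light-estimation capabilities listed above:

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class ArCoreCapabilities : MonoBehaviour
{
    private readonly List<DetectedPlane> _newPlanes = new List<DetectedPlane>();

    void Update()
    {
        if (Session.Status != SessionStatus.Tracking)
            return;

        // Environmental understanding: planes detected since last frame.
        Session.GetTrackables<DetectedPlane>(_newPlanes, TrackableQueryFilter.New);
        foreach (var plane in _newPlanes)
            Debug.Log("New plane of type " + plane.PlaneType);

        // Light estimation: rough brightness of the camera image,
        // usable to tint virtual content so it matches the room.
        Debug.Log("Pixel intensity: " + Frame.LightEstimate.PixelIntensity);
    }
}
```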

Fixing an object in place when the camera opens (Unity AR)

I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokémon GO: when the camera opens, the object will be fixed somewhere in the real world and you will have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are limited to only a few types of phones, and I tried to use the Kudan SDK but it didn't work.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas, or someone to tell me where to start.
Thanks in advance.
The reason plane detection is limited to only some phones at this time is partly that older/less powerful phones cannot provide the required computing power.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
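As a starting point, here is a minimal sketch of the "fixed object on a detected plane" idea using the ARCore SDK for Unity; objectPrefab is a hypothetical field you would assign in the Inspector:

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class HiddenObjectSpawner : MonoBehaviour
{
    public GameObject objectPrefab;  // hypothetical, assigned in the Inspector
    private bool _spawned;
    private readonly List<DetectedPlane> _planes = new List<DetectedPlane>();

    void Update()
    {
        if (_spawned || Session.Status != SessionStatus.Tracking)
            return;

        Session.GetTrackables<DetectedPlane>(_planes, TrackableQueryFilter.All);
        if (_planes.Count == 0)
            return;

        // Anchor the object to the first detected plane so it stays
        // fixed in the real world even as tracking refines over time.
        var plane = _planes[0];
        var pose = plane.CenterPose;
        Anchor anchor = plane.CreateAnchor(pose);
        Instantiate(objectPrefab, pose.position, pose.rotation, anchor.transform);
        _spawned = true;
    }
}
```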
Let me know if you have other questions :)
You can use the phone's GPS data to show the object when the user arrives at a specific place. You can search for GPS-based augmented reality on Google. You can also check this video: https://www.youtube.com/watch?v=X6djed8e4n0
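For illustration, here is a minimal sketch of the GPS-triggered approach using Unity's built-in LocationService; the target coordinates and the hiddenObject field are made-up placeholders:

```csharp
using System.Collections;
using UnityEngine;

public class GpsObjectTrigger : MonoBehaviour
{
    // Target coordinates are made up for illustration.
    public double targetLat = 48.8584;
    public double targetLon = 2.2945;
    public GameObject hiddenObject;  // hypothetical, assigned in the Inspector

    IEnumerator Start()
    {
        if (!Input.location.isEnabledByUser)
            yield break;

        Input.location.Start();
        int maxWait = 20;
        while (Input.location.status == LocationServiceStatus.Initializing && maxWait-- > 0)
            yield return new WaitForSeconds(1f);

        if (Input.location.status != LocationServiceStatus.Running)
            yield break;

        while (true)
        {
            var data = Input.location.lastData;
            // Very rough check: ~0.0005 degrees is on the order of 50 m.
            bool close = Mathf.Abs((float)(data.latitude - targetLat)) < 0.0005f
                      && Mathf.Abs((float)(data.longitude - targetLon)) < 0.0005f;
            hiddenObject.SetActive(close);
            yield return new WaitForSeconds(2f);
        }
    }
}
```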

Unity. Move player when mobile moves (Android VR)

I'm developing VR using the Google Cardboard SDK.
I want to move in the virtual environment when I walk in the real world, like this: https://www.youtube.com/watch?v=sZG5__Z9pzs&feature=youtu.be&t=48
Is it possible to make a VR application like that for Android, maybe using the accelerometer sensor? How can I implement this using Unity?
I tried to record the accelerometer sensor while walking with the smartphone; here is the result: https://www.youtube.com/watch?v=ltPwS7-3nOI (I think the accelerometer values are very noisy.)
Actually, it is not possible with the mobile device alone:
You're up against a fundamental limitation of the humble IMU (the primary motion sensor in a smartphone).
I won't go into detail, but basically you need an external reference frame when trying to extract positional data from acceleration data. This is the topic of a lot of research right now, and it's why VR headsets that track position like the Oculus Rift have external tracking cameras.
Unfortunately, what you're trying to do is impossible without using the camera on your phone to track visual features in the scene and use those as the external reference point, which is a hell of a task better suited to a lab full of computer vision experts.
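To see why the raw accelerometer readings look so random for this purpose, here is the naive dead-reckoning approach sketched in Unity; Input.acceleration still contains gravity, and even after subtracting it the doubly-integrated noise makes the position drift away within seconds:

```csharp
using UnityEngine;

// Naive dead reckoning: double-integrate the accelerometer.
// This is deliberately the broken approach described above.
public class NaiveDeadReckoning : MonoBehaviour
{
    private Vector3 _velocity;

    void Update()
    {
        // Input.acceleration is in units of g and still includes gravity,
        // so even standing still this integrates a large constant offset.
        Vector3 accel = Input.acceleration * 9.81f;

        _velocity += accel * Time.deltaTime;              // 1st integration
        transform.position += _velocity * Time.deltaTime; // 2nd integration

        // Sensor noise and bias are integrated twice, so the position
        // error grows roughly quadratically with time: the player flies
        // off within seconds. This is why an external reference (camera,
        // tracking beacons, GPS) is needed for positional tracking.
    }
}
```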
Another possible but difficult way:
This may be possible if you connect the device to the internet and then watch its position via satellite positioning (Google Maps or something like that), but that is a very hard thing to do.