Implementing ITangoDepth in Unity (Project Tango) - unity3d

I'm creating a program for Project Tango in Unity and I'm trying to make a class that implements ITangoDepth. Just for testing, I've made this class implement the method OnTangoDepthAvailable so that it prints a text and I can see it working. I can't. This is what I have:
public void Start()
{
    m_tangoApplication = FindObjectOfType<TangoApplication>();
    m_tangoApplication.Register(this);
}

public void OnTangoDepthAvailable(TangoUnityDepth tangoDepth)
{
    // Runs whenever new depth data arrives.
    debug = "Depth Available";
}
I've enabled Depth in TangoManager too.
I've spent a long time studying the code in the Point Cloud example, but I don't see what else I have to set to enable the depth sensor. Can anyone help me make this work?
Thanks so much :)
EDIT: OK, I think I found the problem, but it created another one: in my app I'm using a material that shows what the camera sees on a plane in front of the camera. When I disable this plane, everything works properly. Is it possible that the camera and the depth sensor can't work at the same time?

You can only access the camera through the Tango API if you are using depth. That being said, Unity's webcam texture won't work when depth is enabled. The augmented reality example uses depth and the color image together; you can take a look at that.
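A minimal sketch of what that looks like: instead of a WebCamTexture, the color frames come in through a Tango callback interface, registered the same way as ITangoDepth. This assumes the Tango Unity SDK's ITangoVideoOverlay interface; the exact callback signature may differ between SDK versions, so verify it against your SDK.

```csharp
using Tango;
using UnityEngine;

// Sketch: receive both depth and color frames through the Tango API,
// assuming the SDK's ITangoVideoOverlay interface (check your SDK version).
public class DepthAndColor : MonoBehaviour, ITangoDepth, ITangoVideoOverlay
{
    private TangoApplication m_tangoApplication;

    void Start()
    {
        m_tangoApplication = FindObjectOfType<TangoApplication>();
        // One Register call wires up every Tango interface this class implements.
        m_tangoApplication.Register(this);
    }

    public void OnTangoDepthAvailable(TangoUnityDepth tangoDepth)
    {
        Debug.Log("Depth available");
    }

    public void OnTangoImageAvailableEventHandler(TangoEnums.TangoCameraId cameraId,
                                                  TangoUnityImageData imageData)
    {
        // Color frame delivered through the Tango API instead of WebCamTexture,
        // so it keeps working while depth is enabled.
    }
}
```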

Related

Could having auto focus enabled be causing tracking issues in my AR scene?

I am building an AR scene for a Galaxy S7 using ARCore and Unity 2018.2.20f, where I use the AugmentedImageVisualizer to track an image and place a 3D GameObject in front of it. In order to track the image, I need autofocus enabled. The issue is that my anchor jitters around and sometimes completely moves away and disappears. From debugging, it seems that the camera autofocus might be causing the tracking issue.
My best workaround so far is to switch the camera config to fixed once the image has been detected, but 1/3 of the time the anchor moves while switching to fixed focus. If the anchor is still in sight once the camera is on fixed focus the tracking works great.
Has anyone else noticed this issue before? Is there a way to control the camera focus more specifically?
If you use ARCore version 1.9 or above, tracking of Augmented Images is improved over previous versions. Please find the official releases at the link below.
Link: https://github.com/google-ar/arcore-unity-sdk/releases
As far as the debugging is concerned, keeping the camera config in Auto Mode is fine. However, if you use both plane tracking and Augmented Images in the same config, jittering can appear.
Also, there is no need to use the AugmentedImageVisualizer to detect the Augmented Image. The following piece of code tracks the Augmented Image and places any 3D object on it:
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

private List<AugmentedImage> augmentedImages = new List<AugmentedImage>();
public GameObject AndyObject; // object to place on the image
private bool Is_Placed = false;

void Update()
{
    Session.GetTrackables<AugmentedImage>(augmentedImages,
        TrackableQueryFilter.Updated);
    foreach (var image in augmentedImages)
    {
        if (image.TrackingState == TrackingState.Tracking && Is_Placed == false)
        {
            Anchor anchor = image.CreateAnchor(image.CenterPose);
            var andyref = Instantiate(AndyObject, image.CenterPose.position,
                image.CenterPose.rotation);
            andyref.transform.parent = anchor.transform;
            Is_Placed = true; // only place the object once
        }
    }
}

How much effort does it take to replace a VR headset with three monitors?

I have a Unity project developed for VR-headset training. However, users feel strongly dizzy after playing the game. Now I want to use 3 monitors instead of the VR headset, so users can look at the 3 monitors to drive. Is it a big effort to change the software to achieve this? What do I have to do so that it runs on monitors?
Actually it is quite simple:
See Unity Manual Multi-Display
In your scene, have 3 Camera objects and set each one's Camera.targetDisplay via the Inspector (the Inspector labels are 1-indexed).
To make them follow the vehicle correctly, simply make them children of the vehicle object; then they are always rotated and moved along with it. Now position and rotate them relative to the vehicle according to your needs.
In PlayerSettings → XR Settings (at the bottom), disable Virtual Reality Supported, since you do not want a VR HMD moving the camera; it should be controlled only by the vehicle transform.
Then you also have to activate the corresponding displays (0-indexed, where 0 is the default monitor, which is always enabled), in your case e.g.
private void Start()
{
    Display.displays[1].Activate();
    Display.displays[2].Activate();
}
I don't know exactly how the "second" or "third" connected monitor is defined, but I guess it should match the monitor numbering in the system display settings.
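The Inspector assignment of Camera.targetDisplay can also be done from code, together with the activation step, so everything lives in one place. A sketch, assuming three camera fields you assign in the Inspector (the field names here are placeholders):

```csharp
using UnityEngine;

// Sketch: activate extra monitors and route one camera to each.
// leftCam/centerCam/rightCam are placeholder names for your vehicle cameras.
public class MultiDisplaySetup : MonoBehaviour
{
    public Camera leftCam, centerCam, rightCam;

    void Start()
    {
        // Display.displays is 0-indexed; displays[0] is always active,
        // so only the extra monitors need an Activate() call.
        for (int i = 1; i < Display.displays.Length; i++)
            Display.displays[i].Activate();

        // Camera.targetDisplay uses the same 0-based indices
        // (the Inspector shows them as "Display 1", "Display 2", ...).
        centerCam.targetDisplay = 0;
        leftCam.targetDisplay = 1;
        rightCam.targetDisplay = 2;
    }
}
```

Looping over Display.displays.Length avoids an IndexOutOfRangeException when fewer than three monitors are connected, e.g. while testing in the Editor.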

Rotate the AR Object to always face the camera using kudan AR

I'm developing a game similar to Pokémon Go in Unity3D, using the Kudan AR SDK. My requirement is straightforward: the AR object must face the camera at all times. I am totally clueless about how to get this done with Kudan AR.
In other words, I want to disable the gyroscope rotation data coming from Kudan.
This isn't really a KudanAR issue. Unity3D handles all of the rendering, so if you want your model to face in a different direction you will need to achieve this using Unity's APIs.
Kudan's Unity plugin is only responsible for the computer vision side of things, meaning that it takes care of the recognition and tracking.
The solution is the following in the scripts of your objects:
First declare:
public Transform player;
and link your Kudan camera to player in the Inspector of your object's script.
Then the important line in the Update method:
this.transform.LookAt (player);
Your object is now looking at you.
Hope this helps!
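The one-liner above can be expanded into a small billboard script. For a Pokémon Go feel, you often want the model to rotate only around the Y axis so it stays upright instead of tilting toward the camera; a sketch (the yAxisOnly option is an assumption about the desired look, not something from the original answer):

```csharp
using UnityEngine;

// Billboard sketch: face the AR camera every frame, optionally
// rotating only around the Y axis so the model stays upright.
public class FaceCamera : MonoBehaviour
{
    public Transform player;       // assign the Kudan camera here
    public bool yAxisOnly = true;  // upright models usually look better in AR

    void LateUpdate()
    {
        Vector3 target = player.position;
        if (yAxisOnly)
            target.y = transform.position.y; // ignore the height difference
        transform.LookAt(target);
    }
}
```

Using LateUpdate rather than Update makes sure the billboard runs after the tracker has moved the camera for the current frame.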

Kudan in Unity: how to stop or reset markerless tracking?

I am creating an application with Kudan where a photograph (a 2D sprite) appears via markerless tracking. Based on the sample project I've successfully made adjustments so that the 2D plane is always perpendicular to the camera and placed on the screen in the position I want. Really wonderful!
But I am unable to figure out how to restart/reset the tracking via a script. I can always force the tracking to restart by blocking the camera or shaking the phone, but I want to do it via a button-- it is exactly the same behavior I've found described in the "ArbiTrack Basics" guide for Android and iOS, but am unable to reproduce it in Unity. To what script should I send a stop tracking command in order to get the tracking instance to restart (exactly the same effect as blocking the camera when running one of the sample Unity projects in Markerless Mode).
The situation is described here for Android coding: https://wiki.kudan.eu/ArbiTrack_Basics#Stopping_ArbiTrack
where it says to call these three things:
// Stop ArbiTrack
arbiTrack.stop();
// Display target node
arbiTrack.getTargetNode().setVisible(true);
//Change enum and label to reflect ArbiTrack state
arbitrack_state = ARBITRACK_STATE.ARBI_PLACEMENT;
I have found one way to do this-- though I'm not sure it's ideal.
Looking in the TrackingMethodMarkerless.cs script, it seems that StopTracking() does not work: it disables the updating of the tracking but doesn't actually disable the detection instance. But taking a note from it, I added an if statement to the ProcessFrame() function:
if (disableMarkerless == false)
    trackable.isDetected = _kudanTracker.ArbiTrackIsTracking();
else
    trackable.isDetected = false;
Now, toggling the disableMarkerless bool disables the tracking.
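For a button-driven reset, a sketch along these lines may be cleaner than patching ProcessFrame(). It assumes the Unity plugin exposes an ArbiTrackStop() method on KudanTracker mirroring the native arbiTrack.stop() call quoted above — verify the method name against your plugin version before relying on it:

```csharp
using Kudan.AR;
using UnityEngine;

// Sketch, assuming KudanTracker exposes ArbiTrackStop() in the Unity
// plugin (mirroring the native arbiTrack.stop()); check your version.
public class ArbiTrackReset : MonoBehaviour
{
    public KudanTracker kudanTracker; // assign in the Inspector

    // Hook this up to a UI Button's OnClick event.
    public void OnResetButtonPressed()
    {
        kudanTracker.ArbiTrackStop();
        // ArbiTrack should now be back in placement mode, the same state
        // you get by blocking the camera in the sample project.
    }
}
```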

Open device camera for iPhone and Android in Unity3d

I am working on an application in which I need to open the device camera in full view, and I am making this app for both iOS and Android. Can anyone tell me how to open a full-screen device camera in Unity on all Android and iPhone devices?
This would be a great help for me. Thanks in advance.
After some more digging on Google and in the official docs, I found a solution, which I am going to share. It may help someone, someday.
1. Create a new project.
2. Select Main Camera in the Hierarchy and change its Transform via the Inspector:
Position X=-90 Y=785 Z=0, Rotation X=90 Y=90 Z=0, Scale X=1 Y=1 Z=1
3. Now go to GameObject → Create Other → Plane.
4. Select the Plane and
4.1 change its Transform via the Inspector:
Position X=0 Y=0 Z=0, Rotation X=0 Y=0 Z=0, Scale X=100 Y=100 Z=100
4.2 change Tag=Player
Now create a C# script named "CameraController" and replace its contents with the code below:
using UnityEngine;
using System.Collections;

public class CameraController : MonoBehaviour
{
    public WebCamTexture mCamera = null;
    public GameObject plane;

    // Use this for initialization
    void Start ()
    {
        Debug.Log ("Script has been started");
        plane = GameObject.FindWithTag ("Player");
        mCamera = new WebCamTexture ();
        // The renderer shortcut was removed in Unity 5; use GetComponent instead.
        plane.GetComponent<Renderer> ().material.mainTexture = mCamera;
        mCamera.Play ();
    }
}
5. Finally, save and drag this script file onto the "Plane" GameObject.
Note: you may see the preview rotated in the Unity Game view, but on a real device it works well. Tested on an iPhone 5 and on Android (Nexus 5).
If you mean you want to use the camera to take and save photographs, I'd recommend Prime31's iOS and Android plugins. Unfortunately, the Etcetera plugin is US$65 per platform, but I've used both and they work great.
http://u3d.as/content/prime31/i-os-etcetera-plugin/2CU
http://u3d.as/content/prime31/android-etcetera-plugin/2CY
If you just want to show the camera's live output inside of your app's scene, you can create a plane and use a WebCamTexture to display the camera's live video output.
http://docs.unity3d.com/Documentation/ScriptReference/WebCamTexture.html
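As an alternative to a scaled plane, the live feed can also go straight onto a UI element, which avoids the camera/transform setup entirely. A minimal sketch using Unity UI's RawImage (assuming the script sits on a RawImage stretched to fill a Screen Space canvas):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: full-screen camera preview on a UI RawImage instead of a plane.
// Attach to a RawImage that is anchored to fill a Screen Space canvas.
public class CameraPreview : MonoBehaviour
{
    private WebCamTexture camTexture;

    void Start()
    {
        camTexture = new WebCamTexture(); // default device camera
        GetComponent<RawImage>().texture = camTexture;
        camTexture.Play();
    }

    void OnDestroy()
    {
        if (camTexture != null)
            camTexture.Stop(); // release the hardware camera
    }
}
```

As with the plane approach, the preview may appear rotated on some devices; WebCamTexture exposes a videoRotationAngle you can apply to the RawImage's rotation to compensate.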
There is a toolkit available to open the device camera in unity for iOS and Android called CameraCaptureKit - (https://www.assetstore.unity3d.com/en/#!/content/56673)
All the source is available, and it has a plain and simple plug-and-play demo built with Unity UI. It solves some of the obstacles involved in taking still images with the camera on your device.
Since WebCamTexture is a generic solution for taking still images, it doesn't enable sharpening of the images, and the quality is also lower-resolution on iOS, since it uses a configuration meant for capturing real-time video.
If you want to turn on the flash / torch, you can do that with the toolkit.
There is a great plugin named Android native Camera for opening the device camera and saving the video or image on the device. Currently, it works only on Android.