How do I get the camera to follow a GameObject in Unity? - unity3d

I want to get a camera to follow a GameObject, so at the moment I am using a spring formula along a 2D plane within a 3D scene.
I have used this formula within the Update function, together with the built-in LookAt function.
It works as it should, however it's really choppy (it looks like a really low frame rate, but the rest of the scene doesn't seem to be affected). Does anyone know what could be wrong?
I'm using Unity 5 and can upload the project somewhere if needed.
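A minimal sketch of the setup being described, assuming the target moves in Update (all names here are illustrative): the usual first fix for this kind of choppiness is to move the camera work to LateUpdate, so the camera samples the target's position after the target has moved for the frame. Vector3.SmoothDamp is used below as a ready-made critically damped spring in place of a hand-rolled formula.

```csharp
using UnityEngine;

// Spring-style camera follow. Running in LateUpdate avoids the one-frame
// lag between target movement and camera movement that reads as choppiness.
public class SpringFollow : MonoBehaviour
{
    public Transform target;                          // the GameObject to follow
    public Vector3 offset = new Vector3(0f, 2f, -5f); // camera offset from the target
    public float smoothTime = 0.3f;                   // larger = softer spring

    Vector3 velocity; // state carried between frames by SmoothDamp

    void LateUpdate()
    {
        // SmoothDamp behaves like a critically damped spring toward the goal.
        Vector3 desired = target.position + offset;
        transform.position = Vector3.SmoothDamp(
            transform.position, desired, ref velocity, smoothTime);
        transform.LookAt(target);
    }
}
```

If the target is driven by physics in FixedUpdate, enabling interpolation on its Rigidbody is the other common fix for jitter of this kind.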

Related

Is it possible in Unity?

I hope this question is fine here; I would kindly like to know whether what I have in mind is possible in Unity before I start digging into it. I'm not familiar with Unity; I have always used SpriteKit and SceneKit from Apple.
I'm currently working on an app which displays a 3D cockpit of an airplane with all its instruments. To create the instrument displays, I used a sprite scene applied as the texture of a screen object, to simulate the screen.
This way I'm able to simulate and display the indications that should appear on the screen (see picture).
My question is: is it possible to replicate something like this in Unity? How do you create instruments inside a cockpit? Is it the same concept of using sprite elements as textures in order to simulate the instruments? What should I look into?
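For what it's worth, the Unity analogue of a SpriteKit scene used as a texture is a RenderTexture: a second camera renders the instrument's 2D content (sprites or UI) into a texture, and that texture is applied to the screen mesh's material. A minimal sketch, with all names illustrative rather than from the question:

```csharp
using UnityEngine;

// A second camera draws the instrument's sprites/UI into a RenderTexture,
// which is then displayed on the in-cockpit screen mesh.
public class InstrumentScreen : MonoBehaviour
{
    public Camera instrumentCamera; // sees only the instrument's sprite layer
    public Renderer screenRenderer; // the quad/mesh acting as the screen

    void Start()
    {
        RenderTexture rt = new RenderTexture(512, 512, 16);
        instrumentCamera.targetTexture = rt;      // camera now renders into the texture
        screenRenderer.material.mainTexture = rt; // the screen material displays it
    }
}
```

The same wiring can also be done without code by creating a RenderTexture asset and assigning it to the camera and material in the Inspector.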

(Unity3D) When the camera is inside an object, the camera stops rendering the object

I'm using Unity 2018.4.14f1 Personal (I don't use 2019 or 2020 because they lag my computer).
I'm using the Unity Standard Assets player prefab and Cinemachine FreeLook for the camera. I have some water, and when my player walks into it, it's fine. However, when the camera goes into the water, it stops rendering the water. Is there any way I can fix this?
Update: I've somewhat got it working, however it's hollow when you're inside. Is there any way to fix that?
Video : https://easyupload.io/2b0p3a
(I'm quite a noob so if you need any screenshots please ask.)
The problem here is that the water is only rendered when viewed from the outside, because that is the way its normals are modeled. The renderer culls faces it decides are not in view (backface culling). You can load the model into a 3D program and then copy and invert the model to allow your camera to see the water from inside, or I believe there is a shader option (Cull Off) to disable this optimization. You can also look at this Reddit thread.
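For the hollow-when-inside update, here is a hedged sketch of the "copy and invert" idea done in a script instead of a 3D program: build a second mesh with reversed triangle winding and flipped normals so it stays visible from inside. The Standard Assets water uses its own shader, so treat this as a starting point; the component and object names are made up.

```csharp
using UnityEngine;

// Creates an inside-facing copy of a mesh so backface culling no longer
// hides it when the camera is inside the volume.
public class InsideOutCopy : MonoBehaviour
{
    public MeshFilter source; // the water surface, assigned in the Inspector

    void Start()
    {
        Mesh flipped = Instantiate(source.sharedMesh);

        // Reversing the winding order turns back faces into front faces.
        int[] tris = flipped.triangles;
        for (int i = 0; i < tris.Length; i += 3)
        {
            int tmp = tris[i];
            tris[i] = tris[i + 1];
            tris[i + 1] = tmp;
        }
        flipped.triangles = tris;

        // Flip the normals so lighting on the copy still looks plausible.
        Vector3[] normals = flipped.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        flipped.normals = normals;

        // Render the copy as a child using the same material.
        GameObject inside = new GameObject("WaterInside");
        inside.transform.SetParent(source.transform, false);
        inside.AddComponent<MeshFilter>().sharedMesh = flipped;
        inside.AddComponent<MeshRenderer>().sharedMaterial =
            source.GetComponent<MeshRenderer>().sharedMaterial;
    }
}
```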

How to set dynamic hotspots for a 360 image with Unity 3D

I am trying to build a visitor tour with Unity 3D. I have panoramic pictures of bedrooms within a hotel, and I would like to add points (hotspots) to my pictures that lead to other pictures.
The problem is that I want to add these points dynamically via a backend, and I can't find a way to achieve that in Unity.
I will try to answer this question.
Unity has an XYZ coordinate system that can be mapped to the real world. I would measure the real distances to these points (from the center where you took your picture) in your location/room and send those coordinates via the backend to the Unity3D client.
In Unity you can then create Vector3 positions or directions based on the coordinates you sent. Use these positions/directions to instantiate 'hotspot' prefabs in the right positions and orientations. It might be necessary to adjust the scale/units to get the right result.
Once you have your 'hotspot' objects in place, add a script to them that loads a new scene (on click) with another location/image, and repeat the process.
This is a very brief suggestion on how to do it. The code would be quite simple.
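As a rough illustration of those steps (the data shape, prefab, and scene names are assumptions, not anything Unity prescribes):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical shape of one hotspot record delivered by the backend,
// parseable with JsonUtility.FromJson<HotspotData>().
[System.Serializable]
public class HotspotData
{
    public float x, y, z;      // direction from the viewer position
    public string targetScene; // panorama scene to load when clicked
}

// Spawns hotspot prefabs around the viewer from backend-supplied coordinates.
public class HotspotSpawner : MonoBehaviour
{
    public GameObject hotspotPrefab; // assigned in the Inspector
    public float radius = 5f;        // distance from the viewer to place hotspots

    public void Spawn(HotspotData[] hotspots)
    {
        foreach (HotspotData data in hotspots)
        {
            Vector3 direction = new Vector3(data.x, data.y, data.z).normalized;
            GameObject hotspot = Instantiate(
                hotspotPrefab,
                transform.position + direction * radius,
                Quaternion.LookRotation(direction));
            hotspot.GetComponent<HotspotClick>().targetScene = data.targetScene;
        }
    }
}

// Lives on the prefab (which needs a Collider for OnMouseDown to fire);
// loads the next panorama scene when the hotspot is clicked.
public class HotspotClick : MonoBehaviour
{
    public string targetScene;

    void OnMouseDown()
    {
        SceneManager.LoadScene(targetScene);
    }
}
```

In a real project each MonoBehaviour would go in its own file, and every target scene must be added to Build Settings for LoadScene to find it.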

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 Pro and are trying to figure out whether Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR Tags (like this one), not arbitrary images like Vuforia.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. That definitely sounds worth testing.
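If memory serves, in the Vuforia SDK of that era Extended Tracking was a per-target setting: either a checkbox on the ImageTargetBehaviour in the Inspector, or enabled from code roughly like this. Treat the exact call names as an assumption (newer Vuforia versions replaced this mechanism with the Device Tracker):

```csharp
using UnityEngine;
using Vuforia;

// Assumed Vuforia 5/6-era API: keep estimating the target's pose from device
// motion after the marker itself leaves the camera frame.
public class EnableExtendedTracking : MonoBehaviour
{
    void Start()
    {
        ImageTargetBehaviour target = GetComponent<ImageTargetBehaviour>();
        if (target != null && target.ImageTarget != null)
        {
            target.ImageTarget.StartExtendedTracking();
        }
    }
}
```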
Option B) Tango
Tango doesn't natively support markers other than AR Tags and QR codes.
It also doesn't support the area-learnt scene moving (much). If your 3D-printed objects stayed stationary, you could scan an ADF and should get good-quality tracking; if all the objects stay still you should see a little, but not too much, drift.
However, if you are moving those 3D-printed objects, it will definitely throw that tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR Tags using Tango's AR marker detection (unsure - is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features/lighting etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the AR Tag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

Positioning 3D objects for AR in Unity3D

I'm experimenting with an AR experience in Unity3D. I'd like to place models in my Unity scene and have them show up on top of real-world objects using Tango. I'm using Tango's augmentedReality scene as a starting point.
Say there is a table in a room, and I want a 3D cube to sit on top of it whenever it is in Tango's view. Do I need to be using an .adf file to solve this problem, or is there something else I should be looking into?
Is there some way to test an .adf file locally in my Unity scene? This would be ideal for establishing and debugging the correct positions at which to place models in my scene.
Just trying to sort everything out.
If you want to keep your virtual objects' positions persistent between different runs of the application, you will need an ADF file to relocalize against. Unfortunately, there are no in-editor debugging functions for ADFs at the moment, so you will need to create a program to place the objects.
You could take a look at the Experiments/PersistentState example for reference. That example does not use AR; however, it saves object positions with respect to your ADF's origin and keeps them persistent.
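A minimal sketch of that idea (modelled loosely on what PersistentState does, not its actual code): save each placed object's pose, which is expressed in ADF coordinates once the device has relocalized, and restore it on the next run. The class and file names here are made up:

```csharp
using System.IO;
using UnityEngine;

// Saves and restores this object's pose across runs. Assumes the scene has
// already relocalized against the ADF, so transform.position is in ADF space.
public class PersistentPose : MonoBehaviour
{
    [System.Serializable]
    class PoseData
    {
        public Vector3 position;
        public Quaternion rotation;
    }

    string GetSavePath()
    {
        return Path.Combine(Application.persistentDataPath, name + "_pose.json");
    }

    void Start()
    {
        // Restore the pose saved by a previous run, if any.
        string path = GetSavePath();
        if (File.Exists(path))
        {
            PoseData pose = JsonUtility.FromJson<PoseData>(File.ReadAllText(path));
            transform.position = pose.position;
            transform.rotation = pose.rotation;
        }
    }

    // Call this after the user has placed (or moved) the object.
    public void Save()
    {
        PoseData pose = new PoseData();
        pose.position = transform.position;
        pose.rotation = transform.rotation;
        File.WriteAllText(GetSavePath(), JsonUtility.ToJson(pose));
    }
}
```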