We are trying to create the experience of seeing as if you were an animal, and animals have different eye positions and rotations.
We wanted to use Google Cardboard because it's the most accessible option for us as a preview.
But the question is: is there a way to change the angle of the eye cameras in the Google VR SDK?
Thanks in advance,
Olivierus
I am developing a marker-based AR application with Vuforia in Unity 3D. I want to set up a coordinate system in a room with 4 unique markers on the walls so that I can place 3D objects on the floor of the room.
The camera may not be able to see the markers all the time, but the coordinate system should be persistent and should use the device's motion/orientation sensors to offer the user an uninterrupted AR experience. As soon as the camera recognizes a marker, the coordinate system should be recalibrated.
I'm new to Vuforia, so can you please suggest a way I can achieve this kind of behavior? Does Vuforia support this kind of behavior out of the box?
Thank you.
Edit:
After reading the comment from Evert, I realized I can use markerless AR to fill the gaps between marker-based detections.
Now I'm curious to know how I can achieve that. Please help :)
Thank you.
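To make the question concrete, this is roughly the kind of recalibration hook I imagine (a sketch assuming the ITrackableEventHandler pattern from the Vuforia Unity samples; RecalibrateWorldOrigin is just a placeholder for the anchoring logic I'm missing, and between detections I would rely on extended tracking / the device sensors):

using UnityEngine;
using Vuforia;

// Hypothetical recalibration hook: listens to one marker and re-anchors
// the room coordinate system whenever that marker is (re)acquired.
public class MarkerRecalibrator : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED)
        {
            // Marker is visible again: re-anchor the content to this marker's pose.
            RecalibrateWorldOrigin(trackable.transform);
        }
    }

    void RecalibrateWorldOrigin(Transform marker)
    {
        // Placeholder: move/rotate the content root so it lines up with the marker.
        Debug.Log("Recalibrating to marker at " + marker.position);
    }
}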
I need a solution for my problem, please.
I made a character in Unity 3D based on AR. The problem is that when I build the project to my Android phone, I can't find the character straight away; I have to turn the camera around to search for where he is.
I need to place this object at the center of my camera view. How can I do this, please?
The question isn't the clearest, but I suggest attaching the camera to the GameObject you want to see in the editor.
This link is to a unity tutorial.
https://unity3d.com/learn/tutorials/projects/2d-ufo-tutorial/following-player-camera
Just re-read your question; it's AR, I skimmed over that at first ... so you don't want to move the camera, rather you want to move the object.
Finding the middle of the screen in world space is what you're asking for.
Camera.current.ViewportToWorldPoint(new Vector3(0.5f,0.5f, 100));
Use your camera to find a point in the world that is in the middle of your view. You have several options, ViewportToWorldPoint for example as shown above. Viewport coordinates treat (0.5, 0.5) as the middle of the screen; Z is the distance from the camera's origin point.
You could alternatively cast a ray from the center of your camera's view and find a point on the ground to put your character at ... I would need to know more about your world setup to help further.
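Rough sketch of that raycast idea, assuming your ground has a collider and that the camera and character references are assigned in the Inspector (the field names are just placeholders):

using UnityEngine;

public class CenterOnGround : MonoBehaviour
{
    public Camera arCamera;      // your AR camera
    public Transform character;  // the object to place

    void Update()
    {
        // Ray through the middle of the viewport (0.5, 0.5).
        Ray ray = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 100f))
        {
            // Put the character where the ray hits the ground.
            character.position = hit.point;
        }
    }
}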
My old answer is about how to move a camera ... I will leave it in case it's useful to you.
[Edit Old Answer]
The other answer will work and is nice and light but has its limitations as your project advances.
I recommend Cinemachine for camera work; in this case a simple Cinemachine virtual camera with its 'Look At' (and optionally its 'Follow') set is what you want.
https://unity3d.com/learn/tutorials/topics/animation/using-cinemachine-getting-started
Cinemachine tutorial above. In short, Cinemachine works with the concept of 'virtual cameras': these are just lightweight, easy-to-use behaviours that describe how to 'frame' a shot, e.g. what to look at, how to move, etc.
Your real camera gets a Cinemachine 'Brain', which simply listens to these virtual cameras and works out what to do with the real camera to make it happen. Getting to grips with this system will greatly improve your camera work and massively simplify it.
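As a quick illustration (a minimal sketch, assuming a CinemachineVirtualCamera already exists in the scene and your main camera has a CinemachineBrain; normally you would just set these fields in the Inspector):

using UnityEngine;
using Cinemachine;

public class FrameThePlayer : MonoBehaviour
{
    public CinemachineVirtualCamera virtualCamera; // the vcam in your scene
    public Transform player;                       // what to frame

    void Start()
    {
        virtualCamera.LookAt = player;  // keep the player in frame
        virtualCamera.Follow = player;  // optionally move with the player
    }
}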
Things like simply attaching the camera to the player object and similar work but have big limitations that end up biting you in the backside eventually.
Alternatively, you can write a script to transform the camera based on your own custom logic. The drawback here is the 'ball of code' problem ... at first the logic is simple, but as you want more and more specific shots framed up it quickly turns into a spaghetti monster.
Is it possible with a Google Tango camera to create a situation where my player walks on a table and, if he walks off the table, he falls? Has anyone ever done anything similar and has references or ideas on how to do it?
In order to implement the functionality you described, you will need to find the different planes in the real world and translate their positions into the Unity scene. There is a class in the Tango SDK called TangoPointCloud which contains several methods for recognizing planes and translating their positions into Unity scene points. By knowing the positions of the table and the floor, you might be able to implement the feature you want. In my case, TangoPointCloud helped me find the walls of a room and their positions relative to Unity scene units.
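A rough sketch of the idea, assuming the FindPlane helper on TangoPointCloud that the Tango Unity SDK examples use (the pointCloud and player references are placeholders you would assign yourself):

using UnityEngine;

public class PlaceOnTablePlane : MonoBehaviour
{
    public TangoPointCloud pointCloud; // from the Tango prefabs
    public Transform player;

    void Update()
    {
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Vector2 touchPos = Input.GetTouch(0).position;
            Vector3 planeCenter;
            Plane plane;

            // Fit a plane (e.g. the table top) at the touched screen position.
            if (pointCloud.FindPlane(Camera.main, touchPos, out planeCenter, out plane))
            {
                // Drop the player slightly above the fitted plane; a Rigidbody
                // plus a collider built from the plane lets it fall off the edge.
                player.position = planeCenter + Vector3.up * 0.05f;
            }
        }
    }
}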
I am trying to wrap my head around Vuforia's capabilities. I want to make an app which lets me place a 3D object into a camera view and have that 3D object stick to the world. I've been learning how to use Vuforia in Unity3D, and Vuforia seems to be slightly capable of this, but is severely limited by its craving for "Targets". It doesn't seem to be able to do much if I don't give it some sort of target.
One workaround I've found is to set the ARCamera's World Center Mode to DEVICE_TRACKING. This seems to let me place a 3D object into the world (in Unity) and have this object overlaid on the camera feed, almost making it seem like it's anchored to the real world. This doesn't work perfectly though: it tracks properly when I angle the device up/down/left/right (rotation), but it does not seem to track the device's translational motion; that is, when I move the device forward/back/left/right, the overlaid object doesn't get closer/farther, nor does it rotate as I move around it.
Is it possible to get this sort of tracking out of Vuforia, or am I better off switching to something like Google Tango?
The difficulty with setting World Center Mode to CAMERA in Vuforia is that 3D objects apparently rotate around the camera based on its accelerometer/gyroscope changes. This doesn't allow objects to be anchored to the environment; instead, they follow the camera.
Kudan is a good markerless tracking option.
I am developing a football game in Unity3D in which I need only the foot orientation of the user, from which I can detect the angle of the kick.
I am currently tracking the whole skeleton data, but I am not able to get the foot orientation angles precisely.
Is there any way I can track only the foot of the user without tracking the whole skeleton?
I have developed some games with Unity and Kinect. What I did is:
I could access the transform.position of each joint representing the skeleton. Then, by using the relative positions of several joints, I detected body gestures.
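For example, a minimal sketch of getting a foot angle from two joint positions (the joint transforms are placeholders for whatever your Kinect wrapper exposes for the ankle and foot):

using UnityEngine;

public class FootAngle : MonoBehaviour
{
    public Transform ankleJoint;
    public Transform footJoint;

    void Update()
    {
        // Direction the foot is pointing, from the ankle towards the foot tip.
        Vector3 footDirection = (footJoint.position - ankleJoint.position).normalized;

        // Angle of the foot relative to the forward axis, e.g. for judging a kick.
        float kickAngle = Vector3.Angle(footDirection, Vector3.forward);

        Debug.Log("Foot angle: " + kickAngle);
    }
}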
Hope this helps.
Let me know if you need any assistance.