I want to get frames from both back cameras at the same time (to compute a disparity map) on an Android smartphone. Is there a way to do this?
I tried to list the cameras of a smartphone that has two rear cameras and one front camera using "WebCamTexture.devices", but it reports only two devices: one back camera and one front camera. I then tried to print "WebCamDevice.depthCameraName" and got nothing.
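Roughly the enumeration I ran looks like this (a minimal sketch; the class name is just a placeholder):

```csharp
using UnityEngine;

public class ListCameras : MonoBehaviour // placeholder name
{
    void Start()
    {
        // List every camera Unity exposes through WebCamTexture.
        foreach (WebCamDevice device in WebCamTexture.devices)
        {
            // On my phone this prints only one back-facing and one front-facing entry.
            Debug.Log("Camera: " + device.name + ", front facing: " + device.isFrontFacing);
            // This comes back empty on my device.
            Debug.Log("Depth camera: " + device.depthCameraName);
        }
    }
}
```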
I know there is a Depth API that provides depth images, but I don't want to use it since it's based on a SLAM algorithm.
I appreciate your help; I'm still a newbie in Unity.
I am playing a video over an image target using the Vuforia plugin in Unity3D. It is a simple green-screen video, and I am using a shader to remove the green background.
Sometimes when the video plays over the image target it becomes very shaky and jittery, which degrades the AR experience. How can I avoid or reduce this?
I have tried multiple ways to get rid of it, with no success. Here is what I tried:
Previously I was embedding the video in a Plane; I then switched to a Quad, but with no success.
I tried changing the AR Camera's World Center Mode to different values such as FIRST_TARGET, CAMERA and SPECIFIC_TARGET, but the problem remains.
Also, my Vuforia target image has a 5-star rating in the Vuforia target database.
What could be the solution to this problem? Any help would be highly appreciated. Thanks!
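One mitigation that is not among the attempts above (treat it as an untested, hypothetical sketch) is to stop parenting the video quad directly to the image target and instead have a small script ease the quad toward the target's pose each frame:

```csharp
using UnityEngine;

// Untested sketch: smooths the video quad's pose instead of snapping it to the
// tracked image target every frame, which can hide small tracking jitter.
public class PoseSmoother : MonoBehaviour
{
    public Transform trackedTarget;          // the ImageTarget the quad should follow
    [Range(1f, 30f)] public float smoothing = 10f;

    void LateUpdate()
    {
        if (trackedTarget == null) return;

        // Frame-rate independent interpolation factor.
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, trackedTarget.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, trackedTarget.rotation, t);
    }
}
```

The trade-off is a slight visible lag between the target and the video, so the smoothing value needs tuning.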
I want to build a kind of photo booth. A normal booth is boring, so I decided to build a funky, fancy one.
I will use two Raspberry Pis: one to stream, shoot and print the photo, and the other to display the live video stream.
The streaming, shooting and printing part is already done. Now I am building the video stream display part.
I will show the picture in 1:1 format, because I want to display every third shot rotated by a random angle. That way the people in front of the TV have to tilt their heads, and I will get strange and funny pictures. Maybe it is even possible to rotate it constantly, like a hypnotic spiral.
On Windows with VLC the rotation of the stream works very well. How can I do this on a Raspberry Pi?
I have now done it with HTML and a browser in fullscreen mode.
Rotate the video stream and crop it to 1:1
I am involved in a virtual-reality project using the HTC Vive device, Unity, and the SteamVR SDK to communicate with the Vive.
Using the joysticks, the end user has to draw shapes (for example a circle), and the movement starts when they press a button on the controller.
From all the generated data (the joysticks' output), how could I detect a circle?
Do you have any documentation on this?
Please correct me if I understand your concern incorrectly here:
You use the joysticks to draw shapes such as circles in an app like SteamVR Home,
and you want to detect what you have drawn in software, and perhaps show the result on screen in real time or save it to a file.
That means you need the ability to capture the rendered images and detect their content using algorithms such as deep learning.
The HTC Vive is compatible with the OpenVR SDK:
https://github.com/ValveSoftware/openvr
You can use the OpenVR SDK to build your own SteamVR driver and get images in real time using the direct mode component of the SDK. There is a lot of work to do even before adding the detection algorithm, because you need a SteamVR driver that SteamVR can actually run.
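For what it's worth, if the raw controller positions are already recorded (as the question suggests), a purely geometric check can flag a circle without capturing rendered images at all. The sketch below is only an illustration; the class name, the 2D projection of the stroke and the tolerance value are assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: decide whether a recorded stroke (already projected onto
// the 2D drawing plane) looks like a circle by checking how evenly the samples
// sit around their centroid.
public static class CircleDetector
{
    public static bool LooksLikeCircle(IList<Vector2> stroke, float tolerance = 0.15f)
    {
        if (stroke == null || stroke.Count < 8) return false;

        // Centroid of the stroke.
        Vector2 center = Vector2.zero;
        foreach (Vector2 p in stroke) center += p;
        center /= stroke.Count;

        // Mean distance from the centroid acts as the estimated radius.
        float meanRadius = 0f;
        foreach (Vector2 p in stroke) meanRadius += Vector2.Distance(p, center);
        meanRadius /= stroke.Count;
        if (meanRadius < 1e-4f) return false;

        // For a circle, every sample should sit close to that radius.
        float maxDeviation = 0f;
        foreach (Vector2 p in stroke)
            maxDeviation = Mathf.Max(maxDeviation, Mathf.Abs(Vector2.Distance(p, center) - meanRadius));

        return maxDeviation / meanRadius < tolerance;
    }
}
```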
I am developing an augmented reality app for Project Tango using Unity3d.
Since I want virtual objects to interact with the real world, I use the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17 degrees so the Dynamic Mesh matches the room; however, there is still a significant offset from the live camera preview.
I was wondering if anyone who has dealt with this before could share their solution for aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
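As a sketch of the composition described above (the two input poses are placeholders that would come from the corresponding Tango pose queries):

```csharp
using UnityEngine;

// Sketch only: compose the color-camera-in-device-frame pose from the two poses
// the Tango API does expose (device in the IMU frame, color camera in the IMU frame).
public static class ColorCameraOffset
{
    public static Matrix4x4 ColorCameraInDeviceFrame(Matrix4x4 deviceInImuFrame,
                                                     Matrix4x4 colorCameraInImuFrame)
    {
        // T_device_color = inverse(T_imu_device) * T_imu_color
        return deviceInImuFrame.inverse * colorCameraInImuFrame;
    }
}
```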
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match with what I'd expect.
I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that when I look in first-person view, the point cloud doesn't fill the screen, and when I look in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but that app appears to be written in Java, so I think it's a Unity compatibility issue.
Does anyone have the same problem, or a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English.
Regards.
EDIT:
It looks like the ExperimentalAugmentedReality scene uses the point cloud to place markers in the real world, and in that scene the point cloud is right in front of the camera. I don't see any script difference between the two scenes, so I don't understand why it works there. Let me know if you have any idea.
I think it makes sense to divide your question into two parts.
Why the points do not fill the screen in the point cloud example.
To make the points fill the first-person view, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango is just using the default Unity camera FOV, which is why you see that the points do not fill the screen (the render camera).
In the third-person camera view, the frustum is just a visual representation of the device's movement; it doesn't reflect the FOV or any other camera intrinsics of the device. For visualization purposes, Tango Explorer might specifically match the camera frustum size to the actual camera FOV, but that is not guaranteed to be 100% accurate.
Why the AR example works.
In the AR example, we must set the virtual render camera's FOV to match the physical camera's FOV; otherwise the AR view will be off. On the Tango hardware, the color camera and the depth camera are the same camera sensor, so they share the same FOV. That's why the AR example works.
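As a minimal sketch of what matching the FOV means in Unity (the intrinsics values below are placeholders; in a real app they would come from the color camera's intrinsics):

```csharp
using UnityEngine;

// Sketch only: set the Unity render camera's vertical FOV from pinhole
// intrinsics so that projected points and meshes line up with the camera image.
[RequireComponent(typeof(Camera))]
public class MatchCameraFov : MonoBehaviour
{
    public float focalLengthYPixels = 1042f; // placeholder fy from the camera intrinsics
    public float imageHeightPixels = 720f;   // placeholder image height in pixels

    void Start()
    {
        // Vertical FOV in degrees for a pinhole camera: 2 * atan(h / (2 * fy)).
        GetComponent<Camera>().fieldOfView =
            2f * Mathf.Atan(imageHeightPixels / (2f * focalLengthYPixels)) * Mathf.Rad2Deg;
    }
}
```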