I'm new to Unity, so I don't have much experience or knowledge of the subject. But I want to make a project where an IP camera feed is projected onto an object. For example, let's say I want to virtualize my security room. I have 2 security cameras in real life, so I need 2 in-game monitors showing the real-time capture. I was able to project my own webcam, but I don't know how to do it with IP cameras. I tried WebRTC, but it works exactly opposite to what I want. Is this possible? I still don't know the capabilities of Unity.
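For reference, here is a minimal sketch of putting a live feed on an in-game monitor, covering both the webcam case (already working in the question) and a possible IP-camera route via Unity's VideoPlayer, which can play network URLs. The URL below is a made-up placeholder, and whether VideoPlayer can decode a given stream is platform- and codec-dependent; many IP cameras only serve RTSP or MJPEG, which may need a plugin or frame-by-frame decoding instead.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Minimal sketch: put a live feed on a "monitor" object's material.
// cameraUrl is a placeholder; whether VideoPlayer can decode your camera's
// stream depends on the stream format and the target platform.
public class MonitorFeed : MonoBehaviour
{
    public Renderer monitor;                 // the in-game monitor mesh
    public bool useWebcam = true;            // true = local webcam, false = IP stream
    public string cameraUrl = "http://192.0.2.1/stream.mp4"; // placeholder URL

    void Start()
    {
        if (useWebcam)
        {
            // The approach already working in the question:
            var webcam = new WebCamTexture();
            monitor.material.mainTexture = webcam;
            webcam.Play();
        }
        else
        {
            // Hand the stream to VideoPlayer and let it drive the material.
            var player = gameObject.AddComponent<VideoPlayer>();
            player.source = VideoSource.Url;
            player.url = cameraUrl;
            player.renderMode = VideoRenderMode.MaterialOverride;
            player.targetMaterialRenderer = monitor;
            player.targetMaterialProperty = "_MainTex";
            player.isLooping = true;
            player.Play();
        }
    }
}
```

For the two-camera setup described above, you would attach one instance of this per monitor, each pointed at its own camera's URL.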
I would like to make an AR iPhone app in Unity that places an object in the real world, which you can then interact with on your iPhone. Like, you have a bar at the bottom of your screen, and you can drag the objects into the AR world and interact with them using hand tracking. This would work kind of like the Meta 2 interface (https://www.youtube.com/watch?v=m7ZDaiDwnxY), where you can grab things and drag them; it uses hand tracking to do this.
I have done some research on this, but I need some help because I don't know where to start or how to accomplish what I am trying to do.
I don't have any code.
You can email me at jaredmiller219#gmail.com with any comments or questions, or to help me with this. Thanks so much for your support!
To get started in mobile AR in Unity, I would recommend Unity's own resources:
https://unity.com/solutions/mobile-ar
Here's a tutorial resource for learning ARKit:
https://unity3d.com/learn/learn-arkit
As for hand tracking: obviously the Meta 2 has specialized hardware to execute its features, so you shouldn't necessarily expect to achieve the same feature set with only a phone driving your experience. Leap Motion is the most common hand tracker I've seen integrated into VR and AR setups, and it works well. But if you really need hand tracking with just a phone, you could check out ManoMotion, which seeks to bring hand tracking and gesture recognition to ARKit, although I haven't personally worked with it.
I'm very new to this. During my research for my PhD thesis I found a way to solve a problem, and for that I need to move my lab testing into a virtual environment. I have an Oculus Rift and an OPTOTRAK system that allows me to motion-capture a full body for VR (in theory). My question is: can someone point me in the right direction as to what materials I need to check out to start working on such a project? I have a background in programming, so I just need a nudge in the right direction (or a pointer to a similar project).
https://www.researchgate.net/publication/301721674_Insert_Your_Own_Body_in_the_Oculus_Rift_to_Improve_Proprioception - I want to make something like this :)
Thanks a lot
Nice challenge too. How accurate and how real-time does the image of your body in the Oculus Rift world need to be? My two (or three) cents:
A selfie-based approach would be the most comfortable for the user: there's an external camera somewhere, and the software transforms your image to reflect the correct perspective, as you would see your body through the Oculus at any moment. This is not trivial and requires quite expensive vision software. To make it work through 360 degrees, there would have to be more than one camera, watching every individual Oculus user in the room!
An indirect approach could be easier: model your body and only show its dynamics. There are Wii-style electronics in bracelets and on/in special user clothing, involving multiple tilt and acceleration sensors. Together they form a cluster of "body state" sensor information to be accessed by the modeller in the software. No camera is needed, and the software is not that complicated if you use a skeleton model (sketched below).
Or combine the two: use the camera for the rendered texture and drive the skeleton model via the dynamics reported by the clothing sensors. Maybe deep learning could be applied: in conjunction with a large number of tilt sensors in the clothing, a variety of body-movement patterns could be trained and connected to the rendering in the Oculus. This would need the same hardware as the previous solution, but the software could be simpler, your body would look properly textured, and it would move less "mechanistically". Some research would be needed to find the right deep-learning strategy.
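To make the skeleton idea concrete, here is a minimal sketch of driving one joint from a tilt sensor in Unity. The IBodySensor interface is a hypothetical stand-in for whatever bracelet/clothing hardware ends up being used; only the tilt-sensor idea comes from the answer above.

```csharp
using UnityEngine;

// Hypothetical wrapper around a bracelet/clothing tilt sensor: all the
// skeleton driver needs from it is the direction of gravity in sensor space.
public interface IBodySensor
{
    Vector3 GravityDirection { get; } // unit vector, e.g. (0, -1, 0) at rest
}

public class JointFromTiltSensor : MonoBehaviour
{
    public Transform joint;            // e.g. the forearm bone of the skeleton
    public IBodySensor sensor;         // assigned by your sensor-integration code
    [Range(0f, 1f)] public float smoothing = 0.1f;

    void Update()
    {
        if (sensor == null || joint == null) return;

        // Tilt only: rotate the joint so its local "down" matches the gravity
        // vector the sensor reports. Yaw is unobservable from gravity alone;
        // a gyroscope or magnetometer would be needed for that axis.
        Quaternion target = Quaternion.FromToRotation(Vector3.down, sensor.GravityDirection);
        joint.localRotation = Quaternion.Slerp(joint.localRotation, target, smoothing);
    }
}
```

One such component per instrumented joint, fed by the sensor cluster, is the core of the indirect approach; the texture and deep-learning variants would layer on top of it.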
I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokémon Go: when the camera opens, the object will be fixed somewhere in the real world, and you will have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are limited to only a few types of phone, and I tried the Kudan SDK but it didn't work.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas, or someone to tell me where to start.
Thanks in advance.
The reason plane detection is limited to only some phones at this time is partly that older/less powerful phones cannot deliver the required computing power.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
Let me know if you have other questions :)
You can use the phone's GPS data: when the user arrives at a specific place, you show the object. You can search for "GPS-based augmented reality" on Google. You can check this video: https://www.youtube.com/watch?v=X6djed8e4n0
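A minimal sketch of that GPS-gating idea in Unity, using the built-in location service; the target coordinates and the 25 m radius are made-up example values:

```csharp
using System.Collections;
using UnityEngine;

// Hide an object until the phone reports a position close to a target
// coordinate. Uses Unity's built-in Input.location service.
public class ShowObjectNearLocation : MonoBehaviour
{
    public GameObject arObject;          // the object to reveal
    public double targetLat = 48.8584;   // example coordinates, replace with yours
    public double targetLon = 2.2945;
    public float radiusMeters = 25f;     // example trigger radius

    IEnumerator Start()
    {
        arObject.SetActive(false);

        if (!Input.location.isEnabledByUser) yield break;   // user denied GPS
        Input.location.Start();
        while (Input.location.status == LocationServiceStatus.Initializing)
            yield return new WaitForSeconds(1f);
        if (Input.location.status != LocationServiceStatus.Running) yield break;

        while (true)
        {
            var fix = Input.location.lastData;
            if (DistanceMeters(fix.latitude, fix.longitude, targetLat, targetLon) < radiusMeters)
                arObject.SetActive(true);
            yield return new WaitForSeconds(1f);
        }
    }

    // Equirectangular approximation; accurate enough over tens of meters.
    static float DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000.0; // Earth radius in meters
        double dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        double dLon = (lon2 - lon1) * Mathf.Deg2Rad * System.Math.Cos(lat1 * Mathf.Deg2Rad);
        return (float)(R * System.Math.Sqrt(dLat * dLat + dLon * dLon));
    }
}
```

Keep in mind that consumer GPS is only accurate to a few meters at best, so this works for "somewhere in this courtyard", not for centimeter-precise placement.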
I'm developing VR using the Google Cardboard SDK.
I want to move in the virtual environment when I walk in the real world, like this: https://www.youtube.com/watch?v=sZG5__Z9pzs&feature=youtu.be&t=48
Is it possible to make a VR application like that for Android, maybe using the accelerometer sensor? How can I implement this using Unity?
I tried recording the accelerometer sensor while walking with the smartphone; here are the results: https://www.youtube.com/watch?v=ltPwS7-3nOI [I think the accelerometer values are really noisy]
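For reference, the raw values look noisy partly because the accelerometer mixes gravity with hand shake; here is a minimal sketch of reading it and splitting those components with a simple low-pass filter (the smoothing constant is an arbitrary example). Note this only cleans up the signal; it does not give you position, for the reasons in the answer below.

```csharp
using UnityEngine;

// Read the phone's accelerometer and split it into a smooth gravity
// estimate and a high-frequency "shake" component.
public class AccelerometerFilter : MonoBehaviour
{
    [Range(0f, 1f)] public float smoothing = 0.1f; // higher = reacts faster
    Vector3 gravityEstimate;

    void Update()
    {
        Vector3 raw = Input.acceleration;                                 // in g, device axes
        gravityEstimate = Vector3.Lerp(gravityEstimate, raw, smoothing);  // low-pass
        Vector3 shake = raw - gravityEstimate;                            // high-pass residual

        Debug.Log($"gravity ≈ {gravityEstimate}, shake = {shake}");
    }
}
```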
Actually, it is not possible with only a mobile phone:
You're up against a fundamental limitation of the humble IMU (the primary motion sensor in a smartphone).
I won't go into much detail, but basically you need an external reference frame when trying to extract positional data from acceleration data: position comes from integrating acceleration twice, so even a tiny sensor bias grows quadratically into unbounded drift within seconds. This is the topic of a lot of research right now, and it's why VR headsets that track position, like the Oculus Rift, have external tracking cameras.
Unfortunately, what you're trying to do is impossible without using the camera on your phone to track visual features in the scene and use those as the external reference point, which is a hell of a task better suited to a lab full of computer vision experts.
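A toy illustration of that drift (not a solution): naive dead reckoning by double integration, with a small constant bias standing in for real IMU error. The bias value is an arbitrary example.

```csharp
using UnityEngine;

// Toy demo: integrate a biased acceleration reading twice, as naive
// dead reckoning would, and watch the position error grow with t^2.
public class DeadReckoningDrift : MonoBehaviour
{
    // ~0.0001 g of constant bias; real IMU error is messier but comparable.
    public Vector3 accelerometerBias = new Vector3(0.001f, 0f, 0f); // m/s^2

    Vector3 velocity, position;

    void Update()
    {
        // The phone is actually at rest; all we "measure" is the bias.
        Vector3 measured = accelerometerBias;

        velocity += measured * Time.deltaTime;  // first integration
        position += velocity * Time.deltaTime;  // second integration

        // Error after t seconds is 0.5 * b * t^2: with b = 1 mm/s^2,
        // that is already about 1.8 m of drift after one minute.
        Debug.Log($"t = {Time.time:F0}s, drift = {position.magnitude:F3} m");
    }
}
```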
Another possible but difficult way:
It may be possible if you connect the device to the internet and then track its position via satellite positioning (Google Maps or something like that), but that is a very hard thing to do.
Apologies if this question has been asked before, and apologies too if it is obvious to those with knowledge; I'm completely tech-illiterate, especially when it comes to gaming etc., so bear with me!
I'm wondering whether it is possible to record gameplay (any console/platform) and then play it back in a 360°/VR format.
The use case is this:
I want to watch and follow a game, but rather than having a first-person PoV, I'd love to be able to use either a VR headset (most ideal) or a 360° viewer (tablet or smartphone) to move the perspective beyond the forward-facing field of vision.
Ideally the PoV would follow players (think spectator mode) rather than being a static camera, although that's not necessarily a deal breaker.
Is this possible?
How would this be done with existing tools etc or would new tools need to be developed?
Would it be 'recorded' client-side or server-side, and would this matter?
Huge thanks in advance. I'm also very happy to be pointed in the direction of sources of info on this subject, if readily available.
Thanks
S
You need to connect the GameObject (character) in your game that has the camera to your VR display (wherever you are coding the display), and write code that takes the image that camera renders and keeps it continuously updating on the display, making it seem like you are in the game.
Look here: http://docs.unity3d.com/Manual/VROverview.html
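A minimal sketch of that idea in Unity: render a second camera (e.g. one following a player) into a RenderTexture and show it on any surface, which updates automatically every frame. A real VR rig would instead parent the camera to the tracked headset, but the RenderTexture route is the general-purpose version of "take the image that camera renders and keep it updating". The resolution values are arbitrary examples.

```csharp
using UnityEngine;

// Route a spectator camera's image onto a display surface via a
// RenderTexture; the texture refreshes automatically every frame.
public class SpectatorFeed : MonoBehaviour
{
    public Camera spectatorCamera;   // camera following the character
    public Renderer screen;          // quad/monitor mesh that shows the feed

    void Start()
    {
        var feed = new RenderTexture(1280, 720, 24);   // width, height, depth bits
        spectatorCamera.targetTexture = feed;          // camera now renders into it
        screen.material.mainTexture = feed;            // surface displays the live feed
    }
}
```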