VR/360 playback of gameplay recording - unity3d

Apologies if this question has been asked before, and apologies too if it is obvious to those with knowledge - I'm completely tech illiterate, especially when it comes to gaming, so bear with me!
I'm wondering whether it is possible to record gameplay (any console/platform) but be able to play this back in a 360/VR format?
The use case is this:
I want to watch and follow a game but rather than having 1st person PoV, I'd love to be able to use either a VR headset device (most ideal) or a 360 viewer (tablet or smartphone) to move perspective beyond forward facing field of vision.
Ideally the PoV would follow players (think spectator mode) and not necessarily be a static camera - although that's not necessarily a deal breaker.
Is this possible?
How would this be done with existing tools etc or would new tools need to be developed?
Would it be 'recorded' client-side or server-side - and would this matter?
Huge thanks in advance - also very very happy to be pointed in the direction of sources of info around this subject to consume if readily available.
Thanks
S

You need to connect the GameObject (the character) in your game that has the camera to your VR display (wherever you are coding the display), and write code that takes the image that camera renders and continuously updates it on the display, making it seem like you are in the game.
Look here: http://docs.unity3d.com/Manual/VROverview.html
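To give a rough idea of the spectating part, here is a minimal sketch (assuming a Unity scene where the VR SDK already rotates the camera from head tracking; the target and offset fields are placeholders you would assign yourself). For recording actual 360 footage rather than spectating live, you would instead capture the scene to a cubemap each frame (e.g. with Camera.RenderToCubemap) and encode that as 360 video.

```csharp
using UnityEngine;

// Minimal sketch: the rig follows the spectated player while the headset
// supplies the rotation, so the viewer can look around freely.
// "target" is whatever player/character transform you want to spectate
// (an assumption - assign it in the Inspector or from your replay data).
public class SpectatorFollow : MonoBehaviour
{
    public Transform target;                              // player being spectated
    public Vector3 offset = new Vector3(0f, 1.7f, -2f);   // rough eye-height offset

    void LateUpdate()
    {
        if (target == null) return;
        // Move only the rig; the VR SDK rotates the child camera from head tracking.
        transform.position = target.position + offset;
    }
}
```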

Related

Make an app that displays the desktop on iPhone in VR in Unity

I would like to make an AR iPhone app in Unity that places an object in the real world which you can then interact with on your iPhone. For example, you have a bar at the bottom of your screen and you can drag objects into the AR world and interact with them using hand tracking. This would work kind of like the Meta 2 interface (https://www.youtube.com/watch?v=m7ZDaiDwnxY), which lets you grab things and drag them; it uses hand tracking to do this.
I have done some research on this, but I need some help because I don't know where to start or how to accomplish what I am trying to do.
I don't have any code.
You can email me at jaredmiller219#gmail.com with any comments or questions, or to help me with this. Thanks so much for your support!
To get started in mobile AR in Unity, I would recommend starting with Unity's resources:
https://unity.com/solutions/mobile-ar
Here's a tutorial resource for learning ARKit:
https://unity3d.com/learn/learn-arkit
As for hand tracking, the Meta 2 obviously has specialized hardware to execute its features... you shouldn't necessarily expect to achieve the same feature set with only a phone driving your experience. Leap Motion is the most common hand tracker I've seen integrated into VR and AR setups, and it works well. But if you really need hand tracking with just a phone, you could check out ManoMotion, which seeks to bring hand tracking and gesture recognition to ARKit, although I haven't personally worked with it.
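If it helps to see what placing an object in the AR world looks like in code, here is a minimal sketch using Unity's ARFoundation (which wraps ARKit/ARCore). The class, field, and prefab names are placeholders, not a drop-in solution for the drag-from-a-bar interaction:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Minimal sketch: tap the screen to place a prefab on a detected plane.
// Assumes a scene with an AR Session and an AR Session Origin that has an ARRaycastManager.
public class TapToPlace : MonoBehaviour
{
    public ARRaycastManager raycastManager; // assign from the AR Session Origin
    public GameObject objectPrefab;         // the object you "drag" into the world

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast from the touch position against detected planes.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose pose = hits[0].pose;
            Instantiate(objectPrefab, pose.position, pose.rotation);
        }
    }
}
```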

Unity musical instrument in VR for MIDI output

I want to create a xylophone in VR that creates MIDI output. The MIDI output is needed so my team can use it later. If I understand correctly, MIDI contains information about the instrument in the channel and several other things like the note, pitch, velocity, etc. I don't quite get how I would create a valid MIDI output in Unity from a person playing a xylophone with Vive controllers in VR. I can track the note, timing, and velocity, but what parameters do I really need to create a valid MIDI output, or is it even possible in such a scenario?
It might be helpful to rephrase your question to clarify it, so that knowledgeable individuals can answer it with relevance.
It's definitely possible:
https://www.twitch.tv/videos/177585172
I hope it's not too late for a response! (I'm aware it's been a year)
I'm developing a VR toy that does just that, sending MIDI signals out to VCV Rack (or any synth/DAW, since it's MIDI <3) when I hit, twist, or interact with GameObjects.
I achieved this through midi-dot-net to turn collision events into MIDI:
https://code.google.com/archive/p/midi-dot-net/
(cool logo btw)
and then loopMIDI to open a virtual port on Windows to connect Unity to VCV.
https://www.tobias-erichsen.de/software/loopmidi.html
Both are free and a piece of cake to use.
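Roughly, turning a collision event into a MIDI note with midi-dot-net looks something like this - a minimal sketch rather than my exact code; the device index, pitch, and velocity mapping are placeholder assumptions:

```csharp
using UnityEngine;
using Midi; // midi-dot-net, linked above

// Minimal sketch: when something hits this xylophone bar, send a MIDI note
// to an output device (e.g. a virtual port created by loopMIDI) so a synth/DAW can pick it up.
public class MidiBar : MonoBehaviour
{
    public int deviceIndex = 0;      // which MIDI output to use (assumption: pick the loopMIDI port)
    public Pitch pitch = Pitch.C4;   // the note this bar plays

    OutputDevice output;

    void Start()
    {
        output = OutputDevice.InstalledDevices[deviceIndex];
        output.Open();
    }

    void OnCollisionEnter(Collision collision)
    {
        // Map impact speed to MIDI velocity (1-127) - crude but workable.
        int velocity = Mathf.Clamp(Mathf.RoundToInt(collision.relativeVelocity.magnitude * 20f), 1, 127);
        output.SendNoteOn(Channel.Channel1, pitch, velocity);
        output.SendNoteOff(Channel.Channel1, pitch, 0);
    }

    void OnDestroy()
    {
        if (output != null) output.Close();
    }
}
```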
Hope this is helpful!

Fixing object when camera open Unity AR

I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokémon Go: when the camera opens, the object will be fixed somewhere in the real world and you will have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are limited to only a few types of phone, and I tried to use the Kudan SDK but it didn't work.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas or someone to tell me where to start.
Thanks in advance.
The reason why plane detection is limited to only some phones at this time is partially because older/less powerful phones cannot handle the required computing power.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
Let me know if you have other questions :)
You can also use GPS data from the phone: when the user arrives at a specific place, you show the object. You can search for "GPS based Augmented Reality" on Google. You can check this video: https://www.youtube.com/watch?v=X6djed8e4n0
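As a rough sketch of that GPS idea in Unity (the coordinates, radius, and polling interval below are placeholders):

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch: start the location service, then reveal the hidden object
// once the player is within some radius of a target latitude/longitude.
public class GpsReveal : MonoBehaviour
{
    public GameObject hiddenObject;
    public double targetLat = 48.8584;    // placeholder coordinates
    public double targetLon = 2.2945;
    public float revealRadiusMeters = 30f;

    IEnumerator Start()
    {
        if (!Input.location.isEnabledByUser) yield break;
        Input.location.Start();
        while (Input.location.status == LocationServiceStatus.Initializing)
            yield return new WaitForSeconds(1f);
        if (Input.location.status != LocationServiceStatus.Running) yield break;

        while (true)
        {
            LocationInfo d = Input.location.lastData;
            if (DistanceMeters(d.latitude, d.longitude, targetLat, targetLon) < revealRadiusMeters)
                hiddenObject.SetActive(true);
            yield return new WaitForSeconds(2f);
        }
    }

    // Haversine distance between two lat/lon points, in meters.
    static double DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000.0;
        double dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        double dLon = (lon2 - lon1) * Mathf.Deg2Rad;
        double a = System.Math.Sin(dLat / 2) * System.Math.Sin(dLat / 2) +
                   System.Math.Cos(lat1 * Mathf.Deg2Rad) * System.Math.Cos(lat2 * Mathf.Deg2Rad) *
                   System.Math.Sin(dLon / 2) * System.Math.Sin(dLon / 2);
        return R * 2 * System.Math.Atan2(System.Math.Sqrt(a), System.Math.Sqrt(1 - a));
    }
}
```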

How to create an interactive wall?

I am a graphics/web designer with basic JS/PHP coding knowledge, and I am interested in learning to make interactive walls.
I would like to know from anyone experienced at this.
What tools, languages do you use?
Unity, Flash, Cinder, etc. - which makes it easier?
Thanks
If you just want basic interaction, po-motion.com is a really easy place to start. It tracks motion for simple effects like leaves being brushed away or revealing one image under another. It works using blob detection and can be set up on a Mac or PC using a USB camera and any display you can connect your computer to. It also supports some versions of the Kinect on Windows.
This would be quite hard to make with "basic knowledge of JS/PHP". However, the way I think you would handle this is to build the application as you normally would, but have it controlled by touch input, so your wall would be touch/pressure controlled. I'm not an engineer, just a programmer, so I don't know how you would make the actual wall, but I do know Unity has some good syntax for touch input, which I have used. This is a very broad question, but I would recommend looking into Unity's pre-built touch classes and input interpretation.
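For example, a minimal sketch of Unity's touch handling (the OnWallTouched message name is just a placeholder for whatever your wall content responds to):

```csharp
using UnityEngine;

// Minimal sketch: every new touch is raycast into the scene, and whatever it
// hits gets notified. Attach to any object in a scene with a main camera.
public class WallTouchInput : MonoBehaviour
{
    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            if (touch.phase != TouchPhase.Began) continue;

            Ray ray = Camera.main.ScreenPointToRay(touch.position);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // React however your wall content needs to (handler name is assumed).
                hit.collider.SendMessage("OnWallTouched", SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
```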
As you have alluded to in your tags, one solution is to create a gesture-controlled experience with the Kinect.
The best starting point for this would be to download the SDK and get the hardware:
http://www.microsoft.com/en-us/kinectforwindows/
The SDK comes with demos and existing working C# code in Kinect Explorer that creates the 'interactive wall' experience (see Controls Basics, documentation here: https://msdn.microsoft.com/en-us/library/dn188701.aspx).
You can practically run the demo and replace the images to get your experience started. Just make sure you have the right specs on your machine (https://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx ), and you have a good screen.
In terms of programming language, there's no better opportunity to learn C# than from these demos :p
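To give a flavour of that C#, a stripped-down skeleton-tracking loop with the v1.x SDK is only a few lines - a sketch from memory, not production code; mapping hand positions to whatever is shown on the wall is left out:

```csharp
using System.Linq;
using Microsoft.Kinect; // Kinect for Windows SDK v1.x

class WallTracker
{
    static void Main()
    {
        // Grab the first connected sensor and enable skeleton tracking.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (sender, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                foreach (var skel in skeletons)
                {
                    if (skel.TrackingState != SkeletonTrackingState.Tracked) continue;
                    // The hand joint drives a cursor or effect on the wall.
                    SkeletonPoint hand = skel.Joints[JointType.HandRight].Position;
                    // ... map hand.X / hand.Y to screen coordinates here ...
                }
            }
        };

        sensor.Start();
        System.Console.ReadLine(); // keep the process alive
        sensor.Stop();
    }
}
```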

Eyetracking for the iPhone?

Has anyone experimented with eyetracking for the iPhone or heard of projects related to eyetracking in iOS?
Is it technically feasible at all?
How problematic would recording that data be in the light of ongoing privacy discussions?
There is a head-tracking technique introduced by Johnny Lee.
I found this project, which applies that technique:
Head tracking for iPad
Hope you find it useful.
I think this is feasible as long as the phone's camera is pointed at the user's head. It would probably require pretty good light for the image to be crisp enough for it to be able to recognize the face and eyes properly, though.
When the user isn't looking directly at the camera, you would probably need to do some sort of "head recognition" and determine the orientation of the user's head. This would give you a rough direction on where they are looking.
I'm not aware of any face recognition related projects for iOS, but you could probably google and find some existing project in some other language and port it to iOS with a bit of effort.
As a side note, there's a DIY project for the head-orientation-tracking part on PC. It uses infrared lights placed, for example, on your headphones, and a camera that determines the orientation of your head from them. Perhaps it'll give you some ideas.
I know it's nearly 3 years late, but I just came across this commercial cross-platform SDK, which among other things does eye tracking. I think this kind of technology will be coming in future iOS/Android versions.
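For what it's worth, recent iPhones with a TrueDepth camera do expose per-eye transforms and a fixation point through ARKit face tracking. In Unity via ARFoundation, a minimal sketch (attached to the face prefab used by an ARFaceManager) would look roughly like this; treat it as an illustration rather than a tested implementation:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch: log the gaze fixation point reported by ARKit face tracking.
// Requires a device/session that supports eye tracking (TrueDepth camera).
[RequireComponent(typeof(ARFace))]
public class EyeGazeLogger : MonoBehaviour
{
    ARFace face;

    void Awake()
    {
        face = GetComponent<ARFace>();
        face.updated += OnFaceUpdated;
    }

    void OnFaceUpdated(ARFaceUpdatedEventArgs args)
    {
        // fixationPoint is null when eye tracking isn't supported.
        if (face.fixationPoint == null) return;
        Debug.Log($"Gaze fixation (face space): {face.fixationPoint.localPosition}");
    }

    void OnDestroy()
    {
        face.updated -= OnFaceUpdated;
    }
}
```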