I've spent days on this, so I'm writing this for anyone who comes across it.
Problem:
I created a spectator camera for a multiplayer game in Unity, using a Cinemachine Free Look camera.
Everything looked fine offline, but there was a lot of jitter as soon as networking was involved.
Solution:
For some people, the fix is changing the CinemachineBrain's Update Method, so that's a good place to start. It was not the solution in my case.
If you have jitter, turn off Damping on the Free Look camera. Make sure you do it in all three rigs, in both the Body and Aim components. (I missed one rig's Body component and couldn't find the bug for days.)
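If you'd rather enforce this from code than click through the inspector, here's a minimal sketch, assuming Cinemachine 2.x (component and field names differ in newer versions):

```csharp
using Cinemachine;
using UnityEngine;

// Minimal sketch (Cinemachine 2.x): zero the damping on all three FreeLook
// rigs, in both the Body and Aim stages, so networked position updates are
// not smoothed into jitter. Attach to the CinemachineFreeLook object.
public class DisableFreeLookDamping : MonoBehaviour
{
    void Awake()
    {
        var freeLook = GetComponent<CinemachineFreeLook>();
        for (int i = 0; i < 3; i++) // 0 = top, 1 = middle, 2 = bottom rig
        {
            CinemachineVirtualCamera rig = freeLook.GetRig(i);

            // Body stage: FreeLook rigs use an orbital transposer.
            var body = rig.GetCinemachineComponent<CinemachineOrbitalTransposer>();
            if (body != null)
            {
                body.m_XDamping = 0f;
                body.m_YDamping = 0f;
                body.m_ZDamping = 0f;
            }

            // Aim stage: usually a composer.
            var aim = rig.GetCinemachineComponent<CinemachineComposer>();
            if (aim != null)
            {
                aim.m_HorizontalDamping = 0f;
                aim.m_VerticalDamping = 0f;
            }
        }
    }
}
```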
Related
I would like to make an AR iPhone app in Unity that places an object in the real world, which you can then interact with on your iPhone. For example, you have a bar at the bottom of your screen and you can drag objects into the AR world and interact with them using hand tracking. This would work somewhat like the Meta 2 interface (https://www.youtube.com/watch?v=m7ZDaiDwnxY), which uses hand tracking to let you grab and drag things.
I have done some research on this, but I need some help because I don't know where to start or how to accomplish what I am trying to do.
I don't have any code.
You can email me at jaredmiller219#gmail.com with any comments or questions, or to help me with this. Thanks so much for your support!
To get started in mobile AR in Unity, I would recommend starting with Unity's resources:
https://unity.com/solutions/mobile-ar
Here's a tutorial resource for learning ARKit:
https://unity3d.com/learn/learn-arkit
As for hand tracking: the Meta 2 has specialized hardware to execute its features, so you shouldn't expect to achieve the same feature set with only a phone driving your experience. Leap Motion is the most common hand tracker I've seen integrated into VR and AR setups, and it works well. If you really need hand tracking with just a phone, you could check out ManoMotion, which aims to bring hand tracking and gesture recognition to ARKit, although I haven't personally worked with it.
I'm very new to this. During the research for my PhD thesis I found a way to solve a problem, and for that I need to move my lab testing into a virtual environment. I have an Oculus Rift and an OPTOTRAK system that, in theory, allows me to motion-capture a full body for VR. Can someone point me in the right direction: what materials do I need to check out to start working on such a project? I have a background in programming, so I just need a nudge in the right direction (or a pointer to a similar project).
https://www.researchgate.net/publication/301721674_Insert_Your_Own_Body_in_the_Oculus_Rift_to_Improve_Proprioception - I want to make something like this :)
Thanks a lot
Nice challenge. How accurate and how real-time does the image of your body in the Oculus Rift world have to be? My two (or three) cents:
A selfie-based approach would be the most comfortable for the user: an external camera somewhere, and software that transforms your image to reflect the correct perspective, so that you see your body through the Oculus as it is at any moment. This is not trivial and requires quite expensive vision software. To make it work through 360 degrees there would have to be more than one camera, watching every individual Oculus user in a room.
An indirect approach could be easier: model your body and only show its dynamics. There are Wii-style electronics in bracelets and on/in special user clothing, involving multiple tilt and acceleration sensors. Together they form a cluster of "body state" information that the modelling software can access. No camera is needed, and the software is not that complicated if you use a skeleton model (sketched below, after this answer).
Combine the two. Use the camera for the rendered texture and drive the skeleton model with the dynamics from the clothing sensors. Deep learning could also be applied: with a large number of tilt sensors in the clothing, a variety of body-movement patterns could be trained and connected to the rendering in the Oculus. This would need the same hardware as the previous solution, but the software could be easier, your body would look properly textured, and it would move less mechanically. Some research would be needed to find the right deep-learning strategy.
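To make the indirect approach concrete, here is a minimal, hedged sketch in Unity C#. It assumes each wearable sensor delivers a fused orientation quaternion through whatever SDK the hardware exposes; OnSensorSample, the calibration step, and the bone wiring are illustrative assumptions, not any specific product's API:

```csharp
using UnityEngine;

// Sketch of the "indirect" approach: one wearable IMU drives one bone of a
// rigged avatar. The sensor SDK is hypothetical; call OnSensorSample with
// each fused orientation your hardware reports.
public class ImuDrivenBone : MonoBehaviour
{
    public Transform bone;                           // avatar bone this sensor drives
    private Quaternion calibration = Quaternion.identity;

    // Capture the offset once while the user holds a known pose (T-pose).
    public void Calibrate(Quaternion sensorOrientation)
    {
        calibration = Quaternion.Inverse(sensorOrientation) * bone.rotation;
    }

    // Apply each new sample: sensor orientation plus the calibration offset.
    public void OnSensorSample(Quaternion sensorOrientation)
    {
        bone.rotation = sensorOrientation * calibration;
    }
}
```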
Apologies if this question has been asked before, and apologies too if it is obvious to those with knowledge - I'm completely tech illiterate, especially when it comes to gaming, so bear with me!
I'm wondering whether it is possible to record gameplay (any console/platform) but play it back in a 360/VR format?
The use case is this:
I want to watch and follow a game, but rather than having a first-person PoV, I'd love to be able to use either a VR headset (most ideal) or a 360 viewer (tablet or smartphone) to move my perspective beyond the forward-facing field of vision.
Ideally the PoV would follow players (think spectator mode) and not necessarily be a static camera - although that's not necessarily a deal breaker.
Is this possible?
How would this be done with existing tools, or would new tools need to be developed?
Would it be 'recorded' client-side or server-side - and would this matter?
Huge thanks in advance - also very, very happy to be pointed in the direction of sources of info on this subject, if readily available.
Thanks
S
You need to connect the GameObject (character) in your game that has the camera to your VR display (wherever you are coding the display), and write code that takes the image that camera displays and updates it continuously, making it seem like you are in the game.
Look here: http://docs.unity3d.com/Manual/VROverview.html
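As a minimal sketch of the "camera follows a player, headset shows what it sees" idea in Unity: with VR enabled in Player Settings, Unity renders the main camera to the headset every frame, so the only thing you add is the follow logic. The player reference and offset below are illustrative assumptions:

```csharp
using UnityEngine;

// Sketch: keep the (VR) camera anchored to the spectated character each
// frame. With VR enabled in Player Settings, Unity renders this camera to
// the headset automatically; head rotation comes from the HMD itself.
public class SpectatorFollowCamera : MonoBehaviour
{
    public Transform player;                          // GameObject to spectate
    public Vector3 offset = new Vector3(0f, 2f, -4f); // behind and above

    void LateUpdate()
    {
        // Re-anchor after the player has moved this frame so the view
        // keeps updating continuously, as described above.
        transform.position = player.position + player.rotation * offset;
    }
}
```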
I am working on an iPhone racing car game using cocos3d. How can I detect the boundaries of the road in the road map? I have already loaded the POD file for the road map. I also want to know how I can implement physics for a car accident. Is there any sample game code or tutorial I can follow to get this information?
If I were you, I'd use Unity. Check out their Car Tutorial; that's what you need.
Good luck
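If you do go the Unity route, the "car accident" part mostly falls out of the built-in physics. A hedged sketch, where the Rigidbody setup, the speed threshold, and the logging are illustrative assumptions:

```csharp
using UnityEngine;

// Attach to a car that has a Rigidbody and a Collider. Unity's physics
// engine reports every impact; we treat hard ones as accidents.
public class CarCrashDetector : MonoBehaviour
{
    public float crashSpeedThreshold = 8f; // relative impact speed, m/s

    void OnCollisionEnter(Collision collision)
    {
        // relativeVelocity grows with how hard the two bodies met, so a
        // fast hit against a road boundary reads as a crash.
        if (collision.relativeVelocity.magnitude > crashSpeedThreshold)
        {
            Debug.Log("Crash against " + collision.gameObject.name);
        }
    }
}
```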
There are very few examples and tutorials available for Cocos3D at the moment. Definitely nothing that fits your bill.
Your project sounds like it involves a lot of pioneering. Physics in particular will be an issue, because both Box2D and Chipmunk are 2D physics engines. You'll need to find an iOS-compatible, open-source 3D physics engine. I like ODE, and I've heard good things about Newton too, but I don't know whether they're compatible with iOS.
Adding collision shapes to your POD files would also be a manual process requiring (self-made) tools or a lot of sweat.
I have a couple of ideas for some 3D games/apps and need to know which 3D engine to go with. I like the high-level approach of Unity and UDK. I have done some 2D games using cocos2d before, but I'm entirely new to the 3D world, which is why I think Unity or UDK is a good choice. I know about the differences in licensing, and I am more concerned with the learning curve than with the licensing cost.
Plans:
A 3D "side scroller" that goes forwards (up) instead of to the side. Third person space ship. This would primarily be for learning. Tilt to steer, tap to jump. Very simple graphics, vertex coloring would be enough.
A 2.5D "side scroller" like the above one but with a car. This game would generate the levels randomly out of a couple prefab blocks of a certain length that fit together seamlessly.
A 3D augmented reality display for pilots with a terrain mesh loaded from DEM data. Accelerometer and GPS access required.
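For the random-level idea in the second plan, here is a minimal sketch of how that tends to look in Unity; the prefab array, block length, and spawn-ahead distance are all illustrative assumptions:

```csharp
using UnityEngine;

// Sketch: spawn equal-length, seamless prefab road blocks end-to-end ahead
// of the player so the level is generated on the fly.
public class EndlessTrackSpawner : MonoBehaviour
{
    public GameObject[] blockPrefabs; // seamless, equal-length segments
    public Transform player;
    public float blockLength = 20f;
    public float spawnAhead = 100f;

    private float nextZ; // world z where the next block starts

    void Update()
    {
        // Keep the track populated a fixed distance ahead of the player.
        while (nextZ < player.position.z + spawnAhead)
        {
            var prefab = blockPrefabs[Random.Range(0, blockPrefabs.Length)];
            Instantiate(prefab, new Vector3(0f, 0f, nextZ), Quaternion.identity);
            nextZ += blockLength;
        }
    }
}
```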
Other important points:
Must be able to tie in to In-App purchases.
The more community content like tutorials and forums the better.
The ability to add third-party libraries like Flurry Analytics is a big plus! But I guess this isn't possible?
Which engine would you recommend for these projects, and why? Preferably, I'd like to pick one and stick with it.
You’re going to have a way, way better time developing with Unity. UDK’s got a fantastic, incredibly capable engine, but its tools don’t have the ease-of-use of Unity’s, its developer documentation leaves a lot to be desired, and the community hasn’t been using it for long enough for there to be much help to be found there. Some quick Googling suggests you can write your own Objective-C plug-ins for Unity games, so in-app purchases and third-party libraries are definitely a possibility. I think Unity’s your best bet.
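To illustrate the plug-in route (not Unity's billing API itself, just the native-binding mechanism): on iOS, Unity scripts can call extern "C" entry points compiled into the app. _ShowPurchaseDialog below is a hypothetical function you would implement yourself in an Objective-C (.mm) plug-in file; only the DllImport("__Internal") binding is the standard mechanism:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Sketch of calling a native iOS plug-in from Unity. The native function
// is hypothetical; you supply its Objective-C implementation yourself.
public class IAPBridge : MonoBehaviour
{
#if UNITY_IPHONE && !UNITY_EDITOR
    [DllImport("__Internal")]
    private static extern void _ShowPurchaseDialog(string productId);
#endif

    public void Buy(string productId)
    {
#if UNITY_IPHONE && !UNITY_EDITOR
        _ShowPurchaseDialog(productId); // hand off to the native plug-in
#else
        Debug.Log("IAP stub (editor/non-iOS): " + productId);
#endif
    }
}
```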
What about CryEngine? It's free for non-commercial use and also provides Mono C#.
Check it out: CryEngine