Mouse cursor becomes locked during keyboard input - unity3d

I'm attempting my first shot at using Unity. This is my first foray into game development and 3D environments. I'm following the tutorial for the survival shooter located here: https://unity3d.com/learn/tutorials/projects/survival-shooter/player-character?playlist=17144
I went to try mouse movement, which should change the direction of the player object, but found that the player does not change direction.
I have even tried copying the tutorial author's code directly, using it in its entirety and overwriting my original script. Their code can be found in the link.
Adding Debug.Log("Raycast not hitting"); to an else block in the Turning function causes the debug message to fire during each FixedUpdate, regardless of whether the mouse has been positioned over the floor.

Since you directly copied all the tutorial scripts, I'm assuming there are no issues with the code itself. This means the issue is most likely located somewhere in your scene. The turning logic is gated by the return value of Physics.Raycast(camRay, out floorHit, camRayLength, floorMask), and the else statement you added confirms that this call is returning false. The most likely reason is that no object in your scene has its layer set to Floor: the raycast returns false if it does not find an object on the floorMask layer within a range of camRayLength units. So, find or create an object that covers the entirety of the floor and set its layer to Floor. I would also recommend reading the scripting API documentation for Physics.Raycast so that you can better understand what's going on in the code. Here's a link to Unity's documentation: https://docs.unity3d.com/ScriptReference/Physics.Raycast.html
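For reference, the turning logic in question looks roughly like the sketch below (names such as camRayLength and floorMask follow the tutorial's conventions, but treat this as an illustration rather than the tutorial's exact code). Note the Awake comment: if no layer named "Floor" exists, GetMask returns 0 and the raycast can never hit, which matches the symptom described.

```csharp
using UnityEngine;

public class PlayerTurning : MonoBehaviour
{
    int floorMask;                 // must match the layer assigned to your floor object
    float camRayLength = 100f;     // how far into the scene the ray is cast
    Rigidbody playerRigidbody;

    void Awake()
    {
        // GetMask returns 0 if no layer named "Floor" exists -- the raycast
        // would then never hit anything, regardless of mouse position.
        floorMask = LayerMask.GetMask("Floor");
        playerRigidbody = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        Ray camRay = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit floorHit;

        if (Physics.Raycast(camRay, out floorHit, camRayLength, floorMask))
        {
            Vector3 playerToMouse = floorHit.point - transform.position;
            playerToMouse.y = 0f;  // keep the rotation on the horizontal plane

            playerRigidbody.MoveRotation(Quaternion.LookRotation(playerToMouse));
        }
        else
        {
            Debug.Log("Raycast not hitting");
        }
    }
}
```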

Related

Unity C# collision detection

I'm making (or trying to make) a melee two-dimensional game in Unity. The enemy can attack the player no problem, but the other way around is very finicky and unreliable. Both the enemy and the player have a Rigidbody and a box collider.
This code is all on the enemy
code: https://pastebin.pl/view/a6d99e0d
Because I do not know the specifics of your project, I can't give a specific answer.
But I can describe a similar dilemma I encountered before. My input was not being registered in time in the game: when the player pressed the jump key, the protagonist was not always able to jump in time. When I changed the FixedUpdate in the code to Update, the problem was solved.
Of course, my approach has drawbacks, but I hope my experience is relevant to the asker and brings some tips. Good luck!
You're validating input and updating physics both in the OnTriggerEnter method. Physics updates much differently than the regular old Update() method does.
By the time OnTriggerEnter runs, the physics checks for that step are already completed (I guess in this case it could be either or).
By calling Input.GetButtonDown() in this method, it has to catch the exact frame the button was pressed down to return true.
The problem is, physics doesn't step on every rendered frame. If no physics step happens to run on the frame the button was pressed, that button press just gets completely ignored. Dang.
You have to come up with a better way to structure your code to process that input. Find a way that lets you check whether the player is attacking without polling input inside a physics callback. You could always try a state pattern.
See also MonoBehaviour.FixedUpdate().
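A common way to restructure this (a sketch, not the asker's actual code; the "Fire1" button and "Player" tag are assumptions) is to capture the press in Update(), where GetButtonDown is reliable, and let the physics callback consume a short-lived buffer instead of polling input directly:

```csharp
using UnityEngine;

public class MeleeHitDetector : MonoBehaviour
{
    // A timestamp buffer: the press stays valid for a short window, so a
    // physics step a frame or two later can still see it.
    float attackBufferedUntil;

    void Update()
    {
        // GetButtonDown must be polled every rendered frame to be reliable.
        if (Input.GetButtonDown("Fire1"))
            attackBufferedUntil = Time.time + 0.1f;
    }

    void OnTriggerStay(Collider other)
    {
        if (Time.time <= attackBufferedUntil && other.CompareTag("Player"))
        {
            attackBufferedUntil = 0f;  // consume the buffered press
            Debug.Log("Attack landed");
            // ... apply damage here ...
        }
    }
}
```

Using OnTriggerStay rather than OnTriggerEnter means the attack also registers when the button is pressed while the colliders are already overlapping, which is usually what a melee game wants.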

Interactable script allows "Hand follow transform", defaults to transform of the attached object, ignoring offset

We are building a 3D VR billiard, and we are struggling with the Cue-Stick.
We try to use the steamVR Interactable scripts as much as we can.
I attach the Cue stick to the hand with
hand.AttachObject(gameObject, GrabTypes.Grip, Hand.AttachmentFlags.VelocityMovement, Pivot.transform);
The offset (Pivot.transform) is important here because we want to attach to the object at the correct position (which is determined at runtime, depending on where the user puts their hand when grabbing).
Because the cue stick is moved using velocities and not via parenting, the grab position will not always stay on the stick (the stick has drag, collisions, etc.). This is quite immersion-breaking.
To compensate for this, the Interactable script has a boolean, Hand Follow Transform, which would be perfect for fixing this.
Sadly, the hand now jumps to the midpoint (origin) of the cue, even though we grabbed it somewhere completely different. It still behaves as it should, but visually it's very confusing.
SteamVR's Interactable supports the offset when attaching, but it forgets about that offset when positioning the hand.
Is there something I am missing?
Or is this just not a thing?
I read through the script code multiple times and it seems the offset is not available in the scope where the hand is placed, so there seems to be no way this can work as-is.
How can I solve this problem?
I tried falling back to just parenting to the hand, but this makes physics interactions between Cue and balls funky.
I also read through most of the Interactable and Hand scripts to understand what's going on; other than Valve's naming conventions hurting me a bit, I was not successful in finding a way.
I am quite overwhelmed by it though and I can see a good chance that I am missing something.
What probably is possible, though, is to set the hand attachment point to the correct place on the cue. But this transform lives in the RightHand (and LeftHand) prefab, and it seems like a really bad idea to change this transform every frame, or to parent it to the cue GameObject.
Is there a better way?
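One workaround worth sketching: record the grab pivot at attach time and re-apply it to the hand's visual model each frame while the cue is held. This is only a sketch against the SteamVR Interaction System as I understand it — OnAttachedToHand/OnDetachedFromHand are messages the Hand script sends to attached objects, and mainRenderModel is the hand's render model, but verify these names against your SteamVR version:

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

public class CueGrabOffset : MonoBehaviour
{
    public Transform pivot;   // set at runtime to the grab point on the cue
    Hand holdingHand;

    // Message sent by the Hand script when this object is attached.
    void OnAttachedToHand(Hand hand)
    {
        holdingHand = hand;
    }

    void OnDetachedFromHand(Hand hand)
    {
        holdingHand = null;
    }

    void LateUpdate()
    {
        // After all movement for the frame, snap the hand's visual model to
        // the recorded grab point instead of the cue's origin. Only the
        // render model moves, so tracking and physics are unaffected.
        if (holdingHand != null && pivot != null)
        {
            holdingHand.mainRenderModel.transform.position = pivot.position;
            holdingHand.mainRenderModel.transform.rotation = pivot.rotation;
        }
    }
}
```

Moving only the render model sidesteps the "really bad idea" of editing the hand prefab's attachment transform every frame, since the actual tracked hand transform is left alone.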

How to have a reference frame for markerless inside out tracking in VR, to achieve absolute positional tracking and prevent drift

We have the new Vive Focus headset, which has markerless inside-out tracking. By default these headsets can only do relative tracking from their initial position. But this isn't totally accurate: you get some drift, and the positions in the virtual world and the real world go out of sync. For the y position, this can mean ending up at the wrong height in the virtual world as well. Also, you don't know the user's absolute position, which you would need for a multiplayer game, for instance, so your players don't run into each other.
Now the ZED camera from Stereolabs has an option to use a reference frame (which I assume is a pointmap), which it will then use to do absolute positional tracking by calculating your position relative to the reference frame, instead of to the last frame (which I assume normal markerless inside out tracking does). Of course the ZED code is in a dll, so my question is, how difficult is it to code this system using a reference frame for the Vive Focus or another markerless inside out tracked headset. Preferably in C#, preferably using the Unity plugin, but any example would help.
And what I'm wondering about this reference frame system is would one reference frame be enough? The ZED documentation says you need to look at approximately the same scene as you were when you first made the reference frame. This makes sense, otherwise how would the system find its reference. But doesn't that mean you would need more references, for the other sides of your room as well? The ZED documentation also says that using the reference frame setting can cause jumps in VR, when syncing to the reference. Would this be a big problem? Because if it would jump all the time, that would only increase motion sickness, which is a big enough problem as it is in VR. And finally, would it require a lot of processing power to track using a reference frame? Because we're dealing with standalone headsets here powered by mobile processors, they have a hard enough time of it as it is.
Or would it be feasible to make something using markers and maybe Vuforia and do absolute positional tracking that way?
Thanks for any input!

Switch turns between the Player and AI in Unity

I've recently started using Unity for a resource management stealth game. The stealth part is turn based, similar to Hitman Go. I have a simple character controller and a simple patrolling AI over a specific path. However, these movements work in real time and I want to change that to turn based. The AI should wait for the player to finish his/her move and then move itself. The same goes for the player.
Both the player and AI should be able to move to their adjacent waypoints only when the movement of the other part is complete.
How should I go about that?
Thank you
The language that I'm writing in is UnityScript.
As a very simple solution, you can first create an empty GameObject and name it TurnController. With a simple script you can add a boolean variable to it; let's name it isPlayerTurn. The player's movement code checks this flag: if it is true, the player can move. At the end of their move (maybe on clicking an end-turn button, or when they reach the maximum move distance, or something else), you set isPlayerTurn to false. Of course the AI should check the flag too (maybe in its Update function, but that can change with your design): when it is false, the AI can do what it needs to do, and at the end of its turn it should change isPlayerTurn back to true. I know it is a very simple solution, but I hope it helps as a beginning. And I hope I didn't misunderstand your question.
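The controller described above could look like this minimal sketch (written in C#, though the same pattern translates directly to UnityScript; the method names are just illustrations):

```csharp
using UnityEngine;

// Attach to an empty "TurnController" GameObject.
public class TurnController : MonoBehaviour
{
    public bool isPlayerTurn = true;

    // Called by the player's movement script when their move finishes.
    public void EndPlayerTurn()
    {
        isPlayerTurn = false;
    }

    // Called by the AI when it reaches its next waypoint.
    public void EndAITurn()
    {
        isPlayerTurn = true;
    }
}
```

The AI's Update would then be gated on `if (!turnController.isPlayerTurn)`, move one waypoint, and call EndAITurn(), while the player's input handling is gated on the flag being true.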
Write the AI as a player instance and have it emulate player input.
(Instead, you could also implement a common interface on both classes.)
Spawn a game object with a GameManager behaviour script that stores a reference to the current player (or AI). Then have the GameManager update the current player every frame by checking their input. If the (human) player gives input while it is not their turn, that input will simply be ignored.
This way, the player and AI do not have to know whose turn it is.
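The common-interface variant could be sketched like this (names such as ITurnTaker and TakeTurn are illustrative, not anything Unity provides):

```csharp
using UnityEngine;

// Implemented by both the player controller and the AI controller.
public interface ITurnTaker
{
    // Advance this actor; return true once its move for the turn is done.
    bool TakeTurn();
}

public class GameManager : MonoBehaviour
{
    // Player and AI scripts, each a MonoBehaviour implementing ITurnTaker.
    public MonoBehaviour[] actors;
    int current;

    void Update()
    {
        // Only the current actor is driven; everyone else's input is ignored
        // simply because their TakeTurn is never called.
        var actor = (ITurnTaker)actors[current];
        if (actor.TakeTurn())
            current = (current + 1) % actors.Length;
    }
}
```

Because the GameManager only ever drives one actor, neither the player nor the AI needs any turn logic of its own.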

Unity: Third Person Collision with Animated Platforms

First time posting on Stack, and everything looks promising so far! I have a bit of a complicated question, so I'll do my best to provide exact details of what I'd like to accomplish.
I'm working with a third-person controller in Unity, and so far everything is going great. I've dabbled with basic up-and-down platforms; a little glitchy, but things work. Any time my player runs through a mesh, I make sure the mesh collider is working and a Rigidbody is attached, set to kinematic.
Here's the kicker: in my game I have turning gears which the player can jump on. This works, except the player doesn't turn with the gear, which my gameplay depends on. What would be the process for getting my character to interact with this animated mesh? I imagine it needs some sort of script which my nooby mind cannot fathom at this point in my Unity career. If anyone out there knows the solution, I would love any assistance; either way, I'll keep plugging away at it. Thanks again!
This is assuming that you're using the packages that ship with Unity3D, which it sounds like you are. After importing the Character Controllers package, you'll have a bunch of scripts in the Standard Assets\Character Controllers\Sources\Scripts folder, in the project hierarchy view. There's a script in there called CharacterMotor.js, attach that to the same GameObject you're running ThirdPersonController on.
Essentially this script adds more interactivity between the character and the scene. There are several methods inside this script that automatically move the character when it's in contact with a moving object (as long as that object has a collision mesh), basically by inheriting the object's velocity.
If your gear/cog wheel has a proper collision mesh set up, adding this script to your character should be all that you require.
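If the standard-assets route doesn't carry rotation the way you want (or you aren't using those packages), a common alternative is to parent the character to the gear while they stand on it, so the gear's rotation carries them along. A sketch, assuming a trigger collider on the gear's standing surface and a "Player" tag — adjust the detection to your controller setup:

```csharp
using UnityEngine;

// Attach to the rotating gear.
public class RotatingPlatform : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            other.transform.SetParent(transform);  // inherit the gear's motion
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            other.transform.SetParent(null);       // release when jumping off
    }
}
```

One caveat with this approach: parenting a character to a non-uniformly scaled object will distort it, so keep the gear's scale at (1, 1, 1) or parent to an unscaled child transform instead.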