Refresh rate of pose of Base Station in OpenVR - unity3d

I am able to get the position of the base stations, but it only updates once, while the controllers and HMD are constantly updated. Is there a way to force a refresh so that I can get the position of the base stations (tracking references) in real time? Thanks!

Technically there is, at least on the driver side, but in practice it does little. Tracking references are normal tracked objects after all, so drivers can update their poses like any other device with a vr::VRServerDriverHost()->TrackedDevicePoseUpdated() call, and on init through the pose returned by GetPose().
That should work, but it doesn't; what's more, most of the time custom tracking references don't even show up in SteamVR. What SteamVR does under the hood here, I have no idea.
Also, commercial headsets may update their tracking references' positions only once on startup, which seems to be what you are seeing.
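For completeness, on the Unity/application side you can at least poll whatever pose OpenVR currently reports for tracking references every frame. This is only a minimal sketch assuming the SteamVR Unity plugin's C# bindings (Valve.VR namespace); as noted above, whether the reported pose ever changes after startup is up to the driver.

```csharp
using UnityEngine;
using Valve.VR; // SteamVR plugin's OpenVR C# bindings

public class TrackingReferencePoller : MonoBehaviour
{
    private readonly TrackedDevicePose_t[] poses =
        new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];

    void Update()
    {
        var system = OpenVR.System;
        if (system == null)
            return;

        // Ask the runtime for the latest poses of every tracked device.
        system.GetDeviceToAbsoluteTrackingPose(
            ETrackingUniverseOrigin.TrackingUniverseStanding, 0f, poses);

        for (uint i = 0; i < OpenVR.k_unMaxTrackedDeviceCount; i++)
        {
            if (system.GetTrackedDeviceClass(i) != ETrackedDeviceClass.TrackingReference)
                continue;
            if (!poses[i].bPoseIsValid)
                continue;

            // Translation is the last column of the 3x4 row-major matrix;
            // flip Z to go from OpenVR's right-handed to Unity's left-handed space.
            HmdMatrix34_t m = poses[i].mDeviceToAbsoluteTracking;
            Vector3 position = new Vector3(m.m3, m.m7, -m.m11);
            Debug.Log($"Tracking reference {i}: {position}");
        }
    }
}
```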

Related

Troubleshooting in case of tracking loss

I created an object that plays an animation, based on the HelloAR example from ARCore. Then I covered the camera with my hand, causing a tracking loss.
When the space is scanned again, the object I created comes back, but the animation starts from the beginning.
When the space is recognized again after a tracking loss, sometimes the object comes back and sometimes it does not. Is there a way to distinguish between these cases?
And when the object does come back after tracking is recovered, why does the animation start all over again? Is ARCore deleting and recreating the object?
ARCore uses a technique called Visual-Inertial Odometry (VIO). It is a hybrid technique that combines computer vision and sensor fusion.
What VIO does is combine data extracted from feature points (corners, blobs, edges, etc.) with data acquired from the mobile device's IMU. Knowing the position of your device is crucial in ARCore, because the position of every trackable is estimated based on this information (triangulation using the device pose).
Another aspect is that ARCore builds a sparse map of the environment while you move around the room. The extracted feature points are stored in memory based on a confidence level and used later to localize the device.
Finally, what happens when tracking is lost is that you cannot extract feature points, for example because you are looking at a white wall. When you cannot extract feature points, you cannot localize the device, so the device does not know where it is in the sparse map I mentioned above. Sometimes you recover because you move back to places that were already scanned and kept in the sparse map.
Now for your questions:
If you anchor your objects, they will return, but there can be drift, because ARCore accumulates error during this process, especially if you move while device tracking is lost. So they probably do return, but they are no longer at the same physical position because of the drift.
As for the animation restarting: since those anchors can no longer be tracked, they are deactivated, and since your objects are children of the anchor, they are deactivated as well. That is why your animation restarts.
You can test both issues using Instant Preview and see what happens to the anchors when you lose tracking. Good luck!
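To illustrate the deactivation point above, here is a minimal, untested sketch of one way to resume the animation instead of restarting it: a component on the anchored object caches the Animator's state every frame and replays from that point when the anchor re-activates it after tracking recovers. The component and field names are placeholders.

```csharp
using UnityEngine;

// Attach to the animated object that is parented under the ARCore Anchor.
// While the anchor loses tracking the whole hierarchy is deactivated, but this
// component (and its fields) survives, so we can resume instead of restarting.
public class ResumeAnimationAfterTrackingLoss : MonoBehaviour
{
    private Animator animator;
    private int savedStateHash;
    private float savedNormalizedTime;
    private bool hasSavedState;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Continuously remember where the current animation is.
        AnimatorStateInfo info = animator.GetCurrentAnimatorStateInfo(0);
        savedStateHash = info.fullPathHash;
        savedNormalizedTime = info.normalizedTime % 1f; // keep it in [0,1) for looping clips
        hasSavedState = true;
    }

    void OnEnable()
    {
        // Called again when tracking recovers and the anchor re-activates the object.
        if (hasSavedState && animator != null)
            animator.Play(savedStateHash, 0, savedNormalizedTime);
    }
}
```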

Unity Playables graph locking animated properties with values at the last frame

I've been trying to use a mixture of Unity's Animators and Playables in my game, and for the most part it works well, but there are two issues that I've been having for a long time and have at best worked around. Today I bashed my head against them again, and after finding no solution online I decided to get my lazy ass to finally ask for help.
The basic setup is that my characters have:
An Animator with its controller, state machine, etc. that is used mostly for movement, jumping, climbing, and so on. In case this is relevant, each character has an override controller based on a generic one.
A very simple playable graph with just an output (wrapping the animator) and an input (wrapping the specific clip I want to play at the time). This is used for actions and attacks.
The problems I have are:
1- I can't seem to figure out an elegant, clean way to know when the clip fed to the graph (second part above) has finished. Currently I work around this by calculating how long the clip is and dividing by the current animation speed factor; I also have to account for when the animation is paused (e.g. hitstop). This gets the job done but is quite inelegant, and I'm sure there must be a better way (a rough sketch of this workaround is included at the end of the question).
2- Most importantly, when I'm done with the graph and the standalone animation, the values of all the properties the clip touches become locked at their last value. They stay locked even during animations played by the regular Animator; even if a later animation changes those values, they snap back to that locked "last frame" value when it ends.
I've tried several things to solve this:
2.1- Set the default / desired value of the properties in the idle / default animation (to "mark" them as animatable properties in the normal Animator's animations). This only fixes the issue for the animations I touch; any other animation played after that instantly reverts to the value locked by the last frame of the animation played by the graph.
2.2- Destroy the playable wrapping the animation (I do this anyway for cleanup since I need to recreate it each time a new animation plays).
2.3- Destroy the graph and recreate it each time (surprisingly, even this keeps the values locked).
2.4- Disabling the animator and enabling it again.
I'm frankly starting to lose my mind with the second problem, so any help would be exceedingly appreciated. Thanks in advance for any help!
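For reference, a rough sketch of the duration-based workaround described in point 1 above, with hypothetical speedFactor and isPaused fields standing in for the real ones:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the timer-based workaround: wait for the clip's length scaled by the
// current speed factor, and ignore time while the animation is paused (hitstop).
public class ClipEndTimer : MonoBehaviour
{
    public float speedFactor = 1f;  // hypothetical: current animation speed
    public bool isPaused;           // hypothetical: true during hitstop

    public IEnumerator WaitForClipEnd(AnimationClip clip, System.Action onFinished)
    {
        float remaining = clip.length;
        while (remaining > 0f)
        {
            if (!isPaused)
                remaining -= Time.deltaTime * speedFactor;
            yield return null;
        }
        onFinished?.Invoke();
    }
}
```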
Although this question is pretty old, I'm adding an answer (along with my related follow-up question) just in case more people end up here from a search engine.
Animations (both "legacy" and non-legacy) can fire off events at a given frame: just pick a point (a frame in the dope sheet, or a position on the graph for curves) and click "Add Event".
There are some differences in how you specify which object/script and function to call between legacy and non-legacy animations, but in both cases it's basically a callback, so you can know for sure when an animation started or finished (or hit any point in between).
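A minimal sketch of the non-legacy case: the event receiver just needs to be a MonoBehaviour on the same GameObject as the Animator, with a public method whose name matches the one entered in the Animation Event (the method name here is a placeholder):

```csharp
using UnityEngine;

// Receiver for an Animation Event placed on the clip's last frame.
// Must sit on the same GameObject as the Animator playing the clip,
// and the event's function name must match the method name exactly.
public class AnimationEventReceiver : MonoBehaviour
{
    // Called by the Animation Event; the name is a placeholder.
    public void OnActionAnimationFinished()
    {
        Debug.Log("Clip finished (animation event fired).");
        // Tear down or blend out the playable graph here, for example.
    }
}
```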
Instead of trying to change the values of those properties that are "locked by animations" from Update(), you seem to need to do it from LateUpdate().
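For example, a rough sketch of overriding a "locked" property from LateUpdate(), which runs after the Animator has written its values for the frame; the SpriteRenderer color is just a stand-in for whatever property your clip animates:

```csharp
using UnityEngine;

public class AnimatedPropertyOverride : MonoBehaviour
{
    // Hypothetical example: force a property the clip used to animate
    // back to a value of our choosing once we're done with the graph.
    public SpriteRenderer target;        // placeholder for whatever the clip was animating
    public Color desiredColor = Color.white;
    public bool overrideEnabled;

    void LateUpdate()
    {
        // Runs after the Animator has applied its values for this frame,
        // so whatever we set here is what actually gets rendered.
        if (overrideEnabled && target != null)
            target.color = desiredColor;
    }
}
```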
From my testing, using "legacy" animations (which also means using the Animation component instead of an Animator Controller) allows you to use Update(), at least once the animation is finished.
Also worth keeping in mind that an Animator Controller (component) doesn't accept "legacy" animations for any of its states.
And the Animation component doesn't seem to play (at least not auto-play) non-legacy animations.
As for my question, well, it's basically the same as the OP's: is it possible to somehow "unlock" these properties (obviously without any states/animations playing) while using the "newer" Animator Controller?
Although, based on things I've read while trying to figure out what's going on, those "legacy" animations are not really "legacy" and seem to be here to stay, for reasons like being better for performance.

How to have a reference frame for markerless inside out tracking in VR, to achieve absolute positional tracking and prevent drift

We have the new Vive Focus headset, which has markerless inside-out tracking. By default these headsets can only do relative tracking from their initial position. But this isn't totally accurate: you get some drift, and the position in the virtual world and the real world go out of sync. For the y position, this can mean ending up at the wrong height in the virtual world as well. Also, you don't know the user's absolute position, which you would need for a multiplayer game, for instance, so your players don't run into each other.
Now the ZED camera from Stereolabs has an option to use a reference frame (which I assume is a point map), which it will then use to do absolute positional tracking by calculating your position relative to the reference frame instead of to the last frame (which I assume is what normal markerless inside-out tracking does). Of course the ZED code is in a DLL, so my question is: how difficult is it to code this kind of reference-frame system for the Vive Focus or another markerless inside-out tracked headset? Preferably in C#, preferably using the Unity plugin, but any example would help.
And what I'm wondering about this reference frame system is would one reference frame be enough? The ZED documentation says you need to look at approximately the same scene as you were when you first made the reference frame. This makes sense, otherwise how would the system find its reference. But doesn't that mean you would need more references, for the other sides of your room as well? The ZED documentation also says that using the reference frame setting can cause jumps in VR, when syncing to the reference. Would this be a big problem? Because if it would jump all the time, that would only increase motion sickness, which is a big enough problem as it is in VR. And finally, would it require a lot of processing power to track using a reference frame? Because we're dealing with standalone headsets here powered by mobile processors, they have a hard enough time of it as it is.
Or would it be feasible to make something using markers and maybe Vuforia and do absolute positional tracking that way?
Thanks for any input!
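Not a full answer, but as a rough, SDK-agnostic sketch of the marker idea: when a marker whose real-world pose is known is detected (for example through Vuforia's callbacks), you can shift the tracked rig so the observed marker pose lines up with the known one. All names here are hypothetical, and in practice you would smooth the correction over several frames to reduce the visible jump mentioned in the question.

```csharp
using UnityEngine;

// Generic drift-correction sketch, independent of any specific SDK:
// when a marker with a known real-world pose is detected, move the whole
// tracked rig so the observed marker pose lines up with the known pose.
// "cameraRig" is the parent of the tracked camera; "OnMarkerObserved" would be
// called by whatever detection callback the SDK provides.
public class MarkerDriftCorrection : MonoBehaviour
{
    public Transform cameraRig;          // parent transform of the tracked HMD camera
    public Transform knownMarkerPose;    // where the marker really is in world space

    public void OnMarkerObserved(Vector3 observedPosition, Quaternion observedRotation)
    {
        // Yaw-only rotational error, to avoid tilting the horizon.
        float yawError = knownMarkerPose.eulerAngles.y - observedRotation.eulerAngles.y;
        cameraRig.RotateAround(observedPosition, Vector3.up, yawError);

        // Positional error between where tracking says the marker is and where it really is.
        Vector3 positionError = knownMarkerPose.position - observedPosition;
        cameraRig.position += positionError;
    }
}
```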

Anti-Cheat / Glitch in game

Currently I am working with a team on a Unity-based game. The game is still in development, in an alpha version.
Recently, we saw that the game was vulnerable to Cheat Engine, speed hacks, etc. Update after update, the cheats are now stabilized. We also introduced the ACT (Anti-Cheat Toolkit) for Unity. As the game is Unity-based, it is easy to implement ideas in the game.
Though the "hacks" are stabilized, the "glitches" are not.
This is an open-world survival game that involves picking up and dropping items. The glitch is that when two players pick up an item at the same time (currently you have to press E while the crosshair is over the item to pick it up), the item gets duplicated. We have been working for DAYS to fix it, but no luck.
We tried making it so that a player cannot pick up an item when another player is nearby, but that feels odd and we want the game to stay smooth. We also tried automatic item pickup. That's our plan so far, but are there any other ideas for what we can do?
If your concerns are players cheating by modifying memory values, as well as maintaining a synchronized game state to avoid problems like item duplication, you should look into setting up an authoritative server that will contain and update the "official" values and state of the game.
Basically, rather than storing values and performing actions directly on the player's computer, the game will send a request to the server of what it wants to do, and the server will perform the actions, update the official game state, and send the new state back to the player so their game is updated.
This will prevent memory editing because even if a player modifies a value on their screen (such as currency or health) the server contains the true value.
It will also prevent exploits like speedhacks, because rather than having the local game directly move the player when a key is pressed, the keypress will just send a movement request to the server, which will update the player's position, and send back the new position.
Finally, this will prevent item duplication, because when both players attempt to pick up the item, they will both send an item pickup request to the server. Whichever player's request arrives first will receive the item, then the server will update the game state so that the item is no longer on the ground, and the second player's request will be ignored, because the item they're trying to pick up no longer exists.
Simply put, the best way to prevent cheating is: Don't store important values or perform important actions on the player's computer.
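As a rough illustration of the item-duplication part, here is a minimal server-side sketch (networking layer intentionally omitted, all names hypothetical) where the server's item registry is the single source of truth and only the first pickup request can succeed:

```csharp
using System.Collections.Generic;

// Server-side sketch of authoritative item pickup. The transport is left out;
// HandlePickupRequest would be called by whatever networking layer you use
// (UNet, Mirror, Photon, ...) in the order requests arrive at the server.
public class ServerItemRegistry
{
    // The server's single source of truth for items still lying in the world.
    private readonly HashSet<int> itemsOnGround = new HashSet<int>();
    private readonly Dictionary<int, List<int>> inventories = new Dictionary<int, List<int>>();

    public void SpawnItem(int itemId) => itemsOnGround.Add(itemId);

    // First request wins; the second one fails because the item is already gone.
    public bool HandlePickupRequest(int playerId, int itemId)
    {
        if (!itemsOnGround.Remove(itemId))
            return false;

        if (!inventories.TryGetValue(playerId, out var inventory))
            inventories[playerId] = inventory = new List<int>();
        inventory.Add(itemId);

        // At this point the server would broadcast the new state (item despawned,
        // inventory updated) back to the clients.
        return true;
    }
}
```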

How to change player states?

I have a "design pattern" problem. I want to enable for a player to change his state. Lets say I have three states or super powers if you will. Each of them have different abilities. If this abilities were just based on some attributes (lets say mass or speed) I could just change that on the player and everything would work fine.
But what if there are some other functionalities changed. Lets say if the player is in the state 2 and he jumps the animation is different and some other thing changes. Now I know I could make this with a lot of checking in update loop for states but I want to make this elegant.
My idea until now is to make generalPlayer object and the each special player inherits from it and adds special abilities, and when player change state I would kind of change instance of player to that instance.
Is there any better way? I am using c# as scripting language
The problem I have with that approach is that you are using multiple different objects for one player. There could be some mess involved with passing data every time the player changes states which would be better avoided. Since C# has delegates, which, for our purposes, behave much like first class functions, it is possible to change the behavior of your player by changing out certain routines and field values on every change of state. This allows you to keep your data in one object and change behavior on the fly without relying solely on conditionals. There is a pithy phrase I have heard many times, that an object encapsulates state and behavior. In C#, you can change state by manipulating field values, and change behavior by relying on delegates. That should cover your problem.
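A minimal sketch of that delegate idea, with hypothetical behaviors, where changing state just reassigns the jump delegate and some field values rather than branching on the state everywhere:

```csharp
using System;
using UnityEngine;

// One Player object; per-state behavior is swapped by reassigning delegates.
public class Player : MonoBehaviour
{
    private Action jumpBehavior;
    private float speed;

    void Start() => EnterState1();

    public void EnterState1()
    {
        speed = 5f;
        jumpBehavior = () => Debug.Log("Normal jump animation and physics");
    }

    public void EnterState2()
    {
        speed = 8f;
        jumpBehavior = () => Debug.Log("State 2: different jump animation, maybe a double jump");
    }

    void Update()
    {
        // Data stays in this one object; only the behavior changes per state.
        transform.Translate(Vector3.right * Input.GetAxis("Horizontal") * speed * Time.deltaTime);
        if (Input.GetButtonDown("Jump"))
            jumpBehavior();   // no state checks needed here
    }
}
```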
I have found the best-fitting solution thanks to a friend. What I used was the Strategy pattern: I put different implementations behind the interface I use to control the player. It works like a charm. Thanks all for the help.
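For anyone finding this later, a minimal sketch of what such a Strategy-pattern setup might look like (all names illustrative, not the poster's actual code):

```csharp
using UnityEngine;

// Strategy pattern: the player holds a reference to an interface and swaps
// implementations when the state/power changes.
public interface IPlayerState
{
    void Move(Transform player, float input);
    void Jump(Transform player);
}

public class HeavyState : IPlayerState
{
    public void Move(Transform player, float input) =>
        player.Translate(Vector3.right * input * 3f * Time.deltaTime);
    public void Jump(Transform player) => Debug.Log("Short, heavy jump");
}

public class LightState : IPlayerState
{
    public void Move(Transform player, float input) =>
        player.Translate(Vector3.right * input * 8f * Time.deltaTime);
    public void Jump(Transform player) => Debug.Log("High, floaty jump with its own animation");
}

public class PlayerController : MonoBehaviour
{
    private IPlayerState state = new HeavyState();

    // Changing state means swapping the strategy instance, nothing else.
    public void SetState(IPlayerState newState) => state = newState;

    void Update()
    {
        state.Move(transform, Input.GetAxis("Horizontal"));
        if (Input.GetButtonDown("Jump"))
            state.Jump(transform);
    }
}
```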