UE4 Navmesh isn't the same in viewport and in game - unreal-engine4

So I am building a navmesh in a scene which loads multiple levels. Everything works, except that when the navmesh is rebuilt on startup it isn't the same as the one I see in the viewport and is noticeably less accurate for some reason (reloading matters because multiple levels can be streamed, but for testing purposes I stream the same one that I see in the viewport). Below are two links showing the viewport version and the in-game one when I simulate the game. The settings are the same in the project settings and on the actual RecastNavMesh, and I can't find anything on the net referring to my problem.
[1] In viewport : https://i.stack.imgur.com/FWVRE.jpg
[2] In Game : https://i.stack.imgur.com/q0RdO.jpg

Related

HoloLens/Unity shared experience: How to track a user's "world" position instead of Unity's position?

I have an AR game I'm developing for the HoloLens that involves rendering holograms according to each user's relative position. It's a multiplayer shared experience where everyone in the same physical room connects to the same instance (shared Unity scene) hosted via cloud or LAN, and the players who have joined can see holograms rendered at the other players' positions.
For example: Player A and Player B join an instance; they're in the same room together. Player A can see a hologram above Player B tracking Player B's position (a Sims cursor, if you will). Then, once Player A gets closer to Player B, a couple more holographic panels open up displaying Player B's stats. These panels also track Player B's position and are always rendered with a slight offset relative to Player B's headset position. Player B sees the same on Player A, and vice versa.
That's fundamentally what my AR game does for the time being.
Problem:
The problem I'm trying to solve is tracking each user's position absolutely, relative to the room itself, instead of using the coordinates Unity reports for Player A's and Player B's game objects.
My app works beautifully if I mark a physical position on the floor and a facing direction that all the players must assume when starting the Unity app. This forces the coordinate system in all the players' Unity apps to have a matching origin point and initial heading in the real world. Only then am I able to render holograms relative to a user's position and have it correlate 1:1 between Unity space and the real physical space around the headset.
But what if I want Player A to start the app on one side of the room and have Player B start the app on the other side? When I do this, the origin point of Player A's Unity world is at a different physical spot than Player B's. This results in holograms for A's or B's position rendering at a tremendous offset.
I have some screenshots showing what I mean.
In this one, I have 3 HoloLenses. The two on the floor, plus the one I'm wearing to take screenshots.
There's a blue X on the floor (it's the sheet of paper; I realized you can't see it in the image) where I started my Unity app on all three HoloLenses. So the origin of the Unity world for all three is that specific physical location. As you can see, the blue cursor showing connected players tracks the headsets' locations beautifully. You can even see the headsets' locations relative to the screenshooter on the minimap.
The gimmick here to make the hologram tracking be accurate is that all three started in the same spot.
Now in this one, I introduced a red X. I restarted the Unity app on one of the headsets and used the red X as its starting spot. As you can see in this screenshot, the tracking is still precise, but it comes with a tremendous offset, because my relative origin point in Unity (the blue X) is different from that headset's relative origin point (the red X).
Problem:
So this is the problem I'm trying to solve. I don't want all my users to have to initialize the app in the same physical spot, one after the other, just to make the holograms appear at each user's correct position. The HoloLens does a scan of the whole room, right?
Is there not a way to synchronize these maps across all the connected HoloLenses so they can share what their absolute coordinates are? Then I could use those as a transform point in the Unity scene instead of having to track multiplayer game objects.
Here's a map from the headset I used to take the screenshots, from the same angle.
This is tricky with inside-out tracking, as everything is relative to the observer (as you've discovered). What you need is to identify a common, unique real-world location that your system then treats as the 'common origin'. Either a QR code or a unique object that the system can detect and localise should suffice; then keep track of each user's (and other tracked objects') offset from that known origin within the virtual world.
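As a rough illustration of that idea, here is a minimal sketch, assuming each device has already detected the shared marker and stored its pose as a hypothetical sharedOrigin Transform in its own Unity world space:

using UnityEngine;

// Sketch: exchange poses relative to a shared physical marker (e.g. a QR code)
// instead of each device's arbitrary world origin. "sharedOrigin" is a
// placeholder Transform placed on the detected marker on this device.
public class SharedSpaceConverter : MonoBehaviour
{
    public Transform sharedOrigin;

    // Local player's world position expressed relative to the shared marker;
    // this is what you would send over the network.
    public Vector3 ToSharedSpace(Vector3 worldPosition)
    {
        return sharedOrigin.InverseTransformPoint(worldPosition);
    }

    // Position received from another player (already in shared space),
    // converted back into this device's world space for rendering.
    public Vector3 ToLocalWorldSpace(Vector3 sharedPosition)
    {
        return sharedOrigin.TransformPoint(sharedPosition);
    }
}

Rotations can be handled the same way, e.g. Quaternion.Inverse(sharedOrigin.rotation) * worldRotation when sending, and the inverse when receiving.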
My answer was deleted because reasons, so round #2. Something about link-only answers.
So, here's the link again.
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-sharing-05
And to avoid the last situation, I'll add that whoever wants a synchronized multiplayer experience with HoloLens should read through the whole tutorial series. I am not going to summarize how to do this, since that would just mean copying and pasting the docs. Just know that you need a spatial anchor that the other devices load into their scene.

How much effort does it take to replace a VR headset with three monitors in an existing program?

I have a Unity project. It was developed for VR headset training usage. However, users report strong dizziness after playing the game. Now I want to use 3 monitors instead of the VR headset, so users can look at the 3 monitors to drive. Is it a big effort to change the code to achieve this? What do I need to change in the software so that it can run on monitors?
Actually it is quite simple:
See the Unity Manual page on Multi-Display.
In your scene, have 3 Camera objects and set each one's Camera.targetDisplay via the Inspector (the Target Display dropdown counts from 1).
To make them follow the vehicle correctly, simply make them children of the vehicle object; then they are always rotated and moved along with it. Now position and rotate them according to your needs, relative to the vehicle.
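For reference, here is a minimal sketch of that camera setup done in code; the vehicle reference, camera fields and example angles are placeholders, and doing the same via the Inspector works just as well:

using UnityEngine;

// Sketch: parent three cameras to the vehicle and route each one to its own display.
public class TripleMonitorRig : MonoBehaviour
{
    public Transform vehicle;                     // assign the vehicle in the Inspector
    public Camera leftCam, centerCam, rightCam;   // the three view cameras

    private void Awake()
    {
        // Camera.targetDisplay is 0-based: 0 = Display 1, 1 = Display 2, 2 = Display 3
        centerCam.targetDisplay = 0;
        leftCam.targetDisplay = 1;
        rightCam.targetDisplay = 2;

        // Parent the cameras so they move and rotate with the vehicle,
        // then angle the side cameras relative to it as needed.
        foreach (var cam in new[] { leftCam, centerCam, rightCam })
        {
            cam.transform.SetParent(vehicle, worldPositionStays: false);
        }
        leftCam.transform.localRotation = Quaternion.Euler(0f, -60f, 0f);   // example angle
        rightCam.transform.localRotation = Quaternion.Euler(0f, 60f, 0f);   // example angle
    }
}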
In PlayerSettings → XR Settings (at the bottom), disable Virtual Reality Supported, since you do not want a VR HMD to move the cameras; they should be controlled only by the vehicle transform.
Then you also have to activate the corresponding displays (0-indexed, where index 0 is the default monitor, which is always active), in your case e.g.:
private void Start()
{
    // Display.displays[0] is the primary monitor and is always active,
    // so only the additional displays need to be activated explicitly.
    if (Display.displays.Length > 1) Display.displays[1].Activate();
    if (Display.displays.Length > 2) Display.displays[2].Activate();
}
I don't know exactly how the "second" or "third" connected monitor is determined, but I assume it matches the monitor numbering in the system display settings.

CSS3D StereoEffect creating dual non-synced webpages

This project is a combination of a few things I've found. First, embedding webpages in Three.js:
http://adndevblog.typepad.com/cloud_and_mobile/2015/07/embedding-webpages-in-a-3d-threejs-scene.html
and the second is the custom CSS 3D Renderer I found on stackoverflow:
Three.js StereoEffect cannot be applied to CSS3DRenderer
The effect is almost exactly what I wanted, except that instead of simply re-drawing the output from one side to the other, it's loading two separate instances, which sort of defeats the point of going VR...
Any ideas? Here's the file:
https://drive.google.com/open?id=1UmXmdgyhZkbeuZlCrXFUXx-yYEKtLzFP
My goal was to render the Ace cloud editor in a VR environment using the stereo effect algorithm (it seems like it could be a fun new way to write code if you had a wireless keyboard/trackpad with a VR headset; you would lock the camera in one location, of course, but still need the mirrored view for the lenses)...
https://www.ebay.com/itm/Wireless-Bluetooth-Keyboard-with-Touchpad-for-iOS-Android-Smart-phone-Tablet-PC/112515393899?_trkparms=aid%3D222007%26algo%3DSIM.MBE%26ao%3D1%26asc%3D20161006002618%26meid%3D50ca4e61c27345df85a8461fb1a0e6d5%26pid%3D100694%26rk%3D7%26rkt%3D30%26sd%3D222631353709&_trksid=p2385738.c100694.m4598
https://www.walmart.com/ip/ONN-Virtual-Reality-Headset-White/187088616?wmlspartner=wlpa&selectedSellerId=0

Unity - Canvas Size changes from one computer to the other

I am porting a video game from Xamarin to Unity.
The game uses, amongst other things, Unity UI functionality (hence a canvas).
I did some work on one computer, adapting and placing the UI elements I needed on the canvas, then saved and checked my work into Subversion.
I then checked out the code on another machine and reopened the project, only to find that the canvas size (and hence the layout of all the UI elements) was quite different and all over the place!
Why is that? Did I forget to check some important file (for example, metadata) into source control?
Thanks,
Régis
This is because the canvas height and width depend on the resolution of the main monitor of the machine running the game/editor.
You'll want to look into using anchors and layout components to make the canvas responsive.
Here is a Unity how-to article on building a responsive UI: https://docs.unity3d.com/Manual/HOWTO-UIMultiResolution.html
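In practice much of this comes down to the Canvas Scaler component on the Canvas. Here is a minimal sketch of a resolution-independent setup, where the reference resolution is just an example value you would replace with your own design resolution:

using UnityEngine;
using UnityEngine.UI;

// Sketch: scale the Canvas with the screen instead of using a constant pixel size,
// so the layout stays consistent across machines with different monitor resolutions.
[RequireComponent(typeof(CanvasScaler))]
public class ResponsiveCanvasSetup : MonoBehaviour
{
    private void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1920f, 1080f);  // example design resolution
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.matchWidthOrHeight = 0.5f;  // blend evenly between matching width and height
    }
}

The same settings can of course be made directly on the Canvas Scaler in the Inspector; the important part is using Scale With Screen Size together with sensible anchors.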

Are there optimizations in the render pipeline for virtual reality?

I'm trying to get my head around the render pipeline on a head-mounted display.
Given that we have a target refresh rate of 90 Hz per screen:
Are there efficiencies built into the pipeline that take advantage of the smaller delta from one frame to the next in VR to reduce compute load?
I'm wondering about the fact that, for the same on-screen movement, fewer pixels change from frame A to frame B at 90 fps than from frame A to frame B at 45 fps.
I.e., is the per-frame workload of going from one frame to the next reduced at all by these extra frames?
http://imgur.com/6rRiWGM
AFAIK all frames on VR HMDs are rendered from scratch, as in other 3D applications. If there were a good method to magically interpolate the rendering, why would it only be used for VR?
There is, however, another trick called timewarp. With a proper asynchronous timewarp implementation, if you don't provide a new frame in time, the last one is rotated by the delta of your headset rotation.
So when you look around, head movement still looks as if your app were running at a high frame rate.
If you are not moving and there is nothing "stuck" to the camera, like a GUI, this is a very good illusion.
Currently timewarp works well on Gear VR and Vive, and possibly on the production-ready Oculus (but not on the DK2 with the 0.8 drivers; I still haven't got my new headset).
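To make the "rotate the last frame by the headset delta" idea concrete, here is a rough sketch of just the pose math (made-up names, not an actual compositor implementation):

using UnityEngine;

// Sketch of the core of rotational timewarp: the rotation that takes the head pose
// sampled at render time to the head pose sampled just before scan-out.
// A real compositor applies this delta as a reprojection of the already-rendered image.
public static class TimewarpSketch
{
    public static Quaternion DeltaRotation(Quaternion poseAtRenderTime, Quaternion poseAtDisplayTime)
    {
        // delta such that: delta * poseAtRenderTime == poseAtDisplayTime
        return poseAtDisplayTime * Quaternion.Inverse(poseAtRenderTime);
    }
}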