How can I only allow entering my virtual scene from a portal? - swift

I have an application which renders an augmented reality scene along with a portal through which you can walk into that scene. The scene is occluded from view by a plane, but if you walk through that plane, you "bust" into the virtual environment.
I'm not looking for code but rather for help on how to approach this problem. I want to make it so that the only way you can enter the virtual scene is by walking through the doorway that I've created. I first thought about tracking the location of the camera and making sure you're very close to the entrance before you cross the threshold, and only then enabling rendering; but it seems like, if I do it this way, the user would not be able to see through the doorway before approaching/entering the virtual scene.
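For reference, here is roughly what I mean by that proximity check (a minimal sketch only; sceneView, portalNode, and interiorNode are placeholder names for my ARSCNView, the doorway node, and the hidden interior):

import ARKit
import SceneKit

final class PortalGatekeeper: NSObject, SCNSceneRendererDelegate {
    // Placeholder references; wire these up to the real view and nodes.
    weak var sceneView: ARSCNView?
    var portalNode = SCNNode()      // node placed at the doorway
    var interiorNode = SCNNode()    // parent of the virtual environment behind the doorway

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let frame = sceneView?.session.currentFrame else { return }

        // Camera position in world space (translation column of the camera transform).
        let cam = frame.camera.transform.columns.3
        let doorway = portalNode.worldPosition

        // Horizontal distance from the camera to the doorway.
        let dx = cam.x - doorway.x
        let dz = cam.z - doorway.z
        let distance = (dx * dx + dz * dz).squareRoot()

        // Reveal the interior only while the user is near the doorway (threshold in metres).
        interiorNode.isHidden = distance > 1.0
    }
}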

First, look at the Stack Overflow post "How to create a Portal effect in ARKit just using the SceneKit editor?" to see how to make the portal itself.
The robust way to prevent users from passing through virtual walls is to give the virtual walls the same layout as the real ones (wherever a physical wall is, a virtual wall exists too).
You also need object-detection tools. For precise positioning of your virtual walls over the real physical walls, you can use the Core ML framework with a small pre-trained .mlmodel, together with ARKit session configurations such as ARImageTrackingConfiguration or ARWorldTrackingConfiguration.
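As a rough sketch of the positioning part (not the full Core ML pipeline; WallMapper and the debug material are placeholders), ARKit's own vertical plane detection can already give you candidate positions for virtual walls over real walls:

import ARKit
import SceneKit
import UIKit

final class WallMapper: NSObject, ARSCNViewDelegate {
    func startSession(on sceneView: ARSCNView) {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.vertical]      // detect real walls
        sceneView.delegate = self
        sceneView.session.run(config)
    }

    // Called when ARKit adds an anchor for a newly detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // A virtual wall sized to the detected real wall.
        let wall = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                            height: CGFloat(planeAnchor.extent.z))
        wall.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3) // debug tint, swap for your wall material
        let wallNode = SCNNode(geometry: wall)
        wallNode.eulerAngles.x = -Float.pi / 2   // lay the SCNPlane into the anchor's x-z plane
        node.addChildNode(wallNode)
    }
}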
If you have no opportunity to build virtual walls in the same configuration as the real walls, you can make the user's iPhone vibrate when they collide with a virtual wall. Here's the code:
import AudioToolbox.AudioServices

// Either call triggers the standard vibration on devices that support it;
// you only need one of the two.
AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)
// AudioServicesPlayAlertSound(kSystemSoundID_Vibrate)
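The snippet above only plays the vibration; you still need something to detect the collision. Here is a minimal sketch using SceneKit's contact delegate. It assumes you give the camera a kinematic physics-body proxy node and the walls static bodies, set their contactTestBitMask values so contacts are reported, and assign this handler as scene.physicsWorld.contactDelegate (all names are placeholders):

import SceneKit
import AudioToolbox.AudioServices

// Placeholder category bit masks for the camera proxy node and the virtual walls.
let cameraCategory = 1 << 0
let wallCategory   = 1 << 1

final class WallContactHandler: NSObject, SCNPhysicsContactDelegate {
    // SceneKit calls this when two physics bodies begin to touch.
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        guard let bodyA = contact.nodeA.physicsBody,
              let bodyB = contact.nodeB.physicsBody else { return }

        let masks = bodyA.categoryBitMask | bodyB.categoryBitMask
        if masks == (cameraCategory | wallCategory) {
            // The camera proxy hit a virtual wall; vibrate.
            AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)
        }
    }
}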
Hope this helps.

There are a few methods I can think of off the top of my head.
Make it so that when a person walks through a wall, the whole screen goes blank except for a message telling them that they need to back out of the wall, plus maybe an arrow showing which direction to move (a rough sketch of this check follows the list below).
Make it so that bumping into a wall shifts the entire scene.
Do a combo of the two and ask them if they’d like to shift the scene when they run far into a wall.
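For the first option, one simple way to decide that the user has pushed through a wall is a signed-distance test against the wall plane. A minimal sketch, assuming you already have the camera's world position, a point on the wall, the wall's normal (pointing toward the allowed side), and a full-screen warningOverlay view holding the message and arrow (all names are hypothetical):

import simd
import UIKit

func updateWallWarning(cameraPosition: simd_float3,
                       wallPoint: simd_float3,
                       wallNormal: simd_float3,
                       warningOverlay: UIView) {
    // Signed distance of the camera from the wall plane.
    let signedDistance = simd_dot(cameraPosition - wallPoint, simd_normalize(wallNormal))

    // A negative distance means the camera is on the wrong side of the wall:
    // show the full-screen "please step back" overlay, hide it otherwise.
    warningOverlay.isHidden = signedDistance >= 0
}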

Related

Unity2D: Mirror Multiplayer - How to view an opponent's screen in a match 3 game

I'm making my own match 3 multiplayer game; the concept is to have two people face off against each other by swapping tiles to make lines of matching pieces. I want to introduce multiplayer by connecting two players together and allowing each person to see their opponent's screen, as well as syncing their moves. So far, I have a simple match 3 game (I created one using different tutorials, mainly this playlist) and followed a simple multiplayer tutorial (Mirror) so a player can host or join as a client. My problem is that I have no idea how to show each player their opponent's screen. I even found an example of what I want the multiplayer mode in my game to be like. Can anyone point me in the right direction, please and thank you.
Additional information:
I'm using mirror for multiplayer
I created a network manager gameobject and added the necessary components to it. I also added the game pieces into the 'registered spawnable prefabs' and created an empty gameobject, called player for the player prefab.
Each game piece has a network transform and network identity component attached.
The player prefab object has a camera child under it too.
This is what I want my game to look like:
Overall, I want to have players view each other's screens:
As you can see, both players are connected; what I want to do is allow each player to see their opponent's screen. Does anyone have an idea on how I can do it?
Thank you! :)

HoloLens/Unity shared experience: How to track a user's "world" position instead of Unity's position?

I have here an AR game I'm developing for the HoloLens that involves rendering holograms according to the users' relative positions. It's a multiplayer shared experience where everyone in the same physical room connects to the same instance (shared Unity scene) hosted via cloud or LAN, and the players who have joined can see holograms rendering at the other players' positions.
For example: Player A, and B join an instance, they're in the same room together. Player A can see a hologram above Player B tracking Player B's position (A Sims cursor if you will). Then once Player A gets closer to Player B, a couple more holographic panels can open up displaying the stats of Player B. These panels are also tracking Player B's position and are always rendered with a slight offset relative to Player B's headset position. Player B also sees the same on Player A and vice versa.
That's fundamentally what my AR game does for the time being.
Problem:
The problem I'm trying to solve is tracking the user's position relative to the room itself, instead of relying on the coordinates Unity reports for Player A's and Player B's game objects.
My app works beautifully if I mark a physical position on the floor and a facing direction that all the players must assume when starting the Unity app. This then forces the coordinate system in all the player's Unity app to have a matching origin point and initial heading in the real world. Only then am I able to render holograms relative to a User's position and have it correlate 1:1 between the Unity space and real physical space around the headset.
But what if I want Player A to start the app on one side of the room and have Player B start the app on the other side of the room? When I do this, the origin point of Player A's Unity world is at a different physical spot than Player B's. This results in holograms rendering at A's or B's position with a tremendous offset.
I have some screenshots showing what I mean.
In this one, I have 3 HoloLenses. The two on the floor, plus the one I'm wearing to take screenshots.
There's a blue X on the floor (it's the sheet of paper; I realized you can't see it in the image) where I started my Unity app on all three HoloLenses. So the origin of the Unity world for all three is that specific physical location. As you can see, the blue cursor showing connected players tracks the headset's location beautifully. You can even see the headsets' locations relative to the headset taking the screenshot on the minimap.
The gimmick here to make the hologram tracking be accurate is that all three started in the same spot.
Now in this one, I introduced a red X. I restarted the Unity app on one of the headsets and used the red X as its starting spot. As you can see in this screenshot, the tracking is still precise, but it comes with a tremendous offset, because my relative origin point in Unity (the blue X) is different from the other headset's relative origin point (the red X).
Problem:
So this here is the problem I'm trying to solve. I don't want all my users to have to initialize the app in the same physical spot, one after the other, to make the holograms appear in the correct positions. The HoloLens does a scan of the whole room, right?
Is there not a way to synchronize these maps across all the connected HoloLenses so they can share what their absolute coordinates are? Then I could use those as a transform point in the Unity scene instead of having to track multiplayer game objects.
Here's a map on my headset that I used to get the screenshots, from the same angle.
This is tricky with inside-out tracking as everything is relative to the observer (as you've discovered). What you need is to be able to identify a common, unique real-location that your system will then treat as 'common origin'. Either a QR code or unique object that the system can detect and localise should suffice, then keep track of your user's (and other tracked objects) offset from that known origin within the virtual world.
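The underlying math is just a change of basis around that common origin. Here is a small illustration in Swift's simd, purely to show the transform algebra; in Unity you would do the same with Matrix4x4 or let the sharing/anchor SDK handle it:

import simd

// anchorTransform: the shared marker's pose in *this* device's world coordinates.
// worldPose: a tracked object's pose in the same device's world coordinates.
// The result is the pose relative to the shared anchor, which is what you can
// safely send to other devices.
func poseRelativeToSharedAnchor(anchorTransform: simd_float4x4,
                                worldPose: simd_float4x4) -> simd_float4x4 {
    return simd_mul(anchorTransform.inverse, worldPose)
}

// On a receiving device, turn a shared pose back into that device's own world
// coordinates using its own measurement of the same anchor.
func worldPose(fromSharedPose sharedPose: simd_float4x4,
               localAnchorTransform: simd_float4x4) -> simd_float4x4 {
    return simd_mul(localAnchorTransform, sharedPose)
}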
My answer was deleted because reasons, so round #2. Something about link-only answers.
So, here's the link again.
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-sharing-05
And to avoid the last situation, I'm going to add that whoever wants a synchronized multiplayer experience with HoloLens should read through the whole tutorial series. I am not providing a summary of how to do this, as that would just be copying and pasting the docs. Just know that you need a spatial anchor that the others load into their scene.

How do I track the Unity position of physical objects the player is interacting with using Hololens2 hand tracking data?

Basically I am working on a mixed reality experience using the Hololens2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to HTC Vive / Oculus Rift ).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes; assume for this project that we are not allowed to use QR codes and that we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation for finding it quickly with the HL2 device. I have seen the QR approach in multiple venues for VR LBE (location-based entertainment) experiences like the one being described here; the QR code just sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you can possibly pair the device, and if the device has location information, it could transmit where it is. Based on what I am seeing from all of the above, this would be a custom solution, highly dependent on the controller's abilities, if QR codes are out of the equation. I have witnessed some controller solutions start the user experience by having the user do something like touch the floor to get an initial reference point, or alternatively always pick up the gun from a specific location in the real world, like some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Are you allowed to attach multiple QR codes to the controller? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, you will need an Azure service or some third-party library; for more information, please see the Computer Vision documentation.

UE4 - Changing ADS Camera when using a different weapon

I'm very new to Unreal Engine 4 and have been following an FPS guide online!
I currently have an AK and an M4 in the game and can switch between the two using 1 / 2 on the keypad. I had to set up the first aim-down-sights (ADS) camera for the AK and it works well! However, if I equip the M4 and aim down sights, the camera is no longer in the correct spot and doesn't line up at all with the iron sights. So I added another camera called M4A1 ADS Camera, but I can't figure out how to switch to that camera when aiming down sights, then go back to the AK camera when using that weapon.
Is there a better way of doing this or any tutorials / tips to help with the process for the future?
To answer your question directly, I'd say you could add a switch case or make branches to check which weapon is equipped at the time.
But I'd say a better way to do this would be to add a camera to your weapon blueprint; then you could access the camera from the weapon directly (assuming you have a master weapon class). This way you would configure one ADS camera per weapon and align it properly in its own blueprint.
You can use the "Set View Target with Blend" function to change your cameras; it is very good for controlling the blend speed and other blending options.
I know this is old but even cleaner than Deimos's suggestion would be to have an ADS camera component on your character and attach it to a socket you create on each of your weapons. You can adjust the socket position and rotation on each weapon's skeleton and then all you do from the character side is attach the camera to the weapon any time you equip one.

Should I change draw order?

I'm creating a mobile racing game (in space), and it's first person, so there is always a big cockpit occluding a large part of the scene. Can I somehow use the fact that I know this to optimize rendering? I have heard that draw call order can be changed, but I don't know exactly how that would work.
The thing you are looking for is called "Occlusion Culling". Here is a guide from the Unity manual explaining how it works and how to set it up.
NOTE: This only culls static objects; if your cockpit moves with the player, objects covered by the cockpit will not be culled by this method. If you want to do occlusion culling with dynamic objects, you need a 3rd-party asset from the store like InstantOC, which even has a "Mobile Aircraft Controls" prefab. (Note: I have never used InstantOC, but I have heard good things about it.)