Unity2D: Mirror Multiplayer - How to view an opponent's screen in a match 3 game

I'm making my own match 3 multiplayer game; the concept is for two people to face off against each other by swapping tiles to make lines of matching pieces. I want to introduce multiplayer by connecting two players together and letting each person see their opponent's screen, as well as syncing their moves. So far, I have a simple match 3 game (I created one using different tutorials, mainly this playlist) and have followed a simple multiplayer tutorial (Mirror) so a player can host or be a client. My problem is that I have no idea how to show each player their opponent's screen. I even found an example of what I want the multiplayer mode in my game to be like. Can anyone point me in the right direction? Please and thank you.
Additional information:
I'm using Mirror for multiplayer.
I created a NetworkManager GameObject and added the necessary components to it. I also added the game pieces to the 'Registered Spawnable Prefabs' list and created an empty GameObject, called Player, for the player prefab.
Each game piece has a Network Transform and Network Identity component attached.
The player prefab has a camera as a child object, too.
This is what I want my game to look like:
Overall, I want the players to view each other's screens:
As you can see, both players are connected; what I want to do is allow each player to see their opponent's screen. Does anyone have an idea of how I can do it?
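For reference, here is a minimal sketch of how I imagine the move syncing could work with Mirror's [Command]/[ClientRpc] pattern. The Board class and its ApplySwap method are hypothetical stand-ins for my match 3 logic:

```csharp
using Mirror;
using UnityEngine;

// Hypothetical stand-in for the match 3 board logic.
public class Board : MonoBehaviour
{
    public void ApplySwap(Vector2Int a, Vector2Int b)
    {
        // swap the two tiles and resolve matches here
    }
}

// Sits on the player prefab. The owning client sends each move to the
// server with a [Command]; the server rebroadcasts it to every client
// with a [ClientRpc], so both screens stay in sync.
public class PlayerMoves : NetworkBehaviour
{
    public Board localBoard;   // the board I play on
    public Board remoteBoard;  // the opponent's board shown on my screen

    // Called from the local input code when I swap two tiles.
    public void TrySwap(Vector2Int a, Vector2Int b)
    {
        if (isLocalPlayer)
            CmdSwap(a, b);
    }

    [Command] // runs on the server
    void CmdSwap(Vector2Int a, Vector2Int b) => RpcSwap(a, b);

    [ClientRpc] // runs on every client
    void RpcSwap(Vector2Int a, Vector2Int b)
    {
        // On my machine, my own player object drives my board, while the
        // opponent's player object drives the board I watch them on.
        (isLocalPlayer ? localBoard : remoteBoard).ApplySwap(a, b);
    }
}
```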
Thank you! :)

Related

Personal First Person Cameras for each Player using Unity's 'Netcode for GameObjects'

I'm currently learning how to use Netcode for GameObjects and have just gotten the client, host, and server working, as well as a movement script that only affects each player individually. The only problem now is that when a new player is spawned, every existing player's camera is switched to the camera spawned within the new player's prefab. I have looked around online for a solution, and this is the closest thing I could find, but it isn't done using Netcode for GameObjects and I'm not sure how to translate the information in the video over.
Any idea how I could achieve this?
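In case it helps frame answers: the closest pattern I've found is to keep the camera on the prefab but only enable it for the owning client. A minimal sketch (untested) of what I think that looks like in Netcode for GameObjects:

```csharp
using Unity.Netcode;
using UnityEngine;

// Lives on the player prefab next to its camera. When the object spawns
// on the network, only the client that owns it keeps the camera (and
// audio listener) enabled, so new players never steal an existing view.
public class OwnerCamera : NetworkBehaviour
{
    [SerializeField] Camera playerCamera;     // the camera child on the prefab
    [SerializeField] AudioListener listener;  // avoids duplicate listeners

    public override void OnNetworkSpawn()
    {
        playerCamera.enabled = IsOwner;
        listener.enabled = IsOwner;
    }
}
```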

Why is the NetworkIdentity component making my Game Object invisible? (Unity)

Currently trying to make a multiplayer snake game; I have never made a multiplayer game before. I am having a very strange issue where, whenever I add the NetworkIdentity component to my 'Snake' game object, it becomes invisible but is still there. The game is still functional; you just can't see the snake.
I have two pictures attached, one is the game with the NetworkIdentity component, one is the game without it. Thank you for the help.
Without component
With component
A) Your image labels seem to be flipped .. exchange the Without and With ;)
And B) AFAIK this is the expected behavior for any GameObject with a NetworkIdentity as long as you are not connected to a network yet. It will get enabled once you host or join a session.
You would probably rather convert this into the player prefab, though, and not have it in your scene from the beginning; instead, let the NetworkManager automatically spawn it for every player in the session, including yourself.
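For illustration, a minimal sketch of that setup, assuming a recent Mirror version and that you assign the Snake prefab as the Player Prefab on the NetworkManager (the override is optional; the default NetworkManager already does exactly this):

```csharp
using Mirror;
using UnityEngine;

// Sketch of a custom NetworkManager that spawns the snake prefab for each
// joining player instead of having it sit (disabled) in the scene.
public class SnakeNetworkManager : NetworkManager
{
    public override void OnServerAddPlayer(NetworkConnectionToClient conn)
    {
        // playerPrefab is the prefab assigned in the inspector; the spawned
        // instance is enabled and owned by the joining connection.
        GameObject snake = Instantiate(playerPrefab);
        NetworkServer.AddPlayerForConnection(conn, snake);
    }
}
```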

HoloLens/Unity shared experience: How to track a user's "world" position instead of Unity's position?

I have an AR game I'm developing for the HoloLens that involves rendering holograms according to the users' relative positions. It's a multiplayer shared experience where everyone in the same physical room connects to the same instance (a shared Unity scene) hosted via cloud or LAN, and the players who have joined can see holograms rendered at other players' positions.
For example: Players A and B join an instance; they're in the same room together. Player A can see a hologram above Player B tracking Player B's position (a Sims cursor, if you will). Then, once Player A gets closer to Player B, a couple more holographic panels open up displaying Player B's stats. These panels also track Player B's position and are always rendered with a slight offset relative to Player B's headset position. Player B sees the same on Player A, and vice versa.
That's fundamentally what my AR game does for the time being.
Problem:
The problem I'm trying to solve is tracking the user's position relative to the physical room itself, instead of using the coordinates Unity reports for Player A's and Player B's game objects.
My app works beautifully if I mark a physical position on the floor, plus a facing direction, that all players must assume when starting the Unity app. This forces the coordinate system in every player's Unity app to have a matching origin point and initial heading in the real world. Only then am I able to render holograms relative to a user's position and have them correlate 1:1 between Unity space and the physical space around the headset.
But what if I want Player A to start the app on one side of the room and Player B to start it on the other side? When I do this, the origin point of Player A's Unity world is at a different physical spot than Player B's, and holograms rendered at A's or B's position end up at a tremendous offset.
I have some screenshots showing what I mean.
In this one, I have 3 HoloLenses. The two on the floor, plus the one I'm wearing to take screenshots.
There's a blue X on the floor (it's the sheet of paper; I realized you can't see it in the image) where I started my Unity app on all three HoloLenses. So the origin of the Unity world for all three is that specific physical location. As you can see, the blue cursor showing connected players tracks the headset's location beautifully. You can even see the headsets' locations relative to the screenshooter on the minimap.
The gimmick here to make the hologram tracking be accurate is that all three started in the same spot.
Now, in this one, I introduced a red X. I restarted the Unity app on one of the headsets and used the red X as its starting spot. As you can see in this screenshot, the tracking is still precise, but it comes at a tremendous offset, because my relative origin point in Unity (the blue X) is different from the other headset's relative origin point (the red X).
Problem:
So this is the problem I'm trying to solve. I don't want all my users to have to initialize the app in the same physical spot, one after the other, just to make the holograms appear in the correct positions. The HoloLens does a scan of the whole room, right?
Is there not a way to synchronize these maps across all the connected HoloLenses so they can share their absolute coordinates? Then I could use those as a transform point in the Unity scene instead of having to track multiplayer game objects.
Here's a map on my headset that I used to get the screenshots from the same angle.
This is tricky with inside-out tracking, as everything is relative to the observer (as you've discovered). What you need is to be able to identify a common, unique real-world location that your system will then treat as the 'common origin'. Either a QR code or a unique object that the system can detect and localise should suffice; then keep track of your users' (and other tracked objects') offsets from that known origin within the virtual world.
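To make the 'common origin' idea concrete, a minimal sketch, assuming sharedAnchor is a Transform placed where the detected marker (QR code or object) sits on each device:

```csharp
using UnityEngine;

// Positions expressed relative to the shared anchor refer to the same
// physical spot on every headset, so they are safe to send over the network.
public class AnchorRelativePose : MonoBehaviour
{
    public Transform sharedAnchor; // placed at the detected common marker

    // Convert a world position on this device into the shared anchor's frame.
    public Vector3 ToAnchorSpace(Vector3 worldPosition) =>
        sharedAnchor.InverseTransformPoint(worldPosition);

    // Convert a position received from another player back into this device's world.
    public Vector3 FromAnchorSpace(Vector3 anchorPosition) =>
        sharedAnchor.TransformPoint(anchorPosition);
}
```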
My answer was deleted because reasons, so round #2. Something about link-only answers.
So, here's the link again.
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-sharing-05
And to avoid the last situation, I'm going to add that whoever wants a synchronized multiplayer experience with HoloLens should read through the whole tutorial series. I am not going to provide a summary of how to do this without copying and pasting the docs. Just know that you need a spatial anchor that others load into their scene.

UE4 - Changing ADS Camera when using a different weapon

I'm very new to Unreal Engine 4 and have been following an FPS guide online!
I currently have an AK and an M4 in the game and can switch between the two using 1 / 2 on the keypad. I set up the first aim-down-sights camera on the AK, and it works well! However, if I equip the M4 and aim down sights, the camera is no longer in the correct spot and doesn't line up with the iron sights at all. So I added another camera, called M4A1 ADS Camera, but I can't figure out how to switch to that camera when aiming down sights, and back to the AK camera when using that weapon.
Is there a better way of doing this, or any tutorials / tips to help with the process in the future?
To try to answer your question as asked: you could add a switch case, or make branches, to check which weapon is equipped at the time.
But I'd say a better way to do this would be to add a camera to your weapon blueprint; then you could access the camera from the weapon directly (assuming you have a master weapon class). This way you would configure one ADS camera per weapon and align it properly in its own blueprint.
You can use the "Set View Target with Blend" function to change your cameras; it is very good for controlling the blend speed and other blending behavior.
I know this is old, but even cleaner than Deimos's suggestion would be to have an ADS camera component on your character and attach it to a socket you create on each of your weapons. You can adjust the socket position and rotation on each weapon's skeleton, and then all you do from the character side is attach the camera to the weapon whenever you equip one.

Unity Photon Player Instantiation

I have a SteamVR Unity project which I'm converting to multiplayer.
When another client joins the game, instead of the two players seeing each other, each player has their own version of the game in which they control all of the player instances.
For example, while one player is connected everything is fine, but when a second player joins, the game just adds another Player prefab that the first player controls as well.
I tried replacing the Player with a simple cube and everything seems fine.
Both the Player and the cube have Photon Transform View and Photon View scripts.
I would appreciate any help I can get.
This is a common problem when you start with PUN. You probably set up a player prefab with network synchronization and instantiate it for each player. All scripts on the instances will act on the local input, which is what you are seeing now.
You want two variants of the prefab: one for the local and one for the remote representation. As it's impractical to always configure two prefabs, you instead build one which initializes itself (in Awake or Start) as local or remote. Your scripts should check the object's PhotonView to see whether it IsMine or not.
This can be done per component (in each distinct script), or you could add a component which enables/disables the scripts on a GameObject depending on IsMine.
The Basics Tutorial does this, for example.
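A minimal sketch of that pattern, assuming PUN 2 (the localOnlyScripts array is a hypothetical inspector-assigned list of your input/movement scripts):

```csharp
using Photon.Pun;
using UnityEngine;

// On the single shared prefab: after instantiation, disable input-driven
// components and the camera on every instance that is not mine.
public class PlayerSetup : MonoBehaviourPun
{
    public MonoBehaviour[] localOnlyScripts; // movement/input scripts
    public Camera playerCamera;

    void Awake()
    {
        bool mine = photonView.IsMine;
        foreach (var script in localOnlyScripts)
            script.enabled = mine;        // remote copies ignore local input
        if (playerCamera != null)
            playerCamera.enabled = mine;  // render only through my own camera
    }
}
```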
Unity doesn't know whether it's multiplayer or not. When you give an input, all of the scripts that are waiting for input take it and behave accordingly. To solve this, basically create another player that doesn't take any input and spawn it for the other players.