Has any dev written iPhone WiFi/Bluetooth multiplayer before? - iphone

Has any dev here written iPhone WiFi/Bluetooth multiplayer before?
Recently, I've been trying to add Bluetooth multiplayer to my latest game, Doodle Kart. But I found that there's a huge amount of data that needs to be shared between the two devices:
- your car's position and direction
- your car's status (normal, hit by a bullet, falling into a hole, ...)
- CPU cars' positions, directions, and status
- item positions and status (pencil, bullet, ...)
I'm thinking of having one device calculate everything while the other device just waits and receives the data to display on screen. Does that make sense?
Actually, I should ask the most important question first: do you think it's even possible to make Bluetooth multiplayer work for my game? It just seems like too much data to share between the devices.

Usually, multiplayer games just share "events", like:
Player begins to turn left/right.
Player begins to accelerate.
Player shoots from x/y/z in direction x/y/z.
Item spawns at x/y/z.
Player acquires item.
Each device then simulates the rest itself, as if everything were happening locally.
This reduces the data that needs to be transmitted, but requires periodic "full updates" that re-sync the game state (e.g. every 10 seconds).
In short:
Transfer actions, not data.
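To make "transfer actions, not data" concrete, here is a minimal sketch of what an event-based protocol could look like, assuming a MultipeerConnectivity session for the Bluetooth/WiFi link (the KartEvent and EventChannel names are made up for illustration):

```swift
import MultipeerConnectivity

// Hypothetical event type: transmit what happened, not the whole game state.
enum KartEvent: Codable {
    case steer(playerID: String, direction: Float)      // player began turning
    case accelerate(playerID: String, throttle: Float)  // player began accelerating
    case fire(playerID: String, from: [Float], heading: [Float])
    case itemSpawned(kind: String, at: [Float])
    case fullSync(stateBlob: Data)                       // periodic authoritative snapshot
}

final class EventChannel {
    let session: MCSession   // assumed to be already connected to the other device

    init(session: MCSession) {
        self.session = session
    }

    // Encode a single event and send it to all connected peers.
    func send(_ event: KartEvent) throws {
        let data = try JSONEncoder().encode(event)
        // .unreliable is fine for frequent inputs; use .reliable for the full syncs.
        try session.send(data, toPeers: session.connectedPeers, with: .unreliable)
    }

    // Decode an event received in MCSessionDelegate's didReceive callback.
    func decode(_ data: Data) throws -> KartEvent {
        try JSONDecoder().decode(KartEvent.self, from: data)
    }
}
```

An event like this is a few dozen bytes, versus the full positions and states of every car and item many times per second, which is why the event approach tends to fit within a Bluetooth link's budget.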

Related

HoloLens/Unity shared experience: How to track a user's "world" position instead of Unity's position?

I have here an AR game I'm developing for the HoloLens that involves rendering holograms according to the users' relative positions. It's a multiplayer shared experience where everyone in the same physical room connects to the same instance (shared Unity scene) hosted via cloud or LAN, and the players who have joined can see holograms rendered at other players' positions.
For example: Player A, and B join an instance, they're in the same room together. Player A can see a hologram above Player B tracking Player B's position (A Sims cursor if you will). Then once Player A gets closer to Player B, a couple more holographic panels can open up displaying the stats of Player B. These panels are also tracking Player B's position and are always rendered with a slight offset relative to Player B's headset position. Player B also sees the same on Player A and vice versa.
That's fundamentally what my AR game does for the time being.
Problem:
The problem I'm trying to solve is tracking the user's position relative to the room itself, instead of using the coordinate positions Unity reports for Player A's and Player B's game objects.
My app works beautifully if I mark a physical position on the floor, and a facing direction, that all the players must assume when starting the Unity app. This forces the coordinate systems in all the players' Unity apps to have a matching origin point and initial heading in the real world. Only then am I able to render holograms relative to a user's position and have it correlate 1:1 between the Unity space and the real physical space around the headset.
But what if I want Player A to start the app on one side of the room and have Player B start the app on the other side? When I do this, the origin point of Player A's Unity world is at a different physical spot than Player B's, and holograms end up rendering A's or B's position with a tremendous offset.
I have some screenshots showing what I mean.
In this one, I have 3 HoloLenses. The two on the floor, plus the one I'm wearing to take screenshots.
There's a blue X on the floor (it's the sheet of paper; I realized you can't see it in the image) where I started my Unity app on all three HoloLenses. So the origin of the Unity world for all three is that specific physical location. As you can see, the blue cursor showing connected players tracks each headset's location beautifully. You can even see the headsets' locations relative to the screenshooter on the minimap.
The gimmick here to make the hologram tracking be accurate is that all three started in the same spot.
Now in this one, I introduced a red X. I restarted the Unity app on one of the headsets and used the red X as its starting spot. As you can see in this screenshot, the tracking is still precise, but it comes with a tremendous offset, because my relative origin point in Unity (the blue X) is different from the other headset's relative origin point (the red X).
Problem:
So this is the problem I'm trying to solve. I don't want all my users to have to initialize the app in the same physical spot, one after the other, just to make the holograms appear in the correct positions. The HoloLens scans the whole room, right?
Is there not a way to synchronize these maps across all the connected HoloLenses so they can share their absolute coordinates? Then I could use those as a transform point in the Unity scene instead of having to track multiplayer game objects.
Here's a map from the headset I used to get the screenshots, from the same angle.
This is tricky with inside-out tracking, as everything is relative to the observer (as you've discovered). What you need is to identify a common, unique real-world location that your system will then treat as the 'common origin'. Either a QR code or a unique object that the system can detect and localise should suffice; then keep track of your user's (and other tracked objects') offset from that known origin within the virtual world.
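The actual implementation would live in Unity/C# on the HoloLens, but the underlying coordinate math is engine-agnostic. Here is a small sketch of the "offset from a common origin" idea using Swift's simd types, just to illustrate; the function names are mine, and `anchorTransform` stands for the detected pose of the shared marker in the local device's frame:

```swift
import simd

/// Convert a position from this device's local frame into anchor-relative
/// ("shared") coordinates that every device can agree on.
func toSharedSpace(localPosition: SIMD3<Float>, anchorTransform: simd_float4x4) -> SIMD3<Float> {
    let p = SIMD4<Float>(localPosition.x, localPosition.y, localPosition.z, 1)
    let shared = anchorTransform.inverse * p
    return SIMD3<Float>(shared.x, shared.y, shared.z)
}

/// Convert an anchor-relative position (received from another device) back into
/// this device's local frame for rendering.
func toLocalSpace(sharedPosition: SIMD3<Float>, anchorTransform: simd_float4x4) -> SIMD3<Float> {
    let p = SIMD4<Float>(sharedPosition.x, sharedPosition.y, sharedPosition.z, 1)
    let local = anchorTransform * p
    return SIMD3<Float>(local.x, local.y, local.z)
}
```

Each headset only ever sends anchor-relative positions over the network, so it no longer matters where each user happened to launch the app.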
My answer was deleted because reasons, so round #2. Something about link-only answers.
So, here's the link again.
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-sharing-05
And to avoid the last situation, I'm going to add that whoever wants a synchronized multiplayer experience with HoloLens should read through the whole tutorial series. I am not providing a summary of how to do this, since that would mean copying and pasting the docs. Just know that you need a spatial anchor that the others load into their scene.

How do I track the Unity position of physical objects the player is interacting with using HoloLens 2 hand tracking data?

Basically I am working on a mixed reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to HTC Vive / Oculus Rift).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes. Assume for this project that we are not allowed to use QR codes, and that we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation, since it can be found quickly with the HL2 device. I have seen the QR approach used in multiple venues for VR LBE (location-based entertainment) experiences like the one described here; the QR code just sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you could possibly pair the device and, if it has location information, have it transmit where it is. Based on everything above, this would be a custom solution, highly dependent on the controller's capabilities, if QR codes are out of the equation. I have seen some controller solutions start the user experience by having the player touch the floor to get an initial reference point, or alternatively always pick the gun up from a specific real-world location, like some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Are you allowed to attach multiple QR codes to the controller? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, you will need an Azure service or a third-party library; for more information, please see the Computer Vision documentation.

Using GameCenter for parallel turn-based games?

I've played around with making turn-based games using GameCenter. I understand that by default, GameCenter assumes that out of a number of participants, at any given time, one player holds the "play baton", and that this player is the only one who can affect the current game state. Gameplay is asynchronous, i.e. whoever's turn it is can take their time, and the other players will be notified once it's their turn.
So far, so good.
Now I want to use GameCenter to implement a similar, but slightly different, kind of turn-based game: an asynchronous game where, instead of a serial succession of players, players take their turns in parallel, and the turns are then consolidated into a new game state once all players have "turned in" their moves.
A good model game for this would be Rock, Paper, Scissors: both players secretly decide on their move ("rock", or "paper", or "scissors"). The order in which those are then submitted to the server is arbitrary; i.e. no player should ever get a "not your turn"-type error when they try to submit a move in an ongoing round. Once they both turned in their moves, all player choices are revealed, and the winner of the current round is determined/declared.
The question is: is it possible at all to use the GameCenter infrastructure for this kind of game, either by design or by work-around? And if so, what would be considered a good approach?
It is not possible to implement this with Game Center the way you suggested, but you can take an approach that will look as if you did manage to do this.
When you start a turn-based match, it's always the local player's turn. Either Game Center provides you with a blank match, or you will receive a match in which someone else already took their turn. There is no way to control this, so you need to be prepared for both.
The approach you can take is to have a player always take their turn before you show them anyone else's move. Only then do you check whether, in your local copy of the match, everyone has now taken their turn, and if so you show the result. This provides the illusion of what you are asking for. In the case of Rock-Paper-Scissors you can now decide the match outcome. The other player will be notified.
However, if not everyone has taken their turn in this round, don't show anything; still update the game state, but tell the user you're now waiting for others to take their turn. You will be able to show the result when you are notified that it's your turn again, with a game state that already indicates the outcome.
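A rough sketch of that flow with GKTurnBasedMatch, using Rock-Paper-Scissors as the model game (the RPSRound type and the helper function are made up for illustration, and error handling is minimal):

```swift
import GameKit

// Per-round state stored in matchData: each player's secret move.
struct RPSRound: Codable {
    var moves: [String: String] = [:]   // gamePlayerID -> "rock" | "paper" | "scissors"
}

// Called when it is the local player's turn. The player picks a move *before*
// being shown anything about the round, then play passes on.
func submit(move: String, in match: GKTurnBasedMatch) {
    var round = (try? JSONDecoder().decode(RPSRound.self, from: match.matchData ?? Data())) ?? RPSRound()
    round.moves[GKLocalPlayer.local.gamePlayerID] = move
    let data = (try? JSONEncoder().encode(round)) ?? Data()

    if round.moves.count == match.participants.count {
        // Everyone has moved: reveal the result locally and resolve the round here
        // (or finish with endMatchInTurn if this was the last round).
    }

    // Pass the turn only to participants who have not submitted a move yet.
    let waiting = match.participants.filter { participant in
        guard let id = participant.player?.gamePlayerID else { return true }
        return round.moves[id] == nil
    }
    match.endTurn(withNextParticipants: waiting.isEmpty ? match.participants : waiting,
                  turnTimeout: GKTurnTimeoutDefault,
                  match: data) { error in
        if let error = error { print("endTurn failed: \(error)") }
    }
}
```

The key point is that the UI never reveals the other players' moves before the local player has committed their own, which is what preserves the "parallel" feel on top of Game Center's serial turn model.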

iOS dev - sharing screen captures

I'm looking to write an app that lets users draw a game, something like tic-tac-toe, for example. One user would begin by drawing the grid, and then each player could literally draw their own O or X. It would be a multiplayer game where each user has his/her own device.
I'm not sure what would be the best way to share this data from device to device. I've only been able to think of 2 options:
Should I attempt to upload a screenshot to the server after each player makes a move?
Should I upload the exact points where the user is drawing and then redraw these points on the other user's screen?
Any other suggestions, or maybe a point in the right direction? I'm fairly new to all of this so please don't be too harsh ;)
Starting out on a multiplayer game (even if it is a straightforward one) is ambitious :) If you don't mind restricting yourself to iOS 5 and above, then I would check out the new turn-based multiplayer game functionality. There's a good tutorial here:
http://www.raywenderlich.com/5480/beginning-turn-based-gaming-with-ios-5-part-1
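For what it's worth, option 2 (sending the points) is far lighter than uploading screenshots. Here is a minimal sketch of what such a payload could look like; the Stroke type is made up for illustration, and the encoded data could travel in the turn-based match data from the tutorial above or over any other transport:

```swift
import UIKit

// One stroke of the drawing: just the points the user's finger passed through.
struct Stroke: Codable {
    var points: [CGPoint]
    var lineWidth: CGFloat
}

// Serialize a stroke so it can be sent to the other device.
func encode(_ stroke: Stroke) throws -> Data {
    try JSONEncoder().encode(stroke)
}

// On the receiving device, decode the stroke and rebuild a path to draw it.
func path(from data: Data) throws -> UIBezierPath {
    let stroke = try JSONDecoder().decode(Stroke.self, from: data)
    let path = UIBezierPath()
    path.lineWidth = stroke.lineWidth
    if let first = stroke.points.first {
        path.move(to: first)
        for point in stroke.points.dropFirst() {
            path.addLine(to: point)
        }
    }
    return path
}
```

Since both devices redraw from the same points, the result also stays sharp at any screen size, which a screenshot would not.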

iOS: Get how fast user is moving

I'm trying to figure out whether a user is not moving at all, walking, or running, using the iPhone. I'm not trying to implement a pedometer. I just want to know roughly whether someone is moving briskly, slowly, or not at all. I don't need mph or anything like that.
I think the accelerometer may be able to do this for me, but I was wondering if someone knows of any tutorials or example code that might be able to point me in the right direction?
Thanks to all that reply
The accelerometer won't do you any good here - it will only capture changes in velocity.
Just track the current location periodically and calculate the speed.
There are no hard thresholds for walking vs. running motion, so you will have to experiment a bit. The AccelerometerGraph sample code should get you started on how to get and interpret accelerometer data.
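A minimal sketch of that Core Location approach; the class name and the thresholds are placeholders, not values from the answer (CLLocation's speed is in metres per second, and negative values mean the reading is invalid):

```swift
import CoreLocation

final class SpeedClassifier: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
    }

    func start() {
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let speed = locations.last?.speed, speed >= 0 else { return }
        switch speed {                      // metres per second
        case ..<0.3: print("not moving")
        case ..<2.0: print("walking")
        default:     print("running")
        }
    }
}
```

Keep in mind that GPS-based speed is noisy indoors, which is one reason the other answers also discuss the accelerometer.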
The Accelerometer is good, but if the user has an iPhone 4 or iPad 2 you should use the gyroscope.
CMMotionManager and the Event Handling Guide - Motion Events
Apple Documentation is the best example you can get!
People have a different bounce in their step between walking and running, which can be measured with the accelerometer, but this differs between individuals (what shoes they are wearing, what surface they are on, which part of the body the iPhone is attached to, etc.), and this motion can probably be imitated by shaking the iPhone just right while standing still.
Experiment by recording the two types of acceleration profiles, and then use some sort of pattern matching to pick the most likely profile candidate from the current recorded acceleration data.
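A rough sketch of what that "record and compare" approach could look like with CMMotionManager; the class name, window size, and thresholds are all placeholders to show the shape of the code, not calibrated values:

```swift
import CoreMotion
import Foundation

final class BounceEstimator {
    private let motion = CMMotionManager()
    private var window: [Double] = []

    func start() {
        guard motion.isAccelerometerAvailable else { return }
        motion.accelerometerUpdateInterval = 1.0 / 50.0   // sample at 50 Hz
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }
            // Use the magnitude so the result does not depend on device orientation.
            let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
            self.window.append(magnitude)
            if self.window.count >= 100 {                  // roughly 2 seconds of samples
                self.classify(self.window)
                self.window.removeAll()
            }
        }
    }

    private func classify(_ samples: [Double]) {
        let mean = samples.reduce(0, +) / Double(samples.count)
        let variance = samples.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(samples.count)
        // A stronger "bounce" shows up as a larger variance. Thresholds are guesses.
        if variance < 0.01 { print("standing still") }
        else if variance < 0.2 { print("walking") }
        else { print("running") }
    }
}
```

As the answer notes, the profiles vary a lot between people, so matching against recorded profiles (or at least tuning per user) will be more robust than fixed thresholds.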