Hi, I am new to multiplayer development. I am using Photon Voice and want to make a private voice chat between two players in a room that has many players. I was directed to
https://doc.photonengine.com/en-us/voice/current/getting-started/voice-for-pun
by Photon support, but I am not able to get it working. How should I make a private voice chat in this multiplayer setup? Please give an example to explain. Thanks.
There is a demo scene for Push To Talk which showcases how to do this.
Let me try to explain how to implement player-to-player voice chat using the current Photon Voice:
Photon Voice uses voice groups (which are nothing but Photon LoadBalancing's "Interest Groups") to separate voice channels/targets.
Filter incoming sounds (select "what to hear" or "who do you want to listen to"):
Each actor needs to subscribe to the voice groups it's interested in. By default, all actors listen to audio group 0, which can be seen as a global audio group for voice broadcast. If you want to listen to voice sent to other groups, you need to subscribe to them. You can also unsubscribe from previously subscribed ones. The operation to do all this is: PhotonVoiceNetwork.Client.ChangeAudioGroups(byte[] groupsToRemove, byte[] groupsToAdd);
Select a single transmission target audio group (select "who do you want to talk to"):
Each actor needs to decide to which voice group it wants to transmit audio. The target audio group can be set using PhotonVoiceRecorder.AudioGroup.
So, depending on the use case, you can do one of the following:
Speak to a single group and listen to multiple groups. You can speak to a group other than those you listen to. You can listen to all available groups.
Speak to a single group and listen only to default group.
Speak and listen to one and the same audio group. For this particular use case, there is a shortcut to switch between this single in/out group by setting PhotonVoiceNetwork.Client.GlobalAudioGroup. If you choose to set GlobalAudioGroup, there is no need to call ChangeAudioGroups or set PhotonVoiceRecorder.AudioGroup, as it's done internally for you.
In all three cases, you always listen to the default audio group 0. A minimal code sketch of case 1 follows below.
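For example, a rough sketch of case 1 in C# (only ChangeAudioGroups, PhotonVoiceRecorder.AudioGroup and GlobalAudioGroup are the API members described above; the group numbers and the way the recorder is fetched are just illustrative):

// Case 1: listen to groups 12 and 13 (besides the default group 0), talk only to group 12.
// Assumption: passing null as groupsToRemove leaves existing subscriptions untouched.
PhotonVoiceNetwork.Client.ChangeAudioGroups(null, new byte[] { 12, 13 });

// The recorder component on the local voice object decides where you transmit.
PhotonVoiceRecorder recorder = GetComponent<PhotonVoiceRecorder>();
recorder.AudioGroup = 12;

// Case 3 shortcut: speak and listen to one and the same group instead.
// PhotonVoiceNetwork.Client.GlobalAudioGroup = 12;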
The Photon Voice demo offers two options for private (1 to 1) voice chat:
"MuteOthersWhileTalking" enabled: corresponds to case n°3.
"MuteOthersWhileTalking" disabled: corresponds to case n°1.
The audio groups in the demo are constructed this way:
We have rooms of 4 actors.
We need 6 audio groups.
For each pair of actors we calculate a unique group code.
actor A with actor number (player ID) equal to x
actor B with actor number equal to y
Here is how we get the audio group of a private voice chat between A and B (audio groups are bytes, so if an actor number reaches 24 this scheme breaks):
if (x < y)
{
    AudioGroup = y + x * 10;
}
else if (x > y)
{
    AudioGroup = x + y * 10;
}
else
{
    // error: x == y, an actor cannot have a private chat with itself
}
Example: The audio group for actors 1 and 2 is 12.
Another approach to "calculating" private voice groups is to use the actor number as the audio group: each actor subscribes to a single audio group with a code equal to its actor number. Whenever you want to talk to a remote actor, you set the target audio group (using PhotonVoiceRecorder.AudioGroup only) to the target actor number; a short sketch follows after the pros and cons below.
The advantages of this approach:
Fewer audio groups: we need only as many audio groups as there are actors.
Less audio group switching: each actor subscribes to a single audio group and never unsubscribes.
The drawback of this approach:
You can't mute any other actor. You will listen to anyone who wants to talk to you privately.
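A rough sketch of this second approach (assuming PUN classic's PhotonNetwork.player.ID for the local actor number; targetActorNumber and the recorder lookup are placeholders):

// On join: listen to my own group (my actor number) in addition to the default group 0.
byte myGroup = (byte)PhotonNetwork.player.ID;
PhotonVoiceNetwork.Client.ChangeAudioGroups(null, new byte[] { myGroup });

// To talk privately to a remote actor, target that actor's number; no re-subscribing needed.
PhotonVoiceRecorder recorder = GetComponent<PhotonVoiceRecorder>();
recorder.AudioGroup = (byte)targetActorNumber;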
Related
Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (omnetpp + inet + simu5g). For video it is something like 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in my OMNeT++ module's handleMessage method I have two sendTo calls, each scheduled "as soon as possible", i.e., with no delay - that corresponds to the idea of multiple parallel streams. How does OMNeT++ handle situations when it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, thereby really allowing only one packet send per handleMessage call, or is that wrong? I want to optimize data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple streams at the same time, because if it actually buffers, maybe it then makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like sendTo() and handleMessage(). Any call of the sendTo() method just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() method, they will be queued in that order. The packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. So you can send as many packets as you wish and those packets will be delivered one by one to the destination's handleMessage() method.
But beware! Even if the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That's why OMNeT++, although a single-threaded application that runs events sequentially, can still simulate any number of systems running in parallel.
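To make that concrete, here is a minimal sketch of a module that emits two packets from one handleMessage() call (not your model: the gate name "out", the 10 ms interval and the packet sizes are made up, and it uses OMNeT++'s actual cSimpleModule::send() call, which is presumably what sendTo() refers to):

#include <omnetpp.h>
using namespace omnetpp;

// Hypothetical sensor module: emits one video packet and one LIDAR packet
// per streaming interval, i.e. from the same handleMessage() call.
class SensorStreamer : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        scheduleAt(simTime() + 0.01, new cMessage("tick"));
    }
    virtual void handleMessage(cMessage *msg) override {
        // Both send() calls only insert future events into the event queue.
        // The peer's handleMessage() is called once per packet (sequentially
        // in real time), but both arrivals carry the same simulation time
        // (plus the channel delay) when the connection has no datarate.
        cPacket *video = new cPacket("video");
        video->setByteLength(1400);
        cPacket *lidar = new cPacket("lidar");
        lidar->setByteLength(1240);
        send(video, "out");   // "out" must be declared as a gate in the NED file
        send(lidar, "out");
        scheduleAt(simTime() + 0.01, msg);   // next streaming interval
    }
};

Define_Module(SensorStreamer);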
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, which is a model framework created specifically to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must queue in the network interface queue and wait for an opportunity to be delivered from there.
This is actually the core of the problem for Time Sensitive Networking: given a lot of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the various streams at the destination, and how can you configure the source and network gate scheduling to achieve some desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not your concern, but you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.
I have a @KafkaListener class that listens to a particular topic and consumes records that contain either a Person object or a Phone object (and only one of them). Every Phone has a reference / correlation id to the corresponding Person. The listener class performs certain validations that are specific to the type received, saves the object into a database, and produces a transfer success / failed response back to Kafka that is consumed by another service.
So a Person can successfully be transferred without any corresponding Phone, but a Phone transfer should only succeed if the corresponding Person transfer has succeeded. I can't wrap my head around how to implement this "synchronization", because Persons and Phones get into Kafka independently as separate records and it's not guaranteed that the Person corresponding to a particular Phone will be processed before the Phone.
Is it at all possible to have such a synchronization given the current architecture or should I redesign the producer and send a Person / Phone pair as a separate type?
Thanks.
It's not clear how you're using the same serializer for different object types, but you should probably create separate topics and/or branch your current one into two (refer to the Kafka Streams API).
I assume there are fewer people than phones, in which case you could build a KTable from a people topic; then, as you get phone records, you can perform a left join or lookup against this table for some person ID.
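A rough Kafka Streams sketch of that KTable idea (the topic names are made up, and both record types are treated here as plain JSON strings keyed by the person id; swap in your real serdes and keys):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class PhoneTransferTopology {

    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // People become a changelog-backed lookup table, keyed by person id.
        KTable<String, String> people = builder.table("people",
                Consumed.with(Serdes.String(), Serdes.String()));

        // Phones arrive as a stream, also keyed by the person id they reference.
        KStream<String, String> phones = builder.stream("phones",
                Consumed.with(Serdes.String(), Serdes.String()));

        // Left join: if the person hasn't been transferred yet, `person` is null
        // and the phone transfer can be reported as failed (or parked for retry).
        phones.leftJoin(people, (phone, person) ->
                    person == null ? "{\"status\":\"failed\"}" : "{\"status\":\"success\"}")
              .to("phone-transfer-results", Produced.with(Serdes.String(), Serdes.String()));

        return builder.build();
    }
}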
Other solutions could involve using Kafka Connect to dump records into a system where you can do the join
I'm writing an application using Elixir Channels to handle realtime events. I understand that there will be 1 socket open per client and that multiple channels can be multiplexed over it. My app is a chat application where users are part of multiple group chats. I have 1 Phoenix Channel called MessageChannel where the join method handles dynamic topics.
def join("groups:" <> group_id, payload, socket) do
....
Let's say John joins groups/topics A and B while Bob only joins group/topic B. When John sends a message to group/topic A, broadcast!/3 will also send that message to Bob too, correct? Because handle_in doesn't have any context of which topic/group the message was sent to.
How would I handle it so that Bob doesn't receive the events that were sent to group A? Am I designing this right?
Because handle_in doesn't have a context of which topic/group the message was sent to.
When Phoenix.Channel.broadcast/3 is called, apparently it does have the topic associated with the message (which is not obvious from the signature). You can see the code starting on this line of channel.ex:
def broadcast(socket, event, message) do
  %{pubsub_server: pubsub_server, topic: topic} = assert_joined!(socket)
  Server.broadcast pubsub_server, topic, event, message
end
So when the call to broadcast/3 is made using the socket, it pattern matches out the current topic, and then makes a call to the underlying Server.broadcast/4.
(If you're curious like I was, this in turn makes a call to the underlying PubSub.broadcast/3 which does some distribution magic to route the call to your configured pubsub implementation server, most likely using pg2 but I digress...)
So, I found this behavior not obvious from reading the Phoenix.Channel docs, but they do state it explicitly in the phoenixframework channels page in Incoming Events:
broadcast!/3 will notify all joined clients on this socket's topic and invoke their handle_out/3 callbacks.
So it's only being broadcasted "on this socket's topic". They define topic on that same page as:
topic - The string topic or topic:subtopic pair namespace, for example “messages”, “messages:123”
So in your example, the "topics" are actually the topic:subtopic pair namespace strings: "groups:A" and "groups:B". John would have to subscribe to both of these topics separately on the client, so you would actually have references to two different channels, even though they're using the same socket. So assuming you're using the javascript client, the channel creation looks something like this:
let channelA = this.socket.channel("groups:A", {});
let channelB = this.socket.channel("groups:B", {});
Then when you go to send a message on the channel from a client, you are using only the channel that has a topic that gets pattern matched out on the server as we saw above.
channelA.push(msgName, msgBody);
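On the server side, the matching handler would look something like this (the event name and payload are just examples); since broadcast!/3 pulls the topic out of the socket, a message pushed on channelA only reaches clients joined to "groups:A":

# In MessageChannel: broadcast!/3 uses the topic stored in `socket`,
# so a push on "groups:A" is never delivered to members of "groups:B".
def handle_in("new_msg", %{"body" => body}, socket) do
  broadcast!(socket, "new_msg", %{body: body})
  {:noreply, socket}
end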
Actually, the socket routing is done based on how you define your topics in your project's Socket module with the channel API. For my Slack clone, I use three channels. I have a system-level channel to handle presence updates, a user channel, and a room channel.
Any given user has joined 0 or 1 room channels at a time. However, a user may be subscribed to a number of rooms.
For messages going out to a specific room, I broadcast them over the room channel.
When I detect unread messages, notifications, or badges for a particular room, I use the user channel. Each user channel stores the list of rooms the user has subscribed to (they are listed in the client's side bar).
The trick to all this is using a couple of channel APIs, mainly intercept, handle_out, My.Endpoint.subscribe, and handle_info(%Broadcast{}, socket).
I use intercept to catch broadcast messages that I want to either ignore or manipulate before sending them out.
In the user channel, I subscribe to events broadcast from the room channel.
When you subscribe, you get a handle_info call with a %Broadcast{} struct that includes the topic, event, and payload of the broadcast message.
Here are couple pieces of my code:
defmodule UcxChat.UserSocket do
  use Phoenix.Socket
  alias UcxChat.{User, Repo, MessageService, SideNavService}
  require UcxChat.ChatConstants, as: CC

  ## Channels
  channel CC.chan_room <> "*", UcxChat.RoomChannel     # "ucxchat:"
  channel CC.chan_user <> "*", UcxChat.UserChannel     # "user:"
  channel CC.chan_system <> "*", UcxChat.SystemChannel # "system:"
  # ...
end

# user_channel.ex
# ...
intercept ["room:join", "room:leave", "room:mention", "user:state", "direct:new"]
# ...
def handle_out("room:join", msg, socket) do
  %{room: room} = msg
  UserSocket.push_message_box(socket, socket.assigns.channel_id, socket.assigns.user_id)
  update_rooms_list(socket)
  clear_unreads(room, socket)
  {:noreply, subscribe([room], socket)}
end

def handle_out("room:leave" = ev, msg, socket) do
  %{room: room} = msg
  debug ev, msg, "assigns: #{inspect socket.assigns}"
  socket.endpoint.unsubscribe(CC.chan_room <> room)
  update_rooms_list(socket)
  {:noreply, assign(socket, :subscribed, List.delete(socket.assigns[:subscribed], room))}
end

# ...
defp subscribe(channels, socket) do
  # debug inspect(channels), ""
  Enum.reduce channels, socket, fn channel, acc ->
    subscribed = acc.assigns[:subscribed]
    if channel in subscribed do
      acc
    else
      socket.endpoint.subscribe(CC.chan_room <> channel)
      assign(acc, :subscribed, [channel | subscribed])
    end
  end
end
# ...
end
I also use the user_channel for all events related to a specific user like client state, error messages, etc.
Disclaimer: I have not looked at the internal workings of a channel; this information comes entirely from my first experience of using channels in an application.
When someone joins a different group (based on the pattern matching in your join/3), a separate channel is joined (multiplexed over the same socket). Thus, broadcasting to A will not send messages to members of B, only A.
It seems to me the Channel module is similar to a GenServer and the join is somewhat like start_link, where a new server (process) is spun up (however, only if it does not already exist).
You can really ignore the inner workings of the module and just understand that if you join a channel with a different name than already existing ones, you are joining a unique channel. You can also just trust that if you broadcast to a channel, only members of that channel will get the message.
For instance, in my application, I have a user channel that I want only a single user to be connected to. The join looks like def join("agent:" <> _agent, payload, socket) where agent is just an email address. When I broadcast a message to this channel, only the single agent receives the message. I also have an office channel that all agents join and I broadcast to it when I want all agents to receive the message.
Hope this helps.
I am working on a chat application using WebSockets (in Play 2.3 with Scala). The message has to be broadcast to all users or to a specific set of users based on the incoming message. One user can participate in more than one group chat and chat with individuals simultaneously.
Concurrent.broadcast[JsValue] returns the tuple (enumerator, channel). I don't know how to apply a filter to this channel so that only a specific group of clients will get the message.
We can apply filters on the enumerator, like
(enumerator &> Enumeratee.filter[JsValue] {...}), but we cannot push messages via this enumerator.
I don't want to parse the message on the client side.
My code looks like this:
val (public_enumerator, public_channel) = Concurrent.broadcast[JsValue]

def chat = WebSocket.using[JsValue] { request =>
  val in = Iteratee.foreach[JsValue] { msg =>
    public_channel.push(msg)
  }.map { _ =>
    // Quit connection
  }
  (in, public_enumerator)
}
Most of the examples I found online use deprecated methods, some of which were removed in Play 2.3 (like Enumerators.imperative). I don't know how Concurrent.unicast works.
I would like to know if there is another way of doing the same thing using actors. I would also like to know whether this design will handle a higher load (more than 1000 users). Thank you.
Yes, you can handle it with actors; I would even prefer that, since you will have some kind of mutable state (a list of users that are in a specific room, or something like that).
Basically you get one actor per attached websocket. You can then see that actor as representing one user and let it interact with other actors. You could let it register with an actor that represents a chat room, for example, and then let messages to that room be sent to all registered participant actors.
Each actor in itself takes very little memory, so whether your app will be able to handle more than 1000 users is more about the rest of your use case: how many messages are sent, how big the messages are, etc.
There are some code samples in the docs with websockets+actors: http://www.playframework.com/documentation/2.3.x/ScalaWebSockets
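As a rough sketch of the actor approach in Play 2.3 (WebSocket.acceptWithActor is from the linked docs; the ChatRoom/ClientActor names and the message protocol are made up):

import akka.actor._
import play.api.Play.current
import play.api.libs.concurrent.Akka
import play.api.libs.json._
import play.api.mvc._

// One ChatRoom actor per room; it keeps track of the registered client actors.
class ChatRoom extends Actor {
  var members = Set.empty[ActorRef]

  def receive = {
    case ("join", who: ActorRef)  => members += who
    case ("leave", who: ActorRef) => members -= who
    case msg: JsValue             => members.foreach(_ ! msg) // fan out to this room only
  }
}

// One ClientActor per websocket; `out` is supplied by Play and pushes to the browser.
class ClientActor(out: ActorRef, room: ActorRef) extends Actor {
  override def preStart() = room ! ("join", self)
  override def postStop() = room ! ("leave", self)

  def receive = {
    case msg: JsValue if sender() == room => out ! msg  // from the room: push to the client
    case msg: JsValue                     => room ! msg // from the client: forward to the room
  }
}

object ChatController extends Controller {
  val room = Akka.system.actorOf(Props[ChatRoom], "room")

  def chat = WebSocket.acceptWithActor[JsValue, JsValue] { request => out =>
    Props(new ClientActor(out, room))
  }
}

This way the "filtering" falls out of the actor topology: each room actor only ever pushes to the client actors registered with it, so no message parsing is needed on the client.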
The Situation:
I would like to ask what the best logic is for synchronizing objects in a multiplayer 1:1 game using Bluetooth or a web server. The game has two players, each of them has multiple guns & bullets, the bullets are created dynamically and disappear after a while, and the players may move objects around simultaneously.
The Problem:
I have a real issue with synchronization, since the bullets on one device may be faster than on the other; also, they may have already gone or hit an object on one device while on the other they are still in the air.
Possibilities?
What is the best way of handling synchronization in this case? Should all the objects be controlled by one device acting as the server, while the other just gets the values and positions and does very little thinking? Or should control be distributed, where each device creates, destroys and moves its own objects and then tells the other device through synchronization?
What is the best way to handle transmission delay here, since Bluetooth might be faster than playing over the web?
The best would be a working sample - thanks very much!
You seem to have started on some good ideas about synchronization, but it's possible there are two problems you are running into that are getting overlapped: the synchronization of game clocks and the synchronization of gamestate.
(1) synchronizing game clocks
you need some representation of 'game time' for your game. for a 2 player game it is very reasonable to simply declare one the authority.
so on the authoritative client:
OnUpdate()
    gameTime = GetClockTime();
    msg.gameTime = gameTime;
    SendGameTimeMessage(msg);
on the other client it might be something like:
OnReceiveGameTimeMessage(msg)
    lastGameTimeFromNetwork = msg.gameTime;
    lastClockTimeOfGameTimeMessage = GetClockTime();

OnUpdate()
    gameTime = lastGameTimeFromNetwork + GetClockTime() - lastClockTimeOfGameTimeMessage;
there are complications like skipping/slipping (i.e. getting times from over the network that go forward/backward too much) that require further work, but hopefully you get the idea. follow up with another question if you need.
note: this example doesn't differentiate 'ticks' vs 'seconds', nor is it tied to your network protocol or the type of device your game is running on (save the requirement 'the device has a local clock').
(2) synchronizing gamestate
after you have a consistent game clock, you still need to work out how to consistently simulate and propagate your gamestate. for synchronizing gamestate you have a few choices:
asynchronous
each unit of gamestate is 'owned' by one process. only that process is allowed to change that gamestate. those changes are propagated to all other processes.
if everything is owned by a single process, this is often called a 'client/server' game.
note, with this model each client has a different view of the game world at any time.
example games: quake, world of warcraft
to optimize bandwidth and hide latency, you can often do some local simulation for fields with a high update frequency. example:
drawPosition = lastSyncPosition + (currentTime - lastSyncTime) * lastSyncVelocity
of course you then have to reconcile new information with your simulated version in this case.
synchronous
each unit of gamestate is identical in all processes.
commands from each process are propagated to each other with their desired initiation time (sometime in the future).
in its simplest form, one process (often called the host) sends special messages indicating when to advance the game time. when everyone receives that message they are allowed to simulate the game up to that point.
the 'in the future' requirement leads to high latency between input command and gamestate change.
in non-real-time games like civilization, this is fine. in a game like starcraft, normally the sound acknowledging the input comes immediately, but the actual gamestate-affecting action is delayed. this style is not appropriate for games like shooters that require time-sensitive actions (on the ~100ms scale).
synchronous with resimulation
each unit of gamestate is identical in all processes.
each process sends all other processes its input with its current timestamp. additionally a 'nothing happened' message is periodically sent.
each process has 2 copies of the gamestate.
copy 1 of the gamestate is propagated to the 'last earliest message' it has received from all other clients. this is equivalent to the synchronous model, but has the weakness that it represents a gamestate from 'a little bit ago'.
copy 2 of the gamestate is copy 1 plus all the remaining messages. it is a prediction of what the gamestate is at the current time on the client, assuming nothing new happens.
the player interacts with some combination of the two gamestates (ideally 100% copy 2, but some consideration must be taken to avoid pops as new messages come in).
example games: street fighter 4 (internet play)
from your description, options (1) and (3) seem to fit your problem. again if you have further questions or require more detail, ask a follow up.
since the bullets on one device may be faster than other
This should not happen if the game has been architected properly.
Most games these days (particularly multiplayer ones) work on ticks - small timeslices. Each system should get the exact same result when it computes what happened during a tick - no "bullets moving faster on one machine than they do on another".
Then it's a much simpler matter of making sure each system gets the same inputs for each player (you'll need to broadcast each player's input to each other player, along with the tick the input was registered during), and making sure that each system calculates ticks at the same rate.
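A rough sketch of that tick/lockstep bookkeeping (engine-agnostic C#; all names here are made up, not from any particular library):

using System;
using System.Collections.Generic;

// Rough deterministic-lockstep sketch: every peer runs this same code and only
// simulates tick N once it has the inputs of *all* players for tick N, so each
// machine computes identical results (no "faster bullets" on one device).
public struct PlayerInput { public bool Fire; public float Steering; }

public class LockstepLoop
{
    public const float TickLength = 0.05f;              // 20 ticks per second
    readonly int playerCount;
    readonly int localPlayerId;
    readonly Action<int, PlayerInput> broadcastInput;    // however you ship it (Bluetooth / web)
    readonly Dictionary<int, Dictionary<int, PlayerInput>> inputsByTick =
        new Dictionary<int, Dictionary<int, PlayerInput>>();
    int currentTick;

    public LockstepLoop(int playerCount, int localPlayerId, Action<int, PlayerInput> broadcastInput)
    {
        this.playerCount = playerCount;
        this.localPlayerId = localPlayerId;
        this.broadcastInput = broadcastInput;
    }

    // Called with the local player's input for the current tick.
    public void OnLocalInput(PlayerInput input)
    {
        Store(currentTick, localPlayerId, input);
        broadcastInput(currentTick, input);              // tell the other device(s)
    }

    // Called when another player's input arrives over the network.
    public void OnRemoteInput(int tick, int playerId, PlayerInput input)
    {
        Store(tick, playerId, input);
    }

    // Call every frame; advances only while every player's input for the tick is known.
    public void Update(Action<float, IReadOnlyDictionary<int, PlayerInput>> simulateTick)
    {
        Dictionary<int, PlayerInput> inputs;
        while (inputsByTick.TryGetValue(currentTick, out inputs) && inputs.Count == playerCount)
        {
            simulateTick(TickLength, inputs);            // identical result on every machine
            inputsByTick.Remove(currentTick);
            currentTick++;
        }
    }

    void Store(int tick, int playerId, PlayerInput input)
    {
        Dictionary<int, PlayerInput> perPlayer;
        if (!inputsByTick.TryGetValue(tick, out perPlayer))
        {
            perPlayer = new Dictionary<int, PlayerInput>();
            inputsByTick[tick] = perPlayer;
        }
        perPlayer[playerId] = input;
    }
}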