When does a zero-time MIDI event trigger?

I'm reading a MIDI file and I'm having trouble determining when the next event triggers.
Let's say I have a MIDI file with a track like this (where T=n is the delta time):
[T=0: Note On, C4] [T=128: Note Off, C4] [T=0: Note On, D4] [T=128: Note Off, D4]
Does the second Note On (D4) take place at the EXACT same time/tick as the previous Note Off (C4)? Or do you trigger it on the next tick?

In theory, the two events happen at the same time.
In practice, events need a certain time to be sent over MIDI (about one millisecond for three bytes), but the second event will be sent as soon as possible after the first one.
When no actual MIDI cable is involved, the events can indeed take effect at the same time.

All events happen on a tick. However, they're sent out over the MIDI cable one at a time, since MIDI is both a serial protocol and serial hardware. This became a problem with devices that sent out huge numbers of controller-change messages, such as the early MIDI guitar controllers: they simply sent out more MIDI messages per second than the cable could transmit.
On an alternate transport like USB, those events can happen closer together, but because the transport is still serial, they must still happen one after the other. That time frame may be indistinguishable (we hope), but there will always be a tiny lag.
For them to happen at the "same" time, you must either a) buffer or b) make them happen in different places, as with parallel players, which still leaves you with a delay in syncing.
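If it helps to see it concretely, here is a minimal sketch using Mido in Python that accumulates the delta times into absolute ticks; the file name is hypothetical, and it assumes the common C4 = note 60 convention:

import mido  # pip install mido

# Hypothetical single-track file containing the track from the question.
mid = mido.MidiFile("example.mid")

absolute_tick = 0
for msg in mid.tracks[0]:
    absolute_tick += msg.time  # msg.time is the delta time in ticks
    if msg.type in ("note_on", "note_off"):
        print(absolute_tick, msg.type, msg.note)

# Expected output for the track above:
#   0    note_on   60   (C4 on)
#   128  note_off  60   (C4 off)
#   128  note_on   62   (D4 on)  <- same tick as the C4 note_off
#   256  note_off  62   (D4 off)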

Related

What is the correct operation of a CANopen inhibit timer?

I understand that the operation of a CANopen inhibit timer is to ensure a minimum time between successive transmissions of the same message, but the specification does not make it clear what to do if the data changes during the inhibit time (and the transmission is on change-of-state). Should I buffer the data and transmit it when the inhibit timer expires, or discard it and wait for a change after the timer has expired?
My assumption would be that, since it is not clearly defined, I can choose whichever approach I want, but I'd appreciate the input of any experienced architects/developers on this.
Thanks.
You're correct that the inhibit time is simply the minimum time between consecutive CAN frames with the same CAN-ID. The standard does not specify the behavior for multiple events during the inhibit time window, because it depends on the situation.
For services like NMT, EMCY and perhaps LSS, you'd want to buffer the messages and send them later. In this case the inhibit time is simply a means to help slow (or badly programmed) devices to handle short bursts of messages. I've seen devices that could only handle 3 CAN frames at once, so it's often necessary, but you would not want them to miss messages.
For event-driven Transmit-PDOs, it depends on what the PDO represents. If you use it to track state, it might make sense to drop events during the inhibit window. They're invalidated by subsequent events anyway. To ensure you always emit the latest state, you can store the most recent event and transmit it once the inhibit time has elapsed, or use the event-timer to ensure you're never too far behind. I've used this strategy in the past for analog inputs where line noise would sometimes cause event bursts.
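To make that concrete, here is a minimal sketch (plain Python, not taken from any particular CANopen stack) of the "keep only the latest state" strategy; send_frame and the 10 ms inhibit time are made-up placeholders:

import time

INHIBIT_TIME = 0.010  # example value: 10 ms

class InhibitedTpdo:
    # Event-driven TPDO that keeps only the latest state during the inhibit window.

    def __init__(self, send_frame):
        self.send_frame = send_frame   # hypothetical transmit callback
        self.last_tx = -INHIBIT_TIME
        self.pending = None            # most recent state seen during the inhibit time

    def on_event(self, data):
        now = time.monotonic()
        if now - self.last_tx >= INHIBIT_TIME:
            self.send_frame(data)
            self.last_tx = now
            self.pending = None
        else:
            # Inhibit window still running: remember only the latest state.
            self.pending = data

    def poll(self):
        # Call periodically; flushes the latched state once the inhibit time expires.
        now = time.monotonic()
        if self.pending is not None and now - self.last_tx >= INHIBIT_TIME:
            self.send_frame(self.pending)
            self.last_tx = now
            self.pending = None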
If you use PDOs to track events (or state changes), you'd be better off buffering them so no events get lost. However, this can introduce potentially unbounded delays if the event period is shorter than the inhibit time.
For the products we're working on at Lely (dairy farm robots), we actually prefer to use SYNC-driven PDOs instead. It results in a much more predictable CAN bus load. And we don't have to track state at the receiver side because we receive a full update on every SYNC. However, the receiver is always one SYNC period behind the transmitter, so this may not be appropriate for your use case.

How to handle duplicate note_on, note_off, and tempo changes in more than one track, and tracks without program_change, in a MIDI file?

I'm using Mido for Python to parse MIDI files into <start_time, duration, program, pitch> tuples, and I've run into some problems.
Some files that I parse have multiple note_on messages, resulting in notes at the same pitch and same program being opened more than once.
Some files contain multiple note_off messages, resulting in attempts to close notes that are no longer on because they were already closed (assuming only one note at the same program and same pitch can be on).
Some tracks do not have a program_change at the beginning of the track (or, even worse, none anywhere in the track).
Some files have more than one track containing set_tempo messages.
What should I do in each of these cases to ensure I get the correct interpretation?
In general, to get a correct MIDI message stream, you have to merge all tracks in a type 1 file. What matters for a synthesizer are not tracks, but channels.
The MIDI specification says:
ASSIGNMENT OF NOTE ON/OFF COMMANDS
If an instrument receives two or more Note On messages with the same key number and MIDI channel, it must make a determination of how to handle the additional Note Ons. It is up to the receiver as to whether the same voice or another voice will be sounded, or if the messages will be ignored. The transmitter, however, must send a corresponding Note Off message for every Note On sent. If the transmitter were to send only one Note Off message, and if the receiver in fact assigned the two Note On messages to different voices, then one note would linger. Since there is no harm or negative side effect in sending redundant Note Off messages this is the recommended practice.
The General MIDI System Level 1 Developer Guidelines say that in response to a “GM System On” message, a device should set Program Change to 0. So you can assume this to be the initial value for channels that have notes without a preceding Program Change.
The Standard MIDI Files specification says that
tempo information should always be stored in the first MTrk chunk.
But "should" is not "must".

Custom Messages with Veins (OMNeT++, SUMO, Veins traffic simulation)

I am using the latest version of Veins. I have been playing with it for a while and understand the basics now. I followed the TicToc tutorial for OMNeT++, but I still couldn't figure out how to solve the following problem:
I want vehicles and the RSU to send messages to each other. I want these messages to be sent in all four categories. When a message is received, I want to measure the time it took to travel from source to destination.
By default, Veins can send data, and based on this post, I know that I have to change some parts of TraCIDemo11p, but I couldn't figure out what. It would be great if someone could provide an answer.
To answer my own question: I modified BaseWaveAppLayer.cc to accomplish my goal (though this is not the right way to do it; the right way would be to extend this class and make your changes in the subclass, but since I just wanted to make changes quickly, I chose the quicker way). I modified the method for sending beacons, since beacons are scheduled to be sent based on the interval the user specifies in the .ini file. Now every time a beacon is scheduled to be sent, I randomly generate a priority from the range [0-4) and assign it to the packet. This way I get to send beacons with different priorities over the network.
I also had a requirement of sending each priority at a different rate. To achieve this, I implemented the random generation function so that certain numbers in the range are generated more often than others; it's biased. As an example, in the .ini file I would specify that priorities 0-2 should be sent at a rate of 0.2 while priority 4 should be sent at a rate of 0.4 (it can be interpreted as the sending rate for each priority). The random generation function would then generate 4 twice as often as any other number, while 0, 1, and 2 would be generated equally often.
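For illustration, here is a small sketch of that kind of biased selection (in Python rather than the C++ you'd use in Veins); the priorities and weights are made-up example values, not the ones from my .ini file:

import random

# Hypothetical per-priority sending rates, e.g. read from an .ini-style config.
rates = {0: 0.2, 1: 0.2, 2: 0.2, 3: 0.4}
priorities = list(rates.keys())
weights = list(rates.values())

def pick_priority():
    # Draw one priority with probability proportional to its weight, so the
    # priority with weight 0.4 comes up twice as often as each of the others.
    return random.choices(priorities, weights=weights, k=1)[0]

# Assign a priority to each scheduled beacon:
for _ in range(5):
    print(pick_priority())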

When listening for messages from a device, what is the unit of AbsoluteTime?

When listening for MidiEvents in NAudio from a MidiDevice, we get the long "AbsoluteTime" property on each event. But what unit is this time in and from what starting point is it measured?
In a MIDI file, each event has a delta in "ticks" since the last event. To make MIDI files easier to work with, NAudio keeps a running total, storing the value in AbsoluteTime. The meaning of this depends on delta ticks per quarter note (which is a property on the MidiFile class), and the tempo (MIDI files ought to include at least one TempoEvent).
When listening for MIDI events from a device, the AbsoluteTime of the MIDI Event created will be 0. However, you can use the TimeStamp property of the MidiInMessageEventArgs which I believe is in milliseconds since MidiInStart was called.
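For converting file-based AbsoluteTime ticks into seconds, a rough sketch (shown in Python for brevity; the names are made up, and it assumes a single tempo for the whole file) would be:

def ticks_to_seconds(absolute_ticks, ticks_per_quarter_note, tempo_us_per_quarter=500000):
    # 500000 microseconds per quarter note is the MIDI default (120 BPM)
    # when no tempo event is present.
    seconds_per_tick = tempo_us_per_quarter / 1_000_000 / ticks_per_quarter_note
    return absolute_ticks * seconds_per_tick

# Example: 960 ticks at 480 ticks per quarter note and 120 BPM -> 1.0 second
print(ticks_to_seconds(960, 480))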

Multiplayer Game Synchronization

The Situation:
I would like to ask what's the best logic for synchronizing objects in a multiplayer 1:1 game using BT or a web server. The game has two players, each of them has multiple guns & bullets, the bullets are created dynamically and disappear after a while, and the players may move objects around simultaneously.
The Problem:
I have a real issue with synchronization, since the bullets on one device may be faster than on the other; they may also have already gone or hit an object on one device while on the other they're still in the air.
Possibilities?
What is the best way of handling synchronization in this case? Should all the objects be controlled by one device acting as the server, while the other just gets the values and positions and does very little thinking? Or should control be distributed, where each device creates, destroys, and moves its own objects and then tells the other device through synchronization?
What is the best way to handle transmission delay in this, since BT might be faster than playing over the web?
The best would be a working sample - thanks very much!
You seem to have started on some good ideas about synchronization, but it's possible that two problems you are running into are getting overlapped: the synchronization of game clocks and the synchronization of gamestate.
(1) synchronizing game clocks
you need some representation of 'game time' for your game. for a 2 player game it is very reasonable to simply declare one the authority.
so on the authoritative client:
OnUpdate()
gameTime = GetClockTime();
msg.gameTime = gameTime
SendGameTimeMessage(msg);
on the other client might be something like:
OnReceiveGameTimeMessage(msg)
lastGameTimeFromNetwork = msg.gameTime;
lastClockTimeOfGameTimeMessage = GetClockTime();
OnUpdate()
gameTime = lastGameTimeFromNetwork + GetClockTime() - lastClockTimeOfGameTimeMessage;
there are complications like skipping/slipping (ie getting times from over the network that go forward/backward too much) that require further work, but hopefully you get the idea. follow up with another question if you need.
note: this example doesn't differentiate 'ticks' vs 'seconds', nor is it tied to your network protocol or the type of device your game is running on (save the requirement 'the device has a local clock').
(2) synchronizing gamestate
after you have a consistent game clock, you still need to work out how to consistently simulate and propagate your gamestate. for synchronizing gamestate you have a few choices:
asynchronous
each unit of gamestate is 'owned' by one process. only that process is allowed to change that gamestate. those changes are propagated to all other processes.
if everything is owned by a single process, this is often called a 'client/server' game.
note, with this model each client has a different view of the game world at any time.
example games: quake, world of warcraft
to optimize bandwidth and hide latency, you can often do some local simulation for fields with a high update frequency. example:
drawPosition = lastSyncPosition + (currentTime - lastSyncTime) * lastSyncVelocity
of course you then have to reconcile new information with your simulated version in this case.
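a tiny sketch of that extrapolation plus one simple way to reconcile (blend toward the newly received position instead of snapping); the names are illustrative, not from any particular engine:

def extrapolate(last_sync_position, last_sync_velocity, last_sync_time, current_time):
    # dead reckoning: assume the object kept moving at its last known velocity
    return last_sync_position + (current_time - last_sync_time) * last_sync_velocity

def reconcile(draw_position, authoritative_position, blend=0.1):
    # when a new update arrives, ease the drawn position toward it instead of
    # snapping, which hides small corrections from the player
    return draw_position + (authoritative_position - draw_position) * blend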
synchronous
each unit of gamestate is identical in all processes.
commands from each process are propagated to each other with their desired initiation time (sometime in the future).
in its simplest form, one process (often called the host) sends special messages indicating when to advance the game time. when everyone receives that message they are allowed to simulate the game up to that point.
the 'in the future' requirement leads to high latency between input command and gamestate change.
in non-real-time games like civilization, this is fine. in a game like starcraft, normally the sound acknowledging the input comes immediately, but the actual gamestate-affecting action is delayed. this style is not appropriate for games like shooters that require time-sensitive actions (on the ~100ms scale).
synchronous with resimulation
each unit of gamestate is identical in all processes.
each process sends all other processes its input with its current timestamp. additionally a 'nothing happened' message is periodically sent.
each process has 2 copies of the gamestate.
copy 1 of the gamestate is propagated to the 'last earliest message' it has received from all other clients. this is equivalent to the synchronous model, but has the weakness that it represents a gamestate from 'a little bit ago'
copy 2 of the gamestate is copy 1 plus all the remaining messages. it is a prediction of the gamestate at the current time on the client, assuming nothing new happens.
the player interacts with some combination of the two gamestates (ideally 100% copy 2, but some consideration must be taken to avoid pops as new messages come in)
example games: street fighter 4 (internet play)
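a rough sketch of the two-copy bookkeeping, assuming deterministic simulate/apply_input functions that you supply; it glosses over serialization, clock handling and input ordering:

import copy

class ResimulatingState:
    # keeps a confirmed gamestate plus a predicted copy built from unconfirmed inputs

    def __init__(self, initial_state, peer_ids):
        self.confirmed = initial_state               # copy 1: advanced only up to confirmed time
        self.confirmed_time = 0
        self.latest_from = {p: 0 for p in peer_ids}  # newest timestamp seen per peer
        self.pending = []                            # (timestamp, peer, input) not yet confirmed

    def on_message(self, timestamp, peer, player_input):
        self.latest_from[peer] = max(self.latest_from[peer], timestamp)
        if player_input is not None:                 # 'nothing happened' messages carry no input
            self.pending.append((timestamp, peer, player_input))

    def advance(self, simulate, apply_input, current_time):
        # copy 1: everything up to the earliest "latest message" from all peers is final
        confirmable = min(self.latest_from.values())
        ready = sorted((m for m in self.pending if m[0] <= confirmable),
                       key=lambda m: (m[0], m[1]))
        self.pending = [m for m in self.pending if m[0] > confirmable]
        for timestamp, peer, player_input in ready:
            simulate(self.confirmed, timestamp - self.confirmed_time)
            apply_input(self.confirmed, peer, player_input)
            self.confirmed_time = timestamp
        if confirmable > self.confirmed_time:
            simulate(self.confirmed, confirmable - self.confirmed_time)
            self.confirmed_time = confirmable

        # copy 2: prediction = copy 1 plus all remaining inputs, run up to 'now'
        predicted = copy.deepcopy(self.confirmed)
        t = self.confirmed_time
        for timestamp, peer, player_input in sorted(self.pending, key=lambda m: (m[0], m[1])):
            simulate(predicted, timestamp - t)
            apply_input(predicted, peer, player_input)
            t = timestamp
        simulate(predicted, current_time - t)
        return predicted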
from your description, options (1) and (3) seem to fit your problem. again if you have further questions or require more detail, ask a follow up.
since the bullets on one device may be faster than on the other
This should not happen if the game has been architected properly.
Most games these days (particularly multiplayer ones) work on ticks - small timeslices. Each system should get the exact same result when it computes what happened during a tick - no "bullets moving faster on one machine than they do on another".
Then it's a much simpler matter of making sure each system gets the same inputs for each player (you'll need to broadcast each player's input to each other player, along with the tick the input was registered during), and making sure that each system calculates ticks at the same rate.
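A minimal sketch of that tick model, assuming a deterministic step and that each machine has already received every player's input for the tick (the function names are illustrative):

TICK_RATE = 30                  # fixed number of simulation ticks per second
TICK_DT = 1.0 / TICK_RATE       # every machine advances by exactly this timestep

def run_tick(state, inputs_by_player, apply_input, advance):
    # Every machine calls this with the same inputs for the tick (its own plus the
    # ones broadcast by the other players). As long as apply_input and advance are
    # deterministic, every machine computes the exact same result, so no bullet
    # can move faster on one device than on another.
    for player_id in sorted(inputs_by_player):   # fixed order keeps it deterministic
        apply_input(state, player_id, inputs_by_player[player_id])
    advance(state, TICK_DT)
    return state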