When listening for messages from a device, what is the unit of AbsoluteTime?

When listening for MidiEvents in NAudio from a MidiDevice, we get the long "AbsoluteTime" property on each event. But what unit is this time in and from what starting point is it measured?

In a MIDI file, each event has a delta in "ticks" since the last event. To make MIDI files easier to work with, NAudio keeps a running total, storing the value in AbsoluteTime. What this value means depends on the delta ticks per quarter note (a property on the MidiFile class) and the tempo (a MIDI file ought to include at least one TempoEvent).
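To make that arithmetic concrete, here is a minimal Python sketch of the tick-to-seconds conversion, assuming a single fixed tempo; the function name and the 120 BPM default are illustrative, not part of NAudio:

    # A MIDI tempo is expressed in microseconds per quarter note;
    # 500000 us/qn = 120 BPM, the default when no TempoEvent is present.
    def ticks_to_seconds(absolute_ticks, ticks_per_quarter_note, tempo=500000):
        return absolute_ticks * tempo / (ticks_per_quarter_note * 1_000_000)

    print(ticks_to_seconds(960, 480))  # 960 ticks at 480 tpqn, 120 BPM -> 1.0 s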
When listening for MIDI events from a device, the AbsoluteTime of the MidiEvent created will be 0. However, you can use the TimeStamp property of the MidiInMessageEventArgs, which I believe is in milliseconds since MidiInStart was called.

Related

What is the best way to handle obsolete information in a database with a Spring Boot application?

I am working on an application tracking objects detected by multiple sensors. I receive my inputs by consuming a Kafka message and I save the information in a PostgreSQL database.
An object is located in a specific location if its last scan was detected by a sensor in that exact location. For example:
Object last scanned by a sensor in room 1 -> the last known location for the object is room 1
Scans happen continuously, and we can set a frequency from a few seconds to a few minutes. So an object that wasn't scanned in the last hour, for example, needs to be considered out of range.
So my question now is: how can I design a system that generates some sort of notification when a device is out of range?
For example, if the timestamp of the last detection is more than 5 minutes old, it triggers a notification.
The only solution I can think of is to create a batch job that repeatedly checks for all objects whose last detection time is more than 5 minutes ago. But I am wondering if that is the right approach, and I would like to ask if there is a better way.
I use Kotlin and Spring Boot for my application.
Thank you for your help.
You would need some type of heartbeat mechanism, yes.
Query all detection events whose "last seen" timestamp is older than your threshold, and fire an alert when the returned result set is larger than some tolerable size (e.g. if you are willing to accept intermittently lost devices and expect them to be found in the next scan).
As for where/how to alert: that's up to you. Slack webhooks are a popular example, and Grafana can do alerting and query your database.
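A minimal sketch of that polling check, written in Python for illustration (in the asker's Kotlin app the same pattern maps to a Spring @Scheduled method); the detections table, its columns, and the webhook URL are all hypothetical:

    import json
    import sqlite3
    import time
    import urllib.request

    THRESHOLD_SECONDS = 5 * 60
    WEBHOOK_URL = "https://hooks.slack.com/services/XXX"  # hypothetical webhook

    def check_out_of_range(conn):
        # Find all objects whose last detection is older than the threshold.
        cutoff = time.time() - THRESHOLD_SECONDS
        rows = conn.execute(
            "SELECT object_id FROM detections WHERE last_seen < ?", (cutoff,)
        ).fetchall()
        if rows:  # alert only when something is actually stale
            body = json.dumps({"text": f"{len(rows)} object(s) out of range"}).encode()
            req = urllib.request.Request(
                WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
            )
            urllib.request.urlopen(req)

    # Run on a schedule: cron, or a @Scheduled(fixedDelay = ...) method in Spring.
    check_out_of_range(sqlite3.connect("tracking.db"))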

Handle Too Late data in Spark Streaming

A watermark allows late-arriving data to be considered for inclusion against already-computed results for a period of time, using windows. Its premise is that it tracks a point in time before which it is assumed no more late events will arrive; if they do arrive anyway, they are nonetheless discarded.
Is there a way to store the discarded data so that it can be used for reconciliation purposes later?
Say in my Structured Streaming job, I set the watermark to 1 hour.
I am doing a window operation every 10 minutes and received a late event 20 minutes late.
Is there a way I can store the discarded data say at a different location rather than discarding it?
No, there is no way to achieve this.
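For reference, a minimal PySpark sketch of the setup the question describes: a 1-hour watermark over 10-minute windows. Rows arriving more than an hour behind the maximum event time seen so far are silently dropped, and there is no hook to redirect them elsewhere:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, window

    spark = SparkSession.builder.appName("watermark-demo").getOrCreate()

    # Built-in test source producing (timestamp, value) rows.
    events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

    counts = (
        events
        .withWatermark("timestamp", "1 hour")             # events >1 h late are dropped
        .groupBy(window(col("timestamp"), "10 minutes"))  # 10-minute tumbling windows
        .count()
    )

    query = counts.writeStream.outputMode("update").format("console").start()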

How to handle duplicate note_on, note_off, and tempo changes in more than one track, and tracks without program_change, in a MIDI file?

I'm using Mido for Python, working on parsing MIDI files into <start_time, duration, program, pitch> tuples, and I've run into some problems.
Some files that I parse have multiple note_on messages, resulting in notes at the same pitch and same program being opened more than once.
Some files contain multiple note_off messages, resulting in attempts to close notes that are no longer on because they were closed before (assuming only one note at the same program and same pitch can be on).
Some tracks do not have a program_change at the beginning of the track (or, even worse, do not have one anywhere in the track).
Some files have more than one track containing set_tempo messages.
What should I do in each of these cases to ensure I get the correct interpretation?
In general, to get a correct MIDI message stream, you have to merge all tracks in a type 1 file. What matters for a synthesizer are not tracks, but channels.
The MIDI specification says:
ASSIGNMENT OF NOTE ON/OFF COMMANDS
If an instrument receives two or more Note On messages with the same key number and MIDI channel, it must make a determination of how to handle the additional Note Ons. It is up to the receiver as to whether the same voice or another voice will be sounded, or if the messages will be ignored. The transmitter, however, must send a corresponding Note Off message for every Note On sent. If the transmitter were to send only one Note Off message, and if the receiver in fact assigned the two Note On messages to different voices, then one note would linger. Since there is no harm or negative side effect in sending redundant Note Off messages this is the recommended practice.
The General MIDI System Level 1 Developer Guidelines say that in response to a “GM System On” message, a device should set Program Change to 0. So you can assume this to be the initial value for channels that have notes without a preceding Program Change.
The Standard MIDI Files specification says that
tempo information should always be stored in the first MTrk chunk.
But "should" is not "must".
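Putting those rules together, here is one possible Mido sketch, not a definitive implementation: it merges all tracks, assumes program 0 until a Program Change arrives, treats note_on with velocity 0 as note_off, closes an already-open note when a duplicate note_on arrives, and ignores redundant note_offs. The function name and tuple layout simply follow the question:

    import mido

    def parse(path):
        mid = mido.MidiFile(path)
        merged = mido.merge_tracks(mid.tracks)  # one stream; channels matter, not tracks
        program = {ch: 0 for ch in range(16)}   # GM default: program 0 on every channel
        open_notes = {}                         # (channel, pitch) -> (start_tick, program)
        tuples, tick = [], 0
        for msg in merged:
            tick += msg.time                    # accumulate delta ticks
            if msg.type == 'program_change':
                program[msg.channel] = msg.program
            elif msg.type == 'note_on' and msg.velocity > 0:
                key = (msg.channel, msg.note)
                if key in open_notes:           # duplicate note_on: close the old note
                    start, prog = open_notes.pop(key)
                    tuples.append((start, tick - start, prog, msg.note))
                open_notes[key] = (tick, program[msg.channel])
            elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
                key = (msg.channel, msg.note)
                if key in open_notes:           # a redundant note_off is simply ignored
                    start, prog = open_notes.pop(key)
                    tuples.append((start, tick - start, prog, msg.note))
        return tuples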

When does a zero-time MIDI event trigger?

I'm reading a MIDI file and I'm having trouble determining when consecutive events trigger.
Let's say I have a MIDI file that has a track like this (where T=n is the delta time):
[T=0: Note On, C4] [T=128: Note Off, C4] [T=0: Note On, D4] [T=128: Note Off, D4]
Does the second Note On (D4) take place at the EXACT same time/tick as the previous Note Off (C4)? Or do you trigger it on the next tick?
In theory, the two events happen at the same time.
In practice, events need a certain time to be sent over MIDI (about one millisecond for three bytes), but the second event will be sent as soon as possible after the first one.
When no actual MIDI cable is involved, the events actually could take effect at the same time.
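For reference, the roughly one-millisecond figure falls straight out of the MIDI wire speed; a quick back-of-the-envelope check in Python:

    # MIDI runs at 31250 baud; each byte on the wire is 10 bits
    # (1 start bit + 8 data bits + 1 stop bit).
    message_bytes = 3                     # e.g. Note On: status, key, velocity
    print(message_bytes * 10 / 31250)     # 0.00096 s, i.e. ~1 ms per message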
All events happen on a tick. However, they're sent out over the MIDI cable one at a time since MIDI is both a serial protocol and serial hardware. This became a problem with devices that sent out huge numbers of controller change messages, originally like the MIDI guitar controllers. They simply sent out more MIDI messages per second than the cable could transmit.
On alternate transports, like USB, those events can happen closer together, but because they are serial, they must still happen one after the other. That time frame may be indistinguishable (we hope), but there will always be a tiny lag.
For them to happen at the "same" time, you must either a) buffer them or b) make them happen in different places, as with parallel players, which still leaves you with a delay in syncing.

Google Measurement Protocol offline apps and event dates

I want to use Google Measurement Protocol to record offline events, i.e. take data from an EPOS system and track them in Google Analytics. This would be a batch process once a day. How do I tell Google what the date of the event is? If the console app went offline for a few days I wouldn't want three days worth of events to be associated with one day.
Your best bet currently is to use the Queue Time Measurement Protocol parameter.
v=1&tid=UA-123456-1&cid=5555&t=pageview&dp=%2FpageA&qt=343
Queue Time is used to collect offline / latent hits. The value represents the time delta (in milliseconds) between when the hit being reported occurred and the time the hit was sent. The value must be greater than or equal to 0. Values greater than four hours may lead to hits not being processed.
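A minimal Python sketch of sending such a latent hit, computing qt from a stored event timestamp; the property ID, client ID, and page path are the placeholder values from the example hit above:

    import time
    import urllib.parse
    import urllib.request

    def send_offline_hit(event_time_ms, tid="UA-123456-1", cid="5555"):
        # qt = milliseconds between when the hit occurred and when it is sent.
        # Values over four hours may lead to the hit not being processed.
        qt = int(time.time() * 1000) - event_time_ms
        params = {"v": 1, "tid": tid, "cid": cid,
                  "t": "pageview", "dp": "/pageA", "qt": qt}
        data = urllib.parse.urlencode(params).encode()
        urllib.request.urlopen("https://www.google-analytics.com/collect", data=data)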