How to find how long notes are in MIDI

I have one file that is 4/4 time, has 24 clocks per click, and 8 32nd notes per beat. I have no other information that could plausibly relate to the tempo. By experimentation, I think each tick (or whatever measurement delta time uses) is around one millisecond.
I have another file that is also 4/4 time, has 24 clocks per click, and 8 32nd notes per beat. It also has a tempo of 500000, which from what I can find is the default. By experimentation, each tick is about one 380th of a second.
Googling around was not helpful. I keep finding things that talk about pulses per quarter note, which would be great if that were one of the numbers in the MIDI file, and they convert it to beats per minute, which isn't what I need (though I do at least know what that means).
Is there an equation I can use to find how long a tick is using only numbers that are actually given in the MIDI file?
I'm using Mido to read the MIDI files, if that matters. Neither file has messages that fail to parse which could plausibly contain the missing tempo information.
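For reference, this is roughly how I'm reading the files (a minimal sketch; 'song.mid' stands in for my actual files):

    import mido

    mid = mido.MidiFile('song.mid')   # placeholder filename
    for track in mid.tracks:
        for msg in track:
            # msg.time is the delta time in ticks -- the value I am
            # trying to convert into seconds
            print(msg.time, msg)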

How to persist previous data point when time range doesn't include a data point

TL;DR:
Can I get Grafana to show me the previous data point when the currently selected time period does not contain a data point? Here is an example that sounds ridiculous but is at least simple to understand: I send data every 1 minute, and I wish to zoom into the last 30 seconds and still see data. You may ask "why not just zoom out to 2 minutes", but the reason is that other data on the same graph updates more often, and I wish to compare against that data. There are also the lengthier reasons below.
If not, how can I achieve what I'm after? See the context below.
Context
For a few years, I have been monitoring the water level in three of our basement sumps (which have pumps installed) by sending this data from Node-RED to InfluxDB, then visualising the sump levels in Grafana. I have set up three waterproof ultrasonic distance sensors, each pointed down a pipe that is inserted vertically into each sump. The water fills the pipe and the distance sensor, connected to an Arduino, sends me the reading. The Arduino also has other sensors connected (temp / humidity) and deals with distance calibrations to calculate the percent full of each sump. All this data is sent to Node-RED. In total, I am sending 4 values per sump: distance measurement in mm, percent full, temp, humidity. So that's 12 fields. Data is sent every 2 seconds, because I wished to have a reasonably high resolution to see nice curves in graphs.
I also decided to store all this data so that I could later troubleshoot issues (we have had sewage floods where the water could not be pumped away, etc.) and design some warning systems for these issues based on the data.
Storing 12 values every 2 seconds, over the course of a number of years, takes up a lot of space (8 GB).
Nature of the data
Storing data at this resolution has also helped me describe the nature of the data, which I will do here.
(1) Non-meaningful NOISE - the percent-full reading goes up and down by 1 or 2 percent every couple of seconds.
(2) Meaningful DRIFT - I don't mean sensor drift; I am referring to actual water levels changing slowly over time, e.g. over 1 day or 1 week. Perhaps condensation on the walls drips down into the sump, or water evaporates from the sump, and the value can waver by a few percent over the course of a day. Each sump has slightly different characteristics.
(3) Meaningful MONITORING DATA - during wet weather, depending on rainfall amount, the sumps fill up over the course of 30 minutes to 3 hours. Then the pumps run, the water level drops again, wavers a bit, and the sumps continue to fill up. When the rain stops, you can see a lovely curve as the water fills progressively more slowly.
Solution to downsample
I know Influx has its own downsampling possibilities; however, because of the nature of the data (it can barely vary for 2 months, but when it does I really need to capture the change in detail), I don't think lowering the sample rate is a great idea.
I have some understanding of digital filters (e.g. low-pass) but have never programmed one myself. So I have written a basic filter in JavaScript (a Node-RED function) to filter the data in real time as follows: only send a reading when it has changed from the previously sent one by x amount (and update the stored previous value when that occurs).
This has already vastly reduced the amount of data being stored, and I can vary x to filter out the noise described in (1) above, at the expense of resolution when the pumps run. Even with x set to 2, it still vastly reduces the data over long periods of dry weather.
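The logic is essentially this (a minimal sketch of the idea in Python rather than my actual Node-RED JavaScript; the names are mine):

    # Deadband filter: pass a reading through only when it differs from
    # the last *sent* value by at least `threshold` (the x above).
    class DeadbandFilter:
        def __init__(self, threshold):
            self.threshold = threshold
            self.last_sent = None

        def filter(self, reading):
            # Always send the first reading; afterwards, send only on a
            # change of at least `threshold`, and update the reference.
            if self.last_sent is None or abs(reading - self.last_sent) >= self.threshold:
                self.last_sent = reading
                return reading
            return None   # suppressed: change too small to store

    # With threshold 2, +/-1% noise is suppressed but real moves pass:
    f = DeadbandFilter(2)
    for r in [50, 51, 50, 49, 53, 54, 60]:
        print(r, '->', f.filter(r))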
So - onto my problem! Data is now not logged to InfluxDB unless there is some meaningful change, which means that when I zoom in to e.g. a 15-minute timeframe, there is nothing to see.
Grafana does have the option of "fill (previous)", but this draws a line between points on the existing graph rather than showing the previous data point as if the value hasn't changed since then. Now my Grafana dashboard looks a bit sad :(
One proposed solution is, in addition to sending the "delta" data, to send "summary" data - that is, a full suite of data every 1 minute regardless of whether anything changed. But then we get the noise back again, plus pointless storage.
Any other ideas?

STM32 ADC: leave it running at 'high' speed or switch it off as much as possible?

I am using a G0 with one ADC and 8 channels; it works fine. I use 4 channels. One is a temperature signal that is measured constantly, and I am interested in its value every 60 s. Another one is almost the opposite: it measures sound waves for a couple of minutes per day, and I need those samples at 10 kHz.
I solved this by letting all 4 channels sample at 10 kHz and having the four readings moved to memory by DMA (an array of length 4 with one measurement each). Every 60 s I take the temperature, and when I need the audio, I retrieve the audio values.
If I had two ADCs, I would run the temperature ADC non-stop, with one conversion every 60 s, and I would only start the audio ADC for the couple of minutes a day that it is needed. But with the one-ADC solution, it seems simple to let all conversions run at this high speed continuously, and that raised my question: is there any true downside to having 40,000 conversions per second, 24 hours per day? If not, the code is simple: I just have the most recent values in memory all the time. But maybe I ruin the chip? I know I use too much energy, but there is plenty of it in this case.
You aren't going to "wear it out" by running it when you don't need to.
The main problems are wasting power and RAM.
If you have enough of both, then the lesser problems are:
The wasted power becomes heat, which may upset your temperature measurements (though the amount is very small).
Having the DMA running will increase your interrupt latency and may also slow down the processor slightly if it encounters bus contention (this only matters if you are close to capacity in these respects).
Having it running all the time may also have the advantage of more stable readings, since nothing is perturbed by turning things on and off.

ALSA tempo vs. PPQ

Recently I've been playing with the Haskell ALSA interface, and I noticed that I do not really understand the concepts of tempo and PPQ.
Earlier I wrote a SWIG-Python interface to ALSA, and in it I find the following piece of code (probably copied from somewhere else):
void AlsaMidiIfc::setTempo (int bpm) {
    int queue = this->getOutQueue();
    snd_seq_queue_tempo_t *tempo;
    snd_seq_queue_tempo_alloca (&tempo);
    snd_seq_queue_tempo_set_tempo (tempo, 60 * 1000000 / bpm);
    snd_seq_queue_tempo_set_ppq (tempo, PPQ);
    snd_seq_set_queue_tempo (mySeq, queue, tempo);
}
When I put an event into a queue, the time is always specified in ticks, right? So the only timing question to answer is "how long is a tick?".
What is the point of specifying two values, i.e. tempo and PPQ?
What would be the effect of changing the tempo, but leaving PPQ as it is?
If I don't set PPQ at all, but only the tempo, what would be the result?
Standard MIDI Files use these same two values (tempo and PPQ) to specify timing; the ALSA sequencer just uses the same mechanism.
The tempo value is the number of microseconds per quarter note, and PPQ is the number of ticks (pulses) per quarter note, so one tick lasts tempo/PPQ microseconds.
Increasing the tempo value will increase the length of a tick, i.e., make playback slower. Changing PPQ instead changes the timing resolution: a larger PPQ makes ticks shorter without affecting the musical tempo.
A PPQ value of zero would be invalid.
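As a worked example (the numbers are illustrative, not anything ALSA mandates), here is the arithmetic as a small Python sketch:

    # Illustrative values: tempo is in microseconds per quarter note,
    # PPQ is ticks per quarter note (both as in the ALSA snippet above).
    bpm = 120
    tempo = 60 * 1_000_000 // bpm   # 500000 us per quarter note
    ppq = 96                        # a common, but arbitrary, choice

    tick_us = tempo / ppq           # one tick = ~5208 us (~5.2 ms)

    # Doubling tempo (i.e. dropping to 60 bpm) doubles the tick length;
    # doubling PPQ instead halves it without changing the musical tempo.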

Tempo and time signatures from MIDI

I'm currently building software for displaying music notation from a MIDI file. I can get the letter name of every tone from the NoteOn and NoteOff events, but I don't know how to get or calculate the note types (whole, half, eighth, ...) and the time signatures.
How can I get this? I looked for some examples, but without success.
MIDI doesn't represent note durations as symbolic quantities the way classical notation does. Instead, a note lasts until the corresponding note-off event is parsed (it's also quite common for MIDI files to use a note-on event with velocity 0 as a note-off; just keep this in mind). So basically you will need to translate the time in ticks between the two events into musical time to know whether to use a whole, half, quarter note, etc.
This translation obviously depends on knowing the tempo and time signature, which are MIDI meta events. More information about parsing those can be found here:
http://www.sonicspot.com/guide/midifiles.html
Basically you take the tempo and PPQ together to find the number of milliseconds per tick (the tempo is in microseconds per quarter note, and PPQ is in ticks per quarter note); the tick distance between note-on and note-off then gives you the duration relative to a quarter note, and the time signature tells you how those durations group into beats and bars. There are some answers on StackOverflow with this conversion, but I'm writing this post on my phone and can't be bothered to look them up right now. :-)
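Off the top of my head, though, the core of that last step looks something like this (a rough sketch in Python; note_type is a hypothetical helper, and it assumes you already have the PPQ from the header and the tick distance between a note-on and its note-off):

    # Classify a note from its duration in ticks (rough sketch only).
    def note_type(duration_ticks, ppq):
        quarters = duration_ticks / ppq          # duration in quarter notes
        names = {4.0: 'whole', 2.0: 'half', 1.0: 'quarter',
                 0.5: 'eighth', 0.25: 'sixteenth'}
        # Snap to the closest standard duration (real music also needs
        # dotted notes, triplets, and quantisation on top of this)
        closest = min(names, key=lambda q: abs(q - quarters))
        return names[closest]

    print(note_type(480, 480))   # 'quarter' when PPQ is 480
    print(note_type(240, 480))   # 'eighth'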
Hope this points you in the right direction!

AudioToolbox - Callback delay while recording

I've been working on a very specific project for iOS lately, and my research has led me to almost-final code. I've solved all the extreme difficulties I've found until now, but on this one I don't seem to have a clue (about either the cause or a possible fix).
I set up my audio queue (sample rate 44100, format Linear PCM, 16 bits per channel, 2 bytes per frame, 1 channel per frame, ...) and start recording the sound with 12 audio buffers. However, there seems to be a delay after every 4 callbacks.
The situation is the following: the first 4 callbacks are called with an interval of about 2 ms each. However, between the 4th and the 5th, there is a delay of about 60 ms. The same thing happens between the 8th and the 9th, the 12th and the 13th, and so on.
There seems to be a relation between the bytes per frame and the moment of the delay. I know this because if I change to 4 bytes per frame, I start having the delay between the 8th and the 9th, then between the 16th and the 17th, the 24th and the 25th... Nonetheless, there doesn't seem to be any relation between the moment of the delay and the number of buffers.
The callback function does only two things: it stores the audio data (inBuffer->mAudioData) in an array my class can use, and it calls AudioQueueEnqueueBuffer to put the current buffer back on the queue.
Did anyone go through this problem already? Does anyone know, at least, what could be the cause of it?
Thank you in advance.
The Audio Queue API seems to run on top of the RemoteIO Audio Unit API, whose real audio buffer size is probably unrelated to (and, in your example, larger than) whatever size your Audio Queue buffers are. So whenever a RemoteIO buffer is ready, a bunch of your smaller AQ buffers quickly get filled from it, and then you get a longer delay while waiting for the next larger buffer to be filled with samples.
If you want better controlled (more evenly spaced) buffer latency, try using the RemoteIO Audio Unit directly.