How does a MIDI TEMPO message apply to other tracks?

A fairly simple question for which I have a guess, but I can't find a definitive answer anywhere.
The background: I have a multi-track MIDI file with TEMPO controls in the first track. I need to translate the ABSOLUTE_TICK count in other tracks to "Seconds" (fractional seconds offset from the beginning of the MIDI file).
I have the formula relating ABSOLUTE_TICK to seconds based on the file's pulses per quarter note (PPQN) and the tempo (microseconds per quarter note).
The question is: do TEMPO changes in the first track (track 0) apply to all the other tracks?
If so, then while I'm parsing other tracks (e.g. track 4, which has the NOTE_ON and NOTE_OFF messages I am interested in) I will need to keep a pointer into the TEMPO changes in track 0 and advance it in parallel. Is that right?
Thanks,
Mark

In short, yes. The first track contains the timing info that applies to the entire arrangement, so you apply these messages to every track at the same absolute times. Since all events use an offset in ticks, you first extract the tempo change messages and convert them to absolute time; then, as you read the other tracks, you apply those tempo changes against that timeline.
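As a minimal sketch of that conversion (plain Python; the PPQN and tempo values here are hypothetical, and tempo is in microseconds per quarter note as stored in the Set Tempo meta event, 0xFF 0x51):

PPQN = 480  # ticks per quarter note, from the file header (assumed value)

# (absolute_tick, tempo) pairs extracted from track 0; tempo is in
# microseconds per quarter note.
tempo_map = [(0, 500_000), (1920, 400_000)]  # hypothetical example data

def ticks_to_seconds(abs_tick):
    """Accumulate elapsed seconds segment by segment along the tempo map."""
    seconds = 0.0
    last_tick, last_tempo = tempo_map[0]
    for change_tick, tempo in tempo_map[1:]:
        if change_tick >= abs_tick:
            break
        seconds += (change_tick - last_tick) * last_tempo / (PPQN * 1_000_000)
        last_tick, last_tempo = change_tick, tempo
    seconds += (abs_tick - last_tick) * last_tempo / (PPQN * 1_000_000)
    return seconds

print(ticks_to_seconds(3840))  # 3.6 s with the example map above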
From the MIDI fanatic's technical brainwashing center:
In a format 0 file, the tempo changes are scattered throughout the one MTrk. In format 1, the very first MTrk should consist of only the tempo (and time signature) events so that it could be read by some device capable of generating a "tempo map". It is best not to place MIDI events in this MTrk. In format 2, each MTrk should begin with at least one initial tempo (and time signature) event.
That said, some sequencers do break this rule and put actual MIDI events in the first track alongside timing info, since the standard isn't so specific in this regard. Your program should deal with both cases, since it is likely to encounter MIDI files in the wild which are formatted in this way.


Is there a way to run a separate Update at a faster rate than the Application.targetFrameRate?

I am looking to poll Unity inputs over the last several frames and use that data to interpret the user's button presses. From what I've tested, however, it feels like polling at the application's 60 FPS frame rate leads to some very fast inputs being missed.
Simple example, user tries to go to Forward from Down:
frame 1 - user is holding Down
frame 1.5 - user taps Forward, but hasn't released Down yet
frame 2 - user lets go of Down and has reached Forward fully
In my current setup, using Unity's frame-rate-bound update, frame 1.5, where the user has both Down and Forward held, is missed entirely. Is there a way to run a separate update that runs at a faster rate than the regular Unity MonoBehaviour Update function? Or would a different solution be needed, such as querying via events (IIRC that's an option, but I may be misremembering)?
I don't believe it's possible to decouple polling frequency from frame rate in the legacy input system. In the new Input System, however, there is a pollingFrequency property that lets you define the polling frequency in Hertz; it uses a background thread to poll the data. You also need to subscribe to an input action change and record your values there, then consume them in the Update method.
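A minimal sketch of that approach (new Input System assumed; the class name and action binding are hypothetical):

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

public class FastInputRecorder : MonoBehaviour
{
    [SerializeField] private InputAction forward;  // bind to a button in the inspector

    private readonly Queue<double> presses = new Queue<double>();

    private void OnEnable()
    {
        InputSystem.pollingFrequency = 120f;  // sample devices at 120 Hz on the background thread
        forward.performed += ctx => presses.Enqueue(ctx.time);  // record every press as it fires
        forward.Enable();
    }

    private void OnDisable()
    {
        forward.Disable();
    }

    private void Update()
    {
        // Consume everything recorded since the last frame, so taps
        // shorter than one frame are still seen.
        while (presses.Count > 0)
            Debug.Log($"Forward pressed at {presses.Dequeue():F3}s");
    }
}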

How to change the index of a ConcatenatingAudioSource?

I am using ConcatenatingAudioSource in order to play multiple tracks without gaps, like this:
player.setAudioSource(ConcatenatingAudioSource(children: [
///some audio sources
]));
If I now want to change the next track to be played to a track at an index I specify, how can I do that other than repeatedly calling player.seekToNext()?
Is there, for example, some method like player.setNextTrackIndex(someIndex)?
P.S.: This question is about the just_audio package.
The closest thing to what you're trying to do is:
await player.seek(Duration.zero, index: trackIndex);
This immediately starts buffering the requested track in order to play it, but there will still be a gap initially while you wait for the audio of that track to buffer. It will be a short gap if the media is stored locally, and a longer gap if the media is accessed over a network.
This is distinct from true gapless playback, which can only happen when the player knows ahead of time which track is next, so that it can start buffering early and avoid the gap. In other words, if the user can choose a track at any time, there is no way for the player to predict which track they will click next.
Yes, ConcatenatingAudioSource is designed to do gapless playback, but only between items that it knows are coming next. It is intended to concatenate tracks A, B, C, D together so that there is no gap between A and B, between B and C, or between C and D. It can do this because when the player is reaching the end of track A, it knows that B is coming up next and starts buffering B early. Gapless playback doesn't apply to your scenario.
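Putting it together, a minimal sketch (just_audio assumed; the URLs are placeholders):

import 'package:just_audio/just_audio.dart';

Future<void> main() async {
  final player = AudioPlayer();

  await player.setAudioSource(ConcatenatingAudioSource(children: [
    AudioSource.uri(Uri.parse('https://example.com/a.mp3')),
    AudioSource.uri(Uri.parse('https://example.com/b.mp3')),
    AudioSource.uri(Uri.parse('https://example.com/c.mp3')),
  ]));

  player.play();

  // Jump straight to the third item instead of calling seekToNext()
  // repeatedly; expect a short buffering gap at this point.
  await player.seek(Duration.zero, index: 2);
}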

Different Pseudo Clock for groups of Facts

I am new to Drools / Fusion (7.x) and am not sure how to solve this requirement. Assume I have event objects such as Event{long: timestamp, id: string}, where id identifies a physical asset (like a tractor) and timestamp represents the time the event fired relative to the asset. In my scenario these events do not arrive in my system in 'real time', meaning they can be seconds, minutes, or even days late. And my rules system needs to monitor multiple assets. Given this, when rules are evaluated the clock needs to be relative to the asset being monitored; it can't be a clock that spans assets.
I'm aware of the pseudo clock; is there a way to assign a pseudo clock per asset?
My assumption is that a clock must always progress forward or temporal functions will not work properly. Take for example the following scenario:
Fact A for Asset 1 arrives at 1:00; it is inserted into memory and rules are fired. Then Fact B arrives for the same Asset 1 at 2:00. It too is inserted and rules are fired. Now Fact Z arrives for Asset 2 at 1:30 (30 minutes behind the clock). I'm assuming I shouldn't simply move the clock backwards and evaluate; furthermore, I'd want to set the clock back to 2:00 afterwards, since that was the "latest" data I received. Now assume I am monitoring thousands of assets, all sending data at different times...
The best way I can think of to address this is to keep a clock per asset and then save the engine state whenever each asset's data is evaluated. Can individual KieSessions have different clocks, or is the clock at the container level?
Sample rule: When Fact 1 arrives after Fact 2 for the same Asset.
You're approaching the problem incorrectly. Regardless of whether you're using a real-time or pseudo clock, you're using a clock. You can't say "Fact #1 uses clock A, and Fact #2 uses clock B."
Instead you should be leveraging the metadata tags for events, specifically the @timestamp tag. This tag indicates to Drools that a specific field inside the event is actually the timestamp for the event, rather than the time the fact enters working memory.
For example:
import com.example.SampleEvent

declare SampleEvent
    @role( event )
    // this field is actually in the object; it's not the time the fact was inserted
    @timestamp( createdDateTime )
end
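With that in place, your sample rule ("Fact 1 arrives after Fact 2 for the same asset") can use a temporal operator against the declared timestamps. A minimal sketch, assuming the event exposes an assetId field (a hypothetical name):

rule "Second event after first for the same asset"
when
    $first : SampleEvent( $id : assetId )
    $second : SampleEvent( assetId == $id, this after $first )
then
    // $second's @timestamp is later than $first's for the same asset
end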
Not knowing anything about what your rules are actually doing, the major issue I can foresee is that if your rules rely on the temporal operators or define an expiry (@expires), they're not going to work and you'll need to redesign them. Especially for expirations: once an event expires, it is removed from working memory; when your out-of-band events come in, any previously expired events are already gone and can't be matched against.
Of course that concern would be true regardless of whether you use @timestamp or your original "different pseudo clock" plan. Either way you're going to have to manage the fact that events cannot live forever in working memory; you will eventually run out of resources and your system will crash. Events must be evicted at some point, so you'll need to design around that in both your models and your rules.

Is there a way to know how much of the EEPROM memory is used?

I have looked through the "Logbook" and "DataLogger" APIs and there is no way of telling that the data logger is almost full. I found the API call with the path "/Mem/Logbook/IsFull". If I have understood it correctly, this will notify me when the log is full and the DataLogger has stopped logging.
So my question is: is there a way to know how much of the memory is currently in use, so that I can clean up old data (I need to do some calculations on it before it is deleted) before the EEPROM is full and the DataLogger stops recording?
The data memory of Logbook/DataLogger is conceptually a ring buffer. That's why /Mem/DataLogger/IsFull always returns false on the Movesense sensor (Suunto uses the same API in its watches, where the situation is different). Therefore the sensor never stops recording; it just replaces the oldest data with new.
Here are a couple of strategies that you could use:
Plan A:
Create a new log (POST /Mem/Logbook/Entries => returns the logId for it)
Start Logging (PUT /Mem/DataLogger/State: LOGGING)
Every once in a while create a new log (POST /Mem/Logbook/Entries). Note: This can be done while logging is ongoing!
When you want to know the status of the log, read /Mem/Logbook/Entries. When the oldest entry has been completely overwritten, it disappears from the list. Note: GET /Entries is a heavy operation, so you may not want to do it while the logger is running!
Plan B:
Every now and then start a new log and process the previous one. That way the log never overwrites something you have not processed.
Plan C:
(Note: This is low level and may break with some future Movesense sensor release)
GET the first 256 bytes of EEPROM chip #0 using the /Component/EEPROM API. This area contains a number of ExtflashChunkStorage::StorageHeader structs (see ExtflashChunkStorage.h); the rest is filled with 0xFF. The last StorageHeader before the 0xFF region is the current one. From that StorageHeader one can see where the ring buffer starts (firstChunk) and where the next data will be written (cursor). The difference of the two is the used memory. (Note: since it is a ring buffer, the difference can be negative; in that case, add "size of Logbook area - 256" to it.)
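The arithmetic for that last step, as a hedged C++ sketch (the function and parameter names are assumptions based on the description above; check ExtflashChunkStorage.h for the real struct fields):

#include <cstdint>

// Used bytes in the ring buffer, given the fields of the current
// StorageHeader and the size of the Logbook area in bytes.
uint32_t usedBytes(uint32_t firstChunk, uint32_t cursor, uint32_t logbookAreaSize)
{
    int64_t diff = static_cast<int64_t>(cursor) - static_cast<int64_t>(firstChunk);
    if (diff < 0) {
        // The ring buffer has wrapped, so add the size of the data area
        // (the Logbook area minus the 256-byte header region).
        diff += static_cast<int64_t>(logbookAreaSize) - 256;
    }
    return static_cast<uint32_t>(diff);
}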
Full disclosure: I work for the Movesense team.

Is it possible to move event label locations in EEGLAB after data has already been collected?

I was recently added onto a project to analyse EEG data, only to discover that data collection had been faulty.
The experiment was run using E-Prime for stimulus presentation, with a BioSemi ActiveTwo system for recording of EEG. Triggers were sent from E-Prime at stimulus onset, and were supposed to have been sent at response as well. However, due to the nature of the experiment, stimuli did not disappear at response, which somehow affected trigger timing: triggers for response only registered after the stimulus disappeared from the screen. This means that every response event label in the EEG data is delayed by a couple hundred milliseconds, differing on a trial-by-trial basis. RT data WAS recorded accurately, however, and we have all of it in an .edat file (which can be exported to Excel or other formats).
My question now would be: is it possible to adjust the event label locations in the EEG data? We use the EEGLAB toolbox in MATLAB for analysis. I was thinking that it may be possible to 'sync' an Excel file of RTs with the corresponding events in the EEG and run a script to do all the processing. Not sure how to go about it though, or if it is possible in the first place. Help is greatly appreciated, thank you! (and if this is not the correct forum to ask, let me know and I will delete)
It is definitely possible to edit event field values. You can try to do this using the function pop_editeventvals (either via the command line or GUI > Edit > Event values), which requires an EEG structure and 'key', 'value' argument pairs, e.g.:
EEG = pop_editeventvals(EEG,'changefield',{34 'latency' 320.4});
will change the latency of event 34 to 320.4 msec.
Alternatively, loop through or index the corresponding events and directly change the absolute latency within the event field, or the relative-to-epoch-locking eventlatency within the epoch field.
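A minimal sketch of that scripted approach (MATLAB with EEGLAB loaded; the file name and event type labels are hypothetical), recomputing each response latency as its stimulus onset plus the E-Prime RT:

rts = readmatrix('rts.xlsx');                          % one RT (ms) per trial
stimIdx = find(strcmp({EEG.event.type}, 'stimulus'));  % assumed type labels
respIdx = find(strcmp({EEG.event.type}, 'response'));

for k = 1:numel(respIdx)
    % EEG.event latencies are stored in samples, so convert ms to samples
    EEG.event(respIdx(k)).latency = ...
        EEG.event(stimIdx(k)).latency + round(rts(k) / 1000 * EEG.srate);
end

EEG = eeg_checkset(EEG, 'eventconsistency');  % re-validate the event structure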
Just for the record, you can import the E-Prime logfile into EEGLAB and match your timing that way.
If you use LPT triggers with BioSemi, the timing of your triggers is best taken from that source. But when in trouble, you can try this out; just make sure of your timing accuracy.