Write chords to a MIDI file using MIDO - midi

I'm having a hard time writing chords to a MIDI file using MIDO, the MIDI library for Python.
I have a list of 100 chords, each stored as a list of MIDI note numbers, so each chord in the code below looks like [60, 63, 67], i.e. C minor. The start time of each chord, in seconds, is stored in the chordTimes list.
I iterate over the list:
for i in range(1, len(chords)):
    chordNotes = chordMidiNotes(chords[i], extraBass=False)[0]
    chordSymbol = chordMidiNotes(chords[i], extraBass=False)[1]
    for note_value in chordNotes:  # result has chord notes
        track.append(Message('note_on', note=note_value, velocity=100, time=0))
    for note_value in chordNotes:  # result has chord notes
        track.append(Message('note_off', note=note_value, velocity=127, time=time_in_ticks(chordTimes[i], mo)))
mo.save("songWithChords.mid")
But when I open the file, the chords all start at the same time; however, the top note ends just before the last chord, the note below it ends before that, the one below that stops several beats earlier still, and so on, as you see in the image. I am using a type 1 MIDI file.

The MIDO documentation says:
All messages must be tagged with delta time (in ticks). (A delta time is how long to wait before the next message.)
Each delta time is measured from the previous message, so giving every note_off the chord's full duration makes the notes end one after another. The first note_off should carry the chord duration as its delta time, and the delta times of all the other note-off messages must be zero.
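A minimal sketch of the corrected pattern, in pure Python. The tuples and the chord_messages helper are stand-ins of my own, not part of the mido API; they just show how the deltas should be assigned:

```python
def chord_messages(notes, duration_ticks):
    """Build (kind, note, delta) message tuples for one chord.

    Only the first note_off carries the chord duration as its delta
    time; every later note_off gets delta 0, so all notes stop together.
    """
    msgs = []
    for n in notes:
        msgs.append(('note_on', n, 0))        # all notes start together
    for i, n in enumerate(notes):
        delta = duration_ticks if i == 0 else 0
        msgs.append(('note_off', n, delta))   # wait once, then fire the rest
    return msgs
```

With mido you would build Message('note_off', note=n, time=delta) the same way, doing the seconds-to-ticks conversion once per chord rather than once per note.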


ORSSerialPort data that should be a single line is coming in over multiple lines

I have an Arduino that spits out a single line of GPS data down the serial line every half second, which I know works because I can watch the serial monitor in the Arduino IDE and see a new single line of data appear every half second.
I'm now writing a Mac program in Swift that puts each coordinate on a map as it comes in through the serial port, using the ORSSerialPort library to connect to the Arduino and receive its data. This works fine, and I had a basic version working earlier; however, I noticed that there were gaps in the GPS data (points were appearing in small groups on the map, with a noticeable space in between, when it should be a constant line of them).
Before I had the map I had a text field that would have each GPS data line added to it as it came in, which produced the exact same output as the arduino IDE serial monitor, so I thought everything was working fine.
To try and fix the problem with the map, I removed the map code and simply print()ed each line to the Xcode console as it came in through the serial port. To my surprise there were random line breaks in the data, and I don't understand why. I feel this may be causing the problems I'm having (with splitting the string at every comma so I can extract the individual values), so I'd like to know why the data comes out as a single line in the Arduino IDE serial monitor and the text field, but not in the Xcode console, and presumably wherever else I work with the string.
EDIT: I prefixed the print to the Xcode console and the output to the text field with five plusses and suffixed them with five dashes, then set the serial port to close after sending a single report (what should be a single line of data). The output in both places ended up being three lines, each prefixed and suffixed with the plusses and dashes. See the photo below, which shows what should be a single line:
Why are my single lines of data coming through over multiple lines and behaving like individual variables (as in, getting the last character of the line returns the last character of the first of the three lines, not a semicolon)?
The issue likely isn't that extra newlines are being inserted. Rather, ORSSerialPort (like the underlying POSIX API it uses) simply reports data to its delegate as it comes in. It has no way of knowing that, for your particular use case, you only want complete lines.
You need to buffer the incoming data and only process it when you've received a complete "line"/packet. ORSSerialPort includes an API, ORSSerialPacketDescriptor that makes this easier. There is further documentation for that API here: https://github.com/armadsen/ORSSerialPort/wiki/Packet-Parsing-API
Do note that this API doesn't (yet) support using an end delimiter only. You need to validate the entire packet from beginning to end, as the parsing routine is "lazy": it tries to find the smallest match possible starting from the end of the packet.
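The buffering idea itself is language-agnostic. A rough Python sketch of the same approach (the LineBuffer class and its method names are mine for illustration, not part of ORSSerialPort): accumulate whatever chunks arrive, and only hand back complete lines, keeping the trailing partial line for the next call.

```python
class LineBuffer:
    """Accumulate arbitrary serial chunks; return only complete lines."""

    def __init__(self):
        self._buf = b''

    def feed(self, chunk):
        self._buf += chunk
        lines = self._buf.split(b'\n')
        self._buf = lines.pop()  # last piece is incomplete (or empty)
        return [ln.rstrip(b'\r') for ln in lines]
```

Each call to feed may return zero, one, or several lines, regardless of how the bytes were chopped up in transit.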

MCE chunk size when reading from STDIN

I'm writing a Perl program that processes a high number of log entries. To speed things up I'm using MCE to create a number of worker processes to handle processing in parallel. So far things are great, but I've found myself trying different chunk sizes in a very unscientific manner. Here's some background before I get to my question.
We're receiving logs from a number of syslog sources and collecting them at a central location. The central syslog server writes these log entries to a single text file. The program in question picks up the logs from this text file, does some munging, and ships them elsewhere. The raw log files are then archived.
To read the log files I'm doing this:
my $tail = 'tail -q -F '.$logdir.$logFile;
open my $tail_fh, "-|", $tail or die "Can't open tail\n";
I'm then using mce_loop_f to iterate over the file handle:
mce_loop_f { my $hr = process($_); MCE->gather($hr); } $tail_fh;
This works well for the most part, but if there is a spike in log activity the program starts to get bogged down. While there are a number of factors to make things "go faster" one of those factors I'm a little unsure of is chunk_size in MCE.
I understand that chunk_size has an "auto" value, but how would that work on a file handle that is a pipe from tail? Would auto be appropriate here?
What factors should I consider when adjusting the chunk_size? Log entries occur at a rate of 1000-2000 events per second depending on time of day (1000 at night, 2000 during the day).
I'm also a neophyte when it comes to MCE, so if mce_loop_f is a bad idea for this use case, please let me know.

Tcl: how to write data to a specific line number in the middle of a file

Is there any way or command in Tcl to write into the middle of data.txt, at a specific line number?
For example, after writing data to the text file, when I'm writing at line number 1000, is there any way to go back to line number 20 and add data at that line (something like lappend and append for list variables, but for the puts command)?
You can use seek to move the current location in a channel opened on a file (it's not meaningful for pipes, sockets and serial lines); the next read (gets, read) or write (puts) will happen at that point. Except in append mode, when writes always go to the end.
seek $channel $theNewLocation
However, the location to seek to is in bytes; the only locations that it is trivially easy to go to are the start of the file and the end of the file (the latter using end-based indexing). Thus you need to either remember where “line 20” really is from the first time, or go to the start and read forward a few lines.
seek $channel 0
for {set i 0} {$i < 20} {incr i} {gets $channel}
Also be aware that the data afterwards does not shuffle up or down to accommodate what you've written the second time. If you don't write exactly the same length of data as was already there, you end up with partial lines in the file. Truncating the file with chan truncate might help, but might also be totally the wrong thing to do. If you're going to go back and patch a text file, writing an extra long line where you're going to do the patch (e.g., with lots of spaces on the end) can make it much easier. It's a cheesy hack, but simple to do.
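The same remember-the-offset idea, sketched in Python rather than Tcl (the file name and fixed-width line format are my own choices for the example; tell plays the role of Tcl's [tell $channel], recording where line 20 starts so you can seek straight back to it later):

```python
import os
import tempfile

# Write 30 fixed-width lines, remembering the byte offset of line 20.
path = os.path.join(tempfile.mkdtemp(), 'data.txt')
with open(path, 'w') as f:
    offset_line20 = None
    for i in range(1, 31):
        if i == 20:
            offset_line20 = f.tell()   # like [tell $channel] in Tcl
        f.write('line %04d\n' % i)     # fixed width: safe to patch in place

# Go back and patch line 20 with data of exactly the same length,
# so the lines after it are left untouched.
with open(path, 'r+') as f:
    f.seek(offset_line20)              # like [seek $channel $theNewLocation]
    f.write('LINE 0020')
```

Because the replacement is the same length as the original line, no truncation or shuffling is needed, which is exactly the constraint described above.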

Showing midi pitch numbers from Mid file with music21

I am using music21 to extract the midi pitch numbers (in order) for a bunch of midi files.
I have been reading through the documentation and I can load one file like this:
from music21 import *
sBach = corpus.parse('bach/bwv7.7')
Now how do I show a sequence of midi numbers? I am sure this is possible but I can't find the function in the documentation.
And is there a way to do it for multiple files at the same time?
from music21 import *
sBach = corpus.parse('bach/bwv7.7')
for p in sBach.parts:
    print("Part: ", p.id)
    for n in p.flat.notes:
        print(n.pitch.midi)
Note that .notes includes Chord objects, which don't have a .pitch property. So for complex scores you may need to separate out the chords from notes or iterate over p.pitches. I think you'll want to go through the music21 User's Guide a bit more before continuing.
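A sketch of the branching this implies. The classes below are minimal stand-ins that mimic just enough of music21's Note and Chord to show the idea; in real music21 code you would test isinstance(n, chord.Chord) or read n.pitches directly:

```python
# Minimal stand-ins for music21 objects, for illustration only.
class Pitch:
    def __init__(self, midi):
        self.midi = midi

class Note:
    def __init__(self, midi):
        self.pitch = Pitch(midi)

class Chord:
    def __init__(self, midis):
        self.pitches = [Pitch(m) for m in midis]

def midi_numbers(elements):
    """Flatten a mixed stream of notes and chords into MIDI numbers."""
    out = []
    for n in elements:
        if hasattr(n, 'pitches'):          # a chord: several pitches
            out.extend(p.midi for p in n.pitches)
        else:                              # a single note
            out.append(n.pitch.midi)
    return out
```

Run over all parts, this gives one ordered list of MIDI numbers per part; wrapping the whole thing in a loop over filenames handles the multiple-file case.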

Extract the most recent values from appended .CSV in MATLAB

I have a .csv file which is appended with 3 new values in the row below the previous set:
dlmwrite('MyFile.csv', [MyValue,MyValue2,MyValue3], '-append');
This happens every minute. It happens indefinitely because of a timer i.e it accumulates data over time:
How can I continually copy the 60 most recent sets of values from the file and store them in a new csv file, say MyFile2? The row count of the .csv file increases by 1 every minute, i.e. 60 values stored in 60 minutes, but I may have 100 values and want to extract the latest 60 into another file.
Image of the CSV file - 2nd column is time in HOURS:MINUTES without the : separator (ignore the lapse in time between rows 38 and 39 or any lapses elsewhere):
Note: MyValue is added to the file every minute because the script is run every 60 seconds by another script, i.e. there is no internal timer in the main script:
Period = 60; % Update period in seconds
tim = timer('Period', Period, 'ExecutionMode', 'fixedRate',...
'TimerFcn', 'TESTINGFINAL');
start(tim)
stop(tim)
runtmp = fullfile('MyScriptLocation','MyScript');
run(runtmp);
If you want to do this continuously while running I'd suggest some sort of circular buffer arrangement so you always have the last 60 values in memory. This will be easier than trying to work out the current length of your continuously logging file. Basic idea (minus the actual timing code):
% initialising buffer
MyValue1 = zeros(60,1);
while true % for certain values of true
    % these go once a minute
    mv1 = myfunc1(inputs);
    MyValue1 = [MyValue1(2:end); mv1];
    dlmwrite('MyFile.csv', [mv1], '-append');
    % this goes less frequently, I presume
    filename = [datestr(now,30),'.csv']; % dynamic filename
    dlmwrite(filename, MyValue1);
end
This way you have both your continuously logging file (updated every minute), and a series of smaller files containing what were the last 60 values at the time they were written (updated hourly, or on some other trigger, as required).
With your timer, one way of doing this would appear to be to keep a simple counter of how many times the acquisition script has run and then use the mod function to check for when this hits a multiple of 60.
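For comparison, the same circular-buffer idea sketched in Python, where collections.deque with maxlen does the sliding window for you (the function names and the 60-sample window are my own choices, modelled on the question):

```python
from collections import deque

window = deque(maxlen=60)   # always holds at most the 60 newest samples

def on_minute(sample):
    """Called once a minute with the newly acquired value."""
    window.append(sample)   # the oldest sample drops off automatically

def on_hour():
    """Snapshot the last 60 samples, e.g. to write out MyFile2.csv."""
    return list(window)
```

The point is the same as in the MATLAB version: keep the recent window in memory as you go, rather than re-reading and measuring the ever-growing log file.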
This is not a full answer, but I do have an idea, if I understand you correctly. You probably want to run Matlab 24/7, or at least non-stop for a certain amount of time? If so, you could try out the clock command, which returns the current system time. In your case:
time=clock;
where
time(4) holds the hour. So as soon as this parameter changes, you should open your .csv file and save the last 60 values.
However, polling the clock this way is, I think, quite wasteful of CPU. Matlab's pause command can suspend execution between checks, much like sleep on Unix, but it may still be worth looking into running this program in another programming language.
Please provide me with comments and feedback since I see that this is not a complete answer (yet)!