How do I calculate the number of ticks per measure from a MIDI file

I am trying to calculate the number of ticks per measure (bar) from a MIDI file, but I am a bit stuck.
I have a MIDI file from which I can extract the following information (provided in meta messages):
#0: Time signature: 4/4, Metronome pulse: 24 MIDI clock ticks per click, Number of 32nd notes per beat: 8
There are two tempo messages, which I'm not sure are relevant:
#0: Microseconds per quarternote: 400000, Beats per minute: 150.0
#1800: Microseconds per quarternote: 441176, Beats per minute: 136.0001450668214
From trial and error, looking at the Note On messages, and looking at the MIDI file in GarageBand, I can 'guess' that the number of ticks per measure is 2100, with a quarter note being 525 ticks.
My question is: can I arrive at the 2100 number using the tempo information that was provided above, and if so how? Or have I not parsed enough information from the MIDI file and is there some other control message that I need to look at?

Use the following Java 11 code to extract the ticks per measure. It assumes a PPQ (ticks-per-quarter-note) division and 4 quarter notes per bar, which matches the 4/4 time signature above. The tempo messages are not needed: tempo only determines how long a tick lasts, not how many ticks make up a bar.
public MidiFile(String filename) throws Exception {
    var file = new File(filename);
    var sequence = MidiSystem.getSequence(file);
    System.out.println("Tick length: " + sequence.getTickLength());
    System.out.println("Division Type: " + sequence.getDivisionType());
    System.out.println("Resolution (PPQ if division = " + javax.sound.midi.Sequence.PPQ + "): " + sequence.getResolution());
    System.out.println("Ticks per measure: " + (4 * sequence.getResolution()));
}
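If the file is not in 4/4, the beats per bar come from the time-signature meta message rather than from the tempo events: ticks per measure = resolution × numerator × 4 / denominator. With the 4/4 signature above and a resolution of 525 PPQ, that is 4 × 525 = 2100, matching the value guessed from GarageBand. Below is a minimal sketch of the generalized calculation, assuming the numerator and denominator have already been read from the FF 58 time-signature meta message as in the question:
// Sketch only: generalizes the 4/4 case above to any time signature.
// numerator/denominator are assumed to come from the FF 58 time-signature meta event.
static int ticksPerMeasure(javax.sound.midi.Sequence sequence, int numerator, int denominator) {
    if (sequence.getDivisionType() != javax.sound.midi.Sequence.PPQ) {
        throw new IllegalArgumentException("SMPTE division types need different handling");
    }
    int ticksPerQuarter = sequence.getResolution();        // 525 for the file described above
    // One bar holds `numerator` beats, and each beat (a 1/denominator note)
    // lasts 4/denominator quarter notes.
    return ticksPerQuarter * numerator * 4 / denominator;  // 4/4 at 525 PPQ -> 2100
}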

Related

How do I synchronize clocks on two Raspberry Pi Picos?

I'm sending bits as LED blinks from one Pi Pico, and using another Pico to receive voltage from a photodiode and read the bits.
When sending/receiving on the same Pi Pico, it works accurately at 60 µs per bit. However, when sending from one Pico and receiving on a second Pico, I can only send/receive bits accurately at 0.1 s per bit. If I move to sending/receiving at 0.01 s, I start losing information. This leads me to believe the issue is with the clocks or sampling rates on the two Pi Picos: they are slightly different.
Is there a way to synchronize two Pi Pico clocks using Thonny/MicroPython?
Code for Pico #1 sending blinks/bits:
import machine
import utime
from machine import Pin

led = Pin(13, Pin.OUT)  # LED to blink
SaqVoltage = machine.ADC(28)  # receive an input signal; when it is over a certain voltage, send the time using bits
# Omitted code here to grab the first low-voltage values from the SaqVoltage signal and use them to determine
# whether a high voltage has been received; when a high voltage is received, send "hi" in LED bits
# More omitted calibration code here for the LED/photodiode
hi = "0110100001101001"

while True:
    SaqVoltage_value = SaqVoltage.read_u16() * 3.3 / 65536  # read the input signal to see if a high voltage is sent
    finalBitString = ""
    if SaqVoltage_value > saQCutOff:  # high voltage found, send the "hi" sequence
        finalBitString =  # hi plus some start/stop sequences
        for i in range(0, len(finalBitString)):
            if finalBitString[i] == "1":
                led(1)
            elif finalBitString[i] == "0":
                led(0)
            utime.sleep(.01)
Code for Pico #2 receiving bits:
import machine
import utime

SaqDiode = machine.ADC(28)  # read the photodiode
# Omitted code here to calibrate the LEDs/photodiodes
startSeqSaq = []
wordBits = []

while True:
    utime.sleep(.01)  # sample the diode every 0.01 seconds
    SaqDiode_value = SaqDiode.read_u16() * 3.3 / 65536
    if saqDiodeHi - saqCutOff <= SaqDiode_value:  # read the diode and decide whether it is a 1 or a 0
        bit = 1
    elif SaqDiode_value <= saqDiodeLo + saqCutOff:
        bit = 0
    if len(startSeqSaq) == 10:  # keep the last 10 received bits to check for the start sequence
        startSeqSaq.pop(0)
        startSeqSaq.append(bit)
    elif len(startSeqSaq) < 10:
        startSeqSaq.append(bit)
    if startSeqSaq == startSeq:  # found the start sequence, start reading bits
        while True:
            utime.sleep(.01)
            SaqDiode_value = SaqDiode.read_u16() * 3.3 / 65536
            if saqDiodeHi - saqCutOff <= SaqDiode_value:
                bit = 1
                wordBits.append(bit)
            elif SaqDiode_value < saqDiodeLo + saqCutOff:
                bit = 0
                wordBits.append(bit)
            if len(wordBits) > 10:  # check for the stop sequence
                last14 = wordBits[-10:]
            else:
                last14 = []
            if last14 == endSeq:
                char = frombits(wordBits[:-10])
                wordBits = []
                print("Function Generator Reset Signal Time in ms from Start: ", char)
                break

Why is my program not scanning and outputting the text file properly?

I'm currently having trouble with my text file. My program is supposed to print the students on academic warning, but when I run it, it only prints the header and does not scan the text file. Is this a problem with my NetBeans setup or with my program?
Here is the program assignment in more detail:
Write a program that will read in a file of student academic credit data and create a list of students on academic
warning. The list of students on warning will be written to a file. Each line of the input file will contain the
student name (a single String with no spaces), the number of semester hours earned (an integer), the total
quality points earned (a double). The following shows part of a typical data file:
Smith 27 83.7
Jones 21 28.35
Walker 96 182.4
Doe 60 150
The program should compute the GPA (grade point or quality point average) for each student (the total quality
points divided by the number of semester hours) then write the student information to the output file if that
student should be put on academic warning. A student will be on warning if he/she has a GPA less than 1.5 for
students with fewer than 30 semester hours credit, 1.75 for students with fewer than 60 semester hours credit,
and 2.0 for all other students. The file Warning.java contains a skeleton of the program. Do the following:
Set up a Scanner object scan from the input file and a PrintWriter outFile to the output file inside the try
clause (see the comments in the program). Note that you’ll have to create the PrintWriter from a FileWriter,
but you can still do it in a single statement.
Inside the while loop add code to read and parse the input—get the name, the number of credit hours, and
the number of quality points. Compute the GPA, determine if the student is on academic warning, and if so
write the name, credit hours, and GPA (separated by spaces) to the output file.
After the loop close the PrintWriter.
Think about the exceptions that could be thrown by this program:
• A FileNotFoundException if the input file does not exist
• A NumberFormatException if it can’t parse an int or double when it tries to - this indicates an error in
the input file format
• An IOException if something else goes wrong with the input or output stream
Add a catch for each of these situations, and in each case give as specific a message as you can. The program
will terminate if any of these exceptions is thrown, but at least you can supply the user with useful information.
Test the program. Test data is in the file students.dat. Be sure to test each of the exceptions as well.
Code:
package Chapter11LabProgram;

import java.util.*;
import java.io.*;

public class Warning
{
    // --------------------------------------------------------------------
    // Reads student data (name, semester hours, quality points) from a
    // text file, computes the GPA, then writes data to another file
    // if the student is placed on academic warning.
    // --------------------------------------------------------------------
    public static void main(String[] args) {
        int creditHrs;      // number of semester hours earned
        double qualityPts;  // number of quality points earned
        double gpa;         // grade point (quality point) average
        String line, name, inputName = "/Users/Downloads/students.dat.txt";
        String outputName = "/Users/Downloads/students.dat.txt";

        try {
            // Set up scanner to input file
            PrintWriter outFile = new PrintWriter(new FileWriter(outputName));
            Scanner scan = new Scanner(new File(inputName));

            // Set up the output file stream
            // Print a header to the output file
            outFile.println();
            outFile.println("Students on Academic Warning");
            outFile.println();

            // Process the input file, one token at a time
            while (scan.hasNext()) {
                line = scan.nextLine();
                creditHrs = Integer.parseInt(line.split("\\s+")[1]);
                qualityPts = Double.parseDouble(line.split("\\s+")[2]);
                gpa = qualityPts / creditHrs;
                //outFile.print(line);
                if (gpa < 1.5 && creditHrs < 30)
                {
                    outFile.write(line + "\r\n");
                }
                else if (gpa < 1.75 && creditHrs < 60)
                {
                    outFile.write(line + "\r\n");
                }
                else if (gpa < 2)
                {
                    outFile.write(line + "\r\n");
                }
                //outFile.println ();
                // Get the credit hours and quality points and
                // determine if the student is on warning. If so,
                // write the student data to the output file.
            }
            //outFile.println ("test");
            outFile.close();
        } catch (FileNotFoundException exception) {
            System.out.println("The file " + inputName + " was not found.");
        } catch (IOException exception) {
            System.out.println(exception);
        } catch (NumberFormatException e) {
            System.out.println("Format error in input file: " + e);
        }
    }
}
My text file:
Smith 27 83.7
Jones 21 28.35
Walker 96 182.4
Doe 60 150
Wood 100 400
Street 33 57.4
Taylor 83 190
Davis 110 198
Smart 75 2 92.5
Bird 84 168
Summers 52 83.2
Any help would be appreciated! Thank you!

How do I get the per-spec output of a LV-MaxSonar LV-EZ0 rangefinder with Android Things?

I'm using this sensor with a Raspberry Pi B 3 and Android Things 1.0
I have it connected as per these instructions
The spec for its output suggests I should receive "an ASCII capital “R”, followed by three ASCII character digits representing the range in inches up to a maximum of 255, followed by a carriage return (ASCII 13)"
I have connected to the device and configured it as follows (connection parameters map to the "Serial, 0 to Vcc, 9600 Baud, 81N" in that spec, I think):
PeripheralManager manager = PeripheralManager.getInstance();
mDevice = manager.openUartDevice(name);
mDevice.setBaudrate(9600);
mDevice.setDataSize(8);
mDevice.setParity(UartDevice.PARITY_NONE);
mDevice.setStopBits(1);
mDevice.registerUartDeviceCallback(mUartCallback);
I am reading from its buffer in that callback as follows:
public void readUartBuffer(UartDevice uart) throws IOException {
    // "The output is an ASCII capital “R”, followed by three ASCII character digits
    // representing the range in inches up to a maximum of 255, followed by a carriage return
    // (ASCII 13)"
    ByteArrayOutputStream bout = new ByteArrayOutputStream();
    final int maxCount = 1024;
    byte[] buffer = new byte[maxCount];
    int total = 0;
    int cycles = 0;
    int count;
    bout.write(23);
    while ((count = uart.read(buffer, buffer.length)) > 0) {
        bout.write(buffer, 0, count);
        total += count;
        cycles++;
    }
    bout.write(0);
    byte[] buf = bout.toByteArray();
    String bufStr = Arrays.toString(buf);
    Log.d(TAG, "Got " + total + " in " + cycles + ":" + buf.length + "=>" + bufStr);
}
private UartDeviceCallback mUartCallback = new UartDeviceCallback() {
    @Override
    public boolean onUartDeviceDataAvailable(UartDevice uart) {
        // Read available data from the UART device
        try {
            readUartBuffer(uart);
        } catch (IOException e) {
            Log.w(TAG, "Unable to access UART device", e);
        }
        // Continue listening for more interrupts
        return true;
    }
};
When I connect the sensor and use this code I get readings back of the form:
05-10 03:59:59.198 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, -77, -84, 15, 0, 0]
05-10 03:59:59.248 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, 102, 101, 121, 0, 0]
05-10 03:59:59.298 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, 102, 99, 121, 0, 0]
The initial 23 and the final 0 on each line are values I have added. Instead of the R\d\d\d\13 I expect between them, I'm getting 7 signed bytes. The variance in some of these byte values appears when I move my hand towards and away from the sensor - i.e. the values I'm getting back vary in a way I might expect, even though the output is completely wrong in form and size.
Any ideas what I'm doing wrong here? I suspect it's something extremely obvious, but I'm stumped. Examining the binary values themselves it doesn't look like bits are shifted around by e.g. a mistake in protocol configuration.
I can't test it on hardware right now, but it seems the algorithm for reading the serial data from the LV-MaxSonar LV-EZ0 rangefinder should be slightly different. First, find the ASCII capital “R” in the UART buffer: the LV-EZ0 starts sending data as soon as it is powered on and is not synchronized with the Raspberry Pi board, so the first received byte may not be “R”, and the number of bytes in one buffer can be less or more than the 5 bytes of one LV-MaxSonar response (a buffer can contain several responses, or only part of one). Only then try to parse the 3 character digits representing the range in inches and the ending CR symbol. If the 3 digits and the CR symbol are not all there, the LV-MaxSonar response is broken (that can also happen), so start looking for the capital “R” again. For an example, take a look at the GPS contrib driver, especially processBuffer() of NmeaGpsModule.java and processMessageFrame() of NmeaParser.java - the difference is that you have only one kind of "sentence", and its start marker is not "GPRMC" but just the "R" symbol.
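As a rough illustration of that approach (a sketch only, not tested against the sensor; the frameBuffer field and the processBytes() method are assumptions made for this example, not part of the Android Things API), the callback could append whatever bytes arrive to a buffer and repeatedly look for an "R", three digits and a CR:
// Sketch: accumulate UART bytes and extract "R<ddd>\r" frames from the stream.
// frameBuffer, processBytes() and TAG are illustrative assumptions; a real
// implementation should also cap the buffer size if no "R" ever arrives.
private final StringBuilder frameBuffer = new StringBuilder();

private void processBytes(byte[] data, int count) {
    for (int i = 0; i < count; i++) {
        frameBuffer.append((char) (data[i] & 0xFF));
    }
    int start;
    while ((start = frameBuffer.indexOf("R")) >= 0) {
        if (frameBuffer.length() < start + 5) {
            frameBuffer.delete(0, start);      // drop noise before "R", wait for more bytes
            break;
        }
        String digits = frameBuffer.substring(start + 1, start + 4);
        char terminator = frameBuffer.charAt(start + 4);
        if (terminator == '\r' && digits.chars().allMatch(Character::isDigit)) {
            int inches = Integer.parseInt(digits);
            Log.d(TAG, "Range: " + inches + " inches");
            frameBuffer.delete(0, start + 5);  // consume the whole frame
        } else {
            frameBuffer.delete(0, start + 1);  // broken frame: drop this "R" and keep scanning
        }
    }
}
readUartBuffer() would then call processBytes(buffer, count) inside its read loop instead of collecting everything into the ByteArrayOutputStream.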
One other important thing: there seems to be no capital “R” (ASCII 82) in your response buffers at all, which may point to an issue with the LV-MaxSonar LV-EZ0 connection. Your schematic should be exactly like the top-right picture on page 6 of the datasheet: "Independent Sensor Operation: Serial Output Sensor Operation". Also try, for example, connecting the LV-MaxSonar LV-EZ0 to a USB<->UART cable (with power output, e.g. like that one) and testing it in a terminal on your PC.

haproxy stats: qtime,ctime,rtime,ttime?

Running a web app behind HAProxy 1.6.3-1ubuntu0.1, I'm getting haproxy stats qtime,ctime,rtime,ttime values of 0,0,0,2704.
From the docs (https://www.haproxy.org/download/1.6/doc/management.txt):
58. qtime [..BS]: the average queue time in ms over the 1024 last requests
59. ctime [..BS]: the average connect time in ms over the 1024 last requests
60. rtime [..BS]: the average response time in ms over the 1024 last requests
(0 for TCP)
61. ttime [..BS]: the average total session time in ms over the 1024 last requests
I'm expecting response times in the 0-10 ms range. A ttime of 2704 milliseconds seems unrealistically high. Is it possible the units are off and this is 2704 microseconds rather than 2704 milliseconds?
Secondly, it seems suspicious that ttime isn't even close to qtime+ctime+rtime. Is total response time not the sum of the time to queue, connect, and respond? What is the other time, that is included in total but not queue/connect/response? Why can my response times be <1ms, but my total response times be ~2704 ms?
Here is my full csv stats:
$ curl "http://localhost:9000/haproxy_stats;csv"
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
http-in,FRONTEND,,,4707,18646,50000,5284057,209236612829,42137321877,0,0,997514,,,,,OPEN,,,,,,,,,1,2,0,,,,0,4,0,2068,,,,0,578425742,0,997712,22764,1858,,1561,3922,579448076,,,0,0,0,0,,,,,,,,
servers,server1,0,0,0,4337,20000,578546476,209231794363,41950395095,,0,,22861,1754,95914,0,no check,1,1,0,,,,,,1,3,1,,578450562,,2,1561,,6773,,,,0,578425742,0,198,0,0,0,,,,29,1751,,,,,0,,,0,0,0,2704,
servers,BACKEND,0,0,0,5919,5000,578450562,209231794363,41950395095,0,0,,22861,1754,95914,0,UP,1,1,0,,0,320458,0,,1,3,0,,578450562,,1,1561,,3922,,,,0,578425742,0,198,22764,1858,,,,,29,1751,0,0,0,0,0,,,0,0,0,2704,
stats,FRONTEND,,,2,5,2000,5588,639269,8045341,0,0,29,,,,,OPEN,,,,,,,,,1,4,0,,,,0,1,0,5,,,,0,5374,0,29,196,0,,1,5,5600,,,0,0,0,0,,,,,,,,
stats,BACKEND,0,0,0,1,200,196,639269,8045341,0,0,,196,0,0,0,UP,0,0,0,,0,320458,0,,1,4,0,,0,,1,0,,5,,,,0,0,0,0,196,0,,,,,0,0,0,0,0,0,0,,,0,0,0,0,
In HAProxy 2+ you now get two values, reported as n / n: the max within a sliding window and the average for that window. The max value remains the max across all sample windows until a higher value is found. On 1.8 you only get the average.
Example of HAProxy 2 vs. 1.8. Note these proxies are used very differently and with dramatically different loads.
So it looks like the average response times, at least since the last reboot, are 66m and 275ms.
The average is computed as:
(data time + cumulative http connections - 1) / cumulative http connections
This might not be a perfect analysis so if anyone has improvements it'd be appreciated. This is meant to show how I came to the answer above so you can use it to gather more insight into the other counters you asked about. Most of this information was gathered from reading stats.c. The counters you asked about are defined here.
unsigned int q_time, c_time, d_time, t_time; /* sums of conn_time, queue_time, data_time, total_time */
unsigned int qtime_max, ctime_max, dtime_max, ttime_max; /* maximum of conn_time, queue_time, data_time, total_time observed */
The stats page values are built from this code:
if (strcmp(field_str(stats, ST_F_MODE), "http") == 0)
    chunk_appendf(out, "<tr><th>- Responses time:</th><td>%s / %s</td><td>ms</td></tr>",
                  U2H(stats[ST_F_RT_MAX].u.u32), U2H(stats[ST_F_RTIME].u.u32));
chunk_appendf(out, "<tr><th>- Total time:</th><td>%s / %s</td><td>ms</td></tr>",
              U2H(stats[ST_F_TT_MAX].u.u32), U2H(stats[ST_F_TTIME].u.u32));
You asked about all of the counters, but I'll focus on one. As can be seen in the snippet above for "Responses time:", ST_F_RT_MAX and ST_F_RTIME are the values displayed on the stats page as n (rtime_max) / n (rtime) respectively. They are defined as follows:
[ST_F_RT_MAX] = { .name = "rtime_max", .desc = "Maximum observed time spent waiting for a server response, in milliseconds (backend/server)" },
[ST_F_RTIME] = { .name = "rtime", .desc = "Time spent waiting for a server response, in milliseconds, averaged over the 1024 last requests (backend/server)" },
These set a "metric" value (among other things) in a case statement further down in the code:
case ST_F_RT_MAX:
    metric = mkf_u32(FN_MAX, sv->counters.dtime_max);
    break;
case ST_F_RTIME:
    metric = mkf_u32(FN_AVG, swrate_avg(sv->counters.d_time, srv_samples_window));
    break;
These metric values give us a good look at what the stats page is telling us. The first value in "Responses time: 0 / 0", ST_F_RT_MAX, is the maximum time spent waiting. The second value, ST_F_RTIME, is the average time taken per connection. These are the max and average taken within a window of time, i.e. however long it takes for you to get 1024 connections.
For example, "Responses time: 10000 / 20" means:
max time spent waiting (the max value ever reached, including HTTP keep-alive time) over the last 1024 connections: 10 seconds
average time over the last 1024 connections: 20 ms
So, for all intents and purposes:
rtime_max = dtime_max
rtime = swrate_avg(d_time, srv_samples_window)
Which raises the question: what are dtime_max, d_time, and srv_samples_window? These are the data-time counters. I couldn't actually figure out how these time values are set, but at face value it's "some time" for the last 1024 connections. As pointed out here, keep-alive times are included in the max totals, which is why the numbers are high.
Now that we know ST_F_RT_MAX is a max value and ST_F_RTIME is an average, an average of what?
/* compute time values for later use */
if (selected_field == NULL || *selected_field == ST_F_QTIME ||
    *selected_field == ST_F_CTIME || *selected_field == ST_F_RTIME ||
    *selected_field == ST_F_TTIME) {
    srv_samples_counter = (px->mode == PR_MODE_HTTP) ? sv->counters.p.http.cum_req : sv->counters.cum_lbconn;
    if (srv_samples_counter < TIME_STATS_SAMPLES && srv_samples_counter > 0)
        srv_samples_window = srv_samples_counter;
}
The TIME_STATS_SAMPLES value is defined as:
#define TIME_STATS_SAMPLES 512
unsigned int srv_samples_window = TIME_STATS_SAMPLES;
In HTTP mode, srv_samples_counter is sv->counters.p.http.cum_req. http.cum_req is exposed as ST_F_REQ_TOT:
[ST_F_REQ_TOT] = { .name = "req_tot", .desc = "Total number of HTTP requests processed by this object since the worker process started" },
For example, if the value of http.cum_req is 10, then srv_samples_counter will be 10. The sample count appears to be the number of successful requests for a given sample window on a given backend server. d_time (data time) is passed in as <sum> and is computed as some non-negative value, or else it's counted as an error. I thought I found the code for how d_time is created, but I wasn't sure, so I haven't included it.
/* Returns the average sample value for the sum <sum> over a sliding window of
 * <n> samples. Better if <n> is a power of two. It must be the same <n> as the
 * one used above in all additions.
 */
static inline unsigned int swrate_avg(unsigned int sum, unsigned int n)
{
    return (sum + n - 1) / n;
}
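To make the rounding concrete, here is a small Java transcription of swrate_avg() with purely hypothetical numbers; nothing below is taken from the stats output above except the 2704 figure it is chosen to reproduce.
// Hypothetical Java transcription of HAProxy's swrate_avg(), for illustration only.
static int swrateAvg(int sum, int n) {
    // average of a sliding-window sum over n samples, rounded up
    return (sum + n - 1) / n;
}

// With a full window of n = 512 samples, a summed total-session time of
// 1_384_448 ms gives swrateAvg(1_384_448, 512) == 2704, i.e. an average ttime
// of 2704 ms like the one in the CSV above, while the rtime average for the
// same window can still round down to a tiny value because it is fed from a
// different sum (d_time).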

Instrument for counting the number of method calls on iPhone

The Time Profiler can measure the amount of time spent on certain methods. Is there a similar method that measures the number of times a method is called?
DTrace can do this, but only in the iPhone Simulator (it's supported by Snow Leopard, but not yet by iOS). I have two writeups about this technology on MacResearch here and here where I walk through some case studies of using DTrace to look for specific methods and when they are called.
For example, I created the following DTrace script to measure the number of times methods were called on classes with the CP prefix, as well as total up the time spent in those methods:
#pragma D option quiet
#pragma D option aggsortrev

dtrace:::BEGIN
{
    printf("Sampling Core Plot methods ... Hit Ctrl-C to end.\n");
    starttime = timestamp;
}

objc$target:CP*::entry
{
    starttimeformethod[probemod,probefunc] = timestamp;
    methodhasenteredatleastonce[probemod,probefunc] = 1;
}

objc$target:CP*::return
/methodhasenteredatleastonce[probemod,probefunc] == 1/
{
    this->executiontime = (timestamp - starttimeformethod[probemod,probefunc]) / 1000;
    @overallexecutions[probemod,probefunc] = count();
    @overallexecutiontime[probemod,probefunc] = sum(this->executiontime);
    @averageexecutiontime[probemod,probefunc] = avg(this->executiontime);
}

dtrace:::END
{
    milliseconds = (timestamp - starttime) / 1000000;
    normalize(@overallexecutiontime, 1000);
    printf("Ran for %u ms\n", milliseconds);
    printf("%30s %30s %20s %20s %20s\n", "Class", "Method", "Total CPU time (ms)", "Executions", "Average CPU time (us)");
    printa("%30s %30s %20@u %20@u %20@u\n", @overallexecutiontime, @overallexecutions, @averageexecutiontime);
}
This generates the following nicely formatted output:
Class Method Total CPU time (ms) Executions Average CPU time (us)
CPLayer -drawInContext: 6995 352 19874
CPPlot -drawInContext: 5312 88 60374
CPScatterPlot -renderAsVectorInContext: 4332 44 98455
CPXYPlotSpace -viewPointForPlotPoint: 3208 4576 701
CPAxis -layoutSublayers 2050 44 46595
CPXYPlotSpace -viewCoordinateForViewLength:linearPlotRange:plotCoordinateValue: 1870 9152
...
While you can create and run DTrace scripts from the command line, probably your best bet would be to create a custom instrument in Instruments and fill in the appropriate D code within that instrument. You can then easily run that against your application in the Simulator.
Again, this won't work on the device, but if you just want statistics on the number of times something is called, and not the duration it runs for, this might do the job.