Build measures, and the notes in those measures, from MIDI note messages

I'm using mido and Python to convert a MIDI file into an array of measures, each holding an array of the notes that fall within it, to represent the notes in every measure of a song.
I first tried simply taking the ticks per beat reported by the MIDI file, multiplying it by 4 to get ticks per measure, and then summing the time deltas of both the note_on and note_off messages. This doesn't work, due to some edge cases that I am not sure how to interpret.
Here is the MIDI header track for reference:
MidiTrack([
MetaMessage('set_tempo', tempo=500000, time=0),
MetaMessage('time_signature', numerator=4, denominator=2, clocks_per_click=24, notated_32nd_notes_per_beat=8, time=0),
MetaMessage('end_of_track', time=0)]),
The first measure of the MIDI track contains four notes, and is represented in MIDI this way:
Message('note_on', channel=0, note=60, velocity=100, time=0),
Message('note_off', channel=0, note=60, velocity=64, time=6),
Message('note_on', channel=0, note=60, velocity=100, time=18),
Message('note_off', channel=0, note=60, velocity=64, time=6),
Message('note_on', channel=0, note=61, velocity=100, time=18),
Message('note_off', channel=0, note=61, velocity=64, time=6),
Message('note_on', channel=0, note=63, velocity=100, time=18),
Message('note_off', channel=0, note=63, velocity=64, time=6),
By simply using ticks per beat * 4 as the measure length and summing deltas, I end up with an extra note:
Message('note_on', channel=0, note=60, velocity=100, time=0),
Message('note_off', channel=0, note=60, velocity=64, time=6),
Message('note_on', channel=0, note=60, velocity=100, time=18),
Message('note_off', channel=0, note=60, velocity=64, time=6),
Message('note_on', channel=0, note=61, velocity=100, time=18),
Message('note_off', channel=0, note=61, velocity=64, time=6),
Message('note_on', channel=0, note=63, velocity=100, time=18),
Message('note_off', channel=0, note=63, velocity=64, time=6),
Message('note_on', channel=0, note=61, velocity=100, time=18),
With this extra note, the running total now equals 96 ticks, which is one measure.
How can I use the time deltas to properly count the notes that belong in a measure?
Some other issues I am having would probably be solved if I could figure this part out. Another one is when a note message spans two measures: the total tick delta for the measure plus the current message's time exceeds 96, and I am not sure how to handle that either.
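A minimal sketch of one way to handle both issues, assuming the post's measure length of ticks_per_beat * 4 (i.e. 4/4) and mido's merge_tracks to get a single stream of delta times; notes_by_measure is a hypothetical helper, not code from the post. The idea is to track absolute ticks and assign each note to a measure by integer-dividing its note_on tick by the measure length, so a note_on landing exactly on tick 96 falls into the second measure (96 // 96 = 1) instead of being counted as an extra note in the first:
import mido

def notes_by_measure(path):
    """Group notes into measures by absolute tick (sketch; assumes one
    measure = ticks_per_beat * 4, as in the question)."""
    mid = mido.MidiFile(path)
    ticks_per_measure = mid.ticks_per_beat * 4
    measures = []      # measures[i] = list of (note, start_tick, end_tick)
    note_starts = {}   # note number -> absolute tick of its note_on
    abs_tick = 0
    for msg in mido.merge_tracks(mid.tracks):
        abs_tick += msg.time                       # msg.time is a tick delta
        if msg.type == 'note_on' and msg.velocity > 0:
            note_starts[msg.note] = abs_tick
        elif msg.type in ('note_off', 'note_on'):  # note_on vel=0 acts as note_off
            start = note_starts.pop(msg.note, None)
            if start is None:
                continue
            index = start // ticks_per_measure    # measure of the *start* tick
            while len(measures) <= index:
                measures.append([])
            measures[index].append((msg.note, start, abs_tick))
    return measures
A note that crosses a bar line is then assigned to the measure it starts in; if it has to be split instead, clamp its end tick to (index + 1) * ticks_per_measure and carry the remainder into the following measure.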

Related

PostgreSQL 12.5 "relation already exists" when using "create table if not exists"

I'm using PostgreSQL 12.5 on AWS for millions of events. My tables are partitioned by day, so when a request comes in I run the "CREATE TABLE IF NOT EXISTS ... PARTITION OF" syntax for every request.
I expect the command to be skipped if the table already exists, but PostgreSQL doesn't do that: it tries to create the table anyway and fails.
The PostgreSQL error logs are as follows:
2021-05-30 00:00:00 UTC:IP_ADDRESS(24921):dbname#username:[16666]:ERROR: relation "tablename_20210530" already exists
2021-05-30 00:00:00 UTC:IP_ADDRESS(24921):dbname#username:[16666]:STATEMENT: CREATE TABLE IF NOT EXISTS tablename_20210530 PARTITION OF tablename FOR VALUES FROM ('2021-05-30') TO ('2021-05-31')
The SQL exception is as follows:
{
"errorType": "error",
"errorMessage": "relation \"tablename_20210530\" already exists",
"code": "42P07",
"length": 110,
"name": "error",
"severity": "ERROR",
"file": "heap.c",
"line": "1155",
"routine": "heap_create_with_catalog",
"stack": [
"error: relation \"tablename_20210530\" already exists",
" at Parser.parseErrorMessage (/var/task/node_modules/pg-protocol/dist/parser.js:278:15)",
" at Parser.handlePacket (/var/task/node_modules/pg-protocol/dist/parser.js:126:29)",
" at Parser.parse (/var/task/node_modules/pg-protocol/dist/parser.js:39:38)",
" at Socket.<anonymous> (/var/task/node_modules/pg-protocol/dist/index.js:10:42)",
" at Socket.emit (events.js:315:20)",
" at addChunk (internal/streams/readable.js:309:12)",
" at readableAddChunk (internal/streams/readable.js:284:9)",
" at Socket.Readable.push (internal/streams/readable.js:223:10)",
" at TCP.onStreamRead (internal/stream_base_commons.js:188:23)"
]
}
Any idea why this is happening?
PostgreSQL Version:
PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
Table schema:
create table tablename
(
id bigint default nextval('tablename_partition_id_seq'::regclass) not null,
col1 timestamp(0) default CURRENT_TIMESTAMP not null,
col2 varchar(255) not null,
constraint tablename_partition_pkey
primary key (id, col1)
)
partition by RANGE ("col1");
create table tablename_20210530
partition of tablename
(
constraint tablename_20210530_pkey
primary key (id, col1)
)
FOR VALUES FROM ('2021-05-30 00:00:00') TO ('2021-05-31 00:00:00');
SQL Statement:
CREATE TABLE IF NOT EXISTS tablename_20210530 PARTITION OF tablename FOR VALUES FROM ('2021-05-30') TO ('2021-05-31');
My code runs the create statement before the insert statement across thousands of requests, because every request can have a different "col1" value and I cannot control that, so the create table statement runs every time.
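The symptoms are consistent with concurrent sessions racing past the IF NOT EXISTS check (my assumption; the post doesn't confirm that concurrency is the trigger). One common mitigation is to serialize the partition DDL behind a transaction-scoped advisory lock; a sketch, with an arbitrary lock key:
BEGIN;
-- any application-chosen bigint key works, as long as every writer uses the same one
SELECT pg_advisory_xact_lock(42);
CREATE TABLE IF NOT EXISTS tablename_20210530 PARTITION OF tablename FOR VALUES FROM ('2021-05-30') TO ('2021-05-31');
COMMIT;  -- the lock is released automatically at commit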

Look for a unique value using two variables

I need to populate the ISSUES column using the REFERENCE and LOCALISATION variables of each row, looking up the matching value stored in Table_issues_Localisation.
The problem is that those two variables address a two-dimensional table, so I have to dynamically select the right column based on LOCALISATION.
Here is an explanation of what I need to do, with an image. I'm sorry for posting an image, but I think it is much easier to understand.
I tried to write a query that updates the Table_Observation.ISSUES column row by row with the information stored in a dynamically chosen column (a nested SELECT) of Table_issues_Localisation.
The Table_Observation.ROW_NUMBER column numbers the rows; it is used for the loop.
DO $$
DECLARE
my_variable TEXT;
BEGIN
FOR i IN 1..35 LOOP
my_variable = SELECT((SELECT LOCALISATION FROM Table_Observation WHERE Table_Observation.ROW_NUMBER = i) FROM Table__issues_Localisation ON Table_Observation.REFERENCE = Table__issues_Localisation.REFERENCE)
UPDATE Table_Observation
SET ISSUES = my_variable
WHERE Table_Observation.ROW_NUMBER = i
END LOOP;
END;
$$
Postgres v9.4.
I hope I'm clear enough.
You don't need PL/pgSQL or a loop for this. You can do that with a single update statement:
update observation o
set issues = row_to_json(il) ->> o.localisation
from issues_localisation il
where il.reference = o.reference;
This requires that the values in observation.localisation exactly map to the column names in issues_localisation.
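To see the mechanism in isolation: row_to_json turns a row into a JSON object keyed by column name, and ->> then extracts the field whose name matches the localisation value, returning it as text. A standalone illustration:
SELECT row_to_json(t) ->> 'issues_27' AS issue
FROM (SELECT 29 AS reference, 'TFB' AS issues_27) AS t;
-- returns TFB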
With the following test data:
create table observation
(
rn integer primary key,
reference integer,
localisation text,
issues text
);
create table issues_localisation
(
reference integer,
issues_12 text,
issues_17 text,
issues_27 text,
issues_34 text
);
insert into observation (rn, reference, localisation)
values
(1, 27568, 'issues_27'),
(2, 6492, 'issues_34'),
(3, 1529, 'issues_34'),
(4, 1529, 'issues_34'),
(5, 709, 'issues_12');
insert into issues_localisation (reference, issues_12, issues_17, issues_27, issues_34)
values
(29, 'FB', 'FB', 'TFB', null),
(506, 'M', null, 'M', null),
(709, 'TF', null, null, null),
(1234, null, 'TF', 'TF', null),
(1529, 'FB', 'FB', 'FB', 'M'),
(3548, null, 'M', null, null),
(6492, 'FB', 'FB', 'FB', null),
(18210, 'TFB', null, 'TFB', 'TFB'),
(27568, 'TF', null, 'TF', 'TF');
The update will result in this data in the table observation:
rn | reference | localisation | issues
---+-----------+--------------+-------
1 | 27568 | issues_27 | TF
2 | 6492 | issues_34 |
3 | 1529 | issues_34 | M
4 | 1529 | issues_34 | M
5 | 709 | issues_12 | TF
Online example: http://rextester.com/OCGFM81609
For your next question, you should supply the sample data (and the expected output) the way I did in my answer.
I also removed the completely useless prefix table_ from the table names. That is a horrible naming convention.
And here is an (unfinished; you still need to execute the generated statements) example of dynamic SQL:
CREATE FUNCTION bagger (_name text) RETURNS text
AS
$func$
DECLARE upd text;
BEGIN
upd := format('
UPDATE observation dst
SET issues = src.%I
FROM issues_localisation src
WHERE src.reference = dst.reference
AND dst.localisation = %L
;', _name, _name);
-- RAISE NOTICE '%', upd;
RETURN upd;
END
$func$
LANGUAGE plpgsql
;
-- SELECT bagger('issues_12' );
WITH loc AS (
SELECT DISTINCT localisation AS loc
FROM observation
)
SELECT bagger( loc.loc)
FROM loc
;
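To finish the example, the generated statements can be executed from an anonymous block. This is a sketch of the missing execution step, not part of the original answer:
DO $$
DECLARE
    stmt text;
BEGIN
    FOR stmt IN
        SELECT bagger(localisation)
        FROM (SELECT DISTINCT localisation FROM observation) AS loc
    LOOP
        EXECUTE stmt;
    END LOOP;
END;
$$;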

WSO2 DAS does not support Postgres?

I'm using API Manager 1.10.0 and DAS 3.0.1.
I'm trying to set up Postgres for DAS. There is no postgresql.sql script, so I used oracle.sql.
But I get an exception:
[2016-08-11 15:06:25,079] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} - Error in executing task: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
java.lang.RuntimeException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:194)
at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:731)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1$$anonfun$2.apply(carbon.scala:55)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1$$anonfun$2.apply(carbon.scala:42)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1.apply(carbon.scala:41)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$1.apply(carbon.scala:38)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$.schemaString(carbon.scala:38)
at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:180)
... 26 more
The create table script for API_REQUEST_SUMMARY is:
CREATE TABLE API_REQUEST_SUMMARY (
api character varying(100)
, api_version character varying(100)
, version character varying(100)
, apiPublisher character varying(100)
, consumerKey character varying(100)
, userId character varying(100)
, context character varying(100)
, max_request_time decimal(30)
, total_request_count integer
, hostName character varying(100)
, year SMALLINT
, month SMALLINT
, day SMALLINT
, time character varying(30)
, PRIMARY KEY(api,api_version,apiPublisher,consumerKey,userId,context,hostName,time)
);
How can I make this work with Postgres?
I had to define the max_request_time column as bigint.
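For a table that already exists, the same fix can be applied in place. A sketch (the original answer simply changes the type in the create script):
ALTER TABLE API_REQUEST_SUMMARY ALTER COLUMN max_request_time TYPE bigint;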

Postgres 9.3 gives ERROR: Array values must start with ... for hex bytea value

Given a simple table:
PGresult *res = PQexec(usersconn, "CREATE TABLE userfiles (username varchar[100] PRIMARY KEY, mydata bytea);");
I try to add a data with this:
PGresult *res = PQexec(usersconn, "INSERT into userfiles VALUES ( 'peter' , '\\\\x1A' );" );
or this:
PGresult *res = PQexec(usersconn, "INSERT into userfiles VALUES ( 'peter' , '\x1A' );" );
and I get an error message about array values must start with ...
What am I doing wrong in trying to insert a simple hex constant into this record?
Hex values have to use escape syntax:
INSERT into userfiles VALUES ( 'peter' , e'\\x1A' );
See http://www.postgresql.org/docs/9.3/static/datatype-binary.html
Your problem isn't the bytea field, it's the username field.
varchar[100] is an array of 100 varchar elements, each of which is of unbounded length.
I think you probably meant varchar(100), a single varchar scalar of length 0-100.
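Putting the two answers together, a sketch of the corrected schema and insert, assuming the hex bytea form (which Postgres accepts directly as '\x...' when standard_conforming_strings is on, its default since 9.1):
CREATE TABLE userfiles (
    username varchar(100) PRIMARY KEY,  -- a scalar varchar(100), not an array varchar[100]
    mydata bytea
);
INSERT INTO userfiles VALUES ('peter', '\x1a');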

ERROR: syntax error at or near "(" when creating a new table

I am extremely new to PostgreSQL and every time I try to create a new table, I run into the following error:
ERROR: syntax error at or near "(" LINE 1: ..." ("id_azucarusuario" SERIAL, "id_usuario" integer(128) NOT ...
Here is the SQL for the table I am trying to define:
CREATE TABLE "public"."usuario_azucar"
( "id_azucarusuario" SERIAL,
"id_usuario" integer(128) NOT NULL,
"codigogeneral" character varying(240) NOT NULL,
"razonsocial" character(240),
"nombrecomercial" character(240),
"nit" integer(128),
"nummatricula" integer(128),
"direccionempresa" character(240),
"subdepartamento" character(240),
"subciudad" character(240),
"subdireccion" character varying(240),
"subcalle" character varying(240),
"subreferencia" character varying(240),
"subtelefono" integer(128),
"subpagweb" character(240),
"subemail" character varying(240),
"rai" character varying(240),
"descripcion_proceso_azucar" character varying(240),
"descripcion_proceso_alcohol" character varying(240),
"balance_energeticoomasic" character varying(240),
"productos_obtenidos" character varying(240),
"capacidad_azuoalco" character varying(240),
"capacidadreal_azuoalcoho" character varying(240),
PRIMARY KEY ("id_azucarusuario")
)
WITHOUT OIDS;
There is no such type as integer(...); choose smallint, integer, or bigint given the ranges documented here:
http://www.postgresql.org/docs/current/interactive/datatype-numeric.html
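A sketch of the fix applied to an abridged version of the table (integer takes no length modifier):
CREATE TABLE "public"."usuario_azucar"
(
    "id_azucarusuario" SERIAL,
    "id_usuario" integer NOT NULL,  -- was integer(128)
    "nit" integer,                  -- was integer(128)
    PRIMARY KEY ("id_azucarusuario")
);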
Do not use any reserved keywords.
You must change the table name from 'public' to something else, as it is a reserved keyword in PostgreSQL.
Here is a link for reserved keywords in PostgreSQL