Flyway - PostgreSQL partitioned table

I would like to create a partitioned table on a PostgreSQL 11 database using Flyway. When I try to execute a simple SQL file like
CREATE TABLE blabla (id varchar(100) NOT NULL, name varchar(100) NULL)
PARTITION BY LIST(name);
I get an error saying that "PARTITION" is not valid, even though I'm using the latest release of the flyway-core library.
Does anyone know whether partitioned tables on PostgreSQL are supported by Flyway, or what the correct way to create a partitioned table is?
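For reference, the statement itself is valid PostgreSQL 11 DDL, so if psql accepts it directly the failure is in whatever parses the migration rather than in the SQL. A minimal sketch of the full pattern, where the child partition blabla_foo is a hypothetical name added for illustration:
CREATE TABLE blabla (
    id   varchar(100) NOT NULL,
    name varchar(100) NULL
) PARTITION BY LIST (name);
CREATE TABLE blabla_foo PARTITION OF blabla FOR VALUES IN ('foo');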

Related

How can we execute an Oracle sequence in Postgres?

In the middle of a migration from Oracle to Postgres, I need to execute some insert statements for an Oracle table from Postgres (a table whose primary key field uses a sequence for uniqueness).
At this point in the migration I am converting a procedure that is used to insert a row into that table, but I can't move the table directly from Oracle to Postgres because too many things still depend on it.
That's why I need to execute an Oracle sequence from Postgres.
The simplest solution is probably to create a view in Oracle that doesn't contain the column that is to be filled from the sequence.
Then define a trigger on the table that fills the column from the sequence when it is NULL, and create a foreign table on the view.
When you INSERT into the foreign table, the column will get filled by the trigger.
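A minimal sketch of that setup, assuming oracle_fdw on the Postgres side; the names mytab, mytab_seq, mytab_v, the name column, and the foreign server oradb are all hypothetical:
-- Oracle side: a view that hides the sequence-filled id column
CREATE VIEW mytab_v AS SELECT name FROM mytab;
-- Oracle side: fill id from the sequence whenever it arrives NULL
CREATE OR REPLACE TRIGGER mytab_fill_id
BEFORE INSERT ON mytab
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  :new.id := mytab_seq.NEXTVAL;
END;
/
-- Postgres side: a foreign table over the Oracle view
CREATE FOREIGN TABLE mytab_v (
  name text
) SERVER oradb OPTIONS (table 'MYTAB_V');
-- Inserting through the foreign table lets the Oracle trigger assign id
INSERT INTO mytab_v (name) VALUES ('example');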

Postgresql Query Returns No Data

I'm trying to migrate my SQLite database to PostgreSQL.
I'm new to PostgreSQL but stuck on this simple issue.
I first used the pgloader tool to migrate to PostgreSQL and have a working database. I see it created the table this way:
CREATE TABLE "AOrder"
(
"OrderNumber" bigserial NOT NULL,
"BuyerName" text,
etc
)
WITH (
OIDS=FALSE
);
ALTER TABLE "AOrder"
OWNER TO postgres;
Using pgAdmin III, if I run the SQL query:
select
*
from AOrder
This returns the column names plus all the data as expected.
However, the following:
select
*
from "AOrder"
This returns just the column names and no data. Where's the data?
So it doesn't seem to be a simple capitalization problem. Is there a setting in PostgreSQL that is making this happen?
(This is ultimately the root problem for me, since I am using SQLAlchemy, which puts double quotes around identifiers in its queries; I could not figure out a way to change that behavior.)
Thanks!
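For background on what the two spellings do: PostgreSQL folds unquoted identifiers to lowercase, so from AOrder reads a table named aorder, while from "AOrder" reads the mixed-case table that pgloader created. A quick diagnostic sketch to see whether both tables exist, and in which schema:
SELECT table_schema, table_name
FROM information_schema.tables
WHERE lower(table_name) = 'aorder';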

Apache Spark - Error persisting Dataframe to MemSQL database using JDBC driver

I'm currently facing an issue while trying to save an Apache Spark DataFrame, loaded from an Apache Spark temp table, to a distributed MemSQL database.
The catch is that I cannot use the MemSQLContext connector for the moment, so I'm using the JDBC driver.
Here is my code:
//store suppliers data from temp table into a dataframe
val suppliers = sqlContext.read.table("tmp_SUPPLIER")
//append data to the target table
suppliers.write.mode(SaveMode.Append).jdbc(url_memsql, "R_SUPPLIER", prop_memsql)
Here is the error message (occurring during the suppliers.write statement):
java.sql.SQLException: Distributed tables must either have a PRIMARY or SHARD key.
Note:
The R_SUPPLIER table has exactly the same fields and datatypes as the temp table and has a primary key set.
FYI, here are some clues:
R_SUPPLIER script:
CREATE TABLE R_SUPPLIER
(
SUP_ID INT NOT NULL PRIMARY KEY,
SUP_CAGE_CODE CHAR(5) NULL,
SUP_INTERNAL_SAP_CODE CHAR(5) NULL,
SUP_NAME VARCHAR(255) NULL,
SHARD KEY(SUP_ID)
);
The suppliers.write statement worked once before, but at that time the data had been loaded into the DataFrame with a sqlContext.read.jdbc command and not sqlContext.sql (i.e. the data came from a remote database and not from an Apache Spark local temp table).
Did anyone face the same issue, please?
Are you getting that error when you run the CREATE TABLE statement, or when you run the suppliers.write code? That is an error you should only get when creating a table. Therefore, if you are hitting it when running suppliers.write, your code is probably trying to create and write to a new table, not the one you created before.
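One way to check that from the database side is to confirm that the table Spark is writing to really is the one created above, in the database the JDBC URL points at; these are standard MemSQL/MySQL statements, and nothing here is specific to the asker's setup:
SHOW TABLES;
SHOW CREATE TABLE R_SUPPLIER;
If R_SUPPLIER does not appear there, Spark's JDBC writer in Append mode will try to create it itself, with generated DDL that has no SHARD or PRIMARY key, which would likely produce exactly this error.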

Apache Phoenix DoNotRetryIOException

When I run the SQL to create a table, like this:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
USERCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
This SQL is wrong because of the duplicate column USERCOUNT, and an error occurs when I run it. However, although it throws an exception, the table is created anyway, exactly as if it had been created with this SQL:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
Unfortunately, the following exception is thrown when executing both DROP TABLE and SELECT on the table, so I can't drop it.
Error: org.apache.hadoop.hbase.DoNotRetryIOException: FM_DAY: 34
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1316)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10525)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 34
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1336)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1289)
... 10 more
Does anyone know about this situation, and how can I delete this table?
Thanks.
I think I ran into this issue before. First, back up your db (in case my instructions don't work :))
Second:
hbase shell
Then use hbase commands to disable and then drop the table.
disable ...
drop ...
After doing this, the table may still show up in Phoenix despite not existing in HBase. This is because Phoenix caches metadata in an HBase table. So now you have to find the Phoenix metadata table and drop it (it will be regenerated the next time you start Phoenix).
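Concretely, the shell side of this might look like the following sketch. FM_DAY is the table from the question; SYSTEM.CATALOG is the HBase table Phoenix uses to store its metadata, and dropping it discards the cached definitions of all Phoenix tables, which is why the backup above matters:
hbase shell
disable 'FM_DAY'
drop 'FM_DAY'
disable 'SYSTEM.CATALOG'
drop 'SYSTEM.CATALOG'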
https://mail-archives.apache.org/mod_mbox/phoenix-user/201403.mbox/%3CCAAF1JditzYY6370DVwajYj9qCHAFXbkorWyJhXVprrDW2vYYBA#mail.gmail.com%3E

PostgreSQL syntax error related to INSERT and/or WITH. Occurs in 8.4 but not 9.1. Ideas?

Here is some SQL for PostgreSQL (I know it's a silly query; I've boiled the original query down to the simplest broken code):
CREATE TABLE entity (
id SERIAL PRIMARY KEY
);
WITH new_entity
AS (INSERT INTO entity DEFAULT VALUES
RETURNING id
)
SELECT id FROM new_entity;
Here it is running on PostgreSQL 9.1:
psql:../sandbox/test.sql:3: NOTICE: CREATE TABLE will create implicit sequence "entity_id_seq" for serial column "entity.id"
psql:../sandbox/test.sql:3: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "entity_pkey" for table "entity"
CREATE TABLE
id
----
1
(1 row)
Here it is not running on PostgreSQL 8.4:
psql:../sandbox/test.sql:3: NOTICE: CREATE TABLE will create implicit sequence "entity_id_seq" for serial column "entity.id"
psql:../sandbox/test.sql:3: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "entity_pkey" for table "entity"
CREATE TABLE
psql:../sandbox/test.sql:9: ERROR: syntax error at or near "INSERT"
LINE 2: AS (INSERT INTO entity DEFAULT VALUES
Obviously, the table creation goes fine in both cases, but it wipes out on the second query in PostgreSQL 8.4. From this error message I am unable to gather exactly what the problem is. I don't know what it is that 9.1 has and 8.4 doesn't have that could result in this syntax error. It's hilariously hard to google. I am approaching the level of desperation required to trawl through the pages of PostgreSQL release notes between 8.4 and 9.1 and find out whether anything related to WITH … AS or INSERT … RETURNING was changed or added, but before I go there I am hoping one of you has the experience and/or godly google-fu to help me out here.
Data-modifying statements in WITH were introduced in Postgres 9.1.
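So on 8.4 the CTE form is simply unavailable. Since the RETURNING clause itself has been supported since PostgreSQL 8.2, a sketch of an 8.4-compatible equivalent is to run the data-modifying statement on its own rather than inside WITH:
INSERT INTO entity DEFAULT VALUES RETURNING id;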