Apache Phoenix DoNotRetryIOException

When I run the following SQL to create a table:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
USERCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
This SQL is wrong because of the duplicate column USERCOUNT, and an error occurs when I run it. However, although it throws an exception, the table is still created, exactly as if it had been created with this SQL:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
Unfortunately, the following exception is thrown when executing both a DROP TABLE and a SELECT against this table, and I can't drop it.
Error: org.apache.hadoop.hbase.DoNotRetryIOException: FM_DAY: 34
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1316)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10525)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 34
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1336)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1289)
... 10 more
Does anyone know this situation? And how can I delete this table?
Thanks.

I think I ran into this issue before. First, back up your db (in case my instructions don't work :))
Second:
hbase shell
Then use hbase commands to disable and then drop the table.
disable ...
drop ...
After doing this, the table may still show up in Phoenix despite the table not existing in HBase. This is because Phoenix caches metadata in an HBase table. So now you have to find the Phoenix metadata table and drop it (it will be regenerated the next time you start Phoenix).
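For this particular table that would look roughly like the following (untested sketch; SYSTEM.CATALOG is the default Phoenix metadata table, and dropping it clears the cached metadata for all Phoenix tables, so the backup above really matters):
hbase shell
disable 'FM_DAY'
drop 'FM_DAY'
# clear the cached Phoenix metadata; it gets rebuilt on the next Phoenix connection
disable 'SYSTEM.CATALOG'
drop 'SYSTEM.CATALOG'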
https://mail-archives.apache.org/mod_mbox/phoenix-user/201403.mbox/%3CCAAF1JditzYY6370DVwajYj9qCHAFXbkorWyJhXVprrDW2vYYBA#mail.gmail.com%3E

Related

Duplicate Key error even after using "On Conflict" clause

My table has the following structure:
CREATE TABLE myTable
(
user_id VARCHAR(100) NOT NULL,
task_id VARCHAR(100) NOT NULL,
start_time TIMESTAMP NOT NULL,
SOME_COLUMN VARCHAR,
col1 INTEGER,
col2 INTEGER DEFAULT 0
);
ALTER TABLE myTable
ADD CONSTRAINT pk_4_col_constraint UNIQUE (task_id, user_id, start_time, SOME_COLUMN);
ALTER TABLE myTable
ADD CONSTRAINT pk_3_col_constraint UNIQUE (task_id, user_id, start_time);
CREATE INDEX IF NOT EXISTS index_myTable ON myTable USING btree (task_id);
However, when I try to insert data into the table using
INSERT INTO myTable VALUES (...)
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET ... --updating other columns except for [task_id, user_id, start_time]
I get the following error:
ERROR: duplicate key value violates unique constraint "pk_4_col_constraint"
Detail: Key (task_id, user_id, start_time, SOME_COLUMN)=(XXXXX, XXX, 2021-08-06 01:27:05, XXXXX) already exists.
I got the above error when I tried to insert the row programmatically. I was able to execute the query successfully via a SQL IDE.
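Spelled out, the statement looks roughly like this (the key values are taken from the error detail; the remaining values and the SET list are invented for illustration):
INSERT INTO myTable (user_id, task_id, start_time, SOME_COLUMN, col1, col2)
VALUES ('XXX', 'XXXXX', '2021-08-06 01:27:05', 'XXXXX', 1, 0)
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET SOME_COLUMN = EXCLUDED.SOME_COLUMN,
    col1 = EXCLUDED.col1,
    col2 = EXCLUDED.col2;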
Now I have the following questions:
How is this possible? If 'pk_3_col_constraint' already ensures my data is unique across those 3 columns, adding one extra column should not change anything. What's happening here?
I am aware that although my constraint names start with 'pk', I am using UNIQUE constraints rather than a PRIMARY KEY constraint (probably a mistake while creating the constraints, but either way this error shouldn't have occurred).
Why didn't I get the error when using the SQL IDE?
I read in a few articles that a unique constraint works a little differently from a primary key constraint and hence causes this issue at times. If this is a known issue, is there any way I can replicate this error to understand it in more detail?
I am running PostgreSQL 11.9 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (GCC) 7.4.0, 64-bit. My programmatic environment was a Java AWS Lambda.
I have noticed people have faced this error occasionally in the past.
https://www.postgresql.org/message-id/15556-7b3ae3aba2c39c23%40postgresql.org
https://www.postgresql.org/message-id/flat/65AECD9A-CE13-4FCB-9158-23BE62BB65DD%40msqr.us#d05d2bb7b2f40437c2ccc9d485d8f41e but there are no conclusions as to why it is happening.

Flyway - Postgresql partitioned table

I would like to generate a partitioned table on a PostgreSQL 11 database using Flyway. When I try to execute a simple SQL file like
CREATE TABLE blabla (id varchar(100) NOT NULL, name varchar(100) NULL)
PARTITION BY LIST(name);
I get an error saying that "PARTITION" is not valid, even though I'm using the latest release of the Flyway core library.
Does anyone know whether partitioned tables on PostgreSQL are supported by Flyway, or what the correct way to create a partitioned table is?
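For reference, the statement above is standard PostgreSQL 11 declarative-partitioning syntax; a complete setup would also add partitions like these (the partition names are made up for illustration):
CREATE TABLE blabla_foo PARTITION OF blabla FOR VALUES IN ('foo');
CREATE TABLE blabla_default PARTITION OF blabla DEFAULT;  -- catch-all partition, available since PostgreSQL 11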

Altering db migration script with Flyway

I'm not really sure what I've done wrong, but it caused the other services' db migrations to break.
I'm hoping someone can help me find the cause.
Thank you!
We have a db migration script that creates a table
V6__add_subscription_tables.sql
CREATE TABLE plan_subscription (
id bigint NOT NULL,
version bigint NOT NULL,
team_id bigint NOT NULL,
plan_id bigint NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (plan_id) REFERENCES plan (id),
UNIQUE (team_id)
);
I added another script that would insert a plan_subscription row into the dev environment.
But in my current task, the migration will fail if it's a fresh database, so I deleted the insertion:
V5002__add_test_data.sql
-- There is other test data here
/* THIS IS THE DATA THAT I DELETED
INSERT INTO plan_subscription VALUES (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'));
*/
And since I have to alter the table and add a column with a constraint, I moved the test-data insertion into the new db migration script.
There seems to be no error, but it messed something up, and I'm not sure what the cause is.
V5004__add_date_occurred_in_plan_subscription.sql
ALTER TABLE plan_subscription ADD
date_occurred TIMESTAMP WITHOUT TIME ZONE NOT NULL;
INSERT INTO plan_subscription VALUES (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'), current_date);
So what I did was just remove the NOT NULL constraint and revert the deletion of the old test data.
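In other words, the working combination looked roughly like this (a sketch of only the changes described above):
-- V5004__add_date_occurred_in_plan_subscription.sql: NOT NULL dropped
ALTER TABLE plan_subscription ADD
date_occurred TIMESTAMP WITHOUT TIME ZONE;
-- V5002__add_test_data.sql: original insert restored
INSERT INTO plan_subscription VALUES (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'));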
I know this is kinda long and weird but I'm hoping someone would know the reason.
Thank you!

Postgres on AWS RDS: Create table succeeds but only creates a relation which I can not find anywhere and can not delete

The create table query is as follows:
CREATE TABLE xxx (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT NOT NULL,
name VARCHAR(255) NOT NULL,
created DATE
);
It returns:
Table xxx created
Execution time: 0.11s
If I now try to select from it, I get:
SELECT * FROM xxx;
ERROR: relation "xxx" does not exist
Position: 15
If I try to recreate the table, I get:
ERROR: relation "xxx" already exists
1 statement failed.
Execution time: 0.12s
And to top it off, if I reconnect, then I can do it all over again.
I am using SQL Workbench to connect to the database on AWS RDS.
I am using the master account for these queries.
Can you use pgAdmin to see if it helps? I have my Postgres RDS configured with pgAdmin and haven't faced this issue.
Okay, I found the problem, and in retrospect it makes a lot of sense. The problem was that I was not committing the changes to the database. I guess since I have never worked in a non-autocommit environment, I did not know to look for this. Putting the create statement between BEGIN and END like so:
BEGIN;
CREATE TABLE xxx (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT NOT NULL,
name VARCHAR(255) NOT NULL,
created DATE
);
END;
worked.
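An explicit COMMIT does the same job, since END is just an alias for COMMIT in PostgreSQL (sketch):
CREATE TABLE xxx (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT NOT NULL,
name VARCHAR(255) NOT NULL,
created DATE
);
COMMIT;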

Play Evolutions: getting a syntax error. What gives?

I'm trying to write an evolutions file, and keep getting a syntax error that simply baffles me. Below is the entire evolution.
The error message I'm getting is: syntax error at end of input Position: 32 [ERROR:0, SQLSTATE:42601]
Stack:
Play Framework 2.4
Postgresql 9.4
Slick 3.1.1
Scala 2.11
Play-Slick 1.1.1
Play-Slick Evolutions 1.1.1
I can run both the ups script and the downs script manually just fine. I've tried dropping my database and running through all my evolutions from scratch, and keep getting this error.
What gives? I can't find anything wrong with my syntax.
# --- !Ups
ALTER TABLE "blockly_challenge"
ADD COLUMN "diagram" CHAR(10) NOT NULL DEFAULT 'none',
ADD COLUMN "success_diagram" CHAR(10),
ADD COLUMN "robot_type" SMALLINT NOT NULL DEFAULT 1001,
ADD COLUMN "icon" CHAR(10);
CREATE TABLE "blockly_challenge_coordinates" (
"id" SERIAL,
PRIMARY KEY (id),
"x" SMALLINT NOT NULL,
"y" SMALLINT NOT NULL,
"challenge_uuid" CHAR(10) NOT NULL,
"dependent_on_uuid" CHAR(10)[] NOT NULL DEFAULT '{}',
"created_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX "blockly_coordinates_challenge_uuid_idx" ON "blockly_challenge_coordinates" ("challenge_uuid");
CREATE INDEX "blockly_coordinates_challenge_depends_on_uuid_idx" ON "blockly_challenge_coordinates" ("dependent_on_uuid");
# --- !Downs
ALTER TABLE "blockly_challenge"
DROP COLUMN IF EXISTS "diagram",
DROP COLUMN IF EXISTS "success_diagram",
DROP COLUMN IF EXISTS "robot_type",
DROP COLUMN IF EXISTS "icon",
DROP COLUMN IF EXISTS "dependent_on",
DROP COLUMN IF EXISTS "coordinates_y",
DROP COLUMN IF EXISTS "coordinates_x";
DROP INDEX IF EXISTS "blockly_coordinates_challenge_uuid_idx" CASCADE;
DROP INDEX IF EXISTS "blockly_coordinates_challenge_depends_on_uuid_idx" CASCADE;
DROP TABLE IF EXISTS "blockly_challenge_coordinates";
I figured this out. This file is my 2.sql, and I had started working on a 3.sql but deleted it in favor of just refactoring my 2.sql. It seems there's some compilation step that grabs my evolution files and generates something based on them. Once I ran an sbt clean, everything started working fine.
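For anyone hitting the same thing, the fix boiled down to cleaning the build and then restarting the app so the evolutions are re-read (sbt run here is just one way to restart in dev mode):
sbt clean
sbt run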