Play Evolutions: getting a syntax error. What gives? - postgresql

I'm trying to write an evolutions file, and keep getting a syntax error that simply baffles me. Below is the entire evolution.
The error message I'm getting is: syntax error at end of input Position: 32 [ERROR:0, SQLSTATE:42601]
Stack:
Play Framework 2.4
Postgresql 9.4
Slick 3.1.1
Scala 2.11
Play-Slick 1.1.1
Play-Slick Evolutions 1.1.1
I can run both the ups script and the downs script manually just fine. I've tried dropping my database and running through all my evolutions from scratch, and keep getting this error.
What gives? I can't find anything wrong with my syntax.
# --- !Ups
ALTER TABLE "blockly_challenge"
  ADD COLUMN "diagram" CHAR(10) NOT NULL DEFAULT 'none',
  ADD COLUMN "success_diagram" CHAR(10),
  ADD COLUMN "robot_type" SMALLINT NOT NULL DEFAULT 1001,
  ADD COLUMN "icon" CHAR(10);
CREATE TABLE "blockly_challenge_coordinates" (
  "id" SERIAL,
  PRIMARY KEY (id),
  "x" SMALLINT NOT NULL,
  "y" SMALLINT NOT NULL,
  "challenge_uuid" CHAR(10) NOT NULL,
  "dependent_on_uuid" CHAR(10)[] NOT NULL DEFAULT '{}',
  "created_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  "updated_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX "blockly_coordinates_challenge_uuid_idx" ON "blockly_challenge_coordinates" ("challenge_uuid");
CREATE INDEX "blockly_coordinates_challenge_depends_on_uuid_idx" ON "blockly_challenge_coordinates" ("dependent_on_uuid");
# --- !Downs
ALTER TABLE "blockly_challenge"
  DROP COLUMN IF EXISTS "diagram",
  DROP COLUMN IF EXISTS "success_diagram",
  DROP COLUMN IF EXISTS "robot_type",
  DROP COLUMN IF EXISTS "icon",
  DROP COLUMN IF EXISTS "dependent_on",
  DROP COLUMN IF EXISTS "coordinates_y",
  DROP COLUMN IF EXISTS "coordinates_x";
DROP INDEX IF EXISTS "blockly_coordinates_challenge_uuid_idx" CASCADE;
DROP INDEX IF EXISTS "blockly_coordinates_challenge_depends_on_uuid_idx" CASCADE;
DROP TABLE IF EXISTS "blockly_challenge_coordinates";
I figured this out. This file is my 2.sql, and I had started working on a 3.sql but deleted it in favor of just refactoring my 2.sql. It seems there's some compilation step that grabs the evolution files and generates something based on them. Once I ran sbt clean, everything started working fine.

Related

syntax error in my postgres statement for alter sequence

UPDATE:
Using Postgres 14, I just get this error: Query 1 ERROR: ERROR: syntax error at or near "ok"
This is my database:
-- Sequence and defined type
CREATE SEQUENCE IF NOT EXISTS id_seq;
-- Table Definition
CREATE TABLE "public"."ok" (
  "id" int8 NOT NULL DEFAULT nextval('id_seq'::regclass),
  PRIMARY KEY ("id")
);
And I want to modify the sequence:
ALTER SEQUENCE ok_id_seq RESTART;
I keep getting errors at ok_id_seq.
I tried id_seq on its own, and tried quotes everywhere.
OK, I have now learned that sequences are not table-specific. I was effectively trying to do an ALTER TABLE-style ALTER SEQUENCE.
I also used SELECT * FROM information_schema.sequences; to view all available sequences.
I don't know what happened, so I recreated the table and checked for the sequence name.
Then I was able to alter it with RESTART.
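For reference, a minimal sketch of how one might locate and restart the sequence here (table and sequence names taken from the question; note that pg_get_serial_sequence only reports a sequence that is OWNED BY the column, so it returns NULL for a hand-made default like this one):

```sql
-- Look up the sequence owned by public.ok.id, if any
-- (NULL here, because id_seq was created by hand and never
-- attached with ALTER SEQUENCE ... OWNED BY).
SELECT pg_get_serial_sequence('public.ok', 'id');

-- The sequence in the question is simply id_seq, so restart it directly:
ALTER SEQUENCE id_seq RESTART;
```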

Duplicate Key error even after using "On Conflict" clause

My table has the following structure:
CREATE TABLE myTable
(
  user_id VARCHAR(100) NOT NULL,
  task_id VARCHAR(100) NOT NULL,
  start_time TIMESTAMP NOT NULL,
  SOME_COLUMN VARCHAR,
  col1 INTEGER,
  col2 INTEGER DEFAULT 0
);
ALTER TABLE myTable
  ADD CONSTRAINT pk_4_col_constraint UNIQUE (task_id, user_id, start_time, SOME_COLUMN);
ALTER TABLE myTable
  ADD CONSTRAINT pk_3_col_constraint UNIQUE (task_id, user_id, start_time);
CREATE INDEX IF NOT EXISTS index_myTable ON myTable USING btree (task_id);
However, when I try to insert data into the table using
INSERT INTO myTable VALUES (...)
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET ... --updating other columns except for [task_id, user_id, start_time]
I get the following error:
ERROR: duplicate key value violates unique constraint "pk_4_col_constraint"
Detail: Key (task_id, user_id, start_time, SOME_COLUMN)=(XXXXX, XXX, 2021-08-06 01:27:05, XXXXX) already exists.
I got the above error when I tried to insert the row programmatically. I was able to execute the query successfully via a SQL IDE.
Now I have the following questions:
How is that possible? When pk_3_col_constraint ensures my data is unique across 3 columns, adding one extra column should not change anything. What's happening here?
I am aware that although my constraint names start with 'pk', I am using UNIQUE constraints rather than a PRIMARY KEY constraint (probably a mistake while creating the constraints, but either way this error shouldn't have occurred).
Why didn't I get the error when using the SQL IDE?
I read in a few articles that a unique constraint works a little differently from a primary key constraint and hence causes this issue at times. If this is a known issue, is there any way I can replicate this error to understand it in more detail?
I am running PostgreSQL 11.9 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (GCC) 7.4.0, 64-bit. My programmatic environment was a Java AWS Lambda.
I have noticed people have faced this error occasionally in the past.
https://www.postgresql.org/message-id/15556-7b3ae3aba2c39c23%40postgresql.org
https://www.postgresql.org/message-id/flat/65AECD9A-CE13-4FCB-9158-23BE62BB65DD%40msqr.us#d05d2bb7b2f40437c2ccc9d485d8f41e, but there are no conclusions as to why it is happening.
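As a syntax sketch only (placeholder values; column and constraint names are from the question), one variation worth noting is that ON CONFLICT can name a specific constraint as its arbiter instead of a column list, so the wider four-column constraint can be targeted explicitly:

```sql
INSERT INTO myTable (user_id, task_id, start_time, SOME_COLUMN, col1, col2)
VALUES ('u1', 't1', '2021-08-06 01:27:05', 'x', 1, 0)
ON CONFLICT ON CONSTRAINT pk_4_col_constraint DO UPDATE
SET col1 = EXCLUDED.col1,
    col2 = EXCLUDED.col2;
```

An INSERT accepts only one arbiter, so a statement written this way would still raise an error for a row that conflicts on the three-column constraint but not the four-column one; it is shown here to illustrate the ON CONSTRAINT form, not as a definitive fix.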

After restoring my database serial removed from column in Postgresql

PostgreSQL lost the autoincrement feature after a restore. My database was created on Windows 10 (v 10.1) and I restored it to PostgreSQL on Ubuntu (v 9.6). Only after posting the question did I see that the versions are different. I didn't use any obscure features, only tables, functions, and columns with serials. Also, the restore process didn't complain about anything. I checked the dump options but couldn't find anything that caused the problem.
With pgAdmin, right-clicking the table > Scripts > CREATE Script on my original table gives this:
CREATE TABLE public.produto
(
produto_id integer NOT NULL DEFAULT nextval('produto_produto_id_seq'::regclass),
...
);
On my server, the restored database seems to have lost the feature:
CREATE TABLE public.produto
(
produto_id integer NOT NULL,
...
);
You didn't check for errors during restore of the database; there should have been a few.
A dump of a table like yours will look like this in PostgreSQL v10 (this is 10.3 and it looks slightly different in 10.1, but that's irrelevant to this case):
CREATE TABLE public.produto (
produto_id integer NOT NULL
);
ALTER TABLE public.produto OWNER TO laurenz;
CREATE SEQUENCE public.produto_produto_id_seq
    AS integer
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
ALTER TABLE public.produto_produto_id_seq OWNER TO laurenz;
ALTER SEQUENCE public.produto_produto_id_seq
    OWNED BY public.produto.produto_id;
ALTER TABLE ONLY public.produto
    ALTER COLUMN produto_id
    SET DEFAULT nextval('public.produto_produto_id_seq'::regclass);
Now the problem is that AS integer was introduced to CREATE SEQUENCE in PostgreSQL v10, so that statement will fail with a syntax error in 9.6.
What is the consequence?
The table is created like in the first statement.
The third statement creating the sequence fails.
All the following statements that require the sequence will also fail.
Note: It is not supported to downgrade PostgreSQL with dump and restore.
The solution is to manually edit the dump until it works; in particular, you'll have to remove the AS integer or AS bigint clause from CREATE SEQUENCE.
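Concretely, after editing the dump the sequence definition would look like this (the statement from the dump above with the AS integer line removed), which 9.6 accepts:

```sql
CREATE SEQUENCE public.produto_produto_id_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
```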

apache phoenix DoNotRetryIOException

When I run this SQL to create a table:
CREATE TABLE FM_DAY(
  APPID VARCHAR NOT NULL,
  CREATETIME VARCHAR NOT NULL,
  PLATFORM VARCHAR NOT NULL,
  USERCOUNT UNSIGNED_LONG,
  LONGCOUNT UNSIGNED_LONG,
  USERCOUNT UNSIGNED_LONG,
  CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
This SQL is wrong because of the duplicate column USERCOUNT, and an error occurs when I run it. However, although it throws an exception, the table is created, and it is exactly as if it had been created with this SQL:
CREATE TABLE FM_DAY(
  APPID VARCHAR NOT NULL,
  CREATETIME VARCHAR NOT NULL,
  PLATFORM VARCHAR NOT NULL,
  USERCOUNT UNSIGNED_LONG,
  LONGCOUNT UNSIGNED_LONG,
  CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
Unfortunately, the following exception is thrown when executing both a table drop and a table select, and I can't drop this table.
Error: org.apache.hadoop.hbase.DoNotRetryIOException: FM_DAY: 34
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1316)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10525)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 34
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1336)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1289)
... 10 more
Does anyone know this situation? And how can I delete this table?
Thanks.
I think I ran into this issue before. First, back up your db (in case my instructions don't work :))
Second:
hbase shell
Then use hbase commands to disable and then drop the table.
disable ...
drop ...
After doing this, the table may still show up in Phoenix despite not existing in HBase. This is because Phoenix caches metadata in an HBase table. So now you have to find the Phoenix metadata table and drop it (it will be regenerated the next time you start Phoenix).
https://mail-archives.apache.org/mod_mbox/phoenix-user/201403.mbox/%3CCAAF1JditzYY6370DVwajYj9qCHAFXbkorWyJhXVprrDW2vYYBA#mail.gmail.com%3E