Postgres migration - postgresql

I am trying to add a column to a table in Postgres from .NET via a migration. When I run update-database I get an error saying that an unrelated table already exists. It is not even the table I am trying to add the column to. How can I fix this issue? Thank you in advance for your response.
Failed executing DbCommand (5ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
CREATE TABLE "Customers" (
"Id" uuid NOT NULL,
"Name" text NULL,
"CreateDate" timestamp with time zone NOT NULL,
"UpdatedDate" timestamp with time zone NOT NULL,
CONSTRAINT "PK_Customers" PRIMARY KEY ("Id")
);
This is the command that fails, but I am trying to add a column to a different table. The error is:
42P07: relation "Customers" already exists
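One thing worth checking (a diagnostic sketch only, assuming EF Core's default history table name "__EFMigrationsHistory", which is not shown in the question): if the initial migration is missing from the history table, update-database replays it and tries to re-create tables such as "Customers" that already exist in the database.
-- List the migrations EF Core believes have already been applied
SELECT "MigrationId", "ProductVersion"
FROM "__EFMigrationsHistory"
ORDER BY "MigrationId";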

Related

How to reset sequence to match migration history

I did some manual modifications to my database table, so now Prisma won't let me run migrate dev. I want to undo my changes, so I'm back in sync with what Prisma wants me to have.
I had a lot of changes that I've managed to fix. But there's still one left that I don't know how to handle.
[*] Changed the `Product` table
[*] Altered column `id` (sequence changed)
This is a Postgres database. How do I reset the sequence to whatever value Prisma wants it to be? How do I know what value Prisma wants?
The problem was that the id column had been re-created with an INTEGER type. The original table had a SERIAL type. After fixing that I could successfully run my migration.
The correct types to use were found in my very first migration.sql file in the migrations folder.
-- CreateTable
CREATE TABLE "Product" (
"id" SERIAL NOT NULL,
"name" TEXT NOT NULL,
"desc" TEXT,
"longDesc" TEXT,
"price" INTEGER,
"imgUrl" TEXT,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY ("id")
);
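For reference, a minimal sketch of restoring the SERIAL behaviour by hand, assuming the Product table and id column from the migration above (the sequence name follows Postgres' usual SERIAL naming convention and is an assumption):
-- Recreate the sequence and attach it to the column, as SERIAL would have done
CREATE SEQUENCE IF NOT EXISTS "Product_id_seq" OWNED BY "Product"."id";
ALTER TABLE "Product" ALTER COLUMN "id" SET DEFAULT nextval('"Product_id_seq"');
-- Re-align the sequence so the next id lands just past the existing data
SELECT setval('"Product_id_seq"', COALESCE((SELECT MAX("id") FROM "Product"), 0) + 1, false);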

How to use timescale hypertables with foreign keys and keep a one-to-many relation?

I am trying to create a database with minimum redundancy in mind. We would like to use timescaledb hypertables (I run PostgreSQL v. 12 and timescaledb v. 1.7.4). The PostgreSQL code to create the tables is as follows - you can see the dbdiagram here https://dbdiagram.io/d/5f992f0e3a78976d7b797ca2
CREATE TABLE "datapoints" (
"id" bigserial UNIQUE NOT NULL,
"tstz" timestamptz NOT NULL,
"entity_id" bigint NOT NULL,
"value" real NOT NULL,
PRIMARY KEY ("tstz", "entity_id")
);
CREATE TABLE "datapoint_quality" (
"tstz" timestamptz NOT NULL,
"datapoint_id" bigint NOT NULL,
"flag_id" bigint NOT NULL,
PRIMARY KEY ("tstz", "datapoint_id", "flag_id")
);
CREATE TABLE "quality_flags" (
"id" bigserial PRIMARY KEY,
"value" text
);
CREATE TABLE "sensor_types" (
"id" bigserial PRIMARY KEY,
"name" text UNIQUE NOT NULL
);
CREATE TABLE "sensors" (
"tstz" timestamptz NOT NULL DEFAULT (now()),
"id" bigserial UNIQUE NOT NULL,
"name" text NOT NULL,
"parent" bigint NOT NULL,
"type" bigint NOT NULL,
PRIMARY KEY ("tstz", "id")
);
CREATE TABLE "datapoint_annotation" (
"tstz" timestamptz NOT NULL,
"datapoint_id" bigint NOT NULL,
"annotation_id" bigint NOT NULL,
PRIMARY KEY ("tstz", "datapoint_id", "annotation_id")
);
CREATE TABLE "annotations" (
"id" bigserial PRIMARY KEY NOT NULL,
"value" text NOT NULL
);
ALTER TABLE "datapoints" ADD FOREIGN KEY ("entity_id") REFERENCES "sensors" ("id");
ALTER TABLE "datapoint_quality" ADD FOREIGN KEY ("datapoint_id") REFERENCES "datapoints" ("id");
ALTER TABLE "datapoint_quality" ADD FOREIGN KEY ("flag_id") REFERENCES "quality_flags" ("id");
ALTER TABLE "sensors" ADD FOREIGN KEY ("parent") REFERENCES "sensors" ("id");
ALTER TABLE "sensors" ADD FOREIGN KEY ("type") REFERENCES "sensor_types" ("id");
ALTER TABLE "datapoint_annotation" ADD FOREIGN KEY ("datapoint_id") REFERENCES "datapoints" ("id");
ALTER TABLE "datapoint_annotation" ADD FOREIGN KEY ("annotation_id") REFERENCES "annotations" ("id");
CREATE UNIQUE INDEX ON "quality_flags" ("value");
CREATE UNIQUE INDEX ON "annotations" ("value");
So far so good - next I want to create the hypertables, which I do as:
SELECT create_hypertable('datapoint_annotation', 'tstz');
SELECT create_hypertable('datapoint_quality', 'tstz');
SELECT create_hypertable('datapoints', 'tstz');
SELECT create_hypertable('sensors', 'tstz');
This works well for the first two lines, but for the latter two I get the following error:
ERROR: cannot create a unique index without the column "tstz" (used in partitioning)
SQL state: TS103
I can include the tstz in the primary key as ("id", "tstz") and use that as a foreign key, but this gives me a one-to-one relation, and for minimum redundancy I would like to have a one-to-many relation.
I am sure there should be some way to do this - so what am I missing?
I'll take the foreign key constraint from datapoint_quality to datapoints as an example.
To make that work with a partitioned table, you need a unique constraint on datapoints. As the error message tells you, such a constraint must contain the partitioning key. So you end up with:
ALTER TABLE datapoints ADD UNIQUE (id, tstz);
To reference that unique constraint from datapoint_quality, you need to have the timestamp there too:
ALTER TABLE datapoint_quality ADD datapoints_tstz timestamp with time zone;
You have to fill it with the appropriate values:
UPDATE datapoint_quality AS dq
SET datapoints_tstz = d.tstz
FROM datapoints AS d
WHERE d.id = dq.datapoint_id;
Then set it NOT NULL:
ALTER TABLE datapoint_quality ALTER datapoints_tstz SET NOT NULL;
Now you can define your foreign key:
ALTER TABLE datapoint_quality
ADD FOREIGN KEY (datapoint_id, datapoints_tstz)
REFERENCES datapoints (id, tstz) MATCH FULL;
There is no other way to have foreign key constraints with partitioned tables.
I tested the solution proposed by Laurenz, both in a database of my own and in a replica of the original database from this case, using PostgreSQL 12.6 and timescaledb 1.7.5.
Everything went well until defining the foreign key for the table datapoint_quality:
ALTER TABLE datapoint_quality
ADD FOREIGN KEY (datapoint_id, datapoints_tstz)
REFERENCES datapoints (id, tstz) MATCH FULL;
The following error shows up in both databases I tested, after several attempts (including the one above) to define a foreign key to a hypertable:
ERROR: foreign keys to hypertables are not supported
SQL state: 0A000
According to https://docs.timescale.com/timescaledb/latest/overview/limitations/##distributed-hypertable-limitations, it looks like the above error is part of the hypertable limitations:
Foreign key constraints referencing a hypertable are not supported.
Considering this, does anyone know of a solution at the DB level to establish the relationships (1..* or ...) between a table without hypertables and other tables backed by hypertables?
Alternatively, perhaps this could be handled at the REST API level (e.g. Django or Flask), given that I have not found many more options in timescaledb or PostgreSQL itself.
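One hedged, DB-level sketch (not from the answers above; the function and trigger names are made up here): drop the declarative foreign key and enforce the reference from datapoint_quality to datapoints with a trigger, reusing the datapoints_tstz column added earlier.
CREATE OR REPLACE FUNCTION check_datapoint_exists() RETURNS trigger AS $$
BEGIN
    -- Reject rows that do not point at an existing datapoint
    IF NOT EXISTS (
        SELECT 1 FROM datapoints d
        WHERE d.id = NEW.datapoint_id AND d.tstz = NEW.datapoints_tstz
    ) THEN
        RAISE EXCEPTION 'datapoint (%, %) does not exist',
            NEW.datapoint_id, NEW.datapoints_tstz;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER datapoint_quality_fk_check
    BEFORE INSERT OR UPDATE ON datapoint_quality
    FOR EACH ROW EXECUTE FUNCTION check_datapoint_exists();
Unlike a real foreign key, this does not catch deletes or updates on datapoints, so a mirroring check would be needed there if that matters.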

Play Evolutions: getting a syntax error. What gives?

I'm trying to write an evolutions file, and keep getting a syntax error that simply baffles me. Below is the entire evolution.
The error message I'm getting is: syntax error at end of input Position: 32 [ERROR:0, SQLSTATE:42601]
Stack:
Play Framework 2.4
Postgresql 9.4
Slick 3.1.1
Scala 2.11
Play-Slick 1.1.1
Play-Slick Evolutions 1.1.1
I can run both the ups script and the downs script manually just fine. I've tried dropping my database and running through all my evolutions from scratch, and keep getting this error.
What gives? I can't find anything wrong with my syntax.
# --- !Ups
ALTER TABLE "blockly_challenge"
ADD COLUMN "diagram" CHAR(10) NOT NULL DEFAULT 'none',
ADD COLUMN "success_diagram" CHAR(10),
ADD COLUMN "robot_type" SMALLINT NOT NULL DEFAULT 1001,
ADD COLUMN "icon" CHAR(10);
CREATE TABLE "blockly_challenge_coordinates" (
"id" SERIAL,
PRIMARY KEY (id),
"x" SMALLINT NOT NULL,
"y" SMALLINT NOT NULL,
"challenge_uuid" CHAR(10) NOT NULL,
"dependent_on_uuid" CHAR(10)[] NOT NULL DEFAULT '{}',
"created_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX "blockly_coordinates_challenge_uuid_idx" ON "blockly_challenge_coordinates" ("challenge_uuid");
CREATE INDEX "blockly_coordinates_challenge_depends_on_uuid_idx" ON "blockly_challenge_coordinates" ("dependent_on_uuid");
# --- !Downs
ALTER TABLE "blockly_challenge"
DROP COLUMN IF EXISTS "diagram",
DROP COLUMN IF EXISTS "success_diagram",
DROP COLUMN IF EXISTS "robot_type",
DROP COLUMN IF EXISTS "icon",
DROP COLUMN IF EXISTS "dependent_on",
DROP COLUMN IF EXISTS "coordinates_y",
DROP COLUMN IF EXISTS "coordinates_x";
DROP INDEX IF EXISTS "blockly_coordinates_challenge_uuid_idx" CASCADE;
DROP INDEX IF EXISTS "blockly_coordinates_challenge_depends_on_uuid_idx" CASCADE;
DROP TABLE IF EXISTS "blockly_challenge_coordinates";
I figured this out. This file is my 2.sql, and I had started working on a 3.sql but deleted it in favor of just refactoring my 2.sql. It seems there is a compilation step that grabs my evolution files and generates something based on them. Once I ran sbt clean, everything started working fine.

apache phoenix DoNotRetryIOException

When I run the SQL to create a table, like this:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
USERCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
This SQL is wrong because of the duplicate column USERCOUNT, and an error occurs when I run it. However, although it throws an exception, the table is still created, exactly as if it had been created with this SQL:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
Unfortunately, the following exception is thrown when executing both a drop and a select on the table, so I can't drop this table.
Error: org.apache.hadoop.hbase.DoNotRetryIOException: FM_DAY: 34
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1316)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10525)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 34
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1336)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1289)
... 10 more
Does anyone know about this situation? And how can I delete this table?
Thanks.
I think I ran into this issue before. First, back up your db (in case my instructions don't work :))
Second:
hbase shell
Then use hbase commands to disable and then drop the table.
disable ...
drop ...
After doing this, the table may still show up in Phoenix despite the table not existing in HBase. This is because Phoenix caches metadata in an HBase table. So now you have to find the Phoenix metadata table and drop it (it will be regenerated the next time you start Phoenix).
https://mail-archives.apache.org/mod_mbox/phoenix-user/201403.mbox/%3CCAAF1JditzYY6370DVwajYj9qCHAFXbkorWyJhXVprrDW2vYYBA#mail.gmail.com%3E

PostgreSQL table creation in the wrong order

I have an .sql file that creates lots of tables that are related to each other.
I made another file for testing that holds only two statements:
CREATE TABLE "USER" (
"id" bigint NOT NULL,
"name" varchar(50),
PRIMARY KEY ("id"));
CREATE TABLE "PERSON" (
"id" bigint NOT NULL,
"name" varchar(50),
"user" bigint,
PRIMARY KEY ("id"),
CONSTRAINT "fk_user" FOREIGN KEY ("user") REFERENCES "USER" ("id"));
This works fine if I execute the file as it is, but with the other order - where table "PERSON" is created first - I get ERROR: relation "USER" does not exist.
Is it possible to make some changes (or use some additional options when running the psql command), leaving the order as it is, to make it work?
EDIT: I understand why this error happens in the given case, but I was hoping for a solution where I don't need to change the order of my CREATE statements (imagine you have hundreds of tables)... In MySQL you can simply use SET FOREIGN_KEY_CHECKS=0; and it will work. Do I have similar possibilities in PostgreSQL?
If you want table a to reference table b, you must either create table b before table a, or add the foreign key reference after creation:
ALTER TABLE a ADD FOREIGN KEY (a_col) REFERENCES b(b_col);
That works to create two tables that reference each other, too, but you won't be able to create rows unless you make one of them DEFERRABLE INITIALLY DEFERRED.
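A short sketch of that last point, using the hypothetical tables a and b from the statement above: a deferrable foreign key is only checked at COMMIT, so rows that reference each other can be inserted inside one transaction.
ALTER TABLE a ADD FOREIGN KEY (a_col) REFERENCES b (b_col)
    DEFERRABLE INITIALLY DEFERRED;
BEGIN;
-- The first insert references a b row that does not exist yet;
-- the constraint is only checked when the transaction commits.
INSERT INTO a (a_col) VALUES (1);
INSERT INTO b (b_col) VALUES (1);
COMMIT;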
You are getting the error because, at the point where you create the foreign key on the PERSON table, it references the USER table, which does not exist yet.
You can work around this issue by separating the FOREIGN KEY constraint out into its own statement and applying it after you have created both tables:
CREATE TABLE "PERSON" (
"id" bigint NOT NULL,
"name" varchar(50),
"user" bigint,
PRIMARY KEY ("id"));
CREATE TABLE "USER" (
"id" bigint NOT NULL,
"name" varchar(50),
PRIMARY KEY ("id"));
ALTER TABLE "PERSON"
ADD CONSTRAINT fk_user
FOREIGN KEY ("user")
REFERENCES "USER" (id);