Altering db migration script with Flyway - postgresql

I'm not really sure what I've done wrong, but it caused the other services to mess up their db migrations.
I'm hoping someone can help me find the cause.
Thank you!
We have a db migration script that creates a table:
V6__add_subscription_tables.sql
CREATE TABLE plan_subscription (
id bigint NOT NULL,
version bigint NOT NULL,
team_id bigint NOT NULL,
plan_id bigint NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (plan_id) REFERENCES plan (id),
UNIQUE (team_id)
);
I added another script that inserts a plan_subscription row as test data for the dev environment.
But in my current task the migration will fail if it's a fresh database, so I deleted the insertion.
V5002__add_test_data.sql
-- There is other test data here
/* THIS IS THE DATA THAT I DELETED
INSERT INTO plan_subscription VALUES (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'));
*/
And since I have to alter the table and add a column with a constraint, I moved the insertion of the test data into the new db migration script.
There seems to be no error, but it messed something up and I'm not sure of the cause.
V5004__add_date_occurred_in_plan_subscription.sql
ALTER TABLE plan_subscription ADD
date_occurred TIMESTAMP WITHOUT TIME ZONE NOT NULL;
INSERT INTO plan_subscription VALUES (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'), current_date);
So what I did was just remove the NOT NULL constraint and revert the deletion of the old test data.
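For reference, a sketch of an alternative that keeps NOT NULL but should also work when plan_subscription already contains rows: add the column with a DEFAULT so existing rows get backfilled, then insert the test data (the now() default below is only a placeholder assumption, not something from my actual scripts):
-- Backfill existing rows via a temporary DEFAULT, then drop the default
-- so future inserts must supply the value explicitly.
ALTER TABLE plan_subscription
    ADD date_occurred TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT now();
ALTER TABLE plan_subscription
    ALTER COLUMN date_occurred DROP DEFAULT;
INSERT INTO plan_subscription
VALUES (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'), current_date);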
I know this is kinda long and weird, but I'm hoping someone knows the reason.
Thank you!

Related

PostgreSQL "duplicate key violation" with SEQUENCE

[Issue resolved. See Answer below.]
I have just encountered a series of “duplicate key value violates unique constraint” errors with a system that has been working well for months. And I cannot determine why they occurred.
Here is the error:
org.springframework.dao.DuplicateKeyException: PreparedStatementCallback;
SQL [
INSERT INTO transaction_item
(transaction_group_id, transaction_type, start_time, end_time) VALUES
(?, ?::transaction_type_enum, ?, ?)
];
ERROR: duplicate key value violates unique constraint "transaction_item_pkey"
Detail: Key (transaction_id)=(67109) already exists.;
Here is the definition of the relevant SEQUENCE and TABLE:
CREATE SEQUENCE transaction_id_seq AS bigint;
CREATE TABLE transaction_item (
transaction_id bigint PRIMARY KEY DEFAULT NEXTVAL('transaction_id_seq'),
transaction_group_id bigint NOT NULL,
transaction_type transaction_type_enum NOT NULL,
start_time timestamp NOT NULL,
end_time timestamp NOT NULL
);
And here is the only SQL statement used for inserting to that table:
INSERT INTO transaction_item
(transaction_group_id, transaction_type, start_time, end_time) VALUES
(:transaction_group_id, :transaction_type::transaction_type_enum, :start_time, :end_time)
As you can see, I'm not explicitly trying to set the value of transaction_id. I've defined a default value in the column definition, which fetches a value from the SEQUENCE.
I have been under the impression that the above approach is safe, even in high-concurrency situations. A SEQUENCE should never return the same value twice, right?
I’d really appreciate some help to understand why this has occurred, and how to fix it. Thank you!
I found the cause of this issue.
A few months ago (during development of this system) an issue was discovered that made it necessary to purge any existing test data from the database. I did this using DELETE FROM statements for all TABLES and ALTER ... RESTART statements for all SEQUENCES. These statements were added to the Liquibase configuration to be executed during startup of the new code. From inspecting the logs at the time, it appears that an instance of the system was still running at the time of the migration, and this happened: the new instance of the system deleted all data from the TRANSACTION_ITEM table, the still-running instance then added more data to that table, and then the new instance restarted the SEQUENCE used for inserting those records. So yesterday, when I received the duplicate key violations, it was because the SEQUENCE had finally reached the ID values corresponding to the TRANSACTION_ITEM records that were added by the still-running instance back when the DB purge and migration occurred.
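For anyone who ends up in the same state: a sketch of one way to re-sync the sequence past the rows that already exist (plain PostgreSQL, nothing specific to this system):
-- Move the sequence past the highest transaction_id already in the table,
-- so NEXTVAL can no longer collide with existing rows.
SELECT setval('transaction_id_seq',
              (SELECT COALESCE(MAX(transaction_id), 1) FROM transaction_item));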
Long story, but it all makes sense now. Thanks to those who commented on this issue.

apache phoenix DoNotRetryIOException

When I run the SQL to create a table, like this:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
USERCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
This SQL is wrong because of the duplicate column USERCOUNT, and an error occurs when I run it. However, although it throws an exception, the table is created, and the table is exactly as if it had been created with this SQL:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
Unfortunately, the following exception is thrown when executing both a drop of the table and a select from the table, and I can't drop this table.
Error: org.apache.hadoop.hbase.DoNotRetryIOException: FM_DAY: 34
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1316)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10525)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 34
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1336)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1289)
... 10 more
Does anyone know about this situation? And how can I delete this table?
Thanks.
I think I ran into this issue before. First, back up your db (in case my instructions don't work :))
Second:
hbase shell
Then use hbase commands to disable and then drop the table.
disable 'FM_DAY'
drop 'FM_DAY'
After doing this, the table may still show up in Phoenix despite the table not existing in HBase. This is because Phoenix caches metadata in an HBase table. So now you have to find the Phoenix metadata table and drop it (it will be regenerated the next time you start Phoenix).
https://mail-archives.apache.org/mod_mbox/phoenix-user/201403.mbox/%3CCAAF1JditzYY6370DVwajYj9qCHAFXbkorWyJhXVprrDW2vYYBA#mail.gmail.com%3E

How should I actually implement a record history table with PostgreSQL?

I want to add revisioning for records in an existing application which stores data in a PostgreSQL database. I read about strategies e.g. in this question, this question and this blog post.
I think that the approach of creating a second history table which will rarely be queried will work best. However, I do have some practical problems. Let's say that this is the table I want to add revision control to:
create table people(
id serial not null primary key,
name varchar(255) not null
);
For this very simple table my history table could look like this:
create table people_history(
peopleId int not null references people(id) on delete cascade on update restrict,
revision int not null,
revisionTimestamp timestamptz not null default current_timestamp,
name character varying(255) not null,
primary key(peopleId, revision)
);
And this brings the first problems up:
How do I generate the revision number?
Of course I could create a sequence from which I request revision numbers, which would be easy. However, that would leave large gaps between revisions per person, since many people share the same sequence, and it would feel more natural if the revision numbers were ascending numbers without gaps per person.
So I am tempted to find my revision number with select max(revision)+1 from ... where peopleId=.... However, that could lead to a race condition if two threads ask for the next revision number and try to insert. That is very unlikely, I have to admit (especially in my case where only a few updates happen anyway), and it would not corrupt data, since the duplicate primary key would cause a transaction rollback, but it is not pretty either. I wonder if there is a prettier solution.
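For illustration, a sketch of one "prettier" variant: lock the parent row first so that concurrent writers for the same person are serialized, and compute max(revision)+1 inside the INSERT itself (the literal id and name below are placeholders):
BEGIN;
-- Serialize writers for this person by locking their row in people.
SELECT id FROM people WHERE id = 1 FOR UPDATE;
-- Compute the next gap-free revision in the same statement as the insert.
INSERT INTO people_history (peopleId, revision, name)
SELECT 1, COALESCE(MAX(revision), 0) + 1, 'placeholder name'
FROM people_history
WHERE peopleId = 1;
COMMIT;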
How do I insert data into the history table?
Two ways come to mind: manually on every statement that updates the main table, or using a trigger. A trigger sounds less error-prone, as it is less likely that I forget about a query somewhere. However, I cannot communicate to the application exactly which revision number was just created, can I? So if I want to create a couple of event tables like this:
create table peopleUserEditEvent (
peopleId int not null,
revision int not null,
userId int not null references users(id) on delete set null on update restrict,
comment text not null default '',
primary key(peopleId, revision),
foreign key (peopleId, revision) references people_history
);
Such a table lists some metadata for a revision and explains why the revision was made. In this case a user with a specific ID edited the data and might have supplied a comment.
In another case (and another event table) a cronjob might have changed something and documents the event which probably has no userId and no comment but other metadata.
To add that event data I need the revision id, and if the revision id was created by a trigger it will be difficult to find out (or is there a practical way to do so?).
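For concreteness, a minimal sketch of the trigger variant I have in mind (the function and trigger names are placeholders, and it assumes history rows are only ever written by this trigger):
CREATE OR REPLACE FUNCTION people_history_fn() RETURNS trigger AS $$
BEGIN
    -- The UPDATE on people row-locks the person, so concurrent changes to the
    -- same person run this trigger one at a time and max()+1 cannot collide.
    INSERT INTO people_history (peopleId, revision, name)
    SELECT NEW.id, COALESCE(MAX(revision), 0) + 1, NEW.name
    FROM people_history
    WHERE peopleId = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER people_history_trg
AFTER INSERT OR UPDATE ON people
FOR EACH ROW EXECUTE PROCEDURE people_history_fn();

-- The application could then read back the revision it just caused, inside the
-- same transaction:
-- SELECT max(revision) FROM people_history WHERE peopleId = :id;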
Well, you need one revisioning strategy for all the tables and columns you have. You can create one table to record all changes and insert into it whenever you run an UPDATE, INSERT, or DELETE statement. Maybe this example of the changelog from the iDempiere framework can help you:
CREATE TABLE ad_changelog (
ad_changelog_id NUMERIC(10,0) NOT NULL,
ad_session_id NUMERIC(10,0) NOT NULL,
ad_table_id NUMERIC(10,0) NOT NULL,
ad_column_id NUMERIC(10,0) NOT NULL,
isactive CHAR(1) DEFAULT 'Y'::bpchar NOT NULL,
created TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
createdby NUMERIC(10,0) NOT NULL,
updated TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
updatedby NUMERIC(10,0) NOT NULL,
record_id NUMERIC(10,0) NOT NULL,
oldvalue VARCHAR(2000),
newvalue VARCHAR(2000),
undo CHAR(1),
redo CHAR(1),
iscustomization CHAR(1) DEFAULT 'N'::bpchar NOT NULL,
description VARCHAR(255),
ad_changelog_uu VARCHAR(36) DEFAULT NULL::character varying,
CONSTRAINT adcolumn_adchangelog FOREIGN KEY (ad_column_id)
REFERENCES adempiere.ad_column(ad_column_id)
MATCH PARTIAL
ON DELETE CASCADE
ON UPDATE NO ACTION
DEFERRABLE
INITIALLY DEFERRED,
CONSTRAINT adsession_adchangelog FOREIGN KEY (ad_session_id)
REFERENCES adempiere.ad_session(ad_session_id)
MATCH PARTIAL
ON DELETE NO ACTION
ON UPDATE NO ACTION
DEFERRABLE
INITIALLY DEFERRED,
CONSTRAINT adtable_adchangelog FOREIGN KEY (ad_table_id)
REFERENCES adempiere.ad_table(ad_table_id)
MATCH PARTIAL
ON DELETE CASCADE
ON UPDATE NO ACTION
DEFERRABLE
INITIALLY DEFERRED
)
WITH (oids = false);
CREATE INDEX ad_changelog_speed ON adempiere.ad_changelog
USING btree (ad_table_id, record_id);
CREATE UNIQUE INDEX ad_changelog_uu_idx ON adempiere.ad_changelog
USING btree (ad_changelog_uu COLLATE pg_catalog."default");

How to work around error "Delete Prevented by referential constraint" in DB2?

So the problem I have is with a task provided to us by the Professor. We are to:
1. create tables
2. insert records into each table
3. update and delete (a minimum of 1 record) from each table
using a DB2 script that follows the old standard where COLLECTIONS are created instead of SCHEMAS.
Steps 1 and 2 are done, and the updates are done. My deletes are giving me a hard time; an example would be this:
CREATE TABLE UMALIK8.CAMPUS (
CAMPUS_ID VARCHAR (10) NOT NULL,
CAMPUS_NAME VARCHAR (30) NOT NULL,
MANAGER_NUM VARCHAR (10) NOT NULL,
CONSTRAINT UMALIK8.CAMPUS_PK PRIMARY KEY (CAMPUS_ID),
CONSTRAINT UMALIK8.CAMPUS_FK FOREIGN KEY (MANAGER_NUM)
REFERENCES UMALIK8.MANAGER(MANAGER_NUM)
ON DELETE CASCADE);
INSERT INTO UMALIK8.CAMPUS (CAMPUS_ID, CAMPUS_NAME, MANAGER_NUM)
VALUES ('King', 'King Campus', 'M021386');
DELETE FROM UMALIK8.CAMPUS
WHERE CAMPUS_ID = 'King';
So when I try to delete it, it says the delete is prevented by referential constraint "roomassign_fk", which doesn't make sense to me because the roomassign table is created like 3 or 4 tables AFTER the campus table; campus is the parent table, and the manager number is from the manager table, whose parent table is the Employee table. All throughout the delete script I'm getting referential errors and I don't know why. Even on my adult table, but my adult table has no foreign keys; it only has a primary key of its own, and it's got a bunch of child tables.
Now the order of my script is
Tables,
Inserts,
Updates,
Deletes
all separated from each other in one long script
Any idea how to fix this? What am I doing wrong?
Your help is greatly appreciated, thanks!
As discussed in the comments with the OP, it turns out that the issue is caused by a trigger on the table CAMPUS. As the OP asked, I'm putting this as an answer.
Is it possible that there is a trigger on the table UMALIK8.CAMPUS which inserts rows into a table that has an FK back to it?
What I mean is: if your table has an AFTER INSERT trigger, something like this can happen. You run the insert command on CAMPUS; after the insert happens, DB2 calls the trigger and inserts into ROOM (I think that is the name of the other table, given the FK name) a row which is linked (by FK) to the one you just inserted into CAMPUS. Then, if you try to delete the row in CAMPUS, the referential constraint "roomassign_fk" error happens because you have a child row that is linked to the one in CAMPUS.
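Not a full answer to the delete errors, but a sketch of how to check for such a trigger from SQL (this assumes the DB2 for LUW catalog views; on DB2 for i the equivalent views live in QSYS2 instead):
-- List any triggers defined on the CAMPUS table.
SELECT trigschema, trigname, trigtime, trigevent
FROM syscat.triggers
WHERE tabschema = 'UMALIK8'
  AND tabname = 'CAMPUS';

-- List which tables hold foreign keys that reference CAMPUS, to see where the
-- child rows blocking the delete could be coming from.
SELECT constname, tabschema, tabname
FROM syscat.references
WHERE reftabschema = 'UMALIK8'
  AND reftabname = 'CAMPUS';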

Adding Foreign Key constraint sucks up memory and causes paging

I'm having a lot of issues adding a simple foreign key constraint to a newly created empty table. Reference table is a tiny one with less than 40 records in it, but it gets referenced quite a bit.
Here's what happens: new table gets created successfully, but when adding a FK constraint, it "thinks" for a really long time and increases CPU load. Memory usage increases, the server starts paging like crazy and becomes unresponsive (connections time out). Cancelling the query does not help. The only thing that works is rebooting the server, which is very costly.
Here's the script I'm trying to run. I'm hoping SQL server gurus can help out. Thx!
USE [my_db]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MyNewTable](
[Column1ID] [int] NOT NULL,
[Column2ID] [int] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[MyNewTable] WITH CHECK ADD CONSTRAINT [FK_MyNewTable_Column1ID] FOREIGN KEY([Column1ID])
REFERENCES [dbo].[ReferenceTable] ([Column1ID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[MyNewTable] CHECK CONSTRAINT [FK_MyNewTable_Column1ID]
GO
EDIT: ReferenceTable is a small table that looks something like this:
[Column1ID] [int] IDENTITY(1,1) NOT NULL,
[TxtCol1] [varchar](50) NOT NULL,
[TxtCol2] [varchar](50) NOT NULL,
[TxtCol3] [varchar](200) NOT NULL,
[TxtCol4] [nvarchar](2000) NOT NULL,
[TxtCol5] [varchar](200) NOT NULL,
[BitCol1] [bit] NOT NULL,
[TxtCol6] [varchar](200) NOT NULL,
[NumCol1] [smallint] NOT NULL,
[ExternalColumnId] [int] NOT NULL,
[NumCol2] [int] NOT NULL
Column1ID is referenced a lot by other tables (FK's). ExternalColumnId is a FK to another table. The problem happens during one of the ALTER TABLE calls. Unfortunately both of those were run together, so I'm unable to say which one caused it.
EDIT: Once the DB goes into "thinking" mode, it's possible to bring it back up by switching it to single mode and then back to multi user mode. It is much better than rebooting the server but still unacceptable.
Random thought: do you have any transaction open?
The ALTER TABLE will require exclusive access (as does most DDL) and it could be that it's blocked by a schema lock, which in turn will block ReferenceTable, which in turn will block other queries...
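If you want to check that while the ALTER TABLE is "thinking", a sketch using the standard DMVs from a separate connection (nothing here is specific to your database):
-- Show sessions that are currently blocked and what they are waiting on;
-- look for the ALTER TABLE session and its blocking_session_id.
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;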
I'd suggest running each query batch in isolation.
First, create the table and see if that succeeds.
Next, try adding the foreign key constraint on its own using WITH NOCHECK instead of WITH CHECK. WITH NOCHECK will suppress any validation of the content in MyNewTable.Column1ID against the values in the column of the referenced table while the constraint is being created. If MyNewTable is empty or has few rows, I wouldn't think that this would have much effect, but I've encountered symptoms like you describe -- except that the table getting the new constraint had millions of rows in it.
Finally, run your last batch to try setting WITH CHECK on your new constraint. If this bogs down, you may just need to leave the new FK set WITH NOCHECK, however that isn't recommended since constraints defined WITH NOCHECK are ignored by the query optimizer until they are set back to WITH CHECK.
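To make that concrete, here is a sketch of the two batches, reusing the object names from your script:
-- Batch 1: create the constraint without validating existing rows.
ALTER TABLE [dbo].[MyNewTable] WITH NOCHECK
    ADD CONSTRAINT [FK_MyNewTable_Column1ID] FOREIGN KEY ([Column1ID])
    REFERENCES [dbo].[ReferenceTable] ([Column1ID])
    ON UPDATE CASCADE
    ON DELETE CASCADE
GO

-- Batch 2, run later and in isolation: re-validate the data so the optimizer
-- can trust the constraint again.
ALTER TABLE [dbo].[MyNewTable] WITH CHECK CHECK CONSTRAINT [FK_MyNewTable_Column1ID]
GO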
If this issue is reproducible, I would suggest you open a Microsoft Support case. Maybe it's a bug and you're hitting it. If it turns out to be a known issue, they will refund the charges for opening the case.
A handful of things to look into -- not solutions, but they might lead to something.
Are there any triggers defined?
Is the database being used or accessed at the time you are creating the new table, or is it idle?
Does anything (at time of deployment or otherwise) UPDATE Column1ID in the reference table, or delete rows in that table?
Is there a primary key or unique constraint on Column1ID in the reference table? (You don't have one listed, but I'd think SQL Server would fail right off if one wasn't present.)