We have a PostgreSQL database that has always been updated with Liquibase by a single user (a DB-owner role). We recently ran an update command as a different user (a system owner), and Liquibase treated the database as brand new: it tried to execute every changeset from the beginning, not just the last few we expected to be missing from the DATABASECHANGELOG table. This failed, of course, since those changesets had already been applied by the other user. But it raised the question of how to handle this. Do you know why it's doing this? Is this a DB-specific issue, or an issue at the Liquibase level? Or is it not an issue at all, and we should simply accept as part of our business processes that all updates to a particular DB must be executed by the same user?
Liquibase determines which changeSets have run by selecting from the DATABASECHANGELOG table. My guess is that the new user has a different default schema and so is looking in a different place for that table.
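You can check this theory from a psql session opened as each of the two users. This is a sketch for PostgreSQL and assumes the tracking table kept its default name:

```sql
-- Run this once as each user. If the results differ, Liquibase is
-- consulting (or creating) its tracking table in two different schemas.
SHOW search_path;

-- Find every schema that actually contains a Liquibase tracking table:
SELECT schemaname, tablename
FROM pg_tables
WHERE lower(tablename) = 'databasechangelog';
```

If the second query returns more than one row, the second user's update created a fresh tracking table in its own schema, which would explain the "new database" behavior.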
Depending on how you run Liquibase, there is a changelogSchemaName or similar attribute to control where Liquibase looks for the table.
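For example, you could pin both schemas in liquibase.properties so that every user reads the same DATABASECHANGELOG. The exact property name has varied across Liquibase versions, so check the documentation for yours; the schema names below are illustrative:

```properties
# liquibase.properties
# schema for the application objects being migrated
defaultSchemaName=public
# schema where the DATABASECHANGELOG / DATABASECHANGELOGLOCK tables live
liquibaseSchemaName=public
```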
It appears that Liquibase is user-agnostic: that information is neither recorded in nor needed by the DATABASECHANGELOG table.
In one of my migration files on my development box I have this DB2 request:
CALL SYSPROC.ADMIN_CMD('REORG TABLE COST_RULES.LOW_DLL_EXCEP');
This call seems to be needed before a later migration can ALTER a column on that table. In the past the devops person manually executed the REORG on the test database, but I'd like to put it into a migration so it gets done automatically.
If I add this, it will change the checksum of an already-applied migration file, causing a Flyway error when the deployment happens. What Flyway steps do I need to take so the deployed job works?
When a table has had certain kinds of alterations, or a certain number of alterations, Db2 can put the table into reorg pending status.
When the table is NOT in reorg-pending state, there is no need to perform a REORG solely for the purposes of a migration.
Consider changing your migration to make reorg conditional, and also consider online reorg if the table type is compatible.
You can query the SYSIBMADM.ADMINTABINFO administrative view and check for REORG_PENDING='Y' on your table to decide whether or not to perform a reorg. You can use an SQL PL anonymous block to run the conditional logic and the conditional reorg.
You can use the INPLACE option for reorg (and related options) if the table is suitable for online reorg.
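Put together, the conditional migration could look like the SQL PL anonymous block below. This is a sketch: the schema and table names come from the question, and you would need a non-default statement terminator (here @) configured in your migration tool, since the block contains embedded semicolons:

```sql
-- Only reorg when Db2 has actually flagged the table as reorg-pending.
BEGIN
  IF EXISTS (SELECT 1
             FROM SYSIBMADM.ADMINTABINFO
             WHERE TABSCHEMA = 'COST_RULES'
               AND TABNAME   = 'LOW_DLL_EXCEP'
               AND REORG_PENDING = 'Y') THEN
    CALL SYSPROC.ADMIN_CMD('REORG TABLE COST_RULES.LOW_DLL_EXCEP');
  END IF;
END@
```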
You could also use an entirely separate migration, to test if any tables in your schema(s) of interest are in reorg_pending state, and take appropriate action at that time, including checking the table type to see if an online reorg is appropriate. Such a migration would be re-runnable. It would have its own checksum.
If you're correcting a script that has already been run, and you don't want it to run again (but you need the change either to reflect a change done manually on production or to spin up copies faithfully) then run flyway repair once the change is made - that will recalculate the checksums to align with the current state of the scripts.
I am getting ERROR: cannot execute TRUNCATE TABLE in a read-only transaction in Heroku PostgreSQL. How can I fix it?
I am trying to TRUNCATE a table.
I am using the Heroku Postgres.
I have tried to figure out in the UI how I could change my permissions, or something similar, to be allowed to run more than read-only transactions, but with no success.
This is currently possible: you have to set the transaction to READ WRITE when creating a dataclip. Here is an example:
BEGIN;
SET TRANSACTION READ WRITE;
DELETE FROM mytable WHERE id > 2130;
COMMIT;
The feature you're looking at (Heroku Dataclips docs here) is intentionally read-only. It's a reporting tool, not a database management tool. Its express purpose is to allow surfacing data to a wider group of people associated with a project without the risk of someone accidentally (or otherwise) deleting or modifying data improperly. There is no way to make Dataclips read-write.
If you want full control to delete/modify data you'll need to use an appropriate interface, psql or pgAdmin if you prefer a GUI.
I had the same problem, but the error was solved by adding the following:
begin; set transaction read write;
(Without COMMIT;)
I don't get errors anymore, but I can't see my new table in the Schema Explorer. Do you know why?
I want to get the list of stored procedures which were recently changed.
In MS SQL Server, there are system tables that store that information, so we can easily retrieve what has changed. Similarly, I want to find the most recently changed stored procedures and tables in PostgreSQL.
Thanks
You can use an EVENT TRIGGER for logging. More information about how to create and use event triggers can be found in the manual and at www.youlikeprogramming.com.
You need at least version 9.3.
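As a sketch, a minimal DDL log that works from 9.3 onward might look like this (the table and function names are made up for the example):

```sql
-- A place to record each DDL command as it happens.
CREATE TABLE ddl_log (
    occurred_at timestamptz NOT NULL DEFAULT now(),
    username    text        NOT NULL DEFAULT current_user,
    command_tag text        NOT NULL
);

CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
BEGIN
    -- TG_TAG holds the command tag, e.g. 'CREATE FUNCTION' or 'ALTER TABLE'
    INSERT INTO ddl_log (command_tag) VALUES (TG_TAG);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();
```

Afterwards, querying ddl_log ordered by occurred_at gives you the recent schema changes, including CREATE/ALTER of functions and tables.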
I have a large PostgreSQL database, and I want to track all of its tables to know whether a change has been made.
The reason is that I don't know the relations between the different tables in the database.
I googled about it but I couldn't find anything helpful.
So how can I know if a change has been made to a table ?
There isn't currently a global audit function in PostgreSQL.
It'll be possible to build one using the new logical changeset extraction (logical decoding) feature in 9.4, and I know some people are working on that.
In the meantime, you need to add some form of audit trigger to every table.
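A generic row-level audit trigger might look like the following. This is a sketch only: the table, function, and trigger names are illustrative, and you still have to attach the trigger to each table you want tracked:

```sql
-- One log table shared by all audited tables.
CREATE TABLE audit_log (
    changed_at timestamptz NOT NULL DEFAULT now(),
    table_name text        NOT NULL,
    operation  text        NOT NULL,  -- 'INSERT', 'UPDATE', or 'DELETE'
    row_data   json
);

CREATE OR REPLACE FUNCTION audit_changes() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD));
    ELSE
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(NEW));
    END IF;
    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- Repeat this CREATE TRIGGER for every table of interest:
CREATE TRIGGER some_table_audit
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE audit_changes();
```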
Actually a simple question, but I wasn't able to find any good conclusive answer.
Assuming a production database foo_prd, and a newer version of the same foo_new (on the same server) that is supposed to replace the old one. What is the cleanest way to seamlessly switch from _prd to _new?
RENAME-ing the databases would require disconnecting the current users via their pid. That would take down some requests, and new users might connect during the process. I was thinking of creating the tables of the new version in a different SCHEMA and then changing the search_path, e.g. from "$user",prd to "$user",new,prd.
What could possibly go wrong? Do you have any better suggestions? Am I taking the wrong approach altogether?
Do as you suggest: create the tables of the new database as different schema and then change the search_path.
But also create a user with the same name as the new schema, and before changing the search_path, test everything by logging in as this user with each of your apps - the new schema will be first in that user's search_path by default, because the name matches.
Finally, take care when you come to drop the old schema - I suggest renaming it first in case anything refers to its objects using a qualified reference (e.g. prd.table or prd.function). After a few days/weeks it can then be dropped with confidence.
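Put together, the switch could be sketched like this (the schema, database, and role names are illustrative):

```sql
-- Build the new version alongside the old one:
CREATE SCHEMA new;
-- ...create and populate the new tables in schema "new"...

-- A test role named like the schema resolves it first, because the
-- default search_path is "$user", public:
CREATE ROLE new LOGIN;

-- Once testing passes, flip everyone over:
ALTER DATABASE foo SET search_path = "$user", new, prd;

-- Rename rather than drop at first, in case anything uses a
-- qualified reference like prd.some_table:
ALTER SCHEMA prd RENAME TO prd_old;
-- ...and after a few weeks: DROP SCHEMA prd_old CASCADE;
```

Note that ALTER DATABASE ... SET only affects new sessions, so existing connections keep the old search_path until they reconnect.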
I would version my schema, and change my app to point to the new schema when ready.