I'm using Liquibase in a project and it has been working fine so far.
I added a new changeset and it works fine locally, but once deployed, the container hangs with the following statement:
"liquibase: Waiting for changelog lock...".
The resource limits of the deployment are not set.
Updating the "databasechangeloglock" table does not help, because the pod keeps re-locking it.
How can I solve this?
See this other question. If the lock is taken and the process exits unexpectedly, the lock will stay there.
According to this answer, you can remove the lock by running SQL directly:
UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null where ID=1;
Note: Depending on your DB engine, you may need to use FALSE or 'f' instead of 0 for the LOCKED value.
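If you want to see who is holding the lock before clearing it, a quick check of the same table can help (this assumes the default Liquibase table name; adjust it if yours is customized):
-- Inspect the current lock row before releasing it
SELECT ID, LOCKED, LOCKGRANTED, LOCKEDBY FROM DATABASECHANGELOGLOCK;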
Per your question, if your process itself is creating a new lock and still failing every time, then most likely the process is exiting/failing for a different reason (or checking for the lock in the wrong order).
Another option is to consider the Liquibase No ChangeLog Lock extension.
Note: This is probably a last resort. The extension could be an option if the changelog lock is causing you more trouble than it is worth (e.g. you only run one instance of the app and don't really need locking). It is likely not the "best" solution, but it is certainly an option depending on what you need. The README in the link says this too.
If you are completely sure that there is no active migration (pod) running, you can manually release the lock:
UPDATE <your lock table name> (e.g. DATABASECHANGELOGLOCK)
SET LOCKED=FALSE, LOCKGRANTED=NULL, LOCKEDBY=NULL
WHERE ID=1;
Usually the lock is cleared automatically; you might also want to check the isolation level of your database connection.
There is also an extension that handles this with a session-level lock, and it supports most RDBMSs. The way it works is that when the database connection closes, the lock is released: https://liquibase.jira.com/wiki/spaces/CONTRIB/pages/2933293057/SessionLock
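As far as I understand, the PostgreSQL implementation of that extension is based on advisory locks rather than a row in DATABASECHANGELOGLOCK (treat that as an assumption and check the extension's documentation). If so, you can watch the lock while an update runs with something like:
-- Show advisory locks and the sessions holding them (PostgreSQL)
SELECT l.pid, l.objid, l.granted, a.application_name
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'advisory';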
My network looks like this:
App nodes --(many connections)--> pg_bouncer nodes --(few sessions)--> PostgreSQL nodes
So pg_bouncer multiplexes connections, giving the app nodes the illusion that they are all connected directly.
The issue comes when I launch pg_dump: a few milliseconds after the dump finishes, all app nodes fail with errors saying "relation xxxx does not exist", though the table or sequence is actually there. I'm pretty sure the cause is pg_bouncer manipulating the "search_path" variable, so that app nodes no longer find tables in my schema. This happens at dump time even if the dump file is never imported or executed.
Note: I've searched SO and Google and I've seen there are many threads asking about the search_path in the generated file, but that's not what I'm asking about. I have no problem with the generated file; my issue is the pg_bouncer session that other clients are using, and I haven't found anything about that.
The most obvious workaround would probably be to set the search_path manually in the app, but beware of this trap: it's useless for the app to set it once at startup, since it may be assigned a different pg_bouncer session at the next transaction, and I cannot keep setting it all the time.
The next most obvious workaround would be to set it back to the intended value immediately after launching pg_dump, but there's a race condition here, and the other nodes are quick enough that I fear they would still fail.
Is there a way to stop pg_dump from manipulating this variable, or to make sure it resets it before exiting?
(Also, I'm taking for granted that pg_dump and search_path are the cause of this; can you suggest a way to confirm that? All the evidence I have is the errors appearing a few milliseconds later and the set search_path instruction in the generated file, which produces the same errors if executed.)
Thanks
Don't connect pg_dump through pgbouncer with transaction pooling. Just change the port number so it connects directly to the database. pg_dump is incompatible with transaction pooling.
You might be able to get it to work anyway by setting server_reset_query_always = 1.
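To confirm the search_path theory from the question, a quick check through the pooled connection right after a dump should be enough; the expected value shown in the comment is just an example of what an app schema setup might look like:
-- Run from an app node, through pg_bouncer, right after pg_dump finishes.
-- If the pooled session was polluted by pg_dump, this returns an empty search_path
-- instead of something like "myschema, public".
SHOW search_path;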
I'm running a backup/restore of a schema every day and get this error every now and then:
pg_dump: Error message from server: ERROR: relation not found (OID 86157003)
DETAIL: This can be validly caused by a concurrent delete operation on this object.
pg_dump: The command was: LOCK TABLE myschema.products IN ACCESS SHARE MODE
How can this be avoided? It seems the table was being used at the time, or someone was running something against it. Can I just kill all connections to the DB before restoring, or is there another alternative?
As far as I understand, pg_dump should be able to run even while users are doing something with the table, but that doesn't seem to be the case.
Thanks,
It is somewhat buried but the answer lies here:
https://www.postgresql.org/docs/current/app-pgdump.html
"
-j njobs
...
To detect this conflict, the pg_dump worker process requests another shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime and there is no way to continue with the dump, so pg_dump has no choice but to abort the dump.
"
Which is borne out by this line in the error message:
"LOCK TABLE myschema.products IN ACCESS SHARE MODE"
ACCESS SHARE cooperates with all other lock modes except ACCESS EXCLUSIVE. ACCESS EXCLUSIVE is taken by DROP TABLE, TRUNCATE, REINDEX, etc.; see Locks for more information. So you need to run the dump at a time when the operations that take ACCESS EXCLUSIVE are known not to happen, or block/drop the offending connections.
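If you want to catch the offender, something along these lines can show which sessions are holding or waiting for ACCESS EXCLUSIVE locks while the dump runs (a sketch; run it in another session when the error occurs):
-- Sessions holding or waiting for ACCESS EXCLUSIVE locks on relations
SELECT a.pid, a.usename, a.query, l.relation::regclass AS table_name, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'relation'
  AND l.mode = 'AccessExclusiveLock';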
Somebody dropped a table between the time pg_dump took an inventory of the tables and the time it tried to dump that table.
This can happen if your application is in the habit of dropping tables all the time.
This is not an answer to your main question, but a caution regarding:
As far as I understand, pg_dump could run even if users are doing something with the table but it doesn't seem to be the case.
That assumes the application performs every action in a single transaction. I have known of applications that accomplish some tasks using more than one.
I don't know exactly what the tasks were or if it was unavoidable that they use multiple transactions, but dumps could only be trusted when the application was idle or, better yet, when the service was stopped.
For the function that those applications performed, it wasn't a big deal to work around downtime or stop the services.
I don't know how you'd determine this behaviour without being told by the developers. Just something to consider.
We have a requirement that says we should keep a copy of all the items that were ever in our system. The simplest way to explain it would be replication that ignores DELETE statements (INSERT and UPDATE are fine).
Is this possible? Or maybe the better question is: what is the best approach to tackle this kind of problem?
Make a copy/replica of the current database and use triggers via dblink from the current database to the replica. Use AFTER INSERT and AFTER UPDATE triggers to insert and update data in the replica.
So whenever a row is inserted or updated in the current database, the change is reflected directly in the replica.
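A minimal sketch of that idea, assuming PostgreSQL with the dblink extension; the items table, its columns, and the connection string are placeholders you would adapt to your own schema:
-- Mirror INSERTs and UPDATEs on "items" into a replica via dblink (hypothetical names)
-- Assumes the replica table has a primary key on id.
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION mirror_items() RETURNS trigger AS $$
BEGIN
    PERFORM dblink_exec(
        'host=replica-host dbname=replica_db user=repl password=secret',  -- placeholder connection
        format(
            'INSERT INTO items (id, name, price) VALUES (%L, %L, %L)
             ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, price = EXCLUDED.price',
            NEW.id, NEW.name, NEW.price
        )
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- EXECUTE FUNCTION needs PostgreSQL 11+; use EXECUTE PROCEDURE on older versions.
CREATE TRIGGER items_mirror
AFTER INSERT OR UPDATE ON items
FOR EACH ROW EXECUTE FUNCTION mirror_items();
Note that the remote INSERT ... ON CONFLICT covers both the insert and the update case, and that a synchronous dblink call inside a trigger adds latency to every write, which is part of the overhead the next answer warns about.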
I'm not sure that I understand the question completely, but I'll try to help:
First (contrary to @Sunit), I suggest avoiding triggers. Triggers introduce additional overhead and impact performance.
The solution I would use (and am actually using in a few of my projects with similar demands) is to not use DELETE at all. Instead, add a bit (boolean) column called "Deleted" with a default value of 0 (false), and instead of deleting a row, update this field to 1 (true). You'll also need to change your other queries (SELECT) to include something like "WHERE Deleted = 0".
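As a rough sketch of that approach (table and column names are placeholders; in PostgreSQL a boolean column is the natural fit, with the 0/1 convention above mapping to false/true):
-- Add a soft-delete flag instead of physically deleting rows
ALTER TABLE items ADD COLUMN deleted boolean NOT NULL DEFAULT false;

-- "Delete" a row
UPDATE items SET deleted = true WHERE id = 123;

-- Normal queries must now filter the flag
SELECT * FROM items WHERE deleted = false;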
Another option is to continue using DELETE as usual, allowing records to be deleted from both the primary and the replica, but to configure WAL archiving and store the WAL archives in some shared directory. This allows point-in-time recovery, meaning that you'll be able to restore another PostgreSQL instance to the state of your cluster at any moment in time (i.e. before the deletion). This way you'll have a trace of deleted records, but a fairly complicated procedure to reach them. Depending on how often deleted records will be checked in the future (maybe they won't be checked at all, but simply kept just in case), this approach may also help.
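The basic archiving setup for that second option looks roughly like this (ALTER SYSTEM needs PostgreSQL 9.4+, the wal_level value 'replica' applies to 9.6+, the shared directory path is a placeholder, and the wal_level/archive_mode changes take effect only after a server restart):
-- Enable continuous WAL archiving into a shared directory (placeholder path)
ALTER SYSTEM SET wal_level = 'replica';
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET archive_command = 'cp %p /shared/wal_archive/%f';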
While editing some records in my PostgreSQL database using SQL in the terminal (on Ubuntu Lucid), I ran a wrong update.
Instead of -
update mytable set start_time='13:06:00' where id=123;
I typed -
update mytable set start_time='13:06:00';
So all records now have the same start_time value.
Is there a way to undo this change? There are some 500+ records in the table, and I do not know what the start_time value of each record was.
Is it lost forever?
I'm assuming it was a transaction that's already committed? If so, that's what "commit" means, you can't go back.
Some data may be recoverable if you're lucky. Stop the database NOW.
Here's an answer I wrote on the same topic earlier. I hope it's helpful.
This might help too: Recover deleted rows in postgresql.
Unless the data is absolutely critical, just restore from backups, it'll be lots easier and less painful. If you didn't have backups, consider yourself soundly thwacked.
If you catch the mistake and immediately bring down any applications using the database and take it offline, you can potentially use Point-in-Time Recovery (PITR) to replay your Write Ahead Log (WAL) files up to, but not including, the moment when the errant transaction was made. This would return the database to the state it was in prior, thus effectively 'undoing' that transaction.
As an approach for a production application database it has a number of obvious limitations, but there are circumstances in which PITR may be the best option available, especially when critical data loss has occurred. However, it is of no value if archiving was not already configured before the corruption event.
https://www.postgresql.org/docs/current/static/continuous-archiving.html
Similar capabilities exist with other relational database engines.
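Whether PITR is even an option is easy to check after the fact; on PostgreSQL, for example, these settings show whether continuous archiving was configured before the mistake:
-- If archive_mode is off or archive_command is empty, there is nothing to replay from
SHOW wal_level;
SHOW archive_mode;
SHOW archive_command;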
I'm working with SQL Server 2000 and I need to determine which of these databases are actually being used.
Is there a SQL script I can use to tell me the last time a database was updated? Read? Etc.?
I Googled it, but came up empty.
Edit: the following targets the issue of finding, post facto, the last access date. With regards to figuring out who is using which databases, this can be definitively monitored with the right filters in SQL Profiler. Beware, however, that Profiler traces can get quite big (and hence slow/hard to analyze) when the filters are not adequate.
Changes to the database schema, i.e. the addition of tables, columns, triggers and other such objects, typically leave "dated" tracks in the system tables/views (I can provide more detail about that if need be).
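For example, on SQL Server 2000 object creation dates are recorded in sysobjects, so a rough "last schema change" check might look like this (it only reflects creation dates, not later alterations):
-- SQL Server 2000: user tables ordered by creation date, newest first
SELECT name, crdate
FROM sysobjects
WHERE xtype = 'U'
ORDER BY crdate DESC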
However, unless the data itself includes timestamps of some sort, there are typically very few sure-fire ways of knowing when data was changed, short of a recovery model that keeps all such changes in the log. In that case you need tools to "decompile" the log data...
With regards to detecting "read" activity... a tough one. There may be some computer-forensics-like tricks, but again, no easy solution I'm afraid (beyond seeing, in the server activity, the very last query for each still-active connection; obviously a very transient thing ;-) ).
I typically run the Profiler if I suspect the database is actually being used. If there is no activity, simply set it to read-only or take it offline.
You can use a transaction log reader to check when data in a database was last modified.
With SQL 2000, I do not know of a way to know when the data was read.
What you can do is put a trigger on logins to the database, track when a login is successful, and capture the associated variables to find out who / what application is using the DB.
If your database uses full recovery, create a new transaction log backup and check its size. The log backup will have a fixed, small length when no changes were made to the database since the previous transaction log backup was taken, and it will be larger when there were changes.
This is not a very exact method, but it can be easily checked, and might work for you.
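A sketch of that check, with a placeholder database name and backup path:
-- Take a transaction log backup
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'

-- Then compare recent log backup sizes recorded in msdb;
-- a constant, minimal size suggests no data changes between backups
SELECT TOP 5 database_name, backup_finish_date, backup_size
FROM msdb.dbo.backupset
WHERE database_name = 'MyDatabase' AND type = 'L'
ORDER BY backup_finish_date DESC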