SQL Database keeps going into restoring... state? - sql-server-2019

I am following a coworker's guide to setting up SQL mirroring with a witness, which had worked previously. Something went wrong, so I had to delete all related endpoints, certificates, logins, and users and start over.
I restored the database to my principal with right click -> Restore Database, and that restored fine. I thought maybe it was somehow still configured for mirroring, so I ran
ALTER DATABASE myDB SET PARTNER OFF;
ALTER DATABASE myDB SET WITNESS OFF;
and both said that myDB was not configured for mirroring.
I take a backup of the myDB database from my principal with
USE MASTER
GO
BACKUP DATABASE myDB
TO DISK = 'D:\_Backups\myDB.bak'
GO
BACKUP LOG myDB
TO DISK = 'D:\_Backups\myDB.trn'
GO
Then I tried to restore to my Mirror SQL Server with
USE MASTER
GO
RESTORE DATABASE myDB
FROM DISK = 'D:\_Backups\myDB.bak'
WITH NORECOVERY
GO
RESTORE LOG myDB
FROM DISK = 'D:\_Backups\myDB.trn'
WITH NORECOVERY
GO
However, now on my mirror the database is stuck showing myDB (Restoring...). I can't drop the database or get it out of this state, so I cannot set up the PARTNER or WITNESS part. How can I resolve this?
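From what I've read, the mirror copy is expected to sit in the Restoring... state until mirroring is actually established with ALTER DATABASE ... SET PARTNER, and a recovery-only restore would bring it online if I wanted to abandon mirroring altogether, though I'm not sure which applies here:
-- only if the goal is to use this copy directly rather than as a mirror
RESTORE DATABASE myDB WITH RECOVERY;
GO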

Related

I have loaded wrong psql dump into my database, anyway to revert?

Ok, I screwed up.
I dumped one of my psql (9.6.18) staging databases with the following command
pg_dump -U postgres -d <dbname> > db.out
And after doing some testing, I "restored" the data using the following command.
psql -f db.out postgres
Notice the absence of the -d option? Yup. And that trailing postgres was supposed to be the username.
And as the database happened to have the same name as its user, it overwrote the 'default' database (postgres), which had data that other QAs were using.
I cancelled the operation quickly as soon as I realised my mistake, but the damage was still done. Around 1/3 ~ 1/2 of the database is roughly identical to the staging database - at least in terms of the schema.
Is there any way to revert this? I am still looking for any other dumps, in case any of these guys made one, but I don't think there are any from the past two to three months. Seems like I have no choice but to own up and apologise to them in the morning.
Without a recent dump or some sort of PITR replication setup, you can't easily revert this. The only option is to manually go through the log of what was restored and remove/alter it in the postgres database. That will work for the schema; the data is another matter.
FYI, the postgres database should not really be used as a 'working' database. It is there as a database to connect to for doing other operations, such as CREATE DATABASE, or to bootstrap your way into a cluster. If it had been left empty, the above would not have been a problem: you could have done, from another database, DROP DATABASE postgres; and then CREATE DATABASE postgres;.
Do you have a capture of the output of the psql -f db.out postgres run?
Since the pg_dump didn't specify --clean or -c, it should not have overwritten anything, just appended. And if your tables have unique or primary keys, most of the data copy operations should have failed with unique key violations and rolled back. Even one overlapping row (per table) would roll back the entire dataset for that table.
Without having the output, it will be hard to figure out what damage has actually been done.
You should also immediately copy the pg_xlog data someplace safe. If it comes down to it, you might be able to use pg_xlogdump to figure out what changes committed and what did not.
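To illustrate why a single conflicting row protects a whole table, here is a minimal sketch (the table and values are made up, not from the dump):
-- a table that already has data, standing in for one of the overwritten tables
CREATE TABLE qa_items (id integer PRIMARY KEY, label text);
INSERT INTO qa_items VALUES (1, 'existing row');
-- the dump replays one COPY / multi-row INSERT per table; if even one id
-- collides, the whole statement errors out and none of its rows are kept
INSERT INTO qa_items VALUES (1, 'from dump'), (2, 'from dump');
-- ERROR:  duplicate key value violates unique constraint "qa_items_pkey"
SELECT count(*) FROM qa_items;  -- still 1: nothing from the failed statement was applied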

How to exit out of database recovery mode (currently locked in read-only mode)

A slave database was set up some time ago for the purpose of backing up or replicating a remote database. However, I can no longer write to the database using a Delphi-based ETL (the ETL works for another database pair, but to date has never been used for this particular pair). The replication database was set up by somebody else who has since left the company. I am reasonably sure this has been set up as a replication database; however, the employee who left told me that replication never worked, for unrelated reasons. Using the ETL we can (using SQL queries) read from the one database and write back to the replication database - or we should be able to, as it is currently read-only.
I have tried:
Maintenance such as VACUUM
Attempt to drop tables and the entire database
Restore a full backup from the master database
None of these work, and I am told the database is read-only.
I have looked at postgresql.conf and see that hot_standby is enabled, so I think (but am not 100% certain) that the database is in some sort of replication mode (I've never touched replication as supported by Postgres, so I wouldn't know).
I have checked permissions in pg_hba.conf and see there are some credentials in there for replication. I am not sure whether this activates "replication mode" for the database, or simply means these credentials are for replication only.
I have been through months' worth of log files (this has not been working since our IT department upgraded the entire network about 5 months ago). I see the log file contents shown below, repeated over and over with nothing else for months. Note that the IP address shown below is listed in the pg_hba.conf file, so the credentials are valid.
The database is in recovery mode, as I have found by using:
select pg_is_in_recovery();
This explains to me why it's read-only, but why can I not restore databases, or simply dump the entire database and start again (it's a backup, so losing/restoring it is not an issue)?
I was tempted to try modifying the recovery.conf file (which exists) but I read/believe that once recovery has been initiated (which in my case it has) modifying the file will have no effect.
I'm using a legacy version of Postgres: 9.2.9
Any help here would be greatly appreciated, as I have been working solidly on this for more than a day now.
Log File entry (sample):
FATAL: could not connect to the primary server:
FATAL: no pg_hba.conf entry for replication connection from host "192.168.20.2", user "postgres", SSL off
FATAL: could not connect to the primary server: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
A couple of options would work for me:
Convert the database from being a read-only replication database, to a standard read/write database or
Dump/drop the entire database so I can create a new one with write capabilities.
It looks like the two database clusters have been set up for replication, but configuration changes on one of the machines broke the replication (changed pg_hba.conf on the primary, changed IP addresses, …).
Here is the way to your desired solutions:
Bringing the standby out of recovery mode: Run
/path/to/pg_ctl promote -D /path/to/data/directory
on the standby as operating system user postgres.
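After the promote, a quick check from psql on the standby confirms it has left recovery; this is the same function the question already used:
select pg_is_in_recovery();
-- should now return f, and the cluster accepts writes again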
Nuking the standby: Remove the data directory on the standby with rm -rf (or the equivalent on your operating system). Kill all PostgreSQL processes.
Then use initdb to create a new database cluster in the same location.

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to setup the pgexercises data in my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are (each can be checked with the queries sketched after this list):
trying CREATE statements on a read-only replica (the entire instance is read-only).
<username> has default_transaction_read_only set to ON
the database has default_transaction_read_only set to ON
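A quick way to check each of these from psql (the role name below is a placeholder):
-- 1. is the whole instance a read-only standby?
select pg_is_in_recovery();
-- 2. does the role force read-only transactions?
select rolname, rolconfig from pg_roles where rolname = 'your_username';
-- 3. per-database overrides (look for default_transaction_read_only=on in setconfig)
select d.datname, s.setconfig
from pg_db_role_setting s join pg_database d on d.oid = s.setdatabase;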
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, but set to OFF for the database postgres (the one that the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
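For illustration, that configuration would look something like this (purely hypothetical, just to show the mechanism):
-- postgresql.conf would contain: default_transaction_read_only = on
alter database postgres set default_transaction_read_only = off;
-- every other database, including the newly created exercises, inherits the
-- read-only default from postgresql.conf, so CREATE SCHEMA fails after \c exercises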
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again with psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE TABLE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was like the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
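If you suspect the same cause, a quick way to see how much space is in use (the threshold depends on your plan, so this is only a rough check):
select pg_size_pretty(pg_database_size(current_database()));              -- current database
select pg_size_pretty(sum(pg_database_size(datname))) from pg_database;   -- whole server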
I had the same issue for a Postgres UPDATE statement
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it will return either true or false
SELECT pg_is_in_recovery()
true -> Database has only Read Access
false -> Database has full Access
If it returns true, then check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity.
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had master and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's Read-only setting was turned on.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
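For illustration, the shape of the function that hit this was roughly the following (names are hypothetical, not the real code):
create function report_snapshot() returns integer
language plpgsql security definer
as $$
begin
  -- creating and dropping a temp table counts as a write, so on the
  -- read-only (-ro-) endpoint this raises "cannot execute ... in a read-only transaction"
  create temp table tmp_rows as select 1 as n;
  drop table tmp_rows;
  return 1;
end;
$$;
select report_snapshot();  -- works against the writer endpoint, errors on the reader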
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores - perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
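If you want to confirm the instance is still replaying WAL rather than hung, these are the standard checks (pg_last_wal_replay_lsn is the PostgreSQL 10+ name; older releases call it pg_last_xlog_replay_location):
select pg_is_in_recovery();       -- true while recovery is still running
select pg_last_wal_replay_lsn();  -- should keep advancing as segments are replayed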
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
Choose Resources (out of Overview, Resources, Deploy, Metrics, Activity, Access, Settings)
Choose Settings (out of Overview, Durability, Settings, Dataclips)
Then in Administration -> Database Credentials choose View Credentials...
Then open a terminal, fill that info in below, and enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
Then it'll ask for the password; copy-paste it from there and you can run Postgres commands.
I suddenly started facing this error on Postgres installed on my Windows machine when I was running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server gets put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission to the SEQUENCE
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS instance cluster, please check your endpoint and use the Writer instance endpoint. Then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click on Data Source Properties (Shift + Enter) > select your data source >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to master when PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.

How to restore database in PostgreSQL with pgadmin3?

I'm using pgAdmin to restore a PostgreSQL database. To restore the database I need to delete, drop, and remake it. How can I restore the database without deleting and remaking it?
This cannot be done in pgAdmin or with any database tools. Regular backup files cannot be restored without deleting the data first because they consist of normal COPY statements which will fail if you have rows in the database (primary keys collide etc).
For a simple way to get back to an earlier snapshot in a testing environment take a look at PostgreSQL documentation - 24.2. File System Level Backup:
For backup:
Shut down your database
copy all the files from your data directory
For restore:
Shut down your database
replace your data directory with the backup directory
Note:
the size of the data might be significantly larger than with a regular backup especially if you have a lot of indexes
this is a server wide backup so you can't do this on individual databases
don't attempt to use it on a different version of PostgreSQL
this really deletes the data too - by replacing it with the backup
Also, with regular backups you don't have to do a DROP TABLE if you do a data-only restore, with pg_restore --data-only for example. You still have to delete the data first, though.
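For that delete step, a per-table truncate before the data-only restore is usually what's meant (the table name here is just a placeholder):
truncate table my_table restart identity cascade;
-- CASCADE also empties tables with foreign keys pointing at it, so check what it will touch first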

DB2: not able to restore from backup

I am using command
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 on /users/db2inst1/ dbpath on /users/db2inst1/ redirect without rolling forward
to restore database from backup file located in /users/intadm/s18backup/ .
Command execution gives such output:
SQL1277W A redirected restore operation is being performed. Table space
configuration can now be viewed and table spaces that do not use automatic
storage can have their containers reconfigured.
DB20000I The RESTORE DATABASE command completed successfully.
When I'm trying to connect to restored DB (by executing 'db2 connect to S18'), I'm getting this message:
SQL0752N Connecting to a database is not permitted within a logical unit of
work when the CONNECT type 1 setting is in use. SQLSTATE=0A001
When I'm trying to connect to db with db viewer like SQuireL, the error is like:
DB2 SQL Error: SQLCODE=-1119, SQLSTATE=57019, SQLERRMC=S18, DRIVER=3.57.82
which means that 'error occurred during a restore function or a restore is still in progress' (from IBM DB2 manuals)
How can I resolve this and connect to restored database?
UPD: I've executed db2ckbkp on the backup file and it did not identify any issues with the backup file itself.
without rolling forward can only be used when restoring from an offline backup. Was your backup taken offline? If not, you'll need to use roll forward.
When you do a redirected restore, you are telling DB2 that you want to change the locations of the data files in the database you are restoring.
The first step you show above will execute very quickly.
Normally, after you execute this statement, you would have one or more SET TABLESPACE CONTAINERS to set the new locations of each data file. It's not mandatory to issue these statements, but there's no point in specifying the redirect option in your RESTORE DATABASE command if you're not changing anything.
Then, you would issue the RESTORE DATABASE S18 CONTINUE command, which would actually read the data from the backup image, writing it to the data files.
If you did not execute the RESTORE DATABASE S18 CONTINUE, then your restore process is incomplete and it makes sense that you can't connect to the database.
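Put together, a redirected restore is typically a sequence like this, run from the DB2 command line processor (the tablespace ID, container clause, and path are placeholders, and the container clause depends on whether the tablespace is SMS or DMS):
RESTORE DATABASE S18 FROM /users/intadm/s18backup/ TAKEN AT 20110913113341 REDIRECT
SET TABLESPACE CONTAINERS FOR 0 USING (PATH '/new/location/ts0')
RESTORE DATABASE S18 CONTINUE
ROLLFORWARD DATABASE S18 TO END OF LOGS AND COMPLETE
The last step applies when the backup was taken online, as in the workaround below.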
What I did and what has worked:
Executed:
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 on /<path with sufficient disk space> dbpath on /<path with sufficient disk space>
I got some warnings before that some table spaces were not moved. When I specified a dbpath on a partition with sufficient disk space, the warning disappeared.
After that, as I have an online backup, I issued:
db2 rollforward db S18 to end of logs and complete
That's it! Now I'm able to connect.