Need to properly set USEREXIT and LOGARCHMETH1 - db2

Running DB2 version 9.7 on Windows Server. I'm new to DB2, but not to databases in general.
My underlying problem is this error in the Event Viewer:
ADM1848W Failed to archive log file "S0000880.LOG" to "USEREXIT" from
"C:\DB2\NODE0000\SQL00003\SQLOGDIR\".
I don't want to use a USEREXIT program.
If I'm understanding what I've read correctly, the new method to specify a log archive method is to use LOGARCHMETH1. However, some documentation (and some observed behavior) leads me to believe that it isn't that simple.
My current DB configuration is this:
> get db config for $my_db
....
Log retain for recovery status = RECOVERY
User exit for logging status = YES
....
First log archive method (LOGARCHMETH1) = LOGRETAIN
I'm trying to turn off USEREXIT with this:
update db cfg for $my_db using userexit off
but with no effect.
How can I set my db to use LOGRETAIN but not USEREXIT?
And a follow up, if I do get this set correctly, is a backup required to complete the change?
Thanks!

Although I still don't have a complete understanding of USEREXIT, I solved it by turning all logging off, then turning it back on, with these steps (db2 commands):
To turn off:
connect to $db
update database configuration for $db using logretain off
db2stop
db2start
Then, to turn it back on:
connect to $db
update database configuration for $db using logretain on
db2stop
db2start
db2 backup database $db to <filename> without prompting
Note: a backup is NOT required when turning logging OFF but IS REQUIRED when turning logging ON.
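For what it's worth, on DB2 9.7 the USEREXIT and LOGRETAIN parameters are deprecated and superseded by LOGARCHMETH1, so another way to get archive logging without a user exit is to set LOGARCHMETH1 directly. A minimal sketch (the disk path is a hypothetical example, and enabling archive logging still leaves the database in backup-pending state until you take a backup):
connect to $db
update database configuration for $db using LOGARCHMETH1 LOGRETAIN
-- or, to archive logs to a directory (example path) instead of a user exit:
update database configuration for $db using LOGARCHMETH1 DISK:C:\db2archive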


ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to set up the pgexercises data on my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
running CREATE statements on a read-only replica (the entire instance is read-only);
<username> has default_transaction_read_only set to ON;
the database has default_transaction_read_only set to ON.
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that CREATE DATABASE does work when run by <username>, and it wouldn't work if any of the reasons above were directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, but set to OFF for the database postgres (the one the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would explain why CREATE DATABASE works, but as soon as the script connects to a different database with \c, the session's default_transaction_read_only setting flips to ON.
But of course that would be a pretty weird and unusual configuration.
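If you want to check for such an override, the per-database ALTER DATABASE ... SET entries are stored in the pg_db_role_setting catalog; a query along these lines (a sketch using the standard catalogs) would show them:
-- per-database overrides (e.g. from ALTER DATABASE ... SET) appear in setconfig
SELECT d.datname, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase;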
I reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE TABLE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was like the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
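If you suspect the same cause, the standard size functions give a quick read on how much space the database is using (the storage cap itself is a property of the Azure plan, not something PostgreSQL reports):
-- current database size; compare against your plan's storage limit
SELECT pg_size_pretty(pg_database_size(current_database()));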
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it will return either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with your DBA team about full access, and also try pinging the database host from a command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had master and replica nodes, and the master node became a replica node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's read-only option was switched on; unchecking it resolved the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
Choose Resources from (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings out of (Overview, Durability, Settings, Dataclips).
Then in Administration -> Database Credentials choose View Credentials...
Then open a terminal, fill that info in here, and press Enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there, and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine when I ran an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL your server gets into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS instance cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ configuration:
Go to the Database view > click Data Source Properties (Shift+Enter) > select your data source >
Options tab > under Connection: uncheck Read-only.
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to the master while PostgreSQL was in HA mode. You can check this event in Service Health, and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.

How to determine in PgAdmin if a database is completely restored?

When PgAdmin III displays a list of databases, a database in the middle of restoring looks just like any other one. How can I determine if the restore has completed or not?
If by restore you mean a pg_restore command in progress, you cannot see that directly from pgAdmin. What pg_restore actually does is execute plain CREATE TABLE, INSERT, or COPY commands that differ in no way from normal commands. What you can do is open the Server Status window. If you know where the command is executed from (its IP address), or if nothing else is connecting to the database, you can check whether there are open connections to the database; if there are no open connections, the restore has finished. If you can't deduce the information from connections, you could check whether there are any open transactions (no transactions for some time = restore finished).
It would be simpler to get this information if you had access to the place where the command is executed.
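For example, you could run something like this against the standard pg_stat_activity view from any query window:
-- list open sessions for the database being restored ('mydb' is a placeholder)
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE datname = 'mydb';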

How to access the database imported through datapump

I just imported a data dump with the command below:
IMPDP user/pass FULL=Y DUMPFILE=BIRDV24012014.DMP LOGFILE=BIRDV24012014.log;
The dump has been restored. The issue is I don't know how to connect to the database I just imported: what service or TNS entry does it reside under, and how can I query it?
You didn't import a database, you imported the contents of your file into your existing database. If you could successfully run impdp user/pass then your ORACLE_SID etc. is already set and you should be able to log in and query with sqlplus user/pass.
If you've come from another RDBMS background you may be confusing 'database' with 'schema'. Depending on what was in the dump, you've probably created a load of schema objects and data under the USER schema (or whatever your real 'user' value was).
The import makes no difference to this, but if you want to access the database from another client (e.g. from another machine, or over JDBC) then you'll need to check your listener configuration to get the hostname/IP address and port it's listening on, and get the service name for the database; all of which can be obtained from lsnrctl services if you have permission to run that. You can then use those values for a JDBC URL, or in a tnsnames.ora entry, or ODBC, etc.
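For example, once lsnrctl services gives you the host, port, and service name, a tnsnames.ora entry would look something like this (every value below is a placeholder, not taken from the question):
# hypothetical entry; substitute your own host, port, and service name
BIRD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )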
Look at your ORACLE_SID environment variable. There you'll find the instance ID. If you ran the IMPDP tool as user Oracle, you should also be able to connect to the database using
sqlplus / as sysdba
If all fails, look at your /etc/oratab file to see which instances are available on this host.
On another note, your command seems incomplete. Data Pump requires a DIRECTORY parameter to know where to look for the dump file you specified.

DB2: not able to restore from backup

I am using command
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 on /users/db2inst1/ dbpath on /users/db2inst1/ redirect without rolling forward
to restore a database from a backup file located in /users/intadm/s18backup/.
Command execution gives such output:
SQL1277W A redirected restore operation is being performed. Table space
configuration can now be viewed and table spaces that do not use automatic
storage can have their containers reconfigured.
DB20000I The RESTORE DATABASE command completed successfully.
When I try to connect to the restored DB (by executing 'db2 connect to S18'), I get this message:
SQL0752N Connecting to a database is not permitted within a logical unit of
work when the CONNECT type 1 setting is in use. SQLSTATE=0A001
When I try to connect to the DB with a DB viewer like SQuirreL, the error is:
DB2 SQL Error: SQLCODE=-1119, SQLSTATE=57019, SQLERRMC=S18, DRIVER=3.57.82
which means 'an error occurred during a restore function or a restore is still in progress' (from the IBM DB2 manuals).
How can I resolve this and connect to restored database?
UPD: I've executed db2ckbkp on the backup file and it did not identify any issues with the backup file itself.
without rolling forward can only be used when restoring from an offline backup. Was your backup taken offline? If not, you'll need to use roll forward.
When you do a redirected restore, you are telling DB2 that you want to change the locations of the data files in the database you are restoring.
The first step you show above will execute very quickly.
Normally, after you execute this statement, you would issue one or more SET TABLESPACE CONTAINERS statements to set the new locations of each data file. It's not mandatory to issue these statements, but there's no point in specifying the redirect option in your RESTORE DATABASE command if you're not changing anything.
Then, you would issue the RESTORE DATABASE S18 CONTINUE command, which actually reads the data from the backup image and writes it to the data files.
If you did not execute RESTORE DATABASE S18 CONTINUE, then your restore process is incomplete, and it makes sense that you can't connect to the database.
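Put together, a redirected restore typically looks like this (a sketch; the table space ID 2 and the container path are hypothetical):
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 redirect
db2 set tablespace containers for 2 using (path '/users/db2inst1/ts2')
db2 restore db S18 continue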
What I did and what worked:
Executed:
db2 restore db S18 from /users/intadm/s18backup/ taken at 20110913113341 on /<path with sufficient disk space> dbpath on /<path with sufficient disk space>
Before that, I got warnings that some table spaces were not moved; when I specified a dbpath on a partition with sufficient disk space, the warnings disappeared.
After that, as I have an online backup, I issued:
db2 rollforward db S18 to end of logs and complete
That's it! Now I'm able to connect.

Postgres turn on log_statement programmatically

I want to turn on logging of all SQL statements that modify the database. I could get that on my own machine by setting the log_statement flag in the configuration file, but it needs to be enabled on the user's machine. How do you enable it from program code? (I'm using Python with psycopg2 if it matters.)
Turning on logging of SQL statements that modify the database can be achieved by:
ALTER SYSTEM SET log_statement TO 'mod';
-- Make it effective by triggering configuration reload (no server restart required).
SELECT pg_reload_conf();
-- To make sure the modification is not limited to the current session scope
-- it is better to log out from postgresql and log back in.
-- Check value of log_statement configuration, expected: mod
SELECT * FROM pg_settings WHERE name = 'log_statement';
This requires superuser access rights.
Check hereafter links to documentation for more details:
ALTER SYSTEM
pg_reload_conf()
The "it needs to be enabled on the user's machine" phrase is confusing, indeed... I assume you mean "from the user (client) side".
In PostgreSQL some server run-time parameters can be changed from a connection, but only by a superuser, and only for those settings that do not require a server restart.
I'm not sure if that includes the many log options. You might try with something like:
SELECT set_config('log_XXX', 'off', false);
where log_XXX is to be replaced by the respective logging setting, and 'off' by the value you want to set (the third argument, false, makes the change last for the whole session rather than only the current transaction).
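For instance, for the setting the question asks about (this assumes the connected role has superuser rights, since log_statement can only be changed by a superuser):
-- 'mod' logs all statements that modify data
SELECT set_config('log_statement', 'mod', false);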
If that does not work, I guess you are out of luck.