Migrating from Db2 Express-C to the Developer edition

I have a backup file from Db2 Express-C version 11.1 and I'd like to restore it into the Db2 Developer edition (both on Windows machines). The RESTORE completed successfully and I can list the tables from the Db2 command line:
db2 list tables for schema XYZ
but when I try to access the data I get the following error message:
SQL0551N The statement failed because the authorization ID does not have the
required authorization or privilege to perform the operation. Authorization
ID: "DB2USER". Operation: "SELECT". Object: "XYZ.Table1". SQLSTATE=42501
I am logged in as the user who performed the RESTORE. What's the issue here?

When restoring a Db2-LUW database backup to a different Db2 instance, it is wise to first set a Db2 registry variable on the target instance before performing the database restore. The account performing the restore will then be granted the SECADM, DBADM, DATAACCESS, and ACCESSCTRL authorities on the restored database:
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=on
Then perform the Db2 restore command.
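For example, a minimal sketch of the whole sequence on Windows; MYDB and the backup path are placeholders, not values from the question, and the instance is recycled because most registry variables only take effect on restart:
rem MYDB and C:\backups are illustrative placeholders
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=ON
db2stop
db2start
db2 restore database MYDB from C:\backups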
If you have not taken this action, you can instead use manual GRANT statements (at the database level and object level) to adapt the security model to the new Db2 instance, but for best results you should use the registry variable above.
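For instance, reusing the authorization ID and table from the error message above, a manual sketch might look like this (to be run by an ID that already holds SECADM/ACCESSCTRL on the restored database):
-- broad, database-level authorities (what the registry variable would have granted)
GRANT DBADM, SECADM, DATAACCESS, ACCESSCTRL ON DATABASE TO USER db2user;
-- or, narrower, object-level access only
GRANT SELECT ON TABLE XYZ.Table1 TO USER db2user;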
You can also use the TRANSFER OWNERSHIP statement at various levels to achieve the desired security model. This is useful when the previous owner was the old Db2 instance owner and the restored database is in a different Db2 instance than the backed-up database.
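A table-level sketch, again reusing the names from the error message:
-- hand the restored table to the new instance's user, keeping existing privileges
TRANSFER OWNERSHIP OF TABLE XYZ.Table1 TO USER db2user PRESERVE PRIVILEGES;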


GCP pg_stat_statements insufficient privileges and read replicas

I'm running postgres on GCP SQL service.
I have a main and a read replica.
I've enabled pg_stat_statements on the main node, but I still get messages that I have insufficient privileges for almost each and every row.
When I tried to enable the extension on the read replica, it gave me the error: cannot execute CREATE EXTENSION in a read-only transaction.
I tried all of these actions with the highest-privilege user that I have (a user who is a member of cloudsqlsuperuser, basically the same as the default postgres user).
So I have 2 questions:
How do I fix the privileges issue so I can see the statistics in the table?
How do I enable extension on the read replica?
Thanks!
After running some more tests on Postgres 9.6, I have also obtained the <insufficient privilege> messages.
I have run the following query on both postgres 9.6 and 13 and obtained different results:
SELECT userid, usename, query
FROM pg_stat_statements
INNER JOIN pg_catalog.pg_user
ON userid = usesysid;
I noticed in Postgres 9.6 that the queries I cannot see come from the roles/users cloudsqlagent and cloudsqladmin (preconfigured Cloud SQL Postgres roles).
This does not happen with Postgres 13, or better said with versions 10 and higher, because on those versions the SQL statements from all users are visible to users with cloudsqlsuperuser when the pg_stat_statements extension is in use. This is the documented behavior of the product across versions.
Basically, only in version 9.6 are the SQL statements from all users NOT visible to users with the cloudsqlsuperuser role.
So if I enable it on the master, it should be enabled on the replica as well?
Yes. After enabling the extension on the master you can connect to the replica and check with the following command that pg_stat_statements has been enabled:
SELECT * FROM pg_extension;
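As a further sanity check (plain PostgreSQL, nothing Cloud SQL specific), you can also query the view itself on the replica, since SELECTs are allowed in a read-only session:
-- runs fine on a read replica; the counters shown are the replica's own
SELECT count(*) FROM pg_stat_statements;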
If you would like more uniform behavior across Postgres versions, or if you strongly need the SQL statements from all users to be visible to the cloudsqlsuperuser role, I would recommend creating a public issue tracker entry using the feature-request template.
I hope you find this useful.
On the permissions side of things, cloudsqlsuperuser is not a real superuser (but it is as close as you'll get in GCP Cloud SQL). Because of this, I've sometimes found that I needed to explicitly grant it access to objects/roles to be able to access things.
Therefore I'd try doing:
GRANT pg_read_all_stats TO cloudsqlsuperuser;
I'm not too sure about how to enable on the read replica unfortunately.
However, you might be interested in the recently released Insights feature (https://cloud.google.com/sql/docs/postgres/insights-overview). I haven't been able to play with it properly yet, but from what I've seen it's pretty nifty.

SYSDBA user is blocked from access to Firebird 2.x database

I have a Firebird database in a .fdb file, but the database does not have the SYSDBA user and I don't remember the credentials to log into the database. Is there any way to reset the database credentials?
As Mark said, it is not that the database "does not have the SYSDBA user" (databases in Firebird 2.x never have users) but that an old trick was used: a role named SYSDBA was created in order to trigger a name collision on login.
After scanning through the 2007 security presentation, I have two suggestions for you.
You can try some tool that opens Firebird databases without using Firebird itself, to learn which username can pull you out of the deadlock.
One such tool is the Database Explorer in IBExpert. The full IBExpert is paid software outside the former USSR, and the free IBExpert Personal probably does not include the tool, but I hope it works in the IBExpert Trial. Another tool is IBSurgeon FirstAID, and there are probably more tools featuring data extraction from corrupt databases. You only need to find and read one specific row.
The query that creates the blocking role is given on page 23 of the presentation:
INSERT INTO RDB$ROLES (RDB$ROLE_NAME, RDB$OWNER_NAME)
VALUES ('SYSDBA', 'LOCKSMITH');
So you would have to look into that table, find the row with that role, and learn the username that has authority over it (in the example it was LOCKSMITH).
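Based on the same system table as the INSERT above, the lookup is a one-liner:
-- find the username that has authority over the blocking role
SELECT RDB$OWNER_NAME FROM RDB$ROLES WHERE RDB$ROLE_NAME = 'SYSDBA';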
After that, you connect to any other database on the same server and create a user with the name you learned. Then you use that name to log into the problematic database and run DROP ROLE SYSDBA; COMMIT;.
You can also use Firebird Embedded. All server-coded security checks are bypassed in the Embedded edition of FB 2.x (but if the DB designer added ad hoc security checks in triggers, those will still work). So you log into the problematic database using the Firebird Embedded edition with any username and any password, and after that you drop the access-blocking role.
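A sketch of that embedded session with isql (the file path and username here are illustrative; the Embedded edition accepts any password):
isql -user LOCKSMITH -password whatever "C:\data\problem.fdb"
SQL> /* LOCKSMITH and the path are placeholders; use the owner name you found */
SQL> DROP ROLE SYSDBA;
SQL> COMMIT;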
In Firebird, the database does not contain passwords (until v3.0, as mentioned by Arioch 'The). The password is used only by the server. In other words, you can copy the database file from the existing server to another one (with a known password) and open the database file there.

Is there any way I can restore a DB2 backup file onto IBM DashDB?

I am trying to restore a DB2 backup file into my BlueMix DashDB service. How do I go about doing this?
You cannot restore your DB2 backup image into dashDB for several reasons.
In an entry-level, shared dashDB instance you only have access to one schema in a physical database shared by others.
Even if you have a dedicated instance, you would need 1) access to the database server's local disk to upload the image and 2) sufficient privileges (at least SYSMAINT authority) to perform the restore. I doubt either will be available to you.
What you can do is run db2look and db2move locally to extract your database's DDL statements and data, respectively. You can then run the extracted DDL script against dashDB, provided you replace the original schema name(s) with the one available to you in dashDB, and, after creating the tables, load your data into them.
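A sketch of the local extraction, with MYDB and XYZ as placeholders for your own database and schema names:
rem extract the DDL for schema XYZ into a script file
db2look -d MYDB -z XYZ -e -o mydb_ddl.sql
rem export the data for the same schema
db2move MYDB export -sn XYZ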

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to set up the pgexercises data on my local machine. When I run psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are (each can be checked with the queries sketched after this list):
trying to run CREATE statements on a read-only replica (the entire instance is read-only)
<username> has default_transaction_read_only set to ON
the database has default_transaction_read_only set to ON
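A quick way to check these conditions from psql, using only standard PostgreSQL functions and settings:
-- true means the instance is a read-only replica in recovery
SELECT pg_is_in_recovery();
-- shows the effective default for the current user/database
SHOW default_transaction_read_only;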
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, and set to OFF for the database postgres (the one that the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would explain why CREATE DATABASE works, but then, as soon as the script connects to a different database with \c, the session's default_transaction_read_only setting would flip to ON.
But of course that would be a pretty weird and unusual configuration.
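Purely for illustration, the hypothetical setup described above would look like this:
-- in postgresql.conf (instance-wide default):
--   default_transaction_read_only = on
-- per-database override that supersedes the configuration file:
ALTER DATABASE postgres SET default_transaction_read_only = off;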
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal (dropdb exercises) and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled once the database could no longer write anything. I am using Postgres on Azure; I don't know whether the same thing would happen on a dedicated server.
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with your DBA team about getting full access, and also try pinging the host from a command prompt to verify connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had master and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns true then it is "read-only"; I suppose you should then switch to whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's read-only option was switched on; turning it off in the connection settings fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function that created and dropped temp tables. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. That environment was AWS Aurora, where by default non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection URL), which must put the connection into the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
From the app's tabs (Overview, Resources, Deploy, Metrics, Activity, Access, Settings), choose Resources.
From the database's tabs (Overview, Durability, Settings, Dataclips), choose Settings.
Then, under Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info in here, and press Enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server is put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS instance cluster, check your endpoint and use the writer instance endpoint. Then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click Data Source Properties (Shift+Enter) > (select your data source) >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to the master while PostgreSQL was in HA mode. You can check for this event in Service Health and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.

Running Heroku Postgres with least privilege

Can I connect to a Heroku Postgres database via a web/application without the risk of dropping a table?
I'm building a Heroku application for a third party which uses Heroku Postgres for the backend. The third party are very security sensitive so I'm looking at applying "Layered security" throughout the application. So for example checking for SQL injection attacks at the web/application layer. Applying a "Layered security" approach I should also secure the database in case a potential SQL injection attack is missed, which might drop a database table.
In other systems I have built, there would be a minimum of two users in the database: firstly, the database administrator who creates/drops tables, indexes, triggers, etc., and secondly, the application user, who runs with fewer privileges than the database administrator and can only insert and update records, for example.
Within the Heroku Postgres setup there doesn't appear to be a way to create another user with fewer privileges (i.e. without the “drop table” option). So the application must connect as the default Heroku Postgres user, and therefore the risk of a “drop table” exists.
I'm running the Heroku Postgres Crane add-on.
Has anyone come up against this, or got any creative workarounds for this scenario?
With Heroku Postgres you do only have a single account to connect with. One option that does exist for this type of functionality is to create a follower on Heroku Postgres. A follower is kept up to date asynchronously (usually only a second or so behind) and is read-only. This would allow you to grant access to the follower to those who need it while not providing them with the credentials for the leader DB.
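For reference, a sketch of creating a follower with the Heroku CLI; the plan, the app name, and the leader's config-var name below are placeholders, not values from the question:
# standard-0, your-app, and HEROKU_POSTGRESQL_CHARCOAL_URL are illustrative placeholders
heroku addons:create heroku-postgresql:standard-0 --follow HEROKU_POSTGRESQL_CHARCOAL_URL -a your-app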