Can't create new schema in an OVH postgres - postgresql

I have a PostgreSQL server on OVH's Cloud DB and have been using its databases for my web apps.
So far so good.
I got a project where it's a requirement to have schemas. Strangely enough, I am unable to create schemas as the user with "Administrator" privileges.
I have prepared scripts that use schemas, so I just need a database that supports schemas to run them on.
Here is my process:
Create a new database
Select option "Create user"
Select option for privileges: "Administrator"
Commit configuration
Wait for database creation
Connect to the database with the new configuration via pgAdmin
Run command create schema if not exists "vMobile";
Receive the following error:
ERROR: permission denied for database my-database-dev
SQL state: 42501
I created a ticket for this but the wait is taking too long.

Support answer
OK, so I got a response from OVH support: there is no option for a user to create new schemas. Their Cloud DB gives access only to the public schema, and the privileges mentioned above (Administrator, Read/Write, Read, None) apply only to the public schema.
Workaround
My solution is to create tables with the schema name included in their names, like so:
Desired outcome: "vCommon"."Route"
Workaround: "public"."vCommon_Route"

Related

Unable to import a Database to new Cloud SQL instance

I am trying to import a Cloud SQL Postgres DB instance into a newer instance with the same users created, but I am getting this error:
.....ALTER DEFAULT PRIVILEGES ALTER DEFAULT PRIVILEGES stderr: ERROR: must be member of role "readonly"
By the way, readonly is a user that our read-only apps use to connect. The problem is that, to my expectation, I should be able to export an instance and import it without any problems. What am I missing here?
My exact steps
Export the DB from the Cloud SQL interface
Create a new DB with a user named "proxyuser" (an old convention in the company)
Import through the Cloud SQL interface, selecting "proxyuser" as the user
After this, the import fails at the 2-hour mark with the above error message.
Side note: the import is from a replica DB in another instance (I don't think it matters, but let me know if it does).
Google Postgres team here. This is a permission issue, and it can be fixed by following the suggestions in the GCP documentation and the tips from this thread.
When creating a new database and importing into it, the user performing the import must be a member of the role that will own the database. Additionally, as stated in the documentation, before importing a SQL dump, all the database users who own objects or were granted permissions on objects in the dumped database must already exist.
You can find more information on Cloud SQL PostgreSQL users and roles here.
As per the PostgreSQL documentation, "you can change default privileges only for objects that will be created by yourself or by roles that you are a member of."
You may also try running the import as the user who owns the database you want to import the data into.
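A minimal sketch of what that can look like here, assuming the dump contains ALTER DEFAULT PRIVILEGES ... FOR ROLE readonly statements and the import runs as proxyuser (both names taken from the question):
-- run as a privileged user (e.g. postgres) on the target instance, before the import
CREATE ROLE readonly;          -- recreate the role referenced by the dump, if it doesn't exist yet
GRANT readonly TO proxyuser;   -- membership lets proxyuser run ALTER DEFAULT PRIVILEGES ... FOR ROLE readonly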
You may have a look at the following posts for similar examples:
Error must be member of role when creating schema
Must be member of role error

prisma db pull doesn't see a new table

I have an existing schema for Prisma.
I copied the table from another schema to be included in the Prisma schema.
But when I run prisma db pull, the new table doesn't appear in the Prisma schema.
Why?
If you use Supabase and running this command returns something like 'The following models were commented out because we couldn't retrieve columns for them. Please check your privileges.' or a similar message about privileges, the solution is to go to the SQL editor in Supabase and execute the following commands:
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO postgres;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO postgres;
You're misinterpreting the function of the prisma db pull command.
From the docs
The db pull command connects to your database and adds Prisma models to your Prisma schema that reflect the current database schema.
Basically, it will make your Prisma schema match the existing database schema. What you want is the opposite here: Update the database schema to match the current Prisma schema.
You can do this in two ways:
If you want to update the database and keep the changes in your migration history, use the prisma migrate dev command. More Info
If you just want to update the database without creating a migration, use the prisma db push command. More info
More information is available in the docs, explaining how you should choose between these two options.
I had a similar issue once, and a quick check confirmed that the cause was a lack of security permissions granted to Prisma on the new table in the database itself.
Try this:
Note the name of the database user that Prisma connects to the database with. You'll likely find this via your schema.prisma file, or perhaps via a DATABASE_URL config setting in the related .env file if you're using that with Prisma.
Go into the database itself and ensure that the database user Prisma connects with has been granted sufficient security privileges on that new table; a sketch follows these steps. (Note: what 'sufficient' means I cannot say, since it depends on your own needs. At a guess, at least the SELECT privilege would be needed.)
Once you've ensured that the user has sufficient privileges, try running the prisma db pull command once again.
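A minimal sketch, assuming hypothetical names: Prisma connects as prisma_user and the new table is public."NewTable" (substitute the user from your DATABASE_URL and your actual table name):
-- grant on the single new table
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE public."NewTable" TO prisma_user;
-- or, more broadly, on everything in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA public TO prisma_user;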
For reference, another thing you could do is:
cross-check against one of the other tables that is already in your database and works correctly with Prisma.
compare the security privileges of that old table with those of the new table and see if there are any differences.
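One way to do that comparison, assuming the hypothetical table names old_table and new_table, is to query the information schema:
SELECT table_name, grantee, privilege_type
FROM information_schema.role_table_grants
WHERE table_schema = 'public'
  AND table_name IN ('old_table', 'new_table')  -- hypothetical names
ORDER BY table_name, grantee, privilege_type;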

GCloud Postgres: using the foreign data wrapper extension results in permission denied for relation

I'm really stuck with the following problem.
At GCloud SQL I have a running postgres' instance.
That instance contains two databases. From one database (source_db) I want to access another database's (another_db) table (foreign_table) using the postgres_fdw extension. The recipe I'm currently employing is this:
CREATE EXTENSION postgres_fdw;
CREATE SERVER foreign_db
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (dbname 'another_db', port '5432', host '<A_PRIVATE_IP>');
CREATE USER MAPPING for guest
SERVER foreign_db
OPTIONS (user 'guest', password 's3cr3t');
CREATE FOREIGN TABLE foreign_table
(
-- column descriptions
)
SERVER foreign_db OPTIONS (table_name 'foreign_table');
-- Alternatively I also tried with
CREATE SCHEMA external;
IMPORT FOREIGN SCHEMA public from SERVER foreign_db into external;
GRANT SELECT ON TABLE foreign_table TO guest;
The above commands run without error, but when I try to actually access the table I get this:
If using "external" schema
source_db=> select 1 from external.foreign_table limit 1;
ERROR: permission denied for relation foreign_table
CONTEXT: Remote SQL command: SELECT NULL FROM public.foreign_table (*)
If not using "external" schema
source_db=> select 1 from foreign_table limit 1;
ERROR: permission denied for relation foreign_table
CONTEXT: Remote SQL command: SELECT NULL FROM public.foreign_table
The only thing that smells a little is that the error message (at *) displays "public.foreign_table" instead of "external.foreign_table" even when I'm using the external schema... but I don't know if that actually means something.
As far as I've researched, there is no way to log into the Postgres instance as a superuser, since that is not allowed by GCloud's SQL services, nor is there a way to edit the pg_hba.conf file to adjust client authentication.
I searched in a lot of places without finding out what I can do to sort this out. Among the sites and pages I looked at are:
The official documentation
A personal blog's post
This other SO post having a related issue
This post and this other post regarding permissions and authorizations.
A Nice tutorial about authentication and authorization
P.S.
I was able to make this work on a Postgres instance that I ran locally.
User guest on the remote server doesn't have permissions to SELECT from the table. Since the query on the remote server is executed as user guest, you get an error.
GRANT the SELECT privilege on the table on the remote server to the user.
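A minimal sketch, run while connected to another_db as a user allowed to grant on the table, assuming foreign_table lives in the remote public schema (as the CONTEXT line of the error suggests):
-- on the remote database (another_db)
GRANT USAGE ON SCHEMA public TO guest;           -- let the mapped user reach the schema
GRANT SELECT ON public.foreign_table TO guest;   -- allow the remote SELECT issued by postgres_fdw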

How to shut down one database in a db2 instance?

I want to shut down one database in a db2 instance with multiple dbs.
Deactivating the DB is not enough, because it is simply activated again as soon as something connects to it. It should be completely shut down so that I get a connection error when trying to connect to the DB.
This is not a programming question so it can be viewed as off topic.
There are different techniques, each has advantages/disadvantages.
You can quiesce the database and later unquiesce it (a sketch follows this list).
or you can revoke connect rights and later grant them back, but this depends on how well your role separation is done.
or you force off existing applications and then connect in exclusive mode as the instance owner (provided that your applications NEVER connect with instance-owner credentials).
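For the quiesce option, a minimal sketch using standard DB2 commands (orig_db stands for the database name, as in the recatalog trick below):
db2 connect to orig_db
db2 quiesce database immediate force connections
db2 connect reset
Later, to make the database available again:
db2 connect to orig_db
db2 unquiesce database
db2 connect reset
While the database is quiesced, regular applications get a connection error; only users with QUIESCE_CONNECT authority and the instance owner can connect.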
One trick you could use is to temporarily recatalog the database you want to deactivate under a different name; this will prevent applications from connecting to it using the original name, regardless of the authority they use.
First, determine the database path by looking at its catalog entry:
db2 list db directory
The value of the "Local database directory" property is what you need.
Now you can recatalog the database:
db2 uncatalog db orig_db
db2 catalog db orig_db as foobar on <path>
where <path> is the local database directory determined previously.
Once you force off all applications currently connected to the database in question, you will be able to deactivate it:
db2 list applications
db2 "force application (<app id 1>, <app id 2>,...)
db2 deactivate db foobar
Later on you can restore the catalog entry to its original value:
db2 uncatalog db foobar
db2 catalog db orig_db on <path>

Updated User Permissions Don't Reflect In Replica in Postgres

I have a master-slave configuration of PostgreSQL servers, and there are multiple schemas defined in the database. Both the master and replica servers have a user readonly, which initially had access only to the public schema.
I have another schema, let's say alt_schema, and I want to give the readonly user access to all its tables.
Hence, I ran the following query on the master server to grant the user access to the schema:
GRANT ALL ON ALL TABLES IN SCHEMA alt_schema TO readonly;
The above command successfully granted the user access to the schema's tables.
But the permissions were not propagated to the replica server (I waited for about 30 minutes, expecting there might be some lag). Since the automated replication failed, I tried to run the above query manually on the replica server itself, but obviously it gave me the error below:
ERROR: cannot execute GRANT in a read-only transaction
Is there a way to achieve the above?
Note: My Postgres Servers are hosted in Google Cloud SQL.