I am trying to import a Cloud SQL Postgres DB instance into a newer instance with the same users created, but I am getting this error:
.....ALTER DEFAULT PRIVILEGES ALTER DEFAULT PRIVILEGES stderr: ERROR: must be member of role "readonly"
Btw, readonly is a user we use to connect our read-only apps. The problem is that, to my expectation, I should be able to export an instance and import it without any problem. What am I missing here?
My exact steps
Export the DB from the Cloud SQL interface
Create a new DB with a user named "proxyuser" (an old convention in the company)
Import through Cloud SQL, selecting "proxyuser" as the import user
After this, the import fails at the 2-hour mark with the above error message.
Side note: the import is from a replica DB in another instance (I don't think it matters, but let me know if it does).
Google Postgres team here. This is a permission issue; to fix it, follow the suggestions provided in the GCP documentation and the tips from this thread.
When creating a new database and importing into it, the user performing the import must be a member of the role that will be the owner of the database. Additionally, as stated in the documentation, before importing a SQL dump, all the database users who own objects or were granted permissions on objects in the dumped database must already exist.
You can find more information on Cloud SQL PostgreSQL users and roles here.
As per the PostgreSQL documentation, "you can change default privileges only for objects that will be created by yourself or by roles that you are a member of."
You may also try running the import as the user who owns the database you are importing data into.
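If the dump contains ALTER DEFAULT PRIVILEGES ... FOR ROLE "readonly" statements, making the importing user a member of that role before starting the import is the usual fix. A minimal sketch, assuming the user and role names from the question, run while connected as a user with admin rights (for example, the default postgres user on Cloud SQL):
-- Make proxyuser a member of readonly so that the dump's
-- ALTER DEFAULT PRIVILEGES ... FOR ROLE "readonly" statements can run.
GRANT "readonly" TO proxyuser;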
You may have a look at the following similar examples:
Error must be member of role when creating schema
Must be member of role error
Related
I've been working on maintenance on this GitHub repo, which has been left undeveloped for almost a year. When rerunning the GitHub Actions job that ran to completion last May, there are now permission issues for CREATE in the public schema in PostgreSQL. At first I suspected this might be because of the recent PostgreSQL 15 update that made it so that users do not, by default, have create access on the public schema. However, for our job GitHub Actions uses Postgres 14 on Ubuntu 22.04 (postgresql_14+238), so this change to public schema access in PostgreSQL shouldn't be affecting us. Our previous passing run used Postgres 12 on Ubuntu 20.04 (postgresql-12_12.10-0ubuntu0.20.04.1), so the changed environment could still be relevant.
The job is erroring out during a step where we create a few tables within our database using <user>:
peewee.ProgrammingError: permission denied for schema public
LINE 1: CREATE TABLE IF NOT EXISTS "articles" ("id" INTEGER NOT NULL...
Before this step, we configure the PostgreSQL database, creating the <user> and granting it all permissions to the database:
CREATE USER <user>;
GRANT ALL PRIVILEGES ON DATABASE <db_name> TO <user>;
To remedy this problem (while still being confused about why it arose), I tried to explicitly grant <user> permissions on the public schema before attempting any CREATEs, following the suggestions from this post: https://www.cybertec-postgresql.com/en/error-permission-denied-schema-public/
GRANT ALL ON SCHEMA public TO <name>;
which seems to go through, based on the returned GRANT.
Locally, using PostgreSQL 14, I'm having no issues with permissions even without the GRANT, but the permission error still comes up on GitHub Actions, even after granting access on the public schema to the user (and, in a desperate attempt, to all users).
I've done a bunch of sanity checks to make sure that we are in fact using the <user> during the CREATE step, but it seems like the <user> just never ends up getting the permissions, even after the GRANT. I followed postgresql - view schema privileges to view schema privileges; locally, the <user> has permissions on the public schema even before the GRANT. However, on GitHub Actions, the <user> doesn't have permissions before or after the GRANT, even though there is output confirmation that the GRANT completed successfully.
Does anyone know why I would be having these permission errors now on GitHub Actions, despite the code working locally and on GitHub Actions months ago? Is there any way I can grant permissions differently that might work better in this environment?
The permissions on schema public changed in v15. This change finally got rid of the insecure default setting of letting every user create objects in that schema. Now only the database owner is allowed to create objects by default.
Your GRANT statement is fine for allowing a user to create objects in schema public:
GRANT CREATE ON SCHEMA public TO user_that_creates_objects;
Just remember that you have to connect to the target database before running that statement. Also, the GRANT must be executed by the database owner or a superuser.
My recommendation is to leave the public schema for extension objects and create your own schema for your application objects.
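A minimal sketch of that recommendation, reusing the user name from the GRANT above and a hypothetical schema name app:
-- Run by the database owner or a superuser, while connected to the target database.
CREATE SCHEMA app AUTHORIZATION user_that_creates_objects;
-- The application user then creates its objects there instead of in public, e.g.:
-- CREATE TABLE app.articles ("id" INTEGER NOT NULL);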
I have an existing schema for Prisma.
I copied a table from another schema to be included in the Prisma schema.
But when I run prisma db pull, the new table doesn't appear in the Prisma schema.
Why?
If you use Supabase and running this command returns something like 'The following models were commented out because we couldn't retrieve columns for them. Please check your privileges.' or something similar regarding privileges, the solution is to go to your SQL editor in Supabase and execute:
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO postgres;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO postgres;
You're misinterpreting the function of the prisma db pull command.
From the docs
The db pull command connects to your database and adds Prisma models to your Prisma schema that reflect the current database schema.
Basically, it will make your Prisma schema match the existing database schema. What you want is the opposite here: Update the database schema to match the current Prisma schema.
You can do this in two ways:
If you want to update the database and keep the changes in your migration history, use the prisma migrate dev command. More Info
If you just want to update the database without creating a migration, use the prisma db push command. More info
More information is available in the docs explaining how you should choose between 1 and 2.
I had a similar issue once, and a quick check confirmed for me that it was a lack of security permissions granted to Prisma on the new table in the database itself.
Try this:
Note the name of the database user that Prisma connects to the database with. You'll likely find this via your schema.prisma file, or perhaps via a DATABASE_URL config setting in the related .env file if you're using that with prisma.
Go into the database itself and ensure that database user which Prisma connects with has been granted sufficient security privileges to that new table. (note: what 'sufficient' is I cannot say since it depends on your own needs. At a guess, I'd say at least 'select' permission would be needed.)
Once you've ensured that user has sufficient privileges, try running the prisma db pull command once again.
For reference, another thing you could do is:
cross-check against one of the other tables that is already in your database that works correctly with prisma.
compare the security privileges of that old table with the security privileges of the new table and see if there are any differences.
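A minimal sketch of both checks, with hypothetical names (old_table is a table that already works with Prisma, new_table is the copied one, prisma_user is the user from the connection string):
-- Compare privileges on the working table vs. the new one.
SELECT table_name, grantee, privilege_type
FROM information_schema.table_privileges
WHERE table_schema = 'public'
  AND table_name IN ('old_table', 'new_table');
-- Grant at least SELECT on the new table to the user Prisma connects with.
GRANT SELECT ON "public"."new_table" TO prisma_user;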
I have created a new SQL account and assigned it the dbmanager and loginmanager roles. It can be used to create new databases, but I am not able to access the database afterwards with that user. When right-clicking the new database to run a query, the login prompt appears and says that the security principal %user% cannot access the database under the current security context.
I am not able to alter or grant the user any access to the DB, now that I can't even run any queries.
The purpose here is that I have a PowerShell script that creates the databases and handles the automation under a specific SQL user. What am I missing?
The login might lack the necessary permissions to connect to the specified database. Logins that can connect to this instance of SQL Server but do not have specific database rights inherit the guest user's permissions. This is a security feature that prevents users from connecting to databases where they do not have permissions. This error message appears when the guest user does not have CONNECT permission to the named database and the TRUSTWORTHY property is not set.
You can connect to the database in one of the following ways:
Grant the specific login access to the named database.
Grant the CONNECT permission to the database named in the error message for the guest user.
Enable the TRUSTWORTHY property on the database that has authenticated the user.
Please refer to the Microsoft Document for this error: MSSQLSERVER_916
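A minimal T-SQL sketch of the first two options, with placeholder names; connect to the database named in the error message before running it:
-- Option 1: map the login into the database so it no longer falls back to guest.
CREATE USER [automation_login] FOR LOGIN [automation_login];
-- Option 2: allow logins without a database user to connect via guest.
-- GRANT CONNECT TO guest;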
I am connecting from a containerized asp.net core 3.1 application running code-first EF core to an Amazon Aurora instance with PostgreSQL compatibility and wish to perform database credential rotation. I have set up a role representing the database owner, and a role representing the current valid login credentials that we will expire and replace with new credentials.
I have followed the suggestion from this blog post:
http://davidhollenberger.com/2017/03/16/postgres-credential-rotation/, which is essentially:
create role db_owner nologin;                  -- shared owner role; cannot log in directly
create role foo_a with encrypted password ...; -- rotating login credential
grant db_owner to foo_a;                       -- foo_a may act as db_owner
alter role foo_a set role db_owner;            -- foo_a assumes db_owner at login
I understand that whenever foo_a logs in to postgres, their default role is set to db_owner. If I log into the database using psql this seems to work consistently.
However, with EF Core, when connecting using the foo_a database credentials and migrating the database to a new schema, the owner of the new objects is listed as foo_a.
Example:
        List of relations
 Schema | Name | Type  | Owner
--------+------+-------+-------
 public | test | table | foo_a
I expected the owner to be db_owner, since foo_a should always log in as db_owner.
Is there something I can do, either from EF Core, or when setting up the postgresql database that will allow us to set default ownership for all objects created by the user to the role group representing our database owner? I do not wish to make these temporary accounts some kind of 'superuser' for the instance, since we have multiple tenants in our database instance, instead I wish to have something similar to a 'dbo' role that has ownership of the database and the temporary users will always connect as the 'dbo' role.
Pooled connections are reset using the DISCARD ALL statement, which in turn resets the session and current user identifiers to be the originally authenticated user name. In other words:
Pooled connection reset (https://www.npgsql.org/doc/performance.html#pooled-connection-reset)
Runs DISCARD ALL (https://www.postgresql.org/docs/current/sql-discard.html)
Which in turn runs SET SESSION AUTHORIZATION DEFAULT (https://www.postgresql.org/docs/8.1/sql-set-session-authorization.html)
During a DB migration, this sets the owner of any objects created to the user instead of the 'dbo' role that it automatically assumes via ALTER ROLE ... SET ROLE ...
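A minimal sketch of that sequence, assuming the role names from the question and a session opened as foo_a:
-- Applied implicitly at login because of ALTER ROLE foo_a SET ROLE db_owner:
SET ROLE db_owner;
CREATE TABLE test (id int);    -- owned by db_owner
DISCARD ALL;                   -- pool reset; includes SET SESSION AUTHORIZATION DEFAULT
CREATE TABLE test2 (id int);   -- now owned by foo_a again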
Options to resolve this issue will depend on your needs. Here are a few options we considered to resolve this question:
Make the db owner a member of the db user. Supplement this with a script that runs as part of our database/user provisioning and rotation strategy and sets all object ownership to the owner instead of the user (see the sketch after this list). This requires zero code changes in the .NET Core app, but it does seem messy from the perspective of object ownership.
In .NET, use an IDbCommandInterceptor to set the appropriate role when we reuse a connection from the pool. This is a more invasive solution affecting your .NET Core project, but if your requirements involve credential rotation for only one or a few projects, it may be practical.
Append the option No reset on close=true to the npgsql connection string. However, be aware that this risks leaking session state if you are using connection pooling. Reference: https://www.npgsql.org/doc/connection-string-parameters.html#performance
Other options, such as running a proxy, are also worthy of consideration.
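For option 1, a minimal sketch of the ownership clean-up, assuming the role names from the question and that the executing role is a member of both foo_a and db_owner:
-- Transfer everything foo_a owns in the current database to db_owner.
REASSIGN OWNED BY foo_a TO db_owner;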
I have a PostgreSQL server on OVH's Cloud DB and have been using its databases for my web apps.
So far so good.
I got a project where schemas are a requirement. Strangely enough, I am unable to create schemas with the user that has "Administrator" privileges.
I have prepared scripts using schemas, so I just need to run them on a prepared database but I need a database with schemas to run them.
Here is my process:
Create a new database
Select option "Create user"
Select option for privileges: "Administrator"
Commit configuration
Wait for database creation
Connect to database with the new config via PGAdmin
Run command create schema if not exists "vMobile";
Receive the following error:
ERROR: permission denied for database my-database-dev
SQL state: 42501
I created a ticket for this but the wait is taking too long.
Support answer
OK, so I got a response from OVH support: there is no option for the user to create new schemas, as their Cloud DB enables access only to the public schema, and the mentioned privileges (Administrator, Read/Write, Read, None) are only applicable to the public schema.
Workaround
My solution is to create tables with the schema name included in their names, like so:
Desired outcome: "vCommon"."Route"
Workaround: "public"."vCommon_Route"