We are using EF 4.3 and Code First.
Our DBAs are keen to reduce the permissions developers have on the development database, but we've had many situations where we've been unable to create the database after it has been dropped when we call DropCreateDatabaseIfModelChanges.
What are the minimum database permissions required for a user to be able to call both DropCreateDatabaseIfModelChanges and DropCreateDatabaseAlways?
Do these require different permissions?
Is there any documentation that I can refer our DBAs to that will help explain what is required?
Both initializers need permissions for dropping and creating the database, and those two operations require very different permissions:
A database can be dropped by its owner; this permission is scoped to the database itself (the db_owner role in SQL Server).
A database can be created only by a user with the server-level permission to create databases, which is usually granted only to highly privileged users (the dbcreator server role in SQL Server).
You also need read access to the master database.
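As a rough sketch of what that could look like on SQL Server (dev_login is a hypothetical name; adjust to your environment and check with your DBAs before applying anything like this):

    -- Server level: allow the login to create databases
    CREATE LOGIN dev_login WITH PASSWORD = 'change-me';
    EXEC sp_addsrvrolemember 'dev_login', 'dbcreator';
    -- Give the login access to master so EF can query it
    USE master;
    CREATE USER dev_login FOR LOGIN dev_login;
    -- A database created by this login is owned by it,
    -- so the initializer can drop and recreate it later.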
By the way, if this is a development environment, you should consider using your own local database server instead.
I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous, at best, especially given that that resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are three types of account you can create from "gcloud sql users create" or the Terraform resource "google_sql_user":
"CLOUD_IAM_USER", "CLOUD_IAM_SERVICE_ACCOUNT" or "BUILT_IN"
The default is the BUILT_IN type if none is specified.
CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT accounts get created with NO roles.
We are using these, as integration with IAM is useful in lots of ways (not having to manage passwords at the database level is a major plus, especially when used in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts that need a PostgreSQL username and password) are, for some reason, granted the "cloudsqlsuperuser" role.
Since GCP doesn't allow the real superuser role, that is about as privileged as you can get, so to me (and to you) it seems a bizarre default.
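If you go the IAM route, you can then grant only what the application database actually needs. A minimal, hypothetical sketch (app_db and the IAM user name are made-up examples; run this against that database as a privileged account):

    GRANT CONNECT ON DATABASE app_db TO "app-user@example-project.iam";
    GRANT USAGE ON SCHEMA public TO "app-user@example-project.iam";
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public
        TO "app-user@example-project.iam";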
I am interested in barring pgAdmin access to my PostgreSQL server from any station other than the server itself. Is it possible to do this using pg_hba.conf? The PostgreSQL server should still allow access for my application from other stations.
No, this isn't possible. Nor is it sensible, since the client (mode of access) isn't the issue, but what you do on the connection.
If the user managed to trick your app into running arbitrary SQL via SQL injection or whatever, you'd be back in the same position.
Instead, set your application up to use a restricted user role that:
is not a superuser
does not own the tables it uses
has only the minimum permissions GRANTed to it that it needs
and preferably also add guards such as triggers to preserve data consistency within the DB. This will help mitigate the damage that can be done if someone extracts database credentials from the app and uses them directly via a SQL client. A sketch of such a restricted role follows below.
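As a minimal sketch of such a restricted role (app_user, app_db and the orders table are hypothetical names; adapt the GRANTs to what your application actually needs):

    CREATE ROLE app_user LOGIN PASSWORD 'change-me'
        NOSUPERUSER NOCREATEDB NOCREATEROLE;
    GRANT CONNECT ON DATABASE app_db TO app_user;
    GRANT USAGE ON SCHEMA public TO app_user;
    -- only the minimum the app needs; note there is no DELETE or TRUNCATE here
    GRANT SELECT, INSERT, UPDATE ON orders TO app_user;

The tables themselves stay owned by a separate owner role, so app_user cannot ALTER or DROP them.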
You can also make it harder for someone with your app's binary etc. to extract the credentials and use them to connect to PostgreSQL directly by:
using md5 authentication
if you use a single db role shared between all users, either (a) don't do that or (b) store a well-obfuscated copy of the db password somewhere non-obvious in the configuration, preferably encrypted against the user's local credentials
using sslmode=verify-full and a server certificate
embedding a client certificate in your app and requiring that it be presented in order for the connection to be permitted by the server (see client certificates in the PostgreSQL documentation)
Really, though, if you can't trust your users not to be actively malicious and run DELETE FROM customer; etc ... you'll need middleware to guard the SQL connection and apply further limits. Rate-limit access, disallow bulk updates, etc.
We've built a web tool (C# WebAPI) to administrate migration of SQL Azure databases with Entity Framework Code First Migrations.
By default, when we create databases we also create logins and user accounts per database.
These accounts get db_datareader and db_datawriter permissions.
We use these accounts from the web app to connect to the database to get current migrations and if there are any pending migrations, apply them.
For some reason, this operation takes about 10 seconds every time (without applying any updates).
If we use the admin account (created when setting up the SQL Server in Azure) instead, the time drops to less than a second.
I've come to the conclusion that there must be some kind of permission issue that causes the decrease in performance.
I've added the db_ddladmin role as explained here, without any success.
We use the DbMigrator class in Entity Framework Migrations to get pending migrations.
After some more investigation I found out that the solution to the problem is to create SQL users in the master database as well.
Before, we only created logins and a corresponding user in the application database.
According to this post, SQL Azure doesn't support setting a default database, and therefore it defaults to the master database, where I didn't have any rights.
Adding the user to the master database solved the "performance" issue.
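For reference, a minimal sketch of what that ends up looking like (migration_login, migration_user and MyAppDb are hypothetical names):

    -- In the master database:
    CREATE LOGIN migration_login WITH PASSWORD = 'change-me';
    CREATE USER migration_user FOR LOGIN migration_login;  -- the extra user in master that fixed the slowdown
    -- In MyAppDb:
    CREATE USER migration_user FOR LOGIN migration_login;
    EXEC sp_addrolemember 'db_datareader', 'migration_user';
    EXEC sp_addrolemember 'db_datawriter', 'migration_user';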
Connections using trusted authentication can be established by passing isc_dpb_trusted_auth and isc_spb_trusted_auth in the respective parameter blocks when using Firebird 2.1.
The connected user will have administrative rights depending on whether they are a member of a Windows group with administrative rights.
For Firebird 2.5 the role "rdb$admin" can be specified to connect to the database with administrative rights, provided the user has been granted that role.
I want to establish a service connection with administrative rights, using that role, but haven't found a way yet to do it. The connection is made but I can't for example list database users, which I can when connecting as SYSDBA.
What combination of isc_spb_trusted_auth, isc_spb_trusted_role and isc_spb_sql_role_name or other parameter blocks do I need, and what parameters do I need to pass?
There is a difference between a role and a user regarding where the relevant data is stored. The former is stored inside the database, in the RDB$ROLES table. The latter is stored in a special separate database file named security2.fdb, which usually lives in the Firebird directory.
When attaching to a service manager through the Firebird API, the particular database is not known yet; only the server name is specified. Because of this you cannot use roles: at that point the server simply doesn't know which database to read the role credentials from.
The only service accepting a role parameter in FB 2.5 is user management; that's how services have worked since InterBase 6. The ability to use it for other services will be available starting with FB 3.
On IBM DB2 v9 on Windows, when someone connects to the database as the Server\Administrator user, does DB2 automatically accept the connection and grant all permissions to this user?
In some environments the server Administrator does not need to see every piece of data in the database. So how do I prevent the Administrator from connecting to the database?
On 9.5 and older this would not be possible, because the account under which your instance runs has SYSADM authority. Also, Administrator can reset at least local account passwords and gain access to them, making changing the instance owner account useless.
However, from 9.7 onwards the instance owner no longer has access to the data. One option is to upgrade to 9.7. Furthermore, you can set up an AD account for the connections your applications use; the local Administrator is not necessarily able to change into those credentials.
Still, the Administrator ultimately has access to the (usually unencrypted) database files. You can mostly improve the administrative aspect of security.
Umm... I have tried to revoke with this command many times, but when I connect to the database with the Administrator account, DB2 automatically grants permissions to Administrator again.
I will try again to make sure.
By default, DB2 databases are created with CONNECT authority granted to PUBLIC. If you want to restrict some users from connecting, you need to grant CONNECT explicitly to the users who should keep access:
GRANT CONNECT ON DATABASE TO <user1>, <user2>, ...
Then revoke the CONNECT authority from PUBLIC:
REVOKE CONNECT ON DATABASE FROM PUBLIC
I don't think it's possible under normal circumstances simply because Administrator is in the sysadm group.
Options I can think of (but haven't tried) include:
Setting the sysadm group to something else ("db2 update dbm cfg using sysadm_group blah"). Check the docs for caveats and gotchas when doing this, as I'm sure there are some.
Stop using OS authentication. Use a different security plugin (8.2 and higher only). This would move the authentication, and thus groups, to a new location (say an LDAP server). Then you just don't add Administrator to the new location, and especially don't add Administrator to the sysadm group again.
On Windows, the database manager configuration parameter SYSADM_GROUP controls who has SYSADM authority at the instance level. When SYSADM_GROUP is blank (as is the default on Windows), DB2 defaults to using the Administrators group on the local machine.
To fix this, you can create a new group in Windows and then modify the value of SYSADM_GROUP to use this new group. Make sure that the ID that the DB2 Service runs under belongs to this new group. After making this change, members of the Administrators group will no longer have SYSADM authority.
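As a sketch, the commands involved might look like this (DB2SYSADM is a hypothetical Windows group name that already contains the account the DB2 service runs under; SYSADM_GROUP is not updatable online, so the instance has to be restarted):

    db2 update dbm cfg using SYSADM_GROUP DB2SYSADM
    db2stop
    db2start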
As Kevin Beck states, you may also want to look at restricting CONNECT authority on databases, because by default the CONNECT privilege is granted to PUBLIC.