Running Heroku Postgres with least privilege

Can I connect to a Heroku Postgres database from a web application without the risk of a table being dropped?
I'm building a Heroku application for a third party, using Heroku Postgres for the backend. The third party is very security-sensitive, so I'm applying "layered security" throughout the application, for example checking for SQL injection attacks at the web/application layer. Following that approach, I should also secure the database itself in case a SQL injection attack slips through and, say, drops a database table.
In other systems I have built, there would be a minimum of two users in the database: the database administrator, who creates/drops tables, indexes, triggers, etc., and the application user, who runs with fewer privileges than the administrator and can, for example, only insert and update records.
Within the Heroku Postgres setup there doesn't appear to be a way to create another user with fewer privileges (i.e. without the "drop table" option), so the application must connect as the default Heroku Postgres user, and the risk of a "drop table" remains.
I'm running the Heroku Postgres Crane add-on.
Has anyone come up against this, or got any creative workarounds for this scenario?

With Heroku Postgres you only have a single account to connect with. One option that does exist for this type of functionality is to create a follower on Heroku Postgres. A follower is kept up to date asynchronously (usually only a second or so behind) and is read-only. This would allow you to grant access to the follower to those who need it, while not providing them with the credentials for the leader database.
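As a rough sketch with the current CLI (the plan name and the color-coded config var below are placeholders; check your own app's values with heroku pg:info):

# Provision a follower that tracks the existing database (plan and config var names are examples)
heroku addons:create heroku-postgresql:standard-0 --follow HEROKU_POSTGRESQL_CHARCOAL_URL -a your-app-name
# Confirm the new database reports itself as a follower and how far it lags
heroku pg:info -a your-app-name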

Related

How to get rid of extra Postgres databases shown in pgAdmin?

I have a Postgres database deployed. When I connect to it using pgAdmin, I see many databases that I don't have access to and never created. My actual database is just one among them.
What are these databases and why are they here? How can I get rid of them? Can I just delete them without any problem?
If this is your database, then you had better know what databases you have and why you have them.
One possibility is that you have lost control of your database, probably to cryptomining hackers (they do create databases with gibberish names).
You can delete the extras, but the hackers will just keep getting back in if you don't fix the underlying problem. You need to give good passwords to all your superuser accounts (and all non-superuser accounts too), block access to your database for all but white-listed hosts in pg_hba.conf, maybe block superuser access from everywhere but localhost, and block access to port 5432 on your firewall for all but trusted hosts. Any one of these might be sufficient, but you will be better off doing all four.
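For illustration, a minimal pg_hba.conf fragment implementing those host restrictions could look like this (the addresses and auth method are placeholders for your own network; rules are matched top to bottom, so the reject lines must come after the allows):

# TYPE  DATABASE  USER      ADDRESS         METHOD
host    all       postgres  127.0.0.1/32    scram-sha-256   # superuser allowed from localhost...
host    all       postgres  0.0.0.0/0       reject          # ...and refused from everywhere else
host    all       all       10.0.5.0/24     scram-sha-256   # white-listed application subnet
host    all       all       0.0.0.0/0       reject          # everyone else is refused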
I faced the same issue using the Heroku Postgres add-on.
The solution was setting a DB restriction in pgAdmin's Advanced connection options.
Enter your database name there and you will only see your own DB, not the others.

Heroku Permanent Database Credentials

I've decided to save time on the ops side of things and move to Heroku. I'm planning to have a production dyno on Heroku with a Postgres database AND another dyno that reads from the same database.
However, when I opened the settings of the Postgres add-on, it said:
Database Credentials
Get credentials for manual connections to this database.
Please note that these credentials are not permanent.
Heroku rotates credentials periodically and updates applications where this database is attached.
What's a good way to go about this?
From the Heroku documentation:
Credentials
Do not copy and paste database credentials to a separate environment or into your application’s code. The database URL is managed by Heroku and will change under some circumstances such as:
User initiated database credential rotations using heroku pg:credentials:rotate.
Catastrophic hardware failure leading to Heroku Postgres staff recovering your database on new hardware.
Automated failover events on HA enabled plans.
It is best practice to always fetch the database URL config var from the corresponding Heroku app when your application starts. For example, you may follow 12Factor application configuration principles by using the Heroku CLI and invoke your process like so:
DATABASE_URL=$(heroku config:get DATABASE_URL -a your-app-name) your_process
This way, you ensure your process or application always has correct database credentials.
Maybe attaching the same database to two Heroku apps will suit you better. That way, the Postgres credentials are managed automatically by Heroku.
I am also using this technique: I have one client-facing app and another operations app sharing the same database instance.
You can do this either through the UI or via the CLI, as sketched below;
see Share database between 2 apps in Heroku.
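Via the CLI, the attachment looks roughly like this (the app and add-on names here are made up; list your real ones with heroku addons):

# Find the name of the add-on that owns the database
heroku addons -a client-facing-app
# Attach it to the second app; Heroku creates and rotates that app's config var as well
heroku addons:attach postgresql-curved-12345 -a operations-app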

Is it possible to have multiple databases per one heroku postgres plan?

Is it possible to have multiple databases per one Heroku PostgreSQL plan (instance)?
Unlike Amazon RDS, Heroku doesn't allow creating multiple databases – your DB role simply doesn't have permission to run CREATE DATABASE.
However, you can create several "apps", each with its own Postgres, and then use multiple Postgres DBs in a single app (see for example https://devcenter.heroku.com/articles/heroku-postgresql#sharing-heroku-postgres-between-applications – that page describes changing the "attached" database, but you can also just add config vars pointing to multiple databases' credentials with heroku config and use those credentials inside your app).
Alternatively, you can create Amazon RDS Postgres (one or as many as you wish) in the same Availability Zone as your Heroku app, and use this Postgres instance (or several) in your Heroku app.
Actually this is not completely true.
You are right that you do not have the permission to create a database,
but it is possible to just add more Heroku Postgres databases as resources to the same app.
This way you will have multiple plans (instances).
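Roughly, adding a second database as a resource looks like this (the plan name is only an example, and the color in the generated config var will differ):

# Provision an additional Postgres instance on the same app
heroku addons:create heroku-postgresql:standard-0 -a your-app-name
# Heroku adds a new config var such as HEROKU_POSTGRESQL_ORANGE_URL alongside DATABASE_URL
heroku config -a your-app-name | grep POSTGRESQL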

AWS pg_dump Does Not Include Globals

We have multiple PostgreSQL instances in AWS RDS. We need to maintain an on-premise copy of each database to comply with our disaster recovery policy. I have been successful in using pg_dump and pg_restore to export the database schemas and tables to our on-premise server, but I have been unsuccessful in exporting the roles and tablespaces. I have found that this is only possible by using pg_dumpall, but as this requires superuser access, and that is not allowed in RDS, how can I export those aspects of the database to our on-premise server?
My pg_dump command:
pg_dump -h {AWS Endpoint} -U {Master Username} -p 5432 -F c -f C:\AWS_Backups\{filename}.dmp {database name}
My pg_restore command:
pg_restore -h {AWS Endpoint} -p 5432 -U {Master Username} -d {database name} {filename}.dmp
I have found multiple examples of people using pg_dump to export their PostgreSQL databases; however, they do not address the "Globals" that pg_dump ignores. Have I misread the documentation? After performing my pg_restore, my logins were not created on the database.
Any help you can provide on getting the FULL database (including globals) to our offsite location would be greatly appreciated.
UPDATE: My patch is now a part of Postgres v10+.
You can read about how this works here.
Earlier, I had posted a working solution to my GitHub account; back then, you'd need to compile the binary and use that. However, with the patch now part of Postgres v10+, any pg_dumpall since that version supports dumping role definitions without superuser access (the --no-role-passwords option).
You can read some more detailed inner workings here.
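Assuming a v10+ client pointed at the RDS endpoint, the globals-only dump then looks something like this (connection details are placeholders, and role passwords come out blank, so they must be reset after restoring):

# Dump roles and other globals without needing superuser access to pg_authid
pg_dumpall -h {AWS Endpoint} -U {Master Username} -p 5432 --globals-only --no-role-passwords > globals.sql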
I haven't been able to find an answer to my question anywhere online. Just in case someone else may be experiencing this problem, I thought I would post a high-level outline of my "solution". I go around my elbow to get to my knee, but this is the option I have come up with:
Create a table (I created two: one for roles and one for logins) in each PostgreSQL database within AWS. These tables will need all the columns required to dynamically generate the SQL for CREATE, GRANT, REVOKE, etc.
Insert all roles, logins, privileges, and permissions into these tables. These are scattered everywhere, but here are the ones I used:
pg_auth_members (role and login relationships)
pg_roles (role and login permissions, i.e. can login, inherit parent, etc.)
information_schema.role_usage_grants (schema privileges)
information_schema.role_table_grants (table privileges)
information_schema.role_routine_grants (function privileges)
To fill in the gaps, there are clever queries on the web page below that use the built-in functions to check for access. You will have to loop through the tables and process a row at a time:
https://vibhorkumar.wordpress.com/2012/07/29/list-user-privileges-in-postgresqlppas-9-1/
Specifically, I used a variation of the database_privs function.
Once all of the data is in those tables, you can execute pg_dump, and it will extract that info from each database to your on-premise location. I did this through a Python script.
On your server, use the data in the tables to dynamically create the SQL statements needed to run the CREATE, GRANT, REVOKE, etc. commands. Save them in a .sql file that you can instruct a Python script to execute against the database to recreate the AWS roles and logins.
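As a minimal sketch of the generation idea in steps 2 and 5 (run from a Unix-style shell; connection details are placeholders, and it covers only role creation and role membership, not the schema/table grants):

# Emit CREATE ROLE and membership GRANT statements; the default password is a placeholder to reset later
psql -h {AWS Endpoint} -U {Master Username} -d {database name} -At <<'SQL' > recreate_roles.sql
SELECT 'CREATE ROLE ' || quote_ident(rolname)
       || CASE WHEN rolcanlogin THEN ' LOGIN' ELSE ' NOLOGIN' END
       || CASE WHEN rolinherit THEN ' INHERIT' ELSE ' NOINHERIT' END
       || ' PASSWORD ''changeme'';'
FROM pg_roles
WHERE rolname NOT LIKE 'pg\_%' AND rolname NOT LIKE 'rds\_%' AND rolname <> 'rdsadmin';

SELECT 'GRANT ' || quote_ident(r.rolname) || ' TO ' || quote_ident(m.rolname) || ';'
FROM pg_auth_members am
JOIN pg_roles r ON r.oid = am.roleid
JOIN pg_roles m ON m.oid = am.member;
SQL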
One thing I forgot to mention - because we are unable to access the pg_authid table in AWS, I have found no way to extract the passwords out of AWS. We are going to store these in a password manager, and when I create the CREATE ROLE statements, I'll pass a default password to be updated.
I haven't completed the process, but it has taken me several days to track down a viable alternative to pg_dumpall's missing functionality. If anyone sees any flaws in my logic, or has a better solution, I'd love to read about it. Good luck!

Prevent Firebird database access on other server with different username/password

I created a Firebird database with an account other than SYSDBA. If I put a copy of this database on another machine, I can open it with the SYSDBA account and the 'masterkey' password. Thus there is a real risk if someone can take a copy of it.
Is there some way to prevent this scenario?
The user that created a database is "just" the owner of the database; the SYSDBA user is the administrator and is allowed to do anything to all databases on a Firebird server. This is a very good reason to never use masterkey as your password on a production server.
The usernames and passwords in Firebird 2.5 and earlier are stored in a security database (security2.fdb) that is part of the Firebird installation. So moving a database to another server (or replacing the security2.fdb) will allow "unauthorized" persons to access the database. Note that I put unauthorized in quotes here, because a person with direct file access sufficient to copy the database or replace security2.fdb already has enough authorization on your server to do anything they want (or the security of your system has been breached).
In Firebird 3, it will be possible to store users in the database itself, but this still requires server-side configuration, so - as far as I know - this will not restrict much in this scenario. Firebird 3 will also provide support for database encryption which could allow you to only give access on a specific server, or with users that provide a specific key. Unfortunately Firebird 3 only provides the API, but not the encryption. That is left to users or library providers to implement.
There is also a trick of creating a role with the name SYSDBA in your database, which prevents a user with username SYSDBA from connecting to the database. But this is easy to circumvent with a hex editor and some knowledge of the internal structure of a Firebird database. If the person really wants access to your data, they can also just compile a Firebird server that skips or ignores authentication.
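For reference, the trick itself is a single statement executed while connected as the database owner (a stopgap only, for the reasons above):

-- Reserve the name SYSDBA as a role so a user named SYSDBA cannot connect to this database
CREATE ROLE SYSDBA;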
All in all, this means that if someone has direct access to the database file, then they can create a copy and open it on another Firebird install one way or another. So the only real way to protect a database file is to make sure that users can only access the database through the Firebird server, don't have direct access to the database files and - except admins - are not able to create a backup of the database.
Even if users only have access through the server, they can still make a logical copy of the entire database structure, and all data they are allowed to access.
Consider reading Firebird File and Metadata Security.