Currently the largest security hole in Postgres is the .conf files the database relies on: someone with access to the system (though not necessarily to the database) can modify those files and gain entry. Because of this I am looking for resources on how to encrypt those .conf files and then decrypt them for each database session. Performance is not really an issue at this point. Does anyone have resources on this, or has anyone developed a prototype that implements this functionality?
Edit
Since there seems to be some confusion about what I am asking, the scenario can best be illustrated on a Windows box with the following groups:
1) Administrators: System Administrators
2) Database Administrators: Postgres Administrators
3) Auditors: Security Auditors
The Auditors group typically needs access to log files and configuration files to ensure system security. The issue comes when a member of the Auditors group needs to view the Postgres configuration and log files: if that member decides they want to access the database, even though they do not have a database account, it is a very short task to break in. How does one go about preventing this? Answers such as "get better auditors" are quite poor, as you can never fully predict what people will do.
You are fine. There is no need to encrypt, so long as you have the permissions on the *.conf files set correctly.
Your postgresql.conf and pg_hba.conf should both be marked as readable only by the postgres user/group. If you don't have actual users with those permissions, then only root can see them.
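For reference, locking those two files down usually looks something like this (paths vary by installation, e.g. /etc/postgresql/<version>/main/ on Debian/Ubuntu or the data directory elsewhere; these are just examples):

```sh
# Run as root; adjust paths for your installation.
chown postgres:postgres postgresql.conf pg_hba.conf
chmod 600 postgresql.conf pg_hba.conf

# Verify -- only postgres (and root) can now read them:
ls -l postgresql.conf pg_hba.conf
# -rw------- 1 postgres postgres ... postgresql.conf
# -rw------- 1 postgres postgres ... pg_hba.conf
```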
So, are you trying to prevent root from making changes? Because a normal user can't change those files, and if you don't trust root, you've already lost.
I think you might be stuck - here's what you said:
The Auditors group typically needs access to log files and configuration files
and then:
How does one go about preventing [Auditors from accessing the database using the values in the configuration files]?
If you really want to let Auditors get at your config files but are nervous about them accessing your database, your best bet would be to move your config files off of your server to somewhere else - and then make sure Auditors don't actually have access to your production systems. They could still look at the log files all they wanted, but they wouldn't be able to access the database server to try to get at the database itself.
Related
I'm looking for a data-at-rest encryption solution for our setup.
Our application runs on our clients' machines, set up by their IT guys (however, they don't possess any credentials), and located on-premise. We log in via SSH. The machines are meant to stay up. We're storing sensitive information and would need to encrypt it to meet data-at-rest requirements. We're using Ubuntu 18.04+ and PostgreSQL.
I've looked into different solutions and gathered some information from several related previously asked questions:
Full disk encryption - since their IT is not really available to us, going in that direction might be problematic, as it would require more steps on their side. Also, if (when) the server ever gets rebooted, we would need to log in via SSH to enter the passphrase or use some kind of network-bound encryption, which again requires additional setup and resources that might not even be available to us.
File-based encryption - use something like eCryptfs and store the PostgreSQL data directory in an encrypted file system. This is currently the only solution I've found that solves most of the issues; however, other directories might also need encryption, and I'm not certain they can be encrypted with that method (like /tmp). Once rebooted, the file system wouldn't be mounted automatically, and we would need to mount it manually (a rough sketch of that manual mount follows this list); I don't see how we can solve this without, again, network-bound encryption. eCryptfs also lets the user enter whatever configuration and passphrase they want on every mount, even if they don't match the previously used settings, which I think makes file corruption likely. Writing a program that intercepts the mount and validates the passphrase might be a possible solution to this. Handling problems like hanging processes when the FS isn't mounted is also okay for now, but overall this solution doesn't scale nicely.
Column-based encryption, client-side encryption, etc. - these don't work for our setup. We want to be able to query the data over SSH, and the client sits on the same machine as the data, so using something like PGP keys would mean the data is effectively unencrypted.
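For context, the manual eCryptfs mount I mentioned above looks roughly like this (paths are examples, and every prompt has to be answered again on each mount):

```sh
# Stop the database, mount the encrypted directory over itself, start it again.
sudo systemctl stop postgresql
sudo mount -t ecryptfs /var/lib/postgresql /var/lib/postgresql
#   mount.ecryptfs interactively asks for the passphrase, cipher, key size,
#   plaintext passthrough and filename encryption settings -- and accepts
#   whatever is typed, even if it differs from what was used last time.
sudo systemctl start postgresql
```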
We don't use cloud services of any sort.
Maybe we need a different setup, or there are other solutions I'm not aware of. I'm really new to this subject and to the Stack Overflow community. The solutions I've found on the internet seem sparse and dated, and I'm not sure they're still relevant.
I have a website which runs as Perl CGI scripts. When a user logs in, it creates a new session using Perl CGI::Session.
The problem comes from accessing two duplicated websites located under different user directories. For example, www.abc.edu/~AAA/project/ and www.abc.edu/~BBB/project/
These are exactly the same website on the same machine, so they share the same /tmp directory.
When I log in to AAA's website (~AAA/project/*), it creates a session cookie on my computer in which the domain name is abc.edu. It then creates session information in the /tmp directory, and that session file is owned by 'AAA' because the owner of the script is supposed to be 'AAA'.

Then if I access BBB's website (~BBB/project/*), it tries to use the session info stored on my computer because the domain name is the same. However, since the session info stored in /tmp is owned by 'AAA', BBB's script cannot read or write the session information.
[edit] This is like A/B testing websites, and I agree that they should not share the session information.
I am thinking that the session information stored in /tmp should be readable and writable by anyone in this case to resolve the issues.
[edit] I realized the security issues that #simbabque pointed out, and I also found that the -path parameter of the session cookie can be used to differentiate those two groups of users. So now my question is: if I indeed want to use a common authentication system between those two websites, how can I share the session information without causing security issues? What is the typical way to handle this kind of A/B testing with a shared authentication system? Thanks for your help.
I was planning to write a long answer with an example application, but after rereading your comments and the question I think the answer is rather simple:
If you intend to use one login mechanism and the site's users are aware of this, then there is no security concern. It's being done all the time. A lot of systems today are made up of more than just one program working together to form one application, and they need to do that.
If the ownership of the files in the temp directory is a problem because the applications run as different system users, then simply don't use files as the session storage. Use a database or a key/value store, for example (see the sketch below).
Or you could put both users into the same group and make the files group-read-writable. There are a lot of solutions here.
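For the database option, a minimal sketch with CGI::Session's MySQL driver might look like the following. The database name, credentials, and table are made up for illustration; CGI::Session expects a sessions table with id and a_session columns by default.

```perl
#!/usr/bin/perl
# Hypothetical sketch: both ~AAA and ~BBB scripts store sessions in one shared
# MySQL database instead of /tmp, so file ownership no longer matters.
use strict;
use warnings;
use CGI;
use CGI::Session;
use DBI;

my $cgi = CGI->new;

# One shared connection target reachable by both copies of the site.
my $dbh = DBI->connect(
    'DBI:mysql:database=shared_sessions;host=localhost',
    'sess_user', 'sess_password',
    { RaiseError => 1 },
);

# Expects a table like:
#   CREATE TABLE sessions (id CHAR(32) NOT NULL PRIMARY KEY, a_session TEXT);
# CGI::Session picks the session id up from the cookie (or query string).
my $session = CGI::Session->new('driver:MySQL', $cgi, { Handle => $dbh });

print $session->header();              # re-sends the session cookie
print 'Session id: ', $session->id, "\n";
```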
I'm using MongoDB as my database, and as a first-time back-end developer the ease with which I can delete an entire database/collection really bothers me.
Simply typing db.collection.remove() removes all records from that collection!
I know that an effective backup strategy should render this a non-issue, but I occasionally do run .remove() on some collections, and I'd hate to type in the wrong collection name by accident and (a) have to go through a backup restore, and (b) lose whatever data I had gathered between the backup and the restore, especially as my app gathers a lot of user data.
Is there any 'safeguard' I can set up in my database, even if it's just a warning/confirmation that says
"Yo, are you sure you want to remove everything from <collectionname>? Choose: Yes/No"
User roles won't fix your problem. If your account has permissions to delete one user, you could accidentally delete them all. If your account has permissions to update an attribute for one user, you could accidentally update all of your users.
There's a simple fix for this however.
Step 0: Back up your database. And test your backups regularly. And make sure you get alerted if the backup did not run or errored. Replica sets are not backups. I know this is obvious, but evidently it's not obvious to everybody.
Step 1: Write a web admin GUI interface for your database. This will only take a day or two -- and it should be simple enough that a secretary or intern could use it without fear for your data. (If you think this will take a long time, find a framework with more bells and whistles. Your admin console doesn't even need to be written in the same language as your app.)
Step 2: Data migrations (maintenance transformations of your database) should always be run from scripts checked into source control and tested on non-prod beforehand. The script could be as simple as mongo --eval "db.foo.update(...)", but you should run it as a script to avoid cut-and-paste errors. Ideally, you would even have a checklist for all migrations. (Check that you have a recent backup. Check the database log and system load beforehand. Write a before and after query that will tell you if the migration was successful...) A sketch of such a script is shown below.
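As a hedged illustration (collection and field names are invented), a migration script of that kind might look like:

```javascript
// migration_add_status.js -- hypothetical example, kept in source control and
// run with:  mongo mydb migration_add_status.js

// "Before" query: how many documents still need the change?
var before = db.users.count({ status: { $exists: false } });
print('users missing "status" before: ' + before);

// The transformation itself: give every such user a default status.
db.users.update(
    { status: { $exists: false } },
    { $set: { status: 'active' } },
    { multi: true }
);

// "After" query: should print 0 if the migration succeeded.
var after = db.users.count({ status: { $exists: false } });
print('users missing "status" after: ' + after);
```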
Step 3: You now no longer need to use the production Mongo console. So don't. It's a useful tool for development, but that's only needed on local development databases.
The above-mentioned Roles might be useful for read-only queries. But you can already do that against the non-master replica set member.
tl;dr: You can go pretty far using cowboy admin techniques, but eventually you're going to figure out that it's better (and not much more work) to automate everything.
There is nothing you can do in the current version to provide this functionality.
In a future version, when user-defined roles are available, you could define a role which allows insert() and update() but not remove() or drop() etc., and then make yourself log in as a different, higher-privileged user for destructive work; but that's not available in the current (2.4) version.
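For reference, once user-defined roles did become available (MongoDB 2.6 and later), the idea above looks roughly like this. Database, role, and user names are invented, and this is a sketch rather than a complete hardening recipe:

```javascript
// Run from the mongo shell as an administrator (MongoDB 2.6 or later).
use mydb
db.createRole({
    role: 'writeNoDelete',
    privileges: [{
        resource: { db: 'mydb', collection: '' },   // every collection in mydb
        actions: [ 'find', 'insert', 'update' ]     // no 'remove', no 'dropCollection'
    }],
    roles: []
})

db.createUser({
    user: 'day_to_day_user',
    pwd: 'choose-a-real-password',
    roles: [ { role: 'writeNoDelete', db: 'mydb' } ]
})
```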
We are using Perforce in my company and rely on it heavily. I need some suggestions for the following scenario:
Our Depot structure is something like this:
//depot
    /product1
        /component1
        /component2
        .
        .
        /componentN
            /*.java
            /*.xml
    /product2
        /component1
        /component2
        .
        .
        /componentN
            /*.java
            /*.xml
Every product has multiple components, and every component consists of Java, XML, or other program files. Every component has a manager/owner associated with it.
Right now, we have blocked write permissions for every user; only when a change is approved by the manager/owner after code review do we open write permission for that user on the relevant file/folder so they can check in. This process becomes a little untidy because the manager/developer has to wait for a Perforce admin to grant the permission (update the Perforce protections table). Also, we give them a window of only 24 hours to check in (due to agile, which I don't understand much :)), after which we are supposed to block write access again for that user.
What I am looking for is a mechanism whereby Perforce admins can delegate this responsibility to the respective managers/owners without giving them super user or admin access, and which automatically revokes the write permission after 24 hours.
Any suggestions?
Thanks in advance.
There's nothing that does this out of the box, per se.
The closest thing I can think of is to have the mainline version of these components permissioned by a group with an owner. The owner of the group is allowed to add and remove members from the group, thus delegating the permissioning to the "gatekeeper" rather than to the admins themselves.
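A rough sketch of what that could look like (group, user, and path names are invented):

```
# Group spec ("p4 group product1-component1-writers"); the owner can edit
# the Users list without needing any admin-level access:
Group:  product1-component1-writers
Owners:
        component1_manager
Users:
        alice
        bob

# One-time protections entry, maintained by the Perforce admins:
write group product1-component1-writers * //depot/product1/component1/...
```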
Let me know if you require further clarification about this.
One common solution is to build a simple tool which reads and writes the protections table, the group memberships, etc., to implement the policies that you desire.
The protections and groups data are not complex in format, and you can easily write a little bit of text-processing code that writes and re-writes these specs according to your needs.
Then install your tool on the server machine in a secure fashion, granting the tool the rights to update the protections table, and have your component administrators use the tool to manage the permissions.
For example, I've seen this done by writing a small web application, in Java or Perl for example, installing that on a web server on a secure machine, and letting the component admins operate that tool through a web interface.
All your tool has to provide is (a) a simple login/logout mechanism for your component admins (the web server may already do this for you), (b) a command that takes a user name and a folder name and grants permission, and (c) a command (or a timer) that removes that permission again afterwards.
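As a very rough sketch of the "grant" command in (b), written in Perl with invented names, and assuming it runs on the server under a Perforce super user account:

```perl
#!/usr/bin/perl
# Hypothetical sketch only: append a temporary write entry to the protections
# table for one user and one path. A companion job (not shown) would record
# the grant time elsewhere and strip the entry again after 24 hours.
use strict;
use warnings;

my ($user, $path) = @ARGV;
die "usage: $0 <user> //depot/path/...\n" unless $user && $path;

# Read the current protections spec as text.
my $spec = `p4 protect -o`;
die "p4 protect -o failed\n" if $? != 0;

# Append one tab-indented entry to the Protections: field
# (the last field in the spec output).
$spec .= "\twrite user $user * $path\n";

# Feed the modified spec back to the server.
open my $p4, '|-', 'p4 protect -i' or die "cannot run 'p4 protect -i': $!\n";
print {$p4} $spec;
close $p4 or die "'p4 protect -i' reported an error\n";
```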
I've read that things can go wrong with your web server which may lead to display of PHP scripts as plain text files in a web browser; consequently I've moved most of my PHP scripts to a directory outside the web root. Now I've been wondering whether the same could happen to the CGI scripts in my cgi-bin.
My main concern is one script which contains a user name and password for my MySQL database. If this is a possible security hole (at least as far as the database content is concerned), is there a way of putting sensitive data in a different location and getting it from there (like saving it in a file in a different directory and reading it from that file, for example)? My scripts are written in Perl btw.
I've read that things can go wrong with your web server which may lead to display of PHP scripts as plain text files in a web browser; consequently I've moved most of my PHP scripts to a directory outside the web root. Now I've been wondering whether the same could happen to the CGI scripts in my cgi-bin.
Yes. If something goes wrong that causes the programs to be served instead of executed, then any of their content will be exposed. It is exactly the same issue as with PHP (except that given the way that cgi-bin directories are usually configured (i.e. aliased to a directory outside the web root), it is slightly harder for the problems to occur).
My main concern is one script which contains a user name and password for my MySQL database. If this is a possible security hole (at least as far as the database content is concerned), is there a way of putting sensitive data in a different location and getting it from there (like saving it in a file in a different directory and reading it from that file, for example)?
Yes. Exactly that, just make sure the directory is outside the webroot.
For additional security, make sure the database only accepts the credentials for connections from the minimum set of hosts that need to access it. e.g. if the database is on the same server as the web server, then only let the credentials work for localhost. Causing the database to only listen on the localhost network interface would also be a good idea in that case.
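For example, with MySQL that might look like the following (user and database names are invented):

```sql
-- The web application's account only works from the machine the web server
-- runs on, and only has the privileges the application actually needs.
CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'choose-a-real-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON webapp_db.* TO 'webapp'@'localhost';
```

Setting bind-address = 127.0.0.1 in the [mysqld] section of my.cnf then keeps the server off the external network interfaces entirely.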
My scripts are written in Perl btw.
I'd look at using one of the Config::* modules for this.
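For instance, a minimal sketch with Config::Tiny (the file location and key names are invented) might be:

```perl
#!/usr/bin/perl
# Hypothetical sketch: read DB credentials from an INI-style file kept
# outside the web root, then connect.
use strict;
use warnings;
use Config::Tiny;
use DBI;

# e.g. /home/myuser/private/db.conf, mode 0600, NOT under the web root:
#
#   [database]
#   name     = mydb
#   host     = localhost
#   user     = webapp
#   password = choose-a-real-password
my $conf = Config::Tiny->read('/home/myuser/private/db.conf')
    or die 'cannot read config: ' . Config::Tiny->errstr;

my $db  = $conf->{database};
my $dbh = DBI->connect(
    "DBI:mysql:database=$db->{name};host=$db->{host}",
    $db->{user},
    $db->{password},
    { RaiseError => 1 },
);
```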
One concern worth mentioning is specific to shared hosting.
If you're on a host shared with other users, it may be impossible to hide the password from them.
This depends on configuration details for the OS and the webserver.
For instance, it is common to have an Apache configuration on Linux where the only way for a user hosting a website to make files readable or writable by the webserver user is to make them readable/writable by all users.
You may trust all of these users not to abuse this themselves, but if one of these websites has a vulnerability that allows intruders to view the full file system, the intruder can then exploit that on all other websites.
There are countermeasures against this, but they complicate things for the users, so many hosters don't implement them.
It's definitely not a good idea to hardcode a password in a script if you can avoid it. Fortunately both Postgres and MySQL support loading DB credentials from a file. For Postgres you use ~/.pgpass and for MySQL I believe it's ~/.my.cnf. In either case you would adjust the permissions so that only the user running the script has permission to read the file. The advantage of this approach is that you don't have to write the code to read the file - the DB client library does it automatically.
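For reference, the two files look roughly like this (values are invented; both should be mode 600 and owned by the user the script runs as):

```
# ~/.pgpass -- one entry per line: hostname:port:database:username:password
localhost:5432:mydb:webapp:choose-a-real-password

# ~/.my.cnf
[client]
user     = webapp
password = choose-a-real-password
```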
It is definitely a security concern. You should store the password encrypted in a separate file and make sure that only your app has access to it.
If you use a directory configured as cgi-bin, there is no way for the file to be shown unless there is an error in the Apache configuration. If you keep Perl programs outside cgi-bin directories but inside the site root, it may happen.
Also, you may configure the DB to accept connections only on the local socket, so knowing the DB password would be useless to a remote attacker.
You've already gotten better answers than I can provide, but as a note:
It's very bad form to store passwords as plaintext, period.
In the same way it's very bad form to overwrite or delete files without asking permission. If you do it, it will bite you or your client in the butt eventually.