Is a plain-text password in a CGI script a security hole? - perl

I've read that things can go wrong with your web server which may lead to display of PHP scripts as plain text files in a web browser; consequently I've moved most of my PHP scripts to a directory outside the web root. Now I've been wondering whether the same could happen to the CGI scripts in my cgi-bin.
My main concern is one script which contains a user name and password for my MySQL database. If this is a possible security hole (at least as far as the database content is concerned), is there a way of putting sensitive data in a different location and getting it from there (like saving it in a file in a different directory and reading it from that file, for example)? My scripts are written in Perl btw.

I've read that things can go wrong with your web server which may lead to display of PHP scripts as plain text files in a web browser; consequently I've moved most of my PHP scripts to a directory outside the web root. Now I've been wondering whether the same could happen to the CGI scripts in my cgi-bin.
Yes. If something goes wrong that causes the programs to be served instead of executed, then any of their content will be exposed. It is exactly the same issue as with PHP, although cgi-bin directories are usually aliased to a directory outside the web root, which makes the problem slightly harder to trigger.
My main concern is one script which contains a user name and password for my MySQL database. If this is a possible security hole (at least as far as the database content is concerned), is there a way of putting sensitive data in a different location and getting it from there (like saving it in a file in a different directory and reading it from that file, for example)?
Yes. Exactly that; just make sure the directory is outside the web root.
For additional security, make sure the database only accepts the credentials for connections from the minimum set of hosts that need to access it. e.g. if the database is on the same server as the web server, then only let the credentials work for localhost. Causing the database to only listen on the localhost network interface would also be a good idea in that case.
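As a sketch of how such a host-restricted account might be created (the account, database name, and passwords below are placeholders, and you would normally prompt for the admin password rather than hardcode it):

#!/usr/bin/env perl
# Sketch: create a MySQL account whose credentials only work from localhost.
use strict;
use warnings;
use DBI;

# Connect as an administrative user first.
my $dbh = DBI->connect('DBI:mysql:database=mysql;host=localhost',
                       'root', 'admin-password', { RaiseError => 1 });

# 'webapp'@'localhost' can only authenticate from the local machine;
# the same credentials presented from any other host are refused.
$dbh->do(q{CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'secret'});
$dbh->do(q{GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'webapp'@'localhost'});
$dbh->disconnect;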
My scripts are written in Perl btw.
I'd look at using one of the Config::* modules for this.
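For example, with Config::Tiny (just one of the Config::* options; the config path and key names below are made up for the sketch):

#!/usr/bin/env perl
use strict;
use warnings;
use Config::Tiny;
use DBI;

# /etc/myapp/db.conf lives outside the web root and might look like:
#   [database]
#   name = myapp
#   user = webapp
#   pass = secret
my $config = Config::Tiny->read('/etc/myapp/db.conf')
    or die 'Cannot read config: ' . Config::Tiny->errstr;

my $db  = $config->{database};
my $dbh = DBI->connect("DBI:mysql:database=$db->{name};host=localhost",
                       $db->{user}, $db->{pass}, { RaiseError => 1 });

Make sure the config file itself is readable only by the user the CGI scripts run as.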

One concern worth mentioning is specific to shared hosting.
If you're on a host shared with other users, it may be impossible to hide the password from them.
This depends on configuration details for the OS and the webserver.
For instance, in a common Apache configuration on Linux all sites are served by a single webserver user, so the only way for a user hosting a website to make files readable or writable by the webserver is to make them readable/writable by all users.
You may trust all of these users not to abuse this themselves, but if one of these websites has a vulnerability that allows intruders to view the full file system, the intruder can then exploit that on all other websites.
There are countermeasures against this, but they complicate things for the users, so many hosters don't implement them.

It's definitely not a good idea to hardcode a password in a script if you can avoid it. Fortunately both Postgres and MySQL support loading DB credentials from a file. For Postgres you use ~/.pgpass and for MySQL I believe it's ~/.my.cnf. In either case you would adjust the permissions so that only the user running the script has permission to read the file. The advantage of this approach is that you don't have to write the code to read the file - the DB client library does it automatically.
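With DBD::mysql you can also point the client library at the defaults file explicitly via mysql_read_default_file, so the script never touches the password itself. A minimal sketch (the database name is a placeholder):

#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# ~/.my.cnf (mode 600) might contain:
#   [client]
#   user     = webapp
#   password = secret
my $dbh = DBI->connect(
    "DBI:mysql:database=myapp;mysql_read_default_file=$ENV{HOME}/.my.cnf",
    undef, undef, { RaiseError => 1 },
);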

It is definitely a security concern. You should store the password encrypted in a separate file and make sure that only your app has access to it.

If you use a directory configured as cgi-bin, the file cannot be shown unless there is an error in the Apache configuration. If you keep Perl programs outside cgi-bin directories but inside the site root, it can happen.
Also, you can configure the database to accept connections only on the local socket, so knowing the database password would be useless to a remote attacker.
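For the socket-only approach, DBD::mysql can be told to use the Unix socket explicitly; combined with skip-networking (or bind-address = 127.0.0.1) in my.cnf, the server is unreachable over the network. A sketch, assuming the Debian/Ubuntu default socket path and placeholder credentials:

#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# The socket path varies by distribution.
my $dbh = DBI->connect(
    'DBI:mysql:database=myapp;mysql_socket=/var/run/mysqld/mysqld.sock',
    'webapp', 'secret', { RaiseError => 1 },
);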

You've already gotten better answers than I can provide, but as a note:
It's very bad form to store passwords as plaintext, period.
In the same way it's very bad form to overwrite or delete files without asking permission. If you do it, it will bite you or your client in the butt eventually.

Related

Data at rest encryption for remote unattended Ubuntu & PostgreSQL machine

I'm looking for a data at rest solution for our setup.
Our application runs on our clients' machines, set up by their IT guys (however, they don't possess any credentials), and located on-premise. We log in via SSH. The machines are meant to stay up. We're storing sensitive information and would need to encrypt it to meet data-at-rest requirements. We're using Ubuntu 18.04+ and PostgreSQL.
I've looked into different solutions and gathered some information from several related previously asked questions:
Full disk encryption - since their IT is not really available to us, going in that direction might be problematic, as it would require performing more steps on their side. Also, if (when) the server ever gets rebooted, we would need to log in via SSH to enter the passphrase or use some kind of network-bound encryption, which again requires additional setup, and the additional resources might not even be available to us.
File-based encryption - use something like eCryptfs and store the PostgreSQL data directory in an encrypted file system. This is currently the only solution I've found that solves most of the issues; however, there might be other directories that would need encryption, and I'm not certain they can be encrypted with that method (like /tmp). Once rebooted, the file system wouldn't be mounted automatically, and we would need to mount it manually. I don't see how we can solve this without, again, network-bound encryption. eCryptfs also lets the user enter whatever configuration and passphrase they want upon every mount, even if they don't match the previously used settings, which I think makes files prone to corruption. Writing a program that intercepts the mounting and validates the passphrase might be a possible solution to this. Handling problems like hanging processes when the FS isn't mounted is also okay for now, but overall this solution doesn't scale nicely.
Column-based encryption, client-side encryption, etc - doesn't work for our setup. We want to be able to query the data over SSH. The client is stored on the same machine with the data. Using something like PGP keys would mean the data is effectively unencrypted.
We don't use cloud services of sorts.
Maybe we need a different setup, or there are other solutions I'm not aware of. I'm really new to this subject and to the Stack Overflow community. The solutions I've found on the internet seem sparse and dated, and I'm not sure they're still relevant.

Perl CGI::Session permission issues

I have a website which runs on Perl CGI scripts. When a user logs in, it creates a new session using Perl CGI::Session.
The problem comes from accessing two duplicated websites located under different user directories. For example, www.abc.edu/~AAA/project/ and www.abc.edu/~BBB/project/
These are exactly the same website on the same machine, so they share the same /tmp directory.
When I log in to AAA's website (~AAA/project/*), it creates a session cookie on my computer, in which the domain name is abc.edu. Then it creates session information in the /tmp directory, owned by 'AAA', because the script runs as 'AAA'.
Then if I access BBB's website (~BBB/project/*), it tries to use the session info stored on my computer because the domain name is the same. However, since the session info stored in /tmp is owned by 'AAA', BBB's scripts cannot read or write it.
[edit] This is like A/B testing websites, and I agree that they should not share the session information.
I am thinking that the session information stored in /tmp should be readable and writable by anyone in this case to resolve the issues.
[edit] I realized the security issues that #simbabque pointed out, and I also found that the -path parameter of session cookies can be used to differentiate those two groups of users. So now my question is: if I do want a common authentication system between those two websites, how can I share the session information without causing security issues? What is the typical way to handle this kind of A/B testing with a shared authentication system? Thanks for your help.
I was planning to write a long answer with an example application, but after rereading your comments and the question I think the answer is rather simple:
If you intend to use one login mechanism and the site's users are aware of this, then there is no security concern. It's being done all the time. A lot of systems today are made up of more than just one program forming one application, and they need to share a login mechanism to do that.
If the ownership of the files in the temp directory is a problem because the applications run as different system users, then simply don't use files as the session storage. Use a database or a key/value-store for example.
Or you could put both users into the same group and make the files group-read-writable. There are a lot of solutions here.
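For instance, a sketch of the database-backed option (it assumes a sessions table set up as described in CGI::Session::Driver::mysql; the database name and credentials are placeholders):

#!/usr/bin/env perl
use strict;
use warnings;
use CGI;
use CGI::Session;
use DBI;

my $cgi = CGI->new;
my $dbh = DBI->connect('DBI:mysql:database=myapp;host=localhost',
                       'webapp', 'secret', { RaiseError => 1 });

# Scripts under both ~AAA and ~BBB can share this store, no matter
# which system user each one runs as.
my $session = CGI::Session->new('driver:mysql', $cgi, { Handle => $dbh });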

Site on two different servers

I'm considering getting a web server in China to reduce site loading times for users in China. The problem is: how do I sync/keep the same data between the two sites? When content is edited on the site, those changes should be propagated to the site on the Chinese server.
Server is running Linux, Apache and MySQL. Website is using WordPress.
FYI, I'm already using a CDN and the site's loading times from China are still too long.
Basically your solution would need to...
Copy the entire contents of your http'd directory from the main server to the Chinese server.
Copy the entire contents of your MySQL database from the main server to the Chinese server.
Perform these tasks at a regular interval without manual intervention.
I can guide you to references that will help with each task and sometimes can show you a quick example. However, if you want to get it to work and especially if you want to optimize the process, you're going to have to look through the references yourself.
If I didn't do it this way, this answer would get even more horrendously long than it already is.
Before we start you should remember...
Thing 0 - Please Try Not to be Intimidated by the Length of this Answer
I know I've written a lot, perhaps more than I should have, but I guarantee you are capable of implementing this in no more than a day. I have tried to be thorough but that does not mean that what I'm describing is particularly complicated.
Thing 1 - Shutdown your Chinese Server During Transfer
This transfer of data is going to make your Chinese server unusable while it's in progress, as you might have guessed. You need to make sure that your Chinese server is not operational during the transfer. Otherwise the server might have only partial data available, which could cause problems for both client and server, particularly in relation to MySQL.
Thing 2 - Use Compression as much as You Can
As time consuming as compression and decompression can be for large amounts of data, believe me it is nothing compared to the time you will waste sending the uncompressed data to China. Network usage, not processor time, is really going to be the limiting factor in getting the transfer done quickly. Try to send compressed files whenever possible.
Thing 3 - Try to Use Checksums
Sending all your data, particularly in compressed format, will leave it vulnerable to corruption in transit. Whenever you send a file I encourage you to use some kind of checksum on the data to verify that it has not been corrupted. For brevity I will not be showing you how to do this but I'm sure you're smart enough to figure out how to pepper in some verification.
In case you're not familiar with checksums, the Wikipedia article about them is pretty straightforward. The most commonly used are MD5 and SHA-1, but both of those are somewhat collision-prone. I would recommend SHA-2 (also called SHA-256/512) or the newer SHA-3.
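If you'd rather not rely on an external tool, Perl's core Digest::SHA module can produce the digest; a minimal sketch (the file name matches the archive created by the script further down):

#!/usr/bin/env perl
use strict;
use warnings;
use Digest::SHA;

# Compute a SHA-256 digest of the archive before sending; recompute it
# on the Chinese server and compare the two values after the transfer.
my $sha = Digest::SHA->new(256);
$sha->addfile('copy.zip');
print $sha->hexdigest, "\n";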
Copying your Http'd Directory to the Chinese Server
As far as I know (and I could be wrong) there is no built-in way to transfer files from one Apache server to another, so you're going to have to write your own script for this.
You're also going to need to have two separate scripts: one for the main server and one for the Chinese server. Here's a breakdown of what each script needs to do.
On your main server...
Log in as your Apache server's user. (Reference for switching users.)
zip/gzip/tar.gz your http'd directory's contents. (Reference for zip. Reference for gzip. Reference for tar.)
scp (secure copy) the compressed file to your Chinese server. Make sure to copy it to the username that Apache runs under. (Reference for scp.)
Delete the compressed file.
Initiate the Chinese server's script (this will be discussed later).
You will likely be using a shell script for all of this, so I hope you're familiar with the terminal. A simple example would look like this.
#!/bin/sh
## Run this script as your Apache server's user (usually www-data),
## e.g. via: sudo -u www-data ./sync_to_china.sh
## It assumes key-based SSH authentication is set up between the servers;
## scp cannot safely read a password from stdin.
## First I'll define some variables to explain this better.
WWW_DIR="/var/www"              # your http'd directory
CHINA_HOST="china.example.com"  # the host name/IP address of your Chinese server
CHINA_USER="www-data"           # Apache's username on the Chinese server
CHINA_HOME="/home/www-data"     # the home directory of that user on the Chinese server
## Now to the real scripting. I will be using zip for compression.
zip -r copy.zip "$WWW_DIR"
scp copy.zip "$CHINA_USER@$CHINA_HOST:$CHINA_HOME"
rm copy.zip
## Then you initiate the next step of the process.
## Like I said this will be covered later.
On your Chinese server...
Log in as the Apache user.
Delete the contents of the http'd directory (probably /var/www).
Decompress the scp'd file (this will change depending on how you compressed it).
Copy the decompressed directory to the http'd directory (this step is unnecessary if you choose to compress with zip).
Delete the compressed, scp'd file.
Notify main server to continue next step (again, will be discussed later).
This is pretty straightforward and I don't think you need another example for this part.
Copying the MySQL Database Contents
You can find a good reference for how to do this in this article from the MySQL website. Basically copying database contents is a built-in feature (mysqldump). Try to make use of the compression options!
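As a sketch of what that might look like from the main server (the database name and host are placeholders; mysqldump reads its credentials from ~/.my.cnf so they stay off the command line):

#!/usr/bin/env perl
use strict;
use warnings;

# --single-transaction gives a consistent dump of InnoDB tables without
# locking; gzip covers the compression advice from earlier.
system('mysqldump --single-transaction myapp | gzip > dump.sql.gz') == 0
    or die "mysqldump failed: $?";
system('scp dump.sql.gz www-data@china.example.com:') == 0
    or die "scp failed: $?";
# On the Chinese server: gunzip -c dump.sql.gz | mysql myapp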
Performing these Tasks at Regular Intervals without Manual Intervention
OK, this is where things get kind of complicated.
The first thing you need to know is how to schedule tasks at regular intervals on Linux. This is done with a command line tool called crontab. You can see good examples for setting up cron jobs in this article, and the full crontab documentation here.
However what will take more skill than just scheduling the job at regular intervals will be synchronizing the data transfer. If you simply set one server to send data at a certain time and the other to receive it at a certain time, you will get many bugs. Be sure of that.
My recommendation would be to create a socket in the Chinese server that listens for instructions from the main server.
This can be done in a variety of languages. Because you're using Linux I would recommend doing this in C, but it can be done in almost any language including Bash.
A full example would be too much, but basically this is the flow of what you have to do (a minimal Perl sketch of the listening side follows the list).
Socket in China listens for connections.
Cron job in main server connects to China socket.
Main server authenticates itself.
Chinese server stops Apache, stops accepting requests.
Chinese server acknowledges authentication approved.
Main server scp's website contents to Chinese server.
Main server tells Chinese server that scp is complete.
Chinese server replaces Apache's http'd directory's contents with the data that has been scp'd.
Chinese server announces success to main server.
Main server copies MySQL data.
Main server tells Chinese server process is complete.
Chinese server resumes Apache service.
Chinese server notifies main server that service is resumed.
Socket is closed.
Chinese server goes back to listening for connection from main server.
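Here is that minimal sketch of the listening side in Perl (the port, command names, and token are invented for illustration; deploy.sh stands in for the decompress-and-replace step, and a real version needs proper authentication and error handling):

#!/usr/bin/env perl
use strict;
use warnings;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    LocalPort => 9000,
    Listen    => 5,
    Reuse     => 1,
) or die "Cannot listen: $!";

while (my $client = $server->accept) {
    my $authed = 0;
    while (my $line = <$client>) {
        chomp $line;
        if ($line eq 'AUTH sekrit') {        # step 3: main server authenticates
            $authed = 1;
            print $client "OK\n";
            next;
        }
        next unless $authed;                 # ignore everything pre-auth
        if ($line eq 'STOP') {               # step 4: stop accepting requests
            system('apachectl stop');
            print $client "STOPPED\n";
        }
        elsif ($line eq 'SCP_DONE') {        # step 8: swap in the new files
            system('./deploy.sh');
            print $client "DEPLOYED\n";
        }
        elsif ($line eq 'ALL_DONE') {        # step 12: resume Apache service
            system('apachectl start');
            print $client "RESUMED\n";
            last;
        }
    }
    close $client;                           # steps 14-15: back to listening
}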
I hope this helps!

Guaranteeing consistency while accessing files on a web server

I'm in the process of building a simple update server for an application. The parts of the application being updated are configuration files; the most up-to-date copies of these files exist on the update server and these files can be edited by the individual managing the application (the "application manager") at any time. However, I don't want the application to be able to download one of these files while the file is being edited by the application manager; this would obviously cause consistency issues. How can I prevent these files from being accessed in an inconsistent state? Alternatively, would a solution be to provide a checksum along with the file that the application could use to determine if the file was received in a consistent state?
EDIT: I've seen this post concerning access restrictions using .htaccess and think it could be of use. However, I want the application manager to do as little thinking as possible; having them forget to re-allow connections might be problematic. That being said, they're going to have to do some work at some point; maybe this is the way I should go?
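For the checksum variant mentioned in the question, a sketch of the application side (the URLs and file names are invented; it assumes the manager's tooling writes the .sha256 file after saving the config):

#!/usr/bin/env perl
use strict;
use warnings;
use LWP::Simple qw(get);
use Digest::SHA qw(sha256_hex);

my $base = 'https://updates.example.com/config';

# Retry a few times: a mismatch most likely means we caught the file mid-edit.
for (1 .. 5) {
    my $body     = get("$base/app.conf")        // next;
    my $expected = get("$base/app.conf.sha256") // next;
    chomp $expected;
    if (sha256_hex($body) eq $expected) {
        print "consistent copy received\n";
        last;
    }
    sleep 1;
}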

Postgres Encryption of configuration files

Currently in Postgres the largest security hole is the .conf files that the database relies on: someone with access to the system (not necessarily the database) can modify those files and gain entry. Because of this I am seeking out resources on how to encrypt those .conf files and then decrypt them during each session of the database. Performance is not really an issue at this point. Does anyone have any resources on this, or has anyone developed any prototypes that utilize this functionality?
Edit
Since there seems to be some confusion here about what it is I am asking. The scenario can best be illustrated on a Windows box with the following groups:
1) Administrators: system administrators
2) Database Administrators: Postgres administrators
3) Auditors: security auditors
The Auditors group typically needs access to log files and configuration files to ensure system security. However, the issue comes when a member of the Auditors group needs to view the Postgres configuration and log files. If this member decides that they want to access the database even though they do not have a database account, it is a very short task to break in. How does one go about preventing this? Answers such as "get better auditors" are quite poor, as you can never fully predict what people will do.
You are fine. There is no need to encrypt, as long as the permissions on the *.conf files are correct.
Your postgresql.conf and pg_hba.conf should both be marked as readable only by the postgres user/group. If you don't have actual users with those permissions, then only root can see them.
So, are you trying to prevent root from making changes? Because a normal user can't change those files, and if you don't trust root, you've already lost.
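If you want to double-check those permissions, here is a small sketch that audits the usual Debian-style locations (the paths vary by platform; adjust the glob for yours):

#!/usr/bin/env perl
use strict;
use warnings;

# Flag any PostgreSQL config file that users outside the owner/group can read.
for my $file (glob '/etc/postgresql/*/main/*.conf') {
    my $mode = (stat $file)[2] & 07777;
    printf "%-55s %04o\n", $file, $mode;
    warn "$file is readable by other users!\n" if $mode & 0004;
}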
I think you might be stuck - here's what you said:
The Auditors group typically needs access to log files and configuration files
and then:
How does one go about preventing [Auditors from accessing the database using the values in the configuration files]?
If you really want to let Auditors get at your config files but are nervous about them accessing your database, your best bet would be to move your config files off your server to somewhere else, and then make sure Auditors don't actually have access to your production systems. They could still look at the log files all they wanted, but they wouldn't be able to access the database server to try to get at the database itself.