Can I copy/paste htpasswd files to my new server?

I'm documenting the procedure for a full redeploy on my development server. Small staff, using Basic authentication (over SSL, of course) with an htpasswd file backend.
Is it safe to transfer the .htpasswd file as-is?
The operating systems will potentially differ, but the software on top (i.e. Apache) will be the same.

It's safe to transfer the htpasswd file no matter what architectures you are on: it is a text file. The only case in which you might need to do some conversion is line endings, if you were moving between Unix and Windows; between Linux/Unix boxes there are no problems.
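For reference, each line is just username:hash. An entry created with htpasswd -B (bcrypt) looks like this, with a made-up user and a placeholder hash:
alice:$2y$05$<bcrypt salt and hash>
After copying, you can check that a known password still matches with htpasswd -v /path/to/.htpasswd alice (the -v option needs an Apache 2.4 htpasswd).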

Short answer: yes, you can.
That's all you need to know.

If you're using Apache, then the .htpasswd file should be the same. Just make sure your new server config isn't pointing at a different password file, and make sure you set proper permissions on the file so that you don't get any unauthorized additional entries.
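For example, on a Debian-style Apache layout (the path and the www-data group are assumptions about your setup):
chown root:www-data /etc/apache2/.htpasswd
chmod 640 /etc/apache2/.htpasswd
That lets Apache read the file while preventing other local users from reading the hashes or appending entries.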

If you use the same Apache version, you shouldn't have any problems just copying the .htpasswd files: they just contain the hashes that people's passwords are compared against, and the hashing is the same everywhere (or it wouldn't work on any machine ;))

Related

Default UNIX permissions of MongoDB files on the hard drive

I noticed that the files in the data/ directory, hosting the databases and collections, have the r (read) permission for others.
So basically, anyone can read the data! Isn't that strange, or is it something I'm missing?
I found no way to change this behavior in the MongoDB configuration (Ubuntu 18.04). When you search for MongoDB file permissions, you only find threads about user permissions inside the database.
Thank you!
I'm going to assume you're using WiredTiger, the default storage engine for MongoDB. Either way, the same concept applies.
You'll see that the .wt files (the ones you're talking about), although readable by permission, are not very readable to the eye. Try it for yourself with less <example>.wt.
They're stored in a specific format, with compression and, optionally, encryption. Realistically, they shouldn't be retrievable from outside your server, and the users on your server should be trusted, or given limited access to the locations of these files.
In short, if you apply the proper policies, and keep your actual database and server secure, then this is normal and expected. I hope this makes sense.
When you launch mongod you need to specify a path to the data directory, and this directory must already exist.
You can set the permissions on this directory to deny world-read access by running:
chmod o-rwx /path/to/data/dir
Normally this would be done prior to the first start of mongod.
Once this is done, none of the files in the data directory will be world-readable regardless of their individual permissions.
MongoDB does not need to have a provision to do this because it never creates the data directory.
A different way of accomplishing a similar end result is to use umask, but changing permissions on the data directory is generally more reliable.
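A minimal sketch of the umask approach, assuming you start mongod by hand rather than through an init system:
umask 027
mongod --dbpath /path/to/data/dir
With a 027 umask, new files come out no more permissive than 640 and new directories 750, so nothing mongod writes is world-readable.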

Three ways to let PHP and a regular user edit the same files

I am a web developer, and for some upcoming projects I would like to use a file-based CMS. This means that many of the files I create at the start must be editable by the PHP user later, but also remain editable for my user (and also the other way around). My PC runs Debian 9, which I love but am not super knowledgeable about, and I have also just set up a local network server with Debian 9 for backups and possibly file sharing. (I'm using Webmin to configure this, which reflects my level of command line skills).
On my online shared hosting server, the PHP user and the FTP user seem to be the same, and 644/755 permissions work fine, this is also recommended by the CMS I'm using. I would like to mimic this on my computer so I don't have to fiddle with permissions all the time. But how do I do this? Currently, my regular user (anna) does not have access to www-data's files and vice versa. Putting them in the same group still means changing file permissions. Making anna the PHP user is a Bad Idea (as far as I understand it) because anna has sudo permissions.
So far I have researched three possible solutions that I don't really know very much about, and I would like to know which is the best route to take.
Develop locally on my computer and use apache-mpm-itk or suPHP to let PHP edit the files (I got that idea from this question on ServerFault).
Develop locally on my computer and rsync the files to my server with grunt-rsync, and somehow get rsync to set the ownership to www-data (another ServerFault thread helping here).
Mount the project's server directory, which is owned by www-data, on my computer with SSHFS and then either edit the files on the server directly or copy them over from my local directory with grunt-copy.
What do you think: from a security and ease of use perspective, which is the best way? Or do you know an even better one?
Thank you for taking the time to read and think about this!
Anna~
I figured it out! I finally ended up reading about running PHP as CGI instead of as an Apache module, and that this would solve my permissions problem. Plus, as far as I understand it, there are no extra security precautions to take when I'm the only one working with it on my local computer.
In case someone comes across this who might find it helpful, here's what I did (basically following these instructions):
I installed php7.0-fpm
Edited /etc/apache2/sites-enabled/000-default.conf and put the following just before </VirtualHost>:
DirectoryIndex index.php
<LocationMatch "^(.*\.php)$">
ProxyPass fcgi://127.0.0.1:9000/var/www/html
</LocationMatch>
I activated the Apache module proxy_fcgi (via Webmin, which apparently does an automatic Apache restart)
In /etc/php/7.0/fpm/pool.d/www.conf I commented out a listen line and put another below like this:
; listen = /run/php/php7.0-fpm.sock
listen = 127.0.0.1:9000
I then restarted PHP-FPM with this command: /etc/init.d/php7.0-fpm restart (a little different from the instructions, I'm on Debian 9). After that, phpinfo() gave me the Server API "FPM/FastCGI".
And finally, I changed the user and group from www-data to anna in three places, twice in /etc/php/7.0/fpm/pool.d/www.conf and then once more in /usr/lib/tmpfiles.d/php7.0-fpm.conf (this last bit may be Ubuntu/Debian specific, my thanks go to Keith for a comment on StackExchange).
And that was it! :-)
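If anyone wants to sanity-check a setup like this, a throwaway script in the web root shows which user PHP-FPM now runs as (this assumes the posix extension, which Debian's PHP packages ship with):
<?php
// should print "anna" after the pool user/group change above
$user = posix_getpwuid(posix_geteuid());
echo $user['name'];
// and prove that PHP can create files the regular user can edit back
file_put_contents(__DIR__ . '/write-test.txt', "written by {$user['name']}\n");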

Need Perl script to connect to database, but don't want the password in plain text [duplicate]

When a PHP application makes a database connection it of course generally needs to pass a login and password. If I'm using a single, minimum-permission login for my application, then the PHP needs to know that login and password somewhere. What is the best way to secure that password? It seems like just writing it in the PHP code isn't a good idea.
Several people misread this as a question about how to store passwords in a database. That is wrong. It is about how to store the password that lets you get to the database.
The usual solution is to move the password out of source-code into a configuration file. Then leave administration and securing that configuration file up to your system administrators. That way developers do not need to know anything about the production passwords, and there is no record of the password in your source-control.
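A minimal sketch of that pattern (the file location and key names here are illustrative, not a standard): the credentials live in an ini file outside the webroot, and the code only knows where to look.
<?php
// /etc/myapp/db.ini is owned by root, readable only by the web server's group
$cfg = parse_ini_file('/etc/myapp/db.ini');   // contains host=, user=, password=, dbname=
$db  = mysqli_connect($cfg['host'], $cfg['user'], $cfg['password'], $cfg['dbname']);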
If you're hosting on someone else's server and don't have access outside your webroot, you can always put your password and/or database connection in a file and then lock the file using a .htaccess:
<files mypasswdfile>
order allow,deny
deny from all
</files>
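Note that order allow,deny / deny from all is Apache 2.2 syntax; on Apache 2.4 the equivalent is:
<files mypasswdfile>
Require all denied
</files>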
The most secure way is to not have the information specified in your PHP code at all.
If you're using Apache, that means setting the connection details in your httpd.conf or virtual host file. If you do that, you can call mysqli_connect() with no parameters, so your credentials never appear in your PHP code and PHP can never output them.
This is how you specify these values in those files:
php_value mysqli.default_user myusername
php_value mysqli.default_pw mypassword
php_value mysqli.default_host server
Then you open your mysql connection like this:
<?php
$db = mysqli_connect();
Or like this:
<?php
$db = mysqli_connect(ini_get("mysqli.default_host"),
ini_get("mysqli.default_user"),
ini_get("mysqli.default_pw"));
Store them in a file outside web root.
For extremely secure systems we encrypt the database password in a configuration file (which itself is secured by the system administrator). On application/server startup the application then prompts the system administrator for the decryption key. The database password is then read from the config file, decrypted, and stored in memory for future use. Still not 100% secure since it is stored in memory decrypted, but you have to call it 'secure enough' at some point!
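A rough sketch of that startup flow (the cipher, file layout and path are assumptions for illustration, not a description of the poster's system):
<?php
// operator types the decryption key on the console at application start
echo "Decryption key: ";
$key = trim(fgets(STDIN));
// assumed layout: first 16 bytes are the IV, the rest is AES-256-CBC ciphertext
$blob = file_get_contents('/etc/myapp/db.pass.enc');
$iv = substr($blob, 0, 16);
$dbPassword = openssl_decrypt(substr($blob, 16), 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);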
This solution is general, in that it is useful for both open and closed source applications.
Create an OS user for your application. See http://en.wikipedia.org/wiki/Principle_of_least_privilege
Create a (non-session) OS environment variable for that user, with the password
Run the application as that user
Advantages:
You won't check your passwords into source control by accident, because you can't
You won't accidentally screw up file permissions. Well, you might, but it won't affect this.
Can only be read by root or that user. Root can read all your files and encryption keys anyways.
If you use encryption, how are you storing the key securely?
Works x-platform
Be sure to not pass the envvar to untrusted child processes
This method is suggested by Heroku, who are very successful.
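In PHP, consuming that environment is a one-liner (the variable names are whatever you chose when setting up the OS user's environment):
<?php
$db = mysqli_connect(getenv('DB_HOST'), getenv('DB_USER'), getenv('DB_PASS'));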
If it is possible, create the database connection in the same file where the credentials are stored, and inline the credentials in the connect statement:
mysqli_connect("localhost", "me", "mypass");
Otherwise it is best to unset the credentials after the connect statement, because credentials that are no longer in memory can't be read from memory ;)
include("/outside-webroot/db_settings.php");
mysqli_connect("localhost", $db_user, $db_pass);
unset($db_user, $db_pass);
If you are using PostgreSQL, then it looks in ~/.pgpass for passwords automatically. See the manual for more information.
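The format is one colon-separated entry per line:
hostname:port:database:username:password
and libpq will refuse to use the file (with a warning) unless its permissions are 0600 or stricter: chmod 600 ~/.pgpass.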
Previously we stored DB user/pass in a configuration file, but have since hit paranoid mode -- adopting a policy of Defence in Depth.
If your application is compromised, the user will have read access to your configuration file and so there is potential for a cracker to read this information. Configuration files can also get caught up in version control, or copied around servers.
We have switched to storing user/pass in environment variables set in the Apache VirtualHost. This configuration is only readable by root -- hopefully your Apache user is not running as root.
The con with this is that the password is now available in a global PHP variable.
To mitigate this risk we have the following precautions:
The password is encrypted. We extend the PDO class to include logic for decrypting the password. If someone reads the code where we establish a connection, it won't be obvious that the connection is being established with an encrypted password and not the password itself.
The encrypted password is moved from the global variables into a private variable. The application does this immediately, to reduce the window during which the value is available in the global space.
phpinfo() is disabled. PHPInfo is an easy target to get an overview of everything, including environment variables.
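The exact code isn't shown above, but the shape of such a PDO extension might look something like this sketch (the cipher and the IV-prefix layout are assumptions):
<?php
class EncryptedPDO extends PDO
{
    public function __construct(string $dsn, string $user, string $encryptedPw, string $key)
    {
        // assumed layout: IV prepended to the AES-256-CBC ciphertext
        $iv = substr($encryptedPw, 0, 16);
        $pw = openssl_decrypt(substr($encryptedPw, 16), 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
        parent::__construct($dsn, $user, $pw);
    }
}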
Your choices are kind of limited since, as you say, you need the password to access the database. One general approach is to store the username and password in a separate configuration file rather than the main script, and be sure to store that outside the main web tree. That way, if there is a web configuration problem that leaves your PHP files being displayed as plain text rather than executed, you haven't exposed the password.
Other than that, you are on the right lines with minimal access for the account being used. Add to that:
Don't use the combination of username/password for anything else
Configure the database server to only accept connections from the web host for that user (localhost is even better if the DB is on the same machine). That way, even if the credentials are exposed, they are of no use to anyone unless they have other access to the machine.
Obfuscate the password (even ROT13 will do). It won't put up much of a defense if someone does get access to the file, but at least it will prevent casual viewing of it.
Peter
We have solved it this way:
Use memcached on the web server, with an open connection from a separate password server.
Save the password into memcached (or even the whole password.php file, encrypted), plus the decryption key.
The web site calls the memcached key holding the password-file passphrase and decrypts all the passwords in memory.
The password server sends a new encrypted password file every 5 minutes.
If you use an encrypted password.php in your project, put an audit in place that checks whether this file was touched, or viewed, externally. When that happens, you can automatically clear the memory and close the server to access.
Put the database password in a file, make it read-only to the user serving the files.
Unless you have some means of only allowing the php server process to access the database, this is pretty much all you can do.
If you're talking about the database password, as opposed to the password coming from a browser, the standard practice seems to be to put the database password in a PHP config file on the server.
You just need to be sure that the php file containing the password has appropriate permissions on it. I.e. it should be readable only by the web server and by your user account.
An additional trick is to use a separate PHP configuration file that looks like this:
<?php exit() ?>
[...]
Plain text data including password
This does not replace setting access rules properly, but in case your web site is hacked, a "require" or an "include" will just exit the script at the first line, so it's even harder to get at the data.
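One way the application itself can then read such a file (the layout after the guard line is illustrative) is from disk rather than via include, skipping the guard:
<?php
$lines = file('/outside-webroot/db.cfg.php', FILE_IGNORE_NEW_LINES);
array_shift($lines);            // drop the '<?php exit() ?>' guard line
[$dbUser, $dbPass] = $lines;    // assumed: user on the second line, password on the third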
Nevertheless, do not ever leave configuration files in a directory that can be accessed through the web. You should have a "web" folder containing your controller code, CSS, pictures and JS. That's all. Anything else goes in offline folders.
Just putting it into a config file somewhere is the way it's usually done. Just make sure you:
disallow database access from any servers outside your network,
take care not to accidentally show the password to users (in an error message, or through PHP files accidentally being served as HTML, etcetera.)
Best way is to not store the password at all!
For instance, if you're on a Windows system, and connecting to SQL Server, you can use Integrated Authentication to connect to the database without a password, using the current process's identity.
If you do need to connect with a password, first encrypt it, using strong encryption (e.g. using AES-256, and then protect the encryption key, or using asymmetric encryption and have the OS protect the cert), and then store it in a configuration file (outside of the web directory) with strong ACLs.
Actually, the best practice is to store your database credentials in environment variables, because:
Credentials depend on the environment, which means you won't have the same credentials in dev and prod. Storing them in the same file for every environment is a mistake.
Credentials are not related to business logic, which means login and password have no place in your code.
You can set environment variables without creating any code file, which means you will never make the mistake of committing the credential files in Git.
Environment variables are superglobal: you can use them everywhere in your code without including any file.
How to use them?
Using the $_ENV array:
Setting: $_ENV['MYVAR'] = $myvar;
Getting: echo $_ENV['MYVAR'];
Using the PHP functions:
Setting with the putenv function: putenv("MYVAR=$myvar");
Getting with the getenv function: getenv('MYVAR');
In vhost files and .htaccess, but this is not recommended, since the values sit in yet another file and it doesn't really solve the problem.
You can also drop a file such as envvars.php with all the environment variables inside, execute it (php envvars.php), and delete it. It's a bit old school, but it still works: you keep no credentials file on the server and no credentials in your code. Since it's a bit laborious, frameworks do it better.
Example with Symfony (OK, it's not only PHP)
Modern frameworks such as Symfony recommend using environment variables, stored in an uncommitted .env file or set directly on the command line, which means you can do either of:
With the CLI: symfony var:set FOO=bar --env-level
With .env or .env.local: FOO="bar"

WebApp configuration in mod_perl 2 environment

I have a web app I'm writing in mod_perl 2. (It's a custom handler module, not registry or perlrun scripts.) There are several configuration options I'd like to have set at server initialization, preferably from a configuration file. The problem I'm having is that I haven't found a good place to pass a filename for my app's config file.
I first tried loading "./app.conf" but the current directory isn't the location of the modules, so it's unpredictable and error-prone. Or, I have to assume some path -- relative or absolute. This is inflexible and could be problematic if the host OS distribution is changed. I don't want to hard-code a path (though, something in /etc may be acceptable if there's just no better way).
I also tried PerlSetVar, but the value isn't available until request time. While this is workable, it means I'm potentially reading a config file from disk at least once per child (thread) init. I would rather load at server init and have an immutable static hash that is part of the spawned environment when a child is created.
I considered using a config.pl, but this means I either have a config.pl with one option to configure where to find the app.conf file, or I move the options themselves into config.pl and require end-users to respect Perl syntax when setting options. Future users will be internal admins, so that's not unreasonable, but it's more complicated than I'd like.
So what am I missing? Any good alternatives?
Usually a top priority is to avoid having configuration files amongst your executables; otherwise a server misconfiguration could accidentally show your private configuration info to the world. I put everything the app needs under /srv/app0, with a cfg subdir that is a sibling to the dirs containing executables.
If you're pre-loading modules via PerlPostConfigRequire mod/startup.pl, then that startup.pl is the best place to put the configuration file location (../cfg/app.cnf), and you have complete flexibility regarding how to store the configuration in memory. An alternative is to PerlModule your modules and load the configuration (with a relative path as above) in a BEGIN block within one of them.
Usually processing a configuration file doesn't take appreciable time, so a popular option is to lazy-load: if the code detects that the configuration is missing, it loads it before continuing. That's no use if the code needs to know the configuration earlier than that, but it avoids lots of problems, especially when migrating code to a non-mod_perl environment.

Need an opinion on a method for pulling data from a file with Perl

I am having a conflict of ideas with a script I am working on. The conflict is that I have to read a bunch of lines from a VMware file. As of now I just use SSH to probe every file for each virtual machine while the file stays on the server. The reason I now think this is a problem is that I have 10 virtual machines and about 4 files that I probe for file paths and such. This opens a new SSH channel every time I refer to the SSH object I have created using Net::OpenSSH. When all is said and done, I have probably opened about 16-20 SSH channels. Would it just be easier in a lot of ways if I SCP'd the files over to the machine that needs to process them, and then did most of the work on the local side? The script I am making is a backup script for ESXi, and it will end up storing the files anyway, including the ones I need to read from.
Any opinion would be most helpful.
If the VMs do the work locally, it's probably better in the long run.
In the short term, roughly the same amount of resources will be used, but if you were to migrate these instances to other hardware, then of course you'd see gains from distributing the processing.
Also, from a maintenance perspective, it's probably more convenient for each VM to host the local process, since I'd imagine that if you need to tweak it for a specific box, it would make more sense to keep it there.
Aside from the scalability benefits, there aren't really any other pros or cons.