Default UNIX permissions of MongoDB files on the hard drive - mongodb

I noticed that the files in the data/ directory, which hosts the databases and collections, have the read (r) permission set for others.
So basically, anyone can read the data! Isn't that strange, or am I missing something?
I found no way to change this behavior in the MongoDB configuration (Ubuntu 18.04). When you search for MongoDB file permissions, you only find threads about user permissions inside the database.
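For reference, the permissions can be checked with something like this (the path is just an example; use your actual data directory, e.g. the Ubuntu package default /var/lib/mongodb):
ls -l /path/to/data/dir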
Thank you!

I'm going to assume you're using WiredTiger, the default storage engine for MongoDB. Either way, the same concept applies.
You'll see that the .wt files (the ones you're talking about), although readable by permission, are not very readable to the eye. Try looking for yourself with less <example>.wt.
They're stored in a specific format, with compression (and optionally encryption). Realistically, they shouldn't be retrievable from outside your server - and the users on the server should be trusted, or given only limited access to the locations of these files.
In short, if you apply the proper policies and keep your actual database and server secure, then this is normal and expected. I hope this makes sense.

When you launch mongod you need to specify a path to the data directory, and this directory must already exist.
You can set the permissions on this directory to deny world-read access by running:
chmod o-rwx /path/to/data/dir
Normally this would be done prior to the first start of mongod.
Once this is done, none of the files in the data directory will be world-readable regardless of their individual permissions.
MongoDB does not need a provision for this because it never creates the data directory itself.
A different way of accomplishing a similar result is to use umask, but changing the permissions on the data directory is generally more reliable.
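If you do want the umask route, a rough sketch would be something like the following (paths are placeholders; if mongod is started by systemd, the equivalent would be a UMask= setting in the service unit):
# make mongod create new files without any world access
umask 027
mongod --dbpath /path/to/data/dir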

Related

SqlBase Database on One Drive

I have a database on "Microsoft OneDrive", and I have 4 valid licenses from Gupta for SqlBase. When I run the application from PC 1 I can access the database, but when I try the same from PC 2 I get this:
Reason: Attempting to open an existing file and a failure has occurred.
Remedy: Determine and correct the cause of the open file failure.
Verify that the specified file exists. Verify the number of
files allowed open for the operating system permits the
additional file, that is, check the FILES= configuration
parameter setting.
I assume this is related to the LOG files of the database and some settings in the Sql.Ini, but I'm not able to find where or how.
The intention is to run the database on "OneDrive", buy SqlBase licenses, and run a multi-user system. The application has been built for that.
Where is my thinking wrong?
What am I doing wrong?
Which settings are missing?
Thanks
That won't work.
SqlBase (like every other RDBMS) is built to manage one database file plus its log files itself.
When multiple instances work on more-or-less replicated data files, this ends in a clash.
There are systems which can work as a distributed cluster (e.g. the document database RavenDB), but they are built to work like this (not with OneDrive of course, but with their own replication mechanism). SqlBase is not.

Three ways to let PHP and a regular user edit the same files

I am a web developer, and for some upcoming projects I would like to use a file-based CMS. This means that many of the files I create at the start must be editable by the PHP user later, but also remain editable for my user (and also the other way around). My PC runs Debian 9, which I love but am not super knowledgeable about, and I have also just set up a local network server with Debian 9 for backups and possibly file sharing. (I'm using Webmin to configure this, which reflects my level of command line skills).
On my online shared hosting server, the PHP user and the FTP user seem to be the same, and 644/755 permissions work fine; this is also what the CMS I'm using recommends. I would like to mimic this on my computer so I don't have to fiddle with permissions all the time. But how do I do this? Currently, my regular user (anna) does not have access to www-data's files and vice versa. Putting them in the same group still means changing file permissions. Making anna the PHP user is a Bad Idea (as far as I understand it) because anna has sudo permissions.
So far I have researched three possible solutions that I don't really know very much about, and I would like to know which is the best route to take.
Develop locally on my computer and use apache-mpm-itk or suPHP to let PHP edit the files (I got that idea from this question on ServerFault).
Develop locally on my computer and rsync the files to my server with grunt-rsync, and somehow get rsync to set the ownership to www-data (another ServerFault thread helping here; a rough sketch of this follows the list).
Mount the project's server directory, which is owned by www-data, on my computer with SSHFS and then either edit the files on the server directly or copy them over from my local directory with grunt-copy.
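For the second option, I imagine the ownership part would look roughly like this (just a sketch: --chown needs rsync 3.1+, the receiving side must have enough privilege to change ownership, e.g. via --rsync-path="sudo rsync", and the paths and host name here are made up):
rsync -az --chown=www-data:www-data --rsync-path="sudo rsync" ./project/ anna@server:/var/www/project/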
What do you think: from a security and ease of use perspective, which is the best way? Or do you know an even better one?
Thank you for taking the time to read and think about this!
Anna~
I figured it out! I finally ended up reading about running PHP as CGI instead of as an Apache module, and learned that this would solve my permissions problem. Plus, as far as I understand it, there are no extra security precautions to take when I'm the only one working with it on my local computer.
In case someone comes across this who might find it helpful, here's what I did (basically following these instructions):
I installed php7.0-fpm
Edited /etc/apache2/sites-enabled/000-default.conf and put the following just before </VirtualHost>:
DirectoryIndex index.php
<LocationMatch "^(.*\.php)$">
ProxyPass fcgi://127.0.0.1:9000/var/www/html
</LocationMatch>
I activated the Apache module proxy_fcgi (via Webmin, which apparently does an automatic Apache restart)
In /etc/php/7.0/fpm/pool.d/www.conf I commented out the listen line and put another below it, like this:
; listen = /run/php/php7.0-fpm.sock
listen = 127.0.0.1:9000
I then restarted PHP-FPM with this command: /etc/init.d/php7.0-fpm restart (a little different from the instructions, I'm on Debian 9). After that, phpinfo() gave me the Server API "FPM/FastCGI".
And finally, I changed the user and group from www-data to anna in three places, twice in /etc/php/7.0/fpm/pool.d/www.conf and then once more in /usr/lib/tmpfiles.d/php7.0-fpm.conf (this last bit may be Ubuntu/Debian specific, my thanks go to Keith for a comment on StackExchange).
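For anyone following along, the two changed lines in www.conf end up looking roughly like this (assuming the default "www" pool; the tmpfiles.d file has its own user/group fields that get changed the same way):
user = anna
group = anna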
And that was it! :-)

Password-protect private stuff

I'm looking for a portable way (application, file format, library/API, CMS, DBMS, whatever) to deny read and write access to a collection of text files unless the user enters the password. This would be for personal use, i.e. the files would be stored on my computer, which I share with other people.
I've already tried with:
password-protected archives: but even a minor edit to one file requires unpacking and re-packing everything, which is quite annoying
a wiki backed by a DBMS, with a single password-protected account: but the DBMS root user will be able to read my stuff
... any ideas?
I use TrueCrypt to mount an encrypted volume on my PC. It's also available for Mac and Linux.
I mount the volume when I want to work with the files (the volume shows up as a new drive letter), and unmount it when done. The mount does not survive a reboot, so shutting down the computer guarantees that the volume will have to be re-mounted before it can be accessed again.
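On Linux this workflow can also be done from the command line; roughly like this (the volume path and mount point are made up, and the exact flags differ between TrueCrypt versions, so treat this as an assumption):
truecrypt --text /path/to/private.tc /media/private   # mount, prompts for the password
truecrypt --text --dismount /path/to/private.tc       # unmount when done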

Design Advice: Sending signals to daemons through HTTP

I'm using Apache on Ubuntu. I have a Perl script which basically reads the file names in a directory, then rewrites a text file, then sends a signal to a daemon. How can this be done as securely as possible through a web page?
Right now I can run the simplified CGI below, but not if I remove the comments. I'm looking for advice considering any of:
Using HTTP Requests?
How about Apache file permissions on the directory shown in code?
Is .htaccess enough to enable user/pass access to the CGI?
Should I use a database instead of writing to a file, and run a cron job that queries the DB with permission granted to write and send the signal?
Granting as few permissions as possible to the web server.
Should I set a VPN?
#!/usr/bin/perl -wT
use strict;
use CGI;
# my @fileList = </home/user/*>;  # read a directory listing
my $query = CGI->new();
print $query->header( "text/html" ),
$query->p( "FirstFileNameInArray" ),
# $query->p( $fileList[0] ),  # output the first file in the directory
$query->end_html;
Presumably, the error you're getting from the commented lines is a permission denied when trying to read the /home/user directory. The way to fix this is (surprise, surprise) to give the apache user[1] permission to read that directory. There are three primary approaches to doing this:
1. In most environments, there's really no good reason to hide all filenames within a user's home directory, so you could make the directory world-readable with chmod a+r /home/user. Unless you have a specific reason to prevent the general public from knowing the names of the files in the user's home directory, I'd tend to recommend this approach.
2. If you want to be a bit more restrictive about it, you could change /home/user to be owned by a group which the apache user belongs to (or add the apache user to the group that currently owns /home/user) and then set /home/user to be group-readable. This will make it accessible to all members of that group, but not the general public. (Both of the first two approaches are sketched at the end of this answer.)
3. If you need to have standard filesystem permissions applied to web access, you can look at configuring suexec so that individual requests can take on the permissions of users other than the apache user. This is normally the user who owns the code which is being run to handle the request (e.g., in this case, the user who owns your directory-listing script), but, if you're using htaccess-based authentication, it may be possible to configure suexec to decide which user's permissions to take on based on what user you log in as. (I avoid suexec myself, so I'm not 100% certain whether this can be done, and I have no idea how to go about it if it can.)
[1] ...by which I mean the user that apache is running as; depending on your system config, this user may be named "apache", "httpd", "nobody", "www-data", or something else entirely.
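A rough sketch of the first two approaches (the group name is made up, and www-data is only an assumption for the apache user -- see [1]):
# Approach 1: world-readable (add x too if the script must open/stat the files, not just list their names)
chmod a+rx /home/user
# Approach 2: group-based access
groupadd webshare                  # hypothetical group
usermod -aG webshare www-data      # add the apache user to it
chgrp webshare /home/user
chmod g+rx /home/user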

Restore full external ESENT backup

I've written code that creates full backups of my ESENT database using the JetBeginExternalBackup API.
Following the MSDN guidelines, I backed up every file returned by JetGetAttachInfo and JetGetLogInfo.
I made the backup, erased the old database, and copied the backup data to the database folder.
The DB engine was unable to start; the JetInit error code is "JET_errMissingLogFile".
I've checked the backup, it only contains the database file, and "<inst>XXXXX.log" log files. It lacks the current log file (I'm using circular logging, BTW).
Is there any way to restore such backup?
I don't want to use the JetExternalRestore API because it's too complex: I don't need to restore to another location, I don't understand why there are 3 input folders instead of 2, and I don't know what values to supply in the genLow and genHigh arguments.
I do need external backups: the ESENT database is used by ASP.NET on a remote server, and I'm backing it up over the Internet.
Or, maybe there's a way to retrieve the name of the current log file, and I should just add it to the backup?
Thanks in advance!
P.S. I've got no permission to spawn processes on my web server, so using eseutil.exe is not an option.
Unpack all backed up files to a single folder.
Take the name of your main database file, change the extension to .pat, and create a zero-length file with that name, e.g. database.pat.
After this simple step, call the JetRestoreInstance API; it will restore the backup from that folder.