I have a permissions error on an Ubuntu host from a cron job set up to make a database backup using pgBackRest.
ERROR [041]: : unable to open /var/lib/postgresql/10/main/global/pg_control
The cron job is set up to run under my administrator account. The only option I see to fix this is to change the permissions on /var/lib/postgresql/10/main to let my admin account in, and I don't want to do that.
Clearly only the postgres user has access to this directory, and I found that it's not possible to set up a cron job using that user.
i.e.
postgres@host110:~$ crontab -e
You (postgres) are not allowed to use this program (crontab)
See crontab(1) for more information
What else can I do? There is no more information on this in the pgBackRest manual.
Only the PostgreSQL OS user (postgres) and its group are allowed to access the PostgreSQL data directory. See this code from the source:
/*
 * Check if the directory has correct permissions.  If not, reject.
 *
 * Only two possible modes are allowed, 0700 and 0750.  The latter mode
 * indicates that group read/execute should be allowed on all newly
 * created files and directories.
 *
 * XXX temporarily suppress check when on Windows, because there may not
 * be proper support for Unix-y file permissions.  Need to think of a
 * reasonable check to apply on Windows.
 */
#if !defined(WIN32) && !defined(__CYGWIN__)
    if (stat_buf.st_mode & PG_MODE_MASK_GROUP)
        ereport(FATAL,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("data directory \"%s\" has invalid permissions",
                        DataDir),
                 errdetail("Permissions should be u=rwx (0700) or u=rwx,g=rx (0750).")));
#endif
If the data directory allows the group in, the group will normally also have permissions for pg_control.
So you can let the user that runs pgBackRest in by giving it postgres's primary user group.
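A minimal sketch, assuming your admin account is named admin and the data directory from your question (note that the 0750 group-access mode checked in that source was introduced in PostgreSQL 11; older servers insist on 0700):

sudo usermod -aG postgres admin               # put admin in postgres's primary group
sudo chmod 750 /var/lib/postgresql/10/main    # let the group read and traverse the data directory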
postgres is allowed to create a crontab if the system is configured accordingly.
From man crontab:
Running cron jobs can be allowed or disallowed for different users. For this purpose, use the cron.allow and cron.deny files. If the cron.allow file exists, a user must be listed in it to be allowed to use cron. If the cron.allow file does not exist but the cron.deny file does exist, then a user must not be listed in the cron.deny file in order to use cron. If neither of these files exists, only the super user is allowed to use cron. Another way to restrict access to cron is to use PAM authentication in /etc/security/access.conf to set up users, which are allowed or disallowed to use crontab or modify system cron jobs in the /etc/cron.d/ directory.
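For example, to let postgres use crontab on a typical Ubuntu system (the file path can vary by distribution; a sketch):

echo postgres | sudo tee -a /etc/cron.allow
sudo -u postgres crontab -e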
If you are able to sudo -u postgres at the prompt, you can do it in your cron job, too.
Your question doesn't reveal which actual commands you are trying to run, but to run thiscommand as postgres, simply
sudo -u postgres thiscommand
If you have su but not sudo, the adaptation is minor but not entirely trivial:
su -c thiscommand postgres
With sudo you can set fine-grained limitations on what exactly you can do as another user, so in that sense, it's safer than full unlimited su.
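Put together for cron, a root-owned crontab entry might look like this (a sketch; the pgBackRest stanza name and the schedule are assumptions):

# run a pgBackRest backup as postgres every day at 02:00
0 2 * * * sudo -u postgres pgbackrest --stanza=main backup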
I'm looking for a way to back up a database through a LAN on a mounted drive on a workstation. Basically a Bash script on the workstation or the server to dump a single database into a path on that volume. The volume isn't mounted normally, so I'm not clear as to which box to put the script on, given username/password and mounted volume permissions/availability.
The problem I currently have is permissions on the workstation:
myfile="/volumes/Dragonfly/PG_backups/serverbox_PG_mydomain5myusername_$(date +%Y_%m_%d_%H_%M).sql"
pg_dump -h serverbox.local -U adminuser -w dbname > "$myfile"
Is there a syntax that I can provide for this? Read the docs and there is no provision for a password, which is kind of expected. I also don't want to echo the password and keep it in a shell script. Or is there another way of doing this using rsync after the backups are done locally? Cheers
First, note the pg_dump command you are using includes the -w option, which means pg_dump will not issue a password prompt. This is indeed what you want for unattended backups (i.e. performed by a script). But you just need to make sure you have authentication set up properly. The options here are basically:
Set up a ~/.pgpass file on the host the dump is running from. Based on what you have written, you should keep this file in the home directory of the server this backup job runs on, not somewhere on the mounted volume. Given the info in your example, the line in this file should look like:
serverbox.local:5432:database:adminuser:password
Remember to specify the database name that you are backing up! This was not specified in your example pg_dump command.
Fool with your Postgres server's pg_hba.conf file so that connections from your backup machine as your backup user don't require a password, but use something like trust or ident authentication. Be careful here of course, if you don't fully trust the host your backups are running on (e.g. it's a shared machine), this isn't a good idea.
Set environment variables on the server such as PGPASSWORD that are visible to your backup script. Using a ~/.pgpass file is generally recommended instead for security reasons.
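For example, a minimal sketch of the first (recommended) option, using the names from your example (the password is a placeholder):

echo 'serverbox.local:5432:dbname:adminuser:yourpassword' >> ~/.pgpass
chmod 600 ~/.pgpass    # libpq ignores ~/.pgpass unless its permissions are 0600 or stricter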
Or is there another way of doing this using rsync after the backups are done locally?
Not sure what you are asking here -- you of course have to specify credentials for pg_dump before the backup can take place, not afterwards. And pg_dump is just one of many backup options, there are other methods that would work if you have SSH/rsync access to the Postgres server, such as file-system level backups. These kinds of backups (aka "physical" level) are complementary to pg_dump ("logical" level), you could use either or both methods depending on your level of paranoia and sophistication.
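For completeness, a hedged sketch of a physical backup pulled from the workstation (this assumes adminuser has the REPLICATION privilege and pg_basebackup is installed there; a plain rsync of a running data directory is not safe on its own):

pg_basebackup -h serverbox.local -U adminuser -D /volumes/Dragonfly/PG_backups/base -Ft -z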
Got it to work with ~/.pgpass, pg_hba.conf on the server, and a script that included the TERM environment variable (xterm), and a path to pg_dump.
There is no login shell for the crontab, even as the current admin user, so it's running a bit blind.
We have a read-only PostgreSQL database that should run in an OpenShift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the postgres software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We are using only bash and SQL scripts, and we deploy the database using Flyway.
When the container starts, the entrypoint script simply starts up the database instance using the "pg_ctl" command, then performs an endless loop to keep the container running.
The Dockerfile has as the last command USER 26, where 26 is the id of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
Everything is working well in Docker.
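For reference, that entrypoint is roughly the following (a sketch; the data directory path is an assumption):

#!/bin/bash
pg_ctl -D /var/lib/pgsql/data -w start     # start the pre-built instance
while true; do sleep 3600; done            # keep the container running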
In OpenShift, the container is started by a different user that belongs to the root group but is neither the root user nor user 26. OpenShift effectively ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the postgres instance, so when the entrypoint runs it gets "permission denied" on the postgresql data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file to let it use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in OpenShift.
On the OpenShift configuration side, we are not allowed to make changes like allowing sudo usage or starting the container as the root or postgres user.
Any ideas or help with this problem?
I am not an OpenShift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not use the existing postgres user. Create a new user whose group is root. Then ensure that all directories and files that PostgreSQL needs to write to are owned by that user, but also have group root and are group-writable. When the container is started up, it will run as an assigned user ID that is not in /etc/passwd, so it falls back to using group root. Because the directories and files are writable by group root, everything still works. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
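In build terms that usually comes down to something like the following, run as root during the image build (a sketch; the path is an assumption, and since PostgreSQL itself refuses a group-writable data directory, group write is granted on the parent so the running UID can create the data directory at startup):

chgrp -R 0 /var/lib/pgsql        # group root owns the tree
chmod -R g+rwX /var/lib/pgsql    # and may read/write/traverse it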
The second option, if you have admin control of the cluster and your security team does not object to overriding the default security model, is to allow your image to run as the user ID it wants.
First create a new service account:
oc create serviceaccount runasnonroot
Next grant that service account the ability to run as non root user ID of its choosing.
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires you to use USER in the image with an integer user ID and not postgres. Otherwise OpenShift can't verify that it will run as a non-root user: if you used a user name instead of a user ID, it could be maliciously mapped to root.
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from being able to access needed files and directories. To successfully run a program on OpenShift Origin, the blog post provides two solutions, however, the first will not work for PostgreSQL and the second has two disadvantages (explained in the notes):
Grant group write access to the directories used by the main program.
This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. The following are two methods for doing this:
Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry."
Make the UNIX password database file (/etc/passwd) have global write permissions in the image build so that the OpenShift user can be added to it in the S2I run script.
Each option has a disadvantage: the first requires installing an extra package, and the second makes user accounts insecure.
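For illustration, the nss_wrapper approach typically looks like this in the container's startup script (a sketch; the library path and home directory are assumptions):

# fabricate a passwd entry for the arbitrary UID the platform assigned us
export NSS_WRAPPER_PASSWD=/tmp/passwd
export NSS_WRAPPER_GROUP=/etc/group
echo "postgres:x:$(id -u):0:PostgreSQL:/var/lib/pgsql:/bin/bash" > "$NSS_WRAPPER_PASSWD"
export LD_PRELOAD=/usr/lib64/libnss_wrapper.so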
The best solution is to build the docker image to run as the user OpenShift Origin will run the image as. I built this instructional image that way.
One additional problem to note: because the owner of the PostgreSQL process must own the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build, and the ownership of the files must be changed after PostgreSQL has been set up, for the reason explained in step 2 below.
Here are the complete steps, with notes, for setting up PostgreSQL in the image build (a shell sketch follows the list):
Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
This is required because the "initdb" executable must be executed by a user other than root and needs access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as, because that user is not in the system.
Switch to the non-root user.
This is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation) and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
Run the "initdb" executable.
Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
Switch back to the root user.
Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
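A shell sketch of those build steps (the build user name, the target UID 1001, and the paths are all assumptions):

# steps 1-2: create the data directory and switch to a non-root build user
useradd pgbuild
mkdir -p /var/lib/pgsql/data && chown pgbuild /var/lib/pgsql/data
# steps 3-4: initdb, then start the server, set up components, and stop it
su pgbuild -c 'initdb -D /var/lib/pgsql/data'
su pgbuild -c 'pg_ctl -D /var/lib/pgsql/data -w start && psql -d postgres -c "CREATE ROLE app LOGIN" && pg_ctl -D /var/lib/pgsql/data -w stop'
# steps 5-6: back as root, hand everything to the UID OpenShift Origin will use
chown -R 1001:0 /var/lib/pgsql/data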
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the postgres database during the image build, you must configure OpenShift policies to allow starting your container as the user that created the database during the image build. Use this option when the database must be filled with data and this operation takes so long that it is inappropriate at container start; the entrypoint then only starts the already prepared database.
Option 2
Create your database when starting the container using the entrypoint script. Use this option when the database creation is fast enough to be done at container start.
Option 3
See the last comment from Adrian, which seems to address all the problems, although I haven't had time to test it.
Thank you all for your contributions.
Using Capistrano v3, how can I run all remote tasks through su as another user? I cannot find anything in the official documentation (http://capistranorb.com/)
For my use case, there is one SSH user and one user for every virtual host. User A connects to the server and should run all commands as user B.
This isn't much of an answer, but I don't think what you are trying to do is possible without code modifications. Here's why:
There are two primary cases where you would use a different user:
Deployment needs to run as a particular user because of file ownership.
Deployment needs to run with root permissions.
In the first case, you generally would simply tell Capistrano to ssh as that user.
In the second case, you would tell Capistrano to run certain commands with passwordless sudo (http://capistranorb.com/documentation/getting-started/authentication-and-authorisation/#authorisation).
I can see a situation where only one user is available via SSH, but file ownership and permissions is based on another user, so you want to make su part of the workflow. I'm sure it is possible to do, but if I had to do it, I would be reading the source code of Capistrano and overriding how shell commands are executed. This would be non-trivial.
If you have a specific command like rm which needs to run as a different user, you may be able to use the SSHKit.config.command_map[:rm] = 'sudo rm' mechanism to do it.
In a nutshell, I don't think what you are asking for is, on its face, easily done with Capistrano. If you have a specific use case, we may be able to offer suggestions as to how you may approach the problem differently which plays better to Capistrano's strengths.
Good luck!
Update
Looking further, the capistrano-rbenv gem has a mechanism by which it overrides the execution of all commands:
task :map_bins do
  SSHKit.config.default_env.merge!({ rbenv_root: fetch(:rbenv_path), rbenv_version: fetch(:rbenv_ruby) })
  rbenv_prefix = fetch(:rbenv_prefix, proc { "#{fetch(:rbenv_path)}/bin/rbenv exec" })
  SSHKit.config.command_map[:rbenv] = "#{fetch(:rbenv_path)}/bin/rbenv"
  fetch(:rbenv_map_bins).each do |command|
    SSHKit.config.command_map.prefix[command.to_sym].unshift(rbenv_prefix)
  end
end
https://github.com/capistrano/rbenv/blob/master/lib/capistrano/tasks/rbenv.rake#L17
You might have success with something similar.
To run all remote tasks through su as another user, I think you need to change the ownership of the deployment directories to that user.
I'm assuming that the deployment folder is /public_html/test.
sudo chown User:User /public_html/test    # chown changes the ownership so that user "User" can read/write
umask 0002
sudo chown User:User /public_html/test/releases
sudo chown User:User /public_html/test/shared
Hope this solves your issue!
I'm trying to do a simple postgres backup with crontab. Here is the command I use:
# m h dom mon dow user command
49 13 * * * postgres /usr/bin/pg_dump store | bzip2 > /home/backups/postgres/$(date +"\%Y-\%m-\%d")_store.sq.bz2
A backup file is created but it is very small (looks like 14 bytes).
I can run this command just fine in the terminal (with a filesize that matches my db).
The log files don't mention any errors (grep CRON /var/log/syslog). Any idea what might be off?
The key to solving this is to realize that running the 'same' command in bash and running it via cron isn't the same thing!
For example, when run via cron, the obvious defaults (.bash_profile, .pgpass, default binary paths) aren't the same, and thus what works in bash may not work under cron.
As a checklist:
Ensure bzip2 is replaced with its complete path (e.g. /usr/bin/bzip2 on CentOS / RHEL).
Ensure that the store database is readable by the cron command (e.g. adding -U postgres would be a good addition). If the DB login depends on a .pgpass file, it may not work under cron, whose environment differs; in such scenarios, ensure pg_hba.conf is configured for this purpose (e.g. you could allow 'trust' authentication for a specific, known DB / machine / user combination).
Ensure that /home/backups/postgres/... is writable by the job's user, for obvious reasons.
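Putting that together, a corrected entry might look like this (a sketch; verify the binary paths on your system with `which pg_dump bzip2`):

# m h dom mon dow user command
49 13 * * * postgres /usr/bin/pg_dump -U postgres store | /usr/bin/bzip2 > /home/backups/postgres/$(date +"\%Y-\%m-\%d")_store.sql.bz2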
Following my previous question, I'm now trying to execute a batch file through NSIS code in order to finish setting up the postgres installation after it is unzipped. The batch file contains the commands for initializing the database, but it fails because of permission restrictions. I am on a Win7 x64 PC. My user account is the administrator and I start the Setup.exe with the Run as administrator option. This is the error I get:
C:\Program Files (x86)\Poker Assistant>cd "pgsql\bin"
C:\Program Files (x86)\Poker Assistant\pgsql\bin>initdb -U postgres -A password --pwfile "pwd.txt" -E utf8 -D "..\data"
The files belonging to this database system will be owned by user "Mandarinite".
This user must also own the server process.
The database cluster will be initialized with locale "Bulgarian_Bulgaria.1251".
initdb: could not find suitable text search configuration for locale "Bulgarian_Bulgaria.1251"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
creating directory ../data ... initdb: could not create directory "../data": Permission denied
EDIT: After tinkering a little more with the installer, I got to the root of the problem. I cannot in any way execute the following command when the installation is in the Program Files folder:
initdb -U postgres -A password --pwfile "pwd.txt" -E utf8 -D "..\data"
I tried from a .bat file. I tried from a .cmd file. I tried manually from the Command Prompt. I tried starting as Administrator. All attempts resulted in the "Permission denied" error.
EDIT 2: I did not find any way to fix the problem, so I made a workaround: I now distribute postgres with its data directory already initialized, so I only need to create the service and start it.
I just realised what the issue here is.
If you run postgres as Administrator, it uses a special Windows API call to drop permissions (acquire a restricted token), so that it runs without full Administrator rights for security. See PostgreSQL utilities and restricted tokens on Windows.
I suspect that what's happening here is that initdb isn't creating the target data directory and setting its permissions before doing that, so it drops permissions and then doesn't have the permissions to create the data directory.
To work around it, simply md ..\data to create the empty directory and then use icacls.exe to grant appropriate permissions before you run initdb. Or, even better, store the data in a more appropriate place like %PROGRAMDATA%\MyApp\pgdata or whatever; application data should not go in %PROGRAMFILES%.
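A hedged sketch of that workaround, run from an elevated prompt in pgsql\bin (granting full control to the installing user is an assumption; tighten the ACL as appropriate):

REM create the empty data directory and let the current user own its contents
md ..\data
icacls ..\data /grant "%USERNAME%:(OI)(CI)F"
initdb -U postgres -A password --pwfile "pwd.txt" -E utf8 -D "..\data"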