Cloud init execution by custom user - cloud-init

As far as I know, cloud-init is executed by the root user by default. Is it possible to have it run as a custom user with sudo access instead?
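cloud-init itself always runs as root, but individual commands can be dropped to another account with sudo. A minimal user-data sketch, assuming a hypothetical deployuser and bootstrap script:

#cloud-config
users:
  - name: deployuser
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
runcmd:
  - [ sudo, -u, deployuser, -H, /home/deployuser/bootstrap.sh ]

cloud-init's own modules still run as root; the sudo -u in runcmd is what drops that particular command to the unprivileged account.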

Related

Cloud SQL Server on GCP is recreated if I change the root password in Terraform

I'm using Terraform to create a Cloud SQL Server instance and specify its root password. When I use Terraform to change the root password for the same instance, it first deletes the instance and then creates a new one with the new root password, which is not how it should work.
However, when I use the console or the gcloud API to perform the same action, it modifies the existing instance's password.
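For reference, the non-destructive path mentioned above is the password update exposed by the console and gcloud; a sketch, assuming a hypothetical instance name and the default sqlserver admin account:

gcloud sql users set-password sqlserver \
    --instance=my-sqlserver-instance \
    --prompt-for-password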

Solve a permission problem found when running pgbackrest as a cron job

I have a permissions error on an Ubuntu host from the cron job set up to make a database backup using pgbackrest.
ERROR [041]: : unable to open /var/lib/postgresql/10/main/global/pg_control
The cron job is set up to run under my administrator account. The only option I see to fix this is to change the directory permissions on /var/lib/postgresql/10/main to allow my admin account in, and I don't want to do that.
Clearly only the postgres user has access to this directory, and I found that it's not possible to set up a cron job using that user, i.e.:
postgres@host110:~/$ crontab -e
You (postgres) are not allowed to use this program (crontab)
See crontab(1) for more information
What else can I do? There is no more information on this in the pgbackrest manual.
Only the PostgreSQL OS user (postgres) and its group are allowed to access the PostgreSQL data directory. See this code from the source:
/*
* Check if the directory has correct permissions. If not, reject.
*
* Only two possible modes are allowed, 0700 and 0750. The latter mode
* indicates that group read/execute should be allowed on all newly
* created files and directories.
*
* XXX temporarily suppress check when on Windows, because there may not
* be proper support for Unix-y file permissions. Need to think of a
* reasonable check to apply on Windows.
*/
#if !defined(WIN32) && !defined(__CYGWIN__)
    if (stat_buf.st_mode & PG_MODE_MASK_GROUP)
        ereport(FATAL,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("data directory \"%s\" has invalid permissions",
                        DataDir),
                 errdetail("Permissions should be u=rwx (0700) or u=rwx,g=rx (0750).")));
#endif
If the data directory allows the group in, the group will normally also have permissions for pg_control.
So you can allow that pgBackRest user in if you give it postgres' primary user group.
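A rough sketch of that, assuming a hypothetical pgbackrest OS user and a cluster initialized with group access (initdb --allow-group-access, PostgreSQL 11+, which is what makes the 0750 mode in the quoted check legal):

# let the backup user read the data directory via postgres's primary group
sudo usermod -aG postgres pgbackrest
# the data directory itself must be 0750, per the check quoted above
sudo chmod 0750 "$PGDATA"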
postgres is allowed to create a crontab if the system is configured accordingly.
From man crontab:
Running cron jobs can be allowed or disallowed for different users. For this purpose, use the cron.allow and cron.deny files. If the cron.allow file exists, a user must be listed in it to be allowed to use cron. If the cron.allow file does not exist but the cron.deny file does exist, then a user must not be listed in the cron.deny file in order to use cron. If neither of these files exists, only the super user is allowed to use cron. Another way to restrict access to cron is to use PAM authentication in /etc/security/access.conf to set up users, which are allowed or disallowed to use crontab or modify system cron jobs in the /etc/cron.d/ directory.
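So on a system restricted via cron.allow (typically /etc/cron.allow on Ubuntu), letting postgres in is a one-liner sketch:

echo postgres | sudo tee -a /etc/cron.allow
# after which the postgres user can manage its own crontab
sudo -u postgres crontab -e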
If you are able to sudo -u postgres at the prompt, you can do it in your cron job, too.
Your question doesn't reveal which actual commands you are trying to run, but to run thiscommand as postgres, simply
sudo -u postgres thiscommand
If you have su but not sudo, the adaptation is minor but not entirely trivial:
su -c thiscommand postgres
With sudo you can set fine-grained limitations on what exactly you can do as another user, so in that sense, it's safer than full unlimited su.
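Put together, a crontab entry in the administrator account could look like the sketch below (the stanza name is hypothetical, and sudo has to be passwordless for the admin user so that -n does not fail under cron):

# weekly full backup at 03:30 on Sundays, executed as postgres
30 3 * * 0 sudo -n -u postgres pgbackrest --stanza=main --type=full backup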

PostgreSQL in OpenShift won't execute the entrypoint and cannot start the database

We have a read-only PostgreSQL database that should run in an OpenShift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the PostgreSQL software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We are using only bash and sql scripts and deploy the database using flyway.
When starting the container, the entrypoint script simply starts up the database instance using the "pg_ctl" command and then performs an endless loop to keep the container running.
The Dockerfile has USER 26 as its last command, where 26 is the id of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
Everything is working well in Docker.
In OpenShift the container is started by a different user belonging to the root group, but it is neither the root user nor user 26. OpenShift actually ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the postgres instance, so when running the entrypoint it gets permission denied on the PostgreSQL data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file to allow it to use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in OpenShift.
On the OpenShift configuration side we are not allowed to make changes like allowing sudo usage or starting the container as the root or postgres user.
Any idea or help with this problem?
I am not an OpenShift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not use the existing postgres user. Create a new user whose group is root. Then ensure that all directories and files that PostgreSQL needs to write to are owned by that user, but also have group root and are group-writable. When the container is started, it will run under an arbitrary assigned user ID that is not in /etc/passwd, and so it will still fall back to group root. Because the directories and files are writable by group root, everything will still work. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
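A minimal Dockerfile fragment following that guideline, with the data and socket directories as assumptions:

# give the PostgreSQL directories to group root and mirror the owner's permissions
RUN chgrp -R 0 /var/lib/pgsql /var/run/postgresql && \
    chmod -R g=u /var/lib/pgsql /var/run/postgresql
# the numeric USER is only a hint; at runtime OpenShift substitutes an arbitrary UID
# that still belongs to group root, so the group permissions are what matter
USER 26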
The second option, if you have admin control of the cluster and your security team does not object to overriding the default security model, is to allow your image to be run as the user ID it wants.
First create a new service account:
oc create serviceaccount runasnonroot
Next, grant that service account the ability to run as a non-root user ID of its choosing.
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires that USER in the image be an integer user ID and not postgres. Otherwise OpenShift can't verify that the image will run as a non-root user, because a user name, unlike a numeric ID, could be maliciously mapped to root.
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from accessing the files and directories they need. To successfully run a program on OpenShift Origin, the blog post provides two solutions; however, the first will not work for PostgreSQL, and the second comes in two variants, each with its own disadvantage (explained in the notes):
Grant group write access to the directories used by the main program.
This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. The following are two methods for doing this:
Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry."
Make the UNIX password database file (/etc/passwd) have global write permissions in the image build so that the OpenShift user can be added to it in the S2I run script.
Each method has a disadvantage: the first requires installing an extra package, and the second makes user accounts insecure. (A minimal sketch of the nss_wrapper method follows this list.)
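For reference, the nss_wrapper method is usually wired into the entrypoint roughly as follows (the library path, template location and use of envsubst are assumptions; on RHEL the package is nss_wrapper):

# fabricate a passwd entry for the arbitrary UID OpenShift assigned to this container
export USER_ID=$(id -u)
export GROUP_ID=$(id -g)
# passwd.template contains a line such as:
# postgres:x:${USER_ID}:${GROUP_ID}:PostgreSQL:/var/lib/pgsql:/bin/bash
envsubst < /opt/app-root/passwd.template > /tmp/passwd
export NSS_WRAPPER_PASSWD=/tmp/passwd
export NSS_WRAPPER_GROUP=/etc/group
export LD_PRELOAD=/usr/lib64/libnss_wrapper.so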
The best solution is to build the docker image to run as the user OpenShift Origin will run the image as. I built this instructional image with it.
One additional problem to note is that, as the owner of the PostgreSQL process must be the owner of the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build and the ownership of the files must be changed after PostgreSQL has been set up for the reason explained in #2 below.
Here are the complete steps with notes for setting up PostgreSQL in the image build:
Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
This is required because the "initdb" executable must be executed by a user other than root and will need access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as because it is not in the system.
Switch to the non-root user.
This is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation) and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
Run the "initdb" executable.
Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
Switch back to the root user.
Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
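Assembled into a rough Dockerfile sketch (the user name, UID, paths and the role/database are illustrative assumptions, not taken from the original image):

# step 1: create the data directory and hand it to a build-time user
RUN useradd -u 1001 builder && \
    mkdir -p /var/lib/pgsql/data && \
    chown -R builder /var/lib/pgsql/data
# steps 2-4: initialize and provision the cluster as that user, then stop it
USER builder
RUN initdb -D /var/lib/pgsql/data && \
    pg_ctl -D /var/lib/pgsql/data -o "-c unix_socket_directories=/tmp" -w start && \
    psql -h /tmp -d postgres -c "CREATE ROLE app LOGIN" && \
    psql -h /tmp -d postgres -c "CREATE DATABASE app OWNER app" && \
    pg_ctl -D /var/lib/pgsql/data -m fast -w stop
# steps 5-6: switch back to root and hand everything over to the UID the image will run as
USER root
RUN chown -R 26 /var/lib/pgsql/data
USER 26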
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the postgres database during the image build, you must configure OpenShift policies to allow starting your container as the user that created the database during the image build. Use this option when the database must be filled with data and this operation takes so much time that it is inappropriate to do at container start. The entrypoint will then only start the already prepared database.
Option 2
Create your database when starting the container using the entrypoint script. Use this option when the database creation is fast enough to be done at container start.
Option 3
See the last comment from Adrian, which seems to answer all the problems; however, I didn't get the time to test it.
Thank you all for your contributions.

Run tasks as another user

Using Capistrano v3, how can I run all remote tasks through su as another user? I cannot find anything in the official documentation (http://capistranorb.com/)
For my use case, there is one SSH user and one user for every virtual host. User A connects to the server and should run all commands as user B.
This isn't much of an answer, but I don't think what you are trying to do is possible without code modifications. Here's why:
There are two primary cases where you would use a different user:
Deployment needs to run as a particular user because of file ownership.
Deployment needs to run with root permissions.
In the first case, you generally would simply tell Capistrano to ssh as that user.
In the second case, you would tell Capistrano to run certain commands with passwordless sudo (http://capistranorb.com/documentation/getting-started/authentication-and-authorisation/#authorisation).
I can see a situation where only one user is available via SSH, but file ownership and permissions are based on another user, so you want to make su part of the workflow. I'm sure it is possible to do, but if I had to do it, I would be reading the source code of Capistrano and overriding how shell commands are executed. This would be non-trivial.
If you have a specific command like rm which needs to run as a different user, you may be able to use the SSHKit.config.command_map[:rm] = 'sudo rm' mechanism to do it.
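For example, a rough sketch of that idea in deploy.rb, assuming the other account is called appuser and only rm needs remapping:

# replace the rm invocation Capistrano generates with one that switches user first
SSHKit.config.command_map[:rm] = 'sudo -u appuser rm'
# or, alternatively, prepend a prefix instead of replacing the whole command
SSHKit.config.command_map.prefix[:rm].unshift('sudo -u appuser')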
In a nutshell, I don't think what you are asking for is, on its face, easily done with Capistrano. If you have a specific use case, we may be able to offer suggestions as to how you may approach the problem differently which plays better to Capistrano's strengths.
Good luck!
Update
Looking further, the capistrano-rbenv gem has a mechanism by which it has overridden the execution of all commands:
task :map_bins do
  SSHKit.config.default_env.merge!({ rbenv_root: fetch(:rbenv_path), rbenv_version: fetch(:rbenv_ruby) })
  rbenv_prefix = fetch(:rbenv_prefix, proc { "#{fetch(:rbenv_path)}/bin/rbenv exec" })
  SSHKit.config.command_map[:rbenv] = "#{fetch(:rbenv_path)}/bin/rbenv"
  fetch(:rbenv_map_bins).each do |command|
    SSHKit.config.command_map.prefix[command.to_sym].unshift(rbenv_prefix)
  end
end
https://github.com/capistrano/rbenv/blob/master/lib/capistrano/tasks/rbenv.rake#L17
You might have success with something similar.
In order to run all remote tasks through su as another user, I think you need to change the ownership for that user.
I'm assuming that the deployment folder is /public_html/test.
sudo chown User:User /public_html/test        # chown changes the ownership so that the User account can read/write
umask 0002
sudo chown User:User /public_html/test/releases
sudo chown User:User /public_html/test/shared
Hope this will solve your issue!

run capistrano 3 custom setup scripts as root once

I've got Capistrano 3 running perfectly with passwordless deploy as a non-root user.
What I'm trying to do now is set up an install script that installs the upstart service and the sudoers.d file and installs some dependencies on the server,
so that I could install a new server by simply entering the user and host in the production.rb file and running cap production setupserver.
The problem is that the setup scripts I've created need to be run as root.
But since it's a one-time thing, I'd simply like to ask the user for the root password and run a couple of tasks on the server.
The as :root command doesn't work since it uses su -c.
I could ask for the password as demonstrated here:
http://capistranorb.com/documentation/faq/how-can-i-get-capistrano-to-prompt-for-a-password/
Any suggestions on how to override the user specified in the production.rb file?
And how to pass the password that was asked for?
The way I finally solved it was by adding another role for the same server.
role :app, 'theappuser@myserver'
role :install, 'root@myserver', :no_release => true
And then doing something like this
desc 'Install nginx'
task :install do
  on roles(:install) do
    require 'erb'
    template = ERB.new(File.new('deploy/templates/nginx.conf.erb').read).result(binding)
    upload! StringIO.new(template), "/etc/nginx/conf.d/#{fetch(:domain)}.conf"
  end
  on roles(:app) do
    invoke 'nginx:restart'
  end
end
If I understand you correctly, this should solve your problem by making the user a variable:
current_user = Capistrano::CLI.ui.ask "Enter deploy user"
if current_user.present?
  set :user, current_user
end
It asks for the user before running the command and assigns it to :user in production.rb, so you can control which user is used during the setup.
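Note that Capistrano::CLI is the Capistrano 2 API; with Capistrano 3 the built-in ask helper achieves the same effect, roughly as follows (the variable name is illustrative):

# in config/deploy.rb or config/deploy/production.rb
ask :deploy_user, 'root'                 # prompts for deploy_user the first time it is fetched
set :user, -> { fetch(:deploy_user) }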