Using Capistrano v3, how can I run all remote tasks through su as another user? I cannot find anything in the official documentation (http://capistranorb.com/)
For my use case, there is one SSH user and one user for every virtual host. User A connects to the server and should run all commands as user B.
This isn't much of an answer, but I don't think what you are trying to do is possible without code modifications. Here's why:
There are two primary cases where you would use a different user:
Deployment needs to run as a particular user because of file ownership.
Deployment needs to run with root permissions.
In the first case, you generally would simply tell Capistrano to ssh as that user.
In the second case, you would tell Capistrano to run certain commands with passwordless sudo (http://capistranorb.com/documentation/getting-started/authentication-and-authorisation/#authorisation).
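For reference, a passwordless sudo rule is just a sudoers entry along these lines (the user name and the command below are hypothetical placeholders, edited with visudo):
# /etc/sudoers.d/deploy -- hypothetical example
deploy ALL=(ALL) NOPASSWD: /usr/sbin/service nginx restart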
I can see a situation where only one user is available via SSH, but file ownership and permissions are based on another user, so you want to make su part of the workflow. I'm sure it is possible to do, but if I had to do it, I would be reading the Capistrano source code and overriding how shell commands are executed. This would be non-trivial.
If you have a specific command like rm which needs to run as a different user, you may be able to use the SSHKit.config.command_map[:rm] = 'sudo rm' mechanism to do it.
In a nutshell, I don't think what you are asking for is, on its face, easily done with Capistrano. If you have a specific use case, we may be able to offer suggestions as to how you may approach the problem differently which plays better to Capistrano's strengths.
Good luck!
Update
Looking further, the capistrano-rbenv gem has a mechanism by which it overrides the execution of all commands:
task :map_bins do
  SSHKit.config.default_env.merge!({ rbenv_root: fetch(:rbenv_path), rbenv_version: fetch(:rbenv_ruby) })
  rbenv_prefix = fetch(:rbenv_prefix, proc { "#{fetch(:rbenv_path)}/bin/rbenv exec" })
  SSHKit.config.command_map[:rbenv] = "#{fetch(:rbenv_path)}/bin/rbenv"
  fetch(:rbenv_map_bins).each do |command|
    SSHKit.config.command_map.prefix[command.to_sym].unshift(rbenv_prefix)
  end
end
https://github.com/capistrano/rbenv/blob/master/lib/capistrano/tasks/rbenv.rake#L17
You might have success with something similar.
In order to run all remote tasks through su as another user, I think you need to change ownership of the deployment directories to that user.
I'm assuming the deployment folder is /public_html/test.
sudo chown User:User /public_html/test   # chown changes the ownership so that user `User` can read/write
umask 0002
sudo chown User:User /public_html/test/releases
sudo chown User:User /public_html/test/shared
Hope this solves your issue!
Related
I've been having a bit of trouble trying to install PostgreSQL 14 for the first time.
I would like to apologize in advance if this question has been asked in the manner that I am about to ask it, but I do not think it has. If it has been, please direct me to the appropriate location!
I've done a fair amount of Googling on the matter, and all the information that I find seems to be rather fragmented, or I end up following a spaghetti trail of hyperlinks (a la-do-this-and-follow-this-other-link-with-more-information-than-you-need-to-understand-this-other-required-portion).
Personally, I don't want to jump around to 50 different locations on the web to try and conjure up a piecemeal solution that I believe works, only to be proven wrong later. I want to know what to do and why it works. I've tried reading the documentation, and have given up on it, because to me, it seems to assume that the server has already been set up by a database administrator.
Instead of articulating my problem directly (as I seem to be having more trouble than I would like by trying to do so), I believe it would be easier to articulate my problem indirectly by stating what my expectations would be after installing PostgreSQL for the first time.
So to start, I will mention that I'm running Ubuntu 18.04.6 LTS, and am installing PostgreSQL 14.1 with the following command:
sudo apt install postgresql-14
Before continuing, I would like to add a side note in advance, that I do not want suggestions for an alternative OS or install method. I just want to be able to get "up and running" in a common-sense fashion from this exact point.
Moving on, I know that the aforementioned command creates a *nix user called postgres.
From here, I can now indirectly state my problem using an outline of what my goals and expectations are immediately after installing the software via that command.
After installing PostgreSQL via apt, these are my expected goals:
I want any client to be able to connect to the database server from any computer where a route exists from the client to the server.
For the sake of simplicity with these stated goals, when it is directly or implicitly stated that I am trying to connect a client to the database server, I am making the assumption that the client is able to, at a minimum, ping the machine that the server is running on, and vice versa.
For now, I'm not completely worried about the database being accessible from the public Internet.
I expect to be able to access the database from any computer on my LAN, whether it is an actual LAN, or some sort of logical LAN (like a WAN or a VPN).
If I change the PostgreSQL password of the postgres user, I expect that any client logging into the database server via the postgres user will require the password.
This means if I want to change the password to some_password via \password postgres or ALTER USER postgres WITH PASSWORD 'some_password'; (I am assuming this is how you change the login password of a PostgreSQL user), then...
I expect running psql [-h host] -U postgres -W from any host...
That when I am prompted to enter the password...
I can only log in by entering the exact password of some_password.
Entering any other arbitrary text for the password should not allow me to log in.
I am adding this as a requirement because previous install attempts have shown me that this is NOT the case.
I expect to be able to create a PostgreSQL user account other than postgres (e.g. db_user) with a password and have it be subject to the same requirements as the postgres user.
That is, once the new account is given permission to log in, the same common-sense login requirements must be imposed: you can't get in if you don't have the correct username/password combination.
If the process to achieve the aforementioned can be explained in such a way that it can be understood with minimal mental friction, I would be extremely grateful.
Feel free to assume that my knowledge is on par with that of an undergraduate CS student who just completed their first year of university, and who also understands Linux filesystems and basic computer networking. I just want the answer to be as accessible to as many people as possible, as I am sure I'm not the only person who has struggled with installing PostgreSQL in spite of having a power user's level of computer literacy.
sudo apt install postgresql
sudo -u postgres psql
Set a password for this user with \password or the other method you mention
sudo vi /etc/postgresql/10/main/pg_hba.conf
Make the only uncommented nonblank line in this file be host all all all md5
sudo vi /etc/postgresql/10/main/postgresql.conf
Uncomment the listen_addresses line and set it to '*'
sudo service postgresql restart
When you make a new user, you should also make a new database with the same name as the user. Otherwise you will need to specify the database name when you try to log in with psql -U, such as psql -U newname -d postgres -h[hhh]. Should you actually be running 14, not 10, you will need to change the paths of the config files you edit accordingly.
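For example, to create a new user and a matching database so that a plain psql -U works, something like the following should do it (newname is just a placeholder):
sudo -u postgres createuser --pwprompt newname
sudo -u postgres createdb -O newname newname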
When I run singularity exec foo.simg whoami I get my own username from the host, unlike in Docker where I would get root or the user specified by the container.
If I look at /etc/passwd inside this Singularity container, an entry has been added for my host user ID.
How can I make a portable Singularity container if I don't know the user ID that programs will be run as?
I have converted a Docker container to a Singularity image, but it expects to run as a particular user ID it defines, and several directories have been chown'd to that user. When I run it under Singularity, my host user does not have access to those directories.
It would be a hack but I could modify the image to chmod 777 all of those directories. Is there a better way to make this image work on Singularity as any user?
(I'm running Singularity 2.5.2.)
There is actually a better approach than just chmod 777, which is to create a "vanilla" folder with your application data/conf in the image, and then copy it over to a target directory within the container, at runtime.
Since the copy will be carried out by the user actually running the container, you will not have any permission issues when working within the target directory.
You can have a look at what I did here to create a portable remote desktop service, for example: https://github.com/sarusso/Containers/blob/c30bd32/MinimalMetaDesktop/files/entrypoint.sh
This approach is compatible with both Docker and Singularity, but it depends on your use-case if it is a viable solution or not. Most notably, it requires you to run the Singularity container with --writable-tmpfs.
As a general comment, keep in mind that even if Singularity is very powerful, it behaves more as an environment than a container engine. You can make it work more container-like using some specific options (in particular --writable-tmpfs --containall --cleanenv --pid), but it will still have limitations (variable usernames and user ids will not go away).
First, upgrade to v3 of Singularity if at all possible (and/or bug your cluster admins to do it). v2 is no longer supported, and several versions <2.6.1 have security issues.
Singularity is actually mounting the host system's /etc/passwd into the container so that it can be run by any arbitrary user. Unfortunately, this also effectively clobbers any users that may have been created by a Dockerfile. The solution is as you thought, to chmod any files and directories to be readable by all. chmod -R o+rX /path/to/base/dir in a %post step is simplest.
Since the final image is read-only, allowing write permission doesn't do anything and it's useful to get into the mindset about only writing to files/directories that have been mounted to the image.
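For example, a minimal sketch of the %post step mentioned above, in a Singularity definition file (the path is whatever your application actually uses):
%post
    # make the application's directories readable and traversable by any user
    chmod -R o+rX /path/to/base/dir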
We have a read-only Postgresql database that should run in an Openshift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the postgres software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We are using only bash and sql scripts and deploy the database using flyway.
When the container starts, the entrypoint script simply starts up the database instance using the "pg_ctl" command and then performs an endless loop to keep the container running.
The Dockerfile has as the last command USER 26, where 26 is the id of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
Everything is working well in Docker.
In Openshift the container is started by a different user belonging to the root group, but not the root user nor the user 26. Actually Openshift ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the postgres instance, so when running the entrypoint it gets permission denied on the postgresql data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file so that it can use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in Openshift.
On the openshift configuration side we are not allowed to make changes like allowing sudo usage, or starting the container as root or postgres user.
Any ideas or help with this problem?
I am not an Openshift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not use the existing postgres user. Create a new user, and give that user group root. Then ensure that all directories/files that PostgreSQL needs to write to are owned by that user, but also have group root and are group-writable. When the container is then started up, it will run as an assigned user ID that is not in /etc/passwd, and so will still fall back to using group root. Because the directories/files are writable by group root, everything will still work. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
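For illustration, a minimal sketch of what those guidelines amount to in a Dockerfile (the data path is a placeholder; 26 is the numeric postgres UID from the question):
# give group root the same access as the owning user, per the OpenShift guidelines
RUN chgrp -R 0 /var/lib/pgsql/data && \
    chmod -R g=u /var/lib/pgsql/data
# run as a numeric, non-root user; OpenShift may substitute an arbitrary UID at runtime
USER 26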
The second option, if you have admin control of the cluster and your security team does not object to overriding the default security model, is to allow your image to run as the user ID it wants.
First create a new service account:
oc create serviceaccount runasnonroot
Next, grant that service account the ability to run as a non-root user ID of its choosing.
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires that USER in the image be set to an integer user ID and not postgres. Otherwise OpenShift can't verify that the image will run as a non-root user; if you used a user name instead of a user ID, you could be maliciously mapping that name to root.
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from being able to access needed files and directories. To successfully run a program on OpenShift Origin, the blog post provides two solutions; however, the first will not work for PostgreSQL, and the second has two disadvantages (explained in the notes):
Grant group write access to the directories used by the main program.
This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. The following are two methods for doing this:
Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry."
Make the UNIX password database file (/etc/passwd) have global write permissions in the image build so that the OpenShift user can be added to it in the S2I run script.
Each option has a disadvantage: (1) installing an extra package, and (2) making user accounts insecure.
The best solution is to build the docker image to run as the user OpenShift Origin will run the image as. I built this instructional image with it.
One additional problem to note is that, as the owner of the PostgreSQL process must be the owner of the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build and the ownership of the files must be changed after PostgreSQL has been set up for the reason explained in #2 below.
Here are the complete steps, with notes, for setting up PostgreSQL in the image build (a condensed sketch follows the list):
Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
This is required because the "initdb" executable must be executed by a user other than root and will need access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as because it is not in the system.
Switch to the non-root user.
This is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation) and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
Run the "initdb" executable.
Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
Switch back to the root user.
Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
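A condensed, hypothetical Dockerfile sketch of the steps above (paths, user names, role names, passwords and the final numeric UID are all placeholders):
# step 1: create the data directory and hand it to a non-root build user
RUN useradd builder && \
    mkdir -p /var/lib/pgsql/data && \
    chown builder /var/lib/pgsql/data
# step 2: switch to the non-root user
USER builder
# steps 3-4: run initdb, start the server on a private socket, set up roles/databases, stop it
RUN initdb -D /var/lib/pgsql/data && \
    pg_ctl -D /var/lib/pgsql/data -o "-k /tmp" -w start && \
    psql -h /tmp -d postgres -c "CREATE ROLE app LOGIN PASSWORD 'secret'" && \
    psql -h /tmp -d postgres -c "CREATE DATABASE app OWNER app" && \
    pg_ctl -D /var/lib/pgsql/data -w stop
# steps 5-6: switch back to root and give the files to the UID OpenShift Origin will run the image as
USER root
RUN chown -R 1000860000 /var/lib/pgsql/data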
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the postgres database during the image build, you must configure OpenShift policies to allow your container to start as the user that created the database during the image build. Use this option when the database must be filled with data and this operation takes so much time that it is inappropriate to do at container start; the entrypoint will then only start the already prepared database.
Option 2
Create your database when starting the container using the entrypoint script. Use this option when the database creation is fast enough to be done at container start.
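A rough sketch of such an entrypoint, assuming initdb and postgres are on the PATH and PGDATA points at a writable location (all names here are placeholders; the data-loading step stands in for your flyway/SQL scripts):
#!/bin/bash
set -e
export PGDATA=/var/lib/pgsql/data
# initialize and populate the database only on the first start
if [ ! -s "$PGDATA/PG_VERSION" ]; then
    initdb -D "$PGDATA"
    pg_ctl -D "$PGDATA" -w start
    # load schema and data here, e.g. via flyway or psql -f scripts
    pg_ctl -D "$PGDATA" -w stop
fi
# run the server in the foreground so the container stays up
exec postgres -D "$PGDATA"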
Option 3
See the last comment from Adrian, which seems to answer all the problems; however, I haven't had time to test it.
Thank you all for your contributions.
I recently configured a little server to test some services. Now, before an upgrade or installing new software, I want to make an exact copy of my files, preserving owners, groups and permissions, and also the symlinks.
I tried rsync to keep the owner and group, but on the machine that receives the copy they are lost.
rsync -azp -H /directorySource/ myUser@192.168.0.30:/home/myUser/myBackupDirectory
My intention is to do it with the / folder, to keep all my configuration just in case; I have 3 services that have their own users and may make modifications in folders outside their home directories.
In the destination folder everything appears owned by my destination user, whether I run the copy from the server or from the destination; it doesn't keep the users and groups! I created the same user, tried with sudo, and a friend even tried with a 777 folder :)
cp in theory does the same but doesn't work over ssh; anyway, I tried it on the server and got many errors. As I recall, the tar command also keeps permissions and owners, but I got errors because the server is in use and the restore process isn't fast. I also remember the magic dd command, but I made a big partition. rsync looked like the best option, both for the copy and for keeping the backup synchronized. I read that recent rsync versions work well with owners, but my package is already upgraded.
Does anybody have an idea how to do this, or what the normal process is to keep my server properly backed up, so that I can restore it just by making the partition again?
The services are taiga (a project manager platform), a git repository, a code reviewer, and so on, all working well with nginx on Ubuntu Server. I haven't looked at other backup methods because I thought rsync with a cron job would do the work.
Your command would be fine, but you need to run as root user on the remote end (only root has permission to set file owners):
rsync -az -H /directorySource/ root@192.168.0.30:/home/myUser/myBackupDirectory
You also need to ensure that you use rsync's -o option to preserve owners, and -g to preserve groups, but as these are implied by -a your command is OK. I removed -p because that's also implied by -a.
You'll also need root access, on the local end, to do the reverse transfer (if you want to restore your files).
If that doesn't work for you (no root access), then you might consider doing this using tar. A proper archive is probably the correct tool for the job, and will contain all the correct user data. Again, root access will be needed to write that back to the file-system.
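For example, a hedged sketch of the tar route, reusing the paths from your rsync command (the archive name is a placeholder):
# create the archive as root so ownership data is recorded, and stream it over ssh
sudo tar -cpzf - /directorySource | ssh myUser@192.168.0.30 'cat > /home/myUser/myBackupDirectory/backup.tar.gz'
# restore as root on the target filesystem; ownership is restored when extraction runs as root
sudo tar -xpzf backup.tar.gz -C /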
I have around 1000 servers on which I need to restart the SNMP service. Is there an easy method to do this via a script or a batch file?
Do you have any sort of collection of the IPs and the root users and passwords (or SSH keys)?
If so, you could use a for loop to cycle through them (implementation depends on how they're stored), select the username and password with regular-expression filtering or by field, and use expect to provide the password.
If you don't have a collection like that, it seems you'll have to build a database of them, and it may just be easier to do this manually, but it may be worth creating the database anyway in case you ever need to do this again.
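For example, a rough sketch assuming key-based SSH access as root and a hypothetical hosts.txt with one address per line (the service may be called snmpd rather than snmp depending on the distribution):
# loop over the hosts, restart the service, and record any failures
while read -r host; do
    ssh -o BatchMode=yes "root@$host" 'service snmpd restart' \
        || echo "failed: $host" >> failed_hosts.txt
done < hosts.txt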
You should have a look at the Ansible provisioning tool.
The steps would be something like this:
Install Ansible: sudo apt-get install ansible (on Ubuntu)
Define your server groups at /etc/ansible/hosts
[snmpservers]
myhostnames[01:10000].example.com
Restart the service on all servers
ansible snmpservers -m service -a "name=snmp state=restarted"