Capistrano deployment with common user

I'm trying to set up Capistrano to do our deployments, but I've stumbled upon what seems to be a common assumption among Capistrano users: that the user you SSH into the remote host as has permission to write to the deployment directory.
Here, administrators are ordinary users with a single distinction: they can sudo. At first I thought that would be enough, since Capistrano has some sudo-related configuration options, but it seems that's not the case after all.
Is there a way around this? Creating a user shared by everyone doing deployment is not an acceptable solution.
Edit: to make it clear, no deploy action should happen without calling sudo; that's the gateway that checks whether the user is allowed to deploy or not, and it should be a mandatory checkpoint.
The presently accepted answer does not meet that criterion. It works around sudo by granting extra permissions to the user. I'm accepting it anyway, because I've come to the conclusion that Capistrano is fundamentally broken in this regard.

I assume you are deploying to a Linux distro. The easiest way to resolve your issue is to create a group, say, deployers, and add each user who should have the permissions to deploy to that group. Once the group is created and the users are in the group, change the ownership and permissions on the deployment path.
Depending on the distro, the syntax will vary slightly. Here it is for Ubuntu/Debian:
Create the group:
$ sudo groupadd deployers
Add users to group:
$ sudo usermod -a -G deployers daniel
The last argument there is the username.
Next, update the ownership of the deployment path:
$ sudo chown -R root:deployers /deploy/to/path/
The ownership syntax is user:group. Here I am assuming that the user that currently owns the path is root; update it to whichever user should own the directory.
Finally, change the permissions on the deployment path:
$ sudo chmod -R 2775 /deploy/to/path/
That gives the owner and the deployers group read, write, and execute permission on everything beneath /deploy/to/path (group members need the execute bit to enter directories, so a mode like 0766 would lock them out), and the setgid bit makes newly created files inherit the group.
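The scheme can be sketched on a throwaway directory, standing in for the real deploy path, with no sudo needed (the 2775 mode with setgid is one common choice for shared deploy trees, not necessarily what the original answer intended):

```shell
# Scratch directory in place of /deploy/to/path:
deploy=$(mktemp -d)
chmod 2775 "$deploy"     # owner+group rwx; setgid bit so new entries inherit the group
stat -c '%a' "$deploy"   # prints 2775
rm -r "$deploy"
```

The setgid bit matters once several people write to the same tree: files created by any deployer stay owned by the deployers group rather than the creator's primary group.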


docker-compose pull gives either a gpg error or a permissions error when I attempt to use it with or without sudo

Hello everyone, I hope someone can help answer my question.
I am joining a project in which I have to use various Docker containers. I was told that I just needed to use docker-compose to pull down all the necessary images. I tried this and got two different errors, depending on whether I used sudo or not. My machine runs Ubuntu 18.04.4 LTS (Bionic Beaver).
I have docker-engine installed according to the installation instructions for Bionic on the GitHub page, and docker-compose is likewise installed according to its instructions. I did not create a "docker" group, since I have sudo access.
We have two repos that I have to log in to before I can do anything. In order to prevent my passwords from being stored unencrypted in config.json, I followed this guide to set up a secure credential store:
https://www.techrepublic.com/article/how-to-setup-secure-credential-storage-for-docker/
However, rather than asking me for the password and/or passphrase mentioned in this article, the login process makes me enter the actual passwords to the repos. So, the secure credential store may not be working, which might be causing the problem.
At any rate, once I log in and the two commands show login succeeded, I then try to do a
docker-compose pull
on the repos. When I do
sudo docker-compose pull
I get this final error:
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-pass exited with "exit status 2: gpg: WARNING: unsafe ownership on homedir '/home/myuser/.gnupg'\ngpg: decryption failed: No secret key".')
An ls of the .gnupg directory shows:
myuser@myhost$ ls -lA ~ | grep gnupg
drwx------ 4 myuser myuser 226 Feb 9 13:35 .gnupg
gpg --list-secret-keys shows my keypair when I run it as myuser.
I am assuming that, because I am running under sudo, the user trying to access this directory is root, not myuser, and so it is failing. However, if I leave off the sudo:
docker-compose pull
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
I am guessing that this is because my normal user doesn't have the ability to connect to the docker daemon's Unix socket.
So, how do I make these play together? Is the answer to add a docker group so that the command still runs as myuser and not as root? or is there another way to do this?
Also, why is my credential store not asking me for the password set by docker-credential-pass or the GPG passphrase? I suspect these two are related. Perhaps the pull is trying to send my authentication tokens over again and can't because it doesn't have access to the secure credentials store.
All of the above are guesses. Does anyone know what is going on here?
Thanking you in advance,
Brad
I just wanted to follow up with a solution to this question that worked for me.
Firstly, you need to add your user to the docker group that was created during docker-engine's installation.
sudo usermod --append --groups docker your_user_name
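One gotcha worth noting (this is general Linux behavior, not something from the original answer): group membership is read at login, so the new group only shows up in sessions started after the change. Log out, log back in, and check:

```shell
# After re-logging in, "docker" should appear among the group names:
id -nG    # prints the current user's groups
```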
Because I had already used sudo to try this, there were a few files that ended up being created by root.
So, you have to chown a few things.
sudo chown your_user_name:your_group_name ~/.docker/config.json
Note that for the group name I used
docker
but I'm not sure if that's necessary.
Then, there were files inside the ~/.password-store directory that needed to be changed.
sudo chown -R your_user_name:your_group_name ~/.password-store
Most of these files are already owned by you, but the recorded credentials are not.
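A quick way to confirm the chowns took effect is to stat the owner of each path; sketched here on a scratch file (point it at ~/.docker/config.json and ~/.password-store in practice):

```shell
f=$(mktemp)               # stand-in for ~/.docker/config.json
stat -c '%U %G' "$f"      # should print your user and group, not root
rm "$f"
```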
Then, the magic that fixed it all. From
https://ask.csdn.net/questions/5153956
you have to do this.
export GPG_TTY=$(tty)
and it is this last step that makes gpg work.
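To make the export survive new shells, append it to your shell startup file (assuming bash; adjust for zsh or others):

```shell
# Persist the GPG_TTY fix for future sessions:
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc
grep 'GPG_TTY' ~/.bashrc    # confirm the line landed
```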
Then, you can log in to your repos if you have to without using sudo
docker login -u repo_user_name your_repo_host
and then log in with your repo password.
Note that I don't know why you have to use the repo password instead of using the stored credentials.
Once you log in, you should be able to do a
docker-compose pull
without sudo
from the directory where you want the containers to be placed.
Note that you will probably have to provide your GPG passphrase at first. I'm not sure about this because I had already unlocked the key by following the steps in the above link to check to see if docker-credential-pass had the right credential store password stored.
and that should do it.

sudo access to install confluent / kafka platform

Looking into the Confluent docs for installation, the installation commands seem to need sudo permission. I have a couple of questions regarding this:
Is it sudo root privileges that are needed here, or should we be sudo'ing to some specific confluent user like cp-kafka to install the platform? I presume we need sudo root privileges for install.
Will the platform create all the necessary service user accounts for each of the individual components like Kafka, ZooKeeper etc? Or should they be created upfront and kept ready before installation is initiated?
What should be the user group that confluent needs / creates?
Thanks
should we be sudo'ing to some specific confluent user like cp-kafka to install the platform?
So, to clarify, "sudo to a user" is not really a phrase in Linux; su is the command to switch users. If you do not currently have sudo privileges, you will need to actually log out and log back in as the other user. This can also be done with su from the current account (you will be prompted for that user's password), but not with sudo su.
Those users do not exist prior to installation, and installing the software is typically just sudo yum install, for example, assuming your current user is allowed to do so (via /etc/sudoers).
Will the platform create all the necessary service user accounts for each of the individual components like Kafka, ZooKeeper etc?
It should, yes. Maybe except a Zookeeper service user, if I recall correctly because Zookeeper is "installed" within the Kafka directory, owned by cp-kafka.
What should be the user group that confluent needs / creates?
Nothing needs to be created up front. The scripts will create cp-kafka, cp-schema-registry, cp-kafka-connect, cp-ksql, etc. for each component.
For easy installation on a cluster, the Ansible playbooks are recommended.
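After installation, the created service accounts can be listed from the passwd database (the cp- prefix comes from the answer above; on a machine without Confluent installed this simply reports nothing found):

```shell
# List Confluent service accounts, if any exist on this host:
getent passwd | grep '^cp-' || echo "no cp-* service accounts found"
```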

Security loopholes in adding 666 permissions to /var/run/docker.sock

I was not able to build my Dockerfile via Jenkins until I added 666 permissions to /var/run/docker.sock. Now, I understand that this is more secure than adding the 'jenkins' user to the sudoers list. However:
Is there still a better way ?
What are the ways in which this extra permission could be used to my disadvantage ?
What are the ways in which this extra permission could be used to my disadvantage ?
You have given permission for any user on the machine to become root without any password.
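To spell that out: mode 666 is read/write for owner, group, and everyone else, so any local account can write to the socket, and anything that can write to the Docker socket can start a privileged container and become root. A harmless demonstration of the mode itself on a scratch file:

```shell
f=$(mktemp)
chmod 666 "$f"
stat -c '%a' "$f"   # prints 666: world-readable and world-writable
rm "$f"
```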
Is there still a better way ?
For Jenkins, you just need to run the following to add the jenkins user to the docker group so it can run docker commands. Note that docker-group membership still effectively lets the Jenkins user become root, so you'll want to be sure your Jenkins is secured, or else accept that its users can become root on this system:
sudo usermod -aG docker jenkins

How to run chef-client vagrant provisioner from custom non-root user?

I wonder whether there is a way to set the user in the Vagrant configuration, so that the box will be provisioned from a non-root account. The thing is that I want to run chef-client on boxes as a specific user (deployer), not root, but for that I first have to run a provisioner to create this user, and that provisioner runs as root.
As I understand it, one solution is to run provisioning to create the deployer user, then change all Chef-related files and directories on the box to be owned by the deployer user, and then run the actual provisioning from the Chef server.
Is there some better solution?
Forgive me if I'm just restating the second half of your question, but it seems like you may want to create a minimal starting provisioner (which runs as root) and then spawn another provisioner as your intended user.
Here is an example of how I install my dotfiles as my SSH user (vagrant):
# ... in shell provision script...
su -c "cd /home/vagrant/.dotfiles && bash install.bash" vagrant
Similar Vagrant GitHub issue

capistrano deployment with use_sudo=true - permissions problem

I am trying to do a deployment with Capistrano to a newly installed Ubuntu server.
I am deploying to the directory /var/www, owned by root, so I need to set use_sudo to true.
While I can execute commands with run "#{try_sudo} command" without problems, the svn checkout doesn't work with the sudo prefix.
I try
set :deploy_via, :export
and it throws
Can't make directory '/var/www/pr_name/releases/20091217171253': Permission denied
during checkout
I imagine adding the "try_sudo" prefix to "svn export" would help, but where can I edit the command that deploy_via uses?
--
If, on the other hand, I don't use use_sudo and set the /var/www/ directory's ownership to myuser, I still cannot deploy: some of my deployment commands set folder ownership to the Apache user www-data, and then I get something like:
changing ownership of `/var/www/pr_name/current/specificdirectory': Operation not permitted
which, if I understand correctly, has to be done with sudo.
Using the sudo helper solved the problem.
Here is an example:
run "#{sudo} chown root:root /etc/my.cnf"
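For #{sudo} (or try_sudo) to run non-interactively, the deploy user also needs sudo rights on the remote host. A hedged sketch of a sudoers drop-in follows; the user name deployer and the blanket NOPASSWD rule are assumptions, so tighten the command list for production and always edit with visudo:

```text
# /etc/sudoers.d/deployer  (create with: visudo -f /etc/sudoers.d/deployer)
# "deployer" is a placeholder user; restrict the command list in production.
deployer ALL=(root) NOPASSWD: ALL
```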
Try cap deploy:setup