I'm brand new to Capistrano, working with an existing server that was previously using Chef to run deployments.
I have set :use_sudo, true in my deploy.rb, and yet "cap deploy:check" claims "You do not have permissions to write to '/srv/app/'".
My deployment user is correctly configured to sudo without a password prompt. If I manually run "sudo test -w /srv/app" on the server, it succeeds.
Why isn't Capistrano using sudo?
The command fails because the directory does not exist. You should first run cap deploy:setup; after that, cap deploy:check succeeds.
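For reference, a minimal deploy.rb along those lines might look like this (the user name is an assumption; the path is the one from the question):
set :user, "deploy"        # assumed deployment user
set :use_sudo, true
set :deploy_to, "/srv/app"
Running cap deploy:setup once creates the releases and shared directories under /srv/app, after which cap deploy:check should pass.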
I configured the project with privateKey authentication. I have the Rundeck server and one node, where I can run all operations that don't require sudo. On the node I have a user test who can run commands with sudo, and I use this user for running jobs from the server on the node. When I run a job, the node responds that I need to type the password for user test. There is a configuration in Rundeck that allows automating this process. Here is how my project.properties file looks:
#Project Test configuration, generated
#Tue Dec 08 10:52:45 UTC 2015
project.name=Test
resources.source.1.config.requireFileExists=false
project.ssh-authentication=privateKey
resources.source.1.config.includeServerNode=true
resources.source.1.config.generateFileAutomatically=true
resources.source.1.config.format=resourcexml
resources.source.1.config.file=/home/vagrant/projects/Test/etc/resources.xml
project.ssh-keypath=/opt/test/keys/test_prv_key
project.description=Test project
resources.source.1.type=file
sudo-command-enabled=true
sudo-password-storage-path=/home/vagrant/var/storage/content/keys/test.password
sudo-prompt-pattern='^\[sudo\] password for .+:.*'
The problem is that Rundeck doesn't match the pattern for the sudo command, and the connection is dropped 3 seconds after it asks for the password.
Update:
I did not find a solution, so I gave the user access to sudo without a password (NOPASSWD in sudoers).
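For reference, the sudoers entry for that (added via visudo) looks like this for the test user:
test ALL=(ALL) NOPASSWD: ALL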
I encountered a similar problem and, after trying every combination of configuration options specified in the documentation, I gave up and used this hack instead:
echo #option.sudoPassword# | sudo -S my_command
Try applying the config below in /etc/rundeck/project.properties:
project.sudo-command-enabled=true
project.sudo-command-pattern=^sudo$
Excuse my DevOps naiveté, but I assume all you need to deploy to a machine is a proper SSH key, a port to expose, the machine's IP address, a login, and the code to deploy.
So are there any simple solutions that deploy code to a remote server with the only input being an SSH key, a Dockerfile and the code itself? I'm thinking it could be set up in a deterministic (almost functional) manner where the input is the server's IP address and a login, and the output is a running server.
I've tried setting up Dokku on DigitalOcean (https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-dokku-application), and that requires a DNS record and git. I don't need those as dependencies.
Thanks
If I understand your question correctly, you don't need anything more than scp, ssh and a couple of shell scripts.
Let's say you want to deploy your code from serverA to serverB.
On serverB, create a directory with your Dockerfile. Also, create a shell script, let's call it build_image.sh, that runs your docker build command using sudo.
Also, on serverB, create a shell script that builds your code from source (if necessary).
Finally, on serverB, create a shell script that calls your code build script, then your docker build script, and at the end runs your new docker image. Let's call this script do_it_all.sh.
Make sure that you chmod 755 all shell scripts.
Now, on serverA, you have a directory with the source code. scp that directory to serverB into the directory with the Dockerfile.
Next, from serverA use ssh to call do_it_all.sh on serverB. This will build your code, build your image and deploy a container without the need for extra software, packages, git, DNS records, etc.
You can even automate this process using cron or something else to have nightly deployments, if you wish, or deployments under other conditions.
Example scripts/commands:
On serverB:
build_image.sh:
#!/bin/bash
sudo docker build -t my_image .   # build from the Dockerfile in the current directory
build_code.sh (optional, adjust to your code):
#!/bin/bash
cd /path/to/my/code
./configure
make
do_it_all.sh:
#!/bin/bash
cd /path/to/my/dockerfile
sudo docker stop my_container #stop the old container
sudo docker rm my_container #remove the old container
sudo docker rmi my_image #remove the old image
./build_code.sh #comment out if not needed
./build_image.sh
sudo docker run -d --name my_container my_image
On serverA:
scp -r /path/to/my/code serverB:/path/to/my/dockerfile
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'
That should be it. Adjust for your system.
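If you want the nightly deployments mentioned above, a crontab entry on serverA could look roughly like this (the 2 a.m. time is arbitrary; the paths are the placeholders from the example):
0 2 * * * ssh serverB '/path/to/my/dockerfile/do_it_all.sh'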
To deploy to a brand new system, just write a script on serverA that uses ssh to create the necessary directories on serverB (ssh serverB 'mkdir /path/to/dockerfile'). Next, copy your Dockerfile, your build scripts and your code from serverA to serverB using scp. Then run do_it_all.sh on serverB from serverA using ssh.
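A rough bootstrap script for that, reusing the placeholder paths from the examples above, could be:
#!/bin/bash
ssh serverB 'mkdir -p /path/to/my/dockerfile'                  # create the target directory
scp Dockerfile build_code.sh build_image.sh do_it_all.sh serverB:/path/to/my/dockerfile/
scp -r /path/to/my/code serverB:/path/to/my/dockerfile         # copy the source code
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'              # build and run the container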
I am using Capistrano to deploy my Rails application on an Ubuntu server.
I have already logged into the server and created a folder /webapps/myapp, but no subfolders below it.
Then I run
cap deploy:setup
No errors so far, so I run
cap deploy:check
Now I get this message
You do not have permissions to write to /webapps/myapp/releases
I can get around this by logging in to the server and changing the owner of releases; I just wonder why it is not created with the user I use for deploying. Is this how it works, or am I missing something?
In your deploy.rb file you should specify the deployment user and whether it has sudo privileges.
set :user, "william"
set :use_sudo, false
Giving sudo privileges isn't recommended, but the option exists.
The directory to which you deploy should already be owned by the deployment user "william".
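For example, on the server (the path is the one from the question):
sudo chown -R william:william /webapps/myapp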
I am trying to do a deployment with Capistrano to a newly installed Ubuntu server.
I am deploying to the directory /var/www, owned by root, so I need to set use_sudo to true.
While I can execute commands with run "#{try_sudo} command" without problems, svn checkout doesn't work with the sudo prefix.
I try
set :deploy_via, :export
and it throws
Can't make directory '/var/www/pr_name/releases/20091217171253': Permission denied
during checkout
I imagine adding the "try_sudo" prefix to "svn export" would help, but where can I edit the command it uses in deploy_via?
--
If, on the other hand, I don't use use_sudo and set the /var/www/ directory ownership to myuser, I still cannot deploy - some of my deployment commands set folder ownership to the Apache user www-data and then I get something like:
changing ownership of `/var/www/pr_name/current/specificdirectory': Operation not permitted
which, if I understand correctly, has to be done with sudo.
Using the sudo helper solved the problem.
Here is an example:
run "#{sudo} chown root:root /etc/my.cnf"
Try cap deploy:setup
I have a Capistrano deployment recipe I've been using for some time to deploy my web app and then restart Apache/Nginx using the sudo command. Recently cap deploy hangs when I try to execute these sudo commands. I see the output:
"[sudo] password for "
With my server name and the remote login, but this is not a secure login prompt. The cap shell is just hanging waiting for more output and does not allow me to type my password in to complete the remote sudo command.
Is there a way to fix this or a decent work around? I did not want to remove the sudo password prompt of my remote user for web restart commands.
This seems to happen when connecting to CentOS machines as well. Add the following line to your Capistrano deploy file:
default_run_options[:pty] = true
Also make sure to use the sudo helper instead of executing sudo in your run commands directly. For example:
# not
run "sudo chown root:root /etc/my.cnf"
# but
sudo "chown root:root /etc/my.cnf"
The other advice may be sound, but I found that once I updated to Capistrano 2.5.3 the problem went away. I have to make sure I stop running the default versions of the tools that came with my OS.
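For example, something like this installs the newer version (the exact version pin is optional):
gem install capistrano -v 2.5.3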
# prevent sudo prompting for password
set :sudo_prompt, ""