Ansible synchronize asking for a password - deployment

I am using Ansible (1.9.2) to deploy some files to a Redhat 6.4 server.
The playbook looks something like this
- name: deploy files
  hosts: web
  tasks:
    - name: sync files
      sudo: no
      synchronize: src={{ local_path }} dest={{ dest_path }}
And to kick this off I run something like the following:
ansible-playbook -i myinventory myplaybook.yml -u DOMAIN\\user --ask-pass
When I start the play I enter my password at the prompt, and facts are then gathered successfully. However, as soon as the synchronize task is reached, another prompt asks for my password again, like the following:
DOMAIN\user@hostname's password:
If I enter my password again the deploy completes correctly.
My questions are:
How can I fix or work around this, so that I do not have to enter my password for every use of the synchronize module?
Is this currently expected behaviour for the synchronize module? Or is this a bug in Ansible?
I cannot use ssh keys due to environment restrictions.
I do not want to use the copy module for scalability reasons.
Things I have tried
I have seen a number of other questions on this subject, but I have not been able to use any of them to fix my issue or understand whether this is expected behavior.
Ansible synchronize prompts passphrase even if already entered at the beginning
Ansible prompts password when using synchronize
https://github.com/ansible/ansible/issues/5934
https://github.com/ansible/ansible/issues/7071
The Ansible docs are generally excellent, but I have not been able to find anything about this in the official docs.
I have tried specifying the user and password in the inventory file and not using the --ask-pass and -u parameters. While I then do not have to enter the password to gather facts, the synchronize module still requests my password.
I have tried setting --ask-sudo-pass as well, but it did not help.
I have been using a CentOS 7 control box, but I have also tried an Ubuntu 14.04 box.
Can anyone help?

Why not use an inventory like the one below, encrypted with Vault (ansible-playbook --ask-vault-pass ...)?
[targets]
other1.example.com ansible_connection=ssh ansible_ssh_user=mpdehaan ansible_ssh_pass=foobar
other2.example.com ansible_connection=ssh ansible_ssh_user=mdehaan ansible_ssh_pass=foobar123
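For reference, a rough sketch of that workflow (file names are just examples, and it is worth verifying that your Ansible version accepts a vault-encrypted inventory):
ansible-vault encrypt myinventory
ansible-playbook -i myinventory myplaybook.yml --ask-vault-pass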

Synchronize will ask you for a password if your Ansible server credential is different from your target host's. I've tried many proposed workarounds, but none of them worked...
Eventually I had to go back to the file module, using --sftp-extra-args to achieve what I needed. It did the trick.

To pass a password to the synchronize module you can use rsync's --password-file option, like so:
tasks:
  - name: test_rsync
    synchronize:
      mode: pull
      src: rsync://user@host/your/remote/path
      dest: /your/local/path/
      rsync_opts:
        - "--password-file=/path/to/password_file"

I used the shell module for that.
- name: test_rsync
  shell: rsync -a --delete --rsh='/usr/bin/sshpass -p "{{ pass }}" ssh -o StrictHostKeyChecking=no -l $RemoteUser' {{ local_path }} $RemoteUser@{{ inventory_hostname }}:/{{ dest_path }}
  become: false
  delegate_to: localhost  # if needed
The password is encrypted with Ansible-Vault and saved under /vars/main.yml
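For completeness, a minimal sketch of how that vars file could be created (the variable name pass matches the task above; the exact path depends on your role or playbook layout):
ansible-vault create vars/main.yml
# inside the editor, add a single line such as:  pass: "my-remote-password"
The playbook then loads it via vars_files, or the role picks up vars/main.yml automatically if this sits inside a role.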

Related

How to call GitHub Secret in Action

I have stored my SSH password in GitHub Secrets, with the key name PASSWORD. When I use it with ${{secrets.PASSWORD}} I get no output, and therefore no access to the server via SSH. What do I have to do to use my secret as a password?
- name: Run a multi-line script
  run: |
    echo Echo my secret
    echo ${{secrets.PASSWORD}}
- name: executing remote ssh commands using password
  uses: appleboy/ssh-action@master
  with:
    host: '12234.myserver.com'
    username: 'ssh-user'
    password: ${{secrets.PASSWORD}}
    port: '22'
    script: |
      cd www/htdocs/src/
I found out what the problem was, for all those who have the same or similar problems in the future: with GitHub Secrets, I made the mistake of storing the password under Environments, but it has to be stored as a Repository secret.
The next question I had was: what is the difference between a Repository and an Environment secret? For the short answer, take a look at the comment below from @jessehouwing, or at the link posted by @Nasir Rabbani: https://stackoverflow.com/a/65958690/13889413.
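For anyone scripting this, a hypothetical way to create the repository-level secret with the GitHub CLI (assuming gh is installed and authenticated):
gh secret set PASSWORD --body 'my-ssh-password'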

docker-compose pull gives either a gpg error or a permissions error when I attempt to use it with or without sudo

Hi everyone, I hope that someone can help answer my question.
I am joining a project in which I have to use various Docker containers. I was told that I just needed to use docker-compose to pull down all the necessary containers. I tried this, and got two different errors, depending on whether I used sudo or not. My machine runs Ubuntu Bionic Beaver 18.04.4 LTS.
I have docker-engine installed according to the installation instructions for Bionic on the github page, and docker-compose is likewise installed according to its instructions. I did not create a "docker" group since I have sudo access.
We have two repos that I have to log in to before I can do anything. In order to prevent my passwords from being stored unencrypted in config.json, I followed this guide to set up a secure credential store:
https://www.techrepublic.com/article/how-to-setup-secure-credential-storage-for-docker/
However, rather than asking me for the password and/or passphrase mentioned in this article, the login process makes me enter the actual passwords to the repos. So, the secure credential store may not be working, which might be causing the problem.
At any rate, once I log in and the two commands show login succeeded, I then try to do a
docker-compose pull
on the repos. When I do
sudo docker-compose pull
I get this final error:
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-pass exited with "exit status 2: gpg: WARNING: unsafe ownership on homedir '/home/myuser/.gnupg'\ngpg: decryption failed: No secret key".')
An ls of the .gnupg directory shows:
myuser@myhost$ ls -lA ~ | grep gnupg
drwx------ 4 myuser myuser 226 Feb 9 13:35 .gnupg
gpg --list-secret-keys shows my keypair when I run it as myuser.
I am assuming that what is happening is that, because I am running as sudo, the user trying to access this directory is root, not myuser, and so it is failing. However, if I leave off the sudo:
docker-compose pull
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
I am guessing that this is because my normal user doesn't have the ability to connect to the docker daemon's Unix socket.
So, how do I make these play together? Is the answer to add a docker group so that the command still runs as myuser and not as root? Or is there another way to do this?
Also, why is my credential store not asking me for the password set by docker-credential-pass or the GPG passphrase? I suspect these two are related. Perhaps the pull is trying to send my authentication tokens over again and can't because it doesn't have access to the secure credentials store.
All of the above are guesses. Does anyone know what is going on here?
Thanking you in advance,
Brad
I just wanted to follow up with a solution to this question that worked for me.
Firstly, you need to add your user to the docker group that was created during docker-engine's installation.
sudo usermod --append --groups docker your_user_name
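The group change only takes effect in new login sessions, so something like the following is usually needed before the permission error on the Docker socket goes away:
newgrp docker    # start a shell with the new group active (or simply log out and back in)
id -nG           # verify that docker now appears in your groups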
Because I had already used sudo to try this, there were a few files that ended up being created by root.
So, you have to chown a few things.
sudo chown your_user_name:your_group_name ~/.docker/config.json
Note that for the group name I used docker, but I'm not sure if that's necessary.
Then, there were files inside the ~/.password-store directory that needed to be changed.
sudo chown -R your_user_name:your_group_name ~/.password-store
Most of these files are already owned by you, but the recorded credentials are not.
Then, the magic that fixed it all. From
https://ask.csdn.net/questions/5153956
you have to do this.
export GPG_TTY=$(tty)
and it is this last step that makes gpg work.
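To make that setting survive new shells, it can be appended to your shell startup file (a sketch, assuming bash):
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc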
Then, you can log in to your repos if you have to without using sudo
docker login -u repo_user_name your_repo_host
and then log in with your repo password.
Note that I don't know why you have to use the repo password instead of using the stored credentials.
Once you log in, you should be able to do a
docker-compose pull
without sudo
from the directory where you want the containers to be placed.
Note that you will probably have to provide your GPG passphrase at first. I'm not sure about this because I had already unlocked the key by following the steps in the above link to check to see if docker-credential-pass had the right credential store password stored.
and that should do it.

ncftp deployment (via Bitbucket Pipelines) getting "server said: www: no such file or directory" but path in FileZilla is exactly right

I'm trying to automate deployment via FTP using Bitbucket Pipelines.
Path is:
/var/www/vhosts/maindomain.com/subdomain.maindomain.com
Tried it with and without the first forward slash in there. Also checked the default path when you connect, and it's maindomain.com/subdomain.maindomain.com -- tried that too, but same error.
Code looks like this:
image: node:9.8.0
pipelines:
  default:
    - step:
        name: Deployment
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT dist/*
          - echo Finished uploading /dist files to $FTP_HOST$FTP_SITE_ROOT
But the problem is ncftp doesn't like the file path to upload, no matter what. I've been using the one showing up in FileZilla after navigating to that folder while connecting with the exact same credentials.
How can I track down the right path or troubleshoot this further?
I think the problem lies with my server only accepting SFTP connections, and I can't set the port to 22 as NCFTP does not support SSH. I'm currently looking at lftp as an alternative and will post the syntax here if I figure it out.
Edit: This does not scale well; I will be pursuing different avenues for continuous deployment.
You don't need to add the full path of the FTP site; just put the path as below.
-R /maindomain.com/subdomain.maindomain.com dist/*
To check the physical path of the site: Site -> Manage FTP Site -> Advanced Settings. There you will find the physical path, which we don't need to include when using the CLI.
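Applied to the pipeline above, the upload step would then look something like this (path taken from this answer; adjust it to your own site root):
ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST /maindomain.com/subdomain.maindomain.com dist/*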

Ansible playbook: pipeline local cmd output (e,g. git archive) to server?

So my project has a special infrastructure; the server only has an SSH connection. I have to upload my project code to the server using SSH/SFTP every time, manually. The server cannot fetch.
Basically I need something like git archive master | ssh user@host 'tar -zxvf -', done automatically using a playbook.
I looked at the docs; local_action seems to work, but it requires a local ssh setup. Are there other ways around this?
How about something like this? You may have to tweak it to suit your needs.
tasks:
  - shell: git archive --format=tar.gz --output=/tmp/master.tar.gz master
  - unarchive: src=/tmp/master.tar.gz dest={{ dir_to_untar }}
I still do not understand what "it requires a local ssh setup" means in your question.
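If the git repository only exists on your control machine (as the question implies), a variation of the answer above using delegate_to may be closer to what you need; a sketch, not tested:
tasks:
  # build the archive on the control machine, where the repository lives
  - shell: git archive --format=tar.gz --output=/tmp/master.tar.gz master
    delegate_to: localhost
  # unarchive copies the local archive to the remote host over the existing SSH connection and unpacks it
  - unarchive: src=/tmp/master.tar.gz dest={{ dir_to_untar }}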

capistrano insisting on password

First, my teammate is successfully deploying on almost exactly the same setup and using the exact same deploy config as me. Therefore, it cannot be a deploy configuration issue; there is nothing local or unique to either of our machines.
Second, I can successfully log in from my machine using ssh user@server.com without a password prompt.
However, I have tried everything to stop capistrano asking this question:
--recursive; fi"
servers: ["myserver.com"]
Password:
* [deploy:update_code] rolling back
I have tried every single password I have, and not entering a password. I don't even know what this password is for. Is it SSH? Because I don't even have a password-protected key file.
I'm totally lost and I've literally been debugging this for 5 hours now without a single change in status. I'd really appreciate some help on how I can find out what the problem is.
Note: cap deploy simply works for my teammate using the same config and the same server. Everything is the same, except a different key file (note mine works, and was tested via the ssh command).
Do you have to specify user@server.com to SSH to your server successfully (i.e., do you have a different username on your remote server from your local machine)?
You might just need to tell Capistrano what username it should be using to connect with by adding it to your deploy.rb:
set :user, "your-username"
You could also change the default username SSH will pick for that server by using ~/.ssh/config:
Host your.server.name
  User your-username