How do I get past authentication for setting local host name via kubespray? - kubernetes

Hi, I keep getting this error when using Ansible via Kubespray, and I am wondering how to overcome it:
TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)] ********************************************************************************************************************************************************************************************************
task path: /home/dc/xcp-projects/kubespray/roles/bootstrap-os/tasks/main.yml:50
<192.168.10.55> (1, b'\x1b[1;31m==== AUTHENTICATING FOR org.freedesktop.hostname1.set-hostname ===\r\n\x1b[0mAuthentication is required to set the local host name.\r\nMultiple identities can be used for authentication:\r\n 1. test\r\n 2. provision\r\n 3. dc\r\nChoose identity to authenticate as (1-3): \r\n{"msg": "Command failed rc=1, out=, err=\\u001b[0;1;31mCould not set property: Connection timed out\\u001b[0m\\n", "failed": true, "invocation": {"module_args": {"name": "node3", "use": null}}}\r\n', b'Shared connection to 192.168.10.55 closed.\r\n')
<192.168.10.55> Failed to connect to the host via ssh: Shared connection to 192.168.10.55 closed.
<192.168.10.55> ESTABLISH SSH CONNECTION FOR USER: provision
<192.168.10.55> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="provision"' -o ConnectTimeout=10 -oStrictHostKeyChecking=no -o ControlPath=/home/dc/.ansible/cp/c6d70a0b7d 192.168.10.55 '/bin/sh -c '"'"'rm -f -r /home/provision/.ansible/tmp/ansible-tmp-1614373378.5434802-17760837116436/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.10.56> (0, b'', b'')
fatal: [node2]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"name": "node2",
"use": null
}
},
"msg": "Command failed rc=1, out=, err=\u001b[0;1;31mCould not set property: Method call timed out\u001b[0m\n"
}
My inventory file is as follows:
all:
  hosts:
    node1:
      ansible_host: 192.168.10.54
      ip: 192.168.10.54
      access_ip: 192.168.10.54
    node2:
      ansible_host: 192.168.10.56
      ip: 192.168.10.56
      access_ip: 192.168.10.56
    node3:
      ansible_host: 192.168.10.55
      ip: 192.168.10.55
      access_ip: 192.168.10.55
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
I also have a file which provisions the users in the following manner:
- name: Add a new user named provision
  user:
    name: provision
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add a new user named dc
  user:
    name: dc
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add provision user to the sudoers
  copy:
    dest: "/etc/sudoers.d/provision"
    content: "provision ALL=(ALL) NOPASSWD: ALL"

- name: Add provision user to the sudoers
  copy:
    dest: "/etc/sudoers.d/dc"
    content: "dc ALL=(ALL) NOPASSWD: ALL"

- name: Disable Root Login
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: "PermitRootLogin no"
    state: present
    backup: yes
  notify:
    - Restart ssh
I have run the Ansible command in the following manner:
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" kubespray/cluster.yml -vvv
as well as
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" --become-user="provision" kubespray/cluster.yml -vv
Both yield the same error, and interestingly, privilege escalation seems to succeed at earlier points.
After reading this article:
https://askubuntu.com/questions/542397/change-default-user-for-authentication
I decided to add the users to the sudo group, but the error still persists.
Looking at the spot in main.yml indicated by the error, it seems to be this code that is possibly causing the issue:
# Workaround for https://github.com/ansible/ansible/issues/42726
# (1/3)
- name: Gather host facts to get ansible_os_family
  setup:
    gather_subset: '!all'
    filter: ansible_*

- name: Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)
  hostname:
    name: "{{ inventory_hostname }}"
  when:
    - override_system_hostname
    - ansible_os_family not in ['Suse', 'Flatcar Container Linux by Kinvolk', 'ClearLinux'] and not is_fedora_coreos
The OS of the hosts is Ubuntu 20.04.2 Server.
Is there anything more I can do?

From Kubespray documentation:
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
As stated, --become is mandatory; it enables the privilege escalation needed for most of the system modifications Kubespray performs (like setting the hostname).
With --user=provision you are only setting the SSH user, but privilege escalation is still needed.
With --become-user=provision you are only saying that privilege escalation should escalate to the 'provision' user (and you would still need --become for any escalation to happen at all).
In both cases, unless the 'provision' user has root permissions (putting it in the root group may not be enough), it won't be sufficient.
For the 'provision' user alone to be enough, you need to make sure it can run hostnamectl set-hostname <some-new-host> without being asked for authentication.
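For example, your original invocation with privilege escalation enabled would look like this (paths and options taken from the command you posted, with --become and --become-user=root added):
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml \
  --user="provision" --become --become-user=root \
  --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" \
  --ssh-extra-args="-oStrictHostKeyChecking=no" \
  kubespray/cluster.yml -vvv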

Ansible Playbook that runs 2nd play as different user

The goal is to have only one playbook that can be executed with the initial password setup when the OS is built.
This playbook will add a service account and then execute the remaining plays as that service account.
The issue I'm having is that the subsequent plays are not using the service account correctly.
Does anyone have any advice on how to get this method to work?
I can see that it's using the new account, but it's not passing the password for that new account.
My playbook is below:
---
#name: Playbook to run through roles to provision new server
- hosts: all
  gather_facts: false
  become: true
  #become_user: '{{ root_user }}' #this is commented out to show what acct is being used
  tasks:
    #Use root account to add new service account, so root account can also be managed.
    - name: Add Service Accounts
      include_tasks: ../steps/ServiceAccount_add.yml
    - name: Pause for 30 seconds
      ansible.builtin.pause:
        seconds: 30

#2nd play to be run as service account so root is not used.
- hosts: all
  gather_facts: false
  become: true
  remote_user: '{{ Service_Account }}'
  become_user: '{{ Service_Account }}'
  vars:
    ansible_become_password: '{{ Service_AccountPW }}'
    remote_user_password: '{{ Service_AccountPW }}'
  tasks:
    - name: Run Baseline
      include_tasks: ../steps/Yum_baseline.yml
    - name: Run Update
      include_tasks: ../steps/Yum_Update.yml
Everything executes up to this part:
<IPADDRESS> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="service_account"' -o ConnectTimeout=10 -o ControlPath=/tmp/bwrap_656_m3k5zy9e/awx_656_4iz9_26u/cp/a07c97f8e1 IPADDRESS '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<IPADDRESS> (5, '', 'Permission denied, please try again.\r\nPermission denied, please try again.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n')
fatal: [IPADDRESS]: UNREACHABLE! => {
"changed": false,
"msg": "Invalid/incorrect password: Permission denied, please try again.\r\nPermission denied, please try again.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).",
"unreachable": true
}
Thanks to @Zeitounator for his comment!
Here is the final code that works:
---
#name: Playbook to run through roles to provision new server
- hosts: all
  gather_facts: false
  become: true
  #become_user: '{{ root_user }}'
  tasks:
    #Use root account to add new service account, so root account can also be managed.
    - name: Add Service Accounts
      include_tasks: ../steps/Ansible_accountadd.yml
    #Jeff Geerling suggested looking into this
    - name: Reset ssh connection to allow user changes to affect 'current login user'
      ansible.builtin.meta: reset_connection

#2nd play to be run as service account so root is not used.
- hosts: all
  gather_facts: false
  remote_user: '{{ Service_Account }}' # this is used to change the ssh user
  vars:
    ansible_ssh_pass: '{{ Service_AccountPW }}' # set the ssh user pw
  tasks:
    - name: Run Baseline
      include_tasks: ../steps/Yum_baseline.yml
    - name: Run Update
      include_tasks: ../steps/Yum_Update.yml
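For reference, a minimal sketch of how a playbook structured like this could be invoked, keeping the service account password off the command line with a vault-encrypted vars file (the file names here are hypothetical; the variable names match the playbook above):
# service_account_vars.yml (encrypted with ansible-vault)
Service_Account: svc_ansible
Service_AccountPW: "changeme"

ansible-playbook -i inventory provision.yml -e @service_account_vars.yml --ask-vault-pass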

Why does Ansible not see admin.conf, so that it needs to be exported manually?

Why does Ansible not see admin.conf when creating resources in the cloud?
- name: apply ingress
  shell: export KUBECONFIG=/etc/kubernetes/admin.conf && kubectl apply -f /home/ingress.yaml
It works like this and sees everything, but if I run it like this:
- name: apply ingress
  shell: kubectl apply -f /home/ingress.yaml
I get this error:
The connection to the server localhost:8080 was refused - did you
specify the right host or port?", "stderr_lines": ["The connection to
the server localhost:8080 was refused - did you specify the right host
or port?"], "stdout": "", "stdout_lines": []}
At the same time, if I log in to the server via SSH, the command works both under the ubuntu user and under the root user, without any exports.
P.S. Just in case, I copied admin.conf to the user's home directory:
- name: Create directory for kube config.
  become: yes
  file:
    path: /home/{{ ansible_user }}/.kube
    state: directory
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: 0755

- name: Copy admin.conf to user's home directory
  become_user: root
  become_method: sudo
  become: true
  copy:
    src: /etc/kubernetes/admin.conf
    dest: "/home/{{ ansible_user }}/.kube/config"
    remote_src: yes
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: 0644
I don't know why, but here is the solution:
- name: apply ingress
  become: true
  become_user: ubuntu
  shell: kubectl apply -f /home/ingress.yaml
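A sketch of an alternative that avoids the shell export entirely is to pass KUBECONFIG through the task's environment keyword (paths taken from the question):
- name: apply ingress
  become: true
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  shell: kubectl apply -f /home/ingress.yaml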

Ansible No File in Directory docker-compose

Hello, I'm trying to start Docker Compose (docker-compose_mysql.yml up), but Ansible says there is no such file in the directory. I've already looked at other solutions on GitHub and Stack Overflow, but nothing has allowed me to solve my problem.
Thank you :)
My playbook:
---
- name: Mettre en place Redmine - mySQL
  connection: localhost
  hosts: localhost
  become_method: sudo
  tasks:
    - name: install docker-py
      pip: name=docker-py

    - name: Installer le docker compose
      command: sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      register: command1
    - debug: var=command1.stdout_lines

    - name: Installer le docker compose
      command: pip install docker-compose

    - name: download docker compose
      command: wget https://raw.githubusercontent.com/sameersbn/docker-redmine/master/docker-compose-mysql.yml
      register: command2
    - debug: var=command2.stdout_lines

    - name: docker compose run
      command: docker-compose-mysql.yml up-d
      register: command3
    - debug: var=command3.stdout_lines
My error:
FAILED! => {"changed": false, "cmd": "docker-compose-mysql.yml up-d", "msg": "[Errno 2] Aucun fichier ou dossier de ce type: b'docker-compose-mysql.yml'", "rc": 2}
The file docker-compose-mysql.yml is present in the directory.
- name: copy sql schema
  hosts: test-mysql
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ playbook_dir }}"

    - name: Docker compose
      command: docker-compose -f {{ name }}_compose.yml up -d
Then, either move {{ name }}_compose.yml to the directory or provide an absolute path in command: docker-compose -f [abs_path]{{ name }}_compose.yml up -d
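Applied to the playbook in the question, the failing task would then become something like the following (a sketch; it assumes the compose file was downloaded into the playbook directory, as the wget task above suggests):
- name: docker compose run
  command: docker-compose -f {{ playbook_dir }}/docker-compose-mysql.yml up -d
  register: command3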

Vagrant: running Ansible provisioning after all VMs booted, Ansible cannot connect to all hosts

I'm trying to figure out how to use Ansible with Vagrant the proper way. By default, it seems Vagrant isolates Ansible execution per box and runs the playbook after each box boots, applying it only to that single box in the loop. I find this VERY counterproductive, and I have tried tricking Vagrant into executing a playbook across all of the hosts AFTER all of them have booted, but it seems that Ansible, when started from Vagrant, never sees more than a single box at a time.
Edit: these are the versions I am working with:
Vagrant: 2.2.6
Ansible: 2.5.1
Virtualbox: 6.1
The playbook (with the hosts.ini) executes by itself without issues when I run it stand-alone with the ansible-playbook executable after the hosts come up, so the problem is with my Vagrantfile. I just cannot figure it out.
This is the Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

IMAGE_NAME = "ubuntu/bionic64"

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.box = IMAGE_NAME

  # Virtualbox configuration
  config.vm.provider "virtualbox" do |v|
    v.memory = 4096
    v.cpus = 2
    #v.linked_clone = true
  end

  # master and node definition
  boxes = [
    { :name => "k8s-master", :ip => "192.168.50.10" },
    { :name => "k8s-node-1", :ip => "192.168.50.11" }
  ]

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.network :private_network, ip: opts[:ip]
      if opts[:name] == "k8s-node-1"
        config.vm.provision "ansible_local" do |ansible|
          ansible.compatibility_mode = "2.0"
          ansible.limit = "all"
          ansible.config_file = "ansible.cfg"
          ansible.become = true
          ansible.playbook = "playbook.yml"
          ansible.groups = {
            "masters" => ["k8s-master"],
            "nodes" => ["k8s-node-1"]
          }
        end
      end
    end
  end
end
ansible.cfg
[defaults]
connection = smart
timeout = 60
deprecation_warnings = False
host_key_checking = False
inventory = hosts.ini
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
hosts.ini
[masters]
k8s-master ansible_host=192.168.50.10 ansible_user=vagrant
[nodes]
k8s-node-1 ansible_host=192.168.50.11 ansible_user=vagrant
[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
playbook.yml
- hosts: all
  become: yes
  tasks:
    - name: Update apt cache.
      apt: update_cache=yes cache_valid_time=3600
      when: ansible_os_family == 'Debian'

    - name: Ensure swap is disabled.
      mount:
        name: swap
        fstype: swap
        state: absent

    - name: Disable swap.
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: create the 'mobile' user
      user: name=mobile append=yes state=present createhome=yes shell=/bin/bash

    - name: allow 'mobile' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'mobile ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: set up authorized keys for the mobile user
      authorized_key:
        user: mobile
        key: "{{ lookup('pipe','cat ssh_keys/*.pub') }}"
        state: present
        exclusive: yes

- hosts: all
  become: yes
  tasks:
    - name: install Docker
      apt:
        name: docker.io
        state: present
        update_cache: true

    - name: install APT Transport HTTPS
      apt:
        name: apt-transport-https
        state: present

    - name: add Kubernetes apt-key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: add Kubernetes' APT repository
      apt_repository:
        repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: 'kubernetes'

    - name: install kubelet
      apt:
        name: kubelet=1.17.0-00
        state: present
        update_cache: true

    - name: install kubeadm
      apt:
        name: kubeadm=1.17.0-00
        state: present

- hosts: masters
  become: yes
  tasks:
    - name: install kubectl
      apt:
        name: kubectl=1.17.0-00
        state: present
        force: yes

- hosts: k8s-master
  become: yes
  tasks:
    - name: check docker status
      systemd:
        state: started
        name: docker

    - name: initialize the cluster
      shell: kubeadm init --apiserver-advertise-address 192.168.50.10 --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: mobile
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/mobile/.kube/config
        remote_src: yes
        owner: mobile

    - name: install Pod network
      become: yes
      become_user: mobile
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

- hosts: k8s-master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command 2>/dev/null
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: nodes
  become: yes
  tasks:
    - name: check docker status
      systemd:
        state: started
        name: docker

    - name: join cluster
      shell: "{{ hostvars['k8s-master'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt
The moment the playbook tries to execute against k8s-master, it fails like this:
fatal: [k8s-master]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-master: Temporary failure in name resolution", "unreachable": true}
The host is up. SSH works.
Who can help me sort this out?
Thanks!
I have managed to use Ansible inside of Vagrant.
Here is what I did to make it work:
Steps to reproduce:
- Install Vagrant, Virtualbox
- Create all the necessary files and directories:
  - ansible.cfg
  - playbook.yml
  - hosts
  - insecure_private_key
  - Vagrantfile
- Test
Install Vagrant, Virtualbox
Follow the installation guides at the appropriate sites:
Vagrant
Virtualbox
Create all the necessary files and directories
This example is based on the original poster's files.
Create vagrant and ansible folders to store all the configuration files and directories. The structure could look like this:
- vagrant - directory
  - Vagrantfile - file with the main configuration
- ansible - directory
  - ansible.cfg - Ansible configuration file
  - playbook.yml - file with steps for Ansible to execute
  - hosts - file with information about the hosts
  - insecure_private_key - private key of the created machines
The ansible folder is a separate directory that will be copied to k8s-node-1.
By default, Vagrant shares the vagrant folder with permissions of 777. This gives the owner, group and others full access to everything inside of it.
Logging in to the virtual machine manually and running the ansible-playbook command inside the vagrant directory will produce permission-related errors, and it will render ansible.cfg and insecure_private_key useless.
Ansible.cfg
Ansible.cfg is Ansible's configuration file. The example used is below:
[defaults]
connection = smart
timeout = 60
deprecation_warnings = False
host_key_checking = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
Create ansible.cfg inside the ansible directory.
Playbook.yml
The example playbook.yml is a file with steps for Ansible to execute.
It will check the connections and test whether the groups are configured correctly:
- name: Check all connections
  hosts: all
  tasks:
    - name: Ping
      ping:

- name: Check specific connection to masters
  hosts: masters
  tasks:
    - name: Ping
      ping:

- name: Check specific connection to nodes
  hosts: nodes
  tasks:
    - name: Ping
      ping:
Create playbook.yml inside the ansible directory.
Insecure_private_key
To successfully connect to the virtual machines you will need the insecure_private_key. You can create it by invoking the command $ vagrant init inside the vagrant directory.
It will create insecure_private_key on your physical machine in HOME_DIRECTORY/.vagrant.d.
Copy it to the ansible folder.
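For example (a sketch; adjust the paths to your own layout):
cp ~/.vagrant.d/insecure_private_key ./ansible/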
Hosts
The hosts file below is responsible for passing the information about the hosts to Ansible:
[masters]
k8s-master ansible_host=192.168.50.10 ansible_user=vagrant
[nodes]
k8s-node-1 ansible_host=192.168.50.11 ansible_user=vagrant
[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/ansible/insecure_private_key
Create the hosts file inside the ansible directory.
Please take a specific look at: ansible_ssh_private_key_file=/ansible/insecure_private_key
This is the declaration for Ansible to use the previously mentioned key.
Vagrantfile
The Vagrantfile is the main configuration file:
# -*- mode: ruby -*-
# vi: set ft=ruby :

IMAGE_NAME = "ubuntu/bionic64"

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.box = IMAGE_NAME

  # Virtualbox configuration
  config.vm.provider "virtualbox" do |v|
    v.memory = 4096
    v.cpus = 2
    #v.linked_clone = true
  end

  # master and node definition
  boxes = [
    { :name => "k8s-master", :ip => "192.168.50.10" },
    { :name => "k8s-node-1", :ip => "192.168.50.11" }
  ]

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.network :private_network, ip: opts[:ip]
      if opts[:name] == "k8s-node-1"
        config.vm.synced_folder "../ansible", "/ansible", :mount_options => ["dmode=700", "fmode=700"]
        config.vm.provision "ansible_local" do |ansible|
          ansible.compatibility_mode = "2.0"
          ansible.limit = "all"
          ansible.config_file = "/ansible/ansible.cfg"
          ansible.become = true
          ansible.playbook = "/ansible/playbook.yml"
          ansible.inventory_path = "/ansible/hosts"
        end
      end
    end
  end
end
Please take a specific look at:
config.vm.synced_folder "../ansible", "/ansible", :mount_options => ["dmode=700", "fmode=700"]
config.vm.synced_folder will copy the ansible directory to k8s-node-1 with all the files inside.
It will set permissions for full access only for the owner (the vagrant user).
ansible.inventory_path = "/ansible/hosts"
ansible.inventory_path will tell Vagrant to provide the hosts file to Ansible.
Test
To check, run the following command from the vagrant directory:
$ vagrant up
The part of the output responsible for Ansible should look like this:
==> k8s-node-1: Running provisioner: ansible_local...
k8s-node-1: Installing Ansible...
k8s-node-1: Running ansible-playbook...
PLAY [Check all connections] ***************************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-master]
ok: [k8s-node-1]
TASK [Ping] ********************************************************************
ok: [k8s-master]
ok: [k8s-node-1]
PLAY [Check specific connection to masters] ************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-master]
TASK [Ping] ********************************************************************
ok: [k8s-master]
PLAY [Check specific connection to nodes] **************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-node-1]
TASK [Ping] ********************************************************************
ok: [k8s-node-1]
PLAY RECAP *********************************************************************
k8s-master : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
k8s-node-1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Unable to run 2x Powershell scripts after deploying ARM template via Ansible

I'm deploying an Azure ARM template via an Ansible playbook, which seems to work fine; however, I wish to add the ability to run two PowerShell scripts after the machine has been deployed. I already have a custom script extension running when the machine is deployed via the ARM template, but I also wish to run two more PowerShell scripts afterwards.
My Playbook:
---
- name: Deploy Azure ARM template.
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - ./vars/vault.yml
    - ./vars/vars.yml
  tasks:
    - include_vars: vault.yml

    - name: Create Azure Deploy
      azure_rm_deployment:
        client_id: "{{ client_id }}"
        secret: "{{ secret }}"
        subscription_id: "{{ subscription_id }}"
        tenant: "{{ tenant }}"
        state: present
        resource_group_name: AnsibleTest1
        location: UK South
        template: "{{ lookup('file', 'WindowsVirtualMachine.json') }}"
        parameters: "{{ (lookup('file', 'WindowsVirtualMachine.parameters.json') | from_json).parameters }}"

    - name: Run powershell script
      script: files/helloworld1.ps1

    - name: Run powershell script
      script: files/helloworld2.ps1
And the error after successfully deploying the template:
TASK [Run powershell script] ***************************************************************************************************************************************************************************
task path: /home/beefcake/.ansible/azure-json-deploy.yml:25
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: beefcake
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230 `" && echo ansible-tmp-1507219682.48-53342098196230="` echo /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230 `" ) && sleep 0'
<127.0.0.1> PUT /home/beefcake/.ansible/files/helloworld1.ps1 TO /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/ /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c ' /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": true,
"failed": true,
"msg": "non-zero return code",
"rc": 127,
"stderr": "/home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: 1: /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: =: not found\n/home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: 2: /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: Set-Content: not found\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit #/home/beefcake/.ansible/azure-json-deploy.retry
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
As far as I can tell, the playbook script option should send the script to the machine and run it locally, but for some reason it cannot find the script I have in a subfolder of the playbook.
Folder structure:
.ansible (folder)
- ansible.cfg
- azure-json-deploy.yml
- azure_rm.ini
- azure_rm.py
- WindowsVirtualMachine.json
- WindowsVirtualMachine.parameters.json
- vars (folder)
  - vars.yml
  - vault.yml
- files (folder)
  - helloworld1.ps1
  - helloworld2.ps1
Am I missing something?
Edit:
This is the second playbook I've created, which 4c74356b41 advised me to do.
---
# This playbook tests the script module on Windows hosts
- name: Run powershell script 1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Run powershell script
      script: files/helloworld1.ps1

# This playbook tests the script module on Windows hosts
- name: Run powershell script 2
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Run powershell script
      script: files/helloworld2.ps1
Which still generates the same error:
fatal: [localhost]: FAILED! => {
"changed": true,
"failed": true,
"msg": "non-zero return code",
"rc": 127,
"stderr": "/home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: 1: /home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: =: not found\n/home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: 2: /home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: Set-Content: not found\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit #/home/beefcake/.ansible/azure-json-deploy.retry
What Ansible is trying to do is copy the file from localhost to localhost, because the play is scoped to localhost.
I would imagine you don't have that host in the hosts file when you launch the playbook.
You need to add the host to Ansible and scope the script tasks to that host.
You can either create another playbook to do that or add an add_host step in the current one:
- add_host:
    name: name
To scope tasks to the new hosts, I use the import_playbook directive, which imports another playbook that is scoped to the host(s) in question. There might be a better way.
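Putting those pieces together, a minimal sketch of the add_host approach might look like the following (the host alias, IP variable, and WinRM connection settings are hypothetical assumptions, not taken from the question):
- name: Add the freshly deployed VM to the in-memory inventory
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - add_host:
        name: azure_vm
        ansible_host: "{{ vm_public_ip }}"
        ansible_user: "{{ vm_admin_user }}"
        ansible_password: "{{ vm_admin_password }}"
        ansible_connection: winrm
        ansible_winrm_server_cert_validation: ignore

- name: Run the PowerShell scripts on the new VM
  hosts: azure_vm
  gather_facts: false
  tasks:
    - name: Run powershell script
      script: files/helloworld1.ps1
    - name: Run powershell script
      script: files/helloworld2.ps1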