I'm deploying an Azure ARM template via an Ansible playbook, which seems to work fine; however, I want to run two PowerShell scripts after the machine has been deployed. I already have a custom script extension running when the machine is deployed via the ARM template, but I also want to run two more PowerShell scripts afterwards.
My Playbook:
---
- name: Deploy Azure ARM template.
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - ./vars/vault.yml
    - ./vars/vars.yml
  tasks:
    - include_vars: vault.yml

    - name: Create Azure Deploy
      azure_rm_deployment:
        client_id: "{{ client_id }}"
        secret: "{{ secret }}"
        subscription_id: "{{ subscription_id }}"
        tenant: "{{ tenant }}"
        state: present
        resource_group_name: AnsibleTest1
        location: UK South
        template: "{{ lookup('file', 'WindowsVirtualMachine.json') }}"
        parameters: "{{ (lookup('file', 'WindowsVirtualMachine.parameters.json') | from_json).parameters }}"

    - name: Run powershell script
      script: files/helloworld1.ps1

    - name: Run powershell script
      script: files/helloworld2.ps1
And the error after successfully deploying the template:
TASK [Run powershell script] ***************************************************************************************************************************************************************************
task path: /home/beefcake/.ansible/azure-json-deploy.yml:25
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: beefcake
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230 `" && echo ansible-tmp-1507219682.48-53342098196230="` echo /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230 `" ) && sleep 0'
<127.0.0.1> PUT /home/beefcake/.ansible/files/helloworld1.ps1 TO /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/ /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c ' /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": true,
"failed": true,
"msg": "non-zero return code",
"rc": 127,
"stderr": "/home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: 1: /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: =: not found\n/home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: 2: /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: Set-Content: not found\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit #/home/beefcake/.ansible/azure-json-deploy.retry
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
As far as I can tell, the script module should transfer the script to the machine and run it there, but for some reason it fails to run the script I have in a subfolder of the playbook.
Folder structure:
.ansible (folder)
  - ansible.cfg
  - azure-json-deploy.yml
  - azure_rm.ini
  - azure_rm.py
  - WindowsVirtualMachine.json
  - WindowsVirtualMachine.parameters.json
  - vars (folder)
    - vars.yml
    - vault.yml
  - files (folder)
    - helloworld1.ps1
    - helloworld2.ps1
Am I missing something?
Edit:
This is the second playbook I've created, as 4c74356b41 advised me to do.
---
# This playbook tests the script module on Windows hosts
- name: Run powershell script 1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Run powershell script
      script: files/helloworld1.ps1

# This playbook tests the script module on Windows hosts
- name: Run powershell script 2
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Run powershell script
      script: files/helloworld2.ps1
Which still generates the same error:
fatal: [localhost]: FAILED! => {
"changed": true,
"failed": true,
"msg": "non-zero return code",
"rc": 127,
"stderr": "/home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: 1: /home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: =: not found\n/home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: 2: /home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: Set-Content: not found\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit #/home/beefcake/.ansible/azure-json-deploy.retry
What Ansible is trying to do is copy the file from localhost to localhost, because the play is scoped to localhost; that is why /bin/sh on the control machine ends up executing the PowerShell script (hence the rc 127 and "Set-Content: not found" errors).
I would imagine you don't have that host in the hosts file when you launch the playbook.
You need to add the host to Ansible and scope the script tasks to that host.
You can either create another playbook to do that or add an add_host step in the current one.
- add_host:
    name: name
To scope tasks to the new host I'm using the import_playbook directive, which imports another playbook that is scoped to the host(s) in question. There might be a better way.
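For illustration, here is a minimal sketch of the add_host approach described above. The inventory name, group, IP, and credentials are placeholders, and it assumes the deployed VM already has WinRM enabled (for example via the custom script extension):

    # appended to the tasks of the "Deploy Azure ARM template." play
    - name: Add the deployed VM to the in-memory inventory
      add_host:
        name: myvm                                      # placeholder inventory name
        groups: deployed
        ansible_host: 1.2.3.4                           # the VM's public IP (placeholder)
        ansible_user: azureadmin                        # placeholder credentials
        ansible_password: "{{ vm_admin_password }}"
        ansible_connection: winrm
        ansible_winrm_server_cert_validation: ignore

# second play in the same playbook, scoped to the new host
- name: Run post-deploy PowerShell scripts
  hosts: deployed
  gather_facts: false
  tasks:
    - name: Run powershell script 1
      script: files/helloworld1.ps1
    - name: Run powershell script 2
      script: files/helloworld2.ps1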
Related
I am trying to write a playbook which needs to check if a resource exists in a namespace. If not, deploy it; if it exists, skip the deployment since it is already available. I am able to achieve the skip step, but not the deploy step when the resource does not exist.
- name: To check if the resource exists
  shell: |
    set -e -o pipefail;
    kubectl get pods -n test | grep postgres
  changed_when: false
  register: checkpod
  args:
    executable: /bin/bash

- name: Print checkpod
  debug:
    msg: "{{ checkpod.stdout_lines }}"

- name: Task to run the optional to install postgres
  shell: ansible-playbook -i ~/playbooks/deploy.yml
  when: checkpod | length == 0
My error is this:
"stderr": "No resources found in ama namespace.", "stderr_lines": ["No resources found in test namespace."], "stdout": "", "stdout_lines": []}
I have code like this:
- name: Check if postgres is running
  community.postgresql.postgresql_ping:
    db: "{{ stl_database }}"
    port: "{{ stl_postgres_port }}"
    login_host: "{{ db_host }}"
    login_password: "{{ postgres_password }}"
  register: postgres_availabe

- name: Check the database versions
  postgresql_query:
    db: "{{ stl_database }}"
    port: "{{ stl_postgres_port }}"
    login_host: "{{ db_host }}"
    login_user: postgres
    login_password: "{{ postgres_password }}"
    query: "{{ get_db_version }}"
  become: yes
  become_user: postgres
  register: db_version_return
  when: postgres_availabe.is_available == true
It uses two community modules, which I installed with ansible-galaxy collection install community.postgresql.
The first module checks if PostgreSQL is running on the remote server {{ db_host }}; the second runs a query (defined by {{ get_db_version }}) to get the code version from the PostgreSQL DB on the remote server {{ db_host }}. When I run the code, I get the output below:
TASK [Check if postgres is running] *****************************************************************************************************************************************************************************************************
Wednesday 08 June 2022 16:13:25 +0000 (0:00:00.030) 0:00:01.676 ********
ok: [localhost]
TASK [Check the database versions] ******************************************************************************************************************************************************************************************************
Wednesday 08 June 2022 16:13:25 +0000 (0:00:00.419) 0:00:02.095 ********
fatal: [localhost]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chmod: invalid mode: ‘A+user:postgres:rx:allow’\nTry 'chmod --help' for more information.\n}). For information on working around this, see https://docs.ansible.com/ansible-core/2.12/user_guide/become.html#risks-of-becoming-an-unprivileged-user"}
The first module works, but the second one errors. When I use -vvv on the CLI, I get details like this:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: philip.shangguan
<127.0.0.1> EXEC /bin/sh -c 'echo ~philip.shangguan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/tmp `"&& mkdir "` echo /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213 `" && echo ansible-tmp-1654700298.6125453-341391-58979494615213="` echo /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213 `" ) && sleep 0'
redirecting (type: modules) ansible.builtin.postgresql_query to community.postgresql.postgresql_query
Using module file /home/philip.shangguan/.ansible/collections/ansible_collections/community/postgresql/plugins/modules/postgresql_query.py
<127.0.0.1> PUT /home/philip.shangguan/.ansible/tmp/ansible-local-341261szx105mk/tmppy72gy9u TO /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py
<127.0.0.1> EXEC /bin/sh -c 'setfacl -m u:postgres:r-x /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chown postgres /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chmod +a '"'"'postgres allow read,execute'"'"' /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chmod A+user:postgres:rx:allow /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ > /dev/null 2>&1 && sleep 0'
It looks like the module is trying to do chmod +a and chmod A+user:postgres:rx:allow. If I manually try the command, I get:
chmod A+user:postgres:rx:allow rename_appdb.sql
chmod: invalid mode: ‘A+user:postgres:rx:allow’
Try 'chmod --help' for more information.
Any idea why the module is doing that? I have the same code running on another Ansible server that I used before, and it has been working (still today). But when I run this on a new Ansible server, where I installed the community modules yesterday, I get the errors above.
Thanks!
As β.εηοιτ.βε suggested, I removed the become and become_user from the code and it works.
The new code:
- name: Check the database versions
  postgresql_query:
    db: "{{ stl_database }}"
    port: "{{ stl_postgres_port }}"
    login_host: "{{ db_host }}"
    login_user: postgres
    login_password: "{{ postgres_password }}"
    query: "{{ get_db_version }}"
  register: db_version_return
  when: postgres_availabe.is_available == true
and the result:
TASK [debug] ************************************************************************************Wednesday 08 June 2022 20:16:19 +0000 (0:00:00.039) 0:00:02.880 ********
ok: [localhost] => {
"msg": "8-8-3"
}
Hello, I'm trying to start Docker Compose (docker-compose_mysql.yml up), but Ansible says there is no such file in the directory. I've already looked at other solutions on GitHub and Stack Overflow, but nothing has allowed me to solve my problem.
Thank you :)
My playbook:
---
- name: Mettre en place Redmine - mySQL
  connection: localhost
  hosts: localhost
  become_method: sudo
  tasks:
    - name: install docker-py
      pip: name=docker-py

    - name: Installer le docker compose
      command: sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      register: command1
    - debug: var=command1.stdout_lines

    - name: Installer le docker compose
      command: pip install docker-compose

    - name: download docker compose
      command: wget https://raw.githubusercontent.com/sameersbn/docker-redmine/master/docker-compose-mysql.yml
      register: command2
    - debug: var=command2.stdout_lines

    - name: docker compose run
      command: docker-compose-mysql.yml up-d
      register: command3
    - debug: var=command3.stdout_lines
My error (the French message means "No such file or directory"):
FAILED! => {"changed": false, "cmd": "docker-compose-mysql.yml up-d", "msg": "[Errno 2] Aucun fichier ou dossier de ce type: b'docker-compose-mysql.yml'", "rc": 2}
The file docker-compose-mysql.yml is in the directory.
- name: copy sql schema
  hosts: test-mysql
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ playbook_dir }}"
    - name: Docker compose
      command: docker-compose -f {{ name }}_compose.yml up -d
Then either move {{ name }}_compose.yml to that directory, or provide an absolute path in the command: docker-compose -f [abs_path]{{ name }}_compose.yml up -d
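Applied to the playbook in the question, a hedged sketch (assuming the compose file should live next to the playbook) could look like this:

    - name: download the docker compose file into the playbook directory
      get_url:
        url: https://raw.githubusercontent.com/sameersbn/docker-redmine/master/docker-compose-mysql.yml
        dest: "{{ playbook_dir }}/docker-compose-mysql.yml"

    - name: docker compose run
      command: docker-compose -f {{ playbook_dir }}/docker-compose-mysql.yml up -d
      register: command3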
Hi, I keep getting this error when using Ansible via Kubespray, and I am wondering how to overcome it:
TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)] ********************************************************************************************************************************************************************************************************
task path: /home/dc/xcp-projects/kubespray/roles/bootstrap-os/tasks/main.yml:50
<192.168.10.55> (1, b'\x1b[1;31m==== AUTHENTICATING FOR org.freedesktop.hostname1.set-hostname ===\r\n\x1b[0mAuthentication is required to set the local host name.\r\nMultiple identities can be used for authentication:\r\n 1. test\r\n 2. provision\r\n 3. dc\r\nChoose identity to authenticate as (1-3): \r\n{"msg": "Command failed rc=1, out=, err=\\u001b[0;1;31mCould not set property: Connection timed out\\u001b[0m\\n", "failed": true, "invocation": {"module_args": {"name": "node3", "use": null}}}\r\n', b'Shared connection to 192.168.10.55 closed.\r\n')
<192.168.10.55> Failed to connect to the host via ssh: Shared connection to 192.168.10.55 closed.
<192.168.10.55> ESTABLISH SSH CONNECTION FOR USER: provision
<192.168.10.55> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="provision"' -o ConnectTimeout=10 -oStrictHostKeyChecking=no -o ControlPath=/home/dc/.ansible/cp/c6d70a0b7d 192.168.10.55 '/bin/sh -c '"'"'rm -f -r /home/provision/.ansible/tmp/ansible-tmp-1614373378.5434802-17760837116436/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.10.56> (0, b'', b'')
fatal: [node2]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"name": "node2",
"use": null
}
},
"msg": "Command failed rc=1, out=, err=\u001b[0;1;31mCould not set property: Method call timed out\u001b[0m\n"
}
My inventory file is as follows:
all:
  hosts:
    node1:
      ansible_host: 192.168.10.54
      ip: 192.168.10.54
      access_ip: 192.168.10.54
    node2:
      ansible_host: 192.168.10.56
      ip: 192.168.10.56
      access_ip: 192.168.10.56
    node3:
      ansible_host: 192.168.10.55
      ip: 192.168.10.55
      access_ip: 192.168.10.55
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
I also have a file which provisions the users in the following manner:
- name: Add a new user named provision
  user:
    name: provision
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add a new user named dc
  user:
    name: dc
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add provision user to the sudoers
  copy:
    dest: "/etc/sudoers.d/provision"
    content: "provision ALL=(ALL) NOPASSWD: ALL"

- name: Add provision user to the sudoers
  copy:
    dest: "/etc/sudoers.d/dc"
    content: "dc ALL=(ALL) NOPASSWD: ALL"

- name: Disable Root Login
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: "PermitRootLogin no"
    state: present
    backup: yes
  notify:
    - Restart ssh
I have run the ansible-playbook command in the following manner:
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" kubespray/cluster.yml -vvv
as well as
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" --become-user="provision" kubespray/cluster.yml -vv
Both yield the same error, and interestingly, escalation seems to succeed at earlier points.
After reading this article:
https://askubuntu.com/questions/542397/change-default-user-for-authentication
I decided to add the users to the sudo group, but the error still persists.
Looking at the position in main.yml suggested by the error, it seems this code may be causing the issue:
# Workaround for https://github.com/ansible/ansible/issues/42726
# (1/3)
- name: Gather host facts to get ansible_os_family
  setup:
    gather_subset: '!all'
    filter: ansible_*

- name: Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)
  hostname:
    name: "{{ inventory_hostname }}"
  when:
    - override_system_hostname
    - ansible_os_family not in ['Suse', 'Flatcar Container Linux by Kinvolk', 'ClearLinux'] and not is_fedora_coreos
The OS of the hosts is Ubuntu 20.04.2 Server.
Is there anything more I can do?
From the Kubespray documentation:
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
As stated, --become is mandatory; it enables the privilege escalation needed for most of the system modifications (like setting the hostname) that Kubespray performs.
With --user=provision you're just setting the SSH user, but it will need privilege escalation anyway.
With --become-user=provision you're just saying that privilege escalation will escalate to the 'provision' user (and you would still need --become to do the privilege escalation).
In both cases, unless the 'provision' user has root permissions (and putting it in the root group is not necessarily enough), it won't be sufficient.
For the 'provision' user to be enough, you need to make sure that it can run hostnamectl <some-new-host> without being asked for authentication.
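Concretely, keeping the other flags from the question, the invocation would look something like the sketch below (it assumes the provision user has passwordless sudo, which the sudoers.d files above already grant):

ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml \
  --user="provision" --become --become-user=root \
  --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" \
  --ssh-extra-args="-oStrictHostKeyChecking=no" \
  kubespray/cluster.yml -vvv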
I'm trying to figure out how to use Ansible with Vagrant the proper way. By default, it seems Vagrant isolates Ansible execution per box and runs the playbook after each box comes up, only for the parts that apply to that single box in the loop. I find this very counterproductive, and I have tried tricking Vagrant into executing the playbook across all of the hosts after all of them have booted, but it seems Ansible, when started from Vagrant, never sees more than a single box at a time.
Edit: these are the versions I am working with:
Vagrant: 2.2.6
Ansible: 2.5.1
Virtualbox: 6.1
The playbook (with the hosts.ini) by itself executes without issues when I run it stand-alone with the ansible-playbook executable after the hosts come up, so the problem is with my Vagrantfile. I just cannot figure it out.
This is the Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

IMAGE_NAME = "ubuntu/bionic64"

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.box = IMAGE_NAME

  # Virtualbox configuration
  config.vm.provider "virtualbox" do |v|
    v.memory = 4096
    v.cpus = 2
    #v.linked_clone = true
  end

  # master and node definition
  boxes = [
    { :name => "k8s-master", :ip => "192.168.50.10" },
    { :name => "k8s-node-1", :ip => "192.168.50.11" }
  ]

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.network :private_network, ip: opts[:ip]

      if opts[:name] == "k8s-node-1"
        config.vm.provision "ansible_local" do |ansible|
          ansible.compatibility_mode = "2.0"
          ansible.limit = "all"
          ansible.config_file = "ansible.cfg"
          ansible.become = true
          ansible.playbook = "playbook.yml"
          ansible.groups = {
            "masters" => ["k8s-master"],
            "nodes" => ["k8s-node-1"]
          }
        end
      end
    end
  end
end
ansible.cfg
[defaults]
connection = smart
timeout = 60
deprecation_warnings = False
host_key_checking = False
inventory = hosts.ini
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
hosts.ini
[masters]
k8s-master ansible_host=192.168.50.10 ansible_user=vagrant
[nodes]
k8s-node-1 ansible_host=192.168.50.11 ansible_user=vagrant
[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
playbook.yml
- hosts: all
  become: yes
  tasks:
    - name: Update apt cache.
      apt: update_cache=yes cache_valid_time=3600
      when: ansible_os_family == 'Debian'

    - name: Ensure swap is disabled.
      mount:
        name: swap
        fstype: swap
        state: absent

    - name: Disable swap.
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: create the 'mobile' user
      user: name=mobile append=yes state=present createhome=yes shell=/bin/bash

    - name: allow 'mobile' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'mobile ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: set up authorized keys for the mobile user
      authorized_key:
        user: mobile
        key: "{{ lookup('pipe','cat ssh_keys/*.pub') }}"
        state: present
        exclusive: yes

- hosts: all
  become: yes
  tasks:
    - name: install Docker
      apt:
        name: docker.io
        state: present
        update_cache: true

    - name: install APT Transport HTTPS
      apt:
        name: apt-transport-https
        state: present

    - name: add Kubernetes apt-key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: add Kubernetes' APT repository
      apt_repository:
        repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: 'kubernetes'

    - name: install kubelet
      apt:
        name: kubelet=1.17.0-00
        state: present
        update_cache: true

    - name: install kubeadm
      apt:
        name: kubeadm=1.17.0-00
        state: present

- hosts: masters
  become: yes
  tasks:
    - name: install kubectl
      apt:
        name: kubectl=1.17.0-00
        state: present
        force: yes

- hosts: k8s-master
  become: yes
  tasks:
    - name: check docker status
      systemd:
        state: started
        name: docker

    - name: initialize the cluster
      shell: kubeadm init --apiserver-advertise-address 192.168.50.10 --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: mobile
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/mobile/.kube/config
        remote_src: yes
        owner: mobile

    - name: install Pod network
      become: yes
      become_user: mobile
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

- hosts: k8s-master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command 2>/dev/null
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: nodes
  become: yes
  tasks:
    - name: check docker status
      systemd:
        state: started
        name: docker

    - name: join cluster
      shell: "{{ hostvars['k8s-master'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt
The moment the playbook tries to execute against k8s-master, it fails like this:
fatal: [k8s-master]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-master: Temporary failure in name resolution", "unreachable": true}
The host is up. SSH works.
Who can help me sort this out?
Thanks!
I have managed to use Ansible inside of Vagrant.
Here is what I did to make it work:
Steps to reproduce:
- Install Vagrant, Virtualbox
- Create all the necessary files and directories:
  - ansible.cfg
  - playbook.yml
  - hosts
  - insecure_private_key
  - Vagrantfile
- Test
Install Vagrant, Virtualbox
Follow the installation guides at the appropriate sites:
- Vagrant
- Virtualbox
Create all the necessary files and directories
This example is based on the original poster's files.
Create vagrant and ansible folders to store all the configuration files and directories. The structure could look like this:
vagrant - directory
  - Vagrantfile - file with the main configuration
ansible - directory
  - ansible.cfg - Ansible configuration file
  - playbook.yml - file with steps for Ansible to execute
  - hosts - file with information about hosts
  - insecure_private_key - private key of the created machines
The ansible folder is a separate directory that will be copied to k8s-node-1.
By default, Vagrant shares the vagrant folder with permissions of 777. That allows the owner, group, and others full access to everything inside of it.
Logging in to the virtual machine manually and running the ansible-playbook command inside the vagrant directory will output errors connected with permissions. That would render ansible.cfg and insecure_private_key useless.
Ansible.cfg
Ansible.cfg is the configuration file of Ansible. The example used is below:
[defaults]
connection = smart
timeout = 60
deprecation_warnings = False
host_key_checking = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
Create ansible.cfg inside the ansible directory.
Playbook.yml
The example playbook.yml is a file with steps for Ansible to execute.
It will check connections and test whether the groups are configured correctly:
- name: Check all connections
  hosts: all
  tasks:
    - name: Ping
      ping:

- name: Check specific connection to masters
  hosts: masters
  tasks:
    - name: Ping
      ping:

- name: Check specific connection to nodes
  hosts: nodes
  tasks:
    - name: Ping
      ping:
Create playbook.yml inside the ansible directory.
Insecure_private_key
To successfully connect to the virtual machines you will need the insecure_private_key. You can create it by invoking the command $ vagrant init inside the vagrant directory.
It will create insecure_private_key on your physical machine in HOME_DIRECTORY/.vagrant.d.
Copy it to the ansible folder.
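For example (a sketch; adjust the paths to your own home directory and repository layout):

cp ~/.vagrant.d/insecure_private_key ./ansible/insecure_private_key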
Hosts
The hosts file below is responsible for passing the information about the hosts to Ansible:
[masters]
k8s-master ansible_host=192.168.50.10 ansible_user=vagrant
[nodes]
k8s-node-1 ansible_host=192.168.50.11 ansible_user=vagrant
[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/ansible/insecure_private_key
Create the hosts file inside the ansible directory.
Please take a closer look at: ansible_ssh_private_key_file=/ansible/insecure_private_key
This is the declaration for Ansible to use the earlier mentioned key.
Vagrant
The Vagrantfile is the main configuration file:
# -*- mode: ruby -*-
# vi: set ft=ruby :

IMAGE_NAME = "ubuntu/bionic64"

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.box = IMAGE_NAME

  # Virtualbox configuration
  config.vm.provider "virtualbox" do |v|
    v.memory = 4096
    v.cpus = 2
    #v.linked_clone = true
  end

  # master and node definition
  boxes = [
    { :name => "k8s-master", :ip => "192.168.50.10" },
    { :name => "k8s-node-1", :ip => "192.168.50.11" }
  ]

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.network :private_network, ip: opts[:ip]

      if opts[:name] == "k8s-node-1"
        config.vm.synced_folder "../ansible", "/ansible", :mount_options => ["dmode=700", "fmode=700"]

        config.vm.provision "ansible_local" do |ansible|
          ansible.compatibility_mode = "2.0"
          ansible.limit = "all"
          ansible.config_file = "/ansible/ansible.cfg"
          ansible.become = true
          ansible.playbook = "/ansible/playbook.yml"
          ansible.inventory_path = "/ansible/hosts"
        end
      end
    end
  end
end
Please take a closer look at:
config.vm.synced_folder "../ansible", "/ansible", :mount_options => ["dmode=700", "fmode=700"]
config.vm.synced_folder will copy the ansible directory to k8s-node-1 with all the files inside.
It will set permissions for full access only for the owner (the vagrant user).
ansible.inventory_path = "/ansible/hosts"
ansible.inventory_path will tell Vagrant to provide the hosts file to Ansible.
Test
To check, run the following command from the vagrant directory:
$ vagrant up
The part of the output responsible for Ansible should look like this:
==> k8s-node-1: Running provisioner: ansible_local...
k8s-node-1: Installing Ansible...
k8s-node-1: Running ansible-playbook...
PLAY [Check all connections] ***************************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-master]
ok: [k8s-node-1]
TASK [Ping] ********************************************************************
ok: [k8s-master]
ok: [k8s-node-1]
PLAY [Check specific connection to masters] ************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-master]
TASK [Ping] ********************************************************************
ok: [k8s-master]
PLAY [Check specific connection to nodes] **************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-node-1]
TASK [Ping] ********************************************************************
ok: [k8s-node-1]
PLAY RECAP *********************************************************************
k8s-master : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
k8s-node-1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0