Deploy and configure EC2 with Ansible - deployment

I'm trying to deploy and configure a Kubernetes cluster on EC2/AWS with Ansible.
I want to do the deployment and configuration as part of the same playbook.
main.yml:
- hosts: localhost
  gather_facts: false
  vars_files:
    - vars/main.yml
  tasks:
    - name: Deploy the master for the kubernetes cluster
      include_tasks: tasks/kub_master.yml
    - name: Configure Master Kub node
      include_tasks: tasks/config_kub_master.yml
kub_master.yml:
---
- name: Deploy the admin node
  ec2:
    region: "{{ region }}"
    key_name: "{{ ssh_key_name }}"
    instance_type: "{{ master_inst_type }}"
    image: "{{ image_id }}"
    count: "{{ master_inst_count }}"
    assign_public_ip: no
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    wait: yes
    wait_timeout: 1800
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 50
        delete_on_termination: true
    user_data: "{{ lookup('file', '../files/user_data_master.sh') }}"
    instance_tags:
      Name: "{{ kub_cluster }}-admin-node"
      lob: "{{ tags_lob }}"
      project: "{{ tags_project }}"
      component: "{{ kub_cluster }}_kub_master_node"
      contact_email: "{{ tags_contact_email }}"
      product: "{{ tags_product }}"
  async: 45
  poll: 25
  register: kub_mas
Kub_configure.yml:
---
- hosts: "{{ kub_mas.instances[0].private_ip }}"
  gather_facts: true
  remote_user: remote_user
  shell: " cat /etc/redhat-release "
However, this doesn't seem to work at the Kub_configure step; it fails on the remote execution.
How can we deploy and then use the IP from the deployment to configure the node in a single Ansible playbook?
Here's the output of the Ansible run:
You can see the task is trying to execute locally even though I'm giving it a remote address.
TASK [Configure Master Kub node] ******************************************************************************************************************************************************************************
task path: /home/username/Repo_S/kube_cluster/cluster_deploy/cluster_deploy.yml:11
Read vars_file 'vars/main.yml'
included: /home/username/Repo_S/kube_cluster/cluster_deploy/tasks/master/master_config.yml for localhost
Read vars_file 'vars/main.yml'
Read vars_file 'vars/main.yml'
TASK [shell] **************************************************************************************************************************************************************************************************
task path: /home/username/Repo_S/kube_cluster/cluster_deploy/tasks/master/master_config.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: username
<localhost> EXEC /bin/sh -c 'echo ~username && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813 `" && echo ansible-tmp-1541439662.55-121688078512813="` echo /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813 `" ) && sleep 0'
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/commands/command.py
<localhost> PUT /home/username/.ansible/tmp/ansible-local-30214U7V93F/tmpmcASIl TO /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/ /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py && sleep 0'
<localhost> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-sywhejuzolifjntwhpbxlesbbbutlegn; /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py'"'"' && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "sudo: a password is required\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
to retry, use: --limit #/home/username/Repo_S/kube_cluster/cluster_deploy/cluster_deploy.retry
PLAY RECAP ****************************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=1

In Kub_configure.yml, try using become: yes. This may fix the issue:
---
- hosts: "{{ kub_mas.instances[0].private_ip }}"
  gather_facts: true
  become: yes
  remote_user: remote_user
  shell: " cat /etc/redhat-release "

Related

ChaosToolkit --var-file option fails with error: no such option: --var-file /tmp/token.env

We are trying to run a chaos experiment and running into this error:
ubuntu@ip-x-x-x-x:~$ kubectl logs -f pod/chaos-testing-hn8c5 -n <ns>
[2022-12-08 16:05:22 DEBUG] [cli:70] ###############################################################################
[2022-12-08 16:05:22 DEBUG] [cli:71] Running command 'run'
[2022-12-08 16:05:22 DEBUG] [cli:75] Using settings file '/root/.chaostoolkit/settings.yaml'
Usage: chaos run [OPTIONS] SOURCE
Try 'chaos run --help' for help.
Error: no such option: --var-file /tmp/token.env
Here is the spec file:
spec:
  serviceAccountName: {{ .Values.serviceAccount.name }}
  restartPolicy: Never
  initContainers:
    - name: {{ .Values.initContainer.name }}
      image: "{{ .Values.initContainer.image.name }}:{{ .Values.initContainer.image.tag }}"
      imagePullPolicy: {{ .Values.initContainer.image.pullPolicy }}
      command: ["sh", "-c", "curl -X POST https://<url> -H 'Content-Type: application/x-www-form-urlencoded' -d 'grant_type=client_credentials&client_id=<client_id>&client_secret=<client_secret>' | jq -r --arg prefix 'ACCESS_TOKEN=' '$prefix + (.access_token)' > /tmp/token.env;"]
      volumeMounts:
        - name: token-path
          mountPath: /tmp
        - name: config
          mountPath: /experiment
          readOnly: true
  containers:
    - name: {{ .Values.image.name }}
      securityContext:
        privileged: true
        capabilities:
          add: ["SYS_ADMIN"]
        allowPrivilegeEscalation: true
      image: {{ .Values.image.repository }}
      args:
        - --verbose
        - run
        - --var-file /tmp/token.env
        - /experiment/terminate-all-pods.yaml
      env:
        - name: CHAOSTOOLKIT_IN_POD
          value: "true"
      volumeMounts:
        - name: token-path
          mountPath: /tmp
        - name: config
          mountPath: /experiment
          readOnly: true
      resources:
        limits:
          cpu: 20m
          memory: 64Mi
        requests:
          cpu: 20m
          memory: 64Mi
  volumes:
    - name: token-path
      emptyDir: {}
    - name: config
      configMap:
        name: {{ .Values.experiments.name }}
We have also tried using --var "KEY=VALUE", which failed with the same error.
Any help with this is appreciated; we have hit a wall at this point.
The Docker image being used is: https://hub.docker.com/r/chaostoolkit/chaostoolkit/tags
The Kubernetes manifest is slightly incorrect.
The environment variable injection worked when passing it like this:
args:
  - --verbose
  - run
  - --var-file
  - /tmp/token.env
  - /experiment/terminate-all-pods.yaml
The option flag and its value need to be two separate list entries: each element of args reaches the container as one argv entry, so putting "--var-file /tmp/token.env" on a single line makes the chaos CLI see one argument containing a space, which it cannot parse as an option.
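As a quick illustration of why (the values are just the ones from the question), each args list element is handed to the container's entrypoint as a single argv entry:

# One list element -> one argv entry: the CLI receives the single token
# "--var-file /tmp/token.env" and rejects it as an unknown option.
args: ["--verbose", "run", "--var-file /tmp/token.env", "/experiment/terminate-all-pods.yaml"]

# Separate elements -> separate argv entries: flag and value are parsed normally.
args: ["--verbose", "run", "--var-file", "/tmp/token.env", "/experiment/terminate-all-pods.yaml"]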

Ansible loop based on fact to Restart Kafka Connector Failed Tasks

I want to restart Kafka Connect tasks that are in a failed state using ansible-playbook. I have fetched the connector task states with set_fact.
I want to loop over the collected facts to restart the Kafka Connector tasks, using the connector name and task id.
tasks:
  - name: Gathering Connector Names
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
      status_code: 200
    register: conn_stat

  - name: Checking for Connector status
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ abc_conn_name }}/status"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
    loop: "{{ conn_name }}"
    loop_control:
      loop_var: abc_conn_name
    vars:
      conn_name: "{{ conn_stat.json }}"
    register: conn_stat_1

  - name: Gathering Failed task id
    set_fact:
      failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name, id: [tasks[?state == `RUNNING`].id [] | [0] ]}') }}"

  - name: Failed connector name with tasks id
    ansible.builtin.debug:
      var: failed_connector_name_task_id
I'm getting the values below from the fact, which I need to feed into a loop:
"failed_connector_name_task_id": [
{
"id": [
0
1
],
"name": "test-connector-sample"
},
{
"id": [
0
1
],
"name": "confluent-connect"
},
{
"id": [
0
1
2
],
"name": "confluent-test-1"
}
]
},
"changed": false
}
The values need to be posted in this task:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/**{{ name of connector }}**/tasks/**{{ task ID }}**/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  register: conn_stat
I want to use the connector name and the task ID in a loop; in the task above I need to set up that loop.
As you can see, connector 'confluent-test-1' has three tasks in a failed state, so it needs to be iterated three times.
This is a typical case where you want to use subelements, either through the filter or the lookup. Here is an example using the filter:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"
References worth reading:
ansible loops
subelements filter
You can actually remove the last, now unnecessary set_fact task with e.g. the following construct:
- name: Restart Connector Failed tasks
  vars:
    failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name, id: [tasks[?state == `RUNNING`].id [] | [0] ]}') }}"
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"

How to fix "[WARNING]: Could not match supplied host pattern, ignoring: kube-master"

I'm trying to install Kubernetes on Google Cloud instances using Ansible, and it says it can't find a match over and over again
when I run ansible-playbook -i inventory/mycluster/inventory.ini -v --become --become-user=root cluster.yml:
[WARNING]: Could not match supplied host pattern, ignoring: kube-master
PLAY [Add kube-master nodes to kube_control_plane] ***********************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: kube-node
PLAY [Add kube-node nodes to kube_node] **********************************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: k8s-cluster
My inventory.ini:
[all]
instance-1 ansible_ssh_host=10.182.0.2 ip = 34.125.199.45 etcd_member_name=etcd1
instance-2 ansible_ssh_host=10.182.0.3 ip = 34.125.217.86 etcd_member_name=etcd2
instance-3 ansible_ssh_host=10.182.0.4 ip = 34.125.112.124 etcd_member_name=etcd3
instance-4 ansible_ssh_host=10.182.0.5 ip = 34.125.251.168
instance-5 ansible_ssh_host=10.182.0.6 ip = 34.125.231.40
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
instance-1
instance-2
instance-3
[etcd]
instance-1
instance-2
instance-3
[kube-node]
instance-4
instance-5
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr
My cluster.yml:
---
- name: Check ansible version
  import_playbook: ansible_version.yml

- name: Ensure compatibility with old groups
  import_playbook: legacy_groups.yml

- hosts: bastion[0]
  gather_facts: False
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }

- hosts: k8s_cluster:etcd
  strategy: linear
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  gather_facts: false
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bootstrap-os, tags: bootstrap-os }

- name: Gather facts
  tags: always
  import_playbook: facts.yml

- hosts: k8s_cluster:etcd
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
    - { role: download, tags: download, when: "not skip_downloads" }

- hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
I changed - to _ and did some other renaming work, and it still doesn't find a match. I don't understand how it works; could you please help me fix this?
I had the same error and noticed that it does not exist in release-2.15 (for example), where the node groups are written with "-", not "_", initially. So if you don't care about the release number, use 2.15. At least it helped me.
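If you would rather stay on the newer release: the cluster.yml shown above targets k8s_cluster (and, via the compatibility play, kube_control_plane and kube_node), so the inventory groups need to use those underscore names. A sketch of the group sections with the renamed groups (assuming a kubespray release that has switched to underscores; the calico_rr name rests on the same assumption):

[kube_control_plane]
instance-1
instance-2
instance-3

[etcd]
instance-1
instance-2
instance-3

[kube_node]
instance-4
instance-5

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

With those group names in place, the remaining "Could not match supplied host pattern" warnings for kube-master and kube-node come only from the "Ensure compatibility with old groups" play and should be harmless.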

How to get postgresql_query results from Ansible

I'm trying to print the output of a PostgreSQL query that is run by Ansible. Unfortunately I'm not sure how to get hold of the return value.
- name: Get specific tables
  postgresql_query:
    db: "{{ database_name }}"
    login_host: "{{ my_host }}"
    login_user: "{{ my_user }}"
    login_password: "{{ my_password }}"
    query: SELECT * FROM pg_tables t WHERE t.tableowner = current_user
Googling just says to use register:, but the PostgreSQL ansible module does not have a register param:
fatal: [xx.xxx.xx.xx]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (postgresql_query) module: register Supported parameters include: ca_cert, db, login_host, login_password, login_unix_socket, login_user, named_args, path_to_script, port, positional_args, query, session_role, ssl_mode"}
The Ansible docs list return values for this module but there are no examples on how to use them, and everything I search for leads right back to register:.
Sounds like you are very close, but have register at the wrong indentation. It's a parameter of the task itself, not the postgresql module.
Try:
- name: Get specific tables
  postgresql_query:
    db: "{{ database_name }}"
    login_host: "{{ my_host }}"
    login_user: "{{ my_user }}"
    login_password: "{{ my_password }}"
    query: SELECT * FROM pg_tables t WHERE t.tableowner = current_user
  register: result

- debug:
    var: result
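As a follow-up, the rows themselves are under result.query_result, a list with one dictionary per returned row (this is the documented return value of postgresql_query). For example, to print only the table names (assuming the pg_tables column tablename):

- name: Print just the table names
  debug:
    msg: "{{ result.query_result | map(attribute='tablename') | list }}"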

Ansible - 'unicode object' has no attribute 'file_input'

I'm working with Ansible 2.2.1.0 on an old project, made by someone else, that contains errors.
I have the following variables in my code:
software_output:
  - { file_input: 'Download_me.zip', file_output: 'download.zip' }
software_version: "0.5,0.6"
And I have this shell task to download from an FTP server:
- name: "MySoftware | get package on FTP"
  shell: >
    curl --ftp-ssl -k {{ ' --ssl-allow-beast ' if os == 'aix' else "" }} -# -f -u {{ ftp_user }}:{{ ftp_password }} -f "{{ ftp_url | replace('##software_version##', item[1]) }}{{ item[0].file_input }}"
    -o {{ require_inst_dir }}/{{ item[0].file_output }} 2>/dev/null
  with_nested:
    - software_output
    - "{{ software_version.split(',') }}"
  when: software_version is defined
But it doesn't work at all; I get the following error:
'unicode object' has no attribute 'file_input'
It looks like with_nested is not being used the way it should be. Did I miss something?
In:
with_nested:
  - software_output
software_output is just the literal string "software_output".
To refer to the variable value, change it to:
with_nested:
  - "{{ software_output }}"
A long time ago the first (bare variable name) syntax was valid, but that was a long time ago.
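For completeness, a sketch of how the corrected loop then expands (the curl command is abbreviated from the question):

- name: "MySoftware | get package on FTP"
  shell: >
    curl ... "{{ ftp_url | replace('##software_version##', item[1]) }}{{ item[0].file_input }}"
    -o {{ require_inst_dir }}/{{ item[0].file_output }}
  with_nested:
    - "{{ software_output }}"               # item[0] is now the dict, so item[0].file_input resolves
    - "{{ software_version.split(',') }}"   # item[1] is one version string, '0.5' or '0.6'
  when: software_version is defined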