I am trying to use the value I print in a debug message as a conditional on a Kubernetes object, but it looks like the conditional doesn't recognise it properly:
- name: get some service status log
  kubernetes.core.k8s_log:
    namespace: "{{ product.namespace }}"
    label_selectors:
      - app.kubernetes.io/name=check-service-existence
  register: service_existence

- name: some service existence check log
  debug:
    msg: "{{ service_existence.log_lines | first }}"

- name: create service for "{{ product.namespace }}"
  kubernetes.core.k8s:
    state: present
    template: create-service.j2
    wait: yes
    wait_timeout: 300
    wait_condition:
      type: "Complete"
      status: "True"
  when: service_existence == "service_does_not_exist"
What I am getting when I run it is:
TASK [playbook : some service existence check log] ***
ok: [127.0.0.1] =>
msg: service_does_not_exist
TASK [playbook : create service for "namespace"] ***
skipping: [127.0.0.1]
I suspect that it treats msg: as part of the string. How can I deal with this properly?
Since your debug message prints the value of service_existence.log_lines | first, your conditional should compare against that same expression:
when: service_existence.log_lines | first == "service_does_not_exist"
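As a minimal sketch, assuming the container really logs the literal line service_does_not_exist, the create task with the corrected condition (everything else unchanged from your playbook):

- name: create service for "{{ product.namespace }}"
  kubernetes.core.k8s:
    state: present
    template: create-service.j2
    wait: yes
    wait_timeout: 300
    wait_condition:
      type: "Complete"
      status: "True"
  when: service_existence.log_lines | first == "service_does_not_exist"

If the log line carries trailing whitespace, adding | trim before the comparison (service_existence.log_lines | first | trim) may also be needed.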
I want to restart Kafka Connect tasks that are in a failed state using an ansible-playbook. I have fetched the connector task states using set_fact, and now I want to loop over the collected facts to restart the Kafka Connector tasks using the connector name and task id.
tasks:
  - name: Gathering Connector Names
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
      status_code: 200
    register: conn_stat

  - name: Checking for Connector status
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ abc_conn_name }}/status"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
    loop: "{{ conn_name }}"
    loop_control:
      loop_var: abc_conn_name
    vars:
      conn_name: "{{ conn_stat.json }}"
    register: conn_stat_1

  - name: Gathering Failed task id
    set_fact:
      failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name ,id: [tasks[?state == `RUNNING`].id [] | [0] ]}') }}"

  - name: Failed connector name with tasks id
    ansible.builtin.debug:
      var: failed_connector_name_task_id
I am getting the values below from the fact, and these are what I need to feed into a loop:
"failed_connector_name_task_id": [
{
"id": [
0
1
],
"name": "test-connector-sample"
},
{
"id": [
0
1
],
"name": "confluent-connect"
},
{
"id": [
0
1
2
],
"name": "confluent-test-1"
}
]
},
"changed": false
}
The values need to be posted like this:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/**{{name of connector}}**/tasks/**{{task ID}}**/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  register: conn_stat
The name of the connector and the task ID are what I want to use in the loop. In the above I need to set up a loop over the tasks. As we can see, connector 'confluent-test-1' has three tasks in a failed state, so it needs to be iterated three times with connector 'confluent-test-1'.
This is a typical case where you want to use subelements, either through the filter or the lookup of that name. Here is an example using the filter:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"
References worth reading:
ansible loops
subelements filter
You can actually remove the now-unnecessary set_fact task entirely with e.g. the following construct:
- name: Restart Connector Failed tasks
  vars:
    failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name ,id: [tasks[?state == `RUNNING`].id [] | [0] ]}') }}"
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"
I'm trying to install Kubernetes on Google Cloud instances using Ansible, and it keeps telling me it can't match the supplied hosts. When I run ansible-playbook -i inventory/mycluster/inventory.ini -v --become --become-user=root cluster.yml:
[WARNING]: Could not match supplied host pattern, ignoring: kube-master
PLAY [Add kube-master nodes to kube_control_plane] ***********************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: kube-node
PLAY [Add kube-node nodes to kube_node] **********************************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: k8s-cluster
My inventory.ini :
[all]
instance-1 ansible_ssh_host=10.182.0.2 ip = 34.125.199.45 etcd_member_name=etcd1
instance-2 ansible_ssh_host=10.182.0.3 ip = 34.125.217.86 etcd_member_name=etcd2
instance-3 ansible_ssh_host=10.182.0.4 ip = 34.125.112.124 etcd_member_name=etcd3
instance-4 ansible_ssh_host=10.182.0.5 ip = 34.125.251.168
instance-5 ansible_ssh_host=10.182.0.6 ip = 34.125.231.40
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
instance-1
instance-2
instance-3
[etcd]
instance-1
instance-2
instance-3
[kube-node]
instance-4
instance-5
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr
My cluster.yml :
---
- name: Check ansible version
  import_playbook: ansible_version.yml

- name: Ensure compatibility with old groups
  import_playbook: legacy_groups.yml

- hosts: bastion[0]
  gather_facts: False
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }

- hosts: k8s_cluster:etcd
  strategy: linear
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  gather_facts: false
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bootstrap-os, tags: bootstrap-os}

- name: Gather facts
  tags: always
  import_playbook: facts.yml

- hosts: k8s_cluster:etcd
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
    - { role: download, tags: download, when: "not skip_downloads" }

- hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
I changed - to _ and did some other renaming work, and it still doesn't find a match. I don't understand how it works... would you please help me fix this?
I had the same error, and noticed that it does not occur in release-2.15 (for example), where the node groups are still written with "-" rather than "_". So if you don't care about the release number, use 2.15. At least that helped me.
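Alternatively, if you want to stay on a newer branch, here is a sketch of the inventory group section, assuming a Kubespray version whose sample inventory uses underscored group names (matching the k8s_cluster and kube_control_plane references in the cluster.yml above); the host lines under [all] stay as in the original inventory:

[kube_control_plane]
instance-1
instance-2
instance-3

[etcd]
instance-1
instance-2
instance-3

[kube_node]
instance-4
instance-5

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr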
With Ansible, I want to find which port is available in a range on a K8s cluster and use that port to temporarily expose a service.
I'm able to find and extract the port, but when I declare the NodePort using that port, the task fails.
It seems that Ansible is not converting my "port" variable to an int with the instruction {{ port|int }}.
- block:
    - name: List all ports in range 32200 to 32220
      wait_for:
        port: "{{ item|int }}"
        timeout: 1
        state: stopped
        msg: "Port {{ item }} is already in use"
      register: available_ports
      with_sequence: start=32200 end=32220
      ignore_errors: yes

    - name: extract first unused port from list
      set_fact:
        port: "{{ available_ports.results | json_query(\"[? state=='stopped'].port\") | first }}"

    - debug:
        var: port

    - name: Expose service as a nodeport service
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: "{{ namespace }}-service-nodeport"
            namespace: "{{ namespace }}"
          spec:
            type: NodePort
            selector:
              component: my-app
            ports:
              - protocol: TCP
                targetPort: 5432
                nodePort: "{{ port|int }}"
                port: 5432
This outputs the following:
TASK [../roles/my-role : debug] ***************************************************************************************************************************************************************************************************
ok: [127.0.0.1] => {
"port": "32380"
}
TASK [../roles/my-role : Expose service as a nodeport service] *******************************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "error": 400, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Service in version \\\\\"v1\\\\\" cannot be handled as a Service: v1.Service.Spec: v1.ServiceSpec.Ports: []v1.ServicePort: v1.ServicePort.NodePort: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|dePort\\\\\": \\\\\"32380\\\\\", \\\\\"p|..., bigger context ...|rotocol\\\\\": \\\\\"TCP\\\\\", \\\\\"targetPort\\\\\": 5432, \\\\\"nodePort\\\\\": \\\\\"32380\\\\\", \\\\\"port\\\\\": 5432}]}}|...\",\"reason\":\"BadRequest\",\"code\":400}\\n'", "reason": "Bad Request", "status": 400}
If I set the nodePort to a fixed value such as 32800, it works.
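For background, with Ansible's default templating every "{{ ... }}" expression in task arguments is rendered back to a string, which is why the API receives "32380" rather than 32380. A minimal sketch, assuming Ansible >= 2.7, of enabling Jinja2 native types so the | int cast survives templating:

# ansible.cfg (assumption: Ansible >= 2.7 with Jinja2 native types available)
[defaults]
# Render templated values with their native Python types, so
# "{{ port | int }}" reaches the k8s module as an integer.
jinja2_native = True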
Disclaimer: this is my first time using Prometheus.
I am trying to send a Slack notification every time a Job ends successfully.
To achieve this, I installed kube-state-metrics, Prometheus and AlertManager.
Then I created the following rule:
rules:
  - alert: KubeJobCompleted
    annotations:
      identifier: '{{ $labels.instance }}'
      summary: Job Completed Successfully
      description: Job *{{ $labels.namespace }}/{{ $labels.job_name }}* is completed successfully.
    expr: |
      kube_job_spec_completions{job="kube-state-metrics"} - kube_job_status_succeeded{job="kube-state-metrics"} == 0
    labels:
      severity: information
And added the AlertManager receiver text (template):
{{ define "custom_slack_message" }}
{{ range .Alerts }}
{{ .Annotations.description }}
{{ end }}
{{ end }}
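For context, a minimal sketch of how such a template is typically referenced from a Slack receiver in alertmanager.yml; the receiver name and channel below are hypothetical:

receivers:
  - name: 'slack-notifications'          # hypothetical receiver name
    slack_configs:
      - channel: '#k8s-jobs'             # hypothetical channel
        send_resolved: false
        text: '{{ template "custom_slack_message" . }}'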
My current result: every time a new job completes successfully, I receive a Slack notification with the list of all Jobs that completed successfully.
I don't mind receiving the whole list at first, but after that I would like to receive notifications that contain only the newly completed job(s) within the specified group interval.
Is it possible?
Just add one extra line to the rule, for: 10m, so that it only reports the job(s) completed within the last 10 minutes:
rules:
  - alert: KubeJobCompleted
    annotations:
      identifier: '{{ $labels.instance }}'
      summary: Job Completed Successfully
      description: Job *{{ $labels.namespace }}/{{ $labels.job_name }}* is completed successfully.
    expr: |
      kube_job_spec_completions{job="kube-state-metrics"} - kube_job_status_succeeded{job="kube-state-metrics"} == 0
    for: 10m
    labels:
      severity: information
I ended up using kube_job_status_completion_time and time() to dismiss past events (to avoid re-firing the alert at every repeat interval).
rules:
  - alert: KubeJobCompleted
    annotations:
      identifier: '{{ $labels.instance }}'
      summary: Job Completed Successfully
      description: Job *{{ $labels.namespace }}/{{ $labels.job_name }}* is completed successfully.
    expr: |
      time() - kube_job_status_completion_time < 60 and kube_job_spec_completions{job="kube-state-metrics"} - kube_job_status_succeeded{job="kube-state-metrics"} == 0
    labels:
      severity: information
On my machines, 4 PVCs are created. Now I need to get all the volume names associated with the PVCs in a list. That list will then be passed to the storage array so I can verify that the volumes are created on the storage server.
- name: Verify whether the PVC is created
  command: "kubectl get pvc pvc{{local_uuid}}-{{item}} -o json"
  with_sequence: start=1 end=4
  register: result

- set_fact:
    pvcstatus: "{{ (item.stdout |from_json).status.phase }}"
    volume_name: "{{ (item.stdout |from_json).spec.volumeName }}"
  with_items: "{{ result.results }}"

- debug: var=volume_name
But when I run the above tasks, volume_name holds only the last volume name instead of all the volumes as a list. How do I get all the volume names in a list?
Your set_fact task is setting volume_name to a single value in each iteration...so of course, when the loop completes, the variable has the value from the final iteration. That's the expected behavior. If you want a list, you need to create a list. You can do this by appending to a list in your set_fact loop:
- set_fact:
    volume_name: "{{ volume_name|default([]) + [(item.stdout |from_json).spec.volumeName] }}"
  with_items: "{{ result.results }}"
The expression volume_name|default([]) will evaluate to an empty list when volume_name is undefined (which is the case on the first iteration of the loop).
I tested this out using the following playbook:
---
- hosts: localhost
  gather_facts: false
  vars:
    result:
      results:
        - stdout: '{"spec": {"volumeName": "volume1"}}'
        - stdout: '{"spec": {"volumeName": "volume2"}}'
        - stdout: '{"spec": {"volumeName": "volume3"}}'
  tasks:
    - debug:
        var: result

    - set_fact:
        volume_name: "{{ volume_name|default([]) + [(item.stdout |from_json).spec.volumeName] }}"
      with_items: "{{ result.results }}"

    - debug:
        var: volume_name
Which results in:
TASK [debug] *****************************************************************************************************************************************************************
ok: [localhost] => {
"volume_name": [
"volume1",
"volume2",
"volume3"
]
}
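For completeness, a loop-free sketch that builds the same list in a single expression (assuming the same result structure as above):

- set_fact:
    volume_name: "{{ result.results | map(attribute='stdout') | map('from_json') | map(attribute='spec.volumeName') | list }}"

This maps each registered stdout through from_json and extracts spec.volumeName, avoiding the with_items loop entirely.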