I'm trying to automate the following:
Apply the Persistent Volumes
kubectl apply -f food-pv.yaml
kubectl apply -f bar-pv.yaml
Apply the Persistent Volume Claims
kubectl apply -f foo.yaml
kubectl apply -f bar.yaml
Apply the Services
kubectl apply -f this-service.yaml
kubectl apply -f that-nodeport.yaml
Apply the Deployment
kubectl apply -f something.yaml
Now I could run these as shell commands, but I don't think that's the proper way to do it. I've been reading through the Ansible documentation, but I'm not seeing what I need for this. Is there a better way to apply these yaml files without using a series of shell commands?
Thanks in advance
The best way to do this would be to use the Ansible kubernetes.core collection.
An example with a file:
- name: Create a Deployment by reading the definition from a local file
  kubernetes.core.k8s:
    state: present
    src: /testing/deployment.yml
So you could loop over different folders containing the YAML definitions of your objects, applying each with state: present.
I don't currently have a running kube cluster to test this against, but you should basically be able to run all of this in a single task with a loop, using the kubernetes.core.k8s module.
Here is what I believe should meet your requirement (provided access to your kube instance is configured and working in your environment, and that you installed the above collection as described in the documentation):
- name: install my kube objects
  hosts: localhost
  gather_facts: false

  vars:
    obj_def_path: /path/to/your/obj_def_dir
    obj_def_list:
      - food-pv.yaml
      - bar-pv.yaml
      - foo.yaml
      - bar.yaml
      - this-service.yaml
      - that-nodeport.yaml
      - something.yaml

  tasks:
    - name: Install all objects from def files
      k8s:
        src: "{{ obj_def_path }}/{{ item }}"
        state: present
        apply: true
      loop: "{{ obj_def_list }}"
Related
I'm trying to execute a few simple commands on a kubernetes pod in Azure. I've successfully done so with the localhost + pod-as-module-parameter syntax:
---
- hosts: localhost
  connection: kubectl
  collections:
    - kubernetes.core
  gather_facts: False

  tasks:
    - name: Get pod
      k8s_info:
        kind: Pod
        namespace: my-namespace
      register: pod_list

    - name: Run command
      k8s_exec:
        pod: "{{ pod_list.resources[0].metadata.name }}"
        namespace: my-namespace
        command: "/bin/bash -c 'echo Hello world'"
However, I want to avoid the repetition of specifying pod and namespace for every kubernetes.core module call, as well as parsing the namespace explicitly in every playbook.
So I got the kubernetes dynamic inventory plugin to work, and can see the desired pod in a group label_app_some-predictable-name, as confirmed by output of ansible-inventory.
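For context, a minimal version of the inventory config looks roughly like this (the namespace is an example; note the plugin requires the filename to end in .k8s.yml or .k8s.yaml):
# inventory/my.k8s.yml
plugin: kubernetes.core.k8s
connections:
  - namespaces:
      - my-namespace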
What I don't get is whether, at this point, I should be able to run the regular command module (I couldn't get that to work at all), or whether I need to keep using k8s_exec, which still requires pod and namespace to be specified explicitly (albeit now I can refer to the guest facts populated by the inventory plugin), on top of now requiring delegate_to: localhost:
---
- name: Execute command
  hosts: label_app_some-predictable-name
  connection: kubectl
  gather_facts: false
  collections:
    - kubernetes.core

  tasks:
    - name: Execute command via kubectl
      delegate_to: localhost
      k8s_exec:
        command: "/bin/sh -c 'echo Hello world'"
        pod: "{{ ansible_kubectl_pod }}"
        namespace: "{{ ansible_kubectl_namespace }}"
What am I missing? Is there a playbook example that makes use of the kubernetes dynamic inventory?
Salutations, I am deploying pods/applications to EKS via Ansible. My playbook runs a few kubectl apply -f commands to deploy EKS resources, and all of the .yaml files currently live in the main Ansible directory.
I would like to place the .yaml files that create each application in its own Ansible role's files directory, in order to clean up the main Ansible directory a bit (the .yaml files are becoming overwhelming, and I only have two applications being deployed thus far).
The issue is this: when I move the .yaml files to their respective roles/files directories, Ansible still seems to look for the files in the main Ansible directory instead of scanning the role's internal directory.
How do I redirect Ansible to run the shell commands on the .yaml files in the role's files directory? Playbook below:
#
# Deploying Jenkins to AWS EKS
#

# Create Jenkins Namespace
- name: Create Jenkins Namespace & set it to default
  shell: |
    kubectl create namespace jenkins
    kubectl config set-context --current --namespace=jenkins

# Create Jenkins Service Account
- name: Create Jenkins Service Account
  shell: |
    kubectl create serviceaccount jenkins-master -n jenkins
    kubectl get secret $(kubectl get sa jenkins-master -n jenkins -o jsonpath={.secrets[0].name}) -n jenkins -o jsonpath={.data.'ca\.crt'} | base64 --decode

# Deploy Jenkins
- name: Deploy Jenkins Application
  shell: |
    kubectl apply -f jenkins-service.yaml
    kubectl apply -f jenkins-vol.yaml
    kubectl apply -f jenkins-role.yaml
    kubectl apply -f jenkins-configmap.yaml
    kubectl apply -f jenkins-deployment.yaml
Below is the role directory structure; Ansible doesn't check this location for the yaml files run by the playbook above.
You could use the role_path variable, which contains the path to the currently executing role, and write your tasks like this:
- name: Deploy Jenkins Application
  shell: |
    kubectl apply -f {{ role_path }}/files/jenkins-service.yaml
    kubectl apply -f {{ role_path }}/files/jenkins-vol.yaml
    ...
Alternatively, a fileglob lookup might be easier:
- name: Deploy Jenkins Application
  command: kubectl apply -f {{ item }}
  loop: "{{ query('fileglob', '*.yaml') }}"
This would loop over all the *.yaml files in your role's files directory.
You could also consider replacing your use of kubectl with the k8s module.
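For instance, a rough (untested) sketch combining the k8s module with the fileglob lookup, assuming the manifests sit in the role's files directory:
- name: Deploy Jenkins Application
  kubernetes.core.k8s:
    state: present
    src: "{{ item }}"
  loop: "{{ query('fileglob', '*.yaml') }}"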
Lastly, rather than managing these resources using Ansible, you could consider using kustomize, which I have found to be easier to work with unless you're relying heavily on Ansible templating.
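As a minimal illustration (file names taken from your playbook), a kustomization.yaml just lists the manifests, and a single command applies them all:
# kustomization.yaml
resources:
  - jenkins-service.yaml
  - jenkins-vol.yaml
  - jenkins-role.yaml
  - jenkins-configmap.yaml
  - jenkins-deployment.yaml
Then, from that directory: kubectl apply -k .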
I'm writing an Ansible playbook where we pull projects from Git repos and thereafter apply all pulled yamls to a Kubernetes cluster.
I only see examples of applying single yaml files to a Kubernetes cluster, not multiple ones at once, e.g.:
- name: Apply metrics-server manifest to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/metrics-server.yaml
Is there any way of applying multiple yaml files? Something like:
- name: Apply Services to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/svc-*.yaml
Or:
- name: Apply Ingresses to the cluster.
  community.kubernetes.k8s:
    state: present
    dir: ~/ing/
Is there maybe another Ansible K8s module I should be looking at?
Or should we just run kubectl commands directly in Ansible tasks, e.g.:
- name: Apply Ingresses to the cluster.
  command: kubectl apply -f ~/ing/*.yaml
What is the best way to achieve this using Ansible?
You can use the k8s Ansible module along with a with_fileglob loop. The code below should work for your requirement:
- name: Apply K8s resources
  k8s:
    definition: "{{ lookup('template', item) | from_yaml }}"
  with_fileglob:
    - "~/ing/*.yaml"
I would like to apply the Contour ingress (https://projectcontour.io/) to my Kubernetes cluster with the Ansible community.kubernetes.k8s module.
The example on https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html shows me how to apply local files, for instance:
- name: Create a Deployment by reading the definition from a local file
  community.kubernetes.k8s:
    state: present
    src: /testing/deployment.yml
However, I could not find an example with a remote file.
With kubectl, the deployment of the Contour ingress can be done as follows:
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
When I read the docs, I don't see an option to do that.
You could download the file first, then apply it.
- name: retrieve file
  get_url:
    url: https://projectcontour.io/quickstart/contour.yaml
    dest: /testing/contour.yaml
  register: download_contour

- name: create deployment
  k8s:
    state: present
    src: /testing/contour.yaml
  when: download_contour.changed
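Alternatively, it may be possible to skip the intermediate file by feeding the manifest to the module's definition parameter through a url lookup (a sketch, untested; it assumes the control node can reach the URL, and uses from_yaml_all because the Contour quickstart manifest contains multiple YAML documents):
- name: create deployment from the remote manifest
  k8s:
    state: present
    definition: "{{ lookup('url', 'https://projectcontour.io/quickstart/contour.yaml', split_lines=False) | from_yaml_all | list }}"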
I am writing Ansible scripts for deploying services using Kubernetes, and I am stuck on a post-deployment step:
I have deployed a service with replicas: 3, and all the replicas are up and running. Now I have to do a migration, for which I have to get into the containers and run a script that is already present there.
I can do it manually by getting into each container individually and then running the script, but this again requires manual intervention.
What I want to achieve is: once the deployment is done and all the replicas are up and running, run the scripts inside the containers, with all of these steps performed by the Ansible script and no manual effort required.
Is there a way to do this?
Take a look at the k8s_exec module.
- name: Check RC status of command executed
  community.kubernetes.k8s_exec:
    namespace: myproject
    pod: busybox-test
    command: cmd_with_non_zero_exit_code
  register: command_status
  ignore_errors: True

- name: Check last command status
  debug:
    msg: "cmd failed"
  when: command_status.return_code != 0
@Vasili Angapov is right: the k8s_exec module is probably the best solution in this case, but I would like to add some useful notes.
To use k8s_exec, we need to know the exact Pod name (we need to pass it as the pod parameter in the Ansible task). As you wrote, I assume that your Pods are managed by a Deployment, so every Pod has a random string in its name, added by the ReplicaSet. Therefore, you have to find the full names of the Pods somehow.
I've created a simple playbook to illustrate how we can find the Pod names for all Pods with the label app=web and then run a sample touch file123456789 command on these Pods.
---
- hosts: localhost
  collections:
    - community.kubernetes

  tasks:
    - name: "Search for all Pods labelled app=web"
      k8s_info:
        kind: Pod
        label_selectors:
          - app = web
      register: pod_names

    - name: "Get Pod names"
      set_fact:
        pod_names: "{{ pod_names | json_query('resources[*].metadata.name') }}"

    - name: "Run command on every Pod labelled app=web"
      k8s_exec:
        namespace: default
        pod: "{{ item }}"
        command: touch file123456789
      with_items: "{{ pod_names }}"
NOTE: Instead of the k8s_exec module, you can use the command module as well.
In our example, instead of the k8s_exec task we could have:
- name: "Run command on every Pod labelled app=web"
command: >
kubectl exec "{{ item }}" -n default -- touch file123456789
with_items: "{{ pod_names }}"