I'm writing an Ansible playbook that pulls projects from Git repos and then applies all of the pulled YAML files to a Kubernetes cluster.
I only see examples of applying single YAML files to a Kubernetes cluster, not multiple ones at once, e.g.:
- name: Apply metrics-server manifest to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/metrics-server.yaml
Is there any way to apply multiple YAML files? Something like:
- name: Apply Services to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/svc-*.yaml
Or:
- name: Apply Ingresses to the cluster.
  community.kubernetes.k8s:
    state: present
    dir: ~/ing/
Is there perhaps another Ansible K8s module I should be looking at?
Or should we just run kubectl commands directly in Ansible tasks, e.g.:
- name: Apply Ingresses to the cluster.
  command: kubectl apply -f ~/ing/*.yaml
What is the best way to achieve this using Ansible?
You can use the k8s Ansible module together with a with_fileglob loop. The code below should work for your requirement:
- name: Apply K8s resources
  k8s:
    definition: "{{ lookup('template', item) | from_yaml }}"
  with_fileglob:
    - "~/ing/*.yaml"
I am trying to automate building an HA cluster with Ansible.
Normally I have two options for installing the load balancer (MetalLB): with a manifest or with Helm.
I really like that Helm has a --values option. This is useful because I can add a toleration to the MetalLB speakers, so that I can deploy them on the nodes that I don't want to schedule jobs on.
When writing the playbook I want a way to deploy the MetalLB speakers with the toleration so they get deployed, but I don't want to install Helm on any of the nodes.
When the playbook is run I can download the manifest file https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml, but now I want to be able to add the tolerations. How can I accomplish this without downloading the YAML file and editing it myself? Something like the --values option in Helm would be nice.
https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/ lays out the general idea of how kustomize works: take some bases and apply some transformations to them. In most cases the strategic merge behaves the way folks expect, and it is how the kubectl patch you mentioned behaves.¹ But dealing with array values in merges is tricky, so I have had better luck with JSON Patch's array add support, which is what we will use here.
# the contents of "kustomization.yaml" in the current directory
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
patches:
  - target:
      version: v1
      group: apps
      kind: DaemonSet
      namespace: metallb-system
      name: speaker
    patch: |-
      - op: add
        path: /spec/template/spec/tolerations/-
        value: {"effect":"NoSchedule","key":"example.com/some-taint","operator":"Exists"}
Then, using kubectl kustomize . we see the result from applying that patch:
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
  operator: Exists
- effect: NoSchedule
  key: example.com/some-taint
  operator: Exists
Obviously, if you wanted to wholesale replace the tolerations, you may have better luck with the strategic-merge flavor, but given that your question didn't specify, and this case is the harder of the two, I started with it.
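If you then want to drive this from your Ansible playbook, one possible sketch (assuming kubectl is available on the target host and that the kustomization.yaml above is saved in a directory such as /path/to/metallb-kustomize, both of which are assumptions of mine) is to apply the kustomization directly:

- name: Apply the kustomized MetalLB manifests
  # "kubectl apply -k" builds the kustomization in the given directory and applies the result
  ansible.builtin.command: kubectl apply -k /path/to/metallb-kustomize
  register: metallb_apply
  changed_when: "'created' in metallb_apply.stdout or 'configured' in metallb_apply.stdout"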
¹ I saw you mention kubectl patch, but that is for editing existing Kubernetes resources: only after you had already deployed your metallb-native.yaml into the cluster would kubectl patch do anything for you. kustomize is the Helm replacement here, in that it is designed for the manifests to go into the cluster in the right state, rather than fixing them up later.
I'm trying to automate the following:
Apply the Persistent Volumes
kubectl apply -f food-pv.yaml
kubectl apply -f bar-pv.yaml
Apply the Persistent Volume Claims
kubectl apply -f foo.yaml
kubectl apply -f bar.yaml
Apply the Services
kubectl apply -f this-service.yaml
kubectl apply -f that-nodeport.yaml
Apply the Deployment
kubectl apply -f something.yaml
Now I could run these as shell commands, but I don't think that's the proper way to do it. I've been reading through the Ansible documentation, but I'm not seeing what I need for this. Is there a better way to apply these YAML files without using a series of shell commands?
Thanks in advance
The best way to do this would be to use the Ansible kubernetes.core collection.
An example with a file:
- name: Create a Deployment by reading the definition from a local file
  kubernetes.core.k8s:
    state: present
    src: /testing/deployment.yml
So you could loop over the different folders containing the YAML definitions for your objects, with state: present.
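For instance, a rough sketch of such a loop (with a hypothetical directory path) that globs every manifest in a folder:

- name: Apply every manifest from a folder
  kubernetes.core.k8s:
    state: present
    src: "{{ item }}"
  with_fileglob:
    - "/path/to/your/manifests/*.yaml"   # hypothetical folder; add one task (or one pattern) per folder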
I don't currently have a running kube cluster to test this against, but you should basically be able to do all of this in a single task with a loop, using the kubernetes.core.k8s module.
Here is what I believe should meet your requirement (provided that access to your kube instance is configured and working in your environment, and that you have installed the above collection as described in the documentation):
- name: install my kube objects
  hosts: localhost
  gather_facts: false
  vars:
    obj_def_path: /path/to/your/obj_def_dir/
    obj_def_list:
      - food-pv.yaml
      - bar-pv.yaml
      - foo.yaml
      - bar.yaml
      - this-service.yaml
      - that-nodeport.yaml
      - something.yaml
  tasks:
    - name: Install all objects from def files
      kubernetes.core.k8s:
        src: "{{ obj_def_path }}/{{ item }}"
        state: present
        apply: true
      loop: "{{ obj_def_list }}"
I would like to apply the Contour ingress (https://projectcontour.io/) to my Kubernetes cluster with the Ansible community.kubernetes.k8s module.
The example at https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html shows me how to apply local files, for instance:
- name: Create a Deployment by reading the definition from a local file
  community.kubernetes.k8s:
    state: present
    src: /testing/deployment.yml
However, I could not find an example with a remote file.
With kubectl, the deployment of the Contour ingress can be done as follows:
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
When I read the docs, I don't see an option to do that.
You could download the file first, then apply it.
- name: retrieve file
  get_url:
    url: https://projectcontour.io/quickstart/contour.yaml
    dest: /testing/contour.yaml
  register: download_contour

- name: create deployment
  k8s:
    state: present
    src: /testing/contour.yaml
  when: download_contour.changed
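If you want to avoid the intermediate file altogether, another approach that may work (an untested sketch on my part; it assumes the kubernetes.core collection and that the control node can reach the URL) is to feed the manifest to the module's definition parameter via the url lookup:

- name: Apply Contour quickstart manifest straight from the URL
  kubernetes.core.k8s:
    state: present
    # the url lookup fetches the raw manifest as one string; from_yaml_all splits the
    # multi-document file, and select() drops any empty documents before passing the list in
    definition: "{{ lookup('ansible.builtin.url', 'https://projectcontour.io/quickstart/contour.yaml', split_lines=False) | from_yaml_all | select() | list }}"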
I am writing Ansible scripts for deploying services on Kubernetes, and I am stuck on a step in the post-deployment process:
I have deployed a service with replicas: 3, and all the replicas are up and running. Now I have to do a migration, for which I have to get into each container and run a script that is already present there.
I can do that manually by getting into each container individually and running the script, but that again requires manual intervention.
What I want to achieve is this: once the deployment is done and all the replicas are up and running, I want to run the script inside the containers, with all of these steps performed by the Ansible script and no manual effort required.
Is there a way to do this?
Take a look at the k8s_exec module.
- name: Check RC status of command executed
  community.kubernetes.k8s_exec:
    namespace: myproject
    pod: busybox-test
    command: cmd_with_non_zero_exit_code
  register: command_status
  ignore_errors: True

- name: Check last command status
  debug:
    msg: "cmd failed"
  when: command_status.return_code != 0
@Vasili Angapov is right: the k8s_exec module is probably the best solution in this case, but I would like to add some useful notes.
To use k8s_exec we need to know the exact Pod name (we need to pass it as the pod parameter in the Ansible task). Based on what you wrote, I assume your Pods are managed by a Deployment, so every Pod has a random string in its name added by the ReplicaSet. Therefore, you have to find the full names of the Pods somehow.
I've created a simple playbook to illustrate how we can find the names of all Pods with the label app=web and then run a sample touch file123456789 command on those Pods.
---
- hosts: localhost
  collections:
    - community.kubernetes
  tasks:
    - name: "Search for all Pods labelled app=web"
      k8s_info:
        kind: Pod
        label_selectors:
          - app = web
      register: pod_names

    - name: "Get Pod names"
      set_fact:
        pod_names: "{{ pod_names | json_query('resources[*].metadata.name') }}"

    - name: "Run command on every Pod labelled app=web"
      k8s_exec:
        namespace: default
        pod: "{{ item }}"
        command: touch file123456789
      with_items: "{{ pod_names }}"
NOTE: Instead of the k8s_exec module you can also use the command module.
In our example, instead of the k8s_exec task we could have:
- name: "Run command on every Pod labelled app=web"
command: >
kubectl exec "{{ item }}" -n default -- touch file123456789
with_items: "{{ pod_names }}"
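One small addition (the namespace here is an assumption on my part): since k8s_exec above targets namespace: default, it may be safer to restrict the k8s_info query to the same namespace, so that Pods from other namespaces don't end up in the list:

- name: "Search for all Pods labelled app=web in the default namespace"
  k8s_info:
    kind: Pod
    namespace: default   # assumed; use whatever namespace the Deployment actually runs in
    label_selectors:
      - app = web
  register: pod_names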
Is there a way to tie a skaffold profile to a namespace? I'd like to make sure that dev, staging and prod deployments always go to the right namespace. I know that I can add a namespace to skaffold run like skaffold run -p dev -n dev but that's a little error prone. I'd like to make my builds even safer by tying profiles to namespaces.
I've tried adding the following to my skaffold.yaml, based on the fact that there's a build/cluster/namespace path in skaffold.yaml, but I suspect I'm misunderstanding the purpose of the cluster spec.
profiles:
  - name: local
    patches:
      - op: replace
        path: /build/artifacts/0/cluster/namespace
        value: testing
but I get the error
❮❮❮ skaffold render -p local
FATA[0000] creating runner: applying profiles: applying profile local: invalid path: /build/artifacts/0/cluster/namespace
I've tried other variants of changing the cluster namespace but all of them fail.
TL;DR: please go directly to the "solution" section (the last one).
Is there a way to tie a skaffold profile to a namespace? I'd like to make sure that dev, staging and prod deployments always go to the right namespace. I know that I can add a namespace to skaffold run like skaffold run -p dev -n dev but that's a little error prone. I'd like to make my builds even safer by tying profiles to namespaces.
At the beginning we need to clarify whether we are talking about namespaces in the build or the deploy stage of the pipeline. On the one hand you write that you want to make sure that dev, staging and prod deployments always go to the right namespace, so I'm assuming you're mostly interested in setting the appropriate namespace on your Kubernetes cluster, into which the built images will eventually be deployed. However, later you also mention making builds even safer by tying profiles to namespaces. Please correct me if I'm wrong, but my guess is that you mean namespaces at the deploy stage.
So answering your question: yes, it is possible to tie a skaffold profile to a specific namespace.
I've tried adding the following to my skaffold.yaml, based on the fact that there's a build/cluster/namespace path in skaffold.yaml, but I suspect I'm misunderstanding the purpose of the cluster spec.
You're right, there is such a path in skaffold.yaml, but then your example should look as follows:
profiles:
  - name: local
    patches:
      - op: replace
        path: /build/cluster/namespace
        value: testing
Note that the cluster element is at the same indentation level as artifacts. As you can read in the reference:
cluster: # beta describes how to do an on-cluster build.
and as you can see, most of its options are related to kaniko. It can also be patched in specific profiles in the same way as other skaffold.yaml elements, but I don't think this is the element you're really concerned about, so let's leave it for now.
By the way, you can easily validate your skaffold.yaml syntax by running:
skaffold fix
If every element is used properly, all the indentation levels are correct, and so on, it will print:
config is already latest version
otherwise something like the error below:
FATA[0000] creating runner: applying profiles: applying profile prod: invalid path: /build/cluster/namespace
solution
You can make sure your deployments go to the right namespace by setting kubectl flags. This assumes you're using docker as the builder and kubectl as the deployer. As there are plenty of different builders and deployers supported by skaffold, if you deploy with e.g. helm the detailed solution may look quite different.
One very important caveat: the path must already be present in the general part of your config, otherwise you won't be able to patch it in the profiles section. E.g., if you have the following patch in your profiles section:
profiles:
  - name: prod
    patches:
      - op: replace
        path: /build/artifacts/0/docker/dockerfile
        value: DifferentNameForDockerfile
then the following section must already be present in your skaffold.yaml:
build:
  artifacts:
    - image: skaffold-example
      docker:
        dockerfile: Dockerfile # otherwise the pipeline will fail at the build stage
Going back to our namespaces, first we need to set default values in the deploy section:
deploy:
  kubectl:
    manifests:
      - k8s-pod.yaml
    flags:
      global: # additional flags passed on every command.
        - --namespace=default
      # apply: # additional flags passed on creations (kubectl apply).
      #   - --namespace=default
      # delete: # additional flags passed on deletions (kubectl delete).
      #   - --namespace=default
I set only the global flags, but this can also be done separately for the apply and delete commands.
In the next step we need to override the default value (it must already be present so that we can override it) in our profiles:
profiles:
  - name: dev
    patches:
      - op: replace
        path: /deploy/kubectl/flags/global/0
        value: --namespace=dev
  - name: staging
    patches:
      - op: replace
        path: /deploy/kubectl/flags/global/0
        value: --namespace=staging
  - name: prod
    patches:
      - op: replace
        path: /deploy/kubectl/flags/global/0
        value: --namespace=prod
Then we can run:
skaffold run --render-only --profile=prod
As we can see, our Pod is going to be deployed in the prod namespace of our Kubernetes cluster:
Generating tags...
 - skaffold-example -> skaffold-example:v1.3.1-15-g11d005d-dirty
Checking cache...
 - skaffold-example: Found Locally
apiVersion: v1
kind: Pod
metadata:
  labels:
    app.kubernetes.io/managed-by: skaffold-v1.3.1
    skaffold.dev/builder: local
    skaffold.dev/cleanup: "true"
    skaffold.dev/deployer: kubectl
    skaffold.dev/docker-api-version: "1.39"
    skaffold.dev/profile.0: prod
    skaffold.dev/run-id: b83d48db-aec8-4570-8cb8-dbf9a7795c00
    skaffold.dev/tag-policy: git-commit
    skaffold.dev/tail: "true"
  name: getting-started
  namespace: prod
spec:
  containers:
  - image: skaffold-example:3e4840dfd2ad13c4d32785d73641dab66be7a89b43355eb815b85bc09f45c8b2
    name: getting-started
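With those patches in place, picking the profile alone should be enough, so the extra -n flag from the question is no longer needed, e.g.:

skaffold run -p dev   # deploys into the dev namespace via the patched --namespace flag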