How to include a dictionary into a k8s_raw data field - kubernetes

I'm writing an Ansible playbook to insert a list of Secret objects into Kubernetes.
I'm using the k8s_raw syntax and I want to import this list from a group_vars file.
I can't find the right syntax to import the list of secrets into my data field.
playbook.yml
- hosts: localhost
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data:
            "{{ secrets }}"
            SKRT: "c2trcnIK"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaque
  vars_files:
    - "varfile.yml"
varfile.yml
secrets:
  TAMAGOTCHI_CODE: "MTIzNAo="
  FRIDGE_PIN: "MTIzNAo="

First, what does it actually say when you attempt the above? It would help to have the result of your attempts.
Just guessing, but try moving the vars_files to before the place where you try to use the variables. Also, be sure that your indentation is exactly right when you do.
- hosts: localhost
  vars_files:
    - /varfile.yml
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data:
            "{{ secrets }}"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaque
Side note: I would debug this immediately without attempting the full task. Remove your main task and, after loading vars_files, print the secrets directly with the debug module. This lets you fine-tune the syntax until you get it right, without having to run and wait for the more complex play that follows.
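For example, a minimal sketch of that debug-first approach (assuming varfile.yml sits next to the playbook):
- hosts: localhost
  vars_files:
    - varfile.yml
  tasks:
    - name: Sanity-check that the secrets variable loaded
      debug:
        var: secrets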

To import this list from a group_vars file, put localhost into a group, for example a group test:
> cat hosts
test:
  hosts:
    localhost:
Put the varfile.yml into the group_vars/test directory:
$ tree group_vars
group_vars/
└── test
    └── varfile.yml
Then run the playbook below:
$ cat test.yml
- hosts: test
  tasks:
    - debug:
        var: secrets.TAMAGOTCHI_CODE
$ ansible-playbook -i hosts test.yml
gives:
PLAY [test] ***********************************

TASK [debug] **********************************
ok: [localhost] => {
    "secrets.TAMAGOTCHI_CODE": "MTIzNAo="
}

PLAY RECAP *************************************
localhost: ok=1 changed=0 unreachable=0 failed=0

The problem was the SKRT: "c2trcnIK" field just under the "{{ secrets }}" line. I deleted it and now it works! Thank you all.
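For the record, if you need to keep the extra SKRT key alongside the imported dictionary, one way (a hedged sketch, not from the original thread) is to merge it in with Ansible's combine filter instead of placing it on its own line:
data: "{{ secrets | combine({'SKRT': 'c2trcnIK'}) }}"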

Related

How to create a Kubernetes configMap from part of a yaml file?

As far as I know, the way to create a configMap in Kubernetes from a file is to use the
--from-file option for kubectl.
What I am looking for is a way to load only part of the yaml file into the configMap.
Example:
Let's say I have this yml file:
family:
  Boys:
    - name: Joe
    - name: Bob
    - name: dan
  Girls:
    - name: Alice
    - name: Jane
Now I want to create a configMap called 'boys' which will include only the 'Boys' section.
Possible?
Another thing that could help, if the above is not possible: when exporting the configMap as environment variables to a pod (using envFrom), being able to export only part of the configMap.
Both options will work for me.
Any idea?
The ConfigMap uses key/value pairs for its configuration. In your example you have multiple arrays of data, each with several values under its own key. You can, however, create multiple ConfigMaps from different files to handle this.
First you need to create the files that will feed the ConfigMaps, guided by the documentation.
The first file, call it Boys.yaml:
# Env-files contain a list of environment variables.
# These syntax rules apply:
# Each line in an env file has to be in VAR=VAL format.
# Lines beginning with # (i.e. comments) are ignored.
# Blank lines are ignored.
# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).
name=Joe
name=Bob
name=Dan
The second file, call it Girls.yaml:
name=Alice
name=Jane
Create your ConfigMap:
kubectl create configmap NameOfYourConfigmap --from-env-file=PathToYourFile/Boys.yaml
where the output is similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp:
  name: NameOfYourConfigmap
  namespace: default
  resourceVersion:
  uid:
data:
  name: Joe
  name: Bob
  name: Dan
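The Girls file would presumably be loaded the same way, once per file; note that the names used in the configMapRef entries below assume one ConfigMap per file:
kubectl create configmap NameOfYourConfigmap-Boys --from-env-file=PathToYourFile/Boys.yaml
kubectl create configmap NameOfYourConfigmap-Girls --from-env-file=PathToYourFile/Girls.yaml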
Finally, you can pass these ConfigMaps to a pod or deployment using configMapRef entries:
envFrom:
  - configMapRef:
      name: NameOfYourConfigmap-Boys
  - configMapRef:
      name: NameOfYourConfigmap-Girls
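If you only need individual keys rather than the whole ConfigMap, the standard Kubernetes configMapKeyRef form (not part of the original answer) selects a single entry:
env:
  - name: BOY_NAME            # hypothetical variable name
    valueFrom:
      configMapKeyRef:
        name: NameOfYourConfigmap-Boys
        key: name             # the key inside the ConfigMap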
Configmaps cannot contain rich yaml data, only key-value pairs. So if you want to have a list of things, you need to express it as a multiline string.
With that in mind, you can use tools such as yq to query your input file and select the part you want.
For example:
podman run --rm --interactive bluebrown/tpl '{{ .family.Boys | toYaml }}' < fam.yaml \
  | kubectl create configmap boys --from-file=Boys=/dev/stdin
The result looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: boys
  namespace: sandbox
data:
  Boys: |+
    - name: Joe
    - name: Bob
    - name: dan
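For comparison, an untested equivalent with the Go-based yq (mikefarah/yq, v4 syntax) might look like:
yq '.family.Boys' fam.yaml | kubectl create configmap boys --from-file=Boys=/dev/stdin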
You could also encode the file or part of the file with base64 and use that as an environment variable, since you get a single string, which is easily processable, out of it. For example:
$ podman run --rm --interactive bluebrown/tpl \
    '{{ .family.Boys | toYaml | b64enc }}' < fam.yaml
# use this string as env variable and decode it in your app
LSBuYW1lOiBKb2UKLSBuYW1lOiBCb2IKLSBuYW1lOiBkYW4K
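Consuming it is then just a matter of base64-decoding the variable in the app or its entrypoint, e.g. (assuming the variable was exported as YAML_BOYS, as in the next example):
echo "$YAML_BOYS" | base64 -d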
Or with kubectl set env, which you could further combine with a dry run if required:
podman run --rm --interactive bluebrown/tpl \
    'YAML_BOYS={{ .family.Boys | toYaml | b64enc }}' < fam.yaml \
    | kubectl set env -e - deploy/myapp
Another thing is that YAML is a superset of JSON, so in many cases you can convert YAML to JSON, or at least use JSON-like syntax.
This can be useful in such a scenario in order to express the data as a single-line string rather than having to use multiline syntax; it's less fragile.
Every YAML parser will be able to parse JSON just fine, so if you are parsing the string in your app, you won't have problems.
$ podman run --rm --interactive bluebrown/tpl '{{ .family.Boys | toJson }}' < fam.yaml
[{"name":"Joe"},{"name":"Bob"},{"name":"dan"}]
Disclaimer: I created the tool used above, tpl. As mentioned, you can just as well use alternative tools such as yq.

How to use k8s Ansible module without quotes?

I am trying to use the module community.kubernetes.k8s – Manage Kubernetes (K8s) objects with variables from the role (e.g. role/sampleRole/vars file).
I am failing when it comes to an integer value, e.g.:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: "{{ app }}"
        template:
          metadata:
            labels:
              app: "{{ app }}"
          spec:
            containers:
              - name: "{{ name }}"
                image: "{{ image }}"
                ports:
                  - containerPort: {{ containerPort }}
When I deploy with this format it will obviously fail, as it cannot parse the "reference" to the var.
Sample of error:
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)

Syntax Error while loading YAML.
  found unacceptable key (unhashable type: 'AnsibleMapping')

The error appears to be in 'deploy.yml': line <some line>, column <some column>, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

            ports:
              - containerPort: {{ containerPort }}
                ^ here

We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"
When I use quotes on the variable, e.g. - containerPort: "{{ containerPort }}", then I get the following error (part of it):
v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Ports: []v1.ContainerPort: v1.ContainerPort.ContainerPort: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|nerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"d|..., bigger context ...|\\\\\",\\\\\"name\\\\\":\\\\\"samplegreen\\\\\",\\\\\"ports\\\\\":[{\\\\\"containerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"dnsPolicy\\\\\":\\\\\"ClusterFirst\\\\\",\\\\\"restartPolicy\\\\\"|...\",\"field\":\"patch\"}]},\"code\":422}\\n'", "reason": "Unprocessable Entity", "status": 422}
I tried to cast the string to int by using - containerPort: "{{ containerPort | int }}" but it did not work. The problem seems to come from the quotes, regardless of how I define the var in my var file (e.g. containerPort: 80 or containerPort: "80").
I found a similar question on the forum, Ansible, k8s and variables, but the user does not seem to have the same problem I am having.
I am running with the latest version of the module:
$ python3 -m pip show openshift
Name: openshift
Version: 0.11.2
Summary: OpenShift python client
Home-page: https://github.com/openshift/openshift-restclient-python
Author: OpenShift
Author-email: UNKNOWN
License: Apache License Version 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: ruamel.yaml, python-string-utils, jinja2, six, kubernetes
Is there any workaround this problem or is it a bug?
Update (08-01-2020): The problem is fixed in version 0.17.0.
$ python3 -m pip show k8s
Name: k8s
Version: 0.17.0
Summary: Python client library for the Kubernetes API
Home-page: https://github.com/fiaas/k8s
Author: FiaaS developers
Author-email: fiaas@googlegroups.com
License: Apache License
Location: /usr/local/lib/python3.8/dist-packages
Requires: requests, pyrfc3339, six, cachetools
You could try the following as a workaround; in this example, we're creating a text template, and then using the from_yaml filter to transform this into our desired data structure:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec: "{{ spec | from_yaml }}"
  vars:
    spec: |
      replicas: 2
      selector:
        matchLabels:
          app: "{{ app }}"
      template:
        metadata:
          labels:
            app: "{{ app }}"
        spec:
          containers:
            - name: "{{ name }}"
              image: "{{ image }}"
              ports:
                - containerPort: {{ containerPort }}
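The trick works because Ansible first renders the Jinja expressions inside the spec text block, and the from_yaml filter then parses the rendered text, so containerPort ends up as a native integer rather than a quoted string.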
The solution provided by larsks works perfectly, although in my case I ran into the same problem again with a bit more complex templates (e.g. loops etc.).
The only solution I had before was to use ansible.builtin.template – Template a file out to a remote server to copy the some_file.yml.j2 to one of my master nodes, and then deploy through ansible.builtin.shell – Execute shell commands on targets (e.g. kubectl apply -f some_file.yml).
Thanks to community.kubernetes.k8s – Manage Kubernetes (K8s) objects, I am able to do all this work with a single task, e.g. (example taken from the documentation):
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
The only requirement is to have the kubeconfig file in the default location (~/.kube/config) in advance, or to use the kubeconfig parameter to point to the location of the file.
As a last step, I add delegate_to: localhost to the task, e.g.:
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
  delegate_to: localhost
The way this task works is that the user connects to localhost (himself) and runs kubectl apply -f some_file.yml.j2 against the LB or master node API, and the API applies the request (if the user has the permissions).

How to connect to Kubernetes using ansible?

I want to connect to Kubernetes using Ansible, and run some Ansible playbooks to create Kubernetes objects such as roles and rolebindings using the Ansible k8s module. I want to know if the Ansible k8s module is a standard Kubernetes client that can use a kubeconfig in the same way as helm and kubectl.
Please let me know how to configure a kubeconfig for Ansible to connect to the K8s cluster.
You basically specify the kubeconfig parameter in the Ansible YAML file (it defaults to ~/.kube/config). For example:
---
- hosts: localhost
  gather_facts: false
  vars_files:
    - vars/main.yml
  tasks:
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '~/.kube/config'
        state: present
      loop: "{{ lookup('template', 'myapp/mysql-pass.yml') | from_yaml_all | list }}"
      no_log: "{{ k8s_no_log }}"
...
You can also make it a variable:
...
- name: Deploy my app secrets.
  k8s:
    definition: '{{ item }}'
    kubeconfig: '{{ k8s_kubeconfig }}'
...
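In that case you would define the variable somewhere in your inventory or vars, for example (the path here is just an assumption):
k8s_kubeconfig: ~/.kube/config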
Thank you, it worked for me. I tried the below:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a k8s namespace
      k8s:
        kubeconfig: '~/Documents/sample-project/eks-kubeconfig'
        name: testing1
        api_version: v1
        kind: Namespace
        state: present

In Ansible, how to read a configMap file on the local system and remove it from a remote node?

I have a configMap file on my local system. This configMap is created in a Kubernetes cluster (which is a remote machine). Now I am trying to remove the configMap from the remote machine using Ansible.
- hosts: k8s
  vars:
    configmap: "./configmap.yml"
    secret: "./secret.yml"
  tasks:
    - name: uninstall configMap file
      shell: "kubectl delete -f {{ configmap }}"
The error is as below. It seems Ansible looks for the file on the k8s nodes, but the file is on the local machine.
"stderr": "error: the path \"./configmap.yml\" does not exist"
I also tried this:
- hosts: k8s
  vars:
    configmap: "{{ lookup('file', './configmap.yml') }}"
  tasks:
    - name: get ConfigMap
      shell: "cat {{ configmap | from_yaml }} | kubectl delete -f -"
It reports changed as if it succeeded, but the configMap is not removed.
How do I remove the configMap from the remote node?
The easiest way to understand the issue is to check the playbook's current working directory and compare it with the directory where you saved the file; they are different. To get the current directory:
- shell: "pwd"
  register: result
- debug:
    var: result
It's easier to use an absolute path for configmap.yml when creating/using files on the host, e.g.:
vars:
  configmap: "/root/configmap.yml"
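Alternatively, since the file lives on the controller, a hedged sketch that avoids copying anything to the remote node is to feed the local file to the k8s module and delegate the task to localhost (this assumes the openshift Python client and a kubeconfig are available on the controller):
- hosts: k8s
  tasks:
    - name: Remove the ConfigMap described by the local file
      k8s:
        state: absent
        definition: "{{ lookup('file', playbook_dir + '/configmap.yml') | from_yaml }}"
      delegate_to: localhost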

How to set up an Ansible playbook that is able to execute kubectl (Kubernetes) commands

I'm trying to write a simple Ansible playbook that would be able to execute some arbitrary command against a pod (container) running in a Kubernetes cluster.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
Couple of questions:
Do I need to first have an inventory for k8s defined? Something like: https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html. My understanding is that I would define the kube config via the inventory, which would be used by the kubectl plugin to actually connect to the pods to perform a specific action.
If yes, is there any example of an arbitrary command executed via the kubectl plugin (but not via the shell plugin that invokes kubectl on some remote machine; this is not what I'm looking for)?
I'm assuming that, during the ansible-playbook invocation, I would point to the k8s inventory.
Thanks.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
The fine manual describes how one uses connection plugins, and while it is possible to use it in tasks, that is unlikely to make any sense unless your inventory started with Pods.
The way I have seen that connection used is to start by identifying the Pods against which you might want to take action, and then run a playbook against a unique group for that purpose:
- hosts: all
  tasks:
    - set_fact:
        # this is *just an example for brevity*
        # in reality you would use `k8s:` or `kubectl get -o name pods -l my-selector=my-value`
        # to get the pod names
        pod_names:
          - nginx-12345
          - nginx-3456
    - add_host:
        name: '{{ item }}'
        groups:
          - my-pods
      with_items: '{{ pod_names }}'

- hosts: my-pods
  connection: kubectl
  tasks:
    # and now you are off to the races
    - command: ps -ef
    # watch out if the Pod doesn't have a working python installed
    # as you will have to use raw: instead
    # (and, of course, set "gather_facts: no")
    - raw: ps -ef
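For completeness, a hypothetical static inventory achieves the same pairing; with the kubectl connection plugin the inventory hostname doubles as the Pod name:
# inventory.yml -- hypothetical; pod and namespace names are placeholders
all:
  hosts:
    nginx-12345:
      ansible_connection: kubectl
      ansible_kubectl_namespace: default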
First install the Kubernetes collection:
ansible-galaxy collection install community.kubernetes
Here is a playbook; it will list all pods and run a command in every pod:
---
- hosts: localhost
  vars_files:
    - vars/main.yaml
  collections:
    - community.kubernetes
  tasks:
    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '{{ k8s_kubeconfig }}'
        kind: Pod
        namespace: test
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - k8s_exec:
        kubeconfig: '{{ k8s_kubeconfig }}'
        namespace: "{{ namespace }}"
        pod: "{{ item.metadata.name }}"
        command: apt update
      with_items: "{{ pod_list.resources }}"
      register: exec
      loop_control:
        label: "{{ item.metadata.name }}"
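The referenced vars/main.yaml is not shown in the answer; it would presumably contain something like:
# assumed contents of vars/main.yaml
k8s_kubeconfig: ~/.kube/config
namespace: test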
Maybe you can use something like this:
- shell: |
    kubectl exec -i -n {{ namespace }} {{ pod_name }} -- bash -c 'clickhouse-client --query "INSERT INTO customer FORMAT CSV" --user=test --password=test < /mnt/azure/azure/test/test.tbl'
As per the latest documentation, you can use the kubernetes.core.k8s module.
The following are some examples:
- name: Create a k8s namespace
  kubernetes.core.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present

- name: Create a Service object from an inline definition
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: web
        namespace: testing
        labels:
          app: galaxy
          service: web
      spec:
        selector:
          app: galaxy
          service: web
        ports:
          - protocol: TCP
            targetPort: 8000
            name: port-8000-tcp
            port: 8000

- name: Remove an existing Service object
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Service
    namespace: testing
    name: web