I want to connect to Kubernetes using Ansible and run some playbooks that create Kubernetes objects such as Roles and RoleBindings using the Ansible k8s module. Is the Ansible k8s module a standard Kubernetes client that can use a kubeconfig file in the same way as helm and kubectl?
Please let me know how to configure a kubeconfig for Ansible to connect to the K8s cluster.
You basically specify the kubeconfig parameter on the task in the Ansible YAML file (it defaults to ~/.kube/config). For example:
---
- hosts: localhost
  gather_facts: false

  vars_files:
    - vars/main.yml

  tasks:
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '~/.kube/config'
        state: present
      loop: "{{ lookup('template', 'myapp/mysql-pass.yml') | from_yaml_all | list }}"
      no_log: "{{ k8s_no_log }}"
...
You can also make it a variable:
...
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '{{ k8s_kubeconfig }}'
...
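Since the original question is about Roles and RoleBindings, here is a minimal sketch of creating those the same way with the k8s module (the role name, binding name, and namespace below are placeholders):
    - name: Create a Role that can read pods
      k8s:
        kubeconfig: '{{ k8s_kubeconfig }}'
        state: present
        definition:
          apiVersion: rbac.authorization.k8s.io/v1
          kind: Role
          metadata:
            name: pod-reader        # placeholder role name
            namespace: testing      # placeholder namespace
          rules:
            - apiGroups: [""]
              resources: ["pods"]
              verbs: ["get", "list", "watch"]

    - name: Bind the Role to the default ServiceAccount
      k8s:
        kubeconfig: '{{ k8s_kubeconfig }}'
        state: present
        definition:
          apiVersion: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          metadata:
            name: read-pods         # placeholder binding name
            namespace: testing
          subjects:
            - kind: ServiceAccount
              name: default
              namespace: testing
          roleRef:
            kind: Role
            name: pod-reader
            apiGroup: rbac.authorization.k8s.io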
Thank you, it worked for me. I tried the below:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a k8s namespace
      k8s:
        kubeconfig: '~/Documents/sample-project/eks-kubeconfig'
        name: testing1
        api_version: v1
        kind: Namespace
        state: present
I am trying to use Ansible or helm test to verify that all resources are up and running after deploying the Ansible Automation Platform (automation controller, private-automation-hub) on OpenShift.
Currently I am using Ansible assertions to check the deployments, but it seems I could also use --atomic with the helm commands and check that all resources are up after the Helm deployment.
Can you help me check all the resources with Ansible (not only deployments, but everything I deployed with the Helm chart)? Maybe with example code, or, if possible, some examples with helm test?
Thank you.
- name: Test deployment
  hosts: localhost
  gather_facts: false
  # vars:
  #   deployment_name: "pah-api"
  tasks:
    - name: Gather all deployments
      shell: oc get deployment -o template --template '{{"{{"}}range.items{{"}}"}}{{"{{"}}.metadata.name{{"}}"}}{{"{{"}}"\n"{{"}}"}}{{"{{"}}end{{"}}"}}'
      register: deployed_resources

    # - name: Print the output of deployments
    #   debug:
    #     var: deployed_resources.stdout_lines

    - name: Get deployment status
      shell: oc get deployment {{ item }} -o=jsonpath='{.status.readyReplicas}'
      with_items: "{{ deployed_resources.stdout_lines }}"
      register: deployment_status
      failed_when: deployment_status.rc != 0

    - name: Verify each deployment is running
      assert:
        that:
          - item.stdout != 'null'
          - item.stdout != '0'
        fail_msg: 'Deployment {{ item.item }} is not running.'
      with_items: "{{ deployment_status.results }}"
Currently I only check the deployments, but it would be nice to check all the resources I deployed with the Helm chart, either with Ansible or via helm test.
You could use the Ansible Helm module. The atomic parameter is available out of the box: https://docs.ansible.com/ansible/latest/collections/kubernetes/core/helm_module.html
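For illustration, a minimal sketch of such a task (the release name, chart reference, and namespace are placeholders):
- name: Deploy the chart and wait for all released resources to become ready
  kubernetes.core.helm:
    name: my-release                  # placeholder release name
    chart_ref: my-repo/my-chart       # placeholder chart reference
    release_namespace: my-namespace   # placeholder namespace
    atomic: true    # roll the release back automatically if it fails
    wait: true      # wait until the released resources are ready
    state: present
This mirrors helm upgrade --install ... --atomic --wait on the CLI.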
I installed Keycloak using the bitnami/keycloak Helm chart (https://bitnami.com/stack/keycloak/helm).
As I'm also using the Prometheus Operator for monitoring, I enabled the metrics endpoint and the service monitor:
keycloak:
  ...
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      namespace: monitoring
      additionalLabels:
        release: my-prom-operator-release
As I'm far more interested in actual realm metrics, I installed the keycloak-metrics-spi provider (https://github.com/aerogear/keycloak-metrics-spi) by setting up an init container that downloads it to a shared volume.
keycloak:
  ...
  extraVolumeMounts:
    - name: providers
      mountPath: /opt/bitnami/keycloak/providers
  extraVolumes:
    - name: providers
      emptyDir: {}
  ...
  initContainers:
    - name: metrics-spi-provider
      image: SOME_IMAGE_WITH_WGET_INSTALLED
      imagePullPolicy: Always
      command:
        - sh
      args:
        - -c
        - |
          KEYCLOAK_METRICS_SPI_VERSION=2.5.2
          wget --no-check-certificate -O /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar \
            https://github.com/aerogear/keycloak-metrics-spi/releases/download/${KEYCLOAK_METRICS_SPI_VERSION}/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          chmod +x /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          touch /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar.dodeploy
      volumeMounts:
        - name: providers
          mountPath: /providers
The provider enables metrics endpoints on the regular public-facing http port instead of the http-management port, which is not great for me, but I can block external access to them in my reverse proxy.
What I'm missing is some kind of auto-scraping of those endpoints. Right now I have created an additional template that creates a new ServiceMonitor for each element of a predefined list in my chart:
values.yaml
keycloak:
  ...
  metrics:
    extraServiceMonitors:
      - realmName: master
      - realmName: my-realm
servicemonitor-metrics-spi.yaml
{{- range $serviceMonitor := .Values.keycloak.metrics.extraServiceMonitors }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ $.Release.Name }}-spi-{{ $serviceMonitor.realmName }}
  ...
spec:
  endpoints:
    - port: http
      path: /auth/realms/{{ $serviceMonitor.realmName }}/metrics
  ...
{{- end }}
Is there a better way of doing this? So that Prometheus can auto-detect all my realms and scrape their endpoints?
Thanks in advance!
As commented by @jan-garaj, there is no need to query all the endpoints: they all return the accumulated data of all realms. So it is enough to scrape the endpoint of just one realm (e.g. the master realm).
Thanks a lot!
It might help someone: the Bitnami image, and therefore the Helm chart, already includes the metrics-spi provider, so no further installation is needed, but metrics must be enabled in the values.
I'm trying to write a simple Ansible playbook that can execute an arbitrary command against a pod (container) running in a Kubernetes cluster.
I would like to utilise the kubectl connection plugin (https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html), but I'm struggling to figure out how to actually do that.
A couple of questions:
Do I need to first have an inventory for k8s defined, something like https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html? My understanding is that I would define the kubeconfig via the inventory, which would then be used by the kubectl plugin to connect to the pods and perform the specific action.
If yes, is there any example of an arbitrary command executed via the kubectl plugin (not via the shell module invoking kubectl on some remote machine, which is not what I'm looking for)?
I'm assuming that, during the ansible-playbook invocation, I would point to the k8s inventory.
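Something like this is what I have in mind (untested; options taken from the inventory plugin docs linked above, and the file name and namespace are just examples):
# k8s.yml
plugin: community.kubernetes.k8s   # or just "k8s", depending on the Ansible version
connections:
  - kubeconfig: ~/.kube/config
    namespaces:
      - default
and then something like: ansible-playbook -i k8s.yml my-playbook.yml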
Thanks.
I would like to utilise the kubectl connection plugin (https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html), but I'm struggling to figure out how to actually do that.
The fine manual describes how one uses connection plugins, and while it is possible to use it in tasks, that is unlikely to make any sense unless your inventory started with Pods.
The way I have seen that connection used is to start by identifying the Pods against which you might want to take action, and then run a playbook against a unique group for that purpose:
- hosts: all
  tasks:
    - set_fact:
        # this is *just an example for brevity*
        # in reality you would use `k8s:` or `kubectl get -o name pods -l my-selector=my-value` to get the pod names
        pod_names:
          - nginx-12345
          - nginx-3456

    - add_host:
        name: '{{ item }}'
        groups:
          - my-pods
      with_items: '{{ pod_names }}'

- hosts: my-pods
  connection: kubectl
  tasks:
    # and now you are off to the races
    - command: ps -ef

    # watch out if the Pod doesn't have a working python installed,
    # as you will have to use raw: instead
    # (and, of course, set "gather_facts: no" on the play)
    - raw: ps -ef
First install the Kubernetes collection:
ansible-galaxy collection install community.kubernetes
Here is a playbook; it lists all pods in a namespace and runs a command in every pod:
---
- hosts: localhost
  vars_files:
    - vars/main.yaml
  collections:
    - community.kubernetes
  tasks:
    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '{{ k8s_kubeconfig }}'
        kind: Pod
        namespace: test
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - k8s_exec:
        kubeconfig: '{{ k8s_kubeconfig }}'
        namespace: "{{ namespace }}"
        pod: "{{ item.metadata.name }}"
        command: apt update
      with_items: "{{ pod_list.resources }}"
      register: exec
      loop_control:
        label: "{{ item.metadata.name }}"
Maybe you can use something like this:
- shell: |
    kubectl exec -i -n {{ namespace }} {{ pod_name }} -- bash -c 'clickhouse-client --query "INSERT INTO customer FORMAT CSV"
    --user=test --password=test < /mnt/azure/azure/test/test.tbl'
As per the latest documentation, you can use the kubernetes.core collection's k8s modules. The following are some examples:
- name: Create a k8s namespace
  kubernetes.core.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present

- name: Create a Service object from an inline definition
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: web
        namespace: testing
        labels:
          app: galaxy
          service: web
      spec:
        selector:
          app: galaxy
          service: web
        ports:
          - protocol: TCP
            targetPort: 8000
            name: port-8000-tcp
            port: 8000

- name: Remove an existing Service object
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Service
    namespace: testing
    name: web
I launch the GCP cluster with no problem, but I do not know how to get the k8s Ansible module to work. I would prefer to get the api_key to authenticate with the k8s module.
My playbook is the following:
- name: Hello k8s
  hosts: all
  tasks:
    - name: Create a cluster
      register: cluster
      gcp_container_cluster:
        name: thecluster
        initial_node_count: 1
        master_auth:
          username: admin
          password: TheRandomPassword
        node_config:
          machine_type: g1-small
          disk_size_gb: 10
          oauth_scopes:
            - "https://www.googleapis.com/auth/compute"
            - "https://www.googleapis.com/auth/devstorage.read_only"
            - "https://www.googleapis.com/auth/logging.write"
            - "https://www.googleapis.com/auth/monitoring"
        zone: europe-west3-c
        project: second-network-255214
        auth_kind: serviceaccount
        service_account_file: "{{ lookup('env', 'GOOGLE_CREDENTIALS') }}"
        state: present

    - name: Show results
      debug: var=cluster

    - name: Create temporary file for CA
      tempfile:
        state: file
        suffix: build
      register: ca_crt

    - name: Save content to file
      copy:
        content: "{{ cluster.masterAuth.clusterCaCertificate | b64decode }}"
        dest: "{{ ca_crt.path }}"

    - name: Create a k8s namespace
      k8s:
        host: "https://{{ cluster.endpoint }}"
        ca_cert: "{{ ca_crt.path }}"
        api_key: "{{ cluster.HOW_I_GET_THE_API_KEY }}"  # <<<-- Here is what I want!!!
        name: testing
        api_version: v1
        kind: Namespace
        state: present
Any idea?
I found a workaround, which is to call gcloud directly:
- name: Get JWT
  command: gcloud auth application-default print-access-token
  register: api_key
Obviously, I needed to:
Install gcloud.
Point the GOOGLE_APPLICATION_CREDENTIALS environment variable at the auth.json file.
The task calls gcloud directly to obtain the token, so there is no need to generate it by hand. I will try to add this feature as a module to Ansible for better interoperability with Kubernetes.
Once the token is obtained, it is possible to call the k8s module like this:
- name: Create ClusterRoleBinding
  k8s:
    state: present
    host: "https://{{ cluster.endpoint }}"
    ca_cert: "{{ ca_crt.path }}"
    api_version: rbac.authorization.k8s.io/v1
    api_key: "{{ api_key.stdout }}"
    definition:
      kind: ClusterRoleBinding
      metadata:
        name: kube-system_default_cluster-admin
      subjects:
        - kind: ServiceAccount
          name: default  # Name is case sensitive
          namespace: kube-system
      roleRef:
        kind: ClusterRole
        name: cluster-admin
        apiGroup: rbac.authorization.k8s.io
According to the fine manual, masterAuth contains two other fields, clientCertificate and clientKey, which correspond to the client_cert: and client_key: parameters, respectively. From that point you can authenticate to your cluster's endpoint as cluster-admin using the very, very strong credentials of the private key, and from there use the same k8s: task to provision yourself a cluster-admin ServiceAccount token if you wish to do that.
You can also apparently use masterAuth.username and masterAuth.password in the username: and password: parameters of k8s:, which should be just as safe since the credentials travel over HTTPS, but you seemed more interested in a higher-entropy authentication solution.
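For illustration, a sketch of the client certificate approach, assuming the masterAuth fields are first decoded to local files in the same way the CA was handled above (the /tmp paths are just placeholders):
- name: Save client certificate from masterAuth
  copy:
    content: "{{ cluster.masterAuth.clientCertificate | b64decode }}"
    dest: /tmp/client.crt   # placeholder path

- name: Save client key from masterAuth
  copy:
    content: "{{ cluster.masterAuth.clientKey | b64decode }}"
    dest: /tmp/client.key   # placeholder path
    mode: '0600'

- name: Create a k8s namespace using client certificate auth
  k8s:
    host: "https://{{ cluster.endpoint }}"
    ca_cert: "{{ ca_crt.path }}"
    client_cert: /tmp/client.crt
    client_key: /tmp/client.key
    name: testing
    api_version: v1
    kind: Namespace
    state: present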
Admins-MacBook-Pro:~ Harshin$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
I am following this document:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card
The error occurs while I am trying to test my configuration in step 11 of "Configure kubectl for Amazon EKS".
apiVersion: v1
clusters:
- cluster:
    server: ...
    certificate-authority-data: ....
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "kunjeti"
        # - "-r"
        # - "<role-arn>"
      # env:
      # - name: AWS_PROFILE
      #   value: "<aws-profile>"
Change "name: kubernetes" to actual name of your cluster.
Here is what I did to work through it:
1. Enabled verbose output to make sure the config files were read properly:
kubectl get svc --v=10
2. Modified the file as below:
apiVersion: v1
clusters:
- cluster:
    server: XXXXX
    certificate-authority-data: XXXXX
  name: abc-eks
contexts:
- context:
    cluster: abc-eks
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "abc-eks"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "aws"
I have faced a similar issue; this is not a direct solution but a workaround. Use AWS CLI commands to create the cluster rather than the console. As per the documentation, the user or role that creates the cluster will have master access.
aws eks create-cluster --name <cluster name> --role-arn <EKS Service Role> --resources-vpc-config subnetIds=<subnet ids>,securityGroupIds=<security group id>
Make sure that the EKS service role has IAM access (I have given full access, although AssumeRole should do, I guess).
The EC2 machine role should have eks:CreateCluster and IAM access. Worked for me :)
I had this issue and found it was caused by the default key setting in ~/.aws/credentials.
We have a few AWS accounts for different customers plus a sandbox account for our own testing and research. So our credentials file looks something like this:
[default]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[cpproto]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[sandbox]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
I was messing around in our sandbox account, but the [default] section was pointing to another account.
Once I put the keys for sandbox into the default section, the "kubectl get svc" command worked fine.
It seems we need a way to tell aws-iam-authenticator which keys to use, the same as --profile in the aws cli.
I guess you should uncomment the "env" item and change it to refer to the profile in ~/.aws/credentials, because aws-iam-authenticator requires the exact AWS credentials.
Refer this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
To have the AWS IAM Authenticator for Kubernetes always use a specific named AWS credential profile (instead of the default AWS credential provider chain), uncomment the env lines and substitute with the profile name to use.
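For illustration, the relevant users section with a named profile; the "sandbox" value is just an example matching a section in ~/.aws/credentials:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "abc-eks"
      env:
        - name: AWS_PROFILE
          value: "sandbox"   # example profile name from ~/.aws/credentials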