I would like to apply the Contour ingress (https://projectcontour.io/) to my Kubernetes cluster with Ansible's community.kubernetes.k8s module.
The example on https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html shows me how to apply local files, for instance:
- name: Create a Deployment by reading the definition from a local file
  community.kubernetes.k8s:
    state: present
    src: /testing/deployment.yml
However, I could not find an example with a remote file.
With kubectl, deploying the Contour ingress can be done as follows:
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
When I read the docs, I don't see an option to do that.
You could download the file first, then apply it:
- name: retrieve file
  get_url:
    url: https://projectcontour.io/quickstart/contour.yaml
    dest: /testing/contour.yaml
  register: download_contour

- name: create deployment
  community.kubernetes.k8s:
    state: present
    src: /testing/contour.yaml
  when: download_contour.changed
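Alternatively, the download step can be skipped by feeding the remote manifest to the module's definition parameter via the url lookup. A sketch (untested against a live cluster; split_lines=False keeps the multi-document file as one string, and from_yaml_all parses each document):

- name: Apply the remote Contour manifest directly
  community.kubernetes.k8s:
    state: present
    definition: "{{ lookup('url', 'https://projectcontour.io/quickstart/contour.yaml', split_lines=False) | from_yaml_all | list }}"

Note that this loses the changed-detection of get_url: the lookup fetches the file on every run.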
I use kubebuilder to quickly develop Kubernetes operators, and I currently save the YAML rendered by kustomize to a file in the following way:
create: manifests kustomize ## Create chart
	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
	$(KUSTOMIZE) build config/default --output yamls
In the output I found a ConfigMap, but it is not referenced by any other resource:
apiVersion: v1
data:
  controller_manager_config.yaml: |
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 31568e44.ys7.com
kind: ConfigMap
metadata:
  name: myoperator-manager-config
  namespace: myoperator-system
I am a little curious what it does. Can I delete it?
I really appreciate any help with this.
It is another way to provide controller configuration; see https://book.kubebuilder.io/component-config-tutorial/custom-type.html.
To take effect, however, it has to be mounted into your deployment, and the file path must be passed via the --config flag (the name of the flag can differ in your case; "config" is what the tutorial uses, so it depends on your code).
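As a sketch of that wiring (resource and key names are taken from the ConfigMap above; the container name and mount path are assumptions), the manager Deployment would look roughly like:

spec:
  template:
    spec:
      containers:
        - name: manager                  # hypothetical container name
          args:
            - --config=/controller_manager_config.yaml
          volumeMounts:
            - name: manager-config
              mountPath: /controller_manager_config.yaml
              subPath: controller_manager_config.yaml
      volumes:
        - name: manager-config
          configMap:
            name: myoperator-manager-config

If nothing mounts the ConfigMap and no --config flag is passed, deleting it should have no effect on the running operator.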
I'm trying to automate the following:
Apply the Persistent Volumes
kubectl apply -f food-pv.yaml
kubectl apply -f bar-pv.yaml
Apply the Persistent Volume Claims
kubectl apply -f foo.yaml
kubectl apply -f bar.yaml
Apply the Services
kubectl apply -f this-service.yaml
kubectl apply -f that-nodeport.yaml
Apply the Deployment
kubectl apply -f something.yaml
Now I could run these as shell commands, but I don't think that's the proper way to do it. I've been reading through the Ansible documentation, but I'm not seeing what I need. Is there a better way to apply these YAML files without using a series of shell commands?
Thanks in advance.
The best way to do this is to use the Ansible kubernetes.core collection.
An example with a file:
- name: Create a Deployment by reading the definition from a local file
  kubernetes.core.k8s:
    state: present
    src: /testing/deployment.yml
So you could loop over the different folders containing the YAML definitions for your objects with state: present.
I don't currently have a running kube cluster to test this against, but you should basically be able to run all of this in a single task with a loop, using the kubernetes.core.k8s module.
Here is what I believe should meet your requirement (provided access to your kube instance is configured in your environment and you installed the above collection as described in the documentation):
- name: install my kube objects
  hosts: localhost
  gather_facts: false

  vars:
    obj_def_path: /path/to/your/obj_def_dir/
    obj_def_list:
      - food-pv.yaml
      - bar-pv.yaml
      - foo.yaml
      - bar.yaml
      - this-service.yaml
      - that-nodeport.yaml
      - something.yaml

  tasks:
    - name: Install all objects from def files
      kubernetes.core.k8s:
        src: "{{ obj_def_path }}/{{ item }}"
        state: present
        apply: true
      loop: "{{ obj_def_list }}"
I'm writing an Ansible playbook that pulls projects from Git repos and then applies all of the pulled YAMLs to a Kubernetes cluster.
I only see examples of applying single YAML files to a Kubernetes cluster, not multiple ones at once, e.g.:
- name: Apply metrics-server manifest to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/metrics-server.yaml
Is there any way of applying multiple yaml files? Something like:
- name: Apply Services to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/svc-*.yaml
Or:
- name: Apply Ingresses to the cluster.
  community.kubernetes.k8s:
    state: present
    dir: ~/ing/
Is there maybe another Ansible K8s module I should be looking at?
Or should we just run kubectl commands directly in Ansible tasks, e.g.:
- name: Apply Ingresses to the cluster.
  command: kubectl apply -f ~/ing/*.yaml
What is the best way to achieve this using Ansible?
You can use the k8s Ansible module together with the with_fileglob pattern (note that fileglob matches files in a single directory; it does not recurse into subdirectories). The code below should work for your requirement:
- name: Apply K8s resources
  k8s:
    state: present
    definition: "{{ lookup('template', item) | from_yaml }}"
  with_fileglob:
    - "~/ing/*.yaml"
I'm new to OpenShift and to Kubernetes too (coming from the Docker Swarm world), and I'm looking to create secrets in OpenShift using a definition file; these secrets are generated from a file. To give an example of what I'm trying to do: say I have a file apache.conf, and I want to add that file to containers as a secret mounted as a volume. In Swarm I can just write the following in the stack file:
my-service:
  secrets:
    - source: my-secret
      target: /home/myuser/
      mode: 0700

secrets:
  my-secret:
    file: /from/host/apache.conf
In OpenShift I'm looking for something similar, like:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
files:
  - "/from/host/apache.conf"
type: Opaque
The only way I've found to do something similar is by using Kustomize, and according to this post, using Kustomize with OpenShift is cumbersome. Is there a better way of creating secrets from a file?
No, you can't.
The reason is that the Secret object is stored in the etcd database and is not bound to any host; therefore, the object doesn't understand the path.
You can, however, create the secret from a file using the CLI, and the content will then be saved in the Secret object:
oc create secret generic my-secret --from-file=file1=pullsecret_private.json
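Since a Secret's data values are just base64-encoded file contents, a manifest can also be assembled by hand and then applied with oc apply -f. A runnable sketch (the sample apache.conf content below is made up for illustration):

```shell
# Create a sample config file (illustrative content only).
printf 'ServerName example.com\n' > apache.conf

# Build a Secret manifest by base64-encoding the file's contents.
cat > my-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  apache.conf: $(base64 < apache.conf | tr -d '\n')
EOF

cat my-secret.yaml
```

With --dry-run=client -o yaml, the oc create secret command above produces the same kind of manifest without contacting the cluster.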
In K8s, what is the best way to execute scripts in a container (pod) once at deployment, which read from configuration files that are part of the deployment, and seed e.g. MongoDB once?
My project consists of K8s manifest files plus configuration files.
I would like to be able to update the config files locally and then redeploy via kubectl or Helm.
In docker-compose I could create a volume pointing at the directory where the config files reside and then, in the command part, execute bash -c commands reading from the config files in the volume. How is this best done in K8s? I don't want to include the configuration files in an image via a Dockerfile, forcing me to rebuild the image before redeploying again via kubectl or Helm.
How is this best done in K8S?
There are several ways to skin a cat, but my suggestion would be to do the following:
Keep configuration in a configMap and mount it as a separate volume. Such a map is kept as a K8s manifest, keeping all changes to it separate from the docker image build: no need to rebuild the image or keep sensitive data within it. You can also use a secret in the same manner as a configMap (instead of it, or together with it).
Use initContainers to do the initialization before the main container is brought online, which covers your 'once on deployment' requirement automatically. Alternatively (if the init operation is not repeatable) you can use a Job instead and start it when necessary.
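A minimal sketch of the initContainers approach for the MongoDB seeding case (the image, script, service, and ConfigMap names here are all hypothetical):

spec:
  initContainers:
    - name: seed-mongo
      image: mongo:6                    # hypothetical image
      command: ["bash", "-c", "mongosh --host mongodb /seed/seed.js"]
      volumeMounts:
        - name: seed-scripts
          mountPath: /seed
  volumes:
    - name: seed-scripts
      configMap:
        name: cm-seed-scripts           # hypothetical ConfigMap holding seed.js

The main container only starts after every initContainer exits successfully, so the seed runs to completion first on each deployment.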
Here is an excerpt of an example we are using on a GitLab runner:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-my-project
spec:
  ...
  template:
    ...
    spec:
      ...
      volumes:
        - name: volume-from-config-map-config-files
          configMap:
            name: cm-my-config-files
        - name: volume-from-config-map-script
          projected:
            sources:
              - configMap:
                  name: cm-my-scripts
                  items:
                    - key: run.sh
                      path: run.sh
                      mode: 0755
      # if you need to run as non-root, here is how it is done:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        supplementalGroups: [999]
      containers:
        - image: ...
          name: ...
          command:
            - /scripts/run.sh
          ...
          volumeMounts:
            - name: volume-from-config-map-script
              mountPath: "/scripts"
              readOnly: true
            - mountPath: /usr/share/my-app-config/config.file
              name: volume-from-config-map-config-files
              subPath: config.file
  ...
You can, of course, mount several volumes from config maps or combine them into a single one, depending on the frequency of your changes and the affected parts. This is an example with two separately mounted configMaps, just to illustrate the principle (and to mark the script executable), but you can use only one for all required files, put several files into one, or put a single file into each, as per your need.
An example of such a configMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-my-scripts
data:
  run.sh: |
    #!/bin/bash
    echo "Doing some work here..."
And an example of the configMap covering the config file:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-my-config-files
data:
  config.file: |
    ---
    # Some config.file (example name) required in project
    # in whatever format config file actually is (just example)
    ... (here is actual content like server.host: "0" or EFG=True or whatever)
Playing with single or multiple files in configMaps can yield the result you want; depending on your needs, you can have as many or as few as you like.
In docker-compose I could create a volume pointing at the directory where the config files reside and then, in the command part, execute bash -c commands reading from the config files in the volume.
The K8s equivalent of this would be hostPath, but that would seriously hamper K8s' ability to schedule pods to different nodes. It might be OK on a single-node cluster (or while developing) to ease changing config files, but for actual deployments the approach above is advised.