With manually installed Kubernetes, how to install and use addon manager?

With manually installed Kubernetes on CoreOS, how does one install and use the Kubernetes addon manager?
I've found references to the addon manager being the current standard way of installing Kubernetes addons, but I can't find any authoritative documentation on it. Hoping someone can help me out here.

The addon manager is deployed as a normal pod (or a deployment) with a simple kubectl apply -f.
The yaml looks something like this; check for the specific version that you need:
apiVersion: v1
kind: Pod
metadata:
  name: kube-addon-manager
  namespace: kube-system
  labels:
    component: kube-addon-manager
spec:
  hostNetwork: true
  containers:
  - name: kube-addon-manager
    # When updating version also bump it in:
    #   - cluster/images/hyperkube/static-pods/addon-manager-singlenode.json
    #   - cluster/images/hyperkube/static-pods/addon-manager-multinode.json
    #   - test/kubemark/resources/manifests/kube-addon-manager.yaml
    image: gcr.io/google-containers/kube-addon-manager:v6.4-beta.1
    command:
    - /bin/bash
    - -c
    - /opt/kube-addons.sh 1>>/var/log/kube-addon-manager.log 2>&1
    resources:
      requests:
        cpu: 5m
        memory: 50Mi
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: addons
      readOnly: true
    - mountPath: /var/log
      name: varlog
      readOnly: false
  volumes:
  - hostPath:
      path: /etc/kubernetes/
    name: addons
  - hostPath:
      path: /var/log
    name: varlog
The addon manager watches the yaml files under /etc/kubernetes/addons/; put any addon manifest you like there to install it.
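The addon manager expects each addon manifest to carry the addonmanager.kubernetes.io/mode label (Reconcile to keep the object in sync with the file, EnsureExists to create it only once). A minimal sketch, using a placeholder ConfigMap purely as an illustration:
# /etc/kubernetes/addons/example-addon.yaml (placeholder addon)
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon-config
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  greeting: "managed by kube-addon-manager"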

Related

kubernetes mongodb ops manager running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims

I am trying to configure MongoDB Ops Manager on Kubernetes. I have a PersistentVolumeClaim based on dynamic provisioning with CEPH and configured it successfully. What I am trying to do is define the volume mounts and volumes in the MongoDBOpsManager YAML file; I have tried different things but couldn't define them.
Here is my MongoDBOpsManager yaml file:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
# podSpec:
#   podTemplate:
#     spec:
#       containers:
#       - name: mongodb-enterprise-database
#         volumeMounts:
#         - name: mongo-persistent-storage
#           mountPath: /data/db
#       volumes:
#       - name: mongo-persistent-storage
#         persistentVolumeClaim:
#           claimName: mongo-pvc
spec:
  # the version of Ops Manager distro to use
  version: 4.2.4
  containers:
  - name: mongodb-ops-manager
    volumeMounts:
    - name: mongo-persistent-storage
      mountPath: /data/db
  volumes:
  - name: mongo-persistent-storage
    persistentVolumeClaim:
      claimName: mongo-pvc
  # the name of the secret containing admin user credentials.
  adminCredentials: ops-manager-admin-secret
  externalConnectivity:
    type: NodePort
  # the Replica Set backing Ops Manager.
  # appDB has the SCRAM-SHA authentication mode always enabled
  applicationDatabase:
    members: 3
  statefulSet:
    spec:
      # volumeClaimTemplates:letsChangeTheWorld1
      template:
        spec:
          containers:
          - name: mongodb-ops-manager
            volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
          volumes:
          - name: mongo-persistent-storage
            persistentVolumeClaim:
              claimName: mongo-pvc
I don't know where I should put the volume mounts and volume definitions.
The Ops Manager resource is created successfully, but when I check the pod created for it I find this error:
running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims
spec:
  containers:
  - image:
    ....
    volumeMounts:
    .....
  - image:
    ....
    volumeMounts:
    ......
  volumes:
  - name:
The volumes key should come parallel to containers: volumes are defined globally for all containers, while volume mounts are specific to each container.
Example: https://kubernetes.io/docs/concepts/storage/volumes/
Check with this once.
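For instance, a minimal sketch of that layout, reusing the names from the question (the pod metadata is illustrative, not part of the Ops Manager CRD):
apiVersion: v1
kind: Pod
metadata:
  name: volume-layout-example    # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data/db && sleep 3600"]
    volumeMounts:                # mounts are per container
    - name: mongo-persistent-storage
      mountPath: /data/db
  volumes:                       # volumes sit parallel to containers
  - name: mongo-persistent-storage
    persistentVolumeClaim:
      claimName: mongo-pvc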

Mount / copy a file from host to Pod in kubernetes using minikube

I'm writing a kubectl configuration to start an image and copy a file to the container.
I need the file Config.yaml in /, so /Config.yaml needs to be a valid file.
I need that file in the Pod before it starts, so kubectl cp does not work.
I have the Config2.yaml in my local folder, and I'm starting the pod like:
kubectl apply -f pod.yml
Here follows my pod.yml file.
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
  - name: python
    image: mypython
    volumeMounts:
    - name: config
      mountPath: /Config.yaml
  volumes:
  - name: config
    hostPath:
      path: Config2.yaml
      type: File
If I try to use it like this, it also fails:
- name: config-yaml
  mountPath: /
  subPath: Config.yaml
  #readOnly: true
If you just need the information contained in the config.yaml to be present in the pod from the time it is created, use a ConfigMap instead.
Create a ConfigMap that contains all the data stored in config.yaml and mount it into the correct path in the pod. This would not work for read/write, but it works wonderfully for read-only data.
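A minimal sketch of that approach, with a placeholder ConfigMap name and contents (the ConfigMap could also be generated with kubectl create configmap --from-file); mounting with subPath keeps the rest of / untouched:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # placeholder name
data:
  Config.yaml: |
    # contents of your Config2.yaml go here
    key: value
---
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
  - name: python
    image: mypython
    volumeMounts:
    - name: config
      mountPath: /Config.yaml    # mount just the one file, not all of /
      subPath: Config.yaml
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: app-config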
You can try a postStart lifecycle handler here to validate the file before the pod starts.
Please refer here
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
    volumeMounts:
    - mountPath: /config.yaml
      name: config
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "apt update && apt install yamllint -y && yamllint /config.yaml"]
  volumes:
  - name: config
    hostPath:
      path: /tmp/config.yaml
      type: File
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
If config.yaml is invalid, the pod won't start.

Is it possible to run gcsfuse without privileged mode inside GCP kubernetes?

Following this guide, I am trying to run gcsfuse inside a pod in GKE. Below is the deployment manifest that I am using:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gcsfuse-test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gcsfuse-test
    spec:
      containers:
      - name: gcsfuse-test
        image: gcr.io/project123/gcs-test-fuse:latest
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        lifecycle:
          postStart:
            exec:
              command: ["mkdir", "/mnt"]
              command: ["gcsfuse", "-o", "nonempty", "cloudsql-p8p", "/mnt"]
          preStop:
            exec:
              command: ["fusermount", "-u", "/mnt"]
However, I would like to run gcsfuse without the privileged mode inside my GKE Cluster.
I think (because of questions like these on SO) it is possible to run the Docker image with certain flags so that there is no need to run it in privileged mode.
Is there any way in GKE to run gcsfuse without running the container in privileged mode?
Edit Apr 26, 2022: for a further developed repo derived from this answer, see https://github.com/samos123/gke-gcs-fuse-unprivileged
Now it is finally possible to mount devices without privileged: true or CAP_SYS_ADMIN!
What you need is:
A kubelet device manager, which allows containers to have direct access to host devices in a secure way. The device manager exposes the explicitly listed devices via the Kubelet Device API. I used this hidden gem: https://gitlab.com/arm-research/smarter/smarter-device-manager.
A list of devices provided by the Device Manager: add /dev/YOUR_DEVICE_NAME to this list, see the example below.
A device request via the Device Manager in the pod spec: resources.requests.smarter-devices/YOUR_DEVICE_NAME: 1
I spent quite some time figuring this out, so I hope sharing the information here will save someone else the exploration.
I wrote up my detailed findings in the Kubernetes GitHub issue about /dev/fuse. See an example setup in this comment and more technical details above that one.
Examples from the comment linked above:
Allow FUSE devices via Device Manager:
apiVersion: v1
kind: ConfigMap
metadata:
  name: smarter-device-manager
  namespace: device-manager
data:
  conf.yaml: |
    - devicematch: ^fuse$
      nummaxdevices: 20
Request /dev/fuse via Device Manager:
# Pod spec:
resources:
  limits:
    smarter-devices/fuse: 1
    memory: 512Mi
  requests:
    smarter-devices/fuse: 1
    cpu: 10m
    memory: 50Mi
Device Manager as a DaemonSet:
# https://gitlab.com/arm-research/smarter/smarter-device-manager/-/blob/master/smarter-device-manager-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: smarter-device-manager
  namespace: device-manager
  labels:
    name: smarter-device-manager
    role: agent
spec:
  selector:
    matchLabels:
      name: smarter-device-manager
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: smarter-device-manager
      annotations:
        node.kubernetes.io/bootstrap-checkpoint: "true"
    spec:
      ## kubectl label node pike5 smarter-device-manager=enabled
      # nodeSelector:
      #   smarter-device-manager: enabled
      priorityClassName: "system-node-critical"
      hostname: smarter-device-management
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: smarter-device-manager
        image: registry.gitlab.com/arm-research/smarter/smarter-device-manager:v1.1.2
        imagePullPolicy: IfNotPresent
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        resources:
          limits:
            cpu: 100m
            memory: 15Mi
          requests:
            cpu: 10m
            memory: 15Mi
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
        - name: dev-dir
          mountPath: /dev
        - name: sys-dir
          mountPath: /sys
        - name: config
          mountPath: /root/config
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
      - name: dev-dir
        hostPath:
          path: /dev
      - name: sys-dir
        hostPath:
          path: /sys
      - name: config
        configMap:
          name: smarter-device-manager
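Putting the pieces together, a hedged sketch of how the gcsfuse container from the question might look once the device plugin is running; the image and bucket names are the asker's placeholders, and neither privileged: true nor extra capabilities are requested:
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-test
spec:
  containers:
  - name: gcsfuse-test
    image: gcr.io/project123/gcs-test-fuse:latest      # asker's image, used as a placeholder
    command: ["gcsfuse", "--foreground", "-o", "nonempty", "cloudsql-p8p", "/mnt"]
    resources:
      limits:
        smarter-devices/fuse: 1                        # the device plugin injects /dev/fuse
        memory: 512Mi
      requests:
        smarter-devices/fuse: 1
        cpu: 10m
        memory: 50Mi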
Privileged mode means you have all the capabilities enabled, see https://stackoverflow.com/a/36441605. So adding CAP_SYS_ADMIN looks redundant here in your example.
You can either give all the privileges or do something more fine-grained by mounting /dev/fuse and giving only SYS_ADMIN capability (which remains an important permission).
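A hedged sketch of that more fine-grained option, reusing the asker's image and bucket names: /dev/fuse is passed in via a hostPath mount and only SYS_ADMIN is added instead of full privileged mode.
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-sysadmin-example                       # illustrative name
spec:
  containers:
  - name: gcsfuse-test
    image: gcr.io/project123/gcs-test-fuse:latest      # asker's image, used as a placeholder
    command: ["gcsfuse", "--foreground", "-o", "nonempty", "cloudsql-p8p", "/mnt"]
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]                             # still a powerful capability, as noted above
    volumeMounts:
    - name: fuse-device
      mountPath: /dev/fuse
  volumes:
  - name: fuse-device
    hostPath:
      path: /dev/fuse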
I think we can rephrase the question as: can we run gcsfuse without the SYS_ADMIN capability?
Actually, it does not look feasible; you can find the related Docker issue here: https://github.com/docker/for-linux/issues/321.
For most projects this won't be a hard no-go. You may want to weigh it against your threat model and decide whether or not it is an acceptable security risk for your production environment.

Hosting local directory to Kubernetes Pod

I have a single node Kubernetes cluster. I want the pod I make to have access to /mnt/galahad on my local computer (which is the host for the cluster).
Here is my Kubernetes config yaml:
apiVersion: v1
kind: Pod
metadata:
  name: galahad-test-distributor
  namespace: galahad-test
spec:
  volumes:
  - name: place-for-stuff
    hostPath:
      path: /mnt/galahad
  containers:
  - name: galahad-test-distributor
    image: vergilkilla/distributor:v9
    volumeMounts:
    - name: place-for-stuff
      mountPath: /mnt
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
I start my pod like such:
kubectl apply -f ./create-distributor.yaml -n galahad-test
I get a terminal into my newly-made pod:
kubectl exec -it galahad-test-distributor -n galahad-test -- /bin/bash
I go to /mnt in my pod and it doesn't have anything from /mnt/galahad. I make a new file in the host's /mnt/galahad folder, but it doesn't show up in the pod. How do I get the host path's files to be reflected in the pod? Is this possible in the somewhat straightforward way I am trying here (defining it per pod definition, without creating separate PersistentVolumes and PersistentVolumeClaims)?
Your yaml file looks good.
Using this configuration:
apiVersion: v1
kind: Pod
metadata:
  name: galahad-test-distributor
  namespace: galahad-test
spec:
  volumes:
  - name: place-for-stuff
    hostPath:
      path: /mnt/galahad
  containers:
  - name: galahad-test-distributor
    image: busybox
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: place-for-stuff
      mountPath: /mnt
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
I ran this and everything worked as expected:
>>> kubectl apply -f create-distributor.yaml   # side note: you don't need
                                               # to specify the namespace here
                                               # since it's inside the yaml file
pod/galahad-test-distributor created
>>> touch /mnt/galahad/file
>>> kubectl -n galahad-test exec galahad-test-distributor ls /mnt
file
Are you sure you are adding your files in the right place? For instance, if you are running your cluster inside a VM (e.g. minikube), make sure you are adding the files inside the VM, not on the machine hosting the VM.

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my yaml config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
Looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the redirection actually writes the file
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}   # any writable volume type works here; a hostPath would need a path
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
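Following the same pre-population idea, a hedged sketch of the original Job with an init container writing hello.txt into a shared emptyDir before the main container lists it (the file content here is invented purely for illustration):
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  template:
    spec:
      initContainers:
      - name: seed-data
        image: busybox
        # pre-populate the shared volume before the main container runs
        command: ["sh", "-c", "echo hello > /testing/hello.txt"]
        volumeMounts:
        - name: test-volume
          mountPath: /testing
      containers:
      - name: kio-ingester
        image: busybox
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
        volumeMounts:
        - name: test-volume
          mountPath: /testing
      volumes:
      - name: test-volume
        emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4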