Does anyone know whether it is possible to use an environment variable or a ConfigMap in the hostPath of a PersistentVolume? I found that it's possible with Helm, envsubst, etc., but I want to use only built-in Kubernetes features.
I need to create a volume whose path is not static.
Here is my PV:
apiVersion: v1
kind: PersistentVolume
metadata:
name: some-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "${PATH_FROM_ENV}/some-path"
You can't do it natively, but a Kubernetes Job that reads from a ConfigMap can do it for you.
We will create a Job with the proper RBAC permissions; it uses the kubectl image, reads the path from the ConfigMap, and substitutes it into the PV manifest before applying it.
Here are the manifests:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pv-generator-role
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["persistentvolumes"]
verbs: ["create"]
- apiGroups: [""] # "" indicates the core API group
resources: ["configmaps"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pv-generator-role-binding
subjects:
- kind: ServiceAccount
name: pv-generator-sa
namespace: default
roleRef:
kind: ClusterRole
name: pv-generator-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pv-generator-sa
---
apiVersion: batch/v1
kind: Job
metadata:
name: pv-generator
spec:
template:
spec:
serviceAccountName: pv-generator-sa
containers:
- name: kubectl
image: bitnami/kubectl
command:
- sh
- "-c"
- |
/bin/bash <<'SCRIPT'
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
name: some-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: $(kubectl get cm path-configmap -ojsonpath="{.data.path}")/some-path
EOF
SCRIPT
restartPolicy: Never
backoffLimit: 4
---
apiVersion: v1
kind: ConfigMap
metadata:
name: path-configmap
namespace: default
data:
path: /mypath
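Once the Job has completed, you can verify that the PV was created with the path taken from the ConfigMap, for example:
kubectl get pv some-pv -o jsonpath='{.spec.hostPath.path}'
With the ConfigMap above this should print /mypath/some-path.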
I'm very confused. I used an EC2 instance to bootstrap an EKS cluster and everything worked completely fine yesterday. I deleted that cluster last night, spun up a new one, and now I'm getting this error when trying to build my Jenkins pod:
Error: stat /mnt/jenkins-store: no such file or directory
I find it strange that this error didn't show up yesterday, since I set everything up the exact same way today. The error above is what I got when I described my Jenkins pod.
Here's my jenkins.yaml file for reference:
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: jenkins
namespace: default
rules:
- apiGroups: [""]
resources: ["pods","services"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins
subjects:
- kind: ServiceAccount
name: jenkins
---
# Allows jenkins to create persistent volumes
# This cluster role binding grants the jenkins service account the cluster role defined below.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: jenkins-crb
subjects:
- kind: ServiceAccount
namespace: default
name: jenkins
roleRef:
kind: ClusterRole
name: jenkinsclusterrole
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: jenkinsclusterrole
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: default
spec:
selector:
matchLabels:
app: jenkins
replicas: 1
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
env:
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-home
mountPath: /var
subPath: jenkins_home
- name: docker-sock-volume
mountPath: "/var/run/docker.sock"
imagePullPolicy: Always
volumes:
# This allows jenkins to use the docker daemon on the host, for running builds
# see https://stackoverflow.com/questions/27879713/is-it-ok-to-run-docker-from-inside-docker
- name: docker-sock-volume
hostPath:
path: /var/run/docker.sock
- name: jenkins-home
hostPath:
path: /mnt/jenkins-store
serviceAccountName: jenkins
---
apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: default
spec:
type: NodePort
ports:
- name: ui
port: 8080
targetPort: 8080
nodePort: 31000
- name: jnlp
port: 50000
targetPort: 50000
selector:
app: jenkins
---
Similarly to this answer, check whether the ec2-run-instances call that launched your new EC2 instance includes the volumes and paths that your jenkins.yaml expects to find on the node.
The lack of such a volume would explain the Error: stat /mnt/jenkins-store message.
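If the directory simply does not exist on the new node, one option is to let the kubelet create it by giving the hostPath a type. A minimal sketch against the jenkins-home volume from the manifest above:
- name: jenkins-home
  hostPath:
    path: /mnt/jenkins-store
    type: DirectoryOrCreate  # the kubelet creates the directory on the node if it is missing
Alternatively, SSH to the new node and create /mnt/jenkins-store by hand before the pod is scheduled.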
I want a deployment in Kubernetes to have permission to restart itself from within the cluster.
I know I can create a service account and bind it to the pod, but I'm missing the name of the most specific permission (i.e. not just allowing '*') needed to allow the command
kubectl rollout restart deploy <deployment>
Here's what I have; ??? is what I'm missing:
apiVersion: v1
kind: ServiceAccount
metadata:
name: restart-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: restarter
rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["list", "???"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: testrolebinding
namespace: default
subjects:
- kind: ServiceAccount
name: restart-sa
namespace: default
roleRef:
kind: Role
name: restarter
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
name: example
spec:
containers:
- image: nginx
name: nginx
serviceAccountName: restart-sa
I believe the following is the minimum set of permissions required to restart a deployment:
rules:
- apiGroups: ["apps", "extensions"]
resources: ["deployments"]
resourceNames: [$DEPLOYMENT]
verbs: ["get", "patch"]
If you want a Kubernetes deployment to be able to restart itself from within the cluster, you need to grant that permission through RBAC authorization.
In your YAML file you are missing some permissions under the Role's rules; you need to add verbs in the format below (patch is what rollout restart actually requires):
verbs: ["get", "watch", "list", "patch"]
Instead of a Pod, you need to create a Deployment in the YAML file.
Make sure that you add serviceAccountName: restart-sa to the Deployment's pod template spec, at the same level as containers, as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
serviceAccountName: restart-sa
Then you can restart the deployment using the below command:
$ kubectl rollout restart deployment [deployment_name]
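Under the hood, kubectl rollout restart only patches the pod template with a kubectl.kubernetes.io/restartedAt annotation, which is why get and patch on deployments are the verbs that matter. A roughly equivalent call (the timestamp is just an example) is:
kubectl patch deployment nginx-deployment -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"2023-01-01T00:00:00Z"}}}}}'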
I am getting this error -
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: serviceaccounts "simpleapi" is forbidden: User "system:serviceaccount:management:gitlab-admin" cannot get resource "serviceaccounts" in API group "" in the namespace "services"
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: gitlab-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: gitlab
namespace: kube-system
- kind: ServiceAccount
name: gitlab
namespace: services
I am using this RBAC setup with cluster-admin. Why am I getting this? I also tried the below but still got the same issue. Can someone explain what it is that I am doing wrong here?
apiVersion: rbac.authorization.k8s.io/v1
kind: "ClusterRole"
metadata:
name: gitlab-admin
labels:
app: gitlab-admin
rules:
- apiGroups: ["*"] # also tested with ""
resources:
[
"replicasets",
"pods",
"pods/exec",
"secrets",
"configmaps",
"services",
"deployments",
"ingresses",
"horizontalpodautoscalers",
"serviceaccounts",
]
verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: "ClusterRoleBinding"
metadata:
name: gitlab-admin-global
labels:
app: gitlab-admin
roleRef:
apiGroup: "rbac.authorization.k8s.io"
kind: "ClusterRole"
name: cluster-admin
subjects:
- kind: ServiceAccount
name: gitlab-admin
namespace: management
- kind: ServiceAccount
name: gitlab-admin
namespace: services
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-admin
namespace: management
labels:
app: gitlab-admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-admin
namespace: services
labels:
app: gitlab-admin
So here is what happened: I needed to run this from inside the namespace, i.e.
I changed the config to run from the management namespace itself.
kubectl config set-context --current --namespace=management
And then
kubectl apply -f gitlab-admin.yaml
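To confirm what the service account from the error message can actually do in the target namespace, a check along these lines can help:
kubectl auth can-i get serviceaccounts -n services --as=system:serviceaccount:management:gitlab-admin
It should print yes once the ClusterRoleBinding covers that service account.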
I am trying to set up Istio 1.5.1 in a minikube Kubernetes cluster, following the official Knative documentation for setting up Istio without sidecar injection. I'm facing an issue with the istio-ingressgateway service: its external IP is stuck at <pending>.
I have gone through other answers posted here as well as many other forums but none of them helped in my case.
Using minikube v1.9.1 with driver=none
helm v2.16.5
kubectl v1.18.0
I am getting the following output for:
kubectl get pods --namespace istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-b599cccd9-qnp5l 1/1 Running 0 60s
istio-pilot-b67ccb85-mfllc 1/1 Running 0 60s
kubectl get svc --namespace istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.104.37.189    <pending>     15020:30168/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32576/TCP,15030:31080/TCP,15031:31767/TCP,15032:31812/TCP,15443:30660/TCP   74s
istio-pilot            ClusterIP      10.100.224.212   <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       74s
On describing the ingress gateway pod I am getting the warning Readiness probe failed: HTTP probe failed with statuscode: 503.
Can someone help me solve this issue?
Thanks!
Updating with output from trying out the answer:
kubectl apply -f metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
$ kubectl get pods -n metallb-system
No resources found in metallb-system namespace.
After applying the YAML file the output shows that everything is created, but no pods are deployed in the metallb-system namespace.
Minikube does not provide an external IP or load balancer out of the box; you may have to use MetalLB in minikube.
MetalLB: https://metallb.universe.tf/
You can also check this out for reference: https://medium.com/@emirmujic/istio-and-metallb-on-minikube-242281b1134b
This is also a good reference: https://gist.github.com/diegopacheco/9ed4fd9b9a0f341e94e0eb791169ecf9
MetalLB YAML:
apiVersion: v1
kind: Namespace
metadata:
name: metallb-system
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:controller
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["services/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:speaker
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services", "endpoints", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create"]
---
## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:controller
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:speaker
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: speaker
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
- kind: ServiceAccount
name: speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: config-watcher
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
component: speaker
spec:
selector:
matchLabels:
app: metallb
component: speaker
template:
metadata:
labels:
app: metallb
component: speaker
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "7472"
spec:
serviceAccountName: speaker
terminationGracePeriodSeconds: 0
hostNetwork: true
containers:
- name: speaker
image: metallb/speaker:v0.7.1
imagePullPolicy: IfNotPresent
args:
- --port=7472
- --config=config
env:
- name: METALLB_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- name: monitoring
containerPort: 7472
resources:
limits:
cpu: 100m
memory: 100Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- all
add:
- net_raw
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
component: controller
spec:
revisionHistoryLimit: 3
selector:
matchLabels:
app: metallb
component: controller
template:
metadata:
labels:
app: metallb
component: controller
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "7472"
spec:
serviceAccountName: controller
terminationGracePeriodSeconds: 0
securityContext:
runAsNonRoot: true
runAsUser: 65534 # nobody
containers:
- name: controller
image: metallb/controller:v0.7.1
imagePullPolicy: IfNotPresent
args:
- --port=7472
- --config=config
ports:
- name: monitoring
containerPort: 7472
resources:
limits:
cpu: 100m
memory: 100Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
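Note that the speaker and controller are started with --config=config, so MetalLB also needs a ConfigMap named config in the metallb-system namespace before it will assign an address to the LoadBalancer service. A minimal Layer 2 sketch (the address range is an assumption and must be a free range on your minikube host network):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.110  # assumption: adjust to an unused range in your minikube network
Once this ConfigMap is applied, the istio-ingressgateway EXTERNAL-IP should move from <pending> to an address in that range.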
I have a Pod or Job YAML spec file (which I can edit) and I want to launch it from my local machine (e.g. using kubectl create -f my_spec.yaml).
The spec declares a volume mount. There is a file in that volume that I want to use as the value of an environment variable.
I want the contents of that file to end up in the environment variable, without jumping through hoops by somehow "downloading" the file to my local machine and inserting it into the spec.
P.S. It's obvious how to do this if you control the container's command. But when launching an arbitrary image, I have no control over the command attribute because I do not know it.
apiVersion: batch/v1
kind: Job
metadata:
generateName: puzzle
spec:
template:
spec:
containers:
- name: main
image: arbitrary-image
env:
- name: my_var
valueFrom: <Contents of /mnt/my_var_value.txt>
volumeMounts:
- name: my-vol
mountPath: /mnt
volumes:
- name: my-vol
persistentVolumeClaim:
claimName: my-pvc
You can create a Deployment running an endless kubectl loop that constantly polls the volume and updates a ConfigMap from it. After that you can consume the created ConfigMap in your pod. It's a little bit hacky, but it works and keeps the ConfigMap updated automatically. The only requirement is that the PV must be ReadWriteMany or ReadOnlyMany (in the latter case you can mount it read-only in all pods).
apiVersion: v1
kind: ServiceAccount
metadata:
name: cm-creator
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: cm-creator
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create", "update", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cm-creator
namespace: default
subjects:
- kind: User
name: system:serviceaccount:default:cm-creator
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: cm-creator
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cm-creator
namespace: default
labels:
app: cm-creator
spec:
replicas: 1
selector:
matchLabels:
app: cm-creator
template:
metadata:
labels:
app: cm-creator
spec:
serviceAccountName: cm-creator
containers:
- name: cm-creator
image: bitnami/kubectl
command:
- /bin/bash
- -c
args:
- |
  while true; do
    kubectl create cm myconfig --from-file=my_var=/mnt/my_var_value.txt --dry-run=client -o yaml | kubectl apply -f -
    sleep 60
  done
volumeMounts:
- name: my-vol
mountPath: /mnt
readOnly: true
volumes:
- name: my-vol
persistentVolumeClaim:
claimName: my-pvc
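Once the loop has created the myconfig ConfigMap, the original Job can pull the value into its environment variable with a configMapKeyRef, for example (a sketch reusing the names from the question):
apiVersion: batch/v1
kind: Job
metadata:
  generateName: puzzle
spec:
  template:
    spec:
      containers:
      - name: main
        image: arbitrary-image  # placeholder from the question
        env:
        - name: my_var
          valueFrom:
            configMapKeyRef:
              name: myconfig
              key: my_var
      restartPolicy: Never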