Write to Secret file in pod - kubernetes

I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then I create a volume mount in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
      - name: secret-volume
        secret:
          secretName: string-data-secret
      containers:
      - name: k8s-webapp-test
        image: dockerstore/k8s-webapp-test:1.0.4
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-volume
          mountPath: "/secrets"
          readOnly: false
So, after the deployment, I have 2 pods with volume mounts at C:\secrets (I use Windows nodes). When I try to edit config.yaml, located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I set readOnly: false on the mount, I cannot write to the file. How can I modify it?

As you can see here, this is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which mounts the secret and then copies it to a writable location where you can modify it.
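A minimal sketch of that approach, assuming a writable emptyDir named writable-secrets; the busybox image and copy command are Linux-flavoured for brevity, so on your Windows nodes you would swap in a Windows base image and the equivalent copy command:
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: string-data-secret
  - name: writable-secrets        # assumed name for the writable copy
    emptyDir: {}
  initContainers:
  - name: copy-secrets
    image: busybox                # illustrative; use a Windows image on Windows nodes
    command: ["sh", "-c", "cp /secrets-ro/* /secrets/"]
    volumeMounts:
    - name: secret-volume
      mountPath: /secrets-ro
      readOnly: true
    - name: writable-secrets
      mountPath: /secrets
  containers:
  - name: k8s-webapp-test
    image: dockerstore/k8s-webapp-test:1.0.4
    volumeMounts:
    - name: writable-secrets
      mountPath: /secrets         # the app now sees a writable copy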
As an alternative to the init container you might also use a container lifecycle hook, i.e. a postStart hook, which executes immediately after a container is created.
lifecycle:
  postStart:
    exec:
      command:
      - "/bin/sh"
      - "-c"
      - >
        cp -r /secrets ~/secrets;

You can create secrets from within a Pod, but it seems you need to use the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
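As a rough sketch (assuming the pod's service account has been granted permission to create Secrets in its namespace), a call from inside a pod could look like this:
# Hedged sketch: POST a Secret to the API server from inside a pod,
# authenticating with the mounted service account token.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

curl --cacert "$CACERT" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/secrets" \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"generated-secret"},"stringData":{"config.yaml":"apiUrl: https://my.api.com/api/v1"}}'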

Related

container level securityContext fsGroup

I'm trying to play with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as user 'node' with uid 1000.
On my first try I used hostPath as the storage backend. With this, I need to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
Basically, I don't want to change the host directory's mode/ownership each time I deploy this deployment.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-directus
  namespace: lh-directus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lh-directus
  template:
    metadata:
      labels:
        app: lh-directus
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        isGeneralDeployment: "true"
      volumes:
      - name: lh-directus-uploads-volume
        persistentVolumeClaim:
          claimName: lh-directus-uploads-pvc
      - name: lh-directus-dbdata-volume
        persistentVolumeClaim:
          claimName: lh-directus-dbdata-pvc
      containers:
      # Redis Cache
      - name: redis
        image: redis:6
      # Database
      - name: database
        image: postgres:12
        volumeMounts:
        - name: lh-directus-dbdata-volume
          mountPath: /var/lib/postgresql/data
      # Directus
      - name: directus
        image: directus/directus:latest
        securityContext:
          fsGroup: 1000
        volumeMounts:
        - name: lh-directus-uploads-volume
          mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I read about initContainers ....
But kindly tell me how to fix this problem without an initContainer and without manually setting/changing the host path's ownership/mode.
Sincerely
-bino-
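For what it's worth, fsGroup is a pod-level field (it belongs to the pod's securityContext, not the container's), which is what the validation error is pointing at. A minimal sketch of moving it up one level; whether this actually removes the need for manual chown/chmod also depends on the storage driver honouring fsGroup:
    spec:
      securityContext:
        fsGroup: 1000            # pod-level: volumes are made accessible to GID 1000
      containers:
      - name: directus
        image: directus/directus:latest
        securityContext:
          runAsUser: 1000        # container-level fields such as runAsUser stay here
        volumeMounts:
        - name: lh-directus-uploads-volume
          mountPath: /directus/uploads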

Passing values from initContainers to container spec

I have a kubernetes deployment with the below spec that gets installed via helm 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: gatekeeper
        image: my-gatekeeper-image:some-sha
        args:
        - --listen=0.0.0.0:80
        - --client-id=gk-client
        - --discovery-url={{ .Values.discoveryUrl }}
I need to pass the discoveryUrl value as a helm value, which is the public IP address of the nginx-ingress pod that I deploy via a different helm chart. I install the above deployment like below:
helm3 install my-nginx-ingress-chart
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
helm3 install my-gatekeeper-chart --set discovery_url=${INGRESS_IP}
This works fine. However, instead of these two helm3 install commands, I now want a single helm3 install where both the nginx-ingress and the gatekeeper deployment are created.
I understand that in an initContainer of my-gatekeeper-image we can get the nginx-ingress IP address, but I am not able to understand how to set that as an environment variable or pass it to the container spec.
There are some Stack Overflow questions that mention that we can create a persistent volume or a secret to achieve this, but I am not sure how that would work if we have to delete them. I do not want to create any extra objects and maintain their lifecycle.
It is not possible to do this without mounting a shared volume. But the volume can be backed by just an in-memory store instead of a block storage device, so we do not have to do any extra lifecycle management. The way to achieve that is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper
data:
  gatekeeper.sh: |-
    #!/usr/bin/env bash
    set -e
    INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
    # Do other validations/cleanup
    echo $INGRESS_IP > /opt/gkconf/discovery_url;
    exit 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
  labels:
    app: gatekeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      name: gatekeeper
      labels:
        app: gatekeeper
    spec:
      initContainers:
      - name: gkinit
        command: [ "/opt/gk-init.sh" ]
        image: 'bitnami/kubectl:1.12'
        volumeMounts:
        - mountPath: /opt/gkconf
          name: gkconf
        - mountPath: /opt/gk-init.sh
          name: gatekeeper
          subPath: gatekeeper.sh
          readOnly: false
      containers:
      - name: gatekeeper
        image: my-gatekeeper-image:some-sha
        # ENTRYPOINT of above image should read the
        # file /opt/gkconf/discovery_url and then launch
        # the actual gatekeeper binary
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
        volumeMounts:
        - mountPath: /opt/gkconf
          name: gkconf
      volumes:
      - name: gkconf
        emptyDir:
          medium: Memory
      - name: gatekeeper
        configMap:
          name: gatekeeper
          defaultMode: 0555
Using init containers is indeed a valid solution, but you need to be aware that by doing so you are adding complexity to your deployment.
This is because you would also need to create a service account with permissions to read Service objects from inside the init container. Then, once you have the IP, you can't just set an env variable for the gatekeeper container without recreating the pod, so you would need to save the IP to e.g. a shared file and read it from there when starting gatekeeper.
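For illustration, the RBAC could look roughly like this (the names are made up; bind to whatever service account the pod actually runs as and set serviceAccountName accordingly):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gatekeeper                # assumed name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gatekeeper-service-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-reader
subjects:
- kind: ServiceAccount
  name: gatekeeper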
Alternatively, you can reserve an IP address if your cloud provider supports this feature and use that static IP when deploying the nginx service:
apiVersion: v1
kind: Service
[...]
type: LoadBalancer
loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Let me know if you have any questions or if something needs clarification.

How to fetch configmap from kubernetes pod

I have a Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
RUN chmod -R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads this config internally (Spring Boot functionality).
Now I want to separate out this configuration using a ConfigMap. For that I simply created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below is the describe output:
Name: console-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
my deployment yml is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: console-configmap
      imagePullSecrets:
      - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so when the pods run they throw an exception because there is no configuration. How do I inject this console-configmap into my deployment? What I have already tried is shared above, but I am still getting the same issue.
First of all, how are you consuming the .yml file in your application? If you consume your yml file contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents from the config file inside the container. If that is the case you have to create a volume out of the configmap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available in the path /app/config/console-server.yml. You have to modify it as per your needs.
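If your application does not pick that path up automatically, one option (assuming standard Spring Boot config loading; the exact property depends on your Spring Boot version and on how the jar reads its config) is to point it at the mounted file via an environment variable, e.g.:
        env:
        - name: SPRING_CONFIG_LOCATION      # hypothetical; adjust to how your app loads its config
          value: "file:/app/config/console-server.yml"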
If you need to load key:value pairs from the config file as environment variables, then the spec below would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the configmap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

can i use a configmap created from an init container in the pod

I am trying to "pass" a value from the init container to a container. Since values in a configmap are shared across the namespace, I figured I can use it for this purpose. Here is my job.yaml (with faked-out info):
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0
This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognizable command in the busybox image), and I am assuming that the pod can only read values from a configmap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?
Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?
(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)
You can create an emptyDir volume and mount it into both containers. Unlike a persistent volume, emptyDir has no portability issues.
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
If, for various reasons, you don't want to use a shared volume and you want to create a configmap or a secret, here is a solution.
First you need to use a docker image which contains kubectl: gcr.io/cloud-builders/kubectl:latest for example (a docker image with kubectl, maintained by Google).
Then this (init) container needs enough rights to create resources on the Kubernetes cluster. By default, Kubernetes injects the token of the default service account (named "default") into the container, but I prefer to make it explicit, so add this line:
...
spec:
  # Already true by default, but prefer to make it explicit
  automountServiceAccountToken: true
  initContainers:
  - name: artifactory-snapshot
And add the "edit" role to the "default" service account:
kubectl create rolebinding default-edit-rb --clusterrole=edit --serviceaccount=default:default --namespace=default
Then a complete example:
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      # Already true by default, but prefer to make it explicit.
      automountServiceAccountToken: true
      initContainers:
      - name: artifactory-snapshot
        # You need to use a docker image which contains kubectl
        image: gcr.io/cloud-builders/kubectl:latest
        command:
        - sh
        - -c
        # the "--dry-run -o yaml | kubectl apply -f -" is to make the command idempotent
        - kubectl create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url --dry-run -o yaml | kubectl apply -f -
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
First of all, kubectl is a binary; it was downloaded onto your machine before you could use the command. Inside your pod the kubectl binary doesn't exist, so you can't use the kubectl command from a busybox image.
Furthermore, kubectl on your machine uses credentials saved locally (probably under the ~/.kube path); from inside a container it would have to rely on the pod's service account token instead, so simply running kubectl from an arbitrary image will fail without the binary and the right permissions.
For your scenario, I suggest the same as #ccshih: use volume sharing.
Here is the official doc about volume sharing between an init container and a container.
The yaml used there is:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
Here the init container saves a file in the volume, and later the file is available inside the main container. Try the tutorial yourself for a better understanding.

Use two different configmaps for the same pods as volume mounts

I am trying to write a deployment for a k8s pod.
I have the following in the deploy.yaml file:
apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: __DEPLOY_NAME__-__ENV__
  namespace: __RG_NAME__
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: __DEPLOY_NAME__-__ENV__
    spec:
      containers:
      - name: __DEPLOY_NAME__-__ENV__
        image: __CONTAINER_REGISTRY__/__IMAGE_NAME__
        env:
        - name: NODE_ENV
          value: __ENV__
        imagePullPolicy: Always
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        ports:
        - containerPort: __PORT__
        volumes:
        - name: config-volume
          configMap:
            name: config
          configMap:
            name: oauth
I am trying to use two different config maps named 'config' and 'oauth' as volume mounts in the same pod. When I tried the above code I got the following error:
error validating data: found invalid field volumes for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
I am not sure if what I want to achieve is feasible, and if not, how I should specify the volume mounts.
First: fix the indentation of your volumes block; it should be two spaces less (not a child of containers: but a sibling of it).
Second: you should create two distinct volumes with distinct names and then have a volume mount for each one of them, as in the sketch below.
Third: if you need to merge files from them, you might want to try mounting particular files with subPath.
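A minimal sketch of the corrected volumes/volumeMounts layout (the volume names here are illustrative):
    spec:
      containers:
      - name: __DEPLOY_NAME__-__ENV__
        image: __CONTAINER_REGISTRY__/__IMAGE_NAME__
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        - name: oauth-volume            # second mount with its own name and path
          mountPath: /etc/oauth
      volumes:                          # sibling of containers, not nested inside it
      - name: config-volume
        configMap:
          name: config
      - name: oauth-volume
        configMap:
          name: oauth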