How to mount one file from a secret to a container? - kubernetes

I'm trying to mount a secret as a file
apiVersion: v1
data:
  credentials.conf: >-
    dGl0bGU6IHRoaYWwpCg==
kind: Secret
metadata:
  name: address-finder-secret
type: Opaque
---
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: app-sample
spec:
  replicas: 1
  selector:
    app: app-sample
  template:
    metadata:
      labels:
        app: app-sample
    spec:
      volumes:
        - name: app-sample-vol
          configMap:
            name: app-sample-config
        - name: secret
          secret:
            secretName: address-finder-secret
      containers:
        - name: app-sample
          volumeMounts:
            - mountPath: /config
              name: app-sample-vol
            - mountPath: ./secret/credentials.conf
              name: secret
              readOnly: true
              subPath: credentials.conf
I need to add the credentials.conf file to a directory where there are already other files. I'm trying to use subPath, but I get 'Error: failed to create subPath directory for volumeMount "secret" of container "app-sample"'
If I remove the subPath, I will lose all other files in the directory.
Where did I go wrong?

Hello, hope you are enjoying your Kubernetes journey!
It would have been better if you had given your image name so I could try it out; instead, I decided to create a custom image.
I created a simple file named file1.txt and copied it into the image. Here is my Dockerfile:
FROM nginx
COPY file1.txt /secret/
I built it simply with:
❯ docker build -t test-so-mount-file .
I just checked that my file was there before going further:
❯ docker run -it test-so-mount-file bash
root@1c9cebc4884c:/# ls
bin   etc   mnt   sbin   usr
boot  home  opt   secret var
dev   lib   proc  srv
docker-entrypoint.d   lib64  root  sys
docker-entrypoint.sh  media  run   tmp
root@1c9cebc4884c:/# cd secret/
root@1c9cebc4884c:/secret# ls
file1.txt
root@1c9cebc4884c:/secret#
Perfect. Now let's deploy it on Kubernetes.
For this test, since I'm using kind (Kubernetes in Docker), I just used this command to upload my image to the cluster:
❯ kind load docker-image test-so-mount-file --name so-cluster-1
Judging by the DeploymentConfig kind, it seems that you are deploying on OpenShift. However, once my image had been added to my cluster, I modified your deployment to use it,
first without volumes, to check that file1 is in the container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-sample
  template:
    metadata:
      labels:
        app: app-sample
    spec:
      containers:
        - name: app-sample
          image: test-so-mount-file
          imagePullPolicy: Never
          # volumeMounts:
Yes, it is:
❯ k exec -it app-sample-7b96558fdf-hn4qt -- ls /secret
file1.txt
Before going further, when I tried to deploy your secret I got this:
Error from server (BadRequest): error when creating "manifest.yaml": Secret in version "v1" cannot be handled as a Secret: illegal base64 data at input byte 20
This is caused by your base64 string, which indeed contains illegal base64 data; here it is:
❯ base64 -d <<< "dGl0bGU6IHRoaYWwpCg=="
title: thi���(base64: invalid input
No problem, I used another string in base64:
❯ base64 <<< test
dGVzdAo=
and added it to the secret. Since I want this data to end up in a file, I replaced the '>-' with a '|-' (see "What is the difference between '>-' and '|-' in YAML?"); however, it works with or without it.
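For reference, here is a quick illustration of the two block scalar styles (made-up keys, just to show the parsing behaviour): '|' keeps newlines literally, '>' folds them into spaces, and the '-' suffix strips the trailing newline in both cases.
literal: |-
  line one
  line two
# parsed as "line one\nline two"
folded: >-
  line one
  line two
# parsed as "line one line two"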
Now, let's add the secret to our deployment. I replaced "./secret/credentials.conf" with "/secret/credentials.conf" (it works with or without, but I prefer to remove the "."). Since I don't have your configmap data, I commented that part out. Here is the deployment manifest from my file manifest.yaml:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: address-finder-secret
data:
  credentials.conf: |-
    dGVzdAo=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-sample
  template:
    metadata:
      labels:
        app: app-sample
    spec:
      containers:
        - name: app-sample
          image: test-so-mount-file
          imagePullPolicy: Never
          volumeMounts:
            # - mountPath: /config
            #   name: app-sample-vol
            - mountPath: /secret/credentials.conf
              name: secret
              readOnly: true
              subPath: credentials.conf
      volumes:
        # - name: app-sample-vol
        #   configMap:
        #     name: app-sample-config
        - name: secret
          secret:
            secretName: address-finder-secret
Let's deploy this (kaf is my alias for kubectl apply -f):
❯ kaf manifest.yaml
secret/address-finder-secret created
deployment.apps/app-sample created
❯ k get pod
NAME                         READY   STATUS    RESTARTS   AGE
app-sample-c45ff9d58-j92ct   1/1     Running   0          31s
❯ k exec -it app-sample-c45ff9d58-j92ct -- ls /secret
credentials.conf file1.txt
❯ k exec -it app-sample-c45ff9d58-j92ct -- cat /secret/credentials.conf
test
It worked perfectly. Since I haven't changed much in your manifest, I think the problem comes from the DeploymentConfig. I suggest you use a Deployment instead of a DeploymentConfig; that way it should work (I hope), and if you ever decide to migrate from OpenShift to another Kubernetes cluster, your manifest will remain compatible.
bguess

Related

Pass unique configuration to a distroless container

I'm running a StatefulSet where each replica requires its own unique configuration. To achieve that I'm currently using a configuration with two containers per Pod:
An initContainer prepares the configuration and stores it to a shared volume
A main container consumes the configuration by outputting the contents of the shared volume and passing it to the program as CLI flags.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  serviceName: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
    spec:
      initContainers:
        - name: generate-config
          image: myjqimage:latest
          command: [ "/bin/sh" ]
          args:
            - -c
            - |
              set -eu -o pipefail
              POD_INDEX="${HOSTNAME##*-}"
              # A configuration is stored as a JSON array in a Secret
              # E.g., [{"param1":"string1","param2":"string2"}]
              echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param1' > /config/param1
              echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param2' > /config/param2
          env:
            - name: MY_APP_CONFIG
              valueFrom:
                secretKeyRef:
                  name: my-app
                  key: config
          volumeMounts:
            - name: configs
              mountPath: "/config"
      containers:
        - name: my-app
          image: myapp:latest
          command:
            - /bin/sh
          args:
            - -c
            - |
              /myapp --param1 $(cat /config/param1) --param2 $(cat /config/param2)
          volumeMounts:
            - name: configs
              mountPath: "/config"
      volumes:
        - name: configs
          emptyDir:
            medium: "Memory"
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app
  namespace: default
  labels:
    app.kubernetes.io/name: my-app
type: Opaque
data:
  config: W3sicGFyYW0xIjoic3RyaW5nMSIsInBhcmFtMiI6InN0cmluZzIifV0=
Now I want to switch to distroless for my main container. Distroless images contain only the dependencies required to run the program (glibc in my case) and are missing a shell. So whereas previously I could execute cat to output the contents of a file, now I'm a bit stuck.
Instead of reading the contents from a file, I should now pass the CLI flags defined as environment variables, something like this:
containers:
  - name: my-app
    image: myapp:latest
    command: ["/myapp", "--param1", "$(PARAM1)", "--param2", "$(PARAM2)"]
    env:
      - name: PARAM1
        value: somevalue1
      - name: PARAM2
        value: somevalue2
Again, each Pod in a StatefulSet should have a unique configuration. I.e., PARAM1 and PARAM2 should be unique across the Pods in a StatefulSet. How do I achieve that?
Options I considered:
Using Debug Containers, a new feature of K8s. Somehow use it to edit the configuration of a running container at runtime and inject the required variables. But the feature only became beta in 1.23, and I don't want to mutate my StatefulSet at runtime, as I'm using a GitOps approach to store the configuration in Git; it would probably cause continuous configuration drift.
Using a Job to mutate the configuration at runtime. Again, this looks very ugly and violates the GitOps principle.
Using shareProcessNamespace. Unsure if it can help, but maybe I can somehow inject the environment variables from within the initContainer.
Limitations:
Application only supports configuration provisioned through CLI flags. No environment variables, no loading the config from a file

kubernetes deployment mounts secret as a folder instead of a file

I have a config file stored as a Secret in Kubernetes, and I want to mount it into a specific location inside the container. The problem is that the volume created inside the container is a folder instead of a file with the contents of the secret in it. Any way to fix it?
My deployment looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: jetty
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jetty
  template:
    metadata:
      labels:
        app: jetty
    spec:
      containers:
        - name: jetty
          image: quay.io/user/jetty
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-properties
              mountPath: "/opt/jetty/config.properties"
              subPath: config.properties
            - name: secrets-properties
              mountPath: "/opt/jetty/secrets.properties"
            - name: doc-path
              mountPath: /mnt/storage/
          resources:
            limits:
              cpu: '1000m'
              memory: '3000Mi'
            requests:
              cpu: '750m'
              memory: '2500Mi'
      volumes:
        - name: config-properties
          configMap:
            name: jetty-config-properties
        - name: secrets-properties
          secret:
            secretName: jetty-secrets
        - name: doc-path
          persistentVolumeClaim:
            claimName: jetty-docs-pvc
      imagePullSecrets:
        - name: rcc-quay
Secrets vs ConfigMaps
Secrets let you store and manage sensitive information (e.g. passwords, private keys), while ConfigMaps are used for non-sensitive configuration data.
As you can see in the Secrets and ConfigMaps documentation:
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
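As a quick illustration of the split (the resource names here are made up), both kinds can be created from literal values with kubectl:
# sensitive data goes into a Secret (stored base64-encoded)
$ kubectl create secret generic db-credentials --from-literal=password=S3cret
# non-sensitive data goes into a ConfigMap (stored as plain text)
$ kubectl create configmap app-settings --from-literal=log_level=debug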
Mounting Secret as a file
It is possible to create a Secret and pass it as a file, or multiple files, to Pods.
I've created a simple example for you to illustrate how it works.
Below you can see a sample Secret manifest and a Deployment that uses this Secret:
NOTE: I used subPath with Secrets and it works as expected.
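For reference, the data values in the Secret below are just base64-encoded strings, for example:
$ echo secretFile1 | base64
c2VjcmV0RmlsZTEK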
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: secrets-files
          mountPath: "/mnt/secret.file1"  # "secret.file1" file will be created in "/mnt" directory
          subPath: secret.file1
        - name: secrets-files
          mountPath: "/mnt/secret.file2"  # "secret.file2" file will be created in "/mnt" directory
          subPath: secret.file2
  volumes:
    - name: secrets-files
      secret:
        secretName: my-secret  # name of the Secret
Note: the Secret should be created before the Deployment.
After creating the Secret and the Deployment, we can see how it works:
$ kubectl get secret,deploy,pod
NAME               TYPE     DATA   AGE
secret/my-secret   Opaque   2      76s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           76s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7c67965687-ph7b8   1/1     Running   0          76s
$ kubectl exec nginx-7c67965687-ph7b8 -- ls /mnt
secret.file1
secret.file2
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file1
secretFile1
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file2
secretFile2
Projected Volume
I think a better way to achieve your goal is to use a projected volume.
A projected volume maps several existing volume sources into the same directory.
In the Projected Volume documentation you can find a detailed explanation, but I've also created an example that might help you understand how it works.
Using a projected volume, I mounted secret.file1 and secret.file2 from the Secret, and config.file1 from the ConfigMap, as files in the Pod.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.file1: |
    configFile1
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: all-in-one
          mountPath: "/config-volume"
          readOnly: true
  volumes:
    - name: all-in-one
      projected:
        sources:
          - secret:
              name: my-secret
              items:
                - key: secret.file1
                  path: secret-dir1/secret.file1
                - key: secret.file2
                  path: secret-dir2/secret.file2
          - configMap:
              name: my-config
              items:
                - key: config.file1
                  path: config-dir1/config.file1
We can check how it works:
$ kubectl exec nginx -- ls /config-volume
config-dir1
secret-dir1
secret-dir2
$ kubectl exec nginx -- cat /config-volume/config-dir1/config.file1
configFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir1/secret.file1
secretFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir2/secret.file2
secretFile2
If this response doesn't answer your question, please provide more details about your Secret and what exactly you want to achieve.

How to copy a local file into a helm deployment

I'm trying to deploy in Kubernetes several pods using a mongo image with an initialization script in them. I'm using helm for the deployment. Since I'm starting from the official Mongo docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at the beginning to initialize some parameters of my Mongo.
What I don't know is how I can insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in helm, so this will be done in all the pods of the deployment.
You can use a ConfigMap. Let's put an nginx configuration file into a container via a ConfigMap. We have a directory called nginx at the same level as values.yml; inside it is the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-config-file
            items:
              - key: nginx.conf
                path: nginx.conf
      containers:
        - name: ...
          image: ...
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
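Applied to your mongo case, a sketch along the same lines (the script name and chart layout are assumptions) would ship the init script from the chart directory and mount it into /docker-entrypoint-initdb.d, where the official mongo image picks up *.js and *.sh files on first start:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-initdb
data:
  init.js: |-
{{ .Files.Get "scripts/init.js" | indent 4 }}
---
# in the mongo Deployment/StatefulSet pod spec:
volumes:
  - name: mongo-initdb
    configMap:
      name: mongo-initdb
containers:
  - name: mongo
    image: mongo
    volumeMounts:
      - name: mongo-initdb
        mountPath: /docker-entrypoint-initdb.d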
You can also check out the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

How to fetch configmap from kubernetes pod

I have a Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
# RUN (not CMD) so the chmod executes at build time; -R is the recursive flag
RUN chmod -R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads the config internally (Spring Boot functionality).
Now I want to externalize this configuration using a ConfigMap. For that, I simply created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below are the description details:
Name:         console-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
My deployment YAML is:
apiVersion: apps/v1  # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1  # tells deployment to run 1 pod matching the template
  template:  # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: ms-console
          ports:
            - containerPort: 8384
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: console-configmap
      imagePullSecrets:
        - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so when the pods run, an exception is thrown because there is no configuration. How do I inject this console-configmap into my deployment? I already shared what I tried, but I'm getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume the yml file contents as environment variables, your config should just work fine. But I suspect you want to consume the contents of the config file inside the container. If that is the case, you have to create a volume out of the ConfigMap as follows:
apiVersion: apps/v1  # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1  # tells deployment to run 1 pod matching the template
  template:  # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: ms-console
          ports:
            - containerPort: 8384
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /app/config
              name: config
      volumes:
        - name: config
          configMap:
            name: console-configmap
      imagePullSecrets:
        - name: regcresd
The file will be available in the path /app/config/console-server.yml. You have to modify it as per your needs.
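If the application does not pick the file up from that path on its own, one option (a sketch, assuming Spring Boot 2+; on 1.x the flag is --spring.config.location) is to point it at the mounted file explicitly:
containers:
  - name: consoleservice
    image: ms-console
    # override the image CMD so the app reads the mounted config file
    command: ["java", "-jar", "/deploy/ms.console.jar", "console",
              "--spring.config.additional-location=/app/config/console-server.yml"]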
If you need to load key-value pairs from the config file as environment variables, then the below spec would work:
envFrom:
  - configMapRef:
      name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

Write to Secret file in pod

I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then create a volume mount in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
        - name: secret-volume
          secret:
            secretName: string-data-secret
      containers:
        - name: k8s-webapp-test
          image: dockerstore/k8s-webapp-test:1.0.4
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret-volume
              mountPath: "/secrets"
              readOnly: false
So, after the deployment, I have 2 pods with volume mounts at C:\secrets (I do use Windows nodes). When I try to edit config.yaml, which is located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the mount as readOnly: false, I cannot write to the file. How can I modify it?
As you can see here, it is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which maps the secret and then copies it to the desired location where you might be able to modify it.
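A minimal sketch of that idea (the volume names are made up, and on your Windows nodes you would need a Windows-based image instead of busybox): the init container copies the read-only secret into a writable emptyDir, which the main container then mounts:
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: string-data-secret
    - name: writable-secrets  # hypothetical writable copy
      emptyDir: {}
  initContainers:
    - name: copy-secrets
      image: busybox
      # copy the read-only secret files into the writable volume
      command: ["sh", "-c", "cp /secrets-ro/* /secrets/"]
      volumeMounts:
        - name: secret-volume
          mountPath: /secrets-ro
        - name: writable-secrets
          mountPath: /secrets
  containers:
    - name: k8s-webapp-test
      image: dockerstore/k8s-webapp-test:1.0.4
      volumeMounts:
        - name: writable-secrets
          mountPath: /secrets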
As an alternative to the init container, you might also use a container lifecycle hook, i.e. a PostStart hook, which executes immediately after a container is created:
lifecycle:
  postStart:
    exec:
      command:
        - "/bin/sh"
        - "-c"
        - >
          cp -r /secrets ~/secrets;
You can create secrets from within a Pod but it seems you need to utilize the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
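For illustration, a rough sketch of calling that API from inside a pod with curl (this assumes the pod's service account has RBAC permission to create Secrets; the secret name and namespace are placeholders):
# service account credentials are mounted into every pod by default
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# POST a new Secret to the API server, reachable in-cluster as kubernetes.default.svc
curl --cacert "$CACERT" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST https://kubernetes.default.svc/api/v1/namespaces/default/secrets \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"my-new-secret"},"stringData":{"key":"value"}}'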