kubernetes deployment mounts secret as a folder instead of a file - kubernetes

I have a config file stored as a Secret in Kubernetes, and I want to mount it at a specific location inside the container. The problem is that the volume created inside the container is a folder instead of a file with the content of the Secret in it. Is there any way to fix it?
My deployment looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: jetty
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jetty
  template:
    metadata:
      labels:
        app: jetty
    spec:
      containers:
        - name: jetty
          image: quay.io/user/jetty
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-properties
              mountPath: "/opt/jetty/config.properties"
              subPath: config.properties
            - name: secrets-properties
              mountPath: "/opt/jetty/secrets.properties"
            - name: doc-path
              mountPath: /mnt/storage/
          resources:
            limits:
              cpu: '1000m'
              memory: '3000Mi'
            requests:
              cpu: '750m'
              memory: '2500Mi'
      volumes:
        - name: config-properties
          configMap:
            name: jetty-config-properties
        - name: secrets-properties
          secret:
            secretName: jetty-secrets
        - name: doc-path
          persistentVolumeClaim:
            claimName: jetty-docs-pvc
      imagePullSecrets:
        - name: rcc-quay

Secrets vs ConfigMaps
Secrets let you store and manage sensitive information (e.g. passwords, private keys), while ConfigMaps are used for non-sensitive configuration data.
As you can see in the Secrets and ConfigMaps documentation:
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
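As a quick illustration of the difference (hypothetical names, just to show the two commands side by side), both objects can be created from literals:
kubectl create secret generic db-credentials --from-literal=password=S3cr3t
kubectl create configmap app-settings --from-literal=log.level=info
kubectl base64-encodes Secret values automatically, while ConfigMap values are stored as plain text.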
Mounting Secret as a file
It is possible to create a Secret and pass it to a Pod as one or more files.
I've created a simple example for you to illustrate how it works.
Below you can see a sample Secret manifest and a Deployment that uses this Secret:
NOTE: I used subPath with Secrets and it works as expected. Be aware, though, that a container using a Secret mounted via subPath will not receive automatic updates when the Secret changes.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: secrets-files
          mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
          subPath: secret.file1
        - name: secrets-files
          mountPath: "/mnt/secret.file2" # "secret.file2" file will be created in "/mnt" directory
          subPath: secret.file2
  volumes:
    - name: secrets-files
      secret:
        secretName: my-secret # name of the Secret
Note: the Secret should be created before the Deployment.
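For instance, assuming the combined manifest above is saved as manifest.yaml, a single apply satisfies that ordering, because the Secret document comes before the Deployment in the file:
kubectl apply -f manifest.yaml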
After creating the Secret and the Deployment, we can see how it works:
$ kubectl get secret,deploy,pod
NAME               TYPE     DATA   AGE
secret/my-secret   Opaque   2      76s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           76s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7c67965687-ph7b8   1/1     Running   0          76s
$ kubectl exec nginx-7c67965687-ph7b8 -- ls /mnt
secret.file1
secret.file2
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file1
secretFile1
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file2
secretFile2
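Applied to the Deployment from your question, the fix is the analogous subPath mount. Note that the key name secrets.properties inside the jetty-secrets Secret is an assumption on my part; use whatever key your Secret actually contains:
volumeMounts:
  - name: secrets-properties
    mountPath: "/opt/jetty/secrets.properties"
    subPath: secrets.properties  # assumed key name in the jetty-secrets Secret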
Projected Volume
I think a better way to achieve your goal is to use a projected volume.
A projected volume maps several existing volume sources into the same directory.
In the Projected Volume documentation you can find a detailed explanation, but additionally I created an example that might help you understand how it works.
Using a projected volume, I mounted secret.file1 and secret.file2 from the Secret, and config.file1 from the ConfigMap, as files inside the Pod.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.file1: |
    configFile1
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: all-in-one
          mountPath: "/config-volume"
          readOnly: true
  volumes:
    - name: all-in-one
      projected:
        sources:
          - secret:
              name: my-secret
              items:
                - key: secret.file1
                  path: secret-dir1/secret.file1
                - key: secret.file2
                  path: secret-dir2/secret.file2
          - configMap:
              name: my-config
              items:
                - key: config.file1
                  path: config-dir1/config.file1
We can check how it works:
$ kubectl exec nginx -- ls /config-volume
config-dir1
secret-dir1
secret-dir2
$ kubectl exec nginx -- cat /config-volume/config-dir1/config.file1
configFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir1/secret.file1
secretFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir2/secret.file2
secretFile2
If this response doesn't answer your question, please provide more details about your Secret and what exactly you want to achieve.

Related

How to mount one file from a secret to a container?

I'm trying to mount a secret as a file
apiVersion: v1
data:
credentials.conf: >-
dGl0bGU6IHRoaYWwpCg==
kind: Secret
metadata:
name: address-finder-secret
type: Opaque
kind: DeploymentConfig
apiVersion: v1
metadata:
name: app-sample
spec:
replicas: 1
selector:
app: app-sample
template:
metadata:
labels:
app: app-sample
spec:
volumes:
- name: app-sample-vol
configMap:
name: app-sample-config
- name: secret
secret:
secretName: address-finder-secret
containers:
- name: app-sample
volumeMounts:
- mountPath: /config
name: app-sample-vol
- mountPath: ./secret/credentials.conf
name: secret
readOnly: true
subPath: credentials.conf
I need to add the credentials.conf file to a directory where there are already other files. I'm trying to use subPath, but I get 'Error: failed to create subPath directory for volumeMount "secret" of container "app-sample"'
If I remove the subPath, I will lose all other files in the directory.
Where did I go wrong?
Hello, hope you are enjoying your Kubernetes journey!
It would have been easier if you had shared your image name so I could try it out; instead, I decided to create a custom image.
I created a simple file named file1.txt and copied it into the image. Here is my Dockerfile:
FROM nginx
COPY file1.txt /secret/
I built it simply with:
❯ docker build -t test-so-mount-file .
I checked that my file was there before going further:
❯ docker run -it test-so-mount-file bash
root@1c9cebc4884c:/# ls
bin   etc   mnt   sbin   usr
boot  home  opt   secret var
dev   lib   proc  srv
docker-entrypoint.d   lib64  root  sys
docker-entrypoint.sh  media  run   tmp
root@1c9cebc4884c:/# cd secret/
root@1c9cebc4884c:/secret# ls
file1.txt
root@1c9cebc4884c:/secret#
Perfect. Now let's deploy it on Kubernetes.
For this test, since I'm using kind (Kubernetes in Docker), I just used this command to load my image into the cluster:
❯ kind load docker-image test-so-mount-file --name so-cluster-1
Judging by the DeploymentConfig kind, it seems you are deploying on OpenShift. Anyway, once my image was added to my cluster, I modified your deployment to use it:
first without volumes, to check that file1.txt is in the container:
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-sample
spec:
replicas: 1
selector:
matchLabels:
app: app-sample
template:
metadata:
labels:
app: app-sample
spec:
containers:
- name: app-sample
image: test-so-mount-file
imagePullPolicy: Never
# volumeMounts:
Yes, it is:
❯ k exec -it app-sample-7b96558fdf-hn4qt -- ls /secret
file1.txt
Before going further: when I tried to deploy your Secret, I got this:
Error from server (BadRequest): error when creating "manifest.yaml": Secret in version "v1" cannot be handled as a Secret: illegal base64 data at input byte 20
This is caused by your base64 string, which actually contains illegal base64 data. Here it is:
❯ base64 -d <<< "dGl0bGU6IHRoaYWwpCg=="
title: thi���(base64: invalid input
No problem, I used another base64 string:
❯ base64 <<< test
dGVzdAo=
and added it to the Secret. Since I want this data to end up in a file, I replaced the '>-' with a '|-' (see: What is the difference between '>-' and '|-' in yaml?); however, it works with or without it.
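As a side note, you can avoid hand-encoding base64 entirely by letting kubectl build the Secret from a file (a sketch, assuming your data lives in a local file named credentials.conf):
kubectl create secret generic address-finder-secret --from-file=credentials.conf
This creates a key named credentials.conf whose value is the base64-encoded content of the file.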
Now, let's add the Secret to our deployment. I replaced "./secret/credentials.conf" with "/secret/credentials.conf" (it works either way, but I prefer to drop the "."). Since I don't have your ConfigMap data, I commented that part out. Here is the content of my file manifest.yaml:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: address-finder-secret
data:
  credentials.conf: |-
    dGVzdAo=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-sample
  template:
    metadata:
      labels:
        app: app-sample
    spec:
      containers:
        - name: app-sample
          image: test-so-mount-file
          imagePullPolicy: Never
          volumeMounts:
            # - mountPath: /config
            #   name: app-sample-vol
            - mountPath: /secret/credentials.conf
              name: secret
              readOnly: true
              subPath: credentials.conf
      volumes:
        # - name: app-sample-vol
        #   configMap:
        #     name: app-sample-config
        - name: secret
          secret:
            secretName: address-finder-secret
Let's deploy this (kaf being an alias for kubectl apply -f):
❯ kaf manifest.yaml
secret/address-finder-secret created
deployment.apps/app-sample created
❯ k get pod
NAME                         READY   STATUS    RESTARTS   AGE
app-sample-c45ff9d58-j92ct   1/1     Running   0          31s
❯ k exec -it app-sample-c45ff9d58-j92ct -- ls /secret
credentials.conf file1.txt
❯ k exec -it app-sample-c45ff9d58-j92ct -- cat /secret/credentials.conf
test
It worked perfectly. Since I haven't changed much in your manifest, I think the problem comes from the DeploymentConfig. I suggest you use a Deployment instead of a DeploymentConfig; that way it will work (I hope), and if someday you decide to migrate from OpenShift to another Kubernetes cluster, your manifest will be compatible.
bguess

Get value of configMap from mountPath

I created a ConfigMap this way:
kubectl create configmap some-config --from-literal=key4=value1
After that I created a Pod which mounts it at /some/path.
I connect to this Pod this way:
k exec -it nginx-configmap -- /bin/sh
I found the folder /some/path, but I could not get the value of key4.
If you refer to your ConfigMap in your Pod this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: config-volume
  volumes:
    - name: config-volume
      configMap:
        name: some-config
it will be available in your Pod as the file /var/www/html/key4, containing value1.
If you would rather have it available as an environment variable, you need to refer to it this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      envFrom:
        - configMapRef:
            name: some-config
As you can see, for this you don't need any volumes or volume mounts.
Once you connect to such Pod by running:
kubectl exec -ti mypod -- /bin/bash
You will see that your environment variable is defined:
root@mypod:/# echo $key4
value1
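If you only need one specific key rather than every entry in the ConfigMap, a valueFrom reference is an alternative (a minimal sketch reusing the same some-config ConfigMap; the variable name KEY4 is my choice):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      env:
        - name: KEY4              # arbitrary variable name
          valueFrom:
            configMapKeyRef:
              name: some-config
              key: key4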

GKE Cannot pull image, even though imagePullSecrets is defined

In Google Kubernetes Engine I created a POC cluster for our company, which worked flawlessly. But now, when I try to create our production environment, I cannot seem to get the imagePullSecrets to work; it's the exact same credentials as in the POC, the same Helm chart, and the exact same regcred YAML file.
Yet I keep getting the classic:
Back-off pulling image "registry.company.co/frontend/company-web/upload": ImagePullBackOff
Pulling manually on the node works with the same credentials as those that I supplied in the imagePullSecrets.
I've tried defining the imagePullSecrets both at the chart level and on the ServiceAccount.
I've verified the secret format and directly copied the credentials there when trying the manual pulls.
GKE picks up regcred and shows it in the deployment.
regcred was generated by: kubectl create secret docker-registry regcred --docker-server="registry.company.co" --docker-username="gitlab" --docker-password="[PASSWORD]"
regcred secret
kind: Secret
apiVersion: v1
metadata:
  name: regcred
  namespace: default
data:
  .dockerconfigjson: eyJhdXRocyI6eyJyZWdpc3RyeS5jb21wYW55LmNvIjp7InVzZXJuYW1lIjoiZ2l0bGFiIiwicGFzc3dvcmQiOiJbUkVEQUNURURdIiwiYXV0aCI6IloybDBiR0ZpT2x0QmJITnZJRkpsWkdGamRHVmtYUT09In19fQ==
type: kubernetes.io/dockerconfigjson
Service Account
kind: ServiceAccount
apiVersion: v1
metadata:
  name: default
  namespace: default
secrets:
  - name: default-token-jktj5
imagePullSecrets:
  - name: regcred
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:latest
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      initContainers:
        - name: init-volume-perms
          imagePullPolicy: Always
          image: alpine
          command: ["/bin/sh", "-c"]
          args: ["mkdir /mnt/company-logos; mkdir /mnt/uploads; chown -R 1337:1337 /mnt"]
          volumeMounts:
            - mountPath: /mnt
              name: mypvc
        - name: company-web-uploads
          image: registry.company.co/frontend/company-web/uploads
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/company/web/uploads
              subPath: uploads
              name: mypvc
        - name: company-logos
          image: registry.company.co/backend/pdf-service/company-logos
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/company/shared/company-logos
              subPath: company-logos
              name: mypvc
      volumes:
        - name: mypvc
          gcePersistentDisk:
            pdName: gke-nfs-disk
            fsType: ext4
I've looked around, following different guides from the ground up, with no success, so I'm at a total loss as to what to do.
It's the default namespace all around.
It may be a namespace issue. Can you verify a few things?
1. Are you using the default namespace in both places?
2. Is there a K8s version difference between the POC and prod?
3. Can you recreate the working secret with something like kubectl get secret default-token-jktj5 -o yaml > imagepullsecret.yaml? Edit the YAML file to remove the revision and other status information, then apply it to prod.
4. I have seen this issue in GKE because of multiline secret conversion to base64. Ensure the secrets match between environments; a quick check is sketched below.
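For that last point, one way to compare (a sketch, assuming the pull secret is named regcred in the default namespace of both clusters) is to decode the secret on each side and diff the results:
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
Run this against both the POC and prod clusters and check that the registry URL, username, and auth fields come out identical.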

Manual AKS PV fails with "New-SmbGlobalMapping MountVolume.SetUp failed for volume" error

I'm trying to mount an azureFile volume on a Windows AKS pod, but I get the error:
kubelet, MountVolume.SetUp failed for volume "fileshare" :
New-SmbGlobalMapping failed: fork/exec
C:\windows\System32\WindowsPowerShell\v1.0\powershell.exe: The
parameter is incorrect., output: ""
My pod.yml looks like:
apiVersion: v1
kind: Pod
metadata:
  name: q-pod-sample-03
  namespace: mq
spec:
  containers:
    - image: test.azurecr.io/q/p:01
      name: q-ctr-sample-03
      imagePullPolicy: "IfNotPresent"
      volumeMounts:
        - name: azfileshare
          mountPath: 'c:/app/app-data'
  nodeSelector:
    "beta.kubernetes.io/os": windows
  volumes:
    - name: azfs
      azureFile:
        secretName: qastapv-share-01-secret
        shareName: qastapv-share-01
        readOnly: false
My secret.yml looks like:
apiVersion: v1
kind: Secret
metadata:
  name: qastapv-share-01-secret
  namespace: mq
type: Opaque
data:
  azurestorageaccountname: <Base64Str>
  azurestorageaccountkey: <Base64Str>
My PV looks like:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azfs-q-01
  namespace: mq
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: qastapv-share-01-secret
    shareName: qastapv-share-01
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
What am I missing here?
I'm on AKS 1.14.
As far as I can see, there is something wrong in your YAML files. First, in your pod YAML the volume name does not match the name used in volumeMounts:
apiVersion: v1
kind: Pod
metadata:
  name: q-pod-sample-03
  namespace: mq
spec:
  containers:
    - image: test.azurecr.io/q/p:01
      name: q-ctr-sample-03
      imagePullPolicy: "IfNotPresent"
      volumeMounts:
        - name: azfileshare
          mountPath: 'c:/app/app-data'
  nodeSelector:
    "beta.kubernetes.io/os": windows
  volumes:
    - name: azfileshare # this name must match the name in volumeMounts
      azureFile:
        secretName: qastapv-share-01-secret
        shareName: qastapv-share-01
        readOnly: false
Also, I do not know how you converted the storage account name and key into base64, so I will show two ways to create the secret in AKS.
One is to use the following command:
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
The second is to use the YAML file, converting the storage account name and key into base64 yourself and putting the results in the YAML file. Note the -n flag, which keeps a trailing newline from being encoded into the value:
echo -n 'storageAccountName' | base64
echo -n 'storageAccountKey' | base64
Then use the YAML file as you showed, with the output of the above commands.
If you follow the above steps, you do not need to create the PV individually.
For more details, see Manually create and use a volume with Azure Files share in Azure Kubernetes Service (AKS). And if you do want to use a PV/PVC, take a look at Mount volumes via PV and PVC.
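If you do go the PV/PVC route, the Pod should reference a PersistentVolumeClaim bound to the PV rather than an inline azureFile volume. Here is a minimal sketch of such a claim for the PV above (the claim name is my invention; storageClassName is left empty so the claim binds to the pre-created PV instead of triggering dynamic provisioning):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azfs-q-01
  namespace: mq
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi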
Update:
If you use the YAML file to create the secret, you also need to pay attention to the operating system on which you convert the strings into base64: different tools can add different line endings or trailing characters before encoding. Since you use Windows nodes, you can convert the storage account name and key into base64 on Windows. Below are the PowerShell commands to convert them:
$Name = [System.Text.Encoding]::UTF8.GetBytes("storageAccountName")
[System.Convert]::ToBase64String($Name)
$Key = [System.Text.Encoding]::UTF8.GetBytes("storageAccountKey")
[System.Convert]::ToBase64String($Key)
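Whichever way you create the secret, you can verify that the stored values decode back cleanly, with no trailing newline or spaces (using the secret name and namespace from your manifests):
kubectl get secret qastapv-share-01-secret -n mq -o jsonpath='{.data.azurestorageaccountname}' | base64 --decode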

Secret volumes do not work on multinode docker setup

I have set up a multinode Kubernetes 1.0.3 cluster using the instructions from https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode.md.
I create a Secret using the following spec in the myns namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: myns
  labels:
    name: mysecret
data:
  myvar: "bUNqVlhCVjZqWlZuOVJDS3NIWkZHQmNWbXBRZDhsOXMK"
Create the Secret:
$ kubectl create -f mysecret.yml --namespace=myns
Check that the Secret exists:
$ kubectl get secrets --namespace=myns
NAME       TYPE     DATA
mysecret   Opaque   1
Here is the Pod spec of the consumer of the secret volume:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: myns
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - name: mysecret
          mountPath: /etc/mysecret
          readOnly: true
  volumes:
    - name: mysecret
      secret:
        secretName: mysecret
Create the Pod:
kubectl create -f busybox.yml --namespace=myns
Now if I exec into the Docker container and inspect the contents of the /etc/mysecret directory, I find it to be empty.
Which namespace are your Pod and Secret in? They must be in the same namespace. Could you post a gist or pastebin of the kubelet log? It contains information that can help us diagnose this.
Also, are you running the kubelet directly on the host or in a container?
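A few commands that would help narrow this down (a sketch using the names from your manifests; the exec syntax may differ slightly on a cluster as old as 1.0.3):
kubectl get secret mysecret --namespace=myns -o yaml
kubectl describe pod busybox --namespace=myns
kubectl exec busybox --namespace=myns -- ls -la /etc/mysecret
The describe output should show the secret volume and any mount errors in the Events section.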