Mount a Kubernetes SSH secret in a container

I am unable to mount a Kubernetes secret at the ${HOME}/.ssh/id_rsa path.
Following is my secrets.yaml, created using:
kubectl create secret generic secret-ssh-auth --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=keys/id_rsa
apiVersion: v1
data:
ssh-privatekey: abcdefgh
kind: Secret
metadata:
name: secret-ssh-auth
namespace: app
type: kubernetes.io/ssh-auth
---
apiVersion: v1
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
name: mysecret
namespace: app
type: Opaque
Following is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-helm-test
labels:
helm.sh/chart: helm-test-0.1.0
app.kubernetes.io/name: helm-test
app.kubernetes.io/instance: nginx
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: helm-test
app.kubernetes.io/instance: nginx
template:
metadata:
labels:
app.kubernetes.io/name: helm-test
app.kubernetes.io/instance: nginx
spec:
serviceAccountName: nginx-helm-test
securityContext:
{}
containers:
- name: helm-test
securityContext:
{}
image: "nginx:1.16.0"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
env:
- name: HOME
value: /root
volumeMounts:
- mountPath: ${HOME}/.ssh/id_rsa
name: sshdir
readOnly: true
- name: foo
mountPath: /etc/foo
readOnly: true
volumes:
- name: sshdir
secret:
secretName: secret-ssh-auth
- name: foo
secret:
secretName: mysecret
All I wanted was to mount the ssh-privatekey value at ${HOME}/.ssh/id_rsa, but for some reason the mount does not happen.
At the same time, I can see the foo secret correctly at /etc/foo/username. Exhausted, to be honest, but I still want to finish this.
What am I doing wrong?

A Secret of type kubernetes.io/ssh-auth (i.e. an ssh-key secret) does not work out of the box as an SSH mount point, because it is mounted under the filename ssh-privatekey. To fix this you have to do a few things:
You need to map the ssh-privatekey key to the id_rsa filename via a secret:items:key projection in your volume definition.
Mount the secret so it is NOT group/world readable, because the default mode/permissions is 0644 (i.e. add defaultMode: 0400 to the secret volume definition).
Here is what I believe you need to change in your deployment.yaml to fix this problem:
...
volumeMounts:
- mountPath: /root/.ssh   # HOME is /root in the container; Kubernetes does not expand ${HOME} in mountPath
name: sshdir
readOnly: true
volumes:
- name: sshdir
secret:
secretName: secret-ssh-auth
defaultMode: 0400
items:
- key: ssh-privatekey
path: id_rsa
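Once the pod is recreated with that change, a quick check of the projected key is possible with something like the following; this is only a sketch, assuming the deployment also runs in the app namespace and that HOME is /root as set in the question:
# -L follows the symlinks Kubernetes creates inside secret volumes,
# so the mode of the actual file (0400) is shown rather than the link's
kubectl -n app exec deploy/nginx-helm-test -- ls -lL /root/.ssh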

kubectl create secret generic secret-ssh-auth \
--from-file=ssh-privatekey=keys/id_rsa
The command, as you show it, creates a Secret whose data key is ssh-privatekey, populated from keys/id_rsa.
When you mount that Secret as a volume, the file (!) is therefore named ssh-privatekey.
containers:
- name: ...
volumeMounts:
- mountPath: /for/example/secrets
name: sshdir
readOnly: true
volumes:
- name: sshdir
secret:
secretName: secret-ssh-auth
The key will appear as the file /for/example/secrets/ssh-privatekey.
Customarily, you'd map the host file to a similarly named key in the Secret to make this less confusing, i.e.
kubectl create secret generic secret-ssh-auth \
--from-file=id_rsa=keys/id_rsa
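If you are ever unsure which data keys (and therefore which filenames) a Secret will project, kubectl can list them without printing the values; the namespace app is taken from the question:
# Lists each data key together with its size in bytes
kubectl -n app describe secret secret-ssh-auth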

Related

Using TLS secret in ingress from hashicorp vault directly

How can I retrieve a TLS (SSL certificate) secret from HashiCorp Vault into an Ingress?
I have deployed microservices in Kubernetes (OpenStack) with ingress-nginx and HashiCorp Vault. The TLS keys are stored in Vault. I have created a SecretProviderClass:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: sslspc
spec:
provider: vault
secretObjects:
- secretName: sslspc
data:
- key: "tls.key"
objectName: TLSKey
- key: "tls.crt"
objectName: TLSCert
type: kubernetes.io/tls
parameters:
vaultAddress: http://vault.vault:8200
roleName: "approle"
objects: |
- objectName: TLSKey
secretPath: "secret/data/myssl"
secretKey: "tls.key"
- objectName: TLSCert
secretPath: "secret/data/myssl"
secretKey: "tls.crt"
but I can't use it directly in the Ingress. I have to create a pod that mounts the volume and maps it to environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
name: depssl
labels:
app: appbusy
spec:
replicas: 1
selector:
matchLabels:
app: appbusy
template:
metadata:
labels:
app: appbusy
spec:
serviceAccountName: mysa
containers:
- name: appbusy
image: busybox
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "while true; do sleep 300;done"]
env:
- name: TLS.KEY
valueFrom:
secretKeyRef:
name: sslspc
key: tls.key
- name: TLS.CRT
valueFrom:
secretKeyRef:
name: sslspc
key: tls.crt
volumeMounts:
- name: sslspc
mountPath: "/mnt/secrets-store"
readOnly: true
volumes:
- name: sslspc
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "sslspc"
After this I can use it in my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
tls:
- hosts:
- example.com
secretName: sslspc
rules:
- host: example.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: myservice
port:
number: 80
Is it possible to retrieve the secret in the Ingress without creating an additional pod just for mapping purposes?
You can make use of the Vault injector to inject the secrets using annotations like:
annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/agent-configmap: 'my-configmap'
vault.hashicorp.com/tls-secret: 'vault-tls-client'
But to use these annotations you need to set up the injector mechanism in the cluster. Refer to the official documentation for the complete setup and some examples: DOC1, DOC2.
Try this tutorial to understand more about Vault injectors.
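For reference, a minimal sketch of what the injector annotations on a pod template could look like here; the role name approle and secret path secret/data/myssl are taken from the question, everything else is illustrative:
annotations:
  vault.hashicorp.com/agent-inject: 'true'
  vault.hashicorp.com/role: 'approle'
  # Renders the whole secret map to /vault/secrets/myssl inside the pod;
  # an agent-inject-template-myssl annotation can extract individual keys.
  vault.hashicorp.com/agent-inject-secret-myssl: 'secret/data/myssl'
Note that, like the CSI volume, the injector writes the material into the pod's filesystem rather than creating a Kubernetes Secret, so the Ingress still cannot reference it directly.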

When I attach secrets in a deployment it doesn't create `secretObjects` for pods to get parameters

I am trying to create pods and attach SSM parameters to them. I created a secret.yaml file that defines a SecretProviderClass with secretObjects, so the pods can use those secrets. Here is the file:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: aws-secrets
namespace: default
spec:
provider: aws
secretObjects:
- secretName: dbsecret
type: Opaque
data:
- objectName: dbusername
key: username
- objectName: dbpassword
key: password
parameters:
objects: |
- objectName: "secure-store"
objectType: "ssmparameter"
jmesPath:
- path: username
objectAlias: dbusername
- path: password
objectAlias: dbpassword
Also, I created a service account to attach to the deployment. Here is the file:
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provider-user
namespace: default
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::123456789078:role/test-oidc
Here is the deployment file, where I try to create environment variables that read the Parameter Store values from the synced secret and attach them to the pods:
apiVersion: apps/v1
kind: Deployment
metadata:
name: new-app
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: new-app
template:
metadata:
labels:
app: new-app
spec:
containers:
- name: new-app
image: nginx:1.14.2
resources:
requests:
memory: "300Mi"
cpu: "500m"
limits:
memory: "500Mi"
cpu: "1000m"
ports:
- containerPort: 80
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets-store"
readOnly: true
env:
- name: DB_USERNAME_01
valueFrom:
secretKeyRef:
name: dbsecret
key: username
- name: DB_PASSWORD_01
valueFrom:
secretKeyRef:
name: dbsecret
key: password
serviceAccountName: csi-provider-user
volumes:
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "aws-secrets"
But when I apply these files and create the deployment, I get this error:
Error: secret "dbsecret" not found
It doesn't create the secret objects for some reason:
secretObjects:
- secretName: dbsecret
I might be missing some configuration. Thanks for your help!
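A few commands that can help narrow down where the sync fails; this is only a diagnostic sketch, with the resource names taken from the manifests above:
# Is the SecretProviderClass visible to the driver?
kubectl -n default get secretproviderclass aws-secrets
# Did the CSI volume mount succeed? Mount errors show up in the pod events.
kubectl -n default describe pod -l app=new-app
# The synced Secret is only created after a pod has successfully mounted the CSI volume.
kubectl -n default get secret dbsecret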

k8s Permission Denied issue

I get this error when deploying a k8s Deployment. I tried to run as the root user via the security context, but it didn't help. Any idea how to solve it? Unfortunately, I don't have any other ideas or a workaround to avoid this permission issue.
The error I get is:
30: line 1: /scripts/wrapper.sh: Permission denied
stream closed
The deployment is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cluster-autoscaler-grok-exporter
labels:
app: cluster-autoscaler-grok-exporter
spec:
replicas: 1
selector:
matchLabels:
app: cluster-autoscaler-grok-exporter
sidecar: cluster-autoscaler-grok-exporter-sidecar
template:
metadata:
labels:
app: cluster-autoscaler-grok-exporter
sidecar: cluster-autoscaler-grok-exporter-sidecar
spec:
securityContext:
runAsUser: 1001
fsGroup: 2000
serviceAccountName: flux
imagePullSecrets:
- name: id-docker
containers:
- name: get-data
# 3.5.0 - helm v3.5.0, kubectl v1.20.2, alpine 3.12
image: dtzar/helm-kubectl:3.5.0
command: ["sh", "-c", "/scripts/wrapper.sh"]
args:
- cluster-autoscaler
- "90"
# - cluster-autoscaler
- "30"
- /scripts/get_data.sh
- /logs/data.log
volumeMounts:
- name: logs
mountPath: /logs/
- name: scripts-volume-get-data
mountPath: /scripts/get_data.sh
subPath: get_data.sh
- name: scripts-wrapper
mountPath: /scripts/wrapper.sh
subPath: wrapper.sh
- name: export-data
image: ippendigital/grok-exporter:1.0.0.RC3
imagePullPolicy: Always
ports:
- containerPort: 9148
protocol: TCP
volumeMounts:
- name: grok-config-volume
mountPath: /grok/config.yml
subPath: config.yml
- name: logs
mountPath: /logs
volumes:
- name: grok-config-volume
configMap:
name: grok-exporter-config
- name: scripts-volume-get-data
configMap:
name: get-data-script
defaultMode: 0777
defaultMode: 0700
- name: scripts-wrapper
configMap:
name: wrapper-config
defaultMode: 0777
defaultMode: 0700
- name: logs
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: cluster-autoscaler-grok-exporter-sidecar
labels:
sidecar: cluster-autoscaler-grok-exporter-sidecar
spec:
type: ClusterIP
ports:
- name: metrics
protocol: TCP
targetPort: 9144
port: 9148
selector:
sidecar: cluster-autoscaler-grok-exporter-sidecar
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app.kubernetes.io/name: cluster-autoscaler-grok-exporter
app.kubernetes.io/part-of: grok-exporter
name: cluster-autoscaler-grok-exporter
spec:
endpoints:
- port: metrics
selector:
matchLabels:
sidecar: cluster-autoscaler-grok-exporter-sidecar
From what I can see, your script does not have execute permissions for the user the pod runs as (runAsUser: 1001): with defaultMode: 0700, only the file's owner may execute it.
Remove this line from your ConfigMap volume definitions:
defaultMode: 0700
Keep only:
defaultMode: 0777
Also, I see a missing leading / in your script path:
- /bin/sh scripts/get_data.sh
So, change it to
- /bin/sh /scripts/get_data.sh
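If it helps, the relevant volumes entry from the question then looks like this (only the get-data script is shown; the wrapper entry is analogous):
volumes:
  - name: scripts-volume-get-data
    configMap:
      name: get-data-script
      defaultMode: 0777   # a single defaultMode, readable and executable by everyone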

Define/change Kubernetes SSH key file name in a YAML

I have a secret:
apiVersion: v1
kind: Secret
metadata:
name: secret-ssh-auth
type: kubernetes.io/ssh-auth
data:
ssh-privatekey: |
SEVMTE9PT09PT09PT09PT09PT09PCg==
and deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: secret-ssh-auth
mountPath: /root/.ssh
volumes:
- name: secret-ssh-auth
secret:
secretName: secret-ssh-auth
defaultMode: 0400
It creates a file at the path /root/.ssh/ssh-privatekey, while I want the file to be named /root/.ssh/id_rsa instead.
I know this can be solved by running a kubectl command, but I want to handle it inside the YAML file.
So, how can I do that in the YAML file?
Based on the Kubernetes documentation, the ssh-privatekey key is mandatory for this Secret type. In this case, you can give it a placeholder value via the stringData key, then define another key under data, like this:
apiVersion: v1
kind: Secret
metadata:
name: secret-ssh-auth
type: kubernetes.io/ssh-auth
stringData:
ssh-privatekey: |
-
data:
id_rsa: |
SEVMTE9PT09PT09PT09PT09PT09PCg==
Got the same problem, and resolved it by simply defining spec.volumes like this, which renames the key via the path value:
volumes:
- name: privatekey
secret:
secretName: private-key
items:
- key: ssh-privatekey
path: id_rsa
defaultMode: 384
then refer to it inside the container definition:
containers:
- name: xxx
volumeMounts:
- name: privatekey
mountPath: /path/to/.ssh
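For what it's worth, defaultMode is given here in decimal: 384 is octal 0600 (owner read/write). A quick way to verify the rename and the mode from outside the pod, assuming the nginx-deployment and the /root/.ssh mount path from the question:
# -L dereferences the symlinks inside the secret volume so the file's mode is shown
kubectl exec deploy/nginx-deployment -- ls -lL /root/.ssh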

How to mount a configMap as a volume mount in a Stateful Set

I don't see an option to mount a ConfigMap as a volume in a StatefulSet. As per https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulset-v1-apps, only PVCs can be associated with a StatefulSet, but a PVC does not have an option for ConfigMaps.
Here is a minimal example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: example
spec:
selector:
matchLabels:
app: example
serviceName: example
template:
metadata:
labels:
app: example
spec:
containers:
- name: example
image: nginx:stable-alpine
volumeMounts:
- mountPath: /config
name: example-config
volumes:
- name: example-config
configMap:
name: example-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
name: example-configmap
data:
a: "1"
b: "2"
In the container, you will find the files a and b under /config, with the contents 1 and 2, respectively.
Some explanation:
You do not need a PVC to mount a ConfigMap as a volume into your pods. PersistentVolumeClaims are persistent drives that you can read from and write to; an example of their usage is a database such as Postgres.
ConfigMaps, on the other hand, are read-only key-value structures stored inside Kubernetes (in its etcd store) that are meant to hold the configuration for your application. Their values can be mounted as environment variables or as files, either individually or all together.
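For example, a single key can be exposed as an environment variable with configMapKeyRef, or projected as a single file via items; a sketch using the example-configmap defined above (the variable name and file name are just illustrations):
containers:
  - name: example
    image: nginx:stable-alpine
    env:
      - name: A_VALUE
        valueFrom:
          configMapKeyRef:
            name: example-configmap
            key: a               # only key "a", exposed as $A_VALUE
    volumeMounts:
      - mountPath: /config
        name: example-config
volumes:
  - name: example-config
    configMap:
      name: example-configmap
      items:
        - key: b
          path: b.conf           # only key "b", projected as /config/b.conf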
I have done it this way:
apiVersion: v1
kind: ConfigMap
metadata:
name: rabbitmq-configmap
namespace: default
data:
enabled_plugins: |
[rabbitmq_management,rabbitmq_shovel,rabbitmq_shovel_management].
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
labels:
component: rabbitmq
spec:
serviceName: "rabbitmq"
replicas: 1
selector:
matchLabels:
component: rabbitmq
template:
metadata:
labels:
component: rabbitmq
spec:
initContainers:
- name: "rabbitmq-config"
image: busybox:1.32.0
volumeMounts:
- name: rabbitmq-config
mountPath: /tmp/rabbitmq
- name: rabbitmq-config-rw
mountPath: /etc/rabbitmq
command:
- sh
- -c
- cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf;
cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins
volumes:
- name: rabbitmq-config
configMap:
name: rabbitmq-configmap
optional: false
items:
- key: enabled_plugins
path: "enabled_plugins"
- name: rabbitmq-config-rw
emptyDir: {}
containers:
- name: rabbitmq
image: rabbitmq:3.8.5-management
env:
- name: RABBITMQ_DEFAULT_USER
value: "username"
- name: RABBITMQ_DEFAULT_PASS
value: "password"
- name: RABBITMQ_DEFAULT_VHOST
value: "vhost"
ports:
- containerPort: 15672
name: ui
- containerPort: 5672
name: api
volumeMounts:
- name: rabbitmq-data-pvc
mountPath: /var/lib/rabbitmq/mnesia
- name: rabbitmq-config-rw
mountPath: /etc/rabbitmq   # mount the prepared config so the server actually sees it
volumeClaimTemplates:
- metadata:
name: rabbitmq-data-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
spec:
selector:
component: rabbitmq
ports:
- protocol: TCP
port: 15672
targetPort: 15672
name: ui
- protocol: TCP
port: 5672
targetPort: 5672
name: api
type: ClusterIP
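With the config volume mounted into the rabbitmq container as above, the projected plugin list can be checked directly in the running pod; the pod is named rabbitmq-0 since the StatefulSet has a single replica:
kubectl exec rabbitmq-0 -- cat /etc/rabbitmq/enabled_plugins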