Using a TLS secret in Ingress from HashiCorp Vault directly - Kubernetes

How can I retrieve a TLS (SSL certificate) secret from HashiCorp Vault into an Ingress?
I have deployed microservices in Kubernetes (OpenStack) with ingress-nginx and HashiCorp Vault. The TLS keys are stored in HashiCorp Vault. I have created a SecretProviderClass:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: sslspc
spec:
  provider: vault
  secretObjects:
  - secretName: sslspc
    data:
    - key: "tls.key"
      objectName: TLSKey
    - key: "tls.crt"
      objectName: TLSCert
    type: kubernetes.io/tls
  parameters:
    vaultAddress: http://vault.vault:8200
    roleName: "approle"
    objects: |
      - objectName: TLSKey
        secretPath: "secret/data/myssl"
        secretKey: "tls.key"
      - objectName: TLSCert
        secretPath: "secret/data/myssl"
        secretKey: "tls.crt"
but I can't use it directly in the Ingress. I have to create a pod that mounts it as a volume and maps it into the environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: depssl
  labels:
    app: appbusy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appbusy
  template:
    metadata:
      labels:
        app: appbusy
    spec:
      serviceAccountName: mysa
      containers:
      - name: appbusy
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh"]
        args: ["-c", "while true; do sleep 300;done"]
        env:
        - name: TLS.KEY
          valueFrom:
            secretKeyRef:
              name: sslspc
              key: tls.key
        - name: TLS.CRT
          valueFrom:
            secretKeyRef:
              name: sslspc
              key: tls.crt
        volumeMounts:
        - name: sslspc
          mountPath: "/mnt/secrets-store"
          readOnly: true
      volumes:
      - name: sslspc
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "sslspc"
After this I can use it in my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: sslspc
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myservice
            port:
              number: 80
Is it possible to retrieve the secret in the Ingress without creating an additional pod just for mapping purposes?

You can make use of Vault injectors to inject the secrets using annotations like:
annotations:
  vault.hashicorp.com/agent-inject: 'true'
  vault.hashicorp.com/agent-configmap: 'my-configmap'
  vault.hashicorp.com/tls-secret: 'vault-tls-client'
But to use these annotations you need to set up the injector mechanism in the cluster. Refer to the official documentation for the complete setup and for some examples: DOC1, DOC2.
Try this tutorial to understand more about Vault injectors.
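For illustration, a minimal sketch of a pod template annotated for the injector might look roughly like this (hedged; the role name and secret path are assumptions carried over from the question's SecretProviderClass, and the agent sidecar renders the secret to a file under /vault/secrets inside the pod):
spec:
  template:
    metadata:
      annotations:
        # Sketch only: role and path are assumptions based on the question above.
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'approle'
        # Renders the Vault secret to /vault/secrets/tls-cert in the pod.
        vault.hashicorp.com/agent-inject-secret-tls-cert: 'secret/data/myssl'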

Related

When I attach secrets in a deployment it doesn't create `secretObjects` for pods to get parameters

I am trying to create pods and attach SSM parameters to them. I created a secret.yaml file that defines a SecretProviderClass with secretObjects so the pods can use these secrets. Here is the file:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: default
spec:
  provider: aws
  secretObjects:
  - secretName: dbsecret
    type: Opaque
    data:
    - objectName: dbusername
      key: username
    - objectName: dbpassword
      key: password
  parameters:
    objects: |
      - objectName: "secure-store"
        objectType: "ssmparameter"
        jmesPath:
          - path: username
            objectAlias: dbusername
          - path: password
            objectAlias: dbpassword
Also, I created a service account to attach to the deployment. Here is the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provider-user
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789078:role/test-oidc
Here is the deployment file where I tried to create env variables in order to get the parameters from Parameter Store via the synced secret and attach them to the pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-app
  template:
    metadata:
      labels:
        app: new-app
    spec:
      containers:
      - name: new-app
        image: nginx:1.14.2
        resources:
          requests:
            memory: "300Mi"
            cpu: "500m"
          limits:
            memory: "500Mi"
            cpu: "1000m"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
        env:
        - name: DB_USERNAME_01
          valueFrom:
            secretKeyRef:
              name: dbsecret
              key: username
        - name: DB_PASSWORD_01
          valueFrom:
            secretKeyRef:
              name: dbsecret
              key: password
      serviceAccountName: csi-provider-user
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "aws-secrets"
But when I apply these files and create the deployment I get this error:
Error: secret "dbsecret" not found
It doesn't create secret objects for some reason:
secretObjects:
- secretName: dbsecret
I might be missing some configuration. Thanks for your help!

Mount Kubernetes SSH secret in container

Unable to mount a Kubernetes secret to the ${HOME}/.ssh/id_rsa path.
Following is my secrets.yaml, created using:
kubectl create secret generic secret-ssh-auth --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=keys/id_rsa
apiVersion: v1
data:
  ssh-privatekey: abcdefgh
kind: Secret
metadata:
  name: secret-ssh-auth
  namespace: app
type: kubernetes.io/ssh-auth
---
apiVersion: v1
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
  name: mysecret
  namespace: app
type: Opaque
Following is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-helm-test
  labels:
    helm.sh/chart: helm-test-0.1.0
    app.kubernetes.io/name: helm-test
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: helm-test
      app.kubernetes.io/instance: nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helm-test
        app.kubernetes.io/instance: nginx
    spec:
      serviceAccountName: nginx-helm-test
      securityContext:
        {}
      containers:
      - name: helm-test
        securityContext:
          {}
        image: "nginx:1.16.0"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
        resources:
          {}
        env:
        - name: HOME
          value: /root
        volumeMounts:
        - mountPath: ${HOME}/.ssh/id_rsa
          name: sshdir
          readOnly: true
        - name: foo
          mountPath: /etc/foo
          readOnly: true
      volumes:
      - name: sshdir
        secret:
          secretName: secret-ssh-auth
      - name: foo
        secret:
          secretName: mysecret
All I wanted was to mount the ssh-privatekey value at ${HOME}/.ssh/id_rsa, but for some reason the above mount does not happen.
At the same time, I was able to see the foo secret correctly at /etc/foo/username. Exhausted, to be honest, but I still want to finish this.
What am I doing wrong?
The K8s Secret type kubernetes.io/ssh-auth (i.e. an SSH key secret) does not work out of the box as a mount point for SSH, since it mounts the key under the filename ssh-privatekey. To fix this you have to do a few things:
You need to mount the ssh-privatekey key to the id_rsa filename via a secret:items:key projection in your volume definition.
Mount the secret so it is NOT group/world readable, because the default mode/permissions is 0644 (i.e. add defaultMode: 0400 to the secret volume definition).
Here is what I believe you need to change in your deployment.yaml to fix this problem:
...
volumeMounts:
- mountPath: ${HOME}/.ssh
  name: sshdir
  readOnly: true
volumes:
- name: sshdir
  secret:
    secretName: secret-ssh-auth
    defaultMode: 0400
    items:
    - key: ssh-privatekey
      path: id_rsa
kubectl create secret generic secret-ssh-auth \
--from-file=ssh-privatekey=keys/id_rsa
As you show, this creates a Secret, but the data key is ssh-privatekey and it is created from keys/id_rsa.
When you volume-mount it, you reference the file (!) as ssh-privatekey.
containers:
- name: ...
  volumeMounts:
  - mountPath: /for/example/secrets
    name: sshdir
    readOnly: true
volumes:
- name: sshdir
  secret:
    secretName: secret-ssh-auth
The key will be mounted at /for/example/secrets/ssh-privatekey.
Customarily, you'd remap the host file to a similarly named file in the secret to make this less confusing, i.e.
kubectl create secret generic secret-ssh-auth \
--from-file=id_rsa=keys/id_rsa
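With that naming, the file mounted in the example above appears as /for/example/secrets/id_rsa directly, with no items remapping needed in the volume definition.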

Define/change Kubernetes SSH key file name in a YAML

I have a secret:
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
and deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-ssh-auth
          mountPath: /root/.ssh
      volumes:
      - name: secret-ssh-auth
        secret:
          secretName: secret-ssh-auth
          defaultMode: 0400
It creates a file at the path /root/.ssh/ssh-privatekey, while I want it to be named /root/.ssh/id_rsa instead.
I know we can solve it by running a kubectl command, but I want to handle it inside the YAML file.
So, how can I do that in the YAML file?
Based on the Kubernetes documentation, the ssh-privatekey key is mandatory. In this case, you can leave it empty via the stringData key, then define another one under the data key like this:
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: |
    -
data:
  id_rsa: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
Got the same problem, and resolved it by simply defining spec.volumes like this, which renames the key via the path value:
volumes:
- name: privatekey
  secret:
    secretName: private-key
    items:
    - key: ssh-privatekey
      path: id_rsa
    defaultMode: 384
then refer to it inside the container definition:
containers:
- name: xxx
  volumeMounts:
  - name: privatekey
    mountPath: /path/to/.ssh
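Note that defaultMode: 384 is simply the decimal form of octal 0600 (owner read/write); manifests can also use the octal spelling, as the earlier answer does with defaultMode: 0400.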

How to set a Kubernetes ConfigMap and Secret as MongoDB environment variables

I am trying to set the two Mongo env variables, namely MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD, using a Kubernetes ConfigMap and Secret as follows.
When I don't use the ConfigMap and Secret, i.e. I hardcode the username and password, it works, but when I try to replace them with the ConfigMap and Secret, it says
'Authentication failed.'
My username and password are the same, which is admin.
Here's the YAML definition for these objects; can someone help me figure out what is wrong?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  username: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-password
data:
  password: YWRtaW4K
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
      - name: mongodbtest
        image: mongo:3
        # env:
        # - name: MONGO_INITDB_ROOT_USERNAME
        #   value: admin
        # - name: MONGO_INITDB_ROOT_PASSWORD
        #   value: admin
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            configMapKeyRef:
              name: mongodb-username
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-password
              key: password
Finally, after hours, I was able to find the solution. It was not something on the Kubernetes side; it was how I did the base64 encoding.
The correct way to encode is with the following command:
echo -n 'admin' | base64
and this was my issue.
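To see the difference (a quick illustration; the outputs shown are what these commands print):
echo 'admin' | base64      # prints YWRtaW4K (encodes "admin" plus a trailing newline)
echo -n 'admin' | base64   # prints YWRtaW4= (encodes just "admin")
The YWRtaW4K value in the Secret above therefore contains a newline, which is why authentication failed.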
Your deployment yaml is fine, just change spec.containers[0].env to spec.containers[0].envFrom:
spec:
  containers:
  - name: mongodbtest
    image: mongo:3
    envFrom:
    - configMapRef:
        name: mongodb-username
    - secretRef:
        name: mongodb-password
That will put all keys of your Secret and ConfigMap into the container as environment variables named after the keys themselves, so for MongoDB to pick them up the keys need to be MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD, as in the next answer.
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: root
  MONGO_INITDB_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  name: mongo-cred
  namespace: default
Inject it into the deployment like:
envFrom:
- configMapRef:
    name: mongo-cred
The deployment will be something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
      - name: mongodbtest
        image: mongo:3
        envFrom:
        - configMapRef:
            name: mongo-cred
If you want to save the data in a Secret instead, that is the better practice for sensitive data; the values are stored base64-encoded.
envFrom:
- secretRef:
    name: mongo-cred
You can create the secret with:
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: YWRtaW4K # base64 encoded
  MONGO_INITDB_ROOT_PASSWORD: YWRtaW4K
kind: Secret
type: Opaque
metadata:
  name: mongo-cred
  namespace: default
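As an alternative (a hedged suggestion, not part of the original answer), you can let kubectl do the base64 encoding and avoid the trailing-newline pitfall described earlier:
kubectl create secret generic mongo-cred \
  --from-literal=MONGO_INITDB_ROOT_USERNAME=admin \
  --from-literal=MONGO_INITDB_ROOT_PASSWORD=admin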

Ingress endpoint displays a blank page with response 200 on GKE

Being completely new to Google Cloud, and almost new to Kubernetes, I struggled all weekend trying to deploy my app on GKE.
My app consists of a React frontend, a Node.js backend, a PostgreSQL database (connected to the backend with a cloudsql-proxy) and Redis.
I serve the frontend and backend with an Ingress; everything seems to be working and my pods are running. The ingress-nginx controller exposes the endpoint of my app, but when I open it, instead of seeing my app I see a blank page with a 200 response. And when I run kubectl logs MY_POD, I can see that my React app is running.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: superflix-ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: superflix-ui-node-service
          servicePort: 3000
      - path: /graphql/*
        backend:
          serviceName: superflix-backend-node-service
          servicePort: 4000
Here is my backend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-backend-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
  - port: 4000
    targetPort: 4000
    # protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-backend-deployment
  namespace: default
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: superflix-backend
        image: gcr.io/superflix-project/superflix-server:v6
        ports:
        - containerPort: 4000
        # The following environment variables will contain the database host,
        # user and password to connect to the PostgreSQL instance.
        env:
        - name: REDIS_HOST
          value: superflix-redis.default.svc.cluster.local
        - name: IN_PRODUCTION
          value: "true"
        - name: POSTGRES_DB_HOST
          value: "127.0.0.1"
        - name: POSTGRES_DB_PORT
          value: "5432"
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-env-secrets
              key: REDIS_PASS
        # [START cloudsql_secrets]
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        # [END cloudsql_secrets]
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=superflix-project:europe-west3:superflix-db=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      # [END volumes]
And here is my frontend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-ui-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
  - port: 3000
    targetPort: 3000
    # protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-ui-deployment
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: superflix-ui
        image: gcr.io/superflix-project/superflix-ui:v4
        ports:
        - containerPort: 3000
        env:
        - name: IN_PRODUCTION
          value: 'true'
        - name: BACKEND_HOST
          value: superflix-backend-node-service
EDIT:
When I look at the Stackdriver logs of my nginx-ingress-controller, I see these warnings:
Service "default/superflix-ui" does not have any active Endpoint.
Service "default/superflix-backend" does not have any active Endpoint.
I actually found the issue. I changed the Ingress path from /* to /, and now it is working perfectly.
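For reference, a sketch of the corrected rules section implied by that fix (whether the graphql path keeps a wildcard or trailing slash is an assumption here):
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: superflix-ui-node-service
          servicePort: 3000
      - path: /graphql
        backend:
          serviceName: superflix-backend-node-service
          servicePort: 4000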