Kubernetes: using secrets in a pod

I have a Spring Boot app image which needs the following property:
server.ssl.keyStore=/certs/keystore.jks
I am loading the keystore file into a secret using the below command:
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I use the below secret reference in my deployment.yaml:
{
  "name": "SERVER_SSL_KEYSTORE",
  "valueFrom": {
    "secretKeyRef": {
      "name": "ssl-keystore-cert",
      "key": "server-ssl.jks"
    }
  }
}
With the above reference, I am getting the below error.
Error: failed to start container "app-service": Error response from
daemon: oci runtime error: container_linux.go:265: starting container
process caused "process_linux.go:368: container init caused \"setenv:
invalid argument\"" Back-off restarting failed container
If I go with the volume mount option:
"spec": {
"volumes": [
{
"name": "keystore-cert",
"secret": {
"secretName": "ssl-keystore-cert",
"items": [
{
"key": "server-ssl.jks",
"path": "keycerts"
}
]
}
}
],
"containers": [
{
"env": [
{
"name": "JAVA_OPTS",
"value": "-Dserver.ssl.keyStore=/certs/keystore/keycerts"
}
],
"name": "app-service",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"name": "keystore-cert",
"mountPath": "/certs/keystore"
}
],
"imagePullPolicy": "IfNotPresent"
}
]
I am getting the below error with the above approach.
Caused by: java.lang.IllegalArgumentException: Resource location must
not be null at
org.springframework.util.Assert.notNull(Assert.java:134)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.util.ResourceUtils.getURL(ResourceUtils.java:131)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory.configureSslKeyStore(JettyEmbeddedServletContainerFactory.java:301)
~[spring-boot-1.4.5.RELEASE.jar!/:1.4.5.RELEASE]
I also tried the below option instead of JAVA_OPTS:
{
  "name": "SERVER_SSL_KEYSTORE",
  "value": "/certs/keystore/keycerts"
}
Still the error is the same.
I am not sure what the right approach is.

I tried to reproduce the situation with your configuration. I created a secret using the command:
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I used this YAML as a test environment:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: JAVA_OPTS
      value: "-Dserver.ssl.keyStore=/certs/keystore/server-ssl.jks"
    ports:
    - containerPort: 8080
      protocol: TCP
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/certs/keystore"
  volumes:
  - name: secret-volume
    secret:
      secretName: ssl-keystore-cert
As you can see, I used the "server-ssl.jks" file name in the variable. If you create a secret from a file, Kubernetes stores that file in the secret under its file name as the key, and when you mount the secret somewhere, that key is written out as a file. You tried to use /certs/keystore/keycerts, but it doesn't exist, which is what you see in the logs:
Resource location must not be null at org.springframework.util.Assert.notNull
because your mounted secret ends up at /certs/keystore/keycerts/server-ssl.jks.
It should work; just fix the paths.
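Applied to the app container from the question, the relevant pieces would look roughly like this (the image name is a placeholder and the JAVA_OPTS wiring is taken from the question):
containers:
- name: app-service
  image: your-spring-boot-image   # placeholder, not a real image name
  env:
  - name: JAVA_OPTS
    # point the keystore property at the file projected from the secret
    value: "-Dserver.ssl.keyStore=/certs/keystore/server-ssl.jks"
  ports:
  - containerPort: 8080
    protocol: TCP
  volumeMounts:
  - name: keystore-cert
    mountPath: /certs/keystore
    readOnly: true
volumes:
- name: keystore-cert
  secret:
    secretName: ssl-keystore-cert
    # no "items" override, so the key keeps its original file name: server-ssl.jks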

Related

Setting environment variables in kubernetes manifest using "kubectl set env"

I am trying to update a helm-deployed deployment so that it uses a secret stored as a k8s secret resource. This must be set as the STORAGE_PASSWORD environment variable in my pod.
In my case, the secret is in secrets/redis and the data item is redis-password:
$ kubectl get secret/redis -oyaml
apiVersion: v1
data:
  redis-password: XXXXXXXXXXXXXXXX=
kind: Secret
metadata:
  name: redis
type: Opaque
I have tried:
$ kubectl set env --from secret/redis deployment/gateway --keys=redis-password
Warning: key redis-password transferred to REDIS_PASSWORD
deployment.apps/gateway env updated
When I look at my updated deployment manifest, I see the variable has been added, but (as the warning indicated) it has been named REDIS_PASSWORD:
- name: REDIS_PASSWORD
  valueFrom:
    secretKeyRef:
      key: redis-password
      name: redis
I have also tried kubectl patch with a replace operation, but I can't get the syntax correct to have the secret inserted.
How do I change the name of the environment variable to STORAGE_PASSWORD?
Given a deployment that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  template:
    spec:
      containers:
      - image: alpinelinux/darkhttpd
        name: darkhttpd
        args:
        - --port
        - "9991"
        ports:
        - name: http
          protocol: TCP
          containerPort: 9991
        env:
        - name: EXAMPLE_VAR
          value: example value
The syntax for patching in your secret would look like:
kubectl patch deploy/example --patch='
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "darkhttpd",
            "env": [
              {
                "name": "STORAGE_PASSWORD",
                "valueFrom": {
                  "secretKeyRef": {
                    "name": "redis",
                    "key": "redis-password"
                  }
                }
              }
            ]
          }
        ]
      }
    }
  }
}
'
Or using a JSONPatch style patch:
kubectl patch --type json deploy/example --patch='
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/env/-",
    "value": {
      "name": "STORAGE_PASSWORD",
      "valueFrom": {
        "secretKeyRef": {
          "name": "redis",
          "key": "redis-password"
        }
      }
    }
  }
]
'
Neither one is especially pretty because you're adding a complex nested structure to an existing complex nested structure.
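Note that the REDIS_PASSWORD entry created earlier by kubectl set env can be removed with the same subcommand by suffixing the variable name with a dash (assuming the deployment is still named gateway):
kubectl set env deployment/gateway REDIS_PASSWORD-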
You may also update resources with kubectl edit:
kubectl edit deployment gateway
then edit the YAML:
# - name: REDIS_PASSWORD
- name: STORAGE_PASSWORD
  valueFrom:
    secretKeyRef:
      key: redis-password
      name: redis
FYI: https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#kubectl-edit
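Either way, a quick check that the variable ended up where you expect (assuming the gateway deployment has a single container):
kubectl get deployment gateway -o jsonpath='{.spec.template.spec.containers[0].env}'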

RabbitMQ Kubernetes RabbitmqCluster Operator not adding members to Quorum Queues

I am using the RabbitMQ cluster operator to deploy a RabbitMQ HA cluster on Kubernetes.
I am importing the definitions by referring to this example ("Import Definitions Example"): https://github.com/rabbitmq/cluster-operator/blob/main/docs/examples/import-definitions/rabbitmq.yaml
I have provided the configuration below (3 replicas).
After the cluster is up and I access the management console using port-forward on the service, I see that the queue is declared with its type set to "quorum", but it only shows the first node of the cluster under leader, online and members.
The cluster is set up with 3 replicas, and the default value for the quorum initial group size is 3 if I don't specify one (although I am specifying it explicitly in the definitions file).
It should show the other members of the cluster under the online and members sections, but it shows only the first node (rabbitmq-ha-0).
Am I missing any configuration?
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: import-definitions
spec:
  replicas: 3
  override:
    statefulSet:
      spec:
        template:
          spec:
            containers:
            - name: rabbitmq
              volumeMounts:
              - mountPath: /path/to/exported/ # filename left out intentionally
                name: definitions
            volumes:
            - name: definitions
              configMap:
                name: definitions # Name of the ConfigMap which contains definitions you wish to import
  rabbitmq:
    additionalConfig: |
      load_definitions = /path/to/exported/definitions.json # Path to the mounted definitions file
and my definitions file is something like this:
{
  "users": [
    {
      "name": "my-vhost",
      "password": "my-vhost",
      "tags": "",
      "limits": {}
    }
  ],
  "vhosts": [
    {
      "name": "/"
    },
    {
      "name": "my-vhost"
    }
  ],
  "permissions": [
    {
      "user": "my-vhost",
      "vhost": "my-vhost",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "topic_permissions": [
    {
      "user": "my-vhost",
      "vhost": "my-vhost",
      "exchange": "",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [
    {
      "value": {
        "max-connections": 100,
        "max-queues": 15
      },
      "vhost": "my-vhost",
      "component": "vhost-limits",
      "name": "limits"
    }
  ],
  "policies": [
    {
      "vhost": "my-vhost",
      "name": "Queue-Policy",
      "pattern": "my-*",
      "apply-to": "queues",
      "definition": {
        "delivery-limit": 3
      },
      "priority": 0
    }
  ],
  "queues": [
    {
      "name": "my-record-q",
      "vhost": "my-vhost",
      "durable": true,
      "auto_delete": false,
      "arguments": {
        "x-queue-type": "quorum",
        "x-quorum-initial-group-size": 3
      }
    }
  ],
  "exchanges": [
    {
      "name": "my.records.topic",
      "vhost": "my-vhost",
      "type": "topic",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "my.records-changed.topic",
      "vhost": "my-vhost",
      "destination": "my-record-q",
      "destination_type": "queue",
      "routing_key": "#.record.#",
      "arguments": {}
    }
  ]
}
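For what it's worth, quorum membership can also be inspected from inside one of the RabbitMQ pods (a sketch: the pod and container names assume the cluster operator's default <cluster-name>-server-N naming, and the flags follow the rabbitmq-queues documentation):
kubectl exec import-definitions-server-0 -c rabbitmq -- rabbitmq-queues quorum_status --vhost my-vhost my-record-q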

Does Kubernetes take JSON format as input file to create configmap and secret?

I have an existing configuration file in JSON format, something like below
{
  "maxThreadCount": 10,
  "trackerConfigs": [
    {
      "url": "https://example1.com/",
      "username": "username",
      "password": "password",
      "defaultLimit": 1
    },
    {
      "url": "https://example2.com/",
      "username": "username",
      "password": "password",
      "defaultLimit": 1
    }
  ],
  "repoConfigs": [
    {
      "url": "https://github.com/",
      "username": "username",
      "password": "password",
      "type": "GITHUB"
    }
  ],
  "streamConfigs": [
    {
      "url": "https://example.com/master.json",
      "type": "JSON"
    }
  ]
}
I understand that I am allowed to pass a key/value-pair properties file with the --from-file option when creating a ConfigMap or Secret.
But how about a JSON-formatted file? Does Kubernetes accept a JSON file as input to create a ConfigMap or Secret as well?
$ kubectl create configmap demo-configmap --from-file=example.json
If I run this command, it says configmap/demo-configmap created. But how can I refer to this ConfigMap's values in another pod?
When you create a ConfigMap or Secret using --from-file, by default the file name becomes the key name and the content of the file becomes the value.
For example, the ConfigMap you created will look like:
apiVersion: v1
data:
  example.json: |
    {
      "maxThreadCount": 10,
      "trackerConfigs": [
        {
          "url": "https://example1.com/",
          "username": "username",
          "password": "password",
          "defaultLimit": 1
        },
        {
          "url": "https://example2.com/",
          "username": "username",
          "password": "password",
          "defaultLimit": 1
        }
      ],
      "repoConfigs": [
        {
          "url": "https://github.com/",
          "username": "username",
          "password": "password",
          "type": "GITHUB"
        }
      ],
      "streamConfigs": [
        {
          "url": "https://example.com/master.json",
          "type": "JSON"
        }
      ]
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-05-07T09:03:55Z"
  name: demo-configmap
  namespace: default
  resourceVersion: "5283"
  selfLink: /api/v1/namespaces/default/configmaps/demo-configmap
  uid: ce566b36-c141-426e-be30-eb843ab20db6
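As a side note, if you want the key to be named differently from the source file, --from-file accepts an explicit key before the path (config.json below is an arbitrary key name chosen for illustration):
kubectl create configmap demo-configmap --from-file=config.json=example.json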
You can mount the ConfigMap into your pod as a volume, where the key name becomes the file name and the value becomes the content of the file, like the following:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: demo-configmap
  restartPolicy: Never
When the pod runs, the command ls /etc/config/ produces the output below:
example.json
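If you would rather have the JSON in an environment variable than in a file, the same key can also be referenced with configMapKeyRef (a sketch; whether a multi-line JSON value is practical as an env var depends on your application, and APP_CONFIG_JSON is just an illustrative name):
env:
- name: APP_CONFIG_JSON   # illustrative variable name
  valueFrom:
    configMapKeyRef:
      name: demo-configmap
      key: example.json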
ConfigMaps are a container for key/value pairs. So, if you create a ConfigMap from a file containing JSON, it will be stored with the file name as the key and the JSON text as the value.
To access such a ConfigMap from a Pod, you would have to mount it into your Pod as a volume:
How to mount config maps
Unfortunately the solution as stated by hoque did not work for me. In my case the app terminated with a very suspect message:
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application 'myapp.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
I could see that appsettings.json was deployed, but something had gone wrong: mounting the volume at /app alone hides the application files the image ships in that directory, which is why the runtime could no longer find myapp.dll. In the end, this solution worked for me (similar, but with some extras):
spec:
  containers:
  - name: webapp
    image: <my image>
    volumeMounts:
    - name: appconfig
      # "mountPath: /app" only doesn't work (app crashes)
      mountPath: /app/appsettings.json
      subPath: appsettings.json
  volumes:
  - name: appconfig
    configMap:
      name: my-config-map
      # Required since "mountPath: /app" only doesn't work (app crashes)
      items:
      - key: appsettings.json
        path: appsettings.json
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
  labels:
    app: swpq-task-02-team5
data:
  appsettings.json: |
    {
      ...
    }
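For completeness, a ConfigMap like this can also be generated directly from the local file, with the file name becoming the key (the label shown above would then have to be added separately):
kubectl create configmap my-config-map --from-file=appsettings.json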
I had this issue for a couple of days, as I wanted to mount a JSON config file (config.production.json) from my local directory into a specific location inside the pod's containers (/var/lib/ghost). The config below worked for me. Please note the mountPath and subPath keys, which did the trick. The snippet below is from a Deployment, shortened for ease of reading:
spec:
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap
  containers:
  - env:
    - name: url
      value: https://www.example.com
    volumeMounts:
    - name: configmap-volume
      mountPath: /var/lib/ghost/config.production.json
      subPath: config.production.json

Mounting k8 permanent volume fails silently

I am trying to mount a PV into a pod with the following:
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv",
    "labels": {
      "type": "ssd1-zone1"
    }
  },
  "spec": {
    "capacity": {
      "storage": "150Gi"
    },
    "hostPath": {
      "path": "/mnt/data"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "storageClassName": "zone1"
  }
}
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "pvc",
    "namespace": "clever"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "150Gi"
      }
    },
    "volumeName": "pv",
    "storageClassName": "zone1"
  }
}
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: pv
    persistentVolumeClaim:
      claimName: pvc
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv
The pod is created properly and uses the PVC without problems. When I ssh into the pod to check the mount, however, the size is 50GB, which is the size of the attached storage and not the volume size I specified.
root@task-pv-pod:/# df -aTh | grep "/html"
/dev/vda1 xfs 50G 13G 38G 26% /usr/share/nginx/html
The PVC appears to be correct too:
root@5139993be066:/# kubectl describe pvc pvc
Name: pvc
Namespace: default
StorageClass: zone1
Status: Bound
Volume: pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteO...
pv.kubernetes.io/bind-completed=yes
Finalizers: []
Capacity: 150Gi
Access Modes: RWO
Events: <none>
I have deleted and recreated the volume and the claim many times and tried to use different images for my pod. Nothing is working.
It looks like your /mnt/data is on the root partition, hence it reports the same free space as any other folder in the root filesystem.
The thing about requested and defined capacities for a PV/PVC is that they are only values used for matching (or hinting a dynamic provisioner). In the case of hostPath and a manually created PV, you can define 300TB and it will still bind, even if the real folder behind hostPath has 5G, because the real size of the device is not verified (which is reasonable, since you are trusted to provide correct data in the PV).
So, as I said, check whether /mnt/data is actually a separate mount and not just part of the root filesystem. If you still have the problem, provide the output of the mount command on the node where the pod is running.
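For example, on the node itself (not inside the pod), something like the following shows which filesystem actually backs the path (findmnt, if available, reports the containing mount):
df -h /mnt/data
findmnt --target /mnt/data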

Getting IP into env variable via DNS in kubernetes

I'm trying to get my head around K8s, coming from Docker Compose. I would like to set up my first pod with two containers which I pushed to a registry. My question:
How do I get the IP via DNS into an environment variable, so that registrator can connect to Consul? See the registrator container's args: consul://consul:8500. The consul host needs to be replaced using the env.
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "service-discovery",
    "labels": {
      "name": "service-discovery"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "consul",
        "image": "eu.gcr.io/{myproject}/consul",
        "args": [
          "-server",
          "-bootstrap",
          "-advertise=$(MY_POD_IP)"
        ],
        "env": [
          {
            "name": "MY_POD_IP",
            "valueFrom": {
              "fieldRef": {
                "fieldPath": "status.podIP"
              }
            }
          }
        ],
        "imagePullPolicy": "IfNotPresent",
        "ports": [
          {
            "containerPort": 8300,
            "name": "server"
          },
          {
            "containerPort": 8400,
            "name": "alt-port"
          },
          {
            "containerPort": 8500,
            "name": "ui-port"
          },
          {
            "containerPort": 53,
            "name": "udp-port"
          },
          {
            "containerPort": 8443,
            "name": "https-port"
          }
        ]
      },
      {
        "name": "registrator",
        "image": "eu.gcr.io/{myproject}/registrator",
        "args": [
          "-internal",
          "-ip=$(MY_POD_IP)",
          "consul://consul:8500"
        ],
        "env": [
          {
            "name": "MY_POD_IP",
            "valueFrom": {
              "fieldRef": {
                "fieldPath": "status.podIP"
              }
            }
          }
        ],
        "imagePullPolicy": "Always"
      }
    ]
  }
}
Exposing pods to other applications is done with a Service in Kubernetes. Once you've defined a Service, you can use environment variables related to that Service within your pods. Exposing the Pod directly is not a good idea, as Pods might get rescheduled.
When using, for example, a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: consul
  namespace: kube-system
  labels:
    name: consul
spec:
  ports:
  - name: http
    port: 8500
  - name: rpc
    port: 8400
  - name: serflan
    port: 8301
  - name: serfwan
    port: 8302
  - name: server
    port: 8300
  - name: consuldns
    port: 8600
  selector:
    app: consul
The related environment variable will be CONSUL_SERVICE_HOST, which holds the Service's cluster IP.
Anyway, it seems others have actually stopped relying on those environment variables, for the reasons described here.
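With cluster DNS available, the registrator container could also skip the service environment variables entirely and address the Service by name (a sketch; this assumes the consul Service above lives in kube-system and the default cluster.local domain):
"args": [
  "-internal",
  "-ip=$(MY_POD_IP)",
  "consul://consul.kube-system.svc.cluster.local:8500"
]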