How to set a Kubernetes secret key name when using --from-file other than the filename?

Is there a way to set a kubernetes secret key name when using --from-file other than the filename?
I have a bunch of different configuration files that I use as secrets.json within my containers. However, to organize my files, none of them are named secrets.json on my host. For example secrets.dev.json or secrets.test.json. My apps only know to read in secrets.json.
When I create a secret with kubectl create secret generic my-app-secrets --from-file=secrets.dev.json, this results in the key name being secrets.dev.json and not secrets.json.
I'm mounting in my secret contents as a file (this is a carry-over from migrating from Docker swarm).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    spec:
      volumes:
        - name: my-secret
          secret:
            secretName: my-app-secrets
      containers:
        - name: my-app
          volumeMounts:
            - name: my-secret
              mountPath: "/run/secrets/secrets.json"
              subPath: "secrets.json"
Because the key is taken from the filename (secrets.dev.json), there is no secrets.json key, so the subPath mount gets turned into a directory instead. I end up with this mount path: /run/secrets/secrets.json/secrets.dev.json.
I'd like to be able to set the key name to secrets.json instead of using the filename of secrets.dev.json.

You can specify the key name with --from-file=[key=]source:
kubectl create secret generic my-app-secrets --from-file=secrets.json=secrets.dev.json
Here, secrets.json is the key name and secrets.dev.json is the source file.
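To double-check the key before creating anything in the cluster, you can render the Secret locally with a client-side dry run (a sketch; on older kubectl versions the flag is plain --dry-run):
kubectl create secret generic my-app-secrets \
  --from-file=secrets.json=secrets.dev.json \
  --dry-run=client -o yaml
# The data: section should now contain a single key named secrets.json,
# so the subPath: "secrets.json" mount resolves to a file instead of a directory.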

Related

Update config map object from service app

I have my microservice and ConfigMap deployed in cluster.
I need to update the value of a variable defined in the ConfigMap object on the fly. I can restart the microservice programmatically (e.g. re-create the service pod). The main thing is to keep the ConfigMap object in the cluster and only update the value in it.
My current idea is to define env variables in an external file:
key1: foo
key2: bar
and mount the file from the ConfigMap in the Pod spec:
spec:
  containers:
    - image: ...
      volumeMounts:
        - mountPath: /clusters-config
          name: config-volume
  volumes:
    - configMap:
        name: my-env-file
      name: config-volume
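For completeness, the my-env-file ConfigMap referenced above could be created straight from the external file, e.g. (a sketch; the source file name clusters-config.yaml is an assumption):
kubectl create configmap my-env-file --from-file=clusters-config.yaml
# This stores the whole file under a single key (the file name), which is what the
# volume mount above projects into /clusters-config.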
With this approach, what are the main pitfalls/cons I should consider?
Is there a better option/solution that does not use a volume-mounted ConfigMap but keeps the variables inside the ConfigMap manifest?

Is a secret mounted as a file editable from application code in a Kubernetes deployment?

I am mounting DB secrets as a file in my Kubernetes container. The DB secrets will be updated after the password expiry time, and I am using a polling mechanism to check whether they have been reset to the updated value. Is it possible to change the mounted secret file?
Is a secret mounted as a file editable from application code in Kubernetes?
The file that gets loaded into the container is mounted read-only, so it can't be edited from inside the container. But its content can be changed by either updating the Secret object or copying the file to a different location within the container.
I'm not sure how you mounted it; posting the YAML of your Pod configuration would help. For example, if you use hostPath to mount a file inside the container, every time you change the source file you see the changes inside the container:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - image: busybox
      name: test-container
      command: ["/bin/sh", "-c", "sleep 36000"]
      volumeMounts:
        - mountPath: /etc/db_pass
          name: password-volume
  volumes:
    - name: password-volume
      hostPath:
        path: /var/lib/original_password
        type: File
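As a side note, if the password were instead projected from a regular Secret volume (not hostPath and not subPath), you could rotate it by updating the Secret object in place; the kubelet refreshes the mounted file after a short sync delay. A sketch, with the secret and file names as assumptions:
kubectl create secret generic db-pass --from-file=password=./new_password \
  --dry-run=client -o yaml | kubectl apply -f -
# Note: files mounted via subPath do not receive such updates; the pod must be restarted.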

How to read .jks file into Kubernetes secret?

I have created a secret.yaml file as follows:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  truststore.jks: {{ (.Files.Glob "../trust.jks").AsSecrets | b64enc }}
I am referencing this secret from a template .yaml file in my Helm chart:
apiVersion: v1
kind: DeploymentConfig
spec:
  ...
  template:
    spec:
      ...
      containers:
        - name: "my-container"
          ...
          volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets
              readOnly: true
      volumes:
        - name: secrets
          secret:
            secretName: "my-secret"
When I run helm install command the pod gets created successfully, and the volume is also mounted, but if I check the truststore.jks in /mnt/secrets using cat command below is the output:
cat /mnt/secrets/truststore.jks
{}
I ran a dry-run to check the generated .yaml file; the secret is populated as below:
# Source: ag-saas/templates/tsSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  truststore.jks: e30=
How do I get the file into my secret?
There are a couple of things going on here:
.Files.Glob is intended to retrieve multiple files, e.g. .Files.Glob "credentials/*.jks". For a single file, .Files.Get will retrieve its contents directly.
You can only access files inside the chart directory; referencing .Files.Get "../trust.jks" won't work.
.Files.Glob.AsSecrets renders a list of files into the entire contents of the data: block; you just need the file content directly.
So your Secret should look like
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  truststore.jks: {{ .Files.Get "trust.jks" | b64enc }}
where in the last line I've used .Files.Get, I've not tried to refer to a "../..." path outside the chart, and I don't render it through ...AsSecrets.
You also will need to move or copy (not symlink) the keyset file into the chart directory for this to work.
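In other words, the chart directory would look roughly like this (mychart is a placeholder name; tsSecret.yaml is the template file from your dry-run output):
mychart/
  Chart.yaml
  trust.jks            # copied next to Chart.yaml so .Files.Get "trust.jks" can find it
  templates/
    tsSecret.yaml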
(In the current form, .Files.Glob won't match anything outside the chart directory, so you get an empty list of files. Then rendering that to .AsSecrets gets you an empty JSON object. You're using that string {} as the secret value, which gets correctly base64-encoded, but that's why {} comes out at the end.)
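A quick way to confirm the rendered Secret is no longer the empty object (a sketch; the release name and chart path are assumptions):
helm template my-release . --show-only templates/tsSecret.yaml
# truststore.jks should now be a long base64 string rather than e30= ({} encoded)
kubectl get secret my-secret -o jsonpath='{.data.truststore\.jks}' | base64 -d > /tmp/check.jks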

Unable to add ConfigMap data as environment variables into a pod; it says invalid variable names cannot be added as environment variables

I am trying to add config data as environment variables, but Kubernetes warns about invalid variable names. The configmap data contains JSON and property files.
spec:
  containers:
    - name: env-var-configmap
      image: nginx:1.7.9
      envFrom:
        - configMapRef:
            name: example-configmap
After deploying, I do not see them added to the process environment. Instead, I see a warning message like the one below:
Config map example-configmap contains keys that are not valid environment variable names. Only config map keys with valid names will be added as environment variables.
But I see that it works if I add a key directly as a single key-value pair:
env:
  # Define the environment variable
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
        name: special-config
        # Specify the key associated with the value
        key: special.how
I have thousands of key-value pairs in the ConfigMap data, and I cannot add them all as separate entries.
Is there any short syntax to add all values from a ConfigMap as environment variables?
While #P-Ekambaram already helped you out: I was getting the same error message, and it turned out that my issue was that I had named the ConfigMap ms-provisioning-broadsoft-adapter and was trying to use ms-provisioning-broadsoft-adapter as the key. As soon as I changed the key to ms_provisioning_broadsoft_adapter, i.e. underscores instead of hyphens, it happily let me add it to the application.
Hope this might help someone else who also runs into the error "invalid variable name cannot be added as environment variable".
A sample reference is given below.
Create the ConfigMap as shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Load the ConfigMap data as environment variables in the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
Output:
master $ kubectl logs dapi-test-pod | grep SPECIAL
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
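If you have thousands of keys and want to see exactly which ones the warning refers to, you can list the keys that are not valid C identifiers (a sketch; adjust the ConfigMap name to yours):
kubectl get configmap example-configmap \
  -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}' \
  | grep -E '[^A-Za-z0-9_]|^[0-9]'
# Every key printed here is skipped by envFrom; rename it (e.g. replace "-" and "." with "_").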
You should rename your variables.
In my case they were like this:
VV-CUSTOMER-CODE
VV-CUSTOMER-URL
I just renamed them to:
VV_CUSTOMER_CODE
VV_CUSTOMER_URL
and it works fine. OpenShift/Kubernetes works with underscores (_), but not with hyphens (-).
I hope this helps.
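For example, the ConfigMap keys themselves have to be renamed, not just the places that reference them (a sketch with illustrative values):
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  # Valid env var names: letters, digits and underscores, not starting with a digit
  VV_CUSTOMER_CODE: "1234"
  VV_CUSTOMER_URL: "https://example.com"
  # A key like VV-CUSTOMER-CODE (with hyphens) would be skipped by envFrom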

How to pass user credentials to (user-restricted) mounted volume inside Kubernetes Pod?

I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod.
The NFS folder /mount/protected has user access restrictions, i.e. only certain users can access this folder.
This is my Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
      secret:
        secretName: my-secret
  containers:
    - name: my-container
      image: <...>
      command: ["/bin/sh"]
      args: ["-c", "python /my-volume/test.py"]
      volumeMounts:
        - name: my-volume
          mountPath: /my-volume
When applying it, I get the following error:
The Pod "my-pod" is invalid:
* spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type
* spec.containers[0].volumeMounts[0].name: Not found: "my-volume"
I created my-secret according to the following guide:
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret
So basically:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: bXktYXBw
  password: PHJlZGFjdGVkPg==
But when I mount the folder /mount/protected with:
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
I get a permission denied error (python: can't open file '/my-volume/test.py': [Errno 13] Permission denied) when running a Pod that mounts this volume path.
My question is how can I tell my Pod that it should use specific user credentials to gain access to this mounted folder?
You're trying to tell Kubernetes that my-volume should get its content from both a host path and a Secret, and it can only have one of those.
You don't need to manually specify a host path. Kubernetes will figure out someplace appropriate to put the Secret content and it will still be visible on the mountPath you specify within the container. (Specifying hostPath: at all is usually wrong, unless you can guarantee that the path will exist with the content you expect on every node in the cluster.)
So change:
volumes:
  - name: my-volume
    secret:
      secretName: my-secret
    # but no hostPath
I eventually figured out how to pass user credentials to a mounted directory within a Pod by using the CIFS FlexVolume plugin for Kubernetes (https://github.com/fstab/cifs).
With this plugin, every user can pass their credentials to the Pod.
The user only needs to create a Kubernetes Secret (cifs-secret) storing the username/password and use this Secret for the mount within the Pod.
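A sketch of such a Secret, based on the plugin's documentation (the values are just the placeholders "username" and "password", base64-encoded):
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: dXNlcm5hbWU=
  password: cGFzc3dvcmQ=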
The volume is then mounted as follows:
(...)
volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//server/share"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"