Spinnaker mounts volumes like this:
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  containers:
    ...
  volumes:
  - configMap:
      defaultMode: 420
      items:
      - key: config
        path: config
      name: kubectl-k8s-integration
    name: "1551221025832"
  - ...
I need the config file to be writable by everyone so that I can use kubectl config use-context in the container, i.e. I need defaultMode to be 666 instead of 420. There doesn't seem to be a place in the Spinnaker GUI to set this when defining volumes. What am I missing?
Based on https://github.com/spinnaker/spinnaker/issues/2118, it is not possible.
My workaround: I configured Spinnaker to mount the configmap volume at a different path, and added code to the container that automatically copies it to the expected folder on startup; the copy has write access.
E.g. say my configmap has a key named "config", and I was previously mounting the configmap as /home/user/.kube, so .kube/config was the file with permission 420 instead of 666. I now mount it as /home/user/root.kube instead, and the container runs the equivalent of cp -r /home/user/root.kube /home/user/.kube when it starts. Now /home/user/.kube/config is writable by the user.
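For illustration, a minimal sketch of that startup copy (the pod name, image, and final command are hypothetical; -L dereferences the symlinks that configmap mounts use):
apiVersion: v1
kind: Pod
metadata:
  name: copy-workaround-demo   # hypothetical
spec:
  containers:
  - name: app
    image: my-image            # hypothetical
    command: ["/bin/sh", "-c"]
    # copy the read-only configmap mount to a writable location, then start the app
    args: ["cp -rL /home/user/root.kube /home/user/.kube && exec run-my-app"]
    volumeMounts:
    - name: kube-config        # the configmap volume defined by Spinnaker
      mountPath: /home/user/root.kube
  volumes:
  - name: kube-config
    configMap:
      name: kubectl-k8s-integration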
I have the following setup:
An Azure Kubernetes cluster with some nodes where my application (consisting of multiple pods) is running.
I'm looking for a good way to make a project-specific configuration file (a few hundred lines) available for two of the deployed containers and their replicas.
The configuration file is different between my projects but the containers are not.
I'm looking for something like a read-only file mount in the containers, but haven't found a good way. I played around with persistent volume claims, but there seems to be no automatic way to place files apart from copying them in (including URI and secret management).
The best thing would be a possibility where kubectl uses a YAML file to access a specific folder on my developer machine and push my configuration file into the cluster.
ConfigMaps don't seem to be the proper way to do it (because the data has to be inside the YAML, and my file is big and changes often).
For volumes there seems to be no automatic way to place files inside them at creation time.
Can anybody guide me to a good solution that matches my situation?
You can use a configmap for this; its data does not have to be written inline in a YAML file. You can create a configmap directly from the content of your config file:
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini
and then bind it as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: mypod
    ...
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: my-config # the name of your configmap
Afterwards, your config is available in your pod under /config/my-config.ini.
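When config.ini changes, you can regenerate the configmap from the file and re-apply it in place (the --dry-run=client spelling assumes kubectl 1.18 or newer):
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini -o yaml --dry-run=client | kubectl apply -f -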
I am new to Kubernetes and trying to add root certs to the truststore.jks file in my existing secret. Using kubectl get secret mysecret -o yaml, I am able to view the details of the truststore file inside mysecret, but I am not sure how to replace it with a new truststore file or edit the existing one with the latest root certs. Can anyone help me with the correct command to do this using kubectl?
Thanks
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. There is official documentation at Kubernetes.io: Secrets.
Assuming that you created your secret by:
$ kubectl create secret generic NAME_OF_SECRET --from-file=keystore.jks
You can edit your secret by invoking command:
$ kubectl edit secret NAME_OF_SECRET
It will show you YAML definition similar to this:
apiVersion: v1
data:
  keystore.jks: HERE_IS_YOUR_JKS_FILE
kind: Secret
metadata:
  creationTimestamp: "2020-02-20T13:14:24Z"
  name: NAME_OF_SECRET
  namespace: default
  resourceVersion: "430816"
  selfLink: /api/v1/namespaces/default/secrets/jks-old
  uid: 0ce898af-8678-498e-963d-f1537a2ac0c6
type: Opaque
To change it to a new keystore.jks, you need to base64-encode the new file and paste it in place of the old one (HERE_IS_YOUR_JKS_FILE).
You can get a base64-encoded string by:
cat keystore.jks | base64 -w 0
(the -w 0 flag, available with GNU coreutils, prevents the output from being wrapped across lines)
After successfully editing your secret, it should give you a message:
secret/NAME_OF_SECRET edited
Also, you can look at this StackOverflow answer.
It shows a way to replace an existing configmap, but with a little modification it can also replace a secret!
Example below:
Create a secret with keystore-old.jks:
$ kubectl create secret generic my-secret --from-file=keystore-old.jks
Update it with keystore-new.jks:
$ kubectl create secret generic my-secret --from-file=keystore-new.jks -o yaml --dry-run=client | kubectl replace -f -
(on kubectl versions older than 1.18, use plain --dry-run instead of --dry-run=client)
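To check what the secret now contains, something like this should work (the jsonpath key assumes the file was added under the name keystore-new.jks):
$ kubectl get secret my-secret -o jsonpath='{.data.keystore-new\.jks}' | base64 -d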
Treating keystore.jks as a file allows you to use a volume mount to mount it to a specific location inside a pod.
Example YAML below creates a pod with secret mounted as volume:
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command:
    - sleep
    - "360000"
    volumeMounts:
    - name: secret-volume
      mountPath: "/etc/secret"
  volumes:
  - name: secret-volume
    secret:
      secretName: NAME_OF_SECRET
Take a specific look at:
volumeMounts:
- name: secret-volume
  mountPath: "/etc/secret"
volumes:
- name: secret-volume
  secret:
    secretName: NAME_OF_SECRET
This part will mount your secret inside the /etc/secret/ directory. It will be available there under the name keystore.jks.
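You can quickly verify the mount, for example:
$ kubectl exec ubuntu -- ls -l /etc/secret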
A word about mounted secrets:
Mounted Secrets are updated automatically
When a secret currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted secret is fresh on every periodic sync.
-- Kubernetes.io: Secrets.
Please let me know if you have any questions regarding that.
I am running a StatefulSet where I use volumeClaimTemplates. Everything's fine there.
I also have a configmap where I would like to essentially replace some entries with the name of the pod, for each pod that this config file is projected onto; e.g., if the configmap data is:
ThisHost=<hostname -s>
OtherConfig1=1
OtherConfig2=2
...
then for the StatefulSet pod named mypod-0 the config file should contain ThisHost=mypod-0, and ThisHost=mypod-1 for mypod-1.
How could I do this?
The pod's hostname is available by default inside the pod in the HOSTNAME environment variable.
It is possible to achieve what you want if you first:
mount the configmap (this creates a file containing the ThisHost=<hostname -s> placeholder in the pod's filesystem)
pass a substitution command to the pod when starting (something like sed "s/<hostname -s>/$HOSTNAME/g", run through a shell so that $HOSTNAME expands; note that configmap mounts are read-only, so the result has to be written to a writable path rather than edited in place with -i)
Basically, you mount the configmap and then replace the placeholder with the environment variable information that is available within the pod. It's just a substitution operation.
Look at the example below:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/sh", "-c"]
    # the shell expands $HOSTNAME; the result goes to a writable path
    args: ["sed \"s/<hostname -s>/$HOSTNAME/g\" /path/to/config/map/mount/point > /tmp/config"]
  restartPolicy: OnFailure
The exact paths and the sed expression might need some adjustments, but you get the idea.
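If the application expects the substituted file at a fixed path, one way around the read-only mount (a sketch; the configmap name, paths, and images are placeholders) is to render the file in an init container into a shared emptyDir:
apiVersion: v1
kind: Pod
metadata:
  name: subst-demo
spec:
  initContainers:
  - name: render-config
    image: debian
    command: ["/bin/sh", "-c"]
    # replace the placeholder with this pod's name (HOSTNAME) and
    # write the result to the shared, writable volume
    args: ["sed \"s/<hostname -s>/$HOSTNAME/g\" /template/config > /rendered/config"]
    volumeMounts:
    - name: config-template        # the configmap, mounted read-only
      mountPath: /template
    - name: rendered-config
      mountPath: /rendered
  containers:
  - name: app
    image: debian                  # placeholder for the real application image
    command: ["sleep", "360000"]
    volumeMounts:
    - name: rendered-config
      mountPath: /rendered         # the app reads /rendered/config, e.g. ThisHost=mypod-0
  volumes:
  - name: config-template
    configMap:
      name: my-config              # placeholder configmap holding the 'config' key
  - name: rendered-config
    emptyDir: {}
In a StatefulSet, HOSTNAME is the ordinal pod name (mypod-0, mypod-1, ...), so each replica renders its own value.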
Please let me know if that helped.
Sorry if this is a noob question:
I am creating a pod in a Kubernetes cluster using a pod definition YAML file.
This pod defines just one container. I'd like to ... copy a few files to a particular directory in the container.
sort of like in docker-compose:
volumes:
  - ./testHelpers/certs:/var/private/ssl/certs
Is it possible to do that at this point (the point of defining the pod)?
If not, what could my alternatives be?
PS: I understand that the sample from docker-compose is very different, since it maps a local directory to a directory in the container.
It's better to use volumes in the pod definition and initialize the pod before the main container runs; a sketch of that follows below.
Apart from this, you can also use a ConfigMap to store certs and other config files you need, and then access them in the container as volumes.
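For instance, a rough sketch of the initContainer approach (all names are placeholders; it assumes the certs were loaded into a configmap first, e.g. kubectl create configmap my-certs --from-file=testHelpers/certs):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-certs
spec:
  initContainers:
  - name: copy-certs
    image: busybox
    # copy the certs from the read-only configmap mount into a writable volume
    command: ["sh", "-c", "cp /certs-src/* /var/private/ssl/certs/"]
    volumeMounts:
    - name: certs-src
      mountPath: /certs-src
    - name: certs
      mountPath: /var/private/ssl/certs
  containers:
  - name: app
    image: my-app                  # placeholder for your application image
    volumeMounts:
    - name: certs
      mountPath: /var/private/ssl/certs
  volumes:
  - name: certs-src
    configMap:
      name: my-certs               # the placeholder configmap created above
  - name: certs
    emptyDir: {}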
You should create a config map; you can do it from files or from a directory:
kubectl create configmap my-config --from-file=configuration/
And then mount the config map as directory:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]   # keep the container running
    volumeMounts:
    - name: my-config
      mountPath: /etc/config
  volumes:
  - name: my-config
    configMap:
      name: my-config
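Each file from configuration/ becomes a key in the configmap and shows up as a file under /etc/config inside the pod; you can check it with:
kubectl exec configmap-pod -- ls /etc/config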
In K8S, what is the best way to execute scripts in a container (pod) once at deployment, which read from configuration files that are part of the deployment, and seed e.g. MongoDB once?
My project consists of k8s manifest files + configuration files.
I would like to be able to update the config files locally and then redeploy via kubectl or helm.
In docker-compose I could create a volume pointing at the directory where the config files reside and then in the command part execute bash -c commands reading from the config files in the volume. How is this best done in K8S? I don't want to include the configuration files in an image via a Dockerfile, forcing me to rebuild the image before redeploying again via kubectl or helm.
How is this best done in K8S?
There are several ways to skin a cat, but my suggestion would be to do the following:
Keep the configuration in a configMap and mount it as a separate volume. Such a map is kept as a k8s manifest, making all changes to it separate from the docker image: there is no need to rebuild the image or keep sensitive data within it. You can also use a secret instead of (or together with) a configMap in the same manner.
Use initContainers to do the initialization before the main container is brought online, covering your 'once on deployment' automatically; a sketch of this follows below. Alternatively (if the init operation is not repeatable) you can use a Job instead and start it when necessary.
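For the seeding part, here is a sketch of what such an initContainer could look like (the mongo image tag and the seed.sh key are my assumptions; it reuses the volume names from the example below):
spec:
  ...
  template:
    ...
    spec:
      initContainers:
      - name: seed-mongo
        image: mongo:4.2            # assumed image; use whatever client your DB needs
        # runs to completion before the main containers start;
        # keep the script idempotent, or use a Job if it must never re-run
        command:
        - /scripts/seed.sh
        volumeMounts:
        - name: volume-from-config-map-script   # same projected configmap volume as below
          mountPath: "/scripts"
          readOnly: true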
Here is an excerpt of an example we are using on a gitlab runner:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: ss-my-project
spec:
  ...
  template:
    ....
    spec:
      ...
      volumes:
      - name: volume-from-config-map-config-files
        configMap:
          name: cm-my-config-files
      - name: volume-from-config-map-script
        projected:
          sources:
          - configMap:
              name: cm-my-scripts
              items:
              - key: run.sh
                path: run.sh
                mode: 0755
      # if you need to run as non-root, here is how it is done:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        supplementalGroups: [999]
      containers:
      - image: ...
        name: ...
        command:
        - /scripts/run.sh
        ...
        volumeMounts:
        - name: volume-from-config-map-script
          mountPath: "/scripts"
          readOnly: true
        - mountPath: /usr/share/my-app-config/config.file
          name: volume-from-config-map-config-files
          subPath: config.file
      ...
You can, of course, mount several volumes from config maps or combine them into a single one, depending on the frequency of your changes and the affected parts. This is an example with two separately mounted configMaps just to illustrate the principle (and to mark the script executable), but you can use only one for all required files, put several files into one, or put a single file into each, as per your need.
An example of such a configMap looks like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-my-scripts
data:
  run.sh: |
    #!/bin/bash
    echo "Doing some work here..."
And an example of the configMap covering the config file looks like so:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-my-config-files
data:
  config.file: |
    ---
    # Some config.file (example name) required in project
    # in whatever format config file actually is (just example)
    ... (here is actual content like server.host: "0" or EFG=True or whatever)
Playing with single or multiple files in configMaps can yield the result you want; depending on your needs you can have as many or as few as you want.
In docker-compose I could create a volume pointing at the directory where the config files reside and then in the command part execute bash -c commands reading from the config files in the volume.
The k8s equivalent of this would be hostPath, but that would seriously hamper k8s's ability to schedule pods to different nodes. It might be OK on a single-node cluster (or while developing) to ease changing config files, but for actual deployments the approach above is advised.