How to mount a secret file from Jenkins into a Kubernetes deployment

I need to fetch a credentials file from Jenkins and load it into a Kubernetes deployment.
This file needs to be loaded at the beginning, before the pods launch since it's a configuration file.
My idea was, in the Jenkins part, to do something like:
withCredentials([file(credentialsId: credentials, variable: 'credentials')]) { }
and use that variable to create a secret, then use it in a deployment. Right now I can only do this with stringData, something like:
apiVersion: v1
kind: Secret
metadata:
  name: 'metadata'
  labels:
    project: 'example'
type: Opaque
stringData:
  TEST: '${TEST}'
I would need a way to do the same, but with a file, within a Secret, and then mount it in my deployment, I suppose with the volumeMounts option.
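A minimal sketch of that idea (the credential ID, secret name, image, and mount path below are placeholders, and it assumes kubectl is available and configured on the Jenkins agent):
withCredentials([file(credentialsId: 'my-credentials-file', variable: 'CREDS_FILE')]) {
    // Create or update the Secret straight from the file instead of templating stringData
    sh '''
      kubectl create secret generic my-app-credentials \
        --from-file=credentials.conf="$CREDS_FILE" \
        --dry-run=client -o yaml | kubectl apply -f -
    '''
}
The Deployment could then mount that Secret as a volume:
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          volumeMounts:
            - name: credentials
              mountPath: /etc/my-app
              readOnly: true
      volumes:
        - name: credentials
          secret:
            secretName: my-app-credentials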

This sounds a bit hacky in my opinion. I would rather use a configmap, which you can create with a custom Kubernetes operator (or search for an existing one). Otherwise, a sidecar container could help as well.
https://learnk8s.io/sidecar-containers-patterns

Related

ConfigMap that can reference current Namespace

I'm working with a Pod (Shiny Proxy) that talks to Kubernetes API to start other pods. I'm wanting to make this generic, and so don't want to hardcode the namespace (because I intend to have multiple of these, deployed probably as an OpenShift Template or similar).
I am using Kustomize to set the namespace on all objects. Here's what my kustomization.yaml looks like for my overlay:
bases:
  - ../../base
namespace: shiny
commonAnnotations:
  technical_contact: A Local Developer <somedev#example.invalid>
Running Shiny Proxy and having it start the pods I need it to (I have service accounts and RBAC already sorted) works, so long as in the configuration for Shiny Proxy I specify (hard-code) the namespace that the new pods should be generated in. The default namespace that Shiny Proxy will use is (unfortunately) 'default', which is inappropriate for my needs.
Currently for the configuration I'm using a ConfigMap (perhaps I should move to a Kustomize ConfigMapGenerator)
The ConfigMap in question is currently like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: shiny-proxy
data:
  application_yml: |
    ...
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: shiny
    ...
The above works, but 'shiny' is hardcoded; I would like to be able to do something like the following:
namespace: { .metadata.namespace }
But this doesn't appear to work in a ConfigMap, and I don't see anything in the documentation that would lead me to believe that it would, or that a similar thing is possible within the ConfigMap machinery.
Looking over the Kustomize documentation doesn't fill me with clarity either, particularly as the configuration file is essentially plain text (and not a YAML document as far as the ConfigMap is concerned). I've seen some use of vars, but https://github.com/kubernetes-sigs/kustomize/issues/741 leads me to believe that's a non-starter.
Is there a nice declarative way of handling this? Or should I be looking to have the templating smarts happen within the container, which seems kind of wrong to me? But I am still new to Kubernetes (and OpenShift).
I'm using CodeReady Containers 1.24 (OpenShift 4.7.2), which is essentially Kubernetes 1.20 (IIRC). I'd prefer to keep this fairly well aligned with Kubernetes without getting too OpenShift-specific, but this is still early days.
Thanks,
Cameron
If you don't want to hard-code specific data in your manifest file, you can consider using Kustomize plugins. In this case, the SedTransformer plugin may be useful. It is an example plugin, maintained and tested by the Kustomize maintainers, but not built into Kustomize.
As you can see in the Kustomize plugins guide:
Kustomize offers a plugin framework allowing people to write their own resource generators and transformers.
For more information on creating and using Kustomize plugins, see Extending Kustomize.
I will create an example to illustrate how you can use the sedtransformer plugin in your case.
Suppose I have a shiny-proxy ConfigMap:
NOTE: I don't specify a namespace, I use namespace: NAMESPACE instead.
$ cat cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shiny-proxy
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: NAMESPACE
    something_else:
      something: something
To use the sedtransformer plugin, we first need to create the plugin’s configuration file which contains a YAML configuration object:
NOTE: In argsOneLiner: I specify that NAMESPACE should be replaced with shiny.
$ cat sedTransformer.yaml
apiVersion: someteam.example.com/v1
kind: SedTransformer
metadata:
  name: sedtransformer
argsOneLiner: s/NAMESPACE/shiny/g
Next, we need to put the SedTransformer Bash script in the right place.
When loading, kustomize will first look for an executable file called
$XDG_CONFIG_HOME/kustomize/plugin/${apiVersion}/LOWERCASE(${kind})/${kind}
I create the necessary directories and download the SedTransformer script from GitHub:
NOTE: The downloaded script needs to be executable.
$ mkdir -p $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ cd $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ wget https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer
$ chmod a+x SedTransformer
Finally, we can check if it works as expected:
NOTE: To use this plugin, you need to provide the --enable-alpha-plugins flag.
$ tree
.
├── cm.yaml
├── kustomization.yaml
└── sedTransformer.yaml
0 directories, 3 files
$ cat kustomization.yaml
resources:
- cm.yaml
transformers:
- sedTransformer.yaml
$ kustomize build --enable-alpha-plugins .
apiVersion: v1
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: shiny
    something_else:
      something: something
kind: ConfigMap
metadata:
  name: shiny-proxy
Using the sedtransformer plugin can be especially useful if you want to replace NAMESPACE in a number of places.
I found the easiest way of doing this was to use an entrypoint script in the container that harvests the service account credentials (specifically the namespace file, via the downward API?) that get mounted into the container, and exposes that value as an environment variable.
export SHINY_K8S_NAMESPACE=`cat /run/secrets/kubernetes.io/serviceaccount/namespace`
cd /opt/shiny-proxy/working
exec java ${JVM_OPTIONS} -jar /opt/shiny-proxy/shiny-proxy.jar
The application configuration (shiny-proxy) supports the use of environment variables in its configuration file, so I can refer to the pod's namespace using ${SHINY_K8S_NAMESPACE}.
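For illustration, based on the application_yml structure shown in the question, the hard-coded namespace line could then become something like this (just a sketch; the surrounding keys are copied from the question, not checked against the Shiny Proxy docs):
container-backend: kubernetes
kubernetes:
  url: https://kubernetes.default.svc.cluster.local:443
  namespace: ${SHINY_K8S_NAMESPACE}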
Although, I've just now seen the idea of a fieldRef (from https://docs.openshift.com/enterprise/3.2/dev_guide/downward_api.html), which would be generalisable to things other than just the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-test-pod
spec:
  containers:
    - name: env-test-container
      image: gcr.io/google_containers/busybox
      command: ["/bin/sh", "-c", "env"]
      env:
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
  restartPolicy: Never

can I create secrets in openshift 4.3 from files?

I'm new to OpenShift and Kubernetes too (coming from the Docker Swarm world), and I'm looking to create secrets in OpenShift using a definition file. These secrets are generated from a file. To give an example of what I'm trying to do, let's say I have a file "apache.conf" and I want to add that file to containers as a secret mounted as a volume. In Swarm I can just write the following in the stack file:
my-service:
  secrets:
    - source: my-secret
      target: /home/myuser/
      mode: 0700

secrets:
  my-secret:
    file: /from/host/apache.conf
In OpenShift I'm looking to have something similar, like:
apiVersion: v1
kind: Secret
metadata:
  - name: my-secret
files:
  - "/from/host/apache.conf"
type: Opaque
The only way I've found that I can do something similar is by using Kustomize, and according to this post using Kustomize with OpenShift is cumbersome. Is there a better way of creating secrets from a file?
No, you can't.
The reason is that the Secret object is stored in the etcd database and is not bound to any host, so the object doesn't understand a host path.
You can create the secret from a file using the CLI, and then the file's content will be saved in the Secret object:
oc create secret generic my-secret --from-file=fil1=pullsecret_private.json
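To get the file into the container the way the Swarm example does, the Secret can then be mounted as a volume in the pod template, roughly like this (a sketch; the container name and image are placeholders, and it assumes the secret was created from apache.conf as above):
spec:
  template:
    spec:
      containers:
        - name: my-service
          image: my-image:latest
          volumeMounts:
            - name: apache-config
              mountPath: /home/myuser
              readOnly: true
      volumes:
        - name: apache-config
          secret:
            secretName: my-secret
            defaultMode: 0700   # mirrors mode: 0700 from the Swarm example
Each key in the Secret (here, the file name used with --from-file) shows up as a file under mountPath.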

Using sensitive environment variables in Kubernetes configMaps

I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the secret as an env var, and the ConfigMap will be mounted as a volume from within the spec:
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: foo
        key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it's kind of a controversial topic because the discussion kept going for many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasons and maybe find another approach for your environment to get these variables.
"OK, but this is Real Life, I need to make this work"
Then I recommend this workaround:
Import Data to Config Map from Kubernetes Secret
It makes the substitution with a shell in the entrypoint of the container.
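The gist of that workaround is an entrypoint that renders the template before starting the app, roughly like this (a sketch; it assumes envsubst is available in the image, the ConfigMap is mounted at /config-template, and the placeholder is written as ${PASSWORD}, which is the form envsubst understands, rather than $(PASSWORD)):
#!/bin/sh
# PASSWORD is injected by the secretKeyRef env entry in the pod spec.
# Render the mounted template into a writable path, then start the app (placeholder command).
envsubst < /config-template/application.properties > /tmp/application.properties
exec java -jar /app/app.jar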

Single Container Pod yaml

Forgive my ignorance but I can't seem to find a way of using a yaml file to deploy a single container pod (read: kind: Pod). It appears the only way to do it is to use a deployment yaml file (read: kind: Deployment) with a replica of 1.
Is there really no way?
Why I ask is because it would be nice to put everything in source control, including the one-offs like databases.
It would be awesome if there was a site with all the available options you can use in a YAML file (like Vagrant's Vagrantfile). There isn't one, right?
Thanks!
You should be able to find pod yaml files easily. For example, the documentation has an example of a Pod being created.
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
    - name: hello
      image: "ubuntu:14.04"
      command: ["/bin/echo", "hello", "world"]
One thing to note is that if a Deployment or a ReplicaSet can create a resource on your behalf, there is no reason why you couldn't create the same resource yourself.
kubectl get pod <pod-name> -o yaml should give you the YAML spec of a created pod.
There is also Kubernetes Charts, which serves as a repository of configuration for complex applications, using the Helm package manager. This would serve you well for deploying more complex applications.
Never mind, figured it out. It's possible. You just use the multi-container yaml file (example found here: https://kubernetes.io/docs/user-guide/pods/multi-container/) but only specify one container.
I'd tried it before but had inadvertently mistyped the yaml formatting.
Thanks rubber ducky!

Restart pods when configmap updates in Kubernetes?

How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?
I know there's been talk about the ability to automatically restart pods when a config map changes, but to my knowledge this is not yet available in Kubernetes 1.2.
So what (I think) I'd like to do is a "rolling restart" of the deployment resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?
The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368 linked in the sibling answer) is to use Deployments, and consider your ConfigMaps to be immutable.
When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.
Not quite as quick as just editing the ConfigMap in place, but much safer.
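In practice that pattern might look roughly like this (a sketch with placeholder names): create a new ConfigMap, e.g. my-config-v2, with the changed values, and bump the reference in the Deployment's pod template, which triggers an ordinary rolling update:
spec:
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: my-config-v2   # was my-config-v1; changing this rolls the pods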
Signalling a pod on config map update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can always write a custom pid1 that notices the configmap has changed and restarts your app.
You can also, for example, mount the same config map in two containers, expose an HTTP health check in the second container that fails if the hash of the config map contents changes, and use that as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
Of course if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.
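For example (a sketch; the label selector is a placeholder):
# Delete the pods matching the label; the controller recreates them, and the new pods pick up the updated ConfigMap.
kubectl delete pod -l app=my-app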
The best way I've found to do it is to run Reloader.
It allows you to define configmaps or secrets to watch; when they get updated, a rolling update of your deployment is performed. Here's an example:
You have a deployment foo and a ConfigMap called foo-configmap. You want to roll the pods of the deployment every time the configmap is changed. You need to run Reloader with:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
Then specify this annotation in your deployment:
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
  name: foo
...
From the Helm 3 docs page:
Often times configmaps or secrets are injected as configuration files in containers. Depending on the application a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change the application keeps running with the old configuration resulting in an inconsistent deployment.
The sha256sum function can be used together with the include function to ensure a deployments template section is updated if another spec changes:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]
In my case, for some reason, $.Template.BasePath didn't work but $.Chart.Name does:
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-app
      annotations:
        checksum/config: {{ include (print $.Chart.Name "/templates/" $.Chart.Name "-configmap.yaml") . | sha256sum }}
You can update a metadata annotation that is not relevant for your deployment; it will trigger a rolling update.
for example:
spec:
  template:
    metadata:
      annotations:
        configmap-version: "1"
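If you prefer to do this imperatively, bumping such an annotation with kubectl patch has the same effect (a sketch; the deployment name is a placeholder):
kubectl patch deployment my-deploy \
  -p '{"spec":{"template":{"metadata":{"annotations":{"configmap-version":"2"}}}}}'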
If Kubernetes > 1.15, then doing a rollout restart worked best for me as part of CI/CD, with the app configuration path hooked up to a volume mount. A reloader plugin or setting restartPolicy: Always in the deployment manifest YAML did not work for me. No application code changes were needed; this worked for both static assets and microservices.
kubectl rollout restart deployment/<deploymentName> -n <namespace>
Had this problem where the Deployment was in a sub-chart and the values controlling it were in the parent chart's values file. This is what we used to trigger restart:
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ tpl (toYaml .Values) . | sha256sum }}
Obviously this will trigger restart on any value change but it works for our situation. What was originally in the child chart would only work if the config.yaml in the child chart itself changed:
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
Consider using kustomize (or kubectl apply -k) and then leveraging its powerful configMapGenerator feature. For example, from: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  # Just one example of many...
  - name: my-app-config
    literals:
      - JAVA_HOME=/opt/java/jdk
      - JAVA_TOOL_OPTIONS=-agentlib:hprof
      # Explanation below...
      - SECRETS_VERSION=1
Then simply reference my-app-config in your deployments. When building with kustomize, it'll automatically find and update references to my-app-config with an updated suffix, e.g. my-app-config-f7mm6mhf59.
Bonus, updating secrets: I also use this technique for forcing a reload of secrets (since they're affected in the same way). While I personally manage my secrets completely separately (using Mozilla sops), you can bundle a config map alongside your secrets, so for example in your deployment:
# ...
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:tag
          envFrom:
            # For any NON-secret environment variables. Name is automatically updated by Kustomize
            - configMapRef:
                name: my-app-config
            # Defined separately OUTSIDE of Kustomize. Just modify SECRETS_VERSION=[number] in the my-app-config ConfigMap
            # to trigger an update in both the config as well as the secrets (since the pod will get restarted).
            - secretRef:
                name: my-app-secrets
Then, just add a variable like SECRETS_VERSION into your ConfigMap like I did above. Each time you change my-app-secrets, just increment the value of SECRETS_VERSION, which serves no other purpose except to trigger a change in the kustomize'd ConfigMap name, which in turn results in a restart of your pod.
I also banged my head against this problem for some time and wanted to solve it in an elegant but quick way.
Here are my 20 cents:
The answer using labels as mentioned here won't work if you are updating labels, but it would work if you always add new labels. More details here.
The answer mentioned here is, in my view, the most elegant way to do this quickly, but it has the problem of handling deletes. I am adding on to that answer:
Solution
I am doing this in one of the Kubernetes operators where only a single task is performed in one reconciliation loop.
1. Compute the hash of the ConfigMap data. Say it comes out as v2 (see the sketch after this list).
2. Create ConfigMap cm-v2 having labels version: v2 and product: prime if it does not exist, and RETURN. If it exists, GO BELOW.
3. Find all the Deployments which have the label product: prime but do not have version: v2. If such Deployments are found, DELETE them and RETURN. ELSE GO BELOW.
4. Delete all ConfigMaps which have the label product: prime but do not have version: v2. ELSE GO BELOW.
5. Create Deployment deployment-v2 with labels product: prime and version: v2, having the ConfigMap cm-v2 attached, and RETURN. ELSE do nothing.
That's it! It looks long, but this could be the fastest implementation, and it is in line with treating infrastructure as cattle (immutability).
Also, the above solution works when your Kubernetes Deployment has the Recreate update strategy. The logic may require small tweaks for other scenarios.
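A possible way to compute the version hash from step 1 outside of an operator would be something like this (a sketch; the file name is a placeholder):
# Hash the file that becomes the ConfigMap data; keep a short prefix as the version label value (e.g. v2).
sha256sum my-config.properties | cut -c1-8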
How do I automatically restart Kubernetes pods and pods associated
with deployments when their configmap is changed/updated?
If you are consuming the ConfigMap as environment variables, you have to use an external option:
Reloader
Kube watcher
Configurator
Kubernetes auto-reloads the ConfigMap if it's mounted as a volume (but not when it's mounted with subPath; that won't work).
When a ConfigMap currently consumed in a volume is updated, projected
keys are eventually updated as well. The kubelet checks whether the
mounted ConfigMap is fresh on every periodic sync. However, the
kubelet uses its local cache for getting the current value of the
ConfigMap. The type of the cache is configurable using the
ConfigMapAndSecretChangeDetectionStrategy field in the
KubeletConfiguration struct. A ConfigMap can be either propagated by
watch (default), ttl-based, or by redirecting all requests directly to
the API server. As a result, the total delay from the moment when the
ConfigMap is updated to the moment when new keys are projected to the
Pod can be as long as the kubelet sync period + cache propagation
delay, where the cache propagation delay depends on the chosen cache
type (it equals to watch propagation delay, ttl of cache, or zero
correspondingly).
Official documentation: https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
Simple example Configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: default
data:
  foo: bar
POD config
spec:
  containers:
    - name: configmaptestapp
      image: <Image>
      volumeMounts:
        - mountPath: /config
          name: configmap-data-volume
      ports:
        - containerPort: 8080
  volumes:
    - name: configmap-data-volume
      configMap:
        name: config
Example : https://medium.com/#harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388
Adding the immutable property to the config map totally avoids the problem. Using config hashing helps in a seamless rolling update but does not help in a rollback. You can take a look at this open-source project, 'Configurator': https://github.com/gopaddle-io/configurator.git. Configurator works as follows, using custom resources:
Configurator ties the deployment lifecycle with the configMap. When
the config map is updated, a new version is created for that
configMap. All the deployments that were attached to the configMap
get a rolling update with the latest configMap version tied to it.
When you roll back the deployment to an older version, it bounces back to the configMap version it had before doing the rolling update.
This way you can maintain versions of the config map and facilitate rolling updates and rollbacks of your deployment along with the config map.
Another way is to stick it into the command section of the Deployment:
...
command: [ "echo", "
option = value\n
other_option = value\n
" ]
...
Alternatively, to make it more ConfigMap-like, use an additional Deployment that will just host that config in the command section and execute kubectl create on it while adding a unique 'version' to its name (like calculating a hash of the content), and modify all the deployments that use that config:
...
command: [ "/usr/sbin/kubectl-apply-config.sh", "
option = value\n
other_option = value\n
" ]
...
I'll probably post kubectl-apply-config.sh if it ends up working.
(don't do that; it looks too bad)