Is there a way to set the values of some keys in a Knative service.yaml file using environment variables?
More detail
I am trying to deploy a Knative service to a Kubernetes cluster using GitLab CI. Some of the variables in my service.yaml file depend on the project and environment of the GitLab CI pipeline. Is there a way I can seamlessly plug those values into my service.yaml file without resorting to hacks like sed -i ...?
For example, given the following manifest, I want the $(KUBE_NAMESPACE), $(CI_ENVIRONMENT_SLUG), and $(CI_PROJECT_PATH_SLUG) values to be replaced by the correspondingly named environment variables.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: design
  namespace: "$(KUBE_NAMESPACE)"
spec:
  template:
    metadata:
      name: design-v1
      annotations:
        app.gitlab.com/env: "$(CI_ENVIRONMENT_SLUG)"
        app.gitlab.com/app: "$(CI_PROJECT_PATH_SLUG)"
    spec:
      containers:
        - name: user-container
          image: ...
      timeoutSeconds: 600
      containerConcurrency: 8
I don't think there is a great way to expand environment variables inside an existing YAML file, but if you don't want to use sed, you might be able to use envsubst:
envsubst < original.yaml > modified.yaml
You would just run this command before you use the yaml to expand the environment variables contained within it.
Also, I think you'll need your variables to use curly braces instead of parentheses, like this: ${KUBE_NAMESPACE}.
EDIT: You can also run it inline: kubectl apply -f <(envsubst < service.yaml)
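For example, a deploy job in .gitlab-ci.yml could run the substitution right before applying the manifest (a minimal sketch; the job name, stage, and image are placeholders, and it assumes the image ships both kubectl and envsubst):
deploy:
  stage: deploy
  image: registry.example.com/deploy-tools:latest  # placeholder: any image with kubectl + envsubst (gettext)
  script:
    - envsubst < service.yaml | kubectl apply -f -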
This is less a Knative issue than a Kubernetes limitation. Kubernetes allows some expansion, but not in annotations or in the namespace field. For example, you can do it in container env definitions:
containers:
  - env:
      - name: PODID
        valueFrom: ...
      - name: LOG_PATH
        value: /var/log/$(PODID)
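For completeness, here is a sketch of how that pattern usually looks with the downward API filling PODID (the container name and image are placeholders):
containers:
  - name: app          # placeholder name
    image: nginx       # placeholder image
    env:
      - name: PODID
        valueFrom:
          fieldRef:
            fieldPath: metadata.name   # downward API: the pod's own name
      - name: LOG_PATH
        value: /var/log/$(PODID)       # $(PODID) expands because PODID is defined earlier in this list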
If this is a CI/CD system like GitLab, the environment variables are already present in the shell environment, so a simple shell expansion will do. For example:
#!/bin/bash
echo -e "
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: design
  namespace: \"${KUBE_NAMESPACE}\"
spec:
  template:
    metadata:
      name: design-v1
      annotations:
        app.gitlab.com/env: \"${CI_ENVIRONMENT_SLUG}\"
        app.gitlab.com/app: \"${CI_PROJECT_PATH_SLUG}\"
    spec:
      containers:
        - name: user-container
          image: ...
      timeoutSeconds: 600
      containerConcurrency: 8
" | kubectl apply -f -
You can also use envsubst as a helper like mentioned in the other answer.
Related
I have a number of repeated values in my Kubernetes YAML file, and I am wondering if there is a way to store variables somewhere in the file, ideally at the top, that I can reuse further down,
sort of like
variables:
  - appName: &appname myapp
  - buildNumber: &buildno 1.0.23
that I can reuse further down like
labels:
  app: *appname
  tags.datadoghq.com/version: *buildno
containers:
  - name: *appname
    ...
    image: 123456.com:*buildno
if those are possible.
I know anchors are a thing in YAML; I just couldn't find anything on setting variables.
You can't do this across Kubernetes manifests, because you would need a processor to manipulate the YAML files. You can, however, share anchors within the same YAML manifest, like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: &cmname myconfig
  namespace: &namespace default
  labels:
    name: *cmname
    deployedInNamespace: *namespace
data:
  config.yaml: |
    [myconfig]
    example_field=1
This will result in:
apiVersion: v1
data:
  config.yaml: |
    [myconfig]
    example_field=1
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-25T10:06:27Z"
  labels:
    deployedInNamespace: default
    name: myconfig
  name: myconfig
  namespace: default
  resourceVersion: "147712"
  uid: 4039cea4-1e64-4d1a-bdff-910d5ff2a485
As you can see, the labels name and deployedInNamespace have the values that resulted from the anchor evaluation.
Based on your use case description, what you really need is to go down the Helm chart path and template your manifests. You can then leverage helper functions and easily customize these fields whenever you want. In my experience, when you have a use case like this, Helm is the way to go, because it will help you customize everything within your manifests when you later decide to change something else.
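As a minimal sketch of what that could look like (the chart layout and value names below are assumptions, not taken from your manifests), the repeated values move into values.yaml and the templates reference them:
# values.yaml
appName: myapp
buildNumber: "1.0.23"

# templates/deployment.yaml (fragment)
metadata:
  labels:
    app: {{ .Values.appName }}
    tags.datadoghq.com/version: {{ .Values.buildNumber | quote }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "123456.com:{{ .Values.buildNumber }}"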
I guess there is a similar question with an answer.
Please check below:
How to reuse an environment variable in a YAML file?
I have a file for a Job resource, which looks something like below. I need to run multiple instances of this definition, with separate arguments for each.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: abc-
spec:
  template:
    spec:
      containers:
        - name: abc
          image: index.docker.io/some/image:latest
          imagePullPolicy: Always
      imagePullSecrets:
        - name: some_secret
      restartPolicy: Never
  backoffLimit: 4
I can successfully run this job resource with
kubectl create -f my-job.yml
But I'm not sure how to pass arguments corresponding to
command:['arg1','arg2']
I think updating the file with my dynamic args for each request is just messy.
I tried kubectl patch -f my-job.yml --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/", "value": {"command": ["arg1","arg2"] } }]', which works well for a Deployment, but it doesn't work for a Job.
I tried
sudo kubectl run explicitly-provide-name-which-i-dont-want-to --image=index.docker.io/some/image:latest --restart=Never -- arg1 arg2, but for this I won't be able to pass the imagePullSecrets.
Kind of a generic answer here, just trying to guide you. In general, what you express is the need to 'parameterize' your Kubernetes deployment descriptors. There are different ways: some are simple, some are a bit hacky, and finally there is github.com/kubernetes/helm.
Personally, I would strongly suggest you install Helm on your cluster and then 'migrate' your job, or any vanilla Kubernetes deployment descriptor, into a Helm chart. This will give you the 'parameterization' power you need to spin up jobs in different ways and with different configs.
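As a rough sketch of that Helm route (the chart structure, value name, and release names below are assumptions for illustration), the arguments become a value you pass at install time:
# templates/job.yaml (fragment)
      containers:
        - name: abc
          image: index.docker.io/some/image:latest
          args: {{ .Values.jobArgs | toJson }}

# each run can then get its own arguments, e.g.:
#   helm install run1 ./mychart --set jobArgs='{arg1,arg2}'
#   helm install run2 ./mychart --set jobArgs='{arg3,arg4}'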
But if this sounds like too much for you, I can recommend something I was doing before I discovered Helm. Using things like bash and envsubst, I was manually templating parts of the YAML file with placeholders (e.g. environment variables) and then feeding the YAML to tools like envsubst, which replaced the placeholders with values from the environment. Ugly? Yes. Maintainable? Maybe for a couple of simple examples. Example of envsubst below:
apiVersion: batch/v1
kind: Job
metadata:
spec:
  template:
    spec:
      containers:
        - name: abc
          image: index.docker.io/some/image:latest
          imagePullPolicy: Always
      imagePullSecrets:
        - name: $SOME_ENV_VALUE
      restartPolicy: Never
  backoffLimit: 4
Hope that helps... but seriously, if you have time, consider checking out Helm.
I would also consider sourcing the command arguments from environment variables. These variables can then be provided by Helm, as javapapo has mentioned.
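For example (a sketch; the variable names and values are made up), the container's args can reference its own env entries with $(VAR) syntax, so only the env values need to change per run:
containers:
  - name: abc
    image: index.docker.io/some/image:latest
    env:
      - name: ARG1
        value: "foo"
      - name: ARG2
        value: "bar"
    args: ["$(ARG1)", "$(ARG2)"]   # $(VAR) is expanded from the env entries defined above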
I found this guide via Google:
Start Kubernetes job from command line with parameters
But the Helm chart solution suggested by javapapo is the best way, I guess.
I'm currently looking at GKE and some of the tutorials on Google Cloud. I was following this one: https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app (source code: https://github.com/GoogleCloudPlatform/gke-photoalbum-example)
This example has three deployments and one service. The tutorial has you deploy everything via the command line, which is fine and all works. I then started to look into how you could automate deployments via Cloud Build and discovered this:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments
These docs say you can create a build configuration for a trigger (such as pushing to a particular repo) and it will kick off the build. The sample YAML they show for this is as follows:
# deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
    - run
    - --filename=kubernetes-resource-file
    - --image=gcr.io/project-id/image:tag
    - --location=${_CLOUDSDK_COMPUTE_ZONE}
    - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:
kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.
image is the desired name of the container image, usually the application name.
Relating this back to the demo application repo, where all the services are in one repo, I believe I could supply a folder path to the filename parameter, such as the config folder from the repo: https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config
But the trouble here is that those resource files themselves have an image property in them, so I don't know how this would relate to the image property of the Cloud Build trigger YAML. I also don't know how you could then have multiple "image" properties in the trigger YAML where each deployment would have its own container image.
I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the kubernetes-resource-file should be in this instance.
But is it possible to automate the deployment of multiple deployments/services in this fashion when they're all bundled into one repo? Or has Google just oversimplified things for this tutorial, the reality being that most services would be in their own repo so as to be built/tested/deployed separately?
Either way, how would the image property relate to the fact that an image is already defined in the deployment YAML? e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: photoalbum-app
  name: photoalbum-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: photoalbum-app
  template:
    metadata:
      labels:
        name: photoalbum-app
    spec:
      containers:
        - name: photoalbum-app
          image: gcr.io/[PROJECT_ID]/photoalbum-app#[DIGEST]
          tty: true
          ports:
            - containerPort: 8080
          env:
            - name: PROJECT_ID
              value: "[PROJECT_ID]"
The command that you use is perfect for testing the deployment of one image. But when you work with Kubernetes (K8s), or its managed version on GCP (GKE), you usually never do this.
You use YAML files to describe your deployments, services, and all the other K8s objects that you want. When you deploy, you can run something like this:
kubectl apply -f <file.yaml>
If you have several files, you can use a wildcard if you want:
kubectl apply -f config/*.yaml
If you prefer to use only one file, you can separate the objects with ---:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec: ...
...
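To tie this back to the Cloud Build step from the question, here is a sketch only (it assumes the manifests live in the repo's config/ directory and that the trigger builds a single image; adjust for your setup): pointing --filename at the directory applies every manifest in it.
steps:
  # ... a build/push step for the image would normally come first ...
  - name: "gcr.io/cloud-builders/gke-deploy"
    args:
      - run
      - --filename=config/
      - --image=gcr.io/$PROJECT_ID/photoalbum-app:$COMMIT_SHA
      - --location=${_CLOUDSDK_COMPUTE_ZONE}
      - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}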
I have a pod that is defined by a deployment, and the YAML definition is stored in my codebase. There are times when I'd like to have a volume mount configured for the pod/container, so it would be great to have a script that could enable this.
One option would be to simply have a copy of the YAML definition and create/apply that while deleting the other, but it would be nicer to modify the existing one in place.
Is there a way to achieve this? Does kubectl edit have any flags that I'm missing to achieve this?
Use Declarative Management of Kubernetes Objects Using Kustomize. You already have a deployment.yaml manifest in your codebase. Now, move that to base/deployment.yaml, add a kustomization.yaml next to it that lists it as a resource, and also create an overlays/with-mount/ directory whose kustomization.yaml references the base and patches in the mount when you want it.
To deploy the base, you use
kubectl apply -k base/
and when you want to deploy with the mount overlaid, you use
kubectl apply -k overlays/with-mount/
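A minimal sketch of what those files could contain (the file names, resource name, and volume details are placeholders; newer kustomize versions accept the patches field shown here, older ones use patchesStrategicMerge):
# base/kustomization.yaml
resources:
  - deployment.yaml

# overlays/with-mount/kustomization.yaml
resources:
  - ../../base
patches:
  - path: add-mount.yaml

# overlays/with-mount/add-mount.yaml (strategic merge patch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # must match the Deployment name in base/deployment.yaml
spec:
  template:
    spec:
      containers:
        - name: my-app       # must match the container name in the base
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            name: app-config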
You want to deploy the pod differently under different conditions or environments. Helm allows you to do that. You could have one template for the pod and then pass in values that change based on the environment or the conditions you want to run under.
helm install --values ./k8s/charts/values1.yaml <chartname>
or
helm install --values ./k8s/charts/values2.yaml <chartname>
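As a sketch of how the two values files could differ (the value names and paths are illustrative, not from the question), the pod template guards the mount behind a flag:
# k8s/charts/values1.yaml
volumeMount:
  enabled: false

# k8s/charts/values2.yaml
volumeMount:
  enabled: true
  mountPath: /etc/config

# templates/pod.yaml (fragment, inside the container definition)
    {{- if .Values.volumeMount.enabled }}
      volumeMounts:
        - name: app-config-volume
          mountPath: {{ .Values.volumeMount.mountPath }}
    {{- end }}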
If this is only needed for templating, then using Helm may seem more involved. Another potential solution could be a Python script that manipulates the YAML.
Below is a quick sample that works with Python 2.7:
import yaml

document = """
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    name: myapp
spec:
  containers:
    - name: myapp
      image: nginx
"""

# toggle this to decide whether the volume and mount get added
volumemount = True

podSpec = yaml.safe_load(document)

if volumemount:
    # add the volume to the pod spec
    volumeSpecDoc = """
volumes:
  - name: app-config-volume
    configMap:
      name: app-config
"""
    volumeSpec = yaml.safe_load(volumeSpecDoc)
    podSpec['spec'].update(volumeSpec)

    # add the corresponding volumeMount to the first container
    containerVolumeMountDoc = """
volumeMounts:
  - name: app-config-volume
    mountPath: /etc/config
"""
    containerVolumeMount = yaml.safe_load(containerVolumeMountDoc)
    original = podSpec['spec']['containers'][0]
    original.update(containerVolumeMount)
    podSpec['spec']['containers'][0] = original

print yaml.dump(podSpec)
I'd like to elaborate on the reply made by @jonas.
One of the "advanced" commands for kubectl is kustomize.
It builds a kustomization target from a directory or a remote URL.
It has been available since v1.14.
It supports a few types of functionality:
generating resources from other sources
setting cross-cutting fields for resources
composing and customizing collections of resources
I know I can use kubectl edit to open up an editor and do this (then restart the pod), but it would be more applicable if our devs could simply do something like ./our_scripts/enable_mount.sh.
"composing and customizing collections of resources" feature looks like the one you may consider in this context.
it may be an overkill if you like to bring up single pod (single yaml file); however it is useful if you need to do something that is a bit more complex :)
$ ls -go
total 28
-rw-r--r-- 1 49 Dec 17 13:23 kustomization.yaml
-rw-r--r-- 1 119 Dec 17 13:22 pod_no_volume.yaml
-rw-r--r-- 1 275 Dec 17 13:22 pod_with_volume.yaml
-rw-r--r-- 1 160 Dec 17 13:19 service.yaml
$ cat kustomization.yaml
resources:
- pod_with_volume.yaml
- service.yaml
If you run kubectl kustomize <dir> (here <dir> stands for the directory your kustomization.yaml sits in; in my example it's just .), you can see that kubectl comes up with the new objects to be applied:
$ kubectl kustomize .
apiVersion: v1
kind: Service
metadata:
  labels:
    run: my-nginx
  name: my-nginx
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx-with-volume
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /etc/nginx/conf.d/
          name: nginx-path
  volumes:
    - hostPath:
        path: /tmp/somePath/conf.d
      name: nginx-path
As you can guess, it is merely a "concat" of:
pod_with_volume.yaml
service.yaml
The only peculiarity of this method is that the files you refer to must be in or below the directory the kustomization.yaml sits in.
As you can see, ./our_scripts/enable_mount.sh can merely call a properly formatted kubectl kustomize <dir> command.
Hope you'll find this answer useful :)