In my Azure pipeline I download a Kubernetes deployment.yml file which contains the following content.
spec:
  imagePullSecrets:
    - name: some-secret
  containers:
    - name: container-name
      image: pathtoimage/data-processor:$(releaseVersion)
      imagePullPolicy: Always
      ports:
        - containerPort: 8088
      env:
My intention is to get the value from the pipeline variable $(releaseVersion), but it seems the Kubernetes task doesn't substitute pipeline variables in a configuration file.
I tried the inline configuration type and it works: if I copy the same configuration as inline content into the Kubernetes task, the variable is resolved.
Is there any way to make this work when the configuration comes from a file?
As I understand it, you want to replace a variable in the deployment.yml file content while the build executes.
You can use the Replace Tokens task for this (note: the token handled by this task is not the same thing as a PAT token). It supports replacing values in project files with environment variables when setting up VSTS Build/Release processes.
Install Replace Tokens from the Marketplace first, then add the Replace Tokens task to your pipeline.
Configure the path to the .yml file as the root directory. In my case, the target file is under the Drop folder. Then point the task at the file whose values you want to replace.
For more details about the available arguments, check the doc I referred to: https://github.com/qetza/vsts-replacetokens-task#readme
Note: Execute this task before the Deploy to Kubernetes task, so that the change is applied before the manifest reaches the Kubernetes cluster.
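For reference, in a YAML pipeline the two steps might look roughly like this. This is only a sketch assuming version 3 of the qetza Replace Tokens extension; the paths, the token prefix/suffix, and the Kubernetes task inputs (service connection settings omitted) are assumptions you would adjust to your setup:

steps:
  # Replace $(releaseVersion)-style tokens in the downloaded manifest
  - task: replacetokens@3
    inputs:
      rootDirectory: '$(System.DefaultWorkingDirectory)/drop'   # where deployment.yml was downloaded
      targetFiles: 'deployment.yml'
      tokenPrefix: '$('    # match the $(releaseVersion) syntax already used in the file
      tokenSuffix: ')'

  # Deploy only after the tokens have been replaced
  - task: Kubernetes@1
    inputs:
      command: 'apply'
      arguments: '-f $(System.DefaultWorkingDirectory)/drop/deployment.yml'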
There is also another sample blog post you can refer to.
You should have a step in your pipeline that substitutes environment variables inside the deployment template.
Something along the lines of:
- sed -i "s/\$(releaseVersion)/${RELEASE_VERSION_IN_BUILD_RUNNER}/" deployment.yml
- kubectl apply -f deployment.yml
You can set the variables in your pipeline. https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch
I have a set of K8s YAML descriptors as part of a project and I'm using kustomization to build them. I'm also using GitOps to do pull based deployments to my K8s cluster.
I now want to add some tests for my YAML files so that if there are any errors, I can prevent Flux from pulling my changes into the cluster. So basically I want something like unit tests for my YAML files. I came across Kubeval and it could serve my purpose well; I'm just not sure how to use it.
Anyone already tried this? I want to basically do the following:
As soon as I push some YAML files into my repo, Kubeval kicks in and validates all the YAML files in a set of folders that I specify
If all the YAML files pass lint validation, then I want to proceed to the next stage where I call kustomize to build the deployment YAML.
If the YAML files fail lint validation, then my CI fails and nothing should happen
Any ideas on how I could do this?
Since my project is hosted on GitHub, I was able to get what I want using GitHub Actions and kube-tools.
So basically here is what I did!
In my GitHub project, I added a workflow file under project-root/.github/workflows/main.yml
The contents of my main.yml are:
name: ValidateKubernetesYAML

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Kubeval
        uses: stefanprodan/kube-tools@v1.2.0
        with:
          kubectl: 1.16.2
          kustomize: 3.4.0
          helm: 2.16.1
          helmv3: 3.0.0
          command: |
            echo "Run kubeval"
            kubeval -d base,dev,production --force-color --strict --ignore-missing-schemas
Now when someone opens a pull request against master, this validation kicks in, and if it fails the changes do not get promoted into the master branch, which is what I want!
Here is the output of such a validation:
Run kubeval
WARN - Set to ignore missing schemas
PASS - base/application/plant-simulator-deployment.yaml contains a valid Deployment
PASS - base/application/plant-simulator-ingress-service.yaml contains a valid Ingress
PASS - base/application/plant-simulator-namespace.yaml contains a valid Namespace
PASS - base/application/plant-simulator-service.yaml contains a valid Service
WARN - base/kustomization.yaml containing a Kustomization was not validated against a schema
PASS - base/monitoring/grafana/grafana-deployment.yaml contains a valid Deployment
PASS - base/monitoring/grafana/grafana-service.yaml contains a valid Service
PASS - base/monitoring/plant-simulator-monitoring-namespace.yaml contains a valid Namespace
PASS - base/monitoring/prometheus/config-map.yaml contains a valid ConfigMap
PASS - base/monitoring/prometheus/prometheus-deployment.yaml contains a valid Deployment
PASS - base/monitoring/prometheus/prometheus-roles.yaml contains a valid ClusterRole
PASS - base/monitoring/prometheus/prometheus-roles.yaml contains a valid ServiceAccount
PASS - base/monitoring/prometheus/prometheus-roles.yaml contains a valid ClusterRoleBinding
PASS - base/monitoring/prometheus/prometheus-service.yaml contains a valid Service
PASS - dev/flux-patch.yaml contains a valid Deployment
WARN - dev/kustomization.yaml containing a Kustomization was not validated against a schema
PASS - production/flux-patch.yaml contains a valid Deployment
WARN - production/kustomization.yaml containing a Kustomization was not validated against a schema
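If you also want the second step from the question (building the deployment YAML with kustomize once validation passes), the command block could presumably be extended, since kube-tools already ships kustomize; the output file names here are just illustrative:

          command: |
            echo "Run kubeval"
            kubeval -d base,dev,production --force-color --strict --ignore-missing-schemas
            echo "Run kustomize build"
            kustomize build dev > dev-deployment.yaml
            kustomize build production > production-deployment.yaml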
Building off another one of my questions about tying profiles to namespaces, is there a way to tie profiles to clusters?
I've found a couple times now that I accidentally run commands like skaffold run -p local -n skeleton when my current kubernetes context is pointing to docker-desktop. I'd like to prevent myself and other people on my team from committing the same mistake.
I found that there's a way of specifying contexts, but that doesn't play nicely if developers use custom contexts like kubectl config set-context custom --user=custom --cluster=custom. I've also found a cluster field in the skaffold.yaml reference, but it seems that doesn't satisfy my need because it doesn't let me specify a cluster name.
After digging through the Skaffold documentation and performing several tests, I finally managed to find at least a partial solution to your problem, maybe not the most elegant one, but still functional. If I find a better way I will edit my answer.
Let's start from the beginning:
As we can read here:
When interacting with a Kubernetes cluster, just like any other Kubernetes-native tool, Skaffold requires a valid Kubernetes context to be configured. The selected kube-context determines the Kubernetes cluster, the Kubernetes user, and the default namespace. By default, Skaffold uses the current kube-context from your kube-config file.
This is quite an important point: we always start from the kube-context, and based on it a specific profile can be triggered, never the opposite.
Important to remember: the kube-context is not activated based on the profile; the opposite is true: the specific profile is triggered based on the current context (selected by kubectl config use-context).
Although we can overwrite default settings from our skaffold.yaml config file by patching (compare the related answer), it's not possible to overwrite the current-context based on a selected profile, e.g. when you select it manually as in your command:
skaffold -p prod
Here you are manually selecting specific profile. This way you bypass automatic profile triggering. As the documentation says:
Activations in skaffold.yaml: You can auto-activate a profile based on
- kubecontext (could be either a string or a regexp: prefixing with ! will negate the match)
- environment variable value
- skaffold command (dev/run/build/deploy)
Let's say we want to activate our profile based on the current kube-context only, to keep it simple; however, we can join different conditions together with AND and OR, like in the example here.
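Purely as an illustration (the context name and environment variable below are made up), conditions inside a single activation entry are ANDed, while separate entries are ORed:

profiles:
  - name: prod
    activation:
      # AND: both conditions in this single entry must match
      - kubeContext: prod-cluster
        command: run
      # OR: this second entry activates the profile on its own
      - env: ENVIRONMENT=prod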
Solution
I want to make sure that if I run skaffold -p prod, skaffold will fail if my kubecontext points to a cluster other than my production cluster.
I'm afraid it cannot be done this way. If you've already manually selected the prod profile with -p prod, you're bypassing the selection of the profile based on the current context: you have already chosen what should be done, regardless of where it will be done (the currently selected kube-context). In this situation Skaffold doesn't have any mechanism that would prevent you from running something on the wrong cluster. In other words, you're forcing a certain behaviour of your pipeline, and you agreed to it by selecting the profile. If you give up using the -p or --profile flags, certain profiles will never be triggered unless the currently selected kube-context triggers them automatically. Skaffold just won't let that happen.
Let's look at the following example showing how to make it work:
apiVersion: skaffold/v2alpha3
kind: Config
metadata:
  name: getting-started
build:
  artifacts:
    - image: skaffold-example
      docker:
        dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
  cluster:
deploy:
  kubectl:
    manifests:
      - k8s-pod.yaml
    flags:
      global: # additional flags passed on every command.
        - --namespace=default
  kubeContext: minikube
profiles:
  - name: prod
    patches:
      - op: replace
        path: /build/artifacts/0/docker/dockerfile
        value: Dockerfile
      - op: replace
        path: /deploy/kubectl/flags/global/0
        value: --namespace=prod
    activation:
      - kubeContext: minikube
        command: run
      - kubeContext: minikube
        command: dev
In the general part of our skaffold.yaml config we configured:
dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
As long as we name our Dockerfile "NonExistingDockerfile", every pipeline run will fail at its build stage. So by default all builds, no matter which kube-context is selected, are destined to fail. However, we can override this default behaviour by patching the specific fragment of skaffold.yaml in our profile section and setting the Dockerfile back to its standard name. This way every:
skaffold run
or
skaffold dev
command will succeed only if the current kube-context is set to minikube. Otherwise it will fail.
We can check it with:
skaffold run --render-only
having first set our current kube-context to the one that matches the activation section of our profile definition.
I've found a couple times now that I accidentally run commands like skaffold run -p local -n skeleton when my current kubernetes context is pointing to docker-desktop. I'd like to prevent myself and other people on my team from committing the same mistake.
I understand your point that it would be nice to have some built-in mechanism that prevents command line options from overriding the automatic profile activation configured in skaffold.yaml, but it looks like it currently isn't possible. If you don't specify -p local, Skaffold will always choose the correct profile based on the current context. It looks like good material for a feature request.
I was able to lock down the kubeContext for Skaffold both ways with:
skaffold dev --profile="dev-cluster-2" --kube-context="dev-cluster-2"
I also set in skaffold.yaml:
profiles:
  - name: dev-cluster-2
    activation:
      - kubeContext: dev-cluster-2
    deploy:
      kubeContext: dev-cluster-2
It seems that using this combination tells Skaffold explicitly enough not to use the currentContext of $KUBECONFIG. With this combination, if --kube-context is missing from the CLI parameters, the activation step in skaffold.yaml triggers an error message if the currentContext in $KUBECONFIG differs from the expected kubeContext of the activated Skaffold profile.
Hope this helps fellow developers who feel the pain when Skaffold unexpectedly switches the current Kubernetes cluster because the currentContext in $KUBECONFIG was changed as a side effect of, e.g., another terminal window.
I have a React application that is hosted in an nginx container using static files that are prepared in a build step. The problem I run into is that the API URL is then hard-coded in the js files, which becomes a problem when I want to deploy the application to different environments.
So basically I have put a config.js file with the localhost API URL variable in the public directory, which is then loaded in the <head> section of the index.html file. This works for the local environment. The problem comes when I want to deploy it to the test or production environment.
I have found out that it is possible to use a configMap with volume mounts, but that requires me to prepare one file for each environment in advance as I understand it. I want to be able to use the variables I have set in my Azure DevOps library to populate the API URL value.
So my question is if there is a way to replace the values in the config.js file in the nginx container using Kubernetes/Helm, or if I can make use of an Azure DevOps pipeline task to replace the content of a pre-prepared config.js file and mount that using Kubernetes?
Not sure if it is clear what I want to do, but hopefully you can understand it...
config.js
window.env = {
  API_URL: 'http://localhost:8080'
};
index.html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>My application</title>
    <!--
      config.js provides all environment specific configuration used in the client
    -->
    <script src="%PUBLIC_URL%/config.js"></script>
  </head>
...
What I ended up doing was setting it up like this:
First I added a configmap.yaml to generate the config.js file
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-frontend
data:
  config.js: |-
    window.env = {
      API_URL: "{{ .Values.service.apiUrl }}"
    }
.Values.service.apiUrl comes from the arguments provided to the "Package and deploy Helm charts" task: --set service.apiUrl=$(backend.apiUrl)
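Under the hood that task effectively runs something along these lines, with $(backend.apiUrl) already expanded by the pipeline; the release name, chart path, and URL are only illustrative:

helm upgrade --install frontend ./charts/frontend --set service.apiUrl=https://api.example.org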
Then I added a volume mount in the deployment.yaml to replace the config.js file in the nginx container
...
containers:
  ...
  volumeMounts:
    - name: config-frontend-volume
      readOnly: true
      mountPath: "/usr/share/nginx/html/config.js"
      subPath: "config.js"
volumes:
  - name: config-frontend-volume
    configMap:
      name: config-frontend
This did the trick and now I can control the variable from the Azure DevOps pipeline based on the environment I'm deploying to.
You can achieve this in several ways. Following are a few.
1. ConfigMap
The most effective and best way to achieve this, as one of the comments already suggests. You can do it with a single ConfigMap.
An example ConfigMap might look something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.definitionName }}-{{ .Values.envName }}-configmap
  namespace: {{ .Values.Namespace }}
data:
  API_URL: '{{ pluck .Values.envName .Values.API_URL | first }}'
An example values file in the Helm chart would look like this:
API_URL:
  dev: dev.mycompany.io
  staging: staging.mycompany.io
  test: test.mycompany.io
  prod: mycompany.io
Before helm install or helm upgrade runs, add a step to your Azure DevOps CI/CD pipeline that runs the following bash command; make sure the yq tool is installed on the agent (or use any comparable tool to do the same):
yq w -i values.yaml envName dev
This whole process sets API_URL in your rendered config to dev.mycompany.io, because I passed dev as envName to yq.
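As a sketch, an Azure DevOps YAML step for that could look roughly like this (the chart path is a placeholder):

# Set envName in values.yaml before the Helm deployment step runs
- bash: yq w -i ./charts/myapp/values.yaml envName dev
  displayName: 'Inject environment name into values.yaml'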
But if you'd rather not use the yq tool, you can instead keep a separate values file per environment and change the helm install step in your deployment accordingly:
helm install ./path --values ./dev-values.yaml
In that case (multiple values files, choosing which one to use at helm install time) your ConfigMap should look something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.definitionName }}-{{ .Values.envName }}-configmap
  namespace: {{ .Values.Namespace }}
data:
  API_URL: '{{ .Values.API_URL }}'
Well this is one way of doing things.
2. Manipulating the Dockerfile
You can also do this in the Dockerfile; a step like the following would replace the value in the file:
RUN sed -i "s/env/dev.mycompany.io/" /app/config.js
But as the URL is unique to each environment, you can pass the value in as a build argument using ARG:
ARG url
RUN sed -i "s/env/${url}/" /app/config.js
Then, in your build pipeline, the docker build task needs to pass the value of url as a build argument; in the task's arguments field add --build-arg url=dev.mycompany.io.
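Outside of a pipeline task, the equivalent plain docker command would be something like this (the image name and tag are only examples):

docker build --build-arg url=dev.mycompany.io -t my-frontend:dev .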
This is another way to inject values into your config.js file, but it also means four Docker builds (one per environment). Your agents would be busy building four different images for each git commit and queuing up other builds. If you suspect the command is not working in the Dockerfile, add RUN cat /app/config.js to it so you can debug what's happening and check whether the values are updated as expected.
Again, which approach is better is debatable. I personally prefer the first one because of the number of commits I make in an hour; on the other hand, with the second approach a URL change doesn't touch the codebase, you only update the docker build in your pipeline. So it's a trade-off.
There are other ways to do this as well. But these two are somewhat simplest to achieve.
Hope this is helpful.
In addition to the method @BinaryBullet provided, you can try another way that makes use of an Azure DevOps task to replace the content of the config.js file before the file is applied with Kubernetes.
Replace Tokens
The use of this task is very simple.
Step 1:
Configure your token prefix and suffix:
Step 2:
Then apply this token format in your config.js file wherever you want values to be replaced dynamically:
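Assuming the task's default #{ }# token format, the tokenized config.js might look something like this:

window.env = {
  API_URL: '#{apiurl}#'
};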
Step 3:
Do not forget to specify, on the Variables tab, the value you want passed into config.js:
Note: The variable name must be the same as the one you used in config.js. While the task runs, it injects the corresponding variable value into the config.js file based on the replacement format #{}# and the matching variable name.
For example, I use apiurl as the token name, so here I add a variable named apiurl and give it the value I want substituted into config.js at build time.
This Replace Tokens task is not limited to a particular file type; it can be used with various kinds of files. See my other similar answer: #1.
Hope this helps you achieve what you expect.
I want to use the pre-install hook of helm,
https://github.com/helm/helm/blob/master/docs/charts_hooks.md
In the docs it's written that you need to use an annotation, which is clear, but what is not clear is how to put it all together.
apiVersion: ...
kind: ....
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
In my case I need to execute a bash script which creates some environment variables. Where should I put this pre-install script inside my chart so that Helm runs it before installation?
I guess I need to create a file called pre-install.yaml inside the templates folder, is that right? If yes, where should I put the commands which create the env variables during the installation of the chart?
UPDATE
The commands I need to execute in the pre-install phase are along the lines of:
export DB=prod_sales
export DOMAIN=www.test.com
export THENANT=VBAS
A Helm hook launches some other Kubernetes object, most often a Job, which will launch a separate Pod. Environment variable settings will only affect the current process and children it launches later, in the same Docker container, in the same Pod. That is: you can't use mechanisms like Helm pre-install hooks or Kubernetes initContainers to set environment variables like this.
If you just want to set environment variables to fixed strings like you show in the question, you can directly set that in a Pod spec. If the variables are, well, variable, but you don't want to hard-code them in your Pod spec, you can also put them in a ConfigMap and then set environment variables from that ConfigMap. You can also use Helm templating to inject settings from install-time configuration.
env:
  - name: A_FIXED_VARIABLE
    value: A fixed value
  - name: SET_FROM_A_CONFIG_MAP
    valueFrom:
      configMapKeyRef:
        name: the-config-map-name
        key: someKey
  - name: SET_FROM_HELM
    value: {{ .Values.environmentValue | quote }}
With the specific values you're showing, the Helm values path is probably easiest. You can run a command like
helm install --set db=prod_sales --set domain=www.test.com ...
and then have access to .Values.db, .Values.domain, etc. in your templates.
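For illustration, a minimal sketch of how the template could consume them (the env block goes into whatever pod spec your chart renders):

env:
  - name: DB
    value: {{ .Values.db | quote }}
  - name: DOMAIN
    value: {{ .Values.domain | quote }}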
If the value is really truly dynamic and you can't set it any other way, you can use a Docker entrypoint script to set it at container startup time. In this answer I describe the generic-Docker equivalents to this, including the entrypoint script setup.
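For example, a generic entrypoint script might look roughly like this; compute-db-name is a purely hypothetical stand-in for whatever actually determines the value at runtime:

#!/bin/sh
# entrypoint.sh: compute dynamic settings at container startup, then run the real command
DB="$(compute-db-name)"   # hypothetical helper that determines the value at runtime
export DB
exec "$@"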
You can take as an example the built-in Helm chart from the arc* project; here is the source code.
*Arc is a kind of bootstrapper for Laravel projects that can Dockerize/Kubernetize existing apps written in this PHP framework.
You can place the env section in your pod YAML under the templates folder. That will be the easiest option.
We have an app that runs on GKE Kubernetes which expects an auth URL (to which the user will be redirected via the browser) to be passed as an environment variable.
We are using different namespaces per environment
So our current pod config looks something like this:
env:
  - name: ENV
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: AUTH_URL
    value: https://auth.$(ENV).example.org
And all works amazingly: we can have as many dynamic environments as we want, we just run kubectl apply -f config.yaml and it works flawlessly without changing a single config file and without any third-party scripts.
Now for production we want to use a different domain, so the general pattern https://auth.$(ENV).example.org does not work anymore.
What options do we have?
Since configs are in git repo, create a separate branch for prod environment
Have a default ConfigMap and a specific one for the prod environment, and apply it via some script (if prod-config.yaml exists then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
Other...?
This seems like an ideal opportunity to use helm!
It's really easy to get started, simply install tiller into your cluster.
Helm gives you the ability to create "charts" (which are like packages) which can be installed into your cluster. You can template these really easily. As an example, you might have your config.yaml look like this:
env:
  - name: AUTH_URL
    value: {{ .Values.auth.url }}
Then, within the helm chart you have a values.yaml which contains defaults for the url, for example:
auth:
  url: https://auth.namespace.example.org
You can use the --values option with helm to specify per environment values.yaml files, or even use the --set flag on helm to override them when using helm install.
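For example, with the Helm 2 style CLI mentioned above (chart path and file names are placeholders):

# per-environment values file
helm install ./my-chart --values values.production.yaml

# or override a single value on the command line
helm install ./my-chart --set auth.url=https://auth.example.org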
Take a look at the documentation here for information about how values and templating work in Helm. It seems perfect for your use case.
jaxxstorms' answer is helpful, I just want to add what that means to the options you proposed:
Since configs are in git repo, create a separate branch for prod environment.
I would not recommend separate branches in GIT since the purpose of branches is to allow for concurrent editing of the same data, but what you have is different data (different configurations for the cluster).
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if prod-config.yaml exists then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Using Helm will solve this more elegantly. Instead of a script you use helm to generate the different files for different environments. And you can use kubectl (using the final files, which I would also check into GIT btw.).
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
This is a matter of opinion but I would recommend in general to split up the deployments by applications and technologies. For example when I deploy a cluster that runs 3 different applications A B and C and each application requires a Nginx, CockroachDB and Go app-servers then I'll have 9 configuration files, which allows me to separately deploy or update each of the technologies in the app context. This is important for allowing separate deployment actions in a CI server such as Jenkins and follows general separation of concerns.
Other...?
See jaxxstorms' answer about Helm.