Tie skaffold profile to namespace - kubernetes

Is there a way to tie a skaffold profile to a namespace? I'd like to make sure that dev, staging and prod deployments always go to the right namespace. I know that I can add a namespace to skaffold run like skaffold run -p dev -n dev but that's a little error prone. I'd like to make my builds even safer by tying profiles to namespaces.
I've tried adding the following to my skaffold.yaml based on the fact that there's a path in skaffold.yaml which is build/cluster/namespace but I suspect I'm misunderstanding the purpose of the cluster spec.
profiles:
- name: local
  patches:
  - op: replace
    path: /build/artifacts/0/cluster/namespace
    value: testing
but I get the error
❮❮❮ skaffold render -p local
FATA[0000] creating runner: applying profiles: applying profile local: invalid path: /build/artifacts/0/cluster/namespace
I've tried other variants of changing the cluster namespace but all of them fail.

TL;DR: skip straight to the "solution" section at the end.
Is there a way to tie a skaffold profile to a namespace? I'd like to
make sure that dev, staging and prod deployments always go to the
right namespace. I know that I can add a namespace to skaffold run
like skaffold run -p dev -n dev but that's a little error prone. I'd
like to make my builds even safer by tying profiles to namespaces.
First we need to clarify one thing: are we talking about namespaces in the build stage or the deploy stage of the pipeline? On the one hand you write that you want to make sure that dev, staging and prod deployments always go to the right namespace, so I assume you're interested in setting the appropriate namespace on your kubernetes cluster, into which the built images will eventually be deployed. However, later you also mention making builds safer by tying profiles to namespaces. Please correct me if I'm wrong, but my guess is that you mean namespaces at the deploy stage.
So answering your question: yes, it is possible to tie a skaffold profile to a specific namespace.
I've tried adding the following to my skaffold.yaml based on the
fact that there's a path in skaffold.yaml which is
build/cluster/namespace but I suspect I'm misunderstanding the
purpose of the cluster spec.
You're right, there is such a path in skaffold.yaml, but then your example should look as follows:
profiles:
- name: local
  patches:
  - op: replace
    path: /build/cluster/namespace
    value: testing
Note that the cluster element is at the same indentation level as artifacts. As you can read in the reference:
cluster: # beta describes how to do an on-cluster build.
and as you can see, most of its options are related to kaniko. It can be patched in specific profiles the same way as other skaffold.yaml elements, but I don't think this is the element you're really concerned about, so let's leave it for now.
Btw. you can easily validate your skaffold.yaml syntax by running:
skaffold fix
If every element is used properly and all the indentation levels are correct, it will print:
config is already latest version
otherwise it prints something like the error below:
FATA[0000] creating runner: applying profiles: applying profile prod: invalid path: /build/cluster/namespace
solution
You can make sure your deployments go to the right namespace by setting kubectl flags. This solution assumes you're using docker as the builder and kubectl as the deployer. Since skaffold supports plenty of different builders and deployers, the details may look quite different if you deploy with e.g. helm.
One very important caveat: the path must already be present in your general config, otherwise you won't be able to patch it in the profiles section. E.g. if you have the following patch in your profiles section:
profiles:
- name: prod
  patches:
  - op: replace
    path: /build/artifacts/0/docker/dockerfile
    value: DifferentNameForDockerfile
then the following section must already be present in your skaffold.yaml:
build:
  artifacts:
  - image: skaffold-example
    docker:
      dockerfile: Dockerfile # otherwise the pipeline will fail at the build stage
Going back to our namespaces: first we need to set default values in the deploy section:
deploy:
  kubectl:
    manifests:
    - k8s-pod.yaml
    flags:
      global: # additional flags passed on every command.
      - --namespace=default
      # apply: # additional flags passed on creations (kubectl apply).
      # - --namespace=default
      # delete: # additional flags passed on deletions (kubectl delete).
      # - --namespace=default
I set only global flags, but it is also possible to set them separately for the apply and delete commands.
In the next step we override the default value (it must already be present so that we can override it) in our profiles:
profiles:
- name: dev
  patches:
  - op: replace
    path: /deploy/kubectl/flags/global/0
    value: --namespace=dev
- name: staging
  patches:
  - op: replace
    path: /deploy/kubectl/flags/global/0
    value: --namespace=staging
- name: prod
  patches:
  - op: replace
    path: /deploy/kubectl/flags/global/0
    value: --namespace=prod
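Putting the pieces together, a minimal sketch of the complete skaffold.yaml could look like this (the apiVersion is illustrative and depends on your skaffold version; only the dev profile is shown):

apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
  - image: skaffold-example
deploy:
  kubectl:
    manifests:
    - k8s-pod.yaml
    flags:
      global:
      - --namespace=default
profiles:
- name: dev
  patches:
  - op: replace
    path: /deploy/kubectl/flags/global/0
    value: --namespace=dev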
Then we can run:
skaffold run --render-only --profile=prod
As we can see, our Pod is going to be deployed in the prod namespace of our kubernetes cluster:
Generating tags...
 - skaffold-example -> skaffold-example:v1.3.1-15-g11d005d-dirty
Checking cache...
 - skaffold-example: Found Locally
apiVersion: v1
kind: Pod
metadata:
  labels:
    app.kubernetes.io/managed-by: skaffold-v1.3.1
    skaffold.dev/builder: local
    skaffold.dev/cleanup: "true"
    skaffold.dev/deployer: kubectl
    skaffold.dev/docker-api-version: "1.39"
    skaffold.dev/profile.0: prod
    skaffold.dev/run-id: b83d48db-aec8-4570-8cb8-dbf9a7795c00
    skaffold.dev/tag-policy: git-commit
    skaffold.dev/tail: "true"
  name: getting-started
  namespace: prod
spec:
  containers:
  - image: skaffold-example:3e4840dfd2ad13c4d32785d73641dab66be7a89b43355eb815b85bc09f45c8b2
    name: getting-started
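After a real skaffold run -p prod you can additionally verify where the Pod landed:

kubectl get pods -n prod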

Related

How to modify a manifest file when using Ansible to install a service on a kubernetes cluster?

I am trying to automate making an HA cluster with Ansible.
Normally I have two options to install the load balancer (MetalLB): with a manifest or with helm.
I really like that helm has a --values option. This is useful because I can add tolerations to the MetalLB speakers; that way I can deploy them on the nodes that I don't want to deploy jobs on.
When making the playbook I want a way to deploy the MetalLB speakers with the toleration so they get deployed, but I don't want to install helm on one of the nodes.
When the playbook is run I can download the manifest file https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml, but now I want to be able to add the tolerations. How can I accomplish this without downloading the yaml file and editing it myself? Something like the --values option in helm would be nice.
https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/ lays out the general idea of how kustomize is going to work: take some bases, apply some transformations to them. In most cases the strategic merge behaves like folks expect, and is how the kubectl patch you mentioned behaves [1]. But dealing with array values in merges is tricky, so I have had better luck with JSON Patch's array add support, which is what we will use here:
# the contents of "kustomization.yaml" in the current directory
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
patches:
- target:
    version: v1
    group: apps
    kind: DaemonSet
    namespace: metallb-system
    name: speaker
  patch: |-
    - op: add
      path: /spec/template/spec/tolerations/-
      value: {"effect":"NoSchedule","key":"example.com/some-taint","operator":"Exists"}
Then, using kubectl kustomize . we see the result from applying that patch:
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
  operator: Exists
- effect: NoSchedule
  key: example.com/some-taint
  operator: Exists
Obviously, if you wanted to wholesale replace the tolerations, you may have better luck with the strategic merge flavor, but given that your question didn't specify and this case is the harder of the two, I started with it.
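For completeness, a sketch of that strategic-merge flavor (my untested assumption of how it could look; a strategic merge should replace the whole tolerations list, since that field has no merge key):

patches:
- patch: |-
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: speaker
      namespace: metallb-system
    spec:
      template:
        spec:
          tolerations:
          - effect: NoSchedule
            key: example.com/some-taint
            operator: Exists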
[1] I saw you mention kubectl patch, but that is for editing existing kubernetes resources: only after you had already deployed your metallb-native.yaml into the cluster would kubectl patch do anything for you. Kustomize is the helm replacement in that it is designed for the manifests to go into the cluster in the right state, versus fixing them up later.

How is deletion of K8s objects managed in skaffold deploy

I am starting to play with skaffold to handle continuous deployment in my Kubernetes cluster.
I have a bunch of yaml files that just wait to be applied with kubectl; at one point these are a.yaml and b.yaml:
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: skaffold-deploy
deploy:
  kubectl:
    manifests:
    - a.yaml
    - b.yaml
Now I make a change that requires deleting the objects (in terms of kubectl delete) described in b.yaml (I simply removed the file from my directory).
Is it possible to do so with skaffold?
If I skaffold deploy with this skaffold.yaml file:
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: skaffold-deploy
deploy:
  kubectl:
    manifests:
    - a.yaml
objects in b.yaml are not deleted nor updated.
I was looking for a way to do this in the official documentation but could not find anything related to it. skaffold delete seems to delete everything that was previously deployed with it.
Thanks a lot in advance

Jenkins deployment with Kustomize - how to add JENKINS_OPTS

I feel like this should be an already asked question, but I'm having difficulties finding a concrete answer. I'm deploying Jenkins through ArgoCD by defining the deployment via kustomize (kubernetes yaml). I want to inject a prefix to have Jenkins start on /jenkins, but I don't see a way to add it. I saw online that I can have an env tag, but no full example of this was available. Where would I inject a prefix value if using kubernetes yaml for a Jenkins deployment?
So, I solved this issue myself, and I'd like to post the answer as this is the top searched question when searching "Kustomize Jenkins_opts".
In your project, assuming you are using Kustomize to deploy Jenkins (This will work with any app deployment where you want to inject values when deploying), you should have a project structure similar to this:
ProjectA
|
|---> app.yaml           // contains the yaml definitions for your deployment
|---> kustomization.yaml // entry file for Kustomize to deploy your app
Add a new file to your project structure. Name it whatever you want, I named mine something like app-env.yaml. It will look something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  template:
    spec:
      containers:
      - name: jenkins
        env:
        - name: JENKINS_OPTS
          value: --prefix=/jenkins
This will specifically inject the --prefix flag to assign the prefix value for the URL to Jenkins on deployment to the Jenkins container. You can add multiple env variables. You can inject any value you want. My example is using Jenkins specific flags as this question centered around Jenkins, but it works for any app. Add this file to your Kustomize file from earlier:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: kustomize-
resources:
- app.yaml
patchesStrategicMerge:
- app-env.yaml
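To preview the rendered output before ArgoCD picks it up, you can build it locally (assuming kubectl 1.14+, which bundles kustomize):

kubectl kustomize .   # render the patched manifests to stdout
kubectl apply -k .    # or build and apply in one step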
When your app is deployed via K8s, it will run the startup process for your app, while passing the values defined in your env file. Hope this helps anyone else.

argo-cd how to test configuration in a "development" cluster before going live

I have a single argocd repository which contains all the configuration for the Kubernetes cluster. Now I want to work with PRs and only want to merge things which were tested on our continuous integration system before they can be merged. To do so, my idea was to have another cluster which I then deploy the branch to. Sadly, argocd defines the revision and targetRevision inside its yaml files – therefore, this is “hard-coded” inside git.
What is the best way to switch the revision, so I can “apply” any feature branch and still link it to a cluster?
Target
GIT - Branch master    -> prod-Cluster
    - Branch dev       -> dev-Cluster
    - Branch feature.. -> feature-Cluster using kind
ArgoCD Config
Application (root) -> ApplicationSet (app-of-appset) -> apps/* directory containing kustomization files
Example argo config for application set
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
spec:
  generators:
  - git:
      repoURL: https://github.com/argoproj-labs/applicationset.git
      revision: HEAD # <-- that's what I want to adjust for testing
      directories:
      - path: examples/git-generator-directory/cluster-addons/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj-labs/applicationset.git
        targetRevision: HEAD # <-- that's what I want to adjust for testing
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
I think the only way is to deploy different apps for each branch within the same repo. Look at the following info from ArgoCD documentation:
You can also store parameter overrides in an application specific file, if you are sourcing multiple applications from a single path in your repository.
The application specific file must be named .argocd-source-<appname>.yaml, where <appname> is the name of the application the overrides are valid for.
If there exists a non-application specific .argocd-source.yaml, parameters included in that file are merged first, and then the application specific parameters are merged, which can also contain overrides to the parameters stored in the non-application specific file.
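For illustration, a minimal sketch of such an override file, here named .argocd-source-my-app.yaml for a hypothetical application my-app; note that per the docs it carries tool parameters (kustomize/helm overrides), not the git revision itself:

kustomize:
  namePrefix: testing-
helm:
  parameters:
  - name: image.tag
    value: v1.0.1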
Or you can try to patch the application:
argocd app patch APPNAME --patch '[{"op": "replace", "path": "/spec/template/spec/source/targetRevision", "value": "HEAD"}]'
and then argocd app sync APPNAME
However everything gets difficult when it's hardcoded.

Restart pods when configmap updates in Kubernetes?

How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?
I know there's been talk about the ability to automatically restart pods when a config maps changes but to my knowledge this is not yet available in Kubernetes 1.2.
So what (I think) I'd like to do is a "rolling restart" of the deployment resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?
The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368 linked in the sibling answer) is to use Deployments, and consider your ConfigMaps to be immutable.
When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.
Not quite as quick as just editing the ConfigMap in place, but much safer.
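A minimal sketch of that pattern, with illustrative names (my-app, my-app-config-v2):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config-v2 # a new name on every change, never an in-place edit
data:
  LOG_LEVEL: debug
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        envFrom:
        - configMapRef:
            name: my-app-config-v2 # bumping this reference triggers the rollout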
Signalling a pod on config map update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can always write a custom pid1 that notices the configmap has changed and restarts your app.
You can also, e.g., mount the same config map in 2 containers, expose an http health check in the second container that fails if the hash of the config map contents changes, and shove that in as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
Of course if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.
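For example (assuming your pods carry an app=my-app label):

kubectl delete pod -l app=my-app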
The best way I've found to do it is run Reloader
It allows you to define configmaps or secrets to watch; when they get updated, a rolling update of your deployment is performed. Here's an example:
You have a deployment foo and a ConfigMap called foo-configmap. You want to roll the pods of the deployment every time the configmap is changed. You need to run Reloader with:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
Then specify this annotation in your deployment:
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
  name: foo
...
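If you'd rather not list the configmaps by name, Reloader's README also documents an auto mode that watches everything the deployment references:

kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  name: foo
...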
From the Helm 3 doc page:
Often times configmaps or secrets are injected as configuration files in containers. Depending on the application a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change the application keeps running with the old configuration resulting in an inconsistent deployment.
The sha256sum function can be used together with the include function to ensure a deployment's template section is updated if another spec changes:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]
In my case, for some reason, $.Template.BasePath didn't work but $.Chart.Name does:
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-app
      annotations:
        checksum/config: {{ include (print $.Chart.Name "/templates/" $.Chart.Name "-configmap.yaml") . | sha256sum }}
You can update a metadata annotation that is not relevant for your deployment; it will trigger a rolling update. For example:
spec:
  template:
    metadata:
      annotations:
        configmap-version: "1"
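Bumping such an annotation can also be scripted, e.g. with kubectl patch (a sketch; the deployment name foo is illustrative):

kubectl patch deployment foo \
  -p '{"spec":{"template":{"metadata":{"annotations":{"configmap-version":"2"}}}}}'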
If k8s > 1.15, doing a rollout restart worked best for me as part of CI/CD, with the app configuration path hooked up to a volume mount. A reloader plugin or setting restartPolicy: Always in the deployment manifest YAML did not work for me. No application code changes were needed; this worked for static assets as well as microservices.
kubectl rollout restart deployment/<deploymentName> -n <namespace>
Had this problem where the Deployment was in a sub-chart and the values controlling it were in the parent chart's values file. This is what we used to trigger restart:
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ tpl (toYaml .Values) . | sha256sum }}
Obviously this will trigger a restart on any value change, but it works for our situation. What was originally in the child chart would only work if the config.yaml in the child chart itself changed:
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
Consider using kustomize (or kubectl apply -k) and then leveraging its powerful configMapGenerator feature. For example, from https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# Just one example of many...
- name: my-app-config
  literals:
  - JAVA_HOME=/opt/java/jdk
  - JAVA_TOOL_OPTIONS=-agentlib:hprof
  # Explanation below...
  - SECRETS_VERSION=1
Then simply reference my-app-config in your deployments. When building with kustomize, it'll automatically find and update references to my-app-config with an updated suffix, e.g. my-app-config-f7mm6mhf59.
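A quick way to inspect the generated name (a sketch, run from the kustomization directory):

kubectl kustomize . | grep my-app-config
# e.g.: name: my-app-config-f7mm6mhf59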
Bonus, updating secrets: I also use this technique for forcing a reload of secrets (since they're affected in the same way). While I personally manage my secrets completely separately (using Mozilla sops), you can bundle a config map alongside your secrets, so for example in your deployment:
# ...
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: my-app:tag
        envFrom:
        # For any NON-secret environment variables. Name is automatically updated by Kustomize
        - configMapRef:
            name: my-app-config
        # Defined separately OUTSIDE of Kustomize. Just modify SECRETS_VERSION=[number] in the my-app-config ConfigMap
        # to trigger an update in both the config as well as the secrets (since the pod will get restarted).
        - secretRef:
            name: my-app-secrets
Then, just add a variable like SECRETS_VERSION to your ConfigMap like I did above. Each time you change my-app-secrets, increment the value of SECRETS_VERSION; it serves no purpose other than to trigger a change in the kustomize'd ConfigMap name, which also results in a restart of your pod.
I also banged my head around this problem for some time and wished to solve this in an elegant but quick way.
Here are my 20 cents:
The answer using labels mentioned here won't work if you are updating labels, but it would work if you always add labels. More details here.
The answer mentioned here is, in my opinion, the most elegant way to do this quickly, but it had the problem of handling deletes. I am adding on to that answer:
Solution
I am doing this in a Kubernetes Operator where only a single task is performed in one reconciliation loop:
1. Compute the hash of the config map data. Say it comes out as v2.
2. Create ConfigMap cm-v2 having labels version: v2 and product: prime if it does not exist, and RETURN. If it exists, GO BELOW.
3. Find all the Deployments which have the label product: prime but do not have version: v2. If such deployments are found, DELETE them and RETURN. ELSE GO BELOW.
4. Delete all ConfigMaps which have the label product: prime but do not have version: v2. ELSE GO BELOW.
5. Create Deployment deployment-v2 with labels product: prime and version: v2 and with config map cm-v2 attached, and RETURN. ELSE do nothing.
That's it! It looks long, but this could be the fastest implementation, and it is in principle consistent with treating infrastructure as cattle (immutability).
Also, the above solution works when your Kubernetes Deployment has the Recreate update strategy; the logic may require small tweaks for other scenarios.
How do I automatically restart Kubernetes pods and pods associated
with deployments when their configmap is changed/updated?
If you are consuming the configmap as environment variables, you have to use an external option:
Reloader
Kube watcher
Configurator
Kubernetes auto-reloads the config map if it's mounted as a volume (it won't work with a subPath mount, though).
When a ConfigMap currently consumed in a volume is updated, projected
keys are eventually updated as well. The kubelet checks whether the
mounted ConfigMap is fresh on every periodic sync. However, the
kubelet uses its local cache for getting the current value of the
ConfigMap. The type of the cache is configurable using the
ConfigMapAndSecretChangeDetectionStrategy field in the
KubeletConfiguration struct. A ConfigMap can be either propagated by
watch (default), ttl-based, or by redirecting all requests directly to
the API server. As a result, the total delay from the moment when the
ConfigMap is updated to the moment when new keys are projected to the
Pod can be as long as the kubelet sync period + cache propagation
delay, where the cache propagation delay depends on the chosen cache
type (it equals to watch propagation delay, ttl of cache, or zero
correspondingly).
Official document : https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
Simple example Configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: default
data:
  foo: bar
POD config
spec:
  containers:
  - name: configmaptestapp
    image: <Image>
    volumeMounts:
    - mountPath: /config
      name: configmap-data-volume
    ports:
    - containerPort: 8080
  volumes:
  - name: configmap-data-volume
    configMap:
      name: config
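To watch the auto-reload in action, you could update the ConfigMap in place and re-read the mounted file after the kubelet sync period (a sketch; substitute your real pod name):

kubectl create configmap config --from-literal=foo=baz --dry-run=client -o yaml | kubectl apply -f -
# wait up to the kubelet sync period + cache propagation delay, then:
kubectl exec <pod-name> -- cat /config/foo # eventually prints: baz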
Example: https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388
Adding the immutable property to the config map totally avoids the problem. Using config hashing helps with a seamless rolling update, but it does not help with a rollback. You can take a look at this open-source project, 'Configurator' (https://github.com/gopaddle-io/configurator.git). Configurator works as follows, using custom resources:
Configurator ties the deployment lifecycle to the configMap. When the config map is updated, a new version is created for that configMap. All the deployments that were attached to the configMap get a rolling update with the latest configMap version tied to them. When you roll back the deployment to an older version, it bounces to the configMap version it had before the rolling update.
This way you can maintain versions of the config map and facilitate rolling updates and rollbacks of your deployment along with the config map.
Another way is to stick it into the command section of the Deployment:
...
command: [ "echo", "
  option = value\n
  other_option = value\n
" ]
...
Alternatively, to make it more ConfigMap-like, use an additional Deployment that just hosts that config in its command section and executes kubectl create on it, while adding a unique 'version' to its name (like a hash of the content) and modifying all the deployments that use that config:
...
command: [ "/usr/sbin/kubectl-apply-config.sh", "
  option = value\n
  other_option = value\n
" ]
...
I'll probably post kubectl-apply-config.sh if it ends up working.
(don't do that; it looks too bad)