Tie skaffold profile to cluster - kubernetes

Building off another one of my questions about tying profiles to namespaces, is there a way to tie profiles to clusters?
I've found a couple times now that I accidentally run commands like skaffold run -p local -n skeleton when my current kubernetes context is pointing to docker-desktop. I'd like to prevent myself and other people on my team from committing the same mistake.
I found that there's a way of specifying contexts, but that doesn't play nicely if developers use custom contexts like kubectl set-context custom --user=custom --cluster=custom. I've also found a cluster field in the skaffold.yaml reference, but it seems that doesn't satisfy my need because it doesn't let me specify a cluster name.

After digging through the skaffold documentation and performing several tests, I finally managed to find at least a partial solution to your problem. It may not be the most elegant one, but it's functional. If I find a better way, I will edit my answer.
Let's start from the beginning:
As we can read here:
When interacting with a Kubernetes cluster, just like any other
Kubernetes-native tool, Skaffold requires a valid Kubernetes context
to be configured. The selected kube-context determines the Kubernetes
cluster, the Kubernetes user, and the default namespace. By default,
Skaffold uses the current kube-context from your kube-config file.
This is quite an important point: we actually start from the kube-context, and based on it we can trigger a specific profile, never the opposite.
Important to remember: the kube-context is not activated based on the profile; the opposite is true: the specific profile is triggered based on the current context (selected by kubectl config use-context).
Although we can overwrite default settings from our skaffold.yaml config file by patching (compare the related answer), it's not possible to overwrite the current-context based on the selected profile, e.g. manually as in your command:
skaffold -p prod
Here you are manually selecting a specific profile; this way you bypass automatic profile triggering. As the documentation says:
Activations in skaffold.yaml: You can auto-activate a profile based on
kubecontext (could be either a string or a regexp: prefixing with ! will negate the match)
environment variable value
skaffold command (dev/run/build/deploy)
Let's say we want to activate our profile based on the current kube-context only, to keep things simple; however, we can join different conditions together with AND and OR, as illustrated in the sketch below.
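For illustration, a minimal sketch of how conditions combine (the profile name and values here are made up): criteria listed inside one activation entry must all match (AND), while separate activation entries are alternatives (OR):

profiles:
- name: dev
  activation:
  - kubeContext: minikube   # AND: both this context...
    command: dev            # ...and the dev command must match
  - env: ENV=development    # OR: this entry alone also activates the profile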
Solution
I want to make sure that if I run skaffold -p prod skaffold will fail
if my kubecontext points to a cluster other than my production
cluster.
I'm afraid it cannot be done this way. If you've already manually selected the prod profile with -p prod, you're bypassing the selection of a profile based on the current context: you've already chosen what should be done, regardless of where it will be done (the currently selected kube-context). In this situation skaffold doesn't have any mechanism that would prevent you from running something on the wrong cluster; by selecting the profile you're forcing a certain behaviour of your pipeline and implicitly agreeing to the consequences. If you give up using the -p or --profile flags, certain profiles will never be triggered unless the currently selected kube-context activates them automatically; skaffold just won't let that happen.
Let's look at the following example showing how to make it work:
apiVersion: skaffold/v2alpha3
kind: Config
metadata:
  name: getting-started
build:
  artifacts:
  - image: skaffold-example
    docker:
      dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
  cluster:
deploy:
  kubectl:
    manifests:
    - k8s-pod.yaml
    flags:
      global: # additional flags passed on every command.
      - --namespace=default
  kubeContext: minikube
profiles:
- name: prod
  patches:
  - op: replace
    path: /build/artifacts/0/docker/dockerfile
    value: Dockerfile
  - op: replace
    path: /deploy/kubectl/flags/global/0
    value: --namespace=prod
  activation:
  - kubeContext: minikube
    command: run
  - kubeContext: minikube
    command: dev
In the general part of our skaffold.yaml config we configured:
dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
As long as our Dockerfile is named "NonExistingDockerfile", every pipeline will fail at its build stage. So by default, all builds are destined to fail, no matter which kube-context is selected. However, we can override this default behaviour by patching that specific fragment of skaffold.yaml in our profiles section, setting the Dockerfile back to its standard name. This way every:
skaffold run
or
skaffold dev
command will succeed only if the current kube-context is set to minikube. Otherwise it will fail.
We can check it with:
skaffold run --render-only
after first setting our current kube-context to the one that matches what is present in the activation section of our profile definition.
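For example, assuming the config above and a second context named docker-desktop:

kubectl config use-context minikube
skaffold run --render-only   # prod profile activates; rendered output reflects the patches (e.g. --namespace=prod)

kubectl config use-context docker-desktop
skaffold run --render-only   # prod profile is not activated; the failing defaults (NonExistingDockerfile) remain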
I've found a couple times now that I accidentally run commands like
skaffold run -p local -n skeleton when my current kubernetes context
is pointing to docker-desktop. I'd like to prevent myself and other
people on my team from committing the same mistake.
I understand your point that it would be nice to have some built-in mechanism that prevents overriding this automatic profile activation configured in skaffold.yaml with command line options, but it looks like it currently isn't possible. If you don't specify -p local, skaffold will always choose the correct profile based on the current context. It looks like good material for a feature request.

I was able to lock down the kubeContext for Skaffold both ways with:
skaffold dev --profile="dev-cluster-2" --kube-context="dev-cluster-2"
I also set in skaffold.yaml:
profiles:
- name: dev-cluster-2
  activation:
  - kubeContext: dev-cluster-2
  deploy:
    kubeContext: dev-cluster-2
It seems that using this combination tells skaffold explicitly enough not to use the currentContext of $KUBECONFIG. With this combination, if --kube-context is missing from the CLI parameters, the activation step in skaffold.yaml will trigger an error message whenever the currentContext in $KUBECONFIG differs from the expected kubeContext of the activated Skaffold profile.
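To illustrate both directions (context names as in the snippet above; this is a sketch of the behaviour described, not verbatim output):

# currentContext in $KUBECONFIG points elsewhere, e.g. docker-desktop:
skaffold dev --profile="dev-cluster-2"
# -> errors out, because the activation kubeContext doesn't match the current context

# pin both the profile and the context explicitly:
skaffold dev --profile="dev-cluster-2" --kube-context="dev-cluster-2"
# -> deploys to dev-cluster-2 regardless of currentContext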
Hope this helps fellow developers who feel the pain when skaffold randomly switches the current kubernetes cluster because the currentContext in $KUBECONFIG was changed as a side effect from, e.g., another terminal window.

Related

What is the root password of postgresql-ha/helm?

Installed PostgreSQL in AWS EKS through Helm: https://bitnami.com/stack/postgresql-ha/helm
I need to perform some tasks in the deployments with root rights, but when I run
su -
it asks for a password that I don't know and can't find, and accessing the folders I need, such as /opt/bitnami/postgresql/, gives:
Error: Permission denied
How do I get the necessary rights, or what is the password?
Image attached: bitnami root error
I need [...] to place the .so libraries I need for postgresql in [...] /opt/bitnami/postgresql/lib
I'd consider this "extending" rather than "configuring" PostgreSQL; it's not a task you can do with a Helm chart alone. On a standalone server it's not something you could configure with only a text editor, for example, and while the Bitnami PostgreSQL-HA chart has a pretty wide swath of configuration options, none of them allow providing extra binary libraries.
The first step to doing this is to create a custom Docker image that includes the shared library. That can start FROM the Bitnami PostgreSQL image this chart uses:
ARG postgresql_tag=11.12.0-debian-10-r44
FROM bitnami/postgresql:${postgresql_tag}
# assumes the shared library is in the same directory as
# the Dockerfile
COPY whatever.so /opt/bitnami/postgresql/lib
# or RUN curl ..., or RUN apt-get, or ...
#
# You do not need EXPOSE, ENTRYPOINT, CMD, etc.
# These come from the base image
Build this image and push it to a Docker registry, the same way you do for your application code. (In a purely local context you might be able to docker build the image in minikube's context.)
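Assuming the registry and repository names used in the example values below (placeholders; substitute your own):

docker build -t registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44 .
docker push registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44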
When you deploy the chart, it has options to override the image it runs, so you can point it at your own custom image. Your Helm values could look like:
postgresqlImage:
  registry: registry.example.com:5000
  repository: infra/postgresql
  tag: 11.12.0-debian-10-r44
# `docker run registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44`
and then you can provide this file via the helm install -f option when you deploy the chart.
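For example (Helm 3 syntax; the release name and values file name are placeholders, and this assumes you've added the bitnami chart repository):

helm install pg-ha bitnami/postgresql-ha -f custom-image-values.yaml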
You should almost never try to manually configure a Kubernetes pod by logging into it with kubectl exec. It is extremely routine to delete pods, and in many cases Kubernetes does this automatically (if the image tag in a Deployment or StatefulSet changes; if a HorizontalPodAutoscaler scales down; if a Node is taken offline); in these cases your manual changes will be lost. If there are multiple replicas of a pod (with an HA database setup there almost certainly will be) you also need to make identical changes in every replica.
Like they told you in the comments, you are using the wrong approach to the problem. Executing inside a container to make manual changes is (most of the time) useless, since Pods (and the containers which are part of such Pods) are ephemeral entities that are lost whenever the Pod restarts.
Unless the path you are trying to interact with is backed by a persistent volume, all your changes will be lost as soon as the container is restarted.
Helm charts, like the bitnami postgresql-ha chart, expose several ways to refine or modify the default installation:
You could build a custom docker image starting from the one used by default, adding there the libraries and whatever else you need. This way the container is already "ready" the way you want it as soon as it starts.
You could add an additional init container to perform operations such as preparing files for the main container on emptyDir volumes, which can then be mounted at the expected path (see the sketch after this list).
You could inject an entrypoint script which does what you want at start, before calling the main entrypoint.
Check the chart's README, as it lists all the possibilities offered by the chart (such as how to override the image with your custom one, and more).
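As a rough illustration of the init container approach, here is a generic pod-spec sketch (not actual chart values; the names, image, and mount path are assumptions, and the README documents the exact hooks the chart exposes):

initContainers:
- name: copy-libs
  image: registry.example.com:5000/infra/pg-libs:1.0   # hypothetical image holding the .so
  command: ["sh", "-c", "cp /libs/whatever.so /target/"]
  volumeMounts:
  - name: extra-libs
    mountPath: /target
containers:
- name: postgresql
  volumeMounts:
  - name: extra-libs
    mountPath: /opt/bitnami/postgresql/lib/extra   # assumed mount point inside the lib dir
volumes:
- name: extra-libs
  emptyDir: {}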

Skaffold dev stream logs of pods created by helm hooks

I would like to see the output from my pre-install/post-install helm hooks when using skaffold dev, but this does not seem to work.
Which filters does skaffold use to get all the pods for log tailing? Is there a way to force skaffold to pick up the hooks by applying some labels (e.g. skaffold.dev/run-id: static) ?
Context
Doing dev with local docker, the image building is pretty fast, so for some use cases there is no need to use file sync and special dev-mode container images with file watching inside.
There is this feature request: https://github.com/GoogleContainerTools/skaffold/issues/1441, but this is for adding hooks to skaffold itself.
The pods created by helm hooks are not removed (https://github.com/GoogleContainerTools/skaffold/issues/2876), but this is expected behavior for helm delete.
Thanks @acristu for the question. Skaffold dev here.
Currently, skaffold is unaware of pods deployed by the pre- and post-install helm hooks.
The reason: we don't parse the manifests in these hooks, and hence can't transform them to add the required label skaffold.dev/run-id.
Currently there is no way to force skaffold to pick up the logs from these pods/containers.
That said, we have a pending feature request to extend the current log configuration to include resourceType or resourceName, like the portForward section:
portForward: # describes user defined resources to port-forward.
- resourceType: # Kubernetes type that should be port forwarded.
  resourceName:
Supporting this in skaffold would be a great idea.

How can I restart only one service by using skaffold?

I use skaffold for a k8s-based microservices app. I use skaffold dev and skaffold run to run, and skaffold delete to restart, all microservices.
If I need to restart only one service, what must I do?
According to the docs:
Use skaffold dev to build and deploy your app every time your code changes,
Use skaffold run to build and deploy your app once, similar to a CI/CD pipeline
1. Deploy your services:
skaffold run --filename=skaffold_test_1.yaml
(in addition you can have multiple workflow configurations).
2. Change your skaffold workflow configuration and run:
skaffold delete --filename=skaffold_test2.yaml
Using this approach, your deployments will not be removed after stopping skaffold (as they would be with the skaffold dev command).
Basically, managing the content of the skaffold workflow configuration (adding or removing entries) allows you to deploy or remove a particular service:
apiVersion: skaffold/v1
kind: Config
.
.
.
deploy:
  kubectl:
    manifests:
    - k8s-service1.yaml
    # - k8s-service2.yaml
You can use skaffold dev's --watch-image flag to restrict which artifacts are monitored. It takes a comma-separated list of images, specified by the artifact.image.
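For example, to watch and redeploy only one artifact while leaving the others untouched (the image name is a placeholder matching an artifact.image in your skaffold.yaml):

skaffold dev --watch-image=skaffold-example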

Helm on Minikube: update local image

I have Minikube (v1.1.0) running locally with Helm (v2.13.1) initialized, and I connected the local docker daemon with Minikube by running eval $(minikube docker-env). In the code base of my application I created a chart with helm create chart. I changed the first few lines of ./chart/values.yml to:
image:
  repository: app-development
  tag: latest
  pullPolicy: Never
I build the image locally and install/upgrade the chart with Helm:
docker build . -t app-development
helm upgrade --install example ./chart
Now, this works perfectly the first time, but if I make changes to the application I would like to just rerun the above two commands to upgrade the image. Is there any way to get this working?
Workaround
To get the expected behaviour I can delete the chart from Minikube and install it again:
docker build . -t app-development
helm del --purge example
helm install example ./chart
When you make a change like this, Kubernetes is looking for some change in the Deployment object. If it sees that you want 1 Pod running app-development:latest, and it already has 1 Pod running an image named app-development:latest, then it's in the right state and it doesn't need to do anything (even if the local image that has that tag has changed).
The canonical advice here is to never use the :latest tag with Kubernetes. Every time you build an image, use a distinct tag (a time stamp or the current source control commit ID are easy unique things). With Helm it's easy enough to inject this based on a value you pass in:
image: app-development:{{ .Values.tag | default "latest" }}
This sort of build sequence would look a little more like
TAG=$(date +%Y%m%d-%H%M%S)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "tag=$TAG"
If you're actively developing your component you may find it easier to try to separate out "hacking on code" from "deploying into Kubernetes" as much as you can. Some amount of this tends to be inevitable, but Kubernetes really isn't designed to be a live development environment.
One way you could solve this problem is by using Minikube and Cloud Code from Google. When you initialize Cloud Code in your project, it creates a skaffold.yaml at the root location. You can put the helm chart for the same project in the same code base. Go ahead and edit this configuration to match the folder location of the helm chart:
deploy:
  helm:
    releases:
    - name: <chart_name>
      chartPath: <folder path relative to this file>
Now when you click on Cloud Code at the bottom of your Visual Studio Code window (or any supported editor), it should give you the following options (screenshot: https://i.stack.imgur.com/vXK4U.png).
Select "Run on Kubernetes" from the list.
The only change you'll have to make in your helm chart is to read the image URL from the skaffold.yaml using a profile:
profiles:
- name: prod
  deploy:
    helm:
      releases:
      - name: <helm_chart_name>
        chartPath: helm
        skipBuildDependencies: true
        artifactOverrides:
          image: <url_production_image_url>
This will read the image from the configured URL in production, whereas locally it should read from the docker daemon. Cloud Code also provides hot update/deployment when you make changes to any file, so there's no need to keep specifying an image tag while testing locally. Once you're happy with the code, update the image to the latest version number, which should trigger a deployment in your integration/dev environment.

Kubernetes - different settings per environment

We have an app that runs on GKE Kubernetes and expects an auth URL (to which the user will be redirected via their browser) to be passed as an environment variable.
We are using different namespaces per environment.
So our current pod config looks something like this:
env:
- name: ENV
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: AUTH_URL
  value: https://auth.$(ENV).example.org
And it all works amazingly: we can have as many dynamic environments as we want; we just do apply -f config.yaml and it works flawlessly, without changing a single config file and without any third-party scripts.
Now for production we kind of want to use a different domain, so the general pattern https://auth.$(ENV).example.org does not work anymore.
What options do we have?
Since configs are in git repo, create a separate branch for prod environment
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
Other...?
This seems like an ideal opportunity to use helm!
It's really easy to get started: simply install tiller into your cluster.
Helm gives you the ability to create "charts" (which are like packages) that can be installed into your cluster, and you can template these really easily. As an example, your config.yaml might look like this:
env:
- name: AUTH_URL
  value: {{ .Values.auth.url }}
Then, within the helm chart you have a values.yaml which contains defaults for the url, for example:
auth:
  url: https://auth.namespace.example.org
You can use the --values option with helm to specify per-environment values.yaml files, or even use the --set flag to override individual values when running helm install.
Take a look at the documentation here for information about how values and templating work in helm. It seems perfect for your use case.
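For example (the values file names are placeholders; Helm 2 syntax, matching the tiller-based setup above):

helm install ./chart --values values-prod.yaml
# or override a single value inline:
helm install ./chart --set auth.url=https://auth.example.org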
jaxxstorms' answer is helpful; I just want to add what it means for the options you proposed:
Since configs are in git repo, create a separate branch for prod environment.
I would not recommend separate branches in Git, since the purpose of branches is to allow concurrent editing of the same data, but what you have is different data (different configurations for the cluster).
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that,
else use config.yaml) - but with this approach we cannot use kubectl
directly anymore
Using Helm will solve this more elegantly. Instead of a script, you use helm to generate the different files for different environments. And you can still use kubectl (with the final files, which I would also check into Git, by the way).
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
This is a matter of opinion, but in general I would recommend splitting up the deployments by application and technology. For example, when I deploy a cluster that runs 3 different applications A, B, and C, and each application requires Nginx, CockroachDB, and Go app-servers, then I'll have 9 configuration files. This allows me to separately deploy or update each of the technologies in the app context, which is important for allowing separate deployment actions in a CI server such as Jenkins, and follows the general principle of separation of concerns.
Other...?
See jaxxstorms' answer about Helm.