Way to stop deploying to the wrong Kubernetes environment - kubernetes

We have a set of Kubernetes YAMLs which are managed by kustomize and deployed to different clusters. Each cluster is slightly different, so every environment has a subdirectory (environ/<envname>) containing some environment-specific kustomization overrides.
We manually deploy new versions to the different environments with kubectl apply -k environ/env. But sometimes we do something stupid like running kubectl apply -k environ/env1 against the cluster env2. Is there some way to stop a kubectl apply against the wrong environment?

This is a community wiki answer. Feel free to expand it.
If you are aware that you made a mistake and want to cancel the command right away, then there are some options for you:
$ kill -9 $! will kill the most recently started background job ($! expands to its process ID), so it applies when the command was started with &.
Suspend the current foreground process by pressing Ctrl+z and then kill it using kill -9 %% or kill -9 %+. More details regarding this approach can be found here.
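For example, the second option looks like this in practice (a short sketch; the apply command stands in for whatever was run by mistake):
kubectl apply -k environ/env1    # oops, this was meant for another cluster
# press Ctrl+z here to suspend the foreground job
kill -9 %%                       # kill the most recently suspended job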
EDIT:
Including the solution proposed by VAS from the comments:
I'd use shell scripts and different configs for each cluster, like that: a deploy-cluster1.sh where I'd have kubectl --kubeconfig .kube/cluster1 apply -k environ/cluster1, or even shorter: deploy.sh env1, where deploy.sh contains: kubectl --kubeconfig .kube/$1 apply -k environ/$1
More details regarding that approach can be found here.
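A slightly more defensive variant of that deploy.sh (a sketch, assuming kubeconfig files named after the environments under .kube/; the confirmation prompt is an extra safety check not in the original comment):
#!/usr/bin/env bash
# deploy.sh <envname> -- apply the kustomize overlay for exactly one environment
set -euo pipefail
env="$1"
kubeconfig=".kube/${env}"
# Refuse to run when there is no kubeconfig for this environment.
[ -f "${kubeconfig}" ] || { echo "no kubeconfig for ${env}" >&2; exit 1; }
# Show which cluster is about to be touched and ask for confirmation.
context="$(kubectl --kubeconfig "${kubeconfig}" config current-context)"
read -r -p "Apply environ/${env} to context '${context}'? [y/N] " answer
[ "${answer}" = "y" ] || exit 1
kubectl --kubeconfig "${kubeconfig}" apply -k "environ/${env}"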

Recently I found a new solution here.
direnv can change environment variables after you switch into a different directory, and this forces me to switch the KUBECONFIG env per environment. Follow Mastering the KUBECONFIG file to get more details.
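A minimal sketch of that setup (assuming one .envrc file per environment directory and per-cluster kubeconfig files; the paths are illustrative):
# environ/env1/.envrc -- loaded automatically by direnv on "cd environ/env1"
export KUBECONFIG="$HOME/.kube/env1"
After a one-time direnv allow environ/env1, every kubectl run from inside that directory targets env1, so cd environ/env1 && kubectl apply -k . can no longer hit the wrong cluster.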

Related

Create Symlink in Kubernetes deployment

I want to create a symlink using a kubernetes deployment yaml. Is this possible?
Thanks
Not really, but you could set your command to something like [/bin/sh, -c, "ln -s whatever whatever && exec originalcommand"]. Kubernetes isn't involved per se, but it would probably do the job. Normally, though, that should be part of your image build process, not a deployment-time thing.

kubectl "error: You must be logged in to the server (Unauthorized)" using kubectx, while no error if used the same config directly

I am encountering weird behavior when I try to configure several KUBECONFIG environment entries concatenated with :, as in the example here:
export KUBECONFIG=/Users/user/Work/company/project/setup/secrets/dev-qz/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-wer/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-wer/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-jg/users/admin.conf:/Users/user/Work/company/project/setup/secrets/preprod/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-fxc/users/admin.conf:/Users/user/Work/company/project/setup/secrets/cluster-setup/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-fxc/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-jg/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-qz/users/admin.conf
This is what is happening: if I choose a cluster with kubectx (not every cluster from the list, just any one of them), then when I try kubectl get po I receive: error: You must be logged in to the server (Unauthorized).
But if I try to reach the same cluster by passing it directly to the kubectl command with --kubeconfig=<path to the config>, it works.
I am pretty stuck on this and just want to know if anyone else is facing this kind of issue as well and how they have solved it.
Eventually I found the problem. The flatten command that @mario suggested helped me debug the situation better.
Basically, the in-memory or in-file merge does what it is supposed to do: create a kubeconfig with all the unique parameters of each kubeconfig file. This works perfectly unless one or more kubeconfigs use the same name to identify the same kind of component. In that case the last one in order wins. So if you have the following example:
grep -Rn 'name: kubernetes-admin$' infra/secrets/*/users/admin.conf
infra/secrets/cluster1/users/admin.conf:16:- name: kubernetes-admin
infra/secrets/cluster2/users/admin.conf:17:- name: kubernetes-admin
infra/secrets/cluster3/users/admin.conf:16:- name: kubernetes-admin
cluster1 and cluster2 won't work, while cluster3 will work perfectly, incidentally due to the order.
The solution to this problem is to avoid non-unique fields by renaming the entry that identifies the user (for the example above). Once this change is done, everything works perfectly.
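A minimal sketch of that rename (assuming each admin.conf defines a user called kubernetes-admin, as in the grep output above; the per-cluster suffixes are illustrative, and GNU sed is assumed for -i):
# Give each kubeconfig a unique user name, in both the users: and contexts: sections.
sed -i 's/kubernetes-admin/kubernetes-admin-cluster1/g' infra/secrets/cluster1/users/admin.conf
sed -i 's/kubernetes-admin/kubernetes-admin-cluster2/g' infra/secrets/cluster2/users/admin.conf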
I agree with @Bernard. This doesn't look like anything specific to kubectx, as it is just a bash script which under the hood uses the kubectl binary. You can see its code here. I guess it will also fail in kubectl if you don't provide the specific kubeconfig file:
But, if I try to reach the same cluster passing it directly to the kubectl command with --kubeconfig= it works.
There is a bit of inconsistency in the way you're testing it, as you don't provide the specific kubeconfig file to both commands. When you use kubectx, it relies on your multiple in-memory merged kubeconfig files, and you compare that with a working kubectl example in which you directly specify the kubeconfig file that should be used. To make this comparison consistent you should also use kubectx with that particular kubeconfig file. And what happens if you run the kubectl command without specifying --kubeconfig=<path to the config>? I guess you get a similar error to the one you get when running kubectx. Please correct me if I'm wrong.
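One way to make that comparison consistent (a sketch, reusing one of the paths from the question; the context name is a placeholder):
export KUBECONFIG=/Users/user/Work/company/project/setup/secrets/dev-qz/users/admin.conf   # one file for both tools
kubectx <context defined in that file>   # switch with kubectx
kubectl get po                           # both tools now read the same single file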
There is a really good article written by Ahmet Alp Balkan, the kubectx author, which nicely explains how you can work with multiple kubeconfig files. As you can read in the article:
Tip 2: Using multiple kubeconfigs at once
Sometimes you have a bunch of small kubeconfig files (e.g. one per cluster) but you want to use them all at once, with tools like kubectl or kubectx that work with multiple contexts at once.
To do that, you need a "merged" kubeconfig file. Tip #3 explains how you can merge the kubeconfigs into a single file, but you can also merge them in-memory.
By specifying multiple files in the KUBECONFIG environment variable, you can temporarily stitch kubeconfig files together and use them all in kubectl.
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Tip 3: Merging kubeconfig files
Since kubeconfig files are structured YAML files, you can't just append them to get one big kubeconfig file, but kubectl can help you merge these files:
KUBECONFIG=file1:file2:file3 kubectl config view --merge --flatten > out.txt
Possible solutions:
Try to merge your multiple kubeconfig files into a single one, as in the example above, to see whether the problem occurs only with the in-memory merge:
KUBECONFIG=file1:file2:file3 kubectl config view --merge --flatten > out.txt
Review all your kubeconfigs and test each of them individually, just to make sure they work properly when specified in the KUBECONFIG env variable on their own. There might be an error in one of them which causes the issue; a loop like the one below can help.
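A quick way to run that per-file check (a sketch, reusing the directory layout from the grep output above):
for cfg in infra/secrets/*/users/admin.conf; do
  echo "== ${cfg}"
  kubectl --kubeconfig "${cfg}" get ns >/dev/null && echo OK || echo FAILED
done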

Helm chart copy shell script from local machine to remote pod, change permission and execute

Is there a way I can copy shell script from local machine to pod using charts and helm, change the script permission and execute the script inside the pod?
No, Helm cannot do this. In effect, the only Kubernetes operations it can perform are the equivalents of kubectl apply and kubectl delete, though it can apply templating before sending YAML off to the Kubernetes server. The sorts of imperative commands you're describing (kubectl cp and kubectl exec) aren't things Helm can do.
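For reference, outside of Helm those imperative steps would look like this (a sketch; the pod and script names are illustrative):
kubectl cp ./setup.sh mypod:/tmp/setup.sh      # copy the script into the pod
kubectl exec mypod -- chmod +x /tmp/setup.sh   # change its permissions
kubectl exec mypod -- /tmp/setup.sh            # execute it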
(The sorts of imperative commands you're describing aren't generally good form in Kubernetes in any case. Generally you'd need to package your script up in a Docker image to be able to run it in the cluster, and you want to try to set up your containers to be able to set themselves up as much as they can. Also remember that pods get deleted routinely, sometimes even outside of your control, and anything you've manually copied into a pod will get lost when this happens.)

Kubectl drain node failed: "Forbidden: node updates may only change labels, taints, or capacity"

When attempting to drain a node on an AKS K8s cluster using:
kubectl drain ${node_name} --ignore-daemonsets
I get the following error:
"The Node \"aks-agentpool-xxxxx-0\" is invalid: []: Forbidden: node updates may only change labels, taints, or capacity (or configSource, if the DynamicKubeletConfig feature gate is enabled)"
Is there something extra that needs to be done on AKS nodes to allow draining?
(Context: This is part of an automation script I'm writing to drain a kubernetes node for maintenance operations without downtime, so the draining is definitely a prerequisite here)
An additional troubleshooting note:
This command is being run via Ansible's "shell" module, but when the command is run directly in BASH, it works fine.
Further, the Ansible is being run via a Jenkins pipeline. Debug statements seem to show:
the command being correctly formed and executed.
the context seems correct (so kubeconfig is accessible)
pods can be listed (so kubeconfig is active and correct)
This command is being run via Ansible's "shell" module, but when the command is run directly in BASH, it works fine. Further, the Ansible is being run via a Jenkins pipeline.
It's good that you added this information because it totally changes the perspective from which we should look at the issue you experience.
For debugging purposes, instead of running your command, try to run:
kubectl auth can-i drain node --all-namespaces
both directly in a bash shell as well as via Ansible's shell module.
It should at least tell you whether or not this is a permission issue.
Other commands that you may use for debugging in this case are:
ls -l .kube/config
cat .kube/config
whoami
The last one is to make sure that Ansible uses the same user. If you already know that it uses a different user, try running the script as the same user you use when running it in a bash shell.
Once you check this, we can continue the debugging process.
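If it helps, the same checks can be bundled into a single Ansible ad-hoc call so you see exactly what the Ansible user sees (a sketch; the host pattern is illustrative):
ansible jenkins-agent -m shell -a 'whoami && ls -l ~/.kube/config && kubectl config current-context'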

How to switch kubectl clusters between gcloud and minikube

I have Kubernetes working well in two different environments, namely my local environment (MacBook running minikube) and on Google's Container Engine (GKE, Kubernetes on Google Cloud). I use the MacBook/local environment to develop and test my YAML files and then, upon completion, try them on GKE.
Currently I need to work with each environment individually: I need to edit the YAML files in my local environment and, when ready, (git) clone them to the GKE environment and then use/deploy them. This is a somewhat cumbersome process.
Ideally, I would like to use kubectl from my MacBook to easily switch between the local minikube and GKE Kubernetes environments and to easily determine where the YAML files are used. Is there a simple way to switch contexts to do this?
You can switch from local (minikube) to gcloud and back with:
kubectl config use-context CONTEXT_NAME
to list all contexts:
kubectl config get-contexts
You can create different environments for local and gcloud and put them in separate YAML files.
List contexts
kubectl config get-contexts
Switch contexts
kubectl config set current-context MY-CONTEXT
A faster shortcut to the standard kubectl commands is to use kubectx:
List contexts: kubectx
Equivalent to kubectl config get-contexts
Switch context (to foo): kubectx foo
Equivalent to kubectl config use-context foo
To install on macOS: brew install kubectx
The kubectx package also includes a similar tool for switching namespaces called kubens.
These two are super convenient if you work in multiple contexts and namespaces regularly.
More info: https://ahmet.im/blog/kubectx/
If you're looking for a GUI-based solution for Mac and have Docker Desktop installed, you can use the Docker menu bar icon. There you'll find a "Kubernetes" menu with all the contexts you have in your kubeconfig, and you can easily switch between them.
To get all contexts:
C:\Users\arun>kubectl config get-contexts
To get the current context:
C:\Users\arun>kubectl config current-context
To switch contexts:
C:\Users\arun>kubectl config use-context <any context name from above list>
A 2020 answer: a simple way to switch between kubectl contexts is to pass the --context flag on any command,
kubectl top nodes --context=context01name
kubectl top nodes --context=context02name
You can also store the context name in an env variable, like:
context01name=gke_${GOOGLE_CLOUD_PROJECT}_us-central1-a_standard-cluster-1
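and then reference it in any command (continuing the snippet above):
kubectl top nodes --context="${context01name}"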
I got bored of typing this over and over, so I wrote a simple bash utility to switch contexts.
You can find it here https://github.com/josefkorbel/kube-switch
The canonical way of switching/reading/manipulating different Kubernetes environments (aka Kubernetes contexts) is, as Mark mentioned, to use kubectl config; see below:
$ kubectl config
Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"
Available Commands:
current-context Displays the current-context
delete-cluster Delete the specified cluster from the kubeconfig
delete-context Delete the specified context from the kubeconfig
get-clusters Display clusters defined in the kubeconfig
get-contexts Describe one or many contexts
rename-context Renames a context from the kubeconfig file.
set Sets an individual value in a kubeconfig file
set-cluster Sets a cluster entry in kubeconfig
set-context Sets a context entry in kubeconfig
set-credentials Sets a user entry in kubeconfig
unset Unsets an individual value in a kubeconfig file
use-context Sets the current-context in a kubeconfig file
view Display merged kubeconfig settings or a specified kubeconfig file
Usage:
kubectl config SUBCOMMAND [options]
Behind the scenes, there is a ~/.kube/config YAML file that stores all the available contexts with their corresponding credentials and endpoints for each context.
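You can inspect that file, or the merged result of several files, with the standard view subcommand:
kubectl config view             # merged view, with credentials redacted
kubectl config view --minify    # only the entries used by the current context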
Kubectl off the shelf doesn't make it easy to manage different Kubernetes contexts, as you probably already know. Rather than rolling your own script to manage all that, a better approach is to use a mature tool called kubectx, created by Ahmet Alp Balkan, a Googler on the Kubernetes / Google Cloud Platform developer experience team that builds tooling like this. I highly recommend it.
https://github.com/ahmetb/kubectx
$ kubectx --help
USAGE:
kubectx : list the contexts
kubectx <NAME> : switch to context <NAME>
kubectx - : switch to the previous context
kubectx <NEW_NAME>=<NAME> : rename context <NAME> to <NEW_NAME>
kubectx <NEW_NAME>=. : rename current-context to <NEW_NAME>
kubectx -d <NAME> [<NAME...>] : delete context <NAME> ('.' for current-context)
(this command won't delete the user/cluster entry
that is used by the context)
kubectx -h,--help : show this message
TL;DR: I created a GUI to switch Kubernetes contexts via AppleScript. I activate it via shift-cmd-x.
I too had the same issue. It was a pain switching contexts from the command line. I used FastScripts to set a key combo (shift-cmd-x) to run the following AppleScript (placed in this directory: $HOME/Library/Scripts/Applications/Terminal).
use AppleScript version "2.4" -- Yosemite (10.10) or later
use scripting additions
do shell script "/usr/local/bin/kubectl config current-context"
set curcontext to result
do shell script "/usr/local/bin/kubectl config get-contexts -o name"
set contexts to paragraphs of result
choose from list contexts with prompt "Select Context:" with title "K8s Context Selector" default items {curcontext}
set scriptArguments to item 1 of result
do shell script "/usr/local/bin/kubectl config use-context " & scriptArguments
display dialog "Switched to " & scriptArguments buttons {"ok"} default button 1
Cloning the YAML files across repos for different environments is definitely not ideal. What you need to do is templatize your YAML files by extracting the parameters which differ from environment to environment.
You can, of course, use some templating engine, keep the values in separate YAML files, and produce the YAML for a specific environment. But this is easily doable if you adopt Helm charts. To take a look at some sample charts, go to the stable directory in this GitHub repo.
To take an example of the Wordpress chart, you could have two different commands for two environments:
For Dev:
helm install --name dev-release \
  --set wordpressUsername=dev_admin \
  --set wordpressPassword=dev_password \
  --set mariadb.mariadbRootPassword=dev_secretpassword \
  stable/wordpress
It is not necessary to pass these values on the CLI, though; you can store the values in a file aptly named values.yaml, and you can have different files for different environments.
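For example, with one values file per environment (a sketch; the file names are illustrative and the Helm 2-style --name flag matches the command above):
helm install --name dev-release -f values-dev.yaml stable/wordpress
helm install --name prod-release -f values-prod.yaml stable/wordpress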
You will need some work in converting to Helm chart standards, but the effort will be worth it.
Check also the docker context command, available since Docker 19.03.
Ajeet Singh Raina illustrates it in "Docker 19.03.0 Pre-Release: Fast Context Switching, Rootless Docker, Sysctl support for Swarm Services":
A context is essentially the configuration that you use to access a particular cluster.
Say, for example, in my particular case I have 4 different clusters, a mix of Swarm and Kubernetes, running locally and remotely.
Assume that I have a default cluster running on my desktop machine, a 2-node Swarm cluster running on Google Cloud Platform, a 5-node cluster running on the Play with Docker playground, and a single-node Kubernetes cluster running on Minikube, all of which I need to access pretty regularly.
Using the docker context CLI I can easily switch from one cluster (which could be my development cluster) to a test or production cluster in seconds.
$ sudo docker context --help
Usage: docker context COMMAND
Manage contexts
Commands:
create Create a context
export Export a context to a tar or kubeconfig file
import Import a context from a tar file
inspect Display detailed information on one or more contexts
ls List contexts
rm Remove one or more contexts
update Update a context
use Set the current docker context
Run 'docker context COMMAND --help' for more information on a command.
For example:
[:)Captain'sBay=>sudo docker context ls
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://127.0.0.1:16443 (default) swarm
swarm-context1
I use kubeswitch (disclaimer: I wrote the tool) that can be used just like kubectx, but is designed for a large number of kubeconfig files.
If you have to deal with hundreds or thousands of kubeconfig files, this tool might be useful to you, otherwise kubectx or kubectl config use-context might be sufficient.
For instance, it adds capabilities like reading from vault, hot reload while searching, and an index to speed up subsequent searches.
You can install it from here.
EDIT: kubeswitch now also includes direct support for GKE, so you can use and discover kubeconfig files without having to manually download and update them.
In case you might be looking for a simple way to switch between different contexts maybe this will be of help.
I got inspired by the kubectx and kswitch scripts already mentioned, which I can recommend for most use cases. They help with the switching task, but break for me on some bigger or less standard configurations of ~/.kube/config. So I created a sys-exec invocation wrapper and a short-hand around kubectl.
If you call k without params you will see an intercepted prompt to switch context:
Switch kubectl to a different context/cluster/namespace.
Found following options to select from:
>>> context: [1] franz
>>> context: [2] gke_foo_us-central1-a_live-v1
>>> context: [3] minikube
--> new num [?/q]:
Further, k continues to act as a short-hand. The following are equivalent:
kubectl get pods --all-namespaces
k get pods -A
k p -A
Yes, I think this is what you're asking about. To view your current config, use kubectl config view. kubectl loads and merges config from the following locations (in order):
--kubeconfig=/path/to/.kube/config command line flag
KUBECONFIG=/path/to/.kube/config env variable
$HOME/.kube/config - The DEFAULT
I use --kubeconfig since I switch a lot between multiple clusters. It's slightly cumbersome, but it works well.
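One way to take the edge off that (a sketch; the file names are illustrative):
alias kdev='kubectl --kubeconfig ~/.kube/dev-config'    # always talks to the dev cluster
alias kprod='kubectl --kubeconfig ~/.kube/prod-config'  # always talks to the prod cluster
kdev get pods
kprod get pods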
See these for more info:
https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/ and https://kubernetes.io/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/