Ensure kubectl is running in the correct context

Consider a simple script:
kubectl create -f foo.yaml
kubectl expose deployment foo
There seems to be a race condition: there is no way to guarantee that the second command runs in the same context as the first. (Consider the user going to another shell and invoking kubectl config set-context while the script is running.) How do you resolve that? How can I ensure consistency?

I suggest always using the --context flag:
$ kubectl options | grep context
--context='': The name of the kubeconfig context to use
Pass it to each kubectl command in order to pin the context and prevent the issue described in the question:
ENV=<env_name>
kubectl create --context=$ENV -f foo.yaml
kubectl expose --context=$ENV deployment foo
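If the script should simply stick with whatever context was current when it started, you can also resolve the context once up front and reuse it; a minimal sketch:
#!/usr/bin/env bash
set -euo pipefail

# Resolve the context once; kubectl config use-context calls from other
# shells won't change what this script targets.
CTX="$(kubectl config current-context)"

kubectl create --context="$CTX" -f foo.yaml
kubectl expose --context="$CTX" deployment foo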

How to give annotations by using run command in kubernetes to a pod

I attempted the following, but it fails with an error. The output also says See 'kubectl run --help' for usage, but I can't figure out how to fix it:
kubectl run pod pod4 --image=aamirpinger/helloworld:latest --port=80 --annotaions=createdBy="Muhammad Shahbaz" --restart=Never
Error: unknown flag: --annotaions
kubectl run supports specifying annotations via the --annotations flag, which can be repeated to apply multiple annotations. (Note the spelling: the command in the question uses --annotaions, which is why kubectl reports an unknown flag.)
For example:
$ kubectl run --image myimage --annotations="foo=bar" --annotations="another=one" mypod
results in the following:
$ kubectl get pod mypod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    foo: bar
    another: one
[...]
Older versions of kubectl run don't have an option to set annotations.
Unless you're running a one-off debugging pod, it's usually better practice to write out the full (Deployment) YAML file, commit to source control, and install it using kubectl apply -f. That will let you specify any Kubernetes object property you need to.
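A minimal sketch of such a manifest, using the image and annotation from the question (all names are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  annotations:
    createdBy: "Muhammad Shahbaz"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
      annotations:
        createdBy: "Muhammad Shahbaz"
    spec:
      containers:
      - name: helloworld
        image: aamirpinger/helloworld:latest
        ports:
        - containerPort: 80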
As David Maze mentioned, there is no --annotations flag in older versions of the kubectl run command. It is better to write a Deployment YAML file than to rely on kubectl run.
However, you can add annotations to Kubernetes resources with the kubectl annotate command. All Kubernetes objects support the ability to store additional data with the object as annotations.
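For example, to add the annotation from the question to an existing pod (assuming pod4 is already running):
kubectl annotate pod pod4 createdBy="Muhammad Shahbaz"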
Hope this helps.

How to access a kubernetes pod by its partial name?

I often run tasks like:
Read the log of the service X
or
Attach a shell inside the service Y
I always use something in my history like:
kubectl logs `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep <partial_name>`
or
kubectl exec -it `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep <partial_name>` bash
Do you know if kubectl has already something in place for this? Or should I create my own set of aliases?
Kubernetes objects are loosely coupled by means of labels (key-value pairs). Because of that, Kubernetes provides various features that let you operate on sets of objects selected by label.
If you have several pods belonging to the same service, chances are they are managed by a ReplicaSet and share a specific label. You should see it if you run:
kubectl get pods --show-labels
Now, to aggregate logs for instance, you could use a label selector like:
kubectl logs -l key=value
For more info please see: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ .
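For example, assuming the pods carry a label app=myservice:
# List only the matching pods, then tail their logs together
kubectl get pods -l app=myservice
kubectl logs -l app=myservice --tail=20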
Added to my zsh config:
sshpod () {
  # grab the first pod whose name matches $1 and open a shell in it
  kubectl exec --stdin --tty "$(kubectl get pods --no-headers -o custom-columns=':metadata.name' | grep "${1}" | head -n 1)" -- /bin/bash
}
Usage:
sshpod podname
This:
- finds all pods
- greps for the needed name
- picks the first match
- opens a shell in that pod (via kubectl exec, not actual SSH)
You can also access a pod by its deployment/service/etc.:
kubectl exec -it svc/foo -- bash
kubectl exec -it deployment/bar -- bash
Kubernetes will pick a pod that matches the criteria and send you to it.
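The same resource/name form also works for reading logs (a small sketch, assuming a deployment named bar):
kubectl logs deployment/bar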
You can enable shell autocompletion. kubectl provides this support for Bash and Zsh, which will save you a lot of typing (you press TAB to get suggestions/completions).
The Kubernetes documentation has a great set of information about how to enable autocompletion, under Optional kubectl configurations. It covers Bash on Linux, Bash on macOS, and Zsh.
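For example, enabling Zsh completion (per those docs) looks like:
# Enable completion in the current shell
source <(kubectl completion zsh)
# Persist it for future shells
echo 'source <(kubectl completion zsh)' >> ~/.zshrc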

How to correctly export kubernetes resources?

I've created several resources using k8s Ansible module. Now I'd like to export the resources into Kubernetes manifests (so I don't need to use Ansible anymore). I've started by exporting a Service:
$ kubectl get svc myservice -o yaml --export > myservice.yaml
$ kubectl apply -f myservice.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/myservice configured
Why do I get the warning? And why does it say service/myservice configured and not service/myservice unchanged? Is there a better way to export resources?
You're doing it right; don't worry about the warning.
If you want to get rid of it, you have to delete the generated keys: generation, selfLink, and so on.
Yes, you are doing it right.
Let me show you a small trick:
kubectl get svc myservice -o yaml --export | sed \
  -e '/status:/d' \
  -e '/creationTimestamp:/d' \
  -e '/selfLink: [a-z0-9A-Z/]\+/d' \
  -e '/resourceVersion: "[0-9]\+"/d' \
  -e '/phase:/d' \
  -e '/uid: [a-z0-9-]\+/d' > myservice.yaml
This will generate a proper YAML file without the status, creationTimestamp, selfLink, resourceVersion, phase, and uid fields.
You get the warning because you're using kubectl apply on a resource that was previously created with plain kubectl create (without --save-config). If you create the initial Service with kubectl apply, you shouldn't get the warning.
The configured instead of unchanged is likely due to metadata or generated fields that are still included in the output of kubectl get svc myservice -o yaml --export.
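A minimal sketch of that workflow, assuming myservice.yaml is the cleaned-up manifest and the Service doesn't exist yet:
$ kubectl apply -f myservice.yaml
service/myservice created
$ kubectl apply -f myservice.yaml
service/myservice unchanged
The first apply records the kubectl.kubernetes.io/last-applied-configuration annotation, so later applies can diff against it without warnings.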

How to run a command against a specific context with kubectl?

I have multiple contexts, and I want to be able to run commands against a context that I have access to but am not pointed at, while staying in my current context.
How do I run a command, say $ kubectl get pods against context B while I'm currently pointed to context A?
--context is a global option for all kubectl commands. Simply run:
$ kubectl get pods --context <context_B>
For a list of all global kubectl options, run $ kubectl options
Alternatively, you can set the default namespace for your current context:
kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
and then run:
kubectl get po
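The two flags combine as well, so a one-off command can target another context and namespace at once:
$ kubectl get pods --context <context_B> --namespace <namespace>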

Kubernetes: How do I delete clusters and contexts from kubectl config?

kubectl config view shows contexts and clusters corresponding to clusters that I have deleted.
How can I remove those entries?
The command
kubectl config unset clusters
appears to delete all clusters. Is there a way to selectively delete cluster entries? What about contexts?
kubectl config unset takes a dot-delimited path. You can delete cluster/context/user entries by name. E.g.
kubectl config unset users.gke_project_zone_name
kubectl config unset contexts.aws_cluster1-kubernetes
kubectl config unset clusters.foobar-baz
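If you're unsure of the exact entry names, you can list them first; a small sketch using jsonpath:
# Print the names kubeconfig holds in each section
kubectl config view -o jsonpath='{.users[*].name}'; echo
kubectl config view -o jsonpath='{.contexts[*].name}'; echo
kubectl config view -o jsonpath='{.clusters[*].name}'; echo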
Side note, if you teardown your cluster using cluster/kube-down.sh (or gcloud if you use Container Engine), it will delete the associated kubeconfig entries. There is also a planned kubectl config rework for a future release to make the commands more intuitive/usable/consistent.
For clusters and contexts you can also do
kubectl config delete-cluster my-cluster
kubectl config delete-context my-cluster-context
There's nothing specific for users though (at least in older kubectl versions), so you still have to do:
kubectl config unset users.my-cluster-admin
Run the command below to list all the contexts you have:
$ kubectl config get-contexts
CURRENT   NAME             CLUSTER     AUTHINFO                                NAMESPACE
*         Cluster_Name_1   Cluster_1   clusterUser_resource-group_Cluster_1
Delete context:
$ kubectl config delete-context Cluster_Name_1
Not directly related to the question, but possibly a useful resource:
Have a look at kubectx + kubens: Power tools for kubectl.
They make it easy to switch contexts and namespaces, and also include an option to delete contexts.
Change context:
kubectx dev-cluster-01
Change namespace:
kubens dev-ns-01
Delete context:
kubectx -d my-context