Pre-type in PowerShell

I am doing some Kubernetes hands-on and I am using kubectl in PowerShell on a minikube cluster.
I often find myself tired of having to type
kubectl do this
and then
kubectl do that
Is there any way to set PowerShell to pre-type kubectl after each press of Enter?

You can hijack the CommandNotFoundAction handler to "default" to kubectl commands when you enter a term that otherwise fails to resolve:
$ExecutionContext.InvokeCommand.CommandNotFoundAction = {
    param([string]$CommandName, [System.Management.Automation.CommandLookupEventArgs]$evtArgs)

    # Known kubectl subcommands that should fall through to kubectl
    $kubectlCommands = -split 'annotate api-resources api-versions apply attach auth autoscale certificate cluster-info completion config cordon cp create debug delete describe diff drain edit exec explain expose get kustomize label logs patch plugin port-forward proxy replace rollout run scale set taint top uncordon version wait'

    if ($CommandName -in $kubectlCommands) {
        # Forward the original command name plus any remaining arguments to kubectl
        $evtArgs.CommandScriptBlock = {
            & kubectl $CommandName @args
        }.GetNewClosure()
        $evtArgs.StopSearch = $true
    }
}
This will cause PowerShell to execute kubectl whenever the command name entered matches a kubectl command.
Beware that this only works as a last resort: cp something somewhere will never execute kubectl cp something somewhere, because cp can already be resolved as a command name natively in PowerShell.
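For example, with the handler loaded (for instance from your PowerShell profile), entering get pods at the prompt runs kubectl get pods, while built-ins such as cp keep their normal PowerShell meaning.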

Related

Searching for a keyword in all the pods/replicas of a Kubernetes deployment

I am running a deployment called mydeployment that manages several pods/replicas for a certain service. I want to search all the service pods/instances/replicas of that deployment for a certain keyword. The command below defaults to one replica and returns matches from that replica only.
kubectl logs -f deploy/mydeployment | grep "keyword"
Is it possible to customize the above command to return matches from all instances/pods of the deployment mydeployment? Any hint?
Save this to a file named fetchLogs.sh and, if you are using a Linux box, run it with sh fetchLogs.sh:
#!/bin/sh
podNameFilter="key-word-from-pod-name"
keyWord="actual-log-search-keyword"
nameSpace="name-space-where-the-pods-are-running"

echo "Running script..."
# Find pods whose name matches the filter, then grep each pod's logs for the keyword
for podName in $(kubectl get pods -n "${nameSpace}" -o name | grep -i "${podNameFilter}" | cut -d'/' -f2); do
    echo "Searching pod ${podName}"
    kubectl -n "${nameSpace}" logs "pod/${podName}" | grep -i "${keyWord}"
done
I used pods here; if you want to use a deployment instead, the idea is the same, just change the kubectl command accordingly.
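Alternatively, a label selector can pull logs from every pod of the deployment in one call. A minimal sketch, assuming the deployment's pods carry the label app=mydeployment (check your deployment's actual selector) and that your kubectl version supports --prefix:
# Assumes pods are selected by app=mydeployment; --prefix marks which pod each line came from
kubectl logs -n "name-space-where-the-pods-are-running" -l app=mydeployment --tail=-1 --prefix | grep -i "keyword"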

Running script from Linux shell inside a Kubernetes pod

Team,
I need to execute a shell script that is inside a Kubernetes pod; however, the call needs to come from outside the pod. Below is the script for your reference:
echo 'Enter Namespace: '; read namespace; echo $namespace;
kubectl exec -it `kubectl get po -n $namespace|grep -i podName|awk '{print $1}'` -n $namespace --- {scriptWhichNeedToExecute.sh}
Can anyone suggest how to do this?
There isn't really a good way. A simple option might be cat script.sh | kubectl exec -i <pod-name> -- bash, but that can have weird side effects. The more correct solution would be to use a debug container, but that feature is still in alpha right now.
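For reference, a minimal sketch of that approach (the namespace and pod name are placeholders; the script name is taken from the question):
# Pipe a local script into the pod and run it there; bash -s reads the script from stdin
cat scriptWhichNeedToExecute.sh | kubectl exec -i -n "$namespace" <pod-name> -- bash -s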

How to kubectl wait for crd creation?

What is the best method for checking to see if a custom resource definition exists before running a script, using only kubectl command line?
We have a yaml file that contains definitions for a NATS cluster ServiceAccount, Role, ClusterRoleBinding and Deployment. The image used in the Deployment creates the crd, and the second script uses that crd to deploy a set of pods. At the moment our CI pipeline needs to run the second script a few times, only completing successfully once the crd has been fully created. I've tried to use kubectl wait but cannot figure out what condition to use that applies to the completion of a crd.
Below is my most recent, albeit completely wrong, attempt; however, it illustrates the general sequence we'd like:
kubectl wait --for=condition=complete
kubectl apply -f 1.nats-cluster-operator.yaml
kubectl apply -f 2.nats-cluster.yaml
The condition to wait for on a CRD is established:
kubectl -n <namespace-here> wait --for condition=established --timeout=60s crd/<crd-name-here>
You may want to adjust --timeout appropriately.
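Applied to the sequence in the question, a sketch might look like this (the file names come from the question; substitute the name of the CRD that the operator creates):
kubectl apply -f 1.nats-cluster-operator.yaml
kubectl wait --for condition=established --timeout=60s crd/<crd-name-here>
kubectl apply -f 2.nats-cluster.yaml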
In case you are wanting to wait for a resource that may not exist yet, you can try something like this:
{ grep -q -m 1 "crontabs.stable.example.com"; kill $!; } < <(kubectl get crd -w)
or
{ sed -n '/crontabs.stable.example.com/q'; kill $!; } < <(kubectl get crd -w)
I understand the question would prefer to use only kubectl; however, this approach helped in my case. The downside to this method is that the timeout has to be handled another way and the condition itself is not actually checked.
In order to check the condition more thoroughly, I made the following:
#!/bin/bash

condition-established() {
    local name="crontabs.stable.example.com"
    local condition="Established"

    jq --arg NAME "$name" --arg CONDITION "$condition" -n \
        'first(inputs | if (.metadata.name==$NAME) and (.status.conditions[]?.type==$CONDITION) then
            null | halt_error else empty end)'

    # This is similar to the first, but the full condition is sent to stdout
    #jq --arg NAME "$name" --arg CONDITION "$condition" -n \
    #    'first(inputs | if (.metadata.name==$NAME) and (.status.conditions[]?.type==$CONDITION) then
    #        .status.conditions[] | select(.type==$CONDITION) else empty end)'
}

{ condition-established; kill $!; } < <(kubectl get crd -w -o json)

echo Complete
To explain what is happening, $! refers to the command run by bash's process substitution. I'm not sure how well this might work in other shells.
I tested with the CRD from the official Kubernetes documentation.

Tell when Job is Complete

I'm looking for a way to tell (from within a script) when a Kubernetes Job has completed. I want to then get the logs out of the containers and perform cleanup.
What would be a good way to do this? Would the best way be to run kubectl describe job <job_name> and grep for 1 Succeeded or something of the sort?
Since version 1.11, you can do:
kubectl wait --for=condition=complete job/myjob
and you can also set a timeout:
kubectl wait --for=condition=complete --timeout=30s job/myjob
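Since the goal is to then pull logs and clean up, a small sketch of how those steps can be chained (the job name and timeout are just examples):
kubectl wait --for=condition=complete --timeout=300s job/myjob
kubectl logs job/myjob
kubectl delete job/myjob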
You can visually watch a job's status with this command:
kubectl get jobs myjob -w
The -w option watches for changes. You are looking for the SUCCESSFUL column (COMPLETIONS in newer kubectl versions) to show 1.
For waiting in a shell script, I'd use this command:
until kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' | grep True ; do sleep 1 ; done
You can use the official Python kubernetes-client:
https://github.com/kubernetes-client/python
Create a new Python virtualenv:
virtualenv -p python3 kubernetes_venv
activate it with:
source kubernetes_venv/bin/activate
and install the kubernetes client with:
pip install kubernetes
Then create a new Python script and run:
from kubernetes import client, config

# Load credentials from the default kubeconfig (~/.kube/config)
config.load_kube_config()

batch_v1 = client.BatchV1Api()
ret = batch_v1.list_namespaced_job(namespace='<YOUR-JOB-NAMESPACE>', watch=False)
for i in ret.items:
    # status.succeeded is the number of pods that completed successfully for this job
    print(i.status.succeeded)
Remember to set up your kubeconfig in ~/.kube/config and to replace '<YOUR-JOB-NAMESPACE>' with a valid value for your job's namespace.
I would use -w or --watch:
$ kubectl get jobs.batch --watch
NAME     COMPLETIONS   DURATION   AGE
python   0/1           3m4s       3m4s
Adding the best answer, from a comment by @Coo: if you add a -f or --follow option when getting logs, it'll keep tailing the log and terminate when the job completes or fails. The $? status code is even non-zero when the job fails.
kubectl logs -l job-name=myjob --follow
One downside of this approach, that I'm aware of, is that there's no timeout option.
Another downside is the logs call may fail while the pod is in Pending (while the containers are being started). You can fix this by waiting for the pod:
# Wait for pod to be available; logs will fail if the pod is "Pending"
while [[ "$(kubectl get pod -l job-name=myjob -o json | jq -rc '.items | .[].status.phase')" == 'Pending' ]]; do
    # Avoid flooding k8s with polls (seconds)
    sleep 0.25
done

# Tail logs
kubectl logs -l job-name=myjob --tail=400 -f
Either one of these queries with kubectl works:
kubectl get job test-job -o jsonpath='{.status.succeeded}'
or
kubectl get job test-job -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}'
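For example, to block a script until the job reports success (the job name is from the example above; the poll interval is arbitrary):
# Poll the succeeded count until it reaches 1
until [[ "$(kubectl get job test-job -o jsonpath='{.status.succeeded}')" == "1" ]]; do
    sleep 5
done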
Although kubectl wait --for=condition=complete job/myjob and kubectl wait --for=condition=failed job/myjob allow us to check whether the job succeeded or failed, there is no way to wait for the job to simply finish executing, irrespective of success or failure. If this is what you are looking for, a simple bash while loop with a kubectl status check did the trick for me.
#!/bin/bash
while true; do
    status=$(kubectl get job jobname -o jsonpath='{.status.conditions[0].type}')
    echo "$status" | grep -qi 'Complete' && echo "0" && exit 0
    echo "$status" | grep -qi 'Failed' && echo "1" && exit 1
    # Avoid hammering the API server between checks
    sleep 1
done

How to configure kubectl with cluster information from a .conf file?

I have an admin.conf file containing info about a cluster, so that the following command works fine:
kubectl --kubeconfig ./admin.conf get nodes
How can I configure kubectl to use the cluster, user and authentication from this file as the default, in one command? I only see separate set-cluster, set-credentials, set-context, use-context etc. I want to get the same output when I simply run:
kubectl get nodes
Here is the official documentation for how to configure kubectl:
http://kubernetes.io/docs/user-guide/kubeconfig-file/
You have a few options; specific to this question, you can just copy your admin.conf to ~/.kube/config.
The best way I've found was to use an environment variable:
export KUBECONFIG=/path/to/admin.conf
I just alias the kubectl command into separate ones for my dev and production environments via .bashrc
alias k8='kubectl'
alias k8prd='kubectl --kubeconfig ~/.kube/config_prd.conf'
I prefer this method as it requires me to define the environment for each command, whereas using an environment variable could potentially lead to running a command against the wrong environment.
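For example (alias names from the answer above):
k8 get nodes      # uses the default kubeconfig (~/.kube/config)
k8prd get nodes   # uses ~/.kube/config_prd.conf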
The previous answers have been very solid and informative; I will try to add my 2 cents here.
Configure kubeconfig file knowing its precedence
If you’re using kubectl, here’s the preference that takes effect while determining which kubeconfig file is used.
1. use the --kubeconfig flag, if specified
2. use the KUBECONFIG environment variable, if specified
3. use the $HOME/.kube/config file
With this, you can easily override kubeconfig file you use per the kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --minify flag allows us to extract only info about that context, and the --flatten flag allows us to keep the credentials unredacted.
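As a side note, a sketch of what such a --minify extraction can look like (the context name and output file are just examples):
# Export a single context, credentials included, from the merged config
kubectl config view --minify --flatten --context=cluster-1 > cluster-1.kubeconfig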
For your example
kubectl get pods --kubeconfig=/path/to/admin.conf
#
# or:
#
KUBECONFIG=/path/to/admin.conf kubectl get pods
#
# or:
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:/path/to/admin.conf kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Although this precedence list is not officially specified in the documentation, it is codified in the kubectl source code. If you're developing client tools for Kubernetes, you should consider using the cli-runtime library, which will bring the standard --kubeconfig flag and $KUBECONFIG detection to your program.
ref article: https://ahmet.im/blog/mastering-kubeconfig/
I name all cluster configs .kubeconfig and this lives in the project directory.
Then in .bashrc or .bash_profile I have the following export:
export KUBECONFIG=.kubeconfig:$HOME/.kube/config
This way, when I'm in the project directory, kubectl will load the local .kubeconfig.
Hope that helps
kubectl uses ~/.kube/config as the default configuration file. So you could just copy your admin.conf over it.
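A sketch of that approach (the backup step is just a precaution and not part of the original answer):
# Back up any existing config, then make admin.conf the default
[ -f ~/.kube/config ] && cp ~/.kube/config ~/.kube/config.backup
cp ./admin.conf ~/.kube/config
kubectl get nodes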
Because there is no built-in kubectl config merge command at the moment (follow this) you can add this function to your .bashrc (or .zshrc):
function kmerge() {
    if [ $# -eq 0 ]; then
        echo "Please pass the location of the kubeconfig you wish to merge"
        return 1
    fi

    KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub && mv ~/.kube/mergedkub ~/.kube/config
}
Then you can just run from the terminal:
kmerge /path/to/admin.conf
and the config file will be merged to ~/.kube/config.
You can now switch to the new context with:
kubectl config use-context <new-context-name>
Or if you're using kubectx (recommended) you can run: kubectx <new-context-name>.
(The kmerge function is based on @MichaelSp's answer at this post.)
Kubernetes keeps the paths to search for config files in $KUBECONFIG.
If you want to add one more config path on top of the existing KUBECONFIG without overriding it (keeping ~/.kube/config as the default path to search), just run the following each time you want to add a conf file to the KUBECONFIG path:
export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/admin.conf
You can check it worked by listing the available contexts
kubectl config get-contexts
Then select the one you want to use
kubectl config use-context <context-name>
To manage your config files properly, place the snippet below in your profile file and source your .profile / .bash_profile:
for kconfig in $HOME/.kube/config $(find $HOME/.kube/ -iname "*.config"); do
    if [ -f "$kconfig" ]; then
        export KUBECONFIG=$KUBECONFIG:$kconfig
    fi
done
Then switch contexts from kubectl.
When you type kubectl, I guess you prefer to know which cluster you are pointing at. Maybe it's worth creating an alias for that?
alias kube-mycluster='kubectl --kubeconfig ~/.kube/mycluster.conf'
This is possible:
export KUBECONFIG=~/.kube/config:~/.kube/cluster0:~/.kube/cluster1:~/.kube/cluster3
and:
kubectl config use-context cluster0