How to set kube-proxy settings using kubectl on AKS - kubernetes

I keep reading documentation that gives parameters for kube-proxy, but does not explain where these parameters are supposed to be set. I create my cluster using az aks create with the azure-cli program, then I get credentials and use kubectl. So far everything I've done has involved YAML for services and deployments and such, but I can't figure out where all this kube-proxy configuration fits into all of this.
I've googled for days. I've opened issues on the AKS GitHub repo. I've asked on the Kubernetes Slack channel, but nobody has responded.

The kube-proxy on all your Kubernetes nodes runs as a Kubernetes DaemonSet, and its configuration is stored in a Kubernetes ConfigMap. To make any changes or add/remove options, you will have to edit the kube-proxy DaemonSet or ConfigMap in the kube-system namespace:
$ kubectl -n kube-system edit daemonset kube-proxy
or
$ kubectl -n kube-system edit configmap kube-proxy
For a reference on the kube-proxy command-line options, see the official kube-proxy reference documentation.
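For example, a minimal sketch of that workflow might look like the following. Note that ConfigMap changes are not picked up until the kube-proxy pods restart; the rollout restart subcommand assumes kubectl 1.15+, and on older versions you can simply delete the kube-proxy pods and let the DaemonSet recreate them:
$ kubectl -n kube-system edit configmap kube-proxy        # e.g. change the proxy mode
$ kubectl -n kube-system rollout restart daemonset kube-proxy
$ kubectl -n kube-system get pods | grep kube-proxy       # verify the pods come back up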

Related

How to restore an accidentally deleted kube-proxy DaemonSet in a Kubernetes cluster?

I accidentally deleted the kube-proxy DaemonSet using the command kubectl delete -n kube-system daemonset kube-proxy, which should run kube-proxy pods in my cluster. What is the best way to restore it?
If your cluster was created with kubeadm, you can reinstall kube-proxy by running the following command, which installs the kube-proxy addon components via the API server.
$ kubeadm init phase addon kube-proxy --kubeconfig ~/.kube/config --apiserver-advertise-address <api-server-ip>
This will generate the output as
[addons] Applied essential addon: kube-proxy
(Here --apiserver-advertise-address is the IP address the API server will advertise it's listening on; if not set, the default network interface is used.)
Hence kube-proxy will be reinstalled in the cluster by creating a DaemonSet and launching the pods.
The kube-proxy DaemonSet was created at cluster creation time, so if you are not on kubeadm you will need to write your own DaemonSet manifest, unless you have a backup to restore it from.
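To guard against this situation in the future, it is cheap to snapshot the manifest before touching it; a minimal sketch (the backup filename is arbitrary):
$ kubectl -n kube-system get daemonset kube-proxy -o yaml > kube-proxy-backup.yaml
$ kubectl apply -f kube-proxy-backup.yaml    # restore later if it gets deleted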

How to describe the entire cluster (running nodes and the basic per-node information we get with kubectl describe nodes) in Kubernetes maintenance?

kubectl describe nodes?
Likewise, do we have any command like the one below to describe cluster information?
kubectl describe cluster
"Kubectl describe <api-resource_type> <api_resource_name> "command is used to describe a specific resources running in your kubernetes cluster, Actually you need to verify different components separately as a developer to check your pods, nodes services and other tools that you have applied/created.
If you are the cluster administrator and are asking about useful commands to check the actual kube-system configuration, it depends on your cluster type. For example, if you used the kubeadm package to initialize a k8s cluster on premises, you can check and change the default cluster configuration using this command:
kubeadm config print init-defaults
After initializing your cluster, all the main control-plane configuration files, a.k.a. manifests, are located in /etc/kubernetes/manifests (and they are watched in real time: change anything and the cluster will redeploy it automatically).
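For example, on a typical kubeadm control-plane node you would expect to see the static pod manifests for the core components (the exact file list can vary by version and setup):
$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml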
Useful kubectl commands:
For cluster info (API server endpoint and DNS), run:
kubectl cluster-info
Either way, you can list all api-resources and check them one by one using these commands:
kubectl api-resources (list all api-resource names and types)
kubectl get <api_resource_name> (specific to your cluster)
kubectl explain <api_resource_name> (explain the resource object with docs link)
For extra info you can add specific flags, for example:
kubectl get nodes -o wide
kubectl get pods -n <specific-name-space> -o wide
kubectl describe pods <pod_name>
...
For more information about the kubectl command line, check the kubectl cheat sheet.
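As a concrete example of the explain workflow mentioned above, you can drill into nested fields of a resource with dotted paths:
$ kubectl explain node.status.conditions
$ kubectl explain pod.spec.containers.resources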

k3s cleanup of HelmChart?

I have followed instructions from a blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed as a HelmChart, I think.
From the k3s docs:
"It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as following (example taken from /var/lib/rancher/k3s/server/manifests/traefik.yaml):"
So I have been starting up my k3s with the --no-deploy traefik option in order to add it manually with my own settings, applying a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard.enabled: "true"
    dashboard.domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working as I want, I'm having trouble tearing it down. If I try kubectl delete -f on this yaml it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting now to just reinstall my entire cluster over and over because I can't seem to cleanup properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here:
Use the --now flag to delete your yaml file with minimal delay.
Use --grace-period=0 --force flags to force delete the resource.
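For example, with the traefik.yaml from the question (both flags are standard kubectl delete options):
$ kubectl delete -f traefik.yaml --now
$ kubectl delete -f traefik.yaml --grace-period=0 --force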
There are other options but you'll need Helm CLI for them.
Please let me know if that helped.

How to Add or Repair kube-dns in EKS?

I'm running 1.10.13 on EKS on two clusters. I'm aware kube-dns will soon be obsolete in favor of CoreDNS on 1.11+.
One of our clusters has a functioning kube-dns deployment.
The other cluster does not have kube-dns objects running.
I've pulled the kube-dns ServiceAccount, ClusterRole, ClusterRoleBinding, Deployment, and Service manifests from the working cluster using kubectl get <k8s-object> --export.
Now I plan on applying those files to a different cluster.
However, I still see a kube-dns secret and I'm not sure how that is created or where I can get it.
This all seems pretty roundabout. What is the proper way of installing or repairing kube-dns on an EKS cluster?
I believe the secret is usually created automatically as part of the ServiceAccount; you'd still need to delete it if it's there.
To create kube-dns you can try applying the official manifest:
$ kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
Like you mentioned, you should consider moving to coredns as soon as possible.
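To verify the result afterwards, a quick sanity check might look like this (busybox:1.28 is often used because nslookup is broken in some newer busybox images; the pod name dns-test is arbitrary):
$ kubectl -n kube-system get pods -l k8s-app=kube-dns
$ kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default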

Kubectl using command to get cluster status

I need to create a shell script which examines the cluster status.
I saw that kubectl describe nodes provides lots of data.
I can output it to json and then parse it but maybe it’s just overkill.
Is there a simple way with a kubectl command to get the status of the cluster? Just whether it's up or down.
The least expensive way to check if you can reach the API server is kubectl version. In addition kubectl cluster-info gives you some more info.
In addition to Michael's answer: that would only tell you about the API server or master and internal services like KubeDNS etc., but not the nodes.
It depends on your need and definition of "status" here. You could run kubectl cluster-info followed by kubectl get nodes and check the STATUS column for all nodes using parsing tools like awk, jq or kubectl's own -o jsonpath option to verify that all nodes are ready.
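For example, a minimal sketch of such a readiness check: the first command prints each node with its Ready condition via jsonpath, and the second flags any node whose STATUS column isn't exactly "Ready":
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
$ kubectl get nodes --no-headers | awk '$2 != "Ready" {print $1 " is not Ready"}'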
The command below displays the health of the scheduler, the controller manager, and etcd:
kubectl get cs
The command below lists Kubernetes core components such as etcd, the controller manager, the scheduler, kube-proxy, CoreDNS, and the network plugin. All of those pods should be running for Kubernetes to be considered healthy:
kubectl get pod -n kube-system
Finally, deploy one front-end and one back-end Pod and verify inter-pod communication to ensure the cluster is up and working correctly.
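A minimal sketch of that smoke test, assuming a recent kubectl where run creates a bare Pod, and using nginx and busybox purely as stand-ins (the names backend and frontend are made up for illustration):
$ kubectl run backend --image=nginx --port=80
$ kubectl expose pod backend --port=80
$ kubectl run frontend --image=busybox:1.28 --rm -it --restart=Never -- wget -qO- http://backend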
Below are the commands to get cluster status based on requirements:
To get information about where your Kubernetes master, CoreDNS, and kubernetes-dashboard are running, use
kubectl cluster-info
To get detailed information for further debugging and diagnosing cluster problems, use kubectl cluster-info dump.
To get only the health status of the control-plane components, use kubectl get componentstatus or kubectl get cs.
To show detailed information about a resource, use kubectl describe node <node>.