CoreDNS pods are running on only one master - Kubernetes

I have set up a Kubernetes HA cluster with 3 masters, version 1.14.2. I observed that both coredns pods are running on only one master. If I stop that master, CoreDNS goes down with it. Is there any configuration to spread the coredns pods across the remaining masters?

You need to deploy the DNS horizontal autoscaler and then tune its autoscaling parameters. Follow this guide:
https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/
The steps are:
kubectl apply -f https://raw.githubusercontent.com/epasham/docker-repo/master/k8s/dns-horizontal-autoscaler.yaml
kubectl get deployment --namespace=kube-system
kubectl edit configmap dns-autoscaler --namespace=kube-system
Look for this line:
linear: '{"coresPerReplica":256,"min":1,"nodesPerReplica":16}'
Update the min value to 2 as shown below:
kubectl edit configmap dns-autoscaler --namespace=kube-system
linear: '{"coresPerReplica":256,"min":2,"nodesPerReplica":16}'
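For context, the cluster-proportional-autoscaler's linear mode computes the replica count from these parameters roughly as follows (a sketch of the documented formula, not output from a real cluster):
replicas = max( ceil( cores / coresPerReplica ), ceil( nodes / nodesPerReplica ), min )
So raising min to 2 guarantees at least two coredns replicas even on a small cluster. Note that to actually land those replicas on different masters you may additionally need a podAntiAffinity rule, since the autoscaler only controls the count.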
You should then get two coredns pods listed, as below:
master $ kubectl get po --namespace=kube-system|grep dns
coredns-78fcdf6894-l54db 1/1 Running 0 1h
coredns-78fcdf6894-vbk6q 1/1 Running 0 1h
dns-autoscaler-6f888f5957-fwpgl 1/1 Running 0 2m

Kubectl: no resources found even though there are pods running in the namespace

I have 2 pods running in the default namespace, as shown below:
NAMESPACE NAME READY STATUS RESTARTS AGE
default alpaca-prod 1/1 Running 0 36m
default alpaca-test 1/1 Running 0 4m26s
kube-system coredns-78fcd69978-xd7jw 1/1 Running 0 23h
But when I try to get deployments, I do not see any:
kubectl get deployments
No resources found in default namespace.
Can someone explain this behavior? I am running K8s on Minikube.
I think these are pods that were created without a Deployment, StatefulSet or DaemonSet.
You can create a pod like this with the following command, e.g.:
kubectl run nginx-test --image=nginx -n default
Pods created via a DaemonSet usually end with -xxxxx.
Pods created via a Deployment usually end with -xxxxxxxxxx-xxxxx.
Pods created via a StatefulSet usually end with -0, -1, etc.
Pods created without an owning resource usually have exactly the name you specified, e.g. nginx-test, nginx, etc.
So my guess is that these are standalone Pod resources (the last option).
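One way to confirm this is to check a pod's ownerReferences, which a standalone pod does not have (pod name taken from the question):
kubectl get pod alpaca-prod -o jsonpath='{.metadata.ownerReferences}'
If this prints nothing, the pod was created directly rather than by a controller.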

Add some labels to a deployment?

I'm a complete beginner with K8s and I have a question about labels in Kubernetes. In a YouTube video (in French), I saw the following:
The presenter creates three deployments with these commands, then runs kubectl get deployments followed by kubectl get deployments --show-labels:
kubectl run monnginx --image nginx --labels "env=prod,group=front"
kubectl run monnginx2 --image nginx --labels "env=dev,group=front"
kubectl run monnginx3 --image nginx --labels "env=prod,group=back"
root@kubmaster:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
monnginx 1/1 1 1 46s
monnginx2 1/1 1 1 22s
monnginx3 1/1 1 1 10s
root@kubmaster:~# kubectl get deployments --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
monnginx 1/1 1 1 46s env=prod,group=front
monnginx2 1/1 1 1 22s env=dev,group=front
monnginx3 1/1 1 1 10s env=prod,group=back
Currently, if I try to do the same thing:
root@kubermaster:~# kubectl run mynginx --image nginx --labels "env=prod,group=front"
pod/mynginx created
root@kubermaster:~# kubectl run mynginx2 --image nginx --labels "env=dev,group=front"
pod/mynginx2 created
root@kubermaster:~# kubectl run mynginx3 --image nginx --labels "env=dev,group=back"
pod/mynginx3 created
When I try the command kubectl get deployments --show-labels, the output is:
No resources found in default namespace.
But if I try kubectl get pods --show-labels, the output is:
NAME READY STATUS RESTARTS AGE LABELS
mynginx 1/1 Running 0 2m39s env=prod,group=front
mynginx2 1/1 Running 0 2m32s env=dev,group=front
mynginx3 1/1 Running 0 2m25s env=dev,group=back
If I follow every step from the video, there should be a way to put labels on deployments... But the command kubectl create deployment does not accept the --labels flag:
Error: unknown flag: --labels
Can someone explain why I get this error, and how I can set labels?
Thanks a lot!
Because $ kubectl create deployment doesn't support the --labels flag. But you can use $ kubectl label to add labels to your deployment.
Examples:
# Update deployment 'my-deployment' with the label 'unhealthy' and the value 'true'.
$ kubectl label deployment my-deployment unhealthy=true
# Update deployment 'my-deployment' with the label 'status' and the value 'unhealthy', overwriting any existing value.
$ kubectl label --overwrite deployment my-deployment status=unhealthy
It works with other Kubernetes objects too.
Format: kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
I think the problem is something different.
Up to Kubernetes 1.17, the command kubectl run created a Deployment.
Since Kubernetes 1.18, the command kubectl run creates a Pod.
From the Kubernetes 1.18 release notes:
kubectl run has removed the previously deprecated generators, along with flags
unrelated to creating pods. kubectl run now only creates pods. See specific
kubectl create subcommands to create objects other than pods. (#87077,
#soltysh) [SIG Architecture, CLI and Testing]
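So on 1.18+ the equivalent of the video's workflow is to create the Deployment explicitly and label it afterwards, e.g. (a sketch reusing the names from the question):
kubectl create deployment monnginx --image=nginx
kubectl label deployment monnginx env=prod group=front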
As of 2022 you can use the following imperative command to create a pod with labels:
kubectl run POD_NAME --image IMAGE_NAME -l myapp=app
where myapp=app is the label as a key=value pair.
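If you want the labels on the Deployment itself at creation time, a declarative manifest works on any version (a minimal sketch; the names are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monnginx
  labels:
    env: prod
    group: front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monnginx
  template:
    metadata:
      labels:
        app: monnginx
    spec:
      containers:
      - name: nginx
        image: nginx
Save it as monnginx.yaml and run kubectl apply -f monnginx.yaml.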

Inquiring pod and service subnets from inside a Kubernetes cluster

How can one inquire the Kubernetes pod and service subnets in use (e.g. 10.244.0.0/16 and 10.96.0.0/12 respectively) from inside a Kubernetes cluster in a portable and simple way?
For instance, kubectl get cm -n kube-system kubeadm-config -o yaml reports podSubnet and serviceSubnet. But this is not fully portable, because a cluster may have been set up by means other than kubeadm.
kubectl get cm -n kube-system kube-proxy -o yaml reports clusterCIDR (i.e. the pod subnet), and kubectl get pod -n kube-system kube-apiserver-master1 -o yaml reports the value passed as the command-line option --service-cluster-ip-range to kube-apiserver (i.e. the service subnet), where master1 stands for the name of any control plane node. But this seems a bit complex.
Is there a better way available e.g. with the Kubernetes 1.17 API?
I don't think it is possible to obtain what you want in a portable and simple way.
If you don't specify the CIDR parameters, defaults are assigned.
Since there are many ways to run Kubernetes, as unmanaged clusters (kubeadm, Minikube, k3s, MicroK8s) or managed ones from cloud providers (GKE, Azure, AWS), it's hard to find one way to list all CIDRs in all environments. Another obstacle can be the version of Kubernetes or of the CNI plugin.
In the Kubernetes 1.17 release notes you can find this information:
Deprecate the default service IP CIDR. The previous default was 10.0.0.0/24 which will be removed in 6 months/2 releases. Cluster admins must specify their own desired value, by using --service-cluster-ip-range on kube-apiserver.
As an example with kubeadm: $ kubeadm init --pod-network-cidr 10.100.0.0/12 --service-cidr 10.99.0.0/12
There are a few ways to get the pod and service CIDRs:
$ kubectl cluster-info dump | grep -E '(service-cluster-ip-range|cluster-cidr)'
"--service-cluster-ip-range=10.99.0.0/12",
"--cluster-cidr=10.100.0.0/12",
$ kubeadm config view | grep Subnet
podSubnet: 10.100.0.0/12
serviceSubnet: 10.99.0.0/12
But if you check all pods in this cluster, some of them have IPs starting with 192.168.190.X or 192.168.137.X:
$ kubectl get pods -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx 1/1 Running 0 62m 192.168.190.129 kubeadm-worker <none> <none>
kube-system calico-kube-controllers-77c5fc8d7f-9n6m5 1/1 Running 0 118m 192.168.137.66 kubeadm-master <none> <none>
kube-system calico-node-2kx2v 1/1 Running 0 117m 10.128.0.4 kubeadm-worker <none> <none>
kube-system calico-node-8xqd9 1/1 Running 0 118m 10.128.0.3 kubeadm-master <none> <none>
kube-system coredns-66bff467f8-sgmkw 1/1 Running 0 120m 192.168.137.65 kubeadm-master <none> <none>
kube-system coredns-66bff467f8-t84ht 1/1 Running 0 120m 192.168.137.67 kubeadm-master <none> <none>
If you describe any of the CNI pods, you can find yet another CIDR:
CALICO_IPV4POOL_CIDR: 192.168.0.0/16
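For example, with one of the calico-node pods from the listing above:
$ kubectl describe pod -n kube-system calico-node-2kx2v | grep CALICO_IPV4POOL_CIDR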
For GKE, as an example, you will have node CIDRs:
$ kubectl describe node | grep CIDRs
PodCIDRs: 10.52.1.0/24
PodCIDRs: 10.52.0.0/24
PodCIDRs: 10.52.2.0/24
$ gcloud container clusters describe cluster-2 --zone=europe-west2-b | grep Cidr
clusterIpv4Cidr: 10.52.0.0/14
clusterIpv4Cidr: 10.52.0.0/14
clusterIpv4CidrBlock: 10.52.0.0/14
servicesIpv4Cidr: 10.116.0.0/20
servicesIpv4CidrBlock: 10.116.0.0/20
podIpv4CidrSize: 24
servicesIpv4Cidr: 10.116.0.0/20
Honestly I don't think there is an easy and portable way to list all podCidrs and serviceCidrs in one simple command.
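That said, one widely shared trick for finding the service CIDR is to ask the API server to allocate an out-of-range ClusterIP; the validation error reveals the valid range (the service name is a throwaway, and the exact error wording varies by version):
$ kubectl create service clusterip dummy --clusterip=1.1.1.1
The Service "dummy" is invalid: spec.clusterIPs: Invalid value: []string{"1.1.1.1"}: failed to allocate IP 1.1.1.1: provided IP is not in the valid range. The range of valid IPs is 10.96.0.0/12
Nothing gets created when the call fails, so there is nothing to clean up.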

Kubernetes pods are Pending, not active

If I run this:
kubectl get pods -n kube-system
I get this output:
NAME READY STATUS RESTARTS AGE
coredns-6fdd4f6856-6bl64 0/1 Pending 0 1h
coredns-6fdd4f6856-xgrbm 0/1 Pending 0 1h
kubernetes-dashboard-65c76f6c97-c69jg 0/1 Pending 0 13m
Supposedly I need a Kubernetes scheduler in order to actually launch containers? Does anyone know how to start a kube-scheduler?
More than a Kubernetes scheduler issue, it looks like your cluster does not have enough resources on its nodes (or has no nodes at all) to schedule any workloads. You can check your nodes with:
$ kubectl get nodes
Also, you are likely not able to see any control plane resources in the kube-system namespace because you may be using a managed service such as EKS or GKE.
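To see the concrete reason a pod is Pending, describe it and check the Events section at the bottom, e.g. with one of the pod names from the question:
$ kubectl describe pod coredns-6fdd4f6856-6bl64 -n kube-system
Typical events are "no nodes available to schedule pods" or "Insufficient cpu"/"Insufficient memory".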

Pods on Google Kubernetes Engine pending?

I'm trying to set up a JupyterHub server on Google Kubernetes Engine following this tutorial. Everything went through fine, but when I install the jupyterhub/jupyterhub image with Helm, the pods are always shown as Pending:
kubectl --namespace=jupyter-server get pod
NAME READY STATUS RESTARTS AGE
hub-6dbd4df8b8-nqvnf 0/1 Pending 0 17h
proxy-7bb666576c-fx726 0/2 Pending 0 17h
Even after 17 hours.
The Helm version is 2.6.2, as suggested in the tutorial, and I am using 3 f1-micro instances in the Kubernetes cluster. Are these instances too small? Thanks for any advice.
Try describing the pods, and then the nodes in the cluster, to get more info about why exactly they're still Pending:
kubectl describe po/hub-6dbd4df8b8-nqvnf -n jupyter-server
kubectl describe po/proxy-7bb666576c-fx726 -n jupyter-server
kubectl describe nodes
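In the Events section of those outputs, look for messages like Insufficient cpu or Insufficient memory. f1-micro instances have very little allocatable capacity, so it is worth comparing it against the pods' resource requests (a quick, approximate check):
kubectl describe nodes | grep -A 6 Allocatable
If the allocatable CPU/memory is smaller than what the JupyterHub chart requests, you will need larger machine types.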