kube-apiserver.service is running with --authorization-mode=Node,RBAC
$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
I believe this is enough to enable RBAC.
However, any new user I create can view all resources without any RoleBindings.
Steps to create new user:
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes nonadmin-csr.json | cfssljson -bare nonadmin
$ kubectl config set-cluster nonadmin --certificate-authority ca.pem --server https://127.0.0.1:6443
$ kubectl config set-credentials nonadmin --client-certificate nonadmin.pem --client-key nonadmin-key.pem
$ kubectl config set-context nonadmin --cluster nonadmin --user nonadmin
$ kubectl config use-context nonadmin
The user nonadmin can view pods and services without any RoleBindings:
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 5d4h
ingress-nginx ingress-nginx NodePort 10.32.0.129 <none> 80:30989/TCP,443:30686/TCP 5d3h
kube-system calico-typha ClusterIP 10.32.0.225 <none> 5473/TCP 5d3h
kube-system kube-dns ClusterIP 10.32.0.10 <none> 53/UDP,53/TCP 5d3h
rook-ceph rook-ceph-mgr ClusterIP 10.32.0.2 <none> 9283/TCP 4d22h
rook-ceph rook-ceph-mgr-dashboard ClusterIP 10.32.0.156 <none> 8443/TCP 4d22h
rook-ceph rook-ceph-mon-a ClusterIP 10.32.0.55 <none> 6790/TCP 4d22h
rook-ceph rook-ceph-mon-b ClusterIP 10.32.0.187 <none> 6790/TCP 4d22h
rook-ceph rook-ceph-mon-c ClusterIP 10.32.0.128 <none> 6790/TCP 4d22h
Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
This is an unmanaged Kubernetes setup on Ubuntu 18 VMs.
Where am I going wrong?
Edit 1: adding the kubectl config view output
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/dadmin/ca.pem
    server: https://192.168.1.111:6443
  name: gabbar
- cluster:
    certificate-authority: /home/dadmin/ca.pem
    server: https://127.0.0.1:6443
  name: nonadmin
- cluster:
    certificate-authority: /home/dadmin/ca.pem
    server: https://192.168.1.111:6443
  name: kubernetes
contexts:
- context:
    cluster: gabbar
    namespace: testing
    user: gabbar
  name: gabbar
- context:
    cluster: nonadmin
    user: nonadmin
  name: nonadmin
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: nonadmin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: /home/dadmin/admin.pem
    client-key: /home/dadmin/admin-key.pem
- name: gabbar
  user:
    client-certificate: /home/dadmin/gabbar.pem
    client-key: /home/dadmin/gabbar-key.pem
- name: nonadmin
  user:
    client-certificate: /home/dadmin/nonadmin.pem
    client-key: /home/dadmin/nonadmin-key.pem
Edit 2:
Solution as suggested by @VKR:
cat > operator-csr.json <<EOF
{
  "CN": "operator",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IN",
      "L": "BGLR",
      "O": "system:view",   <==== HERE
      "OU": "CKA"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
operator-csr.json | cfssljson -bare operator
MasterNode~$ kubectl config set-cluster operator --certificate-authority ca.pem --server $SERVER
Cluster "operator" set.
MasterNode~$ kubectl config set-credentials operator --client-certificate operator.pem --client-key operator-key.pem
User "operator" set.
MasterNode~$ kubectl config set-context operator --cluster operator --user operator
Context "operator" created.
MasterNode~$ kubectl auth can-i get pods --as operator
no
MasterNode~$ kubectl create rolebinding operator --clusterrole view --user operator -n default --save-config
rolebinding.rbac.authorization.k8s.io/operator created
MasterNode~$ cat crb-view.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view
subjects:
- kind: User
  name: operator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
MasterNode~$ kubectl create -f crb-view.yml --record --save-config
clusterrolebinding.rbac.authorization.k8s.io/view created
MasterNode~$ kubectl auth can-i get pods --as operator --all-namespaces
yes
MasterNode~$ kubectl auth can-i create pods --as operator --all-namespaces
no
MasterNode~$ kubectl config use-context operator
Switched to context "operator".
MasterNode~$ kubectl auth can-i "*" "*"
no
MasterNode~$ kubectl run db --image mongo
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
Error from server (Forbidden): deployments.apps is forbidden: User "operator" cannot create resource "deployments" in API group "apps" in the namespace "default"
Most probably the root cause of this behavior is that you set the "O": "system:masters" group while generating nonadmin-csr.json.
The system:masters group is bound to the default cluster-admin super-user role, and as a result every newly created user in that group gets full access.
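You can verify that mapping on your cluster: the default ClusterRoleBinding named cluster-admin grants the cluster-admin role to the group system:masters, so any certificate whose subject carries O=system:masters is effectively a super-user:
$ kubectl describe clusterrolebinding cluster-admin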
Here is a good article that provides step-by-step instructions on how to create users with limited namespace access.
A quick test shows that similar users with different groups have very different levels of access:
-subj "/CN=employee/O=testgroup" :
kubectl --context=employee-context get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "employee" cannot list resource "pods" in API group "" at the cluster scope
-subj "/CN=newemployee/O=system:masters" :
kubectl --context=newemployee-context get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-797b884cbc-pckj6 1/1 Running 0 85d
ingress-nginx prometheus-server-8658d8cdbb-92629 1/1 Running 0 36d
kube-system coredns-86c58d9df4-gwk28 1/1 Running 0 92d
kube-system coredns-86c58d9df4-jxl84 1/1 Running 0 92d
kube-system etcd-kube-master-1 1/1 Running 0 92d
kube-system kube-apiserver-kube-master-1 1/1 Running 0 92d
kube-system kube-controller-manager-kube-master-1 1/1 Running 4 92d
kube-system kube-flannel-ds-amd64-k6sgd 1/1 Running 0 92d
kube-system kube-flannel-ds-amd64-mtrnc 1/1 Running 0 92d
kube-system kube-flannel-ds-amd64-zdzjl 1/1 Running 1 92d
kube-system kube-proxy-4pm27 1/1 Running 1 92d
kube-system kube-proxy-ghc7w 1/1 Running 0 92d
kube-system kube-proxy-wsq4h 1/1 Running 0 92d
kube-system kube-scheduler-kube-master-1 1/1 Running 4 92d
kube-system tiller-deploy-5b7c66d59c-6wx89 1/1 Running 0 36d
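As an extra sanity check (not part of the original answer), you can inspect which group an existing client certificate encodes; the O= field of the subject is what the API server treats as the user's group:
$ openssl x509 -in nonadmin.pem -noout -subject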
Related
I'm trying to install Kubernetes with the dashboard, but I get the following issue:
test@ubuntukubernetes1:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-ksc9n 0/1 CrashLoopBackOff 14 (2m15s ago) 49m
kube-system coredns-6d4b75cb6d-27m6b 0/1 ContainerCreating 0 4h
kube-system coredns-6d4b75cb6d-vrgtk 0/1 ContainerCreating 0 4h
kube-system etcd-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-apiserver-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-controller-manager-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-proxy-6v8w6 1/1 Running 1 (106m ago) 4h
kube-system kube-scheduler-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kubernetes-dashboard dashboard-metrics-scraper-7bfdf779ff-dfn4q 0/1 Pending 0 48m
kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-9kh7h 0/1 Pending 0 73m
kubernetes-dashboard kubernetes-dashboard-5676d8b865-q459s 0/1 Pending 0 73m
kubernetes-dashboard kubernetes-dashboard-6cdd697d84-kqnxl 0/1 Pending 0 48m
test@ubuntukubernetes1:~$
Log files:
test@ubuntukubernetes1:~$ kubectl logs --namespace kube-flannel kube-flannel-ds-ksc9n
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0808 23:40:17.324664 1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0808 23:40:17.324753 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0808 23:40:17.547453 1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-ksc9n': pods "kube-flannel-ds-ksc9n" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
test@ubuntukubernetes1:~$
Do you know how this issue can be solved? I tried the following installation:
swapoff -a
Remove following line from /etc/fstab
/swap.img none swap sw 0 0
sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo apt install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt update
sudo apt install kubeadm kubelet kubectl kubernetes-cni
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl proxy --address 192.168.1.133 --accept-hosts '.*'
Can you advise?
I had the same situation on a new deployment today. It turns out the kube-flannel-rbac.yml file had the wrong namespace. It's now 'kube-flannel', not 'kube-system', so I modified it and re-applied.
I also added a 'namespace' entry under each 'name' entry in kube-flannel.yml, except under the roleRef heading (it threw an error when I added it there). All pods came up as 'Running' after the new YAML was applied.
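For reference, this is roughly what the corrected binding subject looks like after the change (a sketch based on the upstream kube-flannel.yml, which creates the flannel ServiceAccount in the kube-flannel namespace):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel   # was kube-system in the old kube-flannel-rbac.yml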
It seems like the problem is with kube-flannel-rbac.yml:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
It expects the ServiceAccount to be in kube-system:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
so just delete it:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
since kube-flannel.yml already creates it in the right namespace:
https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml#L43
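A quick read-only way to confirm which namespace the binding currently points at:
kubectl get clusterrolebinding flannel -o jsonpath='{.subjects[0].namespace}{"\n"}'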
I tried to deploy a 3-node cluster with 1 master and 2 workers, following a similar method to the one described above.
Then I tried to deploy Nginx, but it failed. When I checked my pods, flannel on the master was running, but on the worker nodes it was failing.
I deleted flannel and started from the beginning.
First I applied only kube-flannel.yml, since there was some mention that kube-flannel-rbac.yml was causing issues.
ubuntu#master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
ubuntu#master:~$ kubectl describe ClusterRoleBinding flannel
Name: flannel
Labels:
Annotations:
Role:
Kind: ClusterRole
Name: flannel
Subjects:
Kind Name Namespace
ServiceAccount flannel kube-flannel
Then I was able to create the nginx deployment.
However, I then deleted it and applied the second YAML, which changed the namespace:
ubuntu#master:~$ kubectl describe ClusterRoleBinding flannel
Name: flannel
Labels:
Annotations:
Role:
Kind: ClusterRole
Name: flannel
Subjects:
Kind Name Namespace
ServiceAccount flannel kube-system
and again the nginx deployment was successful.
What is the purpose of this config? Is it needed, given that the deployment works both with and without it?
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
I'm trying to ping the kube-dns service from a dnstools pod using the cluster IP assigned to the kube-dns service. The ping request times out. From the same dnstools pod, I tried to curl the kube-dns service using the exposed port, but that timed out as well.
Following is the output of kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default pod/busybox 1/1 Running 62 2d14h 192.168.1.37 kubenode <none>
default pod/dnstools 1/1 Running 0 2d13h 192.168.1.45 kubenode <none>
default pod/nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 6d11h 192.168.1.5 kubenode <none>
default pod/nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 6d11h 192.168.1.4 kubenode <none>
dmi pod/elastic-deploy-5d7c85b8c-btptq 1/1 Running 0 2d14h 192.168.1.39 kubenode <none>
kube-system pod/calico-node-68lc7 2/2 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/calico-node-9c2jz 2/2 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5nprd 1/1 Running 0 6d12h 192.168.0.2 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5vw95 1/1 Running 0 6d12h 192.168.0.3 kubemaster <none>
kube-system pod/etcd-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-apiserver-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-controller-manager-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-proxy-9hcgv 1/1 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/kube-proxy-bxw9s 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-scheduler-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/tiller-deploy-767d9b9584-5k95j 1/1 Running 0 3d9h 192.168.1.8 kubenode <none>
nginx-ingress pod/nginx-ingress-66wts 1/1 Running 0 5d17h 192.168.1.6 kubenode <none>
In the above output, why do some pods have an IP assigned in the 192.168.0.0/24 subnet whereas others have an IP that is equal to the IP address of my node/master? (10.62.194.4 is the IP of my master, 10.62.194.5 is the IP of my node)
This is the config.yml I used to initialize the cluster using kubeadm init --config=config.yml
apiServer:
  certSANs:
  - 10.62.194.4
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: dev-cluster
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Result of kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h <none>
default service/nginx-deploy ClusterIP 10.97.5.194 <none> 80/TCP 5d17h run=nginx
dmi service/elasticsearch ClusterIP 10.107.84.159 <none> 9200/TCP,9300/TCP 2d14h app=dmi,component=elasticse
dmi service/metric-server ClusterIP 10.106.117.2 <none> 8098/TCP 2d14h app=dmi,component=metric-se
kube-system service/calico-typha ClusterIP 10.97.201.232 <none> 5473/TCP 6d12h k8s-app=calico-typha
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d12h k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.98.133.94 <none> 44134/TCP 3d9h app=helm,name=tiller
The command I ran was kubectl exec -ti dnstools -- curl 10.96.0.10:53
EDIT:
I raised this question because I got this error when trying to resolve service names from within the cluster. I was under the impression that I got this error because I cannot ping the DNS server from a pod.
Output of kubectl exec -ti dnstools -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
Output of kubectl exec dnstools cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
Result of kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 192.168.0.2:53,192.168.0.3:53,192.168.0.2:53 + 3 more... 6d13h
EDIT:
Ping-ing the CoreDNS pod directly using its Pod IP times out as well:
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
24 packets transmitted, 0 packets received, 100% packet loss
EDIT:
I think something went wrong when I was setting up the cluster. Below are the steps I took:
Edited the hosts files on the master and worker to include the IPs and hostnames of the nodes
Disabled swap using swapoff -a and disabled swap permanently by editing /etc/fstab
Install docker prerequisites using apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Added Docker GPG key using curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Added Docker repo using add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install Docker using apt-get update -y; apt-get install docker-ce -y
Install Kubernetes prerequisites using curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Added Kubernetes repo using echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update repo and install Kubernetes components using apt-get update -y; apt-get install kubelet kubeadm kubectl -y
Configure master node:
kubeadm init --apiserver-advertise-address=10.62.194.4 --apiserver-cert-extra-sans=10.62.194.4 --pod-network-cidr=192.168.0.0/16
Copy Kube config to $HOME: mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
Installed Calico using kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml; kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
On node:
On the node I did the kubeadm join command using the command printed out from kubeadm token create --print-join-command on the master
The Kubernetes system pods get assigned the host IP because they provide low-level services that are not dependent on an overlay network (or, in the case of Calico, even provide the overlay network). They have the IP of the node where they run.
A regular pod uses the overlay network and gets assigned an IP from the Calico range, not from the node it runs on.
You can't access DNS (port 53) over HTTP with curl. You can use dig to query a DNS resolver.
A service IP is not reachable by ping because it is a virtual IP used only as a routing handle for the iptables rules set up by kube-proxy; a TCP connection therefore works, but ICMP does not.
You can ping a pod IP, though, since it is assigned from the overlay network.
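For example, instead of curl or ping, the right way to test the service is a plain DNS query against the service IP (assuming the dnstools image ships dig, which it normally does):
kubectl exec -ti dnstools -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +short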
You should check within the same namespace.
Currently you are in the default namespace and curling a service that lives in the kube-system namespace.
Check from the same namespace; I think that will work.
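The namespace matters for name resolution because short service names only resolve within the pod's own namespace; from the default namespace you would address the DNS service by its fully qualified name, for example:
kubectl exec -ti dnstools -- nslookup kube-dns.kube-system.svc.cluster.local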
In some cases the local host address that Elasticsearch publishes is not routable/accessible from other hosts. In those cases you have to configure network.publish_host in the YAML config file so that Elasticsearch uses and publishes the right address.
Try configuring network.publish_host to the right public address.
See more here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#advanced-network-settings
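A minimal sketch of that setting in elasticsearch.yml (the _site_ special value tells Elasticsearch to publish a site-local address; substitute whatever address is actually reachable from the other hosts):
network.publish_host: _site_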
Note that control plane components such as the API server and etcd run on the master node and are bound to the host network, which is why you see the IP address of the master server.
On the other hand, the apps you deploy get their IPs from the pod subnet range; those differ from the cluster node IPs.
Try the steps below to test whether DNS is working:
deploy nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  labels:
    app: nginx
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        emptyDir: {}
kubectl create -f nginx.yaml
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
master $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
nginx ClusterIP None <none> 80/TCP 2m
master $ kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
Address 2: 10.40.0.2 web-1.nginx.default.svc.cluster.local
/ #
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
/ # nslookup web-0.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx.default.svc.cluster.local
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
How to change the default nodeport range on Mac (docker-desktop)?
I'd like to change the default nodeport range on Mac. Is it possible? I'm glad to have found this article: http://www.thinkcode.se/blog/2019/02/20/kubernetes-service-node-port-range. Since I can't find /etc/kubernetes/manifests/kube-apiserver.yaml in my environment, I tried to achieve what I want by running sudo kubectl edit pod kube-apiserver-docker-desktop --namespace=kube-system and adding the parameter --service-node-port-range=443-22000. But when I tried to save it, I got the following error:
# pods "kube-apiserver-docker-desktop" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
(I get the same error even if I don't touch port 443.) Can someone please share his/her thoughts or experience? Thanks!
Append:
skwok-mbp:kubernetes skwok$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
docker compose 1/1 1 1 15d
docker compose-api 1/1 1 1 15d
ingress-nginx nginx-ingress-controller 1/1 1 1 37m
kube-system coredns 2/2 2 2 15d
skwok-mbp:kubernetes skwok$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default fortune-configmap-volume 2/2 Running 4 14d
default kubia-2qzmm 1/1 Running 2 15d
docker compose-6c67d745f6-qqmpb 1/1 Running 2 15d
docker compose-api-57ff65b8c7-g8884 1/1 Running 4 15d
ingress-nginx nginx-ingress-controller-756f65dd87-sq6lt 1/1 Running 0 37m
kube-system coredns-fb8b8dccf-jn8cm 1/1 Running 6 15d
kube-system coredns-fb8b8dccf-t6qhs 1/1 Running 6 15d
kube-system etcd-docker-desktop 1/1 Running 2 15d
kube-system kube-apiserver-docker-desktop 1/1 Running 2 15d
kube-system kube-controller-manager-docker-desktop 1/1 Running 29 15d
kube-system kube-proxy-6nzqx 1/1 Running 2 15d
kube-system kube-scheduler-docker-desktop 1/1 Running 30 15d
Update: The example from the documentation shows a way to adjust apiserver parameters during Minikube start:
minikube start --extra-config=apiserver.service-node-port-range=1-65535
--extra-config: A set of key=value pairs that describe configuration that may be passed to different components. The key should be '.' separated, and the first part before the dot is the component to apply the configuration to. Valid components are: kubelet, apiserver, controller-manager, etcd, proxy, scheduler. link
The list of available options can be found in the CLI documentation.
Another way to change kube-apiserver parameters for Docker-for-desktop on Mac:
Log in to the Docker VM:
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
#(you can also use privileged container for the same purpose)
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
#or
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
# as suggested here: https://forums.docker.com/t/is-it-possible-to-ssh-to-the-xhyve-machine/17426/5
# in case of minikube use the following command:
$ minikube ssh
Edit kube-apiserver.yaml (it's one of the static pods; they are created by the kubelet from the files in /etc/kubernetes/manifests)
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
# for minikube
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following line to the pod spec:
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.65.3
    ...
    - --service-node-port-range=443-22000 # <-- add this line
    ...
Save and exit. Pod kube-apiserver will be restarted with new parameters.
Exit the Docker VM (for screen: Ctrl-a, k; for the container: Ctrl-d).
Check the results:
$ kubectl get pod kube-apiserver-docker-desktop -o yaml -n kube-system | less
Create a simple deployment and expose it with a service:
$ kubectl run nginx1 --image=nginx --replicas=2
$ kubectl expose deployment nginx1 --port 80 --type=NodePort
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
nginx1 NodePort 10.99.173.234 <none> 80:14966/TCP 5s
As you can see NodePort was chosen from the new range.
There are other ways to expose your container: HostNetwork, HostPort, MetalLB
You need to add the correct security context for that purpose, check out how the ingress addon in minikube works, for example.
...
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
...
securityContext:
  capabilities:
    add:
    - NET_BIND_SERVICE
    drop:
    - ALL
I'm trying to follow GitHub - kubernetes/dashboard: General-purpose web UI for Kubernetes clusters.
deploy/access:
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
# kubectl proxy
Starting to serve on 127.0.0.1:8001
curl:
# curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}#
Please advise.
per @VKR
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576cbf47c7-56vg7 0/1 ContainerCreating 0 57m
kube-system coredns-576cbf47c7-sn2fk 0/1 ContainerCreating 0 57m
kube-system etcd-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kube-apiserver-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kube-controller-manager-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kube-proxy-2hhf7 1/1 Running 0 6m57s
kube-system kube-proxy-lzfcx 1/1 Running 0 7m35s
kube-system kube-proxy-rndhm 1/1 Running 0 57m
kube-system kube-scheduler-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kubernetes-dashboard-77fd78f978-g2hts 0/1 Pending 0 2m38s
$
logs:
$ kubectl logs kubernetes-dashboard-77fd78f978-g2hts -n kube-system
$
describe:
$ kubectl describe pod kubernetes-dashboard-77fd78f978-g2hts -n kube-system
Name: kubernetes-dashboard-77fd78f978-g2hts
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=77fd78f978
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
kubernetes-dashboard:
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-gp4l7 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-gp4l7:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-gp4l7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m39s (x21689 over 20h) default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
$
It would appear that you are attempting to deploy Kubernetes leveraging kubeadm but have skipped the step of Installing a pod network add-on (CNI). Notice the warning:
The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).
Once you do this, the CoreDNS pods should come up healthy. This can be verified with:
kubectl -n kube-system -l=k8s-app=kube-dns get pods
Then the kubernetes-dashboard pod should come up healthy as well.
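For example, applying flannel as the CNI (the same manifest URL referenced elsewhere in this thread) and then re-running the check above:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml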
You could refer to https://github.com/kubernetes/dashboard#getting-started.
Also, I see "https" in your link.
Please try this link instead:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
I had the same problem. In the end it turned out as a Calico Network configuration problem. But step by step...
First I checked if the Dashboard Pod was running:
kubectl get pods --all-namespaces
The result for me was:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-bcc6f659f-j57l9 1/1 Running 2 19h
kube-system calico-node-hdxp6 0/1 CrashLoopBackOff 13 15h
kube-system calico-node-z6l56 0/1 Running 68 19h
kube-system coredns-74ff55c5b-8l6m6 1/1 Running 2 19h
kube-system coredns-74ff55c5b-v7pkc 1/1 Running 2 19h
kube-system etcd-got-virtualbox 1/1 Running 3 19h
kube-system kube-apiserver-got-virtualbox 1/1 Running 3 19h
kube-system kube-controller-manager-got-virtualbox 1/1 Running 3 19h
kube-system kube-proxy-q99s5 1/1 Running 2 19h
kube-system kube-proxy-vrpcd 1/1 Running 1 15h
kube-system kube-scheduler-got-virtualbox 1/1 Running 2 19h
kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-qc9ms 1/1 Running 0 28m
kubernetes-dashboard kubernetes-dashboard-74d688b6bc-zrdk4 0/1 CrashLoopBackOff 9 28m
The last line indicates that the dashboard pod could not be started (status=CrashLoopBackOff).
And the second line shows that the calico node has problems. Most likely the root cause is Calico.
The next step is to have a look at the pod log (change the namespace/name as listed in YOUR pod list):
kubectl logs kubernetes-dashboard-74d688b6bc-zrdk4 -n kubernetes-dashboard
The result for me was:
2021/03/05 13:01:12 Starting overwatch
2021/03/05 13:01:12 Using namespace: kubernetes-dashboard
2021/03/05 13:01:12 Using in-cluster config to connect to apiserver
2021/03/05 13:01:12 Using secret token for csrf signing
2021/03/05 13:01:12 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout
Hm - not really helpful. After searching for "dial tcp 10.96.0.1:443: i/o timeout" I found this information, where it says ...
If you follow the kubeadm instructions to the letter ... Which means install docker, kubernetes (kubeadm, kubectl, & kubelet), and calico with the Kubeadm hosted instructions ... and your computer nodes have a physical ip address in the range of 192.168.X.X then you will end up with the above mentioned non-working dashboard. This is because the node ip addresses clash with the internal calico ip addresses.
https://github.com/kubernetes/dashboard/issues/1578#issuecomment-329904648
Yes, indeed I do have a physical IP in the range of 192.168.x.x, like many others might have as well. I wish Calico would check this during setup.
So let's move the pod network to a different IP range:
You should use a classless reserved IP range for private networks like
10.0.0.0/8 (16,777,216 addresses)
172.16.0.0/12 (1,048,576 addresses)
192.168.0.0/16 (65,536 addresses). Otherwise Calico will terminate with an error saying "Invalid CIDR specified in CALICO_IPV4POOL_CIDR" ...
sudo kubeadm reset
sudo rm /etc/cni/net.d/10-calico.conflist
sudo rm /etc/cni/net.d/calico-kubeconfig
export CALICO_IPV4POOL_CIDR=172.16.0.0
export MASTER_IP=192.168.100.122
sudo kubeadm init --pod-network-cidr=$CALICO_IPV4POOL_CIDR/12 --apiserver-advertise-address=$MASTER_IP --apiserver-cert-extra-sans=$MASTER_IP
mkdir -p $HOME/.kube
sudo rm -f $HOME/.kube/config
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/kubelet.conf
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O calico.yaml
sudo sed -i "s/192.168.0.0\/16/$CALICO_IPV4POOL_CIDR\/12/g" calico.yaml
sudo sed -i "s/192.168.0.0/$CALICO_IPV4POOL_CIDR/g" calico.yaml
kubectl apply -f calico.yaml
Now we test if all calico pods are running:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-bcc6f659f-ns7kz 1/1 Running 0 15m
kube-system calico-node-htvdv 1/1 Running 6 15m
kube-system coredns-74ff55c5b-lqwpd 1/1 Running 0 17m
kube-system coredns-74ff55c5b-qzc87 1/1 Running 0 17m
kube-system etcd-got-virtualbox 1/1 Running 0 17m
kube-system kube-apiserver-got-virtualbox 1/1 Running 0 17m
kube-system kube-controller-manager-got-virtualbox 1/1 Running 0 18m
kube-system kube-proxy-6xr5j 1/1 Running 0 17m
kube-system kube-scheduler-got-virtualbox 1/1 Running 0 17m
Looks good. If not, check CALICO_IPV4POOL_CIDR by editing the node config: KUBE_EDITOR="nano" kubectl edit -n kube-system ds calico-node
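A read-only way to check the same value without opening an editor (a sketch; the jsonpath assumes the CIDR is set as a container environment variable named CALICO_IPV4POOL_CIDR, as in the stock calico.yaml):
kubectl -n kube-system get ds calico-node -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="CALICO_IPV4POOL_CIDR")].value}{"\n"}'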
Let's apply the kubernetes-dashboard and start the proxy:
export KUBECONFIG=$HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl proxy
Now I can load http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
If you are using Helm:
check that kubectl proxy is running
then go to
http://localhost:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy
Two tips compared with the link above:
when you install with Helm, the namespace will be /default (not /kubernetes-dashboard)
you need to add https after /https:kubernetes-dashboard:
A better way is
helm delete kubernetes-dashboard
kubectl create namespace kubernetes-dashboard
helm install -n kubernetes-dashboard kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
then go to
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:https/proxy
Then you can easily follow creating-sample-user to get a token to log in.
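For completeness, the creating-sample-user flow is roughly this (a sketch of the upstream dashboard example; the admin-user name comes from those docs):
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user   # on clusters older than v1.24, read the token from the ServiceAccount's secret instead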
I was facing the same issue, so I followed the official docs and then went to the https://github.com/kubernetes/dashboard URL. There is another way, using Helm, described at https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard.
After installing Helm, run these two commands:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
It worked, but in the default namespace, at this link:
http://localhost:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy/#/workloads?namespace=default
I'm writing some scripts that check the system to verify certain cluster characteristics: things running on private IP address spaces, and so on. These checks are just a manual step when setting up a cluster, used purely for sanity checking.
They'll be run on each node, but I'd like a set of them to run when on the master node. Is there a bash, curl, kubectl, or another command that has information indicating the current node is a master node?
The master(s) usually have the 'master' role associated with them. For example:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready master 78d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
It also has a label node-role.kubernetes.io/master associated with it. For example:
$ kubectl get node ip-x-x-x-x.us-west-2.compute.internal -o=yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: "0"
    projectcalico.org/IPv4Address: x.x.x.x/20
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-07-23T21:10:22Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: t3.medium
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: us-west-2
    failure-domain.beta.kubernetes.io/zone: us-west-2c
    kubernetes.io/hostname: ip-x-x-x-x.us-west-2.compute.internal
    node-role.kubernetes.io/master: ""
Some more ways:
$ kubectl cluster-info
Kubernetes master is running at https://node1.example.com:8443
...
You can use kubectl with label selector:
$ kubectl get nodes -l node-role.kubernetes.io/master=true
NAME STATUS ROLES AGE VERSION
node1.example.com Ready master 1d v1.10.5
node2.example.com Ready master 1d v1.10.5
And you can get specific data via jsonpath, e.g. master IPs/hostnames:
$ kubectl get nodes -l node-role.kubernetes.io/master=true -o 'jsonpath={.items[*].status.addresses[?(@.type=="InternalIP")].address}'
192.168.168.197 192.168.168.198
$ kubectl get nodes -l node-role.kubernetes.io/master=true -o 'jsonpath={.items[*].status.addresses[?(@.type=="Hostname")].address}'
node1.example.com node2.example.com
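For a script-friendly check of whether the current node is a master, one possible sketch (assuming the node name matches hostname, as with kubeadm defaults):
if kubectl get nodes -l node-role.kubernetes.io/master -o jsonpath='{.items[*].metadata.name}' | grep -qw "$(hostname)"; then
  echo "this node is a master"
fi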