Install kubernetes dashboard for Raspberry Pi ARM cluster error

I am following the instructions from this link:
https://kubecloud.io/kubernetes-dashboard-on-arm-with-rbac-61309310a640
and I run this command:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml
But I'm getting this output/error:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
service/kubernetes-dashboard created
error: unable to recognize "https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml": no matches for kind "Deployment" in version "apps/v1beta2"
I'm not sure how to proceed from here.
I'm trying to install the Kubernetes Dashboard for a Raspberry Pi cluster.
Here is my setup:
pi#k8s-master:/etc/kubernetes$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 2d11h v1.16.2
k8s-node1 Ready worker 2d3h v1.16.2
k8s-node2 Ready worker 2d2h v1.15.2
k8s-node3 Ready worker 2d2h v1.16.2

The reason behind your error is that, as of Kubernetes 1.16.0, the apps/v1beta2 API is no longer served for Deployments. You should use apps/v1 instead.
Please download the file:
wget https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml
Edit the file using nano or vi and change the Deployment apiVersion to apps/v1.
Don't forget to save the file when exiting.
Then:
kubectl apply -f [file_name]
You can find more about these release changes in the Kubernetes 1.16 release notes.
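If you prefer to script the edit instead of opening an editor, a minimal sketch of the same fix (assuming apps/v1beta2 appears only on the Deployment object in this manifest) could be:
wget https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml
# Switch the Deployment apiVersion from apps/v1beta2 to apps/v1.
sed -i 's|apps/v1beta2|apps/v1|' kubernetes-dashboard-arm.yaml
# Confirm the change, then apply the manifest.
grep -n 'apiVersion' kubernetes-dashboard-arm.yaml
kubectl apply -f kubernetes-dashboard-arm.yaml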

Related

Kubernetes pod can't communicate with other pods in the same node

We are using Kubernetes 1.21.7, Istio 1.11.4, and Flannel 0.14.0.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-d0 Ready control-plane,master 204d v1.21.7
k8s-d1 Ready <none> 204d v1.21.7
k8s-d2 Ready <none> 204d v1.21.7
If pod-a and pod-b are in the same node, for example k8s-d1, they can't communicate (using curl for example). But if I force pods to be in different nodes, they communicate just fine.
This issue only occurs in the "istio-system" namespace, but it seems it is not an Istio bug (I already tried opening an issue here, but without success).
I figured out what was missing:
modprobe br_netfilter
echo "br_netfilter" >> /etc/modules-load.d/modules.conf
At some point I restarted those nodes and br_netfilter didn't load automatically. Now that it is written in /etc/modules-load.d/modules.conf, it is applied on boot.
Thank you for your support.
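For anyone verifying the same fix, a few hedged check commands (assuming a systemd-based distro that reads /etc/modules-load.d) are:
# Confirm the module is loaded now and configured to load on boot.
lsmod | grep br_netfilter
cat /etc/modules-load.d/modules.conf
# Kubernetes expects bridged traffic to pass through iptables; both sysctls
# should report 1 once br_netfilter is loaded.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables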

Unable to get kubernetes dashboard

I've installed a new cluster (version 1.13.5 of kubectl, kubelet, kubeadm), then I've installed flannel and added a worker node.
Now I'm trying to add the Kubernetes dashboard to my cluster, but after I run
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I have this situation:
kubernetes-dashboard-**** 0/1 CrashLoopBackOff 1 8s
Then if I get the log, I can see this:
Error while initializing connection to Kubernetes apiserver...
Where am I going wrong?
It seems that the problem was on the worker: when I put the dashboard on the master, the pod starts.
Maybe the kube dashboard has to be installed on the master, or there is something wrong with flannel and the master-node communication.
Check whether the api-server pod is running and whether KubeDNS is working correctly.
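A hedged way to run those checks (the label selectors below assume the standard kubeadm and dashboard v1.10.1 labels) could be:
# Is the api-server static pod up on the master?
kubectl get pods -n kube-system -l component=kube-apiserver -o wide
# Is DNS (kube-dns/CoreDNS) healthy?
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
# Inspect the failing dashboard pod for events explaining the crash loop.
kubectl -n kube-system describe pod -l k8s-app=kubernetes-dashboard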

How to start & stop Kubernetes 1.8.5 cluster?

Question
What are the commands to start/stop the K8S cluster? After the installation was done following Using kubeadm to Create a Cluster, I restarted the CentOS server and the K8S cluster is not running after the restart.
The Fedora (Single Node) guide lists services like the ones below, but no such services are installed via kubeadm.
Failed to restart etcd.service: Unit not found.
Failed to restart kube-apiserver.service: Unit not found.
Failed to restart kube-controller-manager.service: Unit not found.
Environment
CentOS 7 on Virtual Box. K8S 1.8.5
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 36m v1.8.5
node01 Ready <none> 35m v1.8.5
node02 Ready <none> 35m v1.8.5
As you are using kubeadm to initialize and administer the k8s cluster: as I understand it, kubeadm uses the following approach.
Systemd manages only the kubelet service on the node.
Kubelet creates and manages the k8s control plane components (kube-apiserver, kube-controller-manager, etcd, scheduler, kube-proxy) as static pods.
Kubelet reads their manifest files from /etc/kubernetes/manifests.
So if you want to stop or remove the control plane components, you just need to move these manifest files to another directory (a sketch follows below).
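A hedged illustration of that approach on a kubeadm master (the paths are the kubeadm defaults; the backup directory name is just an example):
# Stop the control plane: kubelet tears down the static pods once their
# manifests disappear from the manifest directory.
sudo mkdir -p /etc/kubernetes/manifests-stopped
sudo mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-stopped/
# Start it again: move the manifests back and kubelet recreates the pods.
sudo mv /etc/kubernetes/manifests-stopped/*.yaml /etc/kubernetes/manifests/
# kubelet itself is the only piece managed by systemd.
sudo systemctl restart kubelet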

Helm: Error: no available release name found

I am getting a couple of errors with Helm that I can not find explanations for elsewhere. The two errors are below.
Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)
Further details of the two errors are in the code block further below.
I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).
This was created using kubeadm using Weave network for 1.6+.
Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).
Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.
root#K8SMST01:/home/blah/charts# helm version
Client: &version.Version{SemVer:"v2.3.0",
GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
root#K8SMST01:/home/blah/charts# helm install ./mychart
Error: no available release name found
root#K8SMST01:/home/blah/charts# helm ls
Error: the server does not allow access to the requested resource (get configmaps)
Here are the running pods:
root#K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8smst01 1/1 Running 4 1d 10.139.75.19 k8smst01
kube-apiserver-k8smst01 1/1 Running 3 19h 10.139.75.19 k8smst01
kube-controller-manager-k8smst01 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-dns-3913472980-dm661 3/3 Running 6 1d 10.32.0.2 k8smst01
kube-proxy-56nzd 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-proxy-7hflb 1/1 Running 1 1d 10.139.75.20 k8sn01
kube-proxy-nbc4c 1/1 Running 1 1d 10.139.75.21 k8sn02
kube-scheduler-k8smst01 1/1 Running 3 1d 10.139.75.19 k8smst01
tiller-deploy-1172528075-x3d82 1/1 Running 0 22m 10.44.0.3 k8sn01
weave-net-45335 2/2 Running 2 1d 10.139.75.21 k8sn02
weave-net-7j45p 2/2 Running 2 1d 10.139.75.20 k8sn01
weave-net-h279l 2/2 Running 5 1d 10.139.75.19 k8smst01
The solution given by kujenga from the GitHub issue worked without any other modifications:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
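To confirm the patch took effect, a hedged follow-up check could be:
# The tiller deployment should now reference the tiller service account.
kubectl -n kube-system get deploy tiller-deploy -o yaml | grep -i serviceaccount
# Once the new tiller pod is running, listing releases should stop failing.
kubectl -n kube-system get pods -l app=helm
helm ls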
I think it's an RBAC issue. It seems that helm isn't ready for 1.6.1's RBAC.
There is an issue open for this on Helm's GitHub:
https://github.com/kubernetes/helm/issues/2224
"When installing a cluster for the first time using kubeadm v1.6.1,
the initialization defaults to setting up RBAC controlled access,
which messes with permissions needed by Tiller to do installations,
scan for installed components, and so on. helm init works without
issue, but helm list, helm install, and so on all do not work, citing
some missing permission or another."
A temporary workaround has been suggested:
"We "disable" RBAC using the command kubectl create clusterrolebinding
permissive-binding --clusterrole=cluster-admin --user=admin
--user=kubelet --group=system:serviceaccounts;"
But I cannot speak to its validity. The good news is that this is a known issue and work is being done to fix it. Hope this helps.
I had the same issue with a kubeadm setup on CentOS 7.
Helm doesn't create a service account when you run "helm init", and the default one doesn't have permission to read from the configmaps, so it fails when it checks whether the deployment name it wants to use is unique.
This got me past it:
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
But that is giving the default account a lot of power; I just did this so I could get on with my work. Helm needs to add the creation of its own service account to the "helm init" code.
All add-ons in Kubernetes use the "default" service account, so Helm also runs with the "default" service account.
You should grant permissions to it by assigning role bindings.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
For admin access (e.g., to install packages):
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
You can also install the tiller server in a different namespace (a sketch of all three steps follows after the command below).
First, create the namespace.
Then create the serviceaccount for that namespace.
Finally, install tiller in that namespace using the command below:
helm init --tiller-namespace test-namespace
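A hedged sketch of those three steps, using a hypothetical namespace named test-namespace and a service account named tiller:
# 1. Create the namespace.
kubectl create namespace test-namespace
# 2. Create a service account for tiller in that namespace.
kubectl create serviceaccount tiller --namespace test-namespace
# 3. Install tiller into that namespace (Helm 2).
helm init --tiller-namespace test-namespace --service-account tiller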
This solution has worked for me: https://github.com/helm/helm/issues/3055#issuecomment-397296485
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade
$ helm repo update
$ helm install stable/redis --version 3.3.5
But after that, something changed; I have to add the --insecure-skip-tls-verify=true flag to my kubectl commands! I don't know how to fix that, given that I am interacting with a gcloud container cluster.
Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me too:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Per https://github.com/kubernetes/helm/issues/3055
helm init --service-account default
This worked for me when the RBAC (serviceaccount) commands didn't.
It's an RBAC issue. You need to have a service account with the cluster-admin role, and you should pass this service account during Helm initialization.
For example, if you have created a service account with the name tiller, your helm command would look like the following.
helm init --service-account=tiller
I followed this blog to resolve this issue. https://scriptcrunch.com/helm-error-no-available-release/
Check the logs for your tiller container:
kubectl logs tiller-deploy-XXXX --namespace=kube-system
If you find something like this:
Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
then it is probably a firewall/iptables issue, as described here; the solution is to remove some rules:
sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited

Minikube got stuck when creating container

I recently got started learning Kubernetes by using Minikube locally on my Mac. Previously, I was able to start a local Kubernetes cluster with Minikube 0.10.0, create a deployment, and view the Kubernetes dashboard.
Yesterday I tried to delete the cluster and re-did everything from scratch. However, I found I cannot get the assets deployed and cannot view the dashboard. From what I saw, everything seemed to get stuck during container creation.
After I ran minikube start, it reported
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
When I ran kubectl get pods --all-namespaces, it reported (pay attention to the STATUS column):
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-addon-manager-minikube 0/1 ContainerCreating 0 51s
docker ps showed nothing:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
minikube status tells me the VM and cluster are running:
minikubeVM: Running
localkube: Running
If I tried to create a deployment and an autoscaler, I was told they were created successfully:
kubectl create -f configs
deployment "hello-minikube" created
horizontalpodautoscaler "hello-minikube-autoscaler" created
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-minikube-661011369-1pgey 0/1 ContainerCreating 0 1m
default hello-minikube-661011369-91iyw 0/1 ContainerCreating 0 1m
kube-system kube-addon-manager-minikube 0/1 ContainerCreating 0 21m
When exposing the service, it said:
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube 10.0.0.32 <nodes> 8080/TCP 6s
kubernetes 10.0.0.1 <none> 443/TCP 22m
When I tried to access the service, I was told:
curl $(minikube service hello-minikube --url)
Waiting, endpoint for service is not ready yet...
docker ps still showed nothing. It looked to me like everything was getting stuck during container creation. I tried some other ways to work around this issue:
Upgraded to minikube 0.11.0
Used the xhyve driver instead of the VirtualBox driver
Deleted everything cached, like ~/.minikube, ~/.kube, and the cluster, and retried
None of them worked for me.
Kubernetes is still new to me and I would like to know:
How can I troubleshoot this kind of issue?
What could be the cause of this issue?
Any help is appreciated. Thanks.
It turned out to be a network problem in my case.
The pod status is "ContainerCreating", and I found that during container creation the Docker image is pulled from gcr.io, which is inaccessible in China (blocked by the GFW). The previous time it worked for me because I happened to be connected to it via a VPN.
I didn't try minikube, but I use Kubernetes. With the information provided it is difficult to say what the cause of the issue is. Your minikube has no problem creating resources, but a pod stuck in ContainerCreating points to a problem with the Docker daemon, improper communication between the kube-apiserver and the Docker daemon, or some problem with the kubelet.
You can try the following command:
kubectl describe po POD_NAME
This will give you the pod's events. Maybe this will provide a path to the root cause of the issue.
You may also check the logs of the kubelet to see related events; hedged examples of both checks follow below.
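Concretely, using the pod names from the output above (substitute your own), those checks might look like:
# Pod events usually name the image pull or sandbox failure directly.
kubectl describe pod hello-minikube-661011369-1pgey
kubectl get events --all-namespaces
# minikube bundles the node and kubelet logs behind a single command.
minikube logs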
I had this problem on Windows, but it was related to an NTLM proxy. I deleted the minikube VM then recreated it with the correct proxy settings for my CNTLM installation:
minikube start \
--docker-env http_proxy=http://10.0.2.2:3128 \
--docker-env https_proxy=http://10.0.2.2:3128 \
--docker-env no_proxy=localhost,127.0.0.1,::1,192.168.99.100
See https://blog.alexellis.io/minikube-behind-proxy/
The horizontalpodautoscaler (hpa) requires heapster. You'll need to run heapster in minikube for the autoscaler to work. You can always debug these kinds of issues with minikube logs or interactively through the dashboard found at minikube dashboard.
You can find the steps to run heapster and grafana at https://github.com/kubernetes/heapster
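If you are on minikube, a hedged shortcut (assuming your minikube version ships the heapster addon) is to enable it as an addon instead of deploying it by hand:
# List the bundled addons and enable heapster so the HPA has metrics to read.
minikube addons list
minikube addons enable heapster
# The heapster pod should show up in kube-system shortly afterwards.
kubectl get pods -n kube-system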
For me, it takes several minutes before I see the ContainerCreating problem. After executing the following command:
systemctl status kube-controller-manager.service
I get this error:
Sync "default/redis-master-2229813293" failed with unable to create pods: No API token found for service account "default", retry after the token is automatically created and added to the service account.
There are two ways to solve this:
Set up the service account with a token.
Remove the ServiceAccount setting from KUBE_ADMISSION_CONTROL in the api-server (a hedged sketch follows below).
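For the second option, a sketch under the assumption that you have the legacy /etc/kubernetes/apiserver config file that defines KUBE_ADMISSION_CONTROL (as on Fedora/CentOS packaged installs):
# Back up the config, drop ServiceAccount from the admission-control list,
# then restart the api-server so the change takes effect.
sudo cp /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak
sudo sed -i 's/ServiceAccount,//' /etc/kubernetes/apiserver
grep KUBE_ADMISSION_CONTROL /etc/kubernetes/apiserver
sudo systemctl restart kube-apiserver
# Note: the sed pattern assumes ServiceAccount is not the last entry in the list.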