Running Kubernetes Sample Controller

I am trying to run the Kubernetes sample controller example by following the link https://github.com/kubernetes/sample-controller. I have the repo set up on an Ubuntu 18.04 system and was able to build the sample-controller package.
However, when I try to run the go package, I am getting some errors and am unable to debug the issue. Can someone please help me with this?
Here are the steps that I followed:
user@ubuntu-user:~$ go get k8s.io/sample-controller
user@ubuntu-user:~$ cd $GOPATH/src/k8s.io/sample-controller
Here's the error that I get on running the controller:
user@ubuntu-user:~/go/src/k8s.io/sample-controller$ ./sample-controller -kubeconfig=$HOME/.kube/config
E0426 15:05:57.721696 31517 reflector.go:125] k8s.io/sample-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1alpha1.Foo: the server could not find the requested resource (get foos.samplecontroller.k8s.io)
Kubectl version:
user#ubuntu-user:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

I have reproduced your issue. The order of commands in this tutorial is wrong: you received this error because the samplecontroller.k8s.io resource does not exist in the cluster yet.
$ ./sample-controller -kubeconfig=$HOME/.kube/config
E0430 12:55:05.089624 147744 reflector.go:125] k8s.io/sample-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1alpha1.Foo: the server could not find the requested resource (get foos.samplecontroller.k8s.io)
^CF0430 12:55:05.643778 147744 main.go:74] Error running controller: failed to wait for caches to sync
goroutine 1 [running]:
k8s.io/klog.stacks(0xc0002feb00, 0xc000282200, 0x66, 0x1f5)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:840 +0xb1
k8s.io/klog.(*loggingT).output(0x2134040, 0xc000000003, 0xc0002e12d0, 0x20afafb, 0x7, 0x4a, 0x0)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:791 +0x303
k8s.io/klog.(*loggingT).printf(0x2134040, 0x3, 0x14720f2, 0x1c, 0xc0003c1f48, 0x1, 0x1)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:690 +0x14e
k8s.io/klog.Fatalf(...)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:1241
main.main()
/usr/local/google/home/user/go/src/k8s.io/sample-controller/main.go:74 +0x3f5
You can verify that this API has not been created:
$ kubectl api-versions | grep sample
(no output)
The tutorial includes a command to create the CustomResourceDefinition:
$ kubectl create -f artifacts/examples/crd.yaml
customresourcedefinition.apiextensions.k8s.io/foos.samplecontroller.k8s.io created
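For reference, artifacts/examples/crd.yaml registers the CRD roughly like this (a sketch of the manifest as of that era of the repo; newer checkouts may use a later apiextensions version, so verify against your copy):
$ cat artifacts/examples/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.samplecontroller.k8s.io
spec:
  group: samplecontroller.k8s.io
  version: v1alpha1
  names:
    kind: Foo
    plural: foos
  scope: Namespaced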
Now you can search for this CRD; it will be on the list:
$ kubectl api-versions | grep sample
samplecontroller.k8s.io/v1alpha1
The next step is to create a Foo resource:
$ kubectl create -f artifacts/examples/example-foo.yaml
foo.samplecontroller.k8s.io/example-foo created
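For reference, the Foo object itself is tiny (approximate contents of artifacts/examples/example-foo.yaml; check against your checkout):
$ cat artifacts/examples/example-foo.yaml
apiVersion: samplecontroller.k8s.io/v1alpha1
kind: Foo
metadata:
  name: example-foo
spec:
  deploymentName: example-foo
  replicas: 1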
These commands do not create any workload objects yet:
user@user:~/go/src/k8s.io/sample-controller$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP XX.XXX.XXX.XX <none> 443/TCP 14d
All resources will be created after you run ./sample-controller -kubeconfig=$HOME/.kube/config:
user@user:~/go/src/k8s.io/sample-controller$ ./sample-controller -kubeconfig=$HOME/.kube/config
user@user:~/go/src/k8s.io/sample-controller$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/example-foo-6cbc69bf5d-8k59h 1/1 Running 0 43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 14d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/example-foo 1 1 1 1 43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/example-foo-6cbc69bf5d 1 1 1 43s
Correct order:
$ go get k8s.io/sample-controller
$ cd $GOPATH/src/k8s.io/sample-controller
$ go build -o sample-controller .
$ kubectl create -f artifacts/examples/crd.yaml
$ kubectl create -f artifacts/examples/example-foo.yaml
$ ./sample-controller -kubeconfig=$HOME/.kube/config
$ kubectl get deployments
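As a quick sanity check after the last step, you can confirm the controller reconciled the Foo object (a sketch; example-foo comes from the example manifest, and the Deployment name is taken from its spec.deploymentName):
$ kubectl get foos
$ kubectl get deployment example-foo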

Did you run these two commands from the README?
# create a CustomResourceDefinition
kubectl create -f artifacts/examples/crd.yaml
# create a custom resource of type Foo
kubectl create -f artifacts/examples/example-foo.yaml

Related

Unable to launch kubectl dashboard on Mac

I have deployed minikube on macOS using the instructions here: https://kubernetes.io/docs/tasks/tools/install-minikube/
The brew install went fine, and minikube status shows:
$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.102
I am able to interact with the cluster using kubectl
$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
Viewing the pods also works:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-minikube-856979d68c-glhsx 1/1 Running 0 18m
But when I try to launch the dashboard, I get a 503 error:
$ minikube dashboard
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
The dashboard service seems to be present:
$ kubectl -n kube-system get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h19m k8s-app=kube-dns
kubernetes-dashboard ClusterIP 10.109.210.119 <none> 80/TCP 119m app=kubernetes-dashboard
Below is the kubectl version info:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T16:57:42Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Any pointers on what is missing? How do I get the dashboard working?
Thanks,
Praveen
Check kubectl cluster-info first to confirm the apiserver and add-ons are reachable; then port-forward to the dashboard service:
kubectl -n kube-system port-forward svc/kubernetes-dashboard 8080:80
Your dashboard should then be accessible at http://localhost:8080. Keep in mind that this dashboard is deprecated, so you may want to look at Octant instead.
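If the port-forward still returns an error, check that the dashboard pod itself is healthy before blaming the service (a sketch; the app=kubernetes-dashboard label is taken from the service selector shown above and may differ in your setup):
$ kubectl -n kube-system get pods -l app=kubernetes-dashboard
$ kubectl -n kube-system logs -l app=kubernetes-dashboard --tail=20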
Try the minikube dashboard command. It will open a new tab in your default browser showing the minikube dashboard.
I was finally able to solve this issue with instructions from this GitHub issue:
https://github.com/kubernetes/minikube/issues/4352
Basically, with these commands:
minikube stop
minikube start --extra-config=apiserver.authorization-mode=RBAC
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
minikube dashboard
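To verify the fix took effect, you can check the binding and the dashboard pods afterwards (a quick sketch):
$ kubectl get clusterrolebinding add-on-cluster-admin
$ kubectl -n kube-system get pods | grep dashboard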

Installing helm on minikube returns an error

I have followed the steps described in this link.
When I am on the Helm install section (Step 2) and try to run:
helm install --name web ./demo
I am getting the following error:
Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
Expected Result: It should install and deploy the chart.
This issue relates to your Kubernetes configuration, not to Helm itself.
I assume you are also not able to see output from other helm commands, like helm list, etc.
Many people hit this issue because of an improperly configured CNI (typically Calico), and sometimes it happens because the kubeconfig is missing.
Solutions are:
Migrate from Calico to Flannel, or
Change the --pod-network-cidr for Calico from 192.168.0.0/16 to 172.16.0.0/16 when using kubeadm to init the cluster, e.g. kubeadm init --pod-network-cidr=172.16.0.0/16
You can find more related info in the similar GitHub Helm issue.
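The usual reason the CIDR change helps is that Calico's default 192.168.0.0/16 pod network overlaps the host or VM network, so in-cluster traffic to the apiserver service IP gets misrouted. A quick overlap check looks like this (a sketch; the grep pattern is illustrative):
$ ip route
$ kubectl cluster-info dump | grep -m1 cluster-cidr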
Simple single-machine example:
1) kubeadm init --pod-network-cidr=172.16.0.0/16
2) kubectl taint nodes --all node-role.kubernetes.io/master-
3) kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
4) Install Helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
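Before installing a chart, it is worth confirming Tiller actually came up (Helm 2; tiller-deploy is the default deployment name):
$ kubectl -n kube-system rollout status deployment/tiller-deploy
$ helm version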
5) Create and install a chart
$ helm create demo
Creating demo
$ helm install --name web ./demo
NAME: web
LAST DEPLOYED: Tue Jul 16 10:44:15 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
web-demo 0/1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
web-demo-6986c66d7d-vctql 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-demo ClusterIP 10.106.140.176 <none> 80/TCP 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
6) Result
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-demo-6986c66d7d-vctql 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
service/web-demo ClusterIP 10.106.140.176 <none> 80/TCP 75s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web-demo 1/1 1 1 75s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-demo-6986c66d7d 1 1 1 75s
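To clean up the test release afterwards (Helm 2 syntax):
$ helm delete --purge web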
You can find more info on how to configure Helm and Kubernetes itself in the Get Started With Kubernetes Using Minikube article.

Get ServiceUnavailable from `kubectl top` with heapster

I have a managed kubernetes setup, called Cluster Container Engine (CCE), in the Open Telekom Cloud. Their documentation can be found online.
My CCE has one master and three nodes which run k8s version 1.9.2 (more details below). I can access the CCE through kubectl and deploy new pods onto it.
The CCE has a deployment of heapster preinstalled. However, attempting to inspect node resource usage fails (I can observe the same effect for pod usage):
$ kubectl top pods
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)
I've attempted all debugging steps I could think of (see below) and I'm still lost when it comes to fixing this. Any advice?
The deployment, pod and service items for heapster are present (outputs filtered to include only heapster):
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
heapster-apiserver-84b844ffcf-lzh4b 1/1 Running 0 47m
$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster ClusterIP 10.247.150.244 <none> 80/TCP 19d
$ kubectl get deploy -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
heapster-apiserver 1/1 1 1 19d
To check that heapster does indeed collect metrics properly, I've ssh'd into one of the nodes and executed:
$ curl -k http://10.247.150.244:80/api/v1/model/metrics/
[
"cpu/usage_rate",
"memory/usage",
"cpu/request",
"cpu/limit",
"memory/request",
"memory/limit"
]
Pod Log Output
Finally, I checked the log output from the heapster-apiserver-84b844ffcf-lzh4b pod:
$ kubectl logs -n kube-system heapster-apiserver-84b844ffcf-lzh4b
I0311 13:38:18.334525 1 heapster.go:78] /heapster --source=kubernetes.summary_api:''?kubeletHttps=true&inClusterConfig=false&insecure=true&auth=/srv/config --api-server --secure-port=6443
I0311 13:38:18.334718 1 heapster.go:79] Heapster version v1.5.3
I0311 13:38:18.340912 1 configs.go:61] Using Kubernetes client with master "https://192.168.1.228:5443" and version <nil>
I0311 13:38:18.340996 1 configs.go:62] Using kubelet port 10255
I0311 13:38:18.358918 1 heapster.go:202] Starting with Metric Sink
I0311 13:38:18.510751 1 serving.go:327] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
E0311 13:38:18.540860 1 heapster.go:128] Could not create the API server: missing clientCA file
I0311 13:38:18.558944 1 heapster.go:112] Starting heapster on port 8082
Cluster Info
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.2-CCE2.0.7-B003", GitCommit:"302f471a1e2caa114c9bb708c077fbb363aa2f13", GitTreeState:"clean", BuildDate:"2018-06-20T03:27:16Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
192.168.1.163 Ready worker 19d v1.9.2-CCE2.0.7-B003
192.168.1.211 Ready nfs-server 19d v1.9.2-CCE2.0.7-B003
192.168.1.227 Ready worker 19d v1.9.2-CCE2.0.7-B003
All nodes use EulerOS_2.0_SP2 with kernel version 3.10.0-327.59.59.46.h38.x86_64.
I0311 13:38:18.510751 1 serving.go:327] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
It seems like your API server is running on HTTP, but heapster has an HTTPS URL configured. You need to set the --source parameter to override the Kubernetes master, as described here:
--source=kubernetes:http://master-ip?inClusterConfig=false&useServiceAccount=true&auth=
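To apply that, edit the heapster deployment and replace its --source argument (a sketch; heapster-apiserver is the deployment name from your output above, and the exact master URL and port for CCE are assumptions you should verify):
$ kubectl -n kube-system edit deploy/heapster-apiserver
# in the container args, set something like:
#   --source=kubernetes:http://<master-ip>:<port>?inClusterConfig=false&useServiceAccount=true&auth=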
BTW: heapster has been deprecated, and it is advised to switch to metrics-server.
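If your cluster is recent enough, installing metrics-server looks roughly like this (a sketch; this is the upstream manifest URL for current releases, and an older cluster such as 1.9 may need a version-pinned manifest instead):
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ kubectl top nodes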

Cannot connect to service at 192.168.99.101

I am new to Kubernetes and need your help.
I followed the installation instructions on this page and everything worked fine. Here is some info after installation:
~ > kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T19:44:10Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
~ > kubectl cluster-info
Kubernetes master is running at https://192.168.99.101:8443
~ > minikube ip
192.168.99.101
Then I try to run a simple hello world service:
~ > minikube start
~ > kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready <none> 18h v1.12.4
~ > kubectl run hw --image=karthequian/helloworld --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/hw created
~ > kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hw 1 1 1 1 21s
~ > kubectl get rs
NAME DESIRED CURRENT READY AGE
hw-854c64787 1 1 1 71s
~ > kubectl get pods
NAME READY STATUS RESTARTS AGE
hw-854c64787-8xhw6 1/1 Running 0 59s
~ > kubectl expose deployment hw --type=NodePort
service/hw exposed
~ > kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hw NodePort 10.106.4.194 <none> 80:31269/TCP 6m23s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
~ > minikube service hw
Opening kubernetes service default/hw in default browser...
After the last line, the browser opened but could not connect to the service (screenshot omitted).
Did I do something wrong? Is there anything I have not configured right? Thank you.
Here is how I solved this problem. First, I went to Docker -> Preferences -> Kubernetes and ticked "Enable Kubernetes". It takes around 5 minutes to install.
Then I deleted my current minikube with minikube delete and repeated the whole process (minikube start, kubectl run hw --image=karthequian/helloworld --port=80, kubectl expose deployment hw --type=NodePort, minikube service hw), and it works :).
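Independently of minikube service, you can hit the NodePort directly to separate a routing problem from a browser problem (a sketch; the IP and port come from the minikube ip and kubectl get services output in the question):
$ curl http://192.168.99.101:31269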

Kubernetes dashboard not working, “already exists” and “could not find the requested resource (get services heapster)”

I am new to Kubernetes.
The goal is to get the Kubernetes cluster dashboard working.
The Kubernetes cluster was deployed using Kubespray: github.com/kubernetes-incubator/kubespray
Versions:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:21Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
When I do kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false as described here
I get:
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
When I run kubectl get services --namespace kube-system, I get:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 10d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 9d
When I try to reach the dashboard on the Kubernetes cluster, I get Connection refused.
kubectl logs --namespace=kube-system kubernetes-dashboard-4167803980-1dz53 output:
2017/09/27 10:54:11 Using in-cluster config to connect to apiserver
2017/09/27 10:54:11 Using service account token for csrf signing
2017/09/27 10:54:11 No request provided. Skipping authorization
2017/09/27 10:54:11 Starting overwatch
2017/09/27 10:54:11 Successful initial request to the apiserver, version: v1.7.3+coreos.0
2017/09/27 10:54:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/27 10:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/27 10:54:11 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/27 10:54:11 Initializing JWE encryption key from synchronized object
2017/09/27 10:54:11 Creating in-cluster Heapster client
2017/09/27 10:54:11 Serving securely on HTTPS port: 8443
2017/09/27 10:54:11 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Other outputs:
kubectl get pods --namespace=kube-system:
NAME READY STATUS RESTARTS AGE
calico-node-bqckz 1/1 Running 0 12d
calico-node-r9svd 1/1 Running 2 12d
calico-node-w3tps 1/1 Running 0 12d
kube-apiserver-kubetest1 1/1 Running 0 12d
kube-apiserver-kubetest2 1/1 Running 0 12d
kube-controller-manager-kubetest1 1/1 Running 2 12d
kube-controller-manager-kubetest2 1/1 Running 2 12d
kube-dns-3888408129-n0m8d 3/3 Running 0 12d
kube-dns-3888408129-z8xx3 3/3 Running 0 12d
kube-proxy-kubetest1 1/1 Running 0 12d
kube-proxy-kubetest2 1/1 Running 0 12d
kube-proxy-kubetest3 1/1 Running 0 12d
kube-scheduler-kubetest1 1/1 Running 2 12d
kube-scheduler-kubetest2 1/1 Running 2 12d
kubedns-autoscaler-1629318612-sd924 1/1 Running 0 12d
kubernetes-dashboard-4167803980-1dz53 1/1 Running 0 1d
nginx-proxy-kubetest3 1/1 Running 0 12d
kubectl proxy:
Starting to serve on 127.0.0.1:8001
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2692f20]
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl.(*ProxyServer).ServeOnListener(0x0, 0x3a95a60, 0xc420114110, 0x17, 0xc4208b7c28)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/proxy_server.go:201 +0x70
k8s.io/kubernetes/pkg/kubectl/cmd.RunProxy(0x3aa5ec0, 0xc42074e960, 0x3a7f1e0, 0xc42000c018, 0xc4201d7200, 0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:156 +0x774
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdProxy.func1(0xc4201d7200, 0xc4203586e0, 0x0, 0x2)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:79 +0x4f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4201d7200, 0xc420358500, 0x2, 0x2, 0xc4201d7200, 0xc420358500)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202e4240, 0x5000107, 0x0, 0xffffffffffffffff)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202e4240, 0xc42074e960, 0x3a7f1a0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
kubectl top nodes:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
kubectl get svc --namespace=kube-system:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 12d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 11d
curl http://localhost:8001/ui:
curl: (7) Failed to connect to 10.2.3.211 port 8001: Connection refused
How can I get the dashboard working? Appreciate your help.
You may be installing dashboard version 1.7. Try installing version 1.6.3; it is well tested.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
Update 10/2/17:
Can you try this: delete the current deployment and install the 1.6.3 version.
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
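After the reinstall you can verify the pod and reach the 1.6 UI through the proxy (a sketch; k8s-app=kubernetes-dashboard is the label the upstream manifest uses):
$ kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard
$ kubectl proxy &
$ curl http://localhost:8001/ui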
I believe the Kubernetes dashboard is available by default if you are deploying through GCP or Azure; the first error already suggests this. To verify, you may type the following commands to look for the pods/service in the kube-system namespace.
$ kubectl get pods --namespace=kube-system
$ kubectl get svc --namespace=kube-system
From the above commands, you should find the existing kubernetes dashboard, so you don't need to deploy it again. To access the dashboard, you can type the following command.
$ kubectl proxy
This will make the Dashboard available at http://localhost:8001/ui on the machine where you type this command.
But to understand more about your problem, may I know which version of Kubernetes and what environment you are using now? Also, it would be great if you could show me the result of these two commands.
$ kubectl get pods --namespace=kube-system
$ kubectl top nodes