Kubernetes dashboard is not found

http://10.199.135.36:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "endpoints \"kube-ui\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kube-ui",
    "kind": "endpoints"
  },
  "code": 404
}

In newer versions of Kubernetes, Dashboard is the replacement for kube-ui. Requesting an endpoint named kube-ui can therefore fail with endpoints "kube-ui" not found (404). To solve this problem, use the new endpoint named kubernetes-dashboard instead. For more details, see:
http://kubernetes.io/docs/user-guide/ui/
In short, if you use kube-ui (for example, v3), the automatic redirection may not work, and the 404 error appears because the resources cannot be located.
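For example (a sketch, assuming the dashboard keeps its default service name in kube-system), the proxy URL from the question would become:
http://10.199.135.36:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/dashboard/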
Good luck!

You're hitting a 10-dot IP, which is only routable within your cluster, so I'm going to assume you're e.g. curling that URL from a node.
Please debug the service and report what fails: http://kubernetes.io/docs/user-guide/debugging-services/. I'm guessing kubectl --namespace=kube-system get ep kube-ui shows nothing.
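A minimal sketch of the first steps from that guide, using the names from the question (the k8s-app=kube-ui label is an assumption; check your manifest):
kubectl --namespace=kube-system get ep kube-ui                # any endpoints at all?
kubectl --namespace=kube-system describe svc kube-ui          # does the selector match anything?
kubectl --namespace=kube-system get pods -l k8s-app=kube-ui   # assumed label
If the endpoints list is empty, the service selector matches no running pods, which is exactly the 404 you're seeing from the proxy.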

If you modify your service name or namespace in dashboard.yaml, you should change your URL:
http://cluster_ip_address:8080/api/v1/proxy/namespaces/modify-namespace/services/modify-service-name/#/dashboard/

dashboard-controller.yaml
dashboard-service.yaml
These are the standard configuration files:
...
    kubernetes.io/cluster-service: "true"
spec:
  containers:
  - name: kubernetes-dashboard
    #image: gcr.io/google_containers/kube-ui:v3
    image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.0.1
    resources:
...
After running it, check the logs; the output looks like this:
[root@test-ops-node1 pods]# kubectl logs kubernetes-dashboard-v1.0.1-mhz6w --namespace=kube-system
2016/05/20 08:54:10 Starting HTTP server on port 9090
2016/05/20 08:54:10 Creating API server client for http://localhost:8080
2016/05/20 08:54:10 Creating in-cluster Heapster client
2016/05/20 09:09:56 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 172.17.80.0:39277
2016/05/20 09:09:56 Getting list of all replication controllers in the cluster
2016/05/20 09:09:56 Get http://localhost:8080/api/v1/replicationcontrollers: dial tcp [::1]:8080: getsockopt: connection refused
2016/05/20 09:09:56 Outcoming response to 172.17.80.0:39277 with 500 status code
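The connection refused on localhost:8080 shows the dashboard container cannot reach the API server from inside the pod. A sketch of one way to fix this (the dashboard's --apiserver-host flag is real; the master address below is illustrative) is to point it at the API server explicitly in dashboard-controller.yaml:
containers:
- name: kubernetes-dashboard
  image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.0.1
  args:
  # assumed master address; use your API server's reachable insecure endpoint
  - --apiserver-host=http://10.199.135.36:8080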

Kubernetes-UI is an addon. On Ubuntu you can create it by following the "Deploy addons" section of this guide:
http://kubernetes.io/docs/getting-started-guides/ubuntu/

Related

How to get k8s controller manager's metrics?

I have deployed a k8s cluster with kubeadm. I want to get the controller manager's metrics with the following command:
curl -k https://localhost:10257/metrics
but got the following error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
So my question is, how to get k8s controller manager's metrics?
This is a forbidden error caused by a permissions issue: the request needs to be authenticated as a valid user. To fix it, create a service account, then grant that service account access to the metrics path through RBAC; the service account can then fetch the metrics.
As per this Role and ClusterRoleBinding doc, you need to allow the metrics path (replace /healthz in the example below with /metrics) and give it a try.
Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding to be effective):
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"] # '*' in a nonResourceURL is a suffix glob match
  verbs: ["get", "post"]
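A minimal end-to-end sketch with /metrics substituted in (the metrics-reader name and kube-system namespace are illustrative; kubectl create token needs kubectl 1.24+):
kubectl -n kube-system create serviceaccount metrics-reader

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
- kind: ServiceAccount
  name: metrics-reader
  namespace: kube-system
EOF

# authenticate the original curl with the service account's token
TOKEN=$(kubectl -n kube-system create token metrics-reader)
curl -k -H "Authorization: Bearer $TOKEN" https://localhost:10257/metrics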

Can't access deployed Kubernetes-Dashboard - Error 503

I followed the guide on the official Kubernetes Dashboard GitHub (https://github.com/kubernetes/dashboard) and now I'm facing a problem with accessing it. I used kubectl proxy to expose the internal port, but when I try to open the address:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
it just ends up with this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: dial tcp 192.168.23.7:8443: i/o timeout",
  "reason": "ServiceUnavailable",
  "code": 503
}
What am I supposed to do?
You get a timeout. Check whether the dashboard pods are running (kubectl get pods -n kubernetes-dashboard).
Also check that you have sufficient access control configured; see
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md
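A quick triage sketch (pod and deployment names will differ per cluster):
kubectl get pods -n kubernetes-dashboard -o wide                     # Running? on which node/IP?
kubectl describe svc kubernetes-dashboard -n kubernetes-dashboard   # endpoints populated?
kubectl logs -n kubernetes-dashboard deploy/kubernetes-dashboard    # startup errors?
If the pod IP matches the 192.168.23.7 in the error but the pod isn't Running, the proxy has nothing to dial, hence the i/o timeout.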

Scaling and listing deployments via Kubernetes HTTP API?

I'm trying to scale up/down some deployments over HTTP and also list the deployments on my cluster. I'm able to list pods, but can't figure out the deployments piece.
http://localhost:8080/api/v1/namespaces/default/deployments
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}
Deployments are in the apps/v1 API group, and you need to include apps in the URL. The API documentation for the "list deployments" endpoint gives the URL as
GET /apis/apps/v1/namespaces/{namespace}/deployments
You can use the normal read-modify-write sequence to change the replicas: field in a deployment spec to scale it.
There is also a dedicated endpoint to scale deployments, though it's slightly underdocumented. Manage replicas count for deployment using Kubernetes API
suggests reading and patching the scale resource, or there is an example with a minimal JSON payload.
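A sketch of both calls against the insecure local port from the question (the nginx deployment name is an assumption):
# list deployments in the default namespace
curl http://localhost:8080/apis/apps/v1/namespaces/default/deployments

# scale via the scale subresource
curl -X PATCH \
  -H 'Content-Type: application/merge-patch+json' \
  -d '{"spec":{"replicas":5}}' \
  http://localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx/scale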

Kubernetes: How are etcd component services health checked?

I have a k8s cluster in AWS that looks partially up, but won't actually do deployments. When looking at the health of components, etcd is shown as unhealthy. This looks like it's an issue with the etcd endpoints getting queried as http versus https:
kubectl --kubeconfig=Lab_42/kubeconfig.yaml get componentstatuses --namespace=default
NAME                 STATUS      MESSAGE                                                                                                  ERROR
controller-manager   Healthy     ok
scheduler            Healthy     ok
etcd-2               Unhealthy   Get http://ip-10-42-2-50.ec2.internal:2379/health: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
etcd-1               Unhealthy   Get http://ip-10-42-2-41.ec2.internal:2379/health: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
etcd-0               Unhealthy   Get http://ip-10-42-2-40.ec2.internal:2379/health: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
I'm not using the --ca-config option; instead I put the config values directly on the apiserver command line. My apiserver config:
command:
- /hyperkube
- apiserver
- --advertise-address=10.42.2.50
- --admission_control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
- --allow-privileged=true
- --authorization-mode=AlwaysAllow
- --bind-address=0.0.0.0
- --client-ca-file=/etc/ssl/kubernetes/k8s-ca.pem
- --etcd-cafile=/etc/ssl/etcd/etcd-ca.pem
- --etcd-certfile=/etc/ssl/etcd/etcd-client.pem
- --etcd-keyfile=/etc/ssl/etcd/etcd-client-key.pem
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-certificate-authority=/etc/ssl/kubernetes/k8s-ca.pem
- --kubelet-client-certificate=/etc/ssl/kubernetes/k8s-apiserver-client.pem
- --kubelet-client-key=/etc/ssl/kubernetes/k8s-apiserver-client-key.pem
- --kubelet-https=true
- --logtostderr=true
- --runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true,api/all
- --secure-port=443
- --service-account-lookup=false
- --service-cluster-ip-range=10.3.0.0/24
- --tls-cert-file=/etc/ssl/kubernetes/k8s-apiserver.pem
- --tls-private-key-file=/etc/ssl/kubernetes/k8s-apiserver-key.pem
The actual problem is that simple deployments don't actually do anything, and I'm not sure if etcd being unhealthy is causing the problem or not as we have many other certificates in the mix.
kubectl --kubeconfig=Lab_42/kubeconfig.yaml get deployments --namespace=default
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         0         0            0           2h
I can actually query etcd directly if I use the local https endpoint:
/usr/bin/etcdctl --ca-file /etc/ssl/etcd/etcd-ca.pem \
  --cert-file /etc/ssl/etcd/etcd-client.pem \
  --key-file /etc/ssl/etcd/etcd-client-key.pem \
  --endpoints 'https://127.0.0.1:2379' \
  get /registry/minions/ip-10-42-2-50.ec2.internal | jq "."
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "ip-10-42-2-50.ec2.internal",
    "selfLink": "/api/v1/nodes/ip-10-42-2-50.ec2.internal",
...SNIP
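The /health endpoint answers over the same TLS setup; a sketch using the certificate paths from the question:
curl --cacert /etc/ssl/etcd/etcd-ca.pem \
  --cert /etc/ssl/etcd/etcd-client.pem \
  --key /etc/ssl/etcd/etcd-client-key.pem \
  https://127.0.0.1:2379/health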
So it turns out that the component statuses were a red herring. The real problem was that my controller manager configuration was wrong: the master was set to http://master_ip:8080 instead of http://127.0.0.1:8080. The insecure port of the apiserver is not exposed on external interfaces, so the controller manager could not connect.
Switching to either the insecure loopback port or the secure :443 endpoint solved my problem.
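A sketch of the corrected controller-manager invocation, following the hyperkube style of the apiserver config above (only the --master line is the actual fix; other flags are omitted):
command:
- /hyperkube
- controller-manager
- --master=http://127.0.0.1:8080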
When using the CoreOS hyperkube and kubelet-wrapper, you lose out on the automatically linked container logs in /var/log/containers. To find them, you can do something like:
ls -latr /var/lib/docker/containers/*/*-json.log
I was actually able to see the errors causing my problem this way.
I think your kube-apiserver config is missing the option --etcd-servers=xxx

Kubernetes: kubectl returns 404 not found when fetch pod logs

I'm trying to get logs from my pod, but it doesn't work for some reason, even though kubectl describe pod and docker logs work fine. I have Kubernetes 1.2.3 on Debian 8 x64, installed manually on a single node.
$ kubectl logs -f web-backend-alzc1 --namespace=my-namespace --v=6
round_trippers.go:286] GET http://localhost:8080/api 200 OK in 0 milliseconds
round_trippers.go:286] GET http://localhost:8080/apis 200 OK in 0 milliseconds
round_trippers.go:286] GET http://localhost:8080/api/v1/namespaces/my-namespace/pods/web-backend-alzc1 200 OK in 1 milliseconds
round_trippers.go:286] GET http://localhost:8080/api 200 OK in 0 milliseconds
round_trippers.go:286] GET http://localhost:8080/apis 200 OK in 0 milliseconds
round_trippers.go:286] GET http://localhost:8080/api/v1/namespaces/my-namespace/pods/web-backend-alzc1/log?follow=true 404 Not Found in 1 milliseconds
helpers.go:172] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource ( pods/log web-backend-alzc1)",
  "reason": "NotFound",
  "details": {
    "name": "web-backend-alzc1",
    "kind": "pods/log"
  },
  "code": 404
}]
helpers.go:107] Error from server: the server could not find the requested resource ( pods/log web-backend-alzc1)
Is there something I should declare in the RC spec to enable logs for this pod?
I tried recreating the RC and looked at journalctl; I see these messages:
hyperkube[443]: I0510 12:14:13.754922 443 hairpin.go:51] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH
hyperkube[443]: I0510 12:14:13.756866 443 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
hyperkube[435]: W0510 12:14:38.835863 435 request.go:344] Field selector: v1 - serviceaccounts - metadata.name - default: need to check if this is versioned correctly.
This is caused by the --enable-debugging-handlers flag being set to false, which prevents the kubelet from attaching to containers and fetching the logs. Restarting the kubelet without this flag (it defaults to true) should fix it.
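A sketch of what to verify on a manually installed node (the flag is real; the unit file path is an assumption for a manual Debian install):
grep enable-debugging-handlers /etc/systemd/system/kubelet.service
# remove the flag entirely (it defaults to true) or set it explicitly:
#   --enable-debugging-handlers=true
systemctl daemon-reload && systemctl restart kubelet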