K8s Dashboard not logging in (k8s version 1.11) - kubernetes

I set up a K8s (1.11) cluster using the kubeadm tool. It has one master and one node in the cluster.
I deployed the Dashboard UI there:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Created a service account (following this link: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
and
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Started the kube proxy: kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
And accessed the dashboard from a remote host using this URL: http://<k8s master node IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
It asks for a token to log in. I got the token using this command: kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
After copying and pasting the token in the browser, it is not logging in, and it is not showing an authentication error either. I'm not sure what is wrong here. Is my token wrong, or is my kube proxy command wrong?

I recreated all the steps in accordance with what you've posted.
It turns out the issue is in the <k8s master node IP>: you should use localhost in this case. So to access the dashboard through the proxy, you have to use:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
When you start kubectl proxy, you create a tunnel to your apiserver on the master node. By default, the Dashboard is started with ServiceType: ClusterIP. In this mode the port on the master node is not open, which is the reason you can't reach it on the master node IP. If you would like to use the master node IP, you have to change the ServiceType to NodePort.
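As a quick sanity check before changing anything, you can confirm the current service type (this assumes the Dashboard was deployed into kube-system with the default name, as in the question):
kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.type}'
If this prints ClusterIP, the NodePort change below is what you need.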
You have to delete the old service and update the config by changing service type to NodePort as in the example below (note that ClusterIP is not there because it is assumed by default).
Create a new yaml file named newservice.yaml:
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Delete the old service
kubectl delete service kubernetes-dashboard -n kube-system
Apply the new service
kubectl apply -f newservice.yaml
Run describe service
kubectl describe svc kubernetes-dashboard -n kube-system | grep "NodePort"
and you can use that port with the IP address of the master node:
Type:      NodePort
NodePort:  <unset>  30518/TCP
http://<k8s master node IP>:30518/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Note that the port number is generated randomly and yours will probably be different.
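If you would rather have a stable port than a random one, a sketch of the same spec with the nodePort pinned explicitly (assuming the port is free and within the cluster's NodePort range, 30000-32767 by default):
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30518  # any free port in the NodePort range; pinned so it survives re-creation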

Related

Expose every replica pods in a deployment/replicaset with nodeport service

I have 2 bare metal worker nodes and a replicaset with 10 pod replicas. How can I expose each pod using nodeport?
You can use the kubectl expose command to create a NodePort Service for a ReplicaSet.
This is a template that may be useful:
kubectl expose rs <REPLICASET_NAME> --port=<PORT> --target-port=<TARGET_PORT> --type=NodePort
The most important flags are:
--port
The port that the service should serve on. Copied from the resource being exposed, if unspecified.
--target-port
Name or number for the port on the container that the service should direct traffic to. Optional.
--type
Type for this service: ClusterIP, NodePort, LoadBalancer, or ExternalName. Default is 'ClusterIP'.
NOTE: Detailed information on this command can be found in the Kubectl Reference Docs.
I will create an example to illustrate how it works.
First, I created a simple app-1 ReplicaSet:
$ cat app-1-rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: nginx
        image: nginx
$ kubectl apply -f app-1-rs.yaml
replicaset.apps/app-1 created
$ kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
app-1   10        10        10      93s
Then I used the kubectl expose command to create a NodePort Service for the app-1 ReplicaSet:
### kubectl expose rs <REPLICASET_NAME> --port=<PORT> --target-port=<TARGET_PORT> --type=NodePort
$ kubectl expose rs app-1 --port=80 --type=NodePort
service/app-1 exposed
$ kubectl get svc app-1
NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
app-1   NodePort   10.8.6.92    <none>        80:32275/TCP   18s
$ kubectl describe svc app-1 | grep -i endpoints
Endpoints: 10.4.0.15:80,10.4.0.16:80,10.4.0.17:80 + 7 more...
If you want to make some modifications before creating the Service, you can export the Service definition to a manifest file and apply it after the modifications:
kubectl expose rs app-1 --port=80 --type=NodePort --dry-run=client -oyaml > app-1-svc.yaml
Additionally, it's worth considering whether you can use a Deployment instead of directly using a ReplicaSet. As we can find in the Kubernetes ReplicaSet documentation:
Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all.
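For illustration, a minimal Deployment equivalent of the app-1 ReplicaSet above might look like this (same labels and image, only the kind changes); kubectl expose deployment app-1 --port=80 --type=NodePort would then work the same way:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: nginx
        image: nginx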
You can create a Service whose selector points to the pods, use the NodePort type, and make sure kube-proxy is deployed on every node.
An example yaml is below; you should set the right selector and port. Get the service details to find the nodePort after it is created.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
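Once the Service exists, one way to read the assigned nodePort is with jsonpath (my-service is the name from the example above):
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'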
The official doc is good enough: https://kubernetes.io/docs/concepts/services-networking/service/

"Site can not be reached" problem while accessing a Kubernetes dashboard service deployed on a remote machine from a local laptop

When I am trying to access the Kubernetes dashboard service from my local laptop, I am getting the message that the site cannot be reached.
Procedure followed:
I followed the documentation at the following link:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
I created my cluster with one master and one worker node on on-premises machines. Each machine is Ubuntu 16.04. I installed kubectl and access this cluster from my control VM, where I am running Jenkins for a CI/CD pipeline. From this control VM I followed the steps to bind the cluster role and deployed the Kubernetes dashboard as explained in the documentation.
I ran the following command to deploy the default dashboard service from my control VM using the kubectl command (outside the cluster):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
I created the role binding yaml dashboard-adminuser.yaml with the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
And created this by using the following command:
kubectl apply -f dashboard-adminuser.yaml
Retrieved the token by using the following command:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
And ran the following command to serve the dashboard:
kubectl proxy
When I run the command, it shows "Starting to serve on 127.0.0.1:8001".
Then I tried to access the dashboard by putting the following URL in the browser:
http://192.168.16.170:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
But I am only getting the message that the site cannot be reached.
Updates
Now I am trying to access it using the NodePort mechanism by editing the dashboard service type to NodePort. When I try to access the URL, I get an error like "your connection is not private".
Where have I gone wrong?
You need to change the service type to NodePort to access it from your local machine.
NodePort
This way of accessing Dashboard is only recommended for development environments in a single node setup.
Edit kubernetes-dashboard service.
$ kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
You should see a yaml representation of the service. Change type: ClusterIP to type: NodePort and save the file.
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard
  uid: 8e48f478-993d-11e7-87e0-901b0e532516
spec:
  clusterIP: 10.100.124.90
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Next we need to check the port on which the Dashboard was exposed.
$ kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.100.124.90   <nodes>       443:31707/TCP   21h
Dashboard has been exposed on port 31707 (HTTPS). Now you can access it from your browser at https://<master-ip>:31707. master-ip can be found by executing kubectl cluster-info. Usually it is either 127.0.0.1 or the IP of your machine, assuming that your cluster is running directly on the machine on which these commands are executed.
If you are trying to expose the Dashboard using NodePort on a multi-node cluster, you have to find out the IP of the node on which the Dashboard is running in order to access it. Instead of accessing https://<master-ip>:<nodePort> you should access https://<node-ip>:<nodePort>.
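One way to find that node (assuming the v2 manifest, which labels the pod k8s-app=kubernetes-dashboard and runs it in the kubernetes-dashboard namespace) is:
kubectl -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
The NODE column shows where the Dashboard pod is scheduled.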
The UI can only be accessed from the machine where the command (kubectl proxy) is executed. On that machine, try:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Edit:
Otherwise, use the NodePort mechanism to access it without using kubectl proxy:
https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/1.7.x-and-above.md#nodeport
Update:
Accessing the dashboard using kubectl proxy
Run kubectl proxy and then access
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
I used a token for auth, and here is how I created the token:
# Create the service account in the current namespace
# (we assume default)
kubectl create serviceaccount my-dashboard-sa
# Give that service account root on the cluster
kubectl create clusterrolebinding my-dashboard-sa \
  --clusterrole=cluster-admin \
  --serviceaccount=default:my-dashboard-sa
# Find the secret that was created to hold the token for the SA
kubectl get secrets
# Show the contents of the secret to extract the token
kubectl describe secret my-dashboard-sa-token-xxxxx
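As a convenience, the token can also be extracted and decoded in one step; this sketch assumes the generated secret name still begins with my-dashboard-sa-token:
kubectl get secret $(kubectl get secrets | grep my-dashboard-sa-token | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode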
Accessing the dashboard via publicly exposed API Server
Use this URL in the browser: https://<master-ip>:<apiserver-port>/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
This will give you the following error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}
To solve the above error, apply the below yaml to configure RBAC:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-anonymous
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["https:kubernetes-dashboard:"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- nonResourceURLs: ["/ui", "/ui/*", "/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-anonymous
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard-anonymous
subjects:
- kind: User
  name: system:anonymous
You will still need either a kubeconfig or a token to access it. A token can be created by the mechanism described above.

Kubernetes ingress (hostNetwork=true), can't reach service by node IP - GCP

I am trying to expose a deployment using Ingress, where the DaemonSet has hostNetwork=true, which would allow me to skip the additional LoadBalancer layer and expose my service directly on the Kubernetes external node IP. Unfortunately, I can't reach the Ingress controller from the external network.
I am running Kubernetes version 1.11.16-gke.2 on GCP.
I set up my fresh cluster like this:
gcloud container clusters get-credentials gcp-cluster
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
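To verify that the chart really applied controller.hostNetwork=true, a label-independent check of the pods in that namespace can help (with host networking, the pod IP in kubectl get pods -o wide will equal the node IP):
kubectl -n ingress-nginx get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.hostNetwork}{"\n"}{end}'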
I run the deployment:
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF
Then I create the service:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-node
EOF
and the ingress resource:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-node-single-ingress
spec:
  backend:
    serviceName: hello-node
    servicePort: 80
EOF
I get the node external IP:
12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address'
"35.197.204.75"
Check if ingress is running:
12:50 $ kubectl get ing
NAME                        HOSTS   ADDRESS         PORTS   AGE
hello-node-single-ingress   *       35.197.204.75   80      8m
12:50 $ kubectl get pods --namespace ingress-nginx
NAME                                                     READY   STATUS    RESTARTS   AGE
ingress-nginx-ingress-controller-7kqgz                   1/1     Running   0          23m
ingress-nginx-ingress-default-backend-677b99f864-tg6db   1/1     Running   0          23m
12:50 $ kubectl get svc --namespace ingress-nginx
NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
ingress-nginx-ingress-controller        ClusterIP   10.43.250.102   <none>        80/TCP,443/TCP   24m
ingress-nginx-ingress-default-backend   ClusterIP   10.43.255.43    <none>        80/TCP           24m
Then trying to connect from the external network:
curl 35.197.204.75
Unfortunately, it times out.
On the Kubernetes GitHub site there is a page regarding the ingress-nginx (host-network: true) setup:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
which mentions:
"This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."
I've tried to follow that and delete the ingress-nginx services:
kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend
but this doesn't help.
Any ideas how to set up the Ingress on the node external IP? What am I doing wrong? The amount of confusion over running Ingress reliably without the LB overwhelms me. Any help much appreciated!
EDIT:
When I create another service that accesses my deployment via NodePort:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node2
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
  selector:
    app: hello-node
EOF
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-node    ClusterIP   10.47.246.91   <none>        80/TCP         2m
hello-node2   NodePort    10.47.248.51   <none>        80:31151/TCP   6s
I still can't access my service e.g. using: curl 35.197.204.75:31151.
However, when I create a third service with the LoadBalancer type:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node3
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: hello-node
EOF
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
hello-node    ClusterIP      10.47.246.91   <none>           80/TCP         7m
hello-node2   NodePort       10.47.248.51   <none>           80:31151/TCP   4m
hello-node3   LoadBalancer   10.47.250.47   35.189.106.111   80:31367/TCP   56s
I can access my service using the external LB IP: 35.189.106.111.
The problem was missing firewall rules on GCP.
Found the answer: https://stackoverflow.com/a/42040506/2263395
Running:
gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301
Here 80 is the ingress port and 30301 is the NodePort. In production you would probably use just the ingress port.
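To double-check what the rule allows after creating it, you can describe it:
gcloud compute firewall-rules describe myservice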

Not able to create Prometheus in K8S cluster

I'm trying to install Prometheus on my K8S cluster.
When I run the command
kubectl get namespaces
I get the following namespaces:
default       Active   26h
kube-public   Active   26h
kube-system   Active   26h
monitoring    Active   153m
prod          Active   5h49m
Now I want to install Prometheus via
helm install stable/prometheus --name prom -f k8s-values.yml
and I get this error:
Error: release prom-demo failed: namespaces "default" is forbidden:
User "system:serviceaccount:kube-system:default" cannot get resource
"namespaces" in API group "" in the namespace "default"
Even if I switch to the monitoring namespace, I get the same error.
The k8s-values.yml looks like the following:
rbac:
  create: false
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
Any idea what could be missing here?
You are getting this error because you are using RBAC without granting the right permissions.
Give Tiller the permissions it needs:
taken from https://github.com/helm/helm/blob/master/docs/rbac.md
Example: Service account with cluster-admin role
In rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don't have to define it explicitly.
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
Create a service account for prometheus:
Change the value of rbac.create to true:
rbac:
  create: true
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
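With Tiller granted cluster-admin and rbac.create set to true, re-running the install command from the question should then go through:
helm install stable/prometheus --name prom -f k8s-values.yml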
Look at the Prometheus Operator to spin up all monitoring services from the Prometheus stack.
The link below is helpful:
https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus/manifests
All the manifests are listed there. Go through those files and deploy whatever you need to monitor in your k8s cluster.

How to access the dashboard service internally in Kubernetes

I have a kubernetes-dashboard service with type ClusterIP. How can I access the dashboard internally? I use Alibaba Cloud.
My service.yml:
---
kind: Service
apiVersion: v1
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
I would like to run my dashboard at http://MASTER_IP:80
The status when running kubectl cluster-info:
Kubernetes master is running at https://MASTER_IP:6443
Heapster is running at https://MASTER_IP:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://MASTER_IP:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://MASTER_IP:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
monitoring-influxdb is running at https://MASTER_IP:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
When I access https://MASTER_IP:6443, I get the error default backend - 404.
Note: Don't use NodePort and kubectl proxy.
Many thanks.
Change the dashboard service type to NodePort; then you can access the dashboard using any cluster node IP:
Change the service type from ClusterIP to NodePort:
kubectl -n kube-system edit svc kubernetes-dashboard
Get the service port number:
kubectl -n kube-system get svc kubernetes-dashboard -o yaml | grep nodePort
Access the dashboard at https://<master-server-IP>:<nodePort>
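If you prefer a non-interactive change over kubectl edit, the same switch can be made with a one-line patch (a sketch against the same service and namespace as above):
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'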
In this answer you can find the different ways to access the dashboard.
If you are not using NodePort or kubectl proxy, your best options are:
API Server
In case the Kubernetes API server is exposed and accessible from outside, you can directly access the dashboard at: https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Ingress
The Dashboard can also be exposed using an Ingress resource. For example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kube-system
spec:
  rules:
  - host: kubernetes
    http:
      paths:
      - path: /ui
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
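Assuming the manifest above is saved as dashboard-ingress.yaml (the host and path are just example values), it can be applied and checked like this:
kubectl apply -f dashboard-ingress.yaml
kubectl -n kube-system get ingress kubernetes-dashboard-ingress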