Pod assigned node role instead of service account role on AWS EKS - kubernetes

Before I get started: I have seen this question and this one, and they did not help.
I have a k8s cluster on AWS EKS on which I am deploying a custom k8s controller for my application. Using instructions from eksworkshop.com, I created my service account with the appropriate IAM role using eksctl. I assign the service account in my deployment.yaml as seen below. I also set the securityContext, as that seemed to solve the problem in another case, as described here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tel-controller
  namespace: tel
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tel-controller
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tel-controller
    spec:
      serviceAccountName: tel-controller-serviceaccount
      securityContext:
        fsGroup: 65534
      containers:
      - image: <image name>
        imagePullPolicy: Always
        name: tel-controller
        args:
        - --metrics-bind-address=:8080
        - --health-probe-bind-address=:8081
        - --leader-elect=true
        ports:
        - name: webhook-server
          containerPort: 9443
          protocol: TCP
        - name: metrics-port
          containerPort: 8080
          protocol: TCP
        - name: health-port
          containerPort: 8081
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          allowPrivilegeEscalation: false
But this does not seem to be working. If I describe the pod, I see the correct role.
AWS_DEFAULT_REGION: us-east-1
AWS_REGION: us-east-1
AWS_ROLE_ARN: arn:aws:iam::xxxxxxxxx:role/eksctl-eks-tel-addon-iamserviceaccount-tel-t-Role1-3APV5KCV33U8
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6ngsr (ro)
But if I call sts.GetCallerIdentity() from inside the controller application, I see the node role, and obviously I get an access-denied error.
caller identity: (go string) {
Account: "xxxxxxxxxxxx",
Arn: "arn:aws:sts::xxxxxxxxxxx:assumed-role/eksctl-eks-tel-nodegroup-voice-NodeInstanceRole-BJNYF5YC2CE3/i-0694a2766c5d70901",
UserId: "AROAZUYK7F2GRLKRGGNXZ:i-0694a2766c5d70901"
}
This is how I created my service account:
eksctl create iamserviceaccount --cluster ${EKS_CLUSTER_NAME} \
  --namespace tel \
  --name tel-controller-serviceaccount \
  --attach-policy-arn arn:aws:iam::xxxxxxxxxx:policy/telcontrollerRoute53Policy \
  --override-existing-serviceaccounts --approve
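A quick way to double-check the IRSA wiring, assuming the names above, is to read the role-arn annotation off the service account and look at the env vars injected into the pod (a rough sketch, adjust names as needed):

# role ARN annotation added by eksctl
kubectl -n tel get sa tel-controller-serviceaccount \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'

# AWS_* env vars injected into the running pod
kubectl -n tel exec deploy/tel-controller -- env | grep AWS_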
I have done this successfully in the past. The difference this time is that I also have a role and role bindings attached to this service account. My rbac.yaml for this SA:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tel-controller-role
  labels:
    app: tel-controller
rules:
  - apiGroups: [""]
    resources: [events]
    verbs: [create, delete, get, list, update, watch]
  - apiGroups: ["networking.k8s.io"]
    resources: [ingressclasses]
    verbs: [get, list]
  - apiGroups: ["", "networking.k8s.io"]
    resources: [services, ingresses]
    verbs: [create, get, list, patch, update, delete, watch]
  - apiGroups: [""]
    resources: [configmaps]
    verbs: [create, delete, get, update]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: [get, create, update]
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, list, watch, update]
  - apiGroups: ["", "networking.k8s.io"]
    resources: [services/status, ingresses/status]
    verbs: [update, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tel-controller-rolebinding
  labels:
    app: tel-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tel-controller-role
subjects:
  - kind: ServiceAccount
    name: tel-controller-serviceaccount
    namespace: tel
What am I doing wrong here? Thanks.
PS: I am deploying using kubectl
PPS: from go.mod I am using github.com/aws/aws-sdk-go v1.44.28

So the Go SDK has two methods to create a session. Apparently I was using the deprecated one, session.New, to create my session. Switching to the recommended method, session.NewSession, solved my problem.
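For illustration, a minimal sketch of the working path with aws-sdk-go v1 (names and error handling here are mine, not the original controller code):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// Unlike the deprecated session.New, session.NewSession returns an error
	// and goes through the full credential resolution, which picks up the
	// web identity token file that EKS mounts for the service account.
	sess, err := session.NewSession()
	if err != nil {
		log.Fatalf("creating session: %v", err)
	}

	out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatalf("GetCallerIdentity: %v", err)
	}

	// With IRSA working, this prints the service-account role, not the node role.
	fmt.Println("caller identity:", out)
}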

Related

Error: stat /mnt/jenkins-store: no such file or directory

Very confused: I used an EC2 instance to bootstrap an EKS cluster and everything worked completely fine yesterday. I deleted that cluster last night, just spun a new one up, and now I'm getting this error when trying to build my Jenkins pod:
Error: stat /mnt/jenkins-store: no such file or directory
I find it strange that this error didn't show up yesterday, since I set everything up the exact same way today. The error above is what I got when I described my Jenkins pod.
Here's my jenkins.yaml file for reference:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods","services"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
---
# Allows jenkins to create persistent volumes
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-crb
subjects:
- kind: ServiceAccount
  namespace: default
  name: jenkins
roleRef:
  kind: ClusterRole
  name: jenkinsclusterrole
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: jenkinsclusterrole
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: default
spec:
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
        - name: jenkins-home
          mountPath: /var
          subPath: jenkins_home
        - name: docker-sock-volume
          mountPath: "/var/run/docker.sock"
        imagePullPolicy: Always
      volumes:
      # This allows jenkins to use the docker daemon on the host, for running builds
      # see https://stackoverflow.com/questions/27879713/is-it-ok-to-run-docker-from-inside-docker
      - name: docker-sock-volume
        hostPath:
          path: /var/run/docker.sock
      - name: jenkins-home
        hostPath:
          path: /mnt/jenkins-store
      serviceAccountName: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: default
spec:
  type: NodePort
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
    nodePort: 31000
  - name: jnlp
    port: 50000
    targetPort: 50000
  selector:
    app: jenkins
---
Similarly to this answer, check whether the EC2 instance backing your newly spawned node actually has the volume/directory that jenkins.yaml expects to use. The lack of such a volume would explain the Error: stat /mnt/jenkins-store message.
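Since jenkins-home is a hostPath volume pointing at /mnt/jenkins-store, a quick check is whether that directory exists on the node the pod was scheduled to; a rough sketch (SSH access to the node is assumed):

# find the node the Jenkins pod landed on
kubectl get pod -l app=jenkins -o wide

# on that node, check for the directory and create it if it is missing
ls -ld /mnt/jenkins-store || sudo mkdir -p /mnt/jenkins-store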

How to fix "Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden"

I have HAProxy Ingress in Kubernetes AKS. After upgrading the Kubernetes version, I get errors from HAProxy. I tried to solve the problem by modifying my old haproxy.yaml to avoid deprecated APIs and to use the latest HAProxy Ingress image, but the error persists. How can I fix it?
I also tried this answer, but it doesn't work for me.
I checked this issue on GitHub, but the error persists even though I use v0.12-snapshot.3.
This is my modified haproxy.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: default
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: default
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
        - name: ingress-default-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
    - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: ingress-controller
      containers:
        - name: haproxy-ingress
          image: quay.io/jcmoraisjr/haproxy-ingress:v0.12.1
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "75m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          args:
            - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
            - --configmap=$(POD_NAMESPACE)/haproxy-ingress
            - --reload-strategy=reusesocket
          ports:
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1936
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10253
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
    - name: stat
      port: 1936
  selector:
    run: haproxy-ingress
The following is the output of kubectl logs:
I0307 20:52:16.873675 6 launch.go:215]
Name: HAProxy
Release: v0.12-snapshot.3
Build: git-b34edd0
Repository: https://github.com/jcmoraisjr/haproxy-ingress
I0307 20:52:16.873776 6 launch.go:218] watching for ingress resources with 'kubernetes.io/ingress.class' annotation: haproxy
I0307 20:52:16.873787 6 launch.go:225] watching for ingress resources with IngressClass' controller name: haproxy-ingress.github.io/controller
I0307 20:52:16.873802 6 launch.go:230] ignoring ingress resources without any class reference - --watch-ingress-without-class is false
I0307 20:52:16.873968 6 launch.go:492] Creating API client for https://10.0.0.1:443
I0307 20:52:16.902520 6 launch.go:504] Running in Kubernetes Cluster version v1.17 (v1.17.16) - git (clean) commit d88fadbd65c5e8bde22630d251766a634c7613b0 - platform linux/amd64
I0307 20:52:16.908078 6 launch.go:257] validated default/ingress-default-backend as the default backend
I0307 20:52:18.693995 6 listers.go:134] loading object cache...
E0307 20:52:18.696953 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:19.982962 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:23.089836 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:28.419408 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:37.624105 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
I0307 20:52:45.320562 6 main.go:47] Shutting down with signal terminated
I0307 20:52:45.320631 6 controller.go:208] shutting down controller queues
E0307 20:52:45.320675 6 listers.go:132] initial cache sync has timed out or shutdown has requested
I0307 20:52:45.320711 6 controller.go:87] HAProxy Ingress successfully initialized
I0307 20:52:45.320722 6 main.go:40] Exiting (0)
As per Jesús López's comment, upgrading Kubernetes from 1.17 to 1.18.4 and reinstalling HAProxy Ingress resolved the issue.
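For context, the log line itself points at the missing permission: the ClusterRole above never grants access to ingressclasses in the networking.k8s.io group, which v0.12 watches. A sketch of the kind of rule the newer manifests carry for this (shown for reference, not the exact upstream file):

  - apiGroups:
      - "networking.k8s.io"
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch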

how to deploy a statefulset when a pod security policy is in place in kubernetes

I am trying to play around with PodSecurityPolicies in kubernetes so pods can't be created if they are using the root user.
This is my psp definition:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.restrictive
spec:
  hostNetwork: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
and this is my statefulset definition
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      securityContext:
        # only takes integers.
        runAsUser: 1000
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
When trying to create this StatefulSet I get:
create Pod web-0 in StatefulSet web failed error: pods "web-0" is forbidden: unable to validate against any pod security policy:
It doesn't specify which policy I am violating, and since I specify that I want to run as user 1000, I am not running as root (hence my understanding is that this StatefulSet pod definition does not violate any rules defined in the PSP). There is no USER specified in the Dockerfile used for this image.
The other weird part is that this works fine for standard pods (kind: Pod instead of kind: StatefulSet). For example, this works just fine when the same PSP exists:
apiVersion: v1
kind: Pod
metadata:
  name: my-nodejs
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: my-node
    image: node
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
    command:
    - /bin/sh
    - -c
    - |
      npm install http-server -g
      npx http-server
What am I missing / doing wrong?
You seem to have forgotten to bind this PSP to a service account.
You need to apply the following:
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - eks.restrictive
EOF

cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOF
If you don't want to use the default account, you can create a separate service account and bind the role to it, as sketched below.
Read more about it in the k8s documentation - pod-security-policy.
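As a sketch of that last suggestion, a dedicated service account bound to the same ClusterRole could look like this (the names web-sa and psp-role-binding-web are hypothetical), after which the StatefulSet pod template would set serviceAccountName: web-sa:

# Hypothetical dedicated service account for the StatefulSet,
# bound to the psp-role ClusterRole defined above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding-web
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-role
subjects:
- kind: ServiceAccount
  name: web-sa
  namespace: default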

Whitelisting sysctls for containers in Kubernetes Kind

I'm trying to deploy a container in a Kubernetes Kind cluster. The container I'm trying to deploy needs a couple of sysctl flags to be set.
The deployment fails with:
forbidden sysctl: "kernel.msgmnb" not whitelisted
UPDATE
I have since added a PodSecurityPolicy as suggested, created a ClusterRole that grants usage of it, and assigned the ClusterRole to the default service account:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sysctl-psp
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
  allowedUnsafeSysctls:
  - kernel.msg*
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role_allow_sysctl
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['*']
  resourceNames:
  - sysctl-psp
- apiGroups: ['']
  resources:
  - replicasets
  - services
  - pods
  verbs: ['*']
- apiGroups: ['apps']
  resources:
  - deployments
  verbs: ['*']
The cluster role binding is like this:
kubectl -n <namespace> create rolebinding default:role_allow_sysctl --clusterrole=role_allow_sysctl --serviceaccount=<namespace>:default
I am then trying to create a deployment and a service in the same namespace:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  selector:
    matchLabels:
      app: test-app
      tier: dev
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: test-app
        tier: dev
    spec:
      securityContext:
        sysctls:
        - name: kernel.msgmnb
          value: "6553600"
        - name: kernel.msgmax
          value: "1048800"
        - name: kernel.msgmni
          value: "32768"
        - name: kernel.sem
          value: "128 32768 128 4096"
      containers:
      - image: registry:5000/<container>:1.0.0
        name: test-app
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 10666
          name: port-1
---
The problem remains the same, however: I'm getting multiple pods spawned, all failing with the same message, forbidden sysctl: "kernel.msgmnb" not whitelisted.
I don't think the --allowed-unsafe-sysctls flag can work with Kind nodes, because Kind nodes are themselves containers, whose sysctl FS is read-only.
My workaround is to change the needed sysctl values on my host machine; Kind nodes (and in turn their containers) will reuse these values.
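A sketch of that workaround, using the values from the deployment above (run on the host; making them survive a reboot, e.g. via /etc/sysctl.d/, is left out):

# Raise the IPC message-queue limits on the host; per the workaround above,
# the Kind node containers and their pods will pick up these values.
sudo sysctl -w kernel.msgmnb=6553600
sudo sysctl -w kernel.msgmax=1048800
sudo sysctl -w kernel.msgmni=32768
sudo sysctl -w kernel.sem="128 32768 128 4096"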

Kubernetes Dashboard not connecting in the network

I've tried many options to access kubernetes dashboard from a different machine, which is within the network of my master server.
From https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
I installed k8s dashboard using:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
And I opened the dashboard by running kubectl proxy, then changed the service YAML from ClusterIP to NodePort and added nodePort 32414. But it is not working either inside or outside the master.
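For reference, the ClusterIP-to-NodePort change can also be done with a patch rather than editing the YAML; a rough sketch, assuming the v2.0.0-beta8 manifest's kubernetes-dashboard namespace and service name:

# switch the dashboard service to NodePort and pin the node port
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 32414}]}}'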
So, I deleted it:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
Then, I created the dashboard using the YAML file below.
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32323
  selector:
    k8s-app: kubernetes-dashboard
Then, I added sa_cluster_admin.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
I applied both using kubectl apply -f <yaml_files>.
Then I can access the dashboard with the IP address and without running kubectl proxy, but it works only on the master.
When I try to connect from another machine at https://masternode_ip:NodePort_added, it does not connect.