EKS Kubernetes user with RBAC seen as system:anonymous - kubernetes

I've been following this post to create user access to my kubernetes cluster (running on Amazon EKS). I created a key and CSR, approved the request, and downloaded the certificate for the user. Then I created a kubeconfig file with the key and certificate. When I run kubectl with this kubeconfig, I'm recognized as system:anonymous.
$ kubectl --kubeconfig test-user-2.kube.yaml get pods
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default"
I expected the user to be recognized, but instead I get denied access.
$ kubectl --kubeconfig test-user-2.kube.yaml version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl --kubeconfig test-user-2.kube.yaml config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: REDACTED
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: test-user-2
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: test-user-2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
# running with my other account (which uses heptio-authenticator-aws)
$ kubectl describe certificatesigningrequest.certificates.k8s.io/user-request-test-user-2
Name: user-request-test-user-2
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 01 Aug 2018 15:20:15 +0200
Requesting User:
Status: Approved,Issued
Subject:
Common Name: test-user-2
Serial Number:
Events: <none>
I created a ClusterRoleBinding with the admin (also tried cluster-admin) role for this user, but that should not matter for this step. I'm not sure how to further debug 1) whether the user was created or not, or 2) whether I missed some configuration.
Any help is appreciated!

As mentioned in this article:
When you create an Amazon EKS cluster, the IAM entity (user or role, for example for federated users) that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.
Check if you have aws-auth ConfigMap applied to your cluster:
kubectl describe configmap -n kube-system aws-auth
If the ConfigMap is already present, skip this step and proceed to step 3.
If the ConfigMap has not been applied yet, do the following:
Download the stock ConfigMap:
curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml
Adjust it by putting your NodeInstanceRole ARN into the rolearn: field. To find the NodeInstanceRole value, check out this manual (steps 3.8 - 3.10).
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
Apply this config map to the cluster:
kubectl apply -f aws-auth-cm.yaml
Wait for the cluster nodes to become Ready:
kubectl get nodes --watch
Edit aws-auth ConfigMap and add users to it according to the example below:
kubectl edit -n kube-system configmap/aws-auth
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
Save and exit the editor.
Create kubeconfig for your IAM user following this manual.
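For reference, the kubeconfig for an IAM user generally ends up looking like the sketch below. This is only a minimal example: it assumes aws-iam-authenticator is on your PATH, and the endpoint, CA data, and cluster name are placeholders you fill in from your own cluster.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-ca-cert>
    server: <cluster-endpoint-url>
  name: eks-cluster
contexts:
- context:
    cluster: eks-cluster
    user: aws-user
  name: eks-cluster
current-context: eks-cluster
kind: Config
preferences: {}
users:
- name: aws-user
  user:
    exec:
      # v1alpha1 matches the 1.10-era clusters discussed here; newer clients use v1beta1
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - <cluster-name>
kubectl then calls aws-iam-authenticator to fetch a token on each request, and the resulting IAM identity is matched against the mapUsers / mapRoles entries in the aws-auth ConfigMap above.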

I got this back from AWS support today.
Thanks for your patience. I have just heard back from the EKS team. They have confirmed that the aws-iam-authenticator has to be used with EKS and, because of that, it is not possible to authenticate using certificates.
I haven't heard whether this is expected to be supported in the future, but it is definitely broken at the moment.

This seems to be a limitation of EKS. Even though the CSR is approved, the user cannot authenticate. I used the same procedure on another Kubernetes cluster and it worked fine.

Related

K3s kubeconfig authenticate with token instead of client cert

I set up K3s on a server with:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --cluster-init --disable=traefik --write-kubeconfig-mode 644" sh -s -
Then I grabbed the kube config from /etc/rancher/k3s/k3s.yaml and copied it to my local machine so I can interact with the cluster from my machine rather than from the server node I installed K3s on. I had to swap out references to 127.0.0.1 and change them to the actual hostname of the server I installed K3s on, but other than that it worked.
I then hooked up 2 more server nodes to the cluster for a High Availability setup using:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --server https://{hostname or IP of server 1}:6443 --disable=traefik --write-kubeconfig-mode 644" sh -s -
Now on my local machine I run kubectl get pods (for example) and that works. But I want a highly available setup, so I placed a TCP load balancer (NGINX, actually) in front of my cluster. Now I am trying to connect to the Kubernetes API through that proxy / load balancer, and unfortunately, since my ~/.kube/config uses a client certificate for authentication, this no longer works: the load balancer / proxy that lives in front of the server cannot pass my client cert on to the K3s server.
My ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {omitted}
    server: https://my-cluster-hostname:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: {omitted}
    client-key-data: {omitted}
I also exported the client cert and key from my kube config to files and hit the API server with curl: it works when I hit the server nodes directly but NOT when I go through my proxy / load balancer.
What I would like to do instead of the client certificate approach is use token authentication, as my proxy would not interfere with that. However, I am not sure how to get such a token. I read the Kubernetes Authenticating guide; specifically, I tried creating a new service account and getting the token associated with it as described in the Service Account Tokens section, but that also did not work. I also dug through the K3s server config options to see if there was any mention of a static token file, etc., but didn't find anything that seemed likely.
Is this some limitation of K3s or am I just doing something wrong (likely)?
My kubectl version output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7+k3s1", GitCommit:"ac70570999c566ac3507d2cc17369bb0629c1cc0", GitTreeState:"clean", BuildDate:"2021-11-29T16:40:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
I figured out an approach that works for me by reading through the Kubernetes Authenticating Guide in more detail. I settled on the Service Account Tokens approach as it says:
Normally these secrets are mounted into pods for in-cluster access to
the API server, but can be used from outside the cluster as well.
My use is for outside the cluster.
First, I created a new ServiceAccount called cluster-admin:
kubectl create serviceaccount cluster-admin
I then created a ClusterRoleBinding to assign cluster-wide permissions to my ServiceAccount (I named this cluster-admin-manual because K3s already had created one called cluster-admin that I didn't want to mess with):
kubectl create clusterrolebinding cluster-admin-manual --clusterrole=cluster-admin --serviceaccount=default:cluster-admin
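To double-check that the binding took effect, something like this should answer yes (a sketch, assuming the ServiceAccount lives in the default namespace as created above):
# can the service account list pods?
kubectl auth can-i list pods --as=system:serviceaccount:default:cluster-admin
# can it do anything at all (cluster-admin should say yes)?
kubectl auth can-i '*' '*' --as=system:serviceaccount:default:cluster-admin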
Now you have to get the Secret that is created for you when you created your ServiceAccount:
kubectl get serviceaccount cluster-admin -o yaml
You'll see something like this returned:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-12-20T15:55:55Z"
  name: cluster-admin
  namespace: default
  resourceVersion: "3973955"
  uid: 66bab124-8d71-4e5f-9886-0bad0ebd30b2
secrets:
- name: cluster-admin-token-67jtw
Get the Secret content with:
kubectl get secret cluster-admin-token-67jtw -o yaml
In that output you will see the data/token property. This is a base64 encoded JWT bearer token. Decode it with:
echo {base64-encoded-token} | base64 --decode
Now you have your bearer token and you can add a user to your ~/.kube/config with the following command. You can also paste that JWT into jwt.io to take a look at the properties and make sure you base64 decoded it properly.
kubectl config set-credentials my-cluster-admin --token={token}
Then make sure your existing context in your ~/.kube/config has the user set appropriately (I did this manually by editing my kube config file, but there's probably a kubectl config command for it; see the sketch after the example below). For example:
- context:
    cluster: my-cluster
    user: my-cluster-admin
  name: my-cluster
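For the record, there does seem to be a kubectl config command for this; a minimal sketch using the placeholder names from the example above:
kubectl config set-context my-cluster --cluster=my-cluster --user=my-cluster-admin
kubectl config use-context my-cluster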
My user in the kube config looks like this:
- name: my-cluster-admin
  user:
    token: {token}
Now I can authenticate to the cluster with the token, which my proxy / load balancer does not interfere with, instead of relying on a transport-layer mechanism (TLS with mutual auth) that it cannot pass through.
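As a quick sanity check, the token also works for a plain HTTPS call through the load balancer, for example (hostname and Secret name are the placeholders/values from above):
# pull the token straight out of the Secret created for the ServiceAccount
TOKEN=$(kubectl get secret cluster-admin-token-67jtw -o jsonpath='{.data.token}' | base64 --decode)
# hit the API server through the load balancer using only the bearer token
curl -k -H "Authorization: Bearer ${TOKEN}" https://my-cluster-hostname:6443/api/v1/namespaces/default/pods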
Other resources I found helpful:
Kubernetes — Role-Based Access Control (RBAC) Overview by Anish Patel

Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get,

[xueke#master-01 admin]$ kubectl logs nginx-deployment-76bf4969df-999x8
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-deployment-76bf4969df-999x8)
[xueke#master-01 admin]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I specified the admin user here.
How do I need to modify it?
The above error means your apiserver doesn't have the credentials (a kubelet client cert and key) to authenticate its log/exec requests to the kubelet, hence the Forbidden error message.
You need to provide --kubelet-client-certificate=<path_to_cert> and --kubelet-client-key=<path_to_key> to your apiserver; this way the apiserver authenticates to the kubelet with that certificate and key pair.
For more information, have a look at:
https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/
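For example, on a kubeadm-provisioned control plane these flags usually go in the kube-apiserver static pod manifest; the paths below are the typical kubeadm defaults and may differ in your setup:
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt; kubeadm default paths assumed)
spec:
  containers:
  - command:
    - kube-apiserver
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    # ...remaining flags unchanged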
In our case, the error stemmed from Azure services being degraded because of a bug in DNS resolution introduced in Ubuntu 18.04.
See Azure status and the technical thread. I ran this command to set a fallback DNS address on the nodes:
az vmss list-instances -g <resourcegroup> -n vmss --query "[].id" --output tsv \
| az vmss run-command invoke --scripts "echo FallbackDNS=168.63.129.16 >> /etc/systemd/resolved.conf; systemctl restart systemd-resolved.service" --command-id RunShellScript --ids @-
That's an RBAC error. The user has no permission to see logs. If you have a user with cluster-admin permissions, you can fix this error with
kubectl create clusterrolebinding the-boss --user system:anonymous --clusterrole cluster-admin
Note: it is not a good idea to give an anonymous user the cluster-admin role, but it will fix the issue.
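A less drastic sketch, if you do want to open anonymous access at all, is to bind only what the error actually complains about instead of cluster-admin (the resource names below follow the error message above and are worth checking against your own):
kubectl create clusterrole anonymous-log-reader --verb=get,list --resource=pods,pods/log,nodes/proxy
kubectl create clusterrolebinding anonymous-log-reader --clusterrole=anonymous-log-reader --user=system:anonymous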

openshift Crash Loop Back Off error with turbine-server

Hi, I created a project in OpenShift and attempted to add a turbine-server image to it. A Pod was added but I keep receiving the following error in the logs. I am very new to OpenShift and I would appreciate any advice or suggestions as to how to resolve this error. I can supply any further information that is required.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/booking/pods/turbine-server-2-q7v8l . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
How to diagnose
Make sure you have configured a service account, a role, and a role binding to that account. Make sure the service account is set in the pod spec, as shown below.
spec:
  serviceAccountName: your-service-account
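If those objects do not exist yet, a minimal sketch of creating them with oc might look like this (the service account name and the booking namespace from the error above are assumptions; pick the role your pod actually needs):
oc create serviceaccount your-service-account -n booking
# grant it a role; view is just an example
oc policy add-role-to-user view -z your-service-account -n booking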
Start monitoring the atomic-openshift-node service on the node where the pod is deployed, as well as the API server.
$ journalctl -b -f -u atomic-openshift-node
Run the pod and monitor the journald output. You would see "Forbidden".
Jan 28 18:27:38 <hostname> atomic-openshift-node[64298]:
logging error output: "Forbidden (user=system:serviceaccount:logging:appuser, verb=get, resource=nodes, subresource=proxy)"
This means the service account appuser does not have the authorisation to do get on the nodes/proxy resource. Update the role to allow the verb "get" on that resource:
- apiGroups: [""]
  resources:
    - "nodes"
    - "nodes/status"
    - "nodes/log"
    - "nodes/metrics"
    - "nodes/proxy"    # <---- add this
    - "nodes/spec"
    - "nodes/stats"
    - "namespaces"
    - "events"
    - "services"
    - "pods"
    - "pods/status"
  verbs: ["get", "list", "watch"]
Note that some resources are not in the default legacy "" API group; see Unable to list deployments resources using RBAC.
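For instance, deployments live in the apps (or legacy extensions) API group rather than the core "" group, so a rule covering them would look roughly like this:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]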
How to verify the authorisations
To verify who can execute a given verb against a resource, for example the patch verb against pods:
$ oadm policy who-can patch pod
Namespace: default
Verb:      patch
Resource:  pods
Users:     auser
           system:admin
           system:serviceaccount:cicd:jenkins
Groups:    system:cluster-admins
           system:masters
OpenShift vs K8S
OpenShift has the oc policy and oadm policy commands:
oc policy add-role-to-user <role> <user-name>
oadm policy add-cluster-role-to-user <role> <user-name>
This is the same as a role binding in K8S. You can use K8S RBAC objects, but the apiVersion in OpenShift needs to be v1 instead of rbac.authorization.k8s.io/v1 as in K8S.
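As a rough illustration (the names and namespace are placeholders, and the exact fields vary by OpenShift version), a RoleBinding in the legacy v1 form looks something like this:
apiVersion: v1
kind: RoleBinding
metadata:
  name: view-binding
  namespace: booking
roleRef:
  name: view
subjects:
- kind: ServiceAccount
  name: your-service-account
  namespace: booking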
References
Managing Authorization Policies
Using RBAC Authorization
User and Role Management
Hi, thank you for the replies - I was able to resolve the issue by executing the following commands using the oc command line utility:
oc policy add-role-to-group view system:serviceaccounts -n <project>
oc policy add-role-to-group edit system:serviceaccounts -n <project>

kubectl not working from other host, but works fine from localhost

What happened:
I'm testing Kubernetes 1.9.0 to upgrade the production cluster and I cannot access it with kubectl from another host.
I'm getting the following error:
pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\"
I tried with the admin user and with another user created earlier with a read-only role.
What you expected to happen:
Works fine on kubernetes 1.5
How to reproduce it (as minimally and precisely as possible):
I installed kubernetes 1.9.0 with kubeadm.
I can access the local cluster from the master with the following command:
kubectl --kubeconfig kubeconfig get pods
with server: https://127.0.0.1:6443
I added a rule on haproxy to redirect that port to another, and then ran some tests:
The old environment has a proxy configured so that all requests to https://example.org/api/k8s are redirected to the k8s API endpoint.
I configured this new environment with the same configuration, but it is not working. (Error: pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\")
I configured this new environment with a new DNS name, proxying in TCP mode and linking port 443 to 6443, but it is not working. (Error: pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\")
The kubeconfig file sets the server field to: https://k8s.example.org
Anything else we need to know?:
kubeconfig file (kubeconfig for admin user is similar):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ***
    server: https://127.0.0.1:6443
    #server: https://k.example.org
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: read_only
  name: read_only-context
current-context: read_only-context
kind: Config
preferences: {}
users:
- name: read_only
  user:
    as-user-extra: {}
    client-certificate: /etc/kubernetes/users/read_only/read_only.crt
    client-key: /etc/kubernetes/users/read_only/read_only.key
    username: read_only
Environment:
Kubernetes version (use kubectl version): 1.9.0
Cloud provider or hardware configuration: bare metal (in fact a VM on AWS)
OS (e.g. from /etc/os-release): Centos 7
Kernel (e.g. uname -a): 3.10.0-514.10.2.el7.x86_64
Install tools: kubeadm
Others:

spinnaker /halyard : Unable to communicate with the Kubernetes cluster

I am trying to deploy Spinnaker on multiple nodes. I have 2 VMs: the first with Halyard and kubectl, the second containing the Kubernetes master API.
My kubectl is properly configured and able to communicate with the remote Kubernetes API;
"kubectl get namespaces" works:
kubectl get namespaces
NAME          STATUS    AGE
default       Active    16d
kube-public   Active    16d
kube-system   Active    16d
but when I run this command:
hal config provider -d kubernetes account add spin-kubernetes --docker-registries myregistry
I get this error:
Add the spin-kubernetes account
Failure
Problems in default.provider.kubernetes.spin-kubernetes:
- WARNING You have not specified a Kubernetes context in your
halconfig, Spinnaker will use "default-system" instead.
? We recommend explicitly setting a context in your halconfig, to
ensure changes to your kubeconfig won't break your deployment.
? Options include:
- default-system
! ERROR Unable to communicate with your Kubernetes cluster:
Operation: [list] for kind: [Namespace] with name: [null] in namespace:
[null] failed..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
- Failed to add account spin-kubernetes for provider
kubernetes.
From the error message there seem to be two approaches: set your halconfig to talk to the default-system context so Spinnaker can communicate with your cluster, or the other way around, i.e. configure your kubectl context.
Try this:
kubectl config view
I suppose you'll see the context and current-context there set to default-system; try changing those.
For more help do
kubectl config --help
I guess you're looking for the set-context option.
Hope that helps.
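Concretely, something along these lines should work (a sketch; it assumes the context you want is the one kubectl is currently using and that the account is the spin-kubernetes one from the question):
# list the contexts kubectl knows about and pick the one for your cluster
kubectl config get-contexts
# point the Spinnaker account at that context explicitly
hal config provider kubernetes account edit spin-kubernetes --context $(kubectl config current-context)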
You can set this in your halconfig as mentioned by @Naim Salameh.
Another way is to try setting your K8S cluster info in your default Kubernetes config ~/.kube/config.
I'm not certain this will work since you are running Halyard and kubectl on different VMs.
# ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    server: http://my-kubernetes-url
  name: my-k8s-cluster
contexts:
- context:
    cluster: my-k8s-cluster
    namespace: default
  name: my-context
current-context: my-context
kind: Config
preferences: {}
users: []