Kubernetes RBAC - user has access to get pods but it says 'Unauthorized' - kubernetes

I have configured Keycloak for Kubernetes RBAC.
The user appears to have access to get pods:
vagrant@haproxy:~/.kube$ kubectl auth can-i get pods --user=oidc
Warning: the server doesn't have a resource type 'pods'
yes
vagrant@haproxy:~/.kube$ kubectl get pods --user=oidc
error: You must be logged in to the server (Unauthorized)
My kubeconfig entry for the user looks like this:
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://test.example.com/auth/realms/kubernetes
      - --oidc-client-id=kubernetes
      - --oidc-client-secret=e479f74d-d9fd-415b-b1db-fd7946d3ad90
      - --username=test
      - --grant-type=authcode-keyboard
      command: kubectl
Is there any way to get this to work?

The issue was with the IP address of the cluster. You might have to configure a DNS name instead of the IP address.
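For what it's worth, the OIDC settings on the API server have to line up exactly with the ones in the kubeconfig above, and the issuer URL must be resolvable from the API server (hence the DNS name rather than an IP). A minimal sketch of the corresponding kube-apiserver flags, assuming a kubeadm-style cluster and the issuer from the question; the claim names are only examples:
# added to the kube-apiserver command in /etc/kubernetes/manifests/kube-apiserver.yaml
- --oidc-issuer-url=https://test.example.com/auth/realms/kubernetes
- --oidc-client-id=kubernetes
- --oidc-username-claim=preferred_username
- --oidc-groups-claim=groups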

Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)

[xueke@master-01 admin]$ kubectl logs nginx-deployment-76bf4969df-999x8
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-deployment-76bf4969df-999x8)
[xueke@master-01 admin]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I specified the admin user here
How do I need to modify it?
The above error means your apiserver doesn't have the credentials (kubelet client certificate and key) to authenticate against the kubelet for log/exec commands, hence the Forbidden error message.
You need to pass --kubelet-client-certificate=<path_to_cert> and --kubelet-client-key=<path_to_key> to your apiserver; that way the apiserver authenticates to the kubelet with that certificate and key pair.
For more information, have a look at:
https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/
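On a kubeadm-provisioned cluster, a minimal sketch of where those flags typically go; the certificate paths below are the kubeadm defaults and may differ on your setup:
# /etc/kubernetes/manifests/kube-apiserver.yaml (static pod; the kubelet restarts it on save)
spec:
  containers:
  - command:
    - kube-apiserver
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key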
In our case, the error stemmed from Azure services being degraded because of a DNS-resolution bug introduced in Ubuntu 18.04.
See the Azure status page and the technical thread. I ran this command to set a fallback DNS address on the nodes:
az vmss list-instances -g <resourcegroup> -n vmss --query "[].id" --output tsv \
  | az vmss run-command invoke --scripts "echo FallbackDNS=168.63.129.16 >> /etc/systemd/resolved.conf; systemctl restart systemd-resolved.service" --command-id RunShellScript --ids @-
That's an RBAC error: the user had no permission to see logs. If you have a user with cluster-admin permissions, you can fix this error with
kubectl create clusterrolebinding the-boss --user system:anonymous --clusterrole cluster-admin
Note: it is not a good idea to give an anonymous user the cluster-admin role, though it will fix the issue.
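If you go down this road anyway, a narrower sketch is to bind only the built-in view cluster role (which covers reading pod logs) instead of cluster-admin; still not great for an anonymous user, but far less dangerous:
kubectl create clusterrolebinding anonymous-view --clusterrole=view --user=system:anonymous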

openshift Crash Loop Back Off error with turbine-server

Hi, I created a project in OpenShift and attempted to add a turbine-server image to it. A Pod was added, but I keep receiving the following error in the logs. I am very new to OpenShift and I would appreciate any advice or suggestions on how to resolve this error. I can supply any further information that is required.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/booking/pods/turbine-server-2-q7v8l . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
How to diagnose
Make sure you have configured a service account, a role, and a role binding for that account, and that the service account is set in the pod spec:
spec:
  serviceAccountName: your-service-account
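As a rough sketch of the first two pieces, assuming the project from the question is booking and the account name is the your-service-account placeholder above; the built-in view role is just an example and may be broader than turbine actually needs:
oc create serviceaccount your-service-account -n booking
oc policy add-role-to-user view -z your-service-account -n booking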
Start monitoring the atomic-openshift-node service on the node where the pod is deployed, and the API server.
$ journalctl -b -f -u atomic-openshift-node
Run the pod and monitor the journald output; you should see "Forbidden".
Jan 28 18:27:38 <hostname> atomic-openshift-node[64298]:
logging error output: "Forbidden (user=system:serviceaccount:logging:appuser, verb=get, resource=nodes, subresource=proxy)"
This means the service account appuser does not have authorisation to do get on the nodes/proxy resource. Update the role to allow the verb "get" on that resource:
- apiGroups: [""]
  resources:
  - "nodes"
  - "nodes/status"
  - "nodes/log"
  - "nodes/metrics"
  - "nodes/proxy"    # <----
  - "nodes/spec"
  - "nodes/stats"
  - "namespaces"
  - "events"
  - "services"
  - "pods"
  - "pods/status"
  verbs: ["get", "list", "watch"]
Note that some resources are not in the default legacy "" API group, as discussed in Unable to list deployments resources using RBAC.
How to verify the authorisations
To verify who can execute a verb against a resource, for example the patch verb against pods:
$ oadm policy who-can patch pod
Namespace: default
Verb:      patch
Resource:  pods
Users:     auser
           system:admin
           system:serviceaccount:cicd:jenkins
Groups:    system:cluster-admins
           system:masters
OpenShift vs K8S
OpenShift has the oc policy and oadm policy commands:
oc policy add-role-to-user <role> <user-name>
oadm policy add-cluster-role-to-user <role> <user-name>
This is the same as a K8s role binding. You can use K8s RBAC, but the API version in OpenShift needs to be v1 instead of rbac.authorization.k8s.io/v1 as in K8s.
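For comparison, a rough K8s-style sketch of binding the rules shown earlier to the appuser service account from the journald output; the rules touch nodes, which are cluster-scoped, so they would live in a ClusterRole, and node-reader is a hypothetical name for it:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: appuser-node-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader    # hypothetical ClusterRole holding the rules shown above
subjects:
- kind: ServiceAccount
  name: appuser
  namespace: logging
On OpenShift, as noted above, you would adjust the apiVersion accordingly.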
References
Managing Authorization Policies
Using RBAC Authorization
User and Role Management
Hi, thank you for the replies. I was able to resolve the issue by executing the following commands using the oc command-line utility:
oc policy add-role-to-group view system:serviceaccounts -n <project>
oc policy add-role-to-group edit system:serviceaccounts -n <project>

kubectl not working from other host, but works fine from localhost

What happened:
I'm testing Kubernetes 1.9.0 in order to upgrade a production cluster, and I cannot access it with kubectl from another host.
I'm getting the following error:
pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default"
I tried with the admin user and with another user created earlier with a read-only role.
What you expected to happen:
This works fine on Kubernetes 1.5.
How to reproduce it (as minimally and precisely as possible):
I installed kubernetes 1.9.0 with kubeadm.
I can access the local cluster from the master with the following command:
kubectl --kubeconfig kubeconfig get pods
with server: https://127.0.0.1:6443
I added a rule on haproxy to redirect that port to another one, and I did some tests:
The old environment has a proxy configured so that all requests for https://example.org/api/k8s are redirected to the k8s API endpoint.
I configured this new environment with the same configuration, but it is not working. (Error: pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default")
I configured this new environment with a new DNS name, proxying in TCP mode and linking port 443 to 6443, but it is not working. (Error: pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default")
The kubeconfig file sets the server field as: https://k8s.example.org
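For reference, a minimal sketch of the TCP-mode haproxy setup described in the last test; it assumes haproxy runs on the master itself, so adjust the backend address to your API server. TCP passthrough is what lets the client certificate from the kubeconfig below reach the API server; if the proxy terminates TLS itself, the API server never sees the certificate and treats the request as system:anonymous.
frontend k8s-api
    bind *:443
    mode tcp
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    server master 127.0.0.1:6443 check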
Anything else we need to know?:
kubeconfig file (kubeconfig for admin user is similar):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ***
    server: https://127.0.0.1:6443
    #server: https://k.example.org
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: read_only
  name: read_only-context
current-context: read_only-context
kind: Config
preferences: {}
users:
- name: read_only
  user:
    as-user-extra: {}
    client-certificate: /etc/kubernetes/users/read_only/read_only.crt
    client-key: /etc/kubernetes/users/read_only/read_only.key
    username: read_only
Environment:
Kubernetes version (use kubectl version): 1.9.0
Cloud provider or hardware configuration: bare metal (in fact a VM on AWS)
OS (e.g. from /etc/os-release): Centos 7
Kernel (e.g. uname -a): 3.10.0-514.10.2.el7.x86_64
Install tools: kubeadm
Others:

forbidden returned when mounting the default tokens in HA kubernetes cluster

I have a problem with mounting the default tokens in Kubernetes; it no longer works for me. I wanted to ask directly before creating an issue on GitHub. My setup consists of basically an HA bare-metal cluster with manually deployed etcd (which includes the CA certs and keys). The deployments run and the nodes register; I just cannot deploy pods, always getting the error:
MountVolume.SetUp failed for volume "default-token-ddj5s" : secrets "default-token-ddj5s" is forbidden: User "system:node:tweak-node-1" cannot get secrets in the namespace "default": no path found to object
where tweak-node-1 is one of my node names and hostnames. I have found some similar issues:
- https://github.com/kubernetes/kubernetes/issues/18239
- https://github.com/kubernetes/kubernetes/issues/25828
but none came close to fixing my issue, as the issue was not the same. I only use the default namespace when trying to run pods, and I tried setting both RBAC and ABAC; both gave the same result. This is the template I use for deploying, showing the version and etcd config:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: IP1
  bindPort: 6443
authorizationMode: ABAC
kubernetesVersion: 1.8.5
etcd:
  endpoints:
  - https://IP1:2379
  - https://IP2:2379
  - https://IP3:2379
  caFile: /opt/cfg/etcd/pki/etcd-ca.crt
  certFile: /opt/cfg/etcd/pki/etcd.crt
  keyFile: /opt/cfg/etcd/pki/etcd.key
  dataDir: /var/lib/etcd
etcdVersion: v3.2.9
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- IP1
- IP2
- IP3
- DNS-NAME1
- DNS-NAME2
- DNS-NAME3
Your node must use credentials that match its Node API object name, as described in https://kubernetes.io/docs/admin/authorization/node/#overview
In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. This group and user name format match the identity created for each kubelet as part of kubelet TLS bootstrapping.
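A quick way to check which identity a kubelet actually presents is to inspect the subject of its client certificate. A sketch assuming a kubeadm-style layout; on older or manually provisioned setups the certificate may instead be embedded as client-certificate-data in /etc/kubernetes/kubelet.conf:
# On the node; expect something like: subject= O = system:nodes, CN = system:node:tweak-node-1
sudo openssl x509 -noout -subject -in /var/lib/kubelet/pki/kubelet-client-current.pem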
update
So, the specific solution: the problem was that I was using version 1.8.x and was copying the certs and keys manually, so each kubelet didn't have its own system:node binding or specific key, as specified in https://kubernetes.io/docs/admin/authorization/node/#overview:
RBAC Node Permissions In 1.8, the binding will not be created at all.
When using RBAC, the system:node cluster role will continue to be
created, for compatibility with deployment methods that bind other
users or groups to that role.
I fixed it in either of two ways:
1 - Using kubeadm join instead of copying the /etc/kubernetes files from master1
2 - After deployment, patching the clusterrolebinding for system:node:
kubectl patch clusterrolebinding system:node -p '{"apiVersion": "rbac.authorization.k8s.io/v1beta1", "kind": "ClusterRoleBinding", "metadata": {"name": "system:node"}, "subjects": [{"kind": "Group", "name": "system:nodes"}]}'

kubernetes: Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope even after granting permission

Even after granting cluster roles to the user, I get Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
I have the following set for the user in the ~/.kube/config file:
- context:
    cluster: kubernetes
    user: user@gmail.com
  name: user@kubernetes
and the below added to admin.yaml to create a cluster role and a cluster role binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: User
  name: nsp@gmail.com
roleRef:
  kind: ClusterRole
  name: admin-role
When I try the command, I still get the error:
kubectl --username=user@gmail.com get nodes
Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
Can someone please suggest how to proceed?
Your problem is not with your ClusterRoleBindings but rather with user authentication. Kubernetes tells you that it identified you as system:anonymous (which is similar to *NIX's nobody) and not nsp@example.com (to which you applied your binding).
In your specific case the reason for that is that the username flag uses HTTP Basic authentication and needs the password flag to actually do anything. But even if you did supply the password, you'd still need to actually tell the API server to accept that specific user.
Have a look at this part of the Kubernetes documentation which deals with different methods of authentication. For the username and password authentication to work, you'd want to look at the Static Password File section, but I would actually recommend you go with X509 Client Certs since they are more secure and are operationally much simpler (no secrets on the Server, no state to replicate between API servers).
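A rough sketch of the client-certificate route, assuming you can reach the cluster CA on a master node; the kubeadm default CA paths are used below, and the user name nsp and group devs are illustrative:
# Key and CSR for the user; the CN becomes the username and O the group
openssl genrsa -out nsp.key 2048
openssl req -new -key nsp.key -out nsp.csr -subj "/CN=nsp/O=devs"

# Sign the CSR with the cluster CA
sudo openssl x509 -req -in nsp.csr -CA /etc/kubernetes/pki/ca.crt \
  -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out nsp.crt -days 365

# Register the credentials and a context with kubectl
kubectl config set-credentials nsp --client-certificate=nsp.crt --client-key=nsp.key
kubectl config set-context nsp@kubernetes --cluster=kubernetes --user=nsp
The ClusterRoleBinding from the question would then need its subject name to match the certificate's CN.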
In my case I was receiving a nearly similar error, due to RBAC.
Error
root@k8master:~# kubectl cluster-info dump --insecure-skip-tls-verify=true
Error from server (Forbidden): nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Solution:
As a solution, I did the following to reconfigure my user to access the cluster:
cd $HOME
sudo whoami
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
After doing the above, when I query the cluster info I get the expected result:
root@k8master:~# kubectl cluster-info
Kubernetes master is running at https://192.168.10.15:6443
KubeDNS is running at https://192.168.10.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy