OpenShift Monitoring with REST API - Kubernetes

I am trying to use the OpenShift REST APIs to get the status of my cron jobs. I am the admin of my namespace, but I don't have cluster access, so I can't do anything at the cluster level.
Now, to get the status, I first create the role:
# oc create role podreader --verb=get --verb=list --verb=watch --resource=pods,cronjobs.batch,jobs.batch
role.rbac.authorization.k8s.io/podreader created
But when I try to add the role to a service account, it fails.
# oc create serviceaccount nagios
# oc policy add-role-to-user podreader system:serviceaccount:uc-immoscout-dev:nagios
Warning: role 'podreader' not found
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "podreader" not found
My main intention is to get the status of the cron jobs, jobs and pods which I am scheduling.

You'll have to add --role-namespace=<namespace-of-role> to the oc policy add-role-to-user command; otherwise the role is treated as a cluster role.
From the docs:
--role-namespace='': namespace where the role is located: empty means a role defined in cluster policy
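For reference, the binding that command creates corresponds to a RoleBinding whose roleRef points at a namespaced Role rather than a ClusterRole. A sketch of the equivalent manifest, assuming the namespace uc-immoscout-dev from the question and a hypothetical binding name:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: podreader-binding      # hypothetical name
  namespace: uc-immoscout-dev  # namespace assumed from the question
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role                   # a namespaced Role, not a ClusterRole
  name: podreader
subjects:
- kind: ServiceAccount
  name: nagios
  namespace: uc-immoscout-dev
```

Without --role-namespace, the roleRef kind defaults to ClusterRole, which explains the "clusterroles ... not found" error above.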

Related

Why can't I create pods as a user with enough permissions in Kubernetes?

I am following a tutorial regarding RBAC. I think I understand the main idea, but I don't get why this is failing:
kc auth can-i "*" pod/compute --as deploy@test.com
no
kc create clusterrole deploy --verb="*" --resource=pods --resource-name=compute
clusterrole.rbac.authorization.k8s.io/deploy created
kc create clusterrolebinding deploy --user=deploy@test.com --clusterrole=deploy
clusterrolebinding.rbac.authorization.k8s.io/deploy created
# this tells me that deploy@test.com should be able to create a pod named compute
kc auth can-i "*" pod/compute --as deploy@test.com
yes
# but it fails when trying to do so
kc run compute --image=nginx --as deploy@test.com
Error from server (Forbidden): pods is forbidden: User "deploy@test.com" cannot create resource "pods" in API group "" in the namespace "default"
The namespace name should be irrelevant AFAIK, since this is a ClusterRole.
Restricting the create permission to a specific resource name is not supported.
This is from the Kubernetes documentation:
Note: You cannot restrict create or deletecollection requests by resourceName. For create, this limitation is because the object name is not known at authorization time.
This means the ClusterRole you created doesn't allow you to create any Pod.
You need to have another ClusterRole assigned where you don't specify the resource name.
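A sketch of how the rules could be split, under the assumption that the name restriction is only wanted for verbs that act on an existing object (role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deploy  # hypothetical name
rules:
# create cannot be restricted by resourceNames: the object name
# is not known at authorization time, so this rule is unrestricted
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create"]
# the name restriction does work for verbs on existing objects
- apiGroups: [""]
  resources: ["pods"]
  resourceNames: ["compute"]
  verbs: ["get", "delete"]
```

With this split, `kc run compute ...` succeeds via the first rule, while get/delete remain limited to the pod named compute.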

Problem deploying K8s with GitLab Runner: getting an error

I deployed on a new cluster, and then I got this error even though I didn't change anything in the code. Has anybody seen it before?
Error from server (Forbidden): error when retrieving current configuration of:
"/builds/dropcunt/nettmoster.com/deployment/webapp.yml": ingresses.extensions "nettmoster.comn-273414" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "ingresses" in API group "extensions" in the namespace "nettmoster-com-9777808"
As suggested, I ran kubectl auth can-i --list --as=system:serviceaccount:gitlab-managed-apps:default
It returns:
This is an RBAC problem. The service account system:serviceaccount:gitlab-managed-apps:default does not have permission to get ingress resources in the new cluster.
You can compare what permissions a service account has by running the command below in both clusters:
kubectl auth can-i --list --as=system:serviceaccount:gitlab-managed-apps:default
Run the commands below to grant the permission via RBAC:
kubectl create role ingress-reader --verb=get,list,watch,update --resource=ingress
kubectl create rolebinding ingress-reader-role --role=ingress-reader --serviceaccount=gitlab-managed-apps:default
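The two commands above are roughly equivalent to manifests along these lines; the namespace is assumed from the error message, and the extensions API group is included because that is the group named in the error (on newer clusters Ingress lives in networking.k8s.io):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-reader
  namespace: nettmoster-com-9777808  # namespace assumed from the error message
rules:
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-reader-role
  namespace: nettmoster-com-9777808
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-managed-apps
```

Note that the Role and RoleBinding must be created in the namespace the deployment targets, not in gitlab-managed-apps.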

GCP K8s (kubectl) error message (Required "container.leases.get" permission)

I am getting an error message after running some kubectl commands (GCP command line - gcloud). I have a K8S cluster created in GKE.
Example:
kubectl describe node gke_k8s_cluster_name
Error from server (Forbidden): leases.coordination.k8s.io "gke_k8s_cluster_name" is forbidden: User "MY_SERVICE_ACCOUNT" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": Required "container.leases.get" permission.
The point is that the "container.leases.get" permission is not listed in IAM (as a custom permission or in a regular role).
How can I grant that permission to the service account in GCP?
thanks,
Jose
You may need to grant additional permissions to yourself on GCP IAM and GKE sides, for example:
PROJECT_ID=$(gcloud config get-value core/project)
USER_ID=$(gcloud config get-value core/account)
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=user:${USER_ID} --role=roles/container.admin
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user ${USER_ID}
See also GCP IAM & GKE RBAC integration.

What is the name of the role that allows one to use oc/kubectl port-forward?

I'd like to create a service account which is allowed to do oc port-forward on OpenShift Online (AKA kubectl port-forward on Kubernetes), but I can't for the life of me figure out which of the many roles I see in oc get clusterrole would permit that. (oc get role is empty.)
error: error upgrading connection: pods "minecraft-storeys-maker-40-ps85h" is forbidden: User "system:serviceaccount:learn-study:oc-port-forward-container" cannot create pods/portforward in the namespace "learn-study": User "system:serviceaccount:learn-study:oc-port-forward-container" cannot create pods/portforward in project "learn-study"
So based on this error message I've tried "pods/portforward", but no good:
oc policy add-role-to-user pods/portforward -z oc-port-forward-container
Error from server (BadRequest): Name parameter invalid: "pods/portforward": may not contain '/'
Also just "portforward" is no good:
oc policy add-role-to-user portforward -z oc-port-forward-container
Error from server (NotFound): rolebindings.authorization.openshift.io "portforward" not found
It's for https://github.com/OASIS-learn-study/oc-port-forward-container.
In OpenShift the edit and admin cluster roles should have create permissions on pods/portforward.
Therefore, a command such as oc policy add-role-to-user edit -z <your-sa-name> should add the permissions you need.
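If edit grants more than you want, a narrower Role covering only port-forwarding might look like the sketch below; the pods/portforward subresource with the create verb is exactly what the error message asks for, and the role name is hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: port-forwarder     # hypothetical name
  namespace: learn-study   # namespace from the error message
rules:
# port-forward first needs to look up the target pod
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# the actual port-forward operation is a create on this subresource
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]
```

Bind it with oc policy add-role-to-user port-forwarder -z oc-port-forward-container --role-namespace=learn-study so it is resolved as a namespaced Role.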

Kubernetes RBAC authentication for default user

I am using kops in AWS to create my Kubernetes cluster.
I have created a cluster with RBAC enabled via --authorization=RBAC as described here.
I am trying to use the default service account token to interact with the cluster and getting this error:
Error from server (Forbidden): User "system:serviceaccount:default:default" cannot list pods in the namespace "default". (get pods)
Am I missing a role or binding somewhere?
I think it is not a good idea to give the cluster-admin role to the default service account in the default namespace.
If you give cluster-admin access to the default service account in the default namespace, every app (pod) deployed in the cluster in the default namespace will be able to manipulate the cluster (delete system pods/deployments or do other bad things).
By default, the cluster-admin ClusterRole is given to the default service account in the kube-system namespace.
You can use that one for interacting with the cluster.
Try granting the cluster-admin role and try again:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:default
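Given the warning above about cluster-admin, a least-privilege alternative for this case (the service account only needs to list pods in default) would be a namespaced Role and RoleBinding; names here are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-lister  # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-lister-binding  # hypothetical name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-lister
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```

This fixes the "cannot list pods" error without letting pods in the default namespace manipulate the rest of the cluster.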