Unable to run manifest file in Kubernetes with jenkins user

I used Ansible to create my K8s cluster on an Ubuntu machine, and made the ubuntu user passwordless to do so.
Now I am unable to apply manifest files to my Kubernetes cluster as the jenkins user; I get the error below.
[Pipeline] sh
kubectl apply -f deployment.yml -f service.yml
Error from server (Forbidden): window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');
I tried adding the RBAC objects below, but I still get the same error.
ubuntu@controlpanel:~$ cat role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: jenkins
  name: jenkins
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
ubuntu@controlpanel:~$ cat sa.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myaccount
  namespace: jenkins
ubuntu@controlpanel:~$ cat rolbin.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: myaccount
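Two things worth checking. First, the "Forbidden" body above is actually an HTML/JavaScript login redirect, which suggests kubectl run as the jenkins user is not reaching the API server with valid credentials at all (for example, no kubeconfig set up for that user). Second, the RoleBinding above has no metadata.namespace and its ServiceAccount subject has no namespace, so as written it does not bind myaccount in the jenkins namespace. Once that is fixed, a quick way to test what the service account can actually do:
# Should print "yes" if the Role and RoleBinding are in effect in the
# jenkins namespace; "no" means the binding did not take.
kubectl auth can-i create deployments \
  --as=system:serviceaccount:jenkins:myaccount -n jenkins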

Related

Shell or bash in pods in Kubernetes

How do you disable shell or bash access to the containers in a pod? I do not want anyone to get inside the pod via kubectl exec, docker exec, or k9s.
kubectl is a CLI tool: it connects to the K8s API server and authenticates there.
You can restrict a user through their Role, so RBAC with the proper permissions will resolve your issue.
Ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Example:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-role
  namespace: default
rules:
  # RBAC is allow-only: exec and attach are disabled simply by never granting
  # the pods/exec and pods/attach subresources. List the resources you do want
  # to expose rather than using "*", which would match those subresources too.
  - apiGroups: ["", "extensions", "apps"]
    resources: ["pods", "pods/log", "deployments", "replicasets", "services"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-view
  namespace: default
subjects:
  - kind: ServiceAccount
    name: serviceaccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-user-role
Check the access with:
kubectl auth can-i create pods --subresource=exec \
  --as=system:serviceaccount:default:serviceaccount

How to write a psp in k8s only for a specific user?

minikube start \
  --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy \
  --addons=pod-security-policy
We have a default namespace in which the nginx-sa service account does not have the rights to launch the nginx container. When creating a pod, I use the command
kubectl run nginx --image=nginx -n default --as system:serviceaccount:default:nginx-sa
and as a result I get the error
Error: container has runAsNonRoot and image will run as root (pod: "nginx_default(49e939b0-d238-4e04-a122-43f4cfabea22)", container: nginx)
As I understand it, I need to write a PSP that allows the nginx-sa service account to run the container, but I do not understand how to write it correctly for a specific service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-sa-role
  namespace: default
rules:
  - apiGroups: ["extensions", "apps", ""]
    resources: ["deployments", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-sa-role-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nginx-sa
    namespace: default
roleRef:
  kind: Role
  name: nginx-sa-role
  apiGroup: rbac.authorization.k8s.io
Once you have your special PSP ready for nginx, you can grant nginx-sa the right to use it like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-to-use-special-psp
rules:
  - apiGroups:
      - policy
    resourceNames:
      - special-psp-for-nginx
    resources:
      - podsecuritypolicies
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-to-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: role-to-use-special-psp
subjects:
  - kind: ServiceAccount
    name: nginx-sa
    namespace: default
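The answer leaves the PSP itself to you. A minimal sketch of what special-psp-for-nginx could look like, assuming the only rule you need to relax is the runAsNonRoot check that produced the error above (PodSecurityPolicy uses the policy/v1beta1 API and was removed in Kubernetes 1.25):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: special-psp-for-nginx
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny          # the stock nginx image starts as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'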

Forbidden error while describe/scale deployment by user system:node:ip.xx

I'm trying to execute kubectl commands from inside a container (name: autodeploy).
I have configured a ClusterRole, a ServiceAccount, and a ClusterRoleBinding, but I get a Forbidden error when performing describe and scale actions on K8s Deployments.
Error from server (Forbidden): deployments.apps "test-deployment" is
forbidden: User "system:node:ip-xx-xx-xx-xx.ec2.internal" cannot get
resource "deployments" in API group "apps" in the namespace "abc"
The autodeploy container is also in the same namespace, abc.
ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: autodeploy
rules:
  - apiGroups: ["*"]
    resources: ["deployments", "deployments/scale", "pods"]
    verbs: ["get", "list", "update"]
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: autodeploy
  namespace: abc
ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: autodeploy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: autodeploy
subjects:
  - kind: ServiceAccount
    name: autodeploy
    namespace: abc
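Note that the error reports the user as system:node:ip-xx-xx-xx-xx.ec2.internal rather than system:serviceaccount:abc:autodeploy, i.e. kubectl inside the container is not presenting the ServiceAccount token. You can confirm the RBAC objects themselves are sufficient with an impersonated check:
# If this prints "yes", the ClusterRole and binding are fine and the problem
# is the credentials kubectl picks up inside the autodeploy container.
kubectl auth can-i update deployments --subresource=scale \
  --as=system:serviceaccount:abc:autodeploy -n abc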

How to execute shell script from POD A to another POD B

So, as you said, I created another pod of kind: Job and included script.sh in it.
In script.sh I run "kubectl exec" against the main pod to run a few commands.
The script gets executed, but I get the error "cannot create resource "pods/exec" in API group".
So I created a ClusterRole with resources: ["pods/exec"] and bound it to the default service account using a ClusterRoleBinding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-account-role-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
In the Job pod I include the service account as shown below:
  restartPolicy: Never
  serviceAccountName: default
but I still get the same error. What am I doing wrong here?
Error from server (Forbidden): pods "mongo-0" is forbidden: User "system:serviceaccount:default:default" cannot create resource "pods/exec" in API group "" in the namespace "default"
If this is something that needs to run regularly for maintenance, look into the Kubernetes DaemonSet object.
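Before re-running the Job, you can also confirm that the binding above actually took effect for the default/default service account from the question:
# Prints "yes" once the ClusterRoleBinding grants create on pods/exec.
kubectl auth can-i create pods --subresource=exec \
  --as=system:serviceaccount:default:default -n default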

Read only kubernetes user

I'm trying to create a read-only user. I want the user to be able to list nodes and pods and view the dashboard. I got the certs created and can connect, but I'm getting the following error.
$ kubectl --context minikube-ro get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "erst-operation" cannot list pods at the cluster scope
My cluster role...
$ cat helm/namespace-core/templates/pod-reader-cluster-role.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: '*'
  name: pod-reader
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
My cluster role binding...
$ cat helm/namespace-core/templates/pod-reader-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: erst-operation
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
I'm aware the above shouldn't grant permission to see the dashboard, but how do I get it to at least list the pods?
Your ClusterRole should include the core API group (the empty string ""), since the pods resource is in the core group, not in extensions or apps.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader   # ClusterRoles are cluster-scoped, so no namespace field
rules:
  - apiGroups: ["extensions", "apps", ""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]