Installing Tekton Triggers EventListener (for GitLab) on OpenShift leads to: error configmaps is forbidden: cannot get resource configmaps in API - kubernetes

We're working on the integration of GitLab and Tekton / OpenShift Pipelines via Webhooks and Tekton Triggers. We followed this example project and crafted our EventListener that ships with the needed Interceptor, TriggerBinding and TriggerTemplate as gitlab-push-listener.yml:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: gitlab-listener
spec:
  serviceAccountName: tekton-triggers-example-sa
  triggers:
    - name: gitlab-push-events-trigger
      interceptors:
        - name: "verify-gitlab-payload"
          ref:
            name: "gitlab"
            kind: ClusterInterceptor
          params:
            - name: secretRef
              value:
                secretName: "gitlab-secret"
                secretKey: "secretToken"
            - name: eventTypes
              value:
                - "Push Hook"
      bindings:
        - name: gitrevision
          value: $(body.checkout_sha)
        - name: gitrepositoryurl
          value: $(body.repository.git_http_url)
      template:
        spec:
          params:
            - name: gitrevision
            - name: gitrepositoryurl
            - name: message
              description: The message to print
              default: This is the default message
            - name: contenttype
              description: The Content-Type of the event
          resourcetemplates:
            - apiVersion: tekton.dev/v1beta1
              kind: PipelineRun
              metadata:
                generateName: buildpacks-test-pipeline-run-
                #name: buildpacks-test-pipeline-run
              spec:
                serviceAccountName: buildpacks-service-account-gitlab # Only needed if you set up authorization
                pipelineRef:
                  name: buildpacks-test-pipeline
                workspaces:
                  - name: source-workspace
                    subPath: source
                    persistentVolumeClaim:
                      claimName: buildpacks-source-pvc
                  - name: cache-workspace
                    subPath: cache
                    persistentVolumeClaim:
                      claimName: buildpacks-source-pvc
                params:
                  - name: IMAGE
                    value: registry.gitlab.com/jonashackt/microservice-api-spring-boot # This defines the name of output image
                  - name: SOURCE_URL
                    value: https://gitlab.com/jonashackt/microservice-api-spring-boot
                  - name: SOURCE_REVISION
                    value: main
As stated in the example (and in the Tekton docs), we also created and kubectl applied a ServiceAccount named tekton-triggers-example-sa, together with a RoleBinding and a ClusterRoleBinding:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-triggers-example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: triggers-example-eventlistener-binding
subjects:
  - kind: ServiceAccount
    name: tekton-triggers-example-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers-eventlistener-roles
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: triggers-example-eventlistener-clusterbinding
subjects:
  - kind: ServiceAccount
    name: tekton-triggers-example-sa
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers-eventlistener-clusterroles
Now we install our EventListener via kubectl apply -f gitlab-push-listener.yml, but neither a webhook from GitLab nor a manual curl triggers a PipelineRun as intended. Looking into the logs of the el-gitlab-listener Deployment and Pod, we see the following error:
kubectl logs el-gitlab-listener-69f4c5c8f8-t4zdj
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:116","msg":"Successfully created the logger."}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:117","msg":"Logging level set to: info"}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","caller":"logging/config.go:79","msg":"Fetch GitHub commit ID from kodata failed","error":"\"KO_DATA_PATH\" does not exist or is empty"}
{"level":"info","ts":"2021-11-30T09:38:32.444Z","logger":"eventlistener","caller":"logging/logging.go:46","msg":"Starting the Configuration eventlistener","knative.dev/controller":"eventlistener"}
{"level":"info","ts":"2021-11-30T09:38:32.445Z","logger":"eventlistener","caller":"profiling/server.go:64","msg":"Profiling enabled: false","knative.dev/controller":"eventlistener"}
{"level":"fatal","ts":"2021-11-30T09:38:32.451Z","logger":"eventlistener","caller":"eventlistenersink/main.go:104","msg":"Error reading ConfigMap config-observability-triggers","knative.dev/controller":"eventlistener","error":"configmaps \"config-observability-triggers\" is forbidden: User \"system:serviceaccount:default:tekton-triggers-example-sa\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"tekton-triggers-eventlistener-clusterroles\" not found, clusterrole.rbac.authorization.k8s.io \"tekton-triggers-eventlistener-roles\" not found]","stacktrace":"main.main\n\t/opt/app-root/src/go/src/github.com/tektoncd/triggers/cmd/eventlistenersink/main.go:104\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:203"}

The OpenShift Pipelines documentation does not state this directly. But if you skim the docs, especially the Triggers section, you might notice that no ServiceAccount is ever created, yet one is used by every Trigger component: it's called pipeline. Simply run kubectl get serviceaccount to see it:
$ kubectl get serviceaccount
NAME       SECRETS   AGE
default    2         49d
deployer   2         49d
pipeline   2         48d
This pipeline ServiceAccount is ready to use inside your Tekton Triggers & EventListeners. So your gitlab-push-listener.yml can directly reference it:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: gitlab-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: gitlab-push-events-trigger
      interceptors:
      ...
You can simply delete your manually created ServiceAccount tekton-triggers-example-sa. It's not needed in OpenShift Pipelines! Now your Tekton Triggers EventListener should work and trigger your Tekton Pipelines as defined.
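If you want to double-check before redeploying, you can ask the API server whether the pipeline ServiceAccount may read ConfigMaps, which is exactly the call the EventListener fails on above (a quick sanity check, assuming your project is the default namespace; adjust as needed):

kubectl auth can-i get configmaps \
  --as=system:serviceaccount:default:pipeline -n default

It should answer yes, while the same check against tekton-triggers-example-sa should come back no, matching the error log above.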

Related

I am getting permission issue (cannot create resource "Job" in API group "batch") while creating jobs via sensors of argo-events

I am trying to trigger a job creation from a sensor but I am getting the error below:
Job.batch is forbidden: User \"system:serviceaccount:samplens:sample-sa\" cannot create resource \"Job\" in API group \"batch\" in the namespace \"samplens\"","errorVerbose":"timed out waiting for the condition: Job.batch is forbidden: User \"system:serviceaccount:samplens:sample-sa\" cannot create resource \"Job\" in API group \"batch\" in the namespace \"samplens\"\nfailed to execute trigger\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).triggerOne\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:328\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:269\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","triggerName":"sample-job","triggeredBy":["payload"],"triggeredByEvents":["38333939613965312d376132372d343262302d393032662d663731393035613130303130"],"stacktrace":"github.com/argoproj/argo-events/sensors.(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:271\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181"}
Although I have created a serviceaccount, role and rolebinding.
Here is my serviceaccount creation file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sample-sa
  namespace: samplens
Here is my rbac.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sample-role
  namespace: samplens
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - create
      - delete
      - get
      - watch
      - patch
  - apiGroups:
      - "batch"
    resources:
      - jobs
    verbs:
      - create
      - delete
      - get
      - watch
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sample-role-binding
  namespace: samplens
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sample-role
subjects:
  - kind: ServiceAccount
    name: sample-sa
    namespace: samplens
and here is my sensor.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  template:
    serviceAccountName: sample-sa
  dependencies:
    - name: payload
      eventSourceName: webhook
      eventName: devops-toolkit
  triggers:
    - template:
        name: sample-job
        k8s:
          group: batch
          version: v1
          resource: Job
          operation: create
          source:
            resource:
              apiVersion: batch/v1
              kind: Job
              metadata:
                name: samplejob-crypto
                annotations:
                  argocd.argoproj.io/hook: PreSync
                  argocd.argoproj.io/hook-delete-policy: HookSucceeded
              spec:
                ttlSecondsAfterFinished: 100
                serviceAccountName: sample-sa
                template:
                  spec:
                    serviceAccountName: sample-sa
                    restartPolicy: OnFailure
                    containers:
                      - name: sample-crypto-job
                        image: docker.artifactory.xxx.com/abc/def/yyz:master-b1b347a
Sensor is getting triggered correctly but is failing to create the job.
Can someone please help, what am I missing?
Posting this as community wiki for better visibility, feel free to edit and expand it.
The original issue was resolved by adjusting the Role and granting * verbs, which means the Argo sensor in fact requires more permissions.
This is a working solution for a testing environment; for production, RBAC should follow the principle of least privilege.
How to test RBAC
There's a kubectl syntax which allows you to test whether RBAC (service account + role + rolebinding) was set up as expected.
Below is an example of how to check whether SERVICE_ACCOUNT_NAME in NAMESPACE can create jobs in namespace NAMESPACE:
kubectl auth can-i --as=system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT_NAME create jobs -n NAMESPACE
The answer will be simple: yes or no.
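Applied to this question's ServiceAccount and namespace, the check would look like this:

kubectl auth can-i --as=system:serviceaccount:samplens:sample-sa create jobs -n samplens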
Useful links:
Using RBAC authorization
Checking API access
Just ran into the same issue in argo-events. Hopefully this gets fixed in the near future, or at least gets some better documentation.
Change the following value in your sensor.yaml:
spec.triggers[0].template.k8s.resource: jobs
The relevant documentation (at this moment) seems to be pointing to some old Kubernetes API v1.13 documentation, so I've no idea why this needs to be written in the plural "jobs", but this solved the issue for me.
In the example trigger, where a Pod is triggered, the value "pods" is used for the same field, which pointed me in the right direction.
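For reference, this is roughly what the fixed fragment of the sensor.yaml above looks like, the only change being the plural resource name:

triggers:
  - template:
      name: sample-job
      k8s:
        group: batch
        version: v1
        resource: jobs   # plural resource name instead of "Job"
        operation: create
        source:
          resource:
            # ... unchanged Job manifest from the question ...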

Setting up a Kubernetes Cronjob: Cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"

I've tried to set up a cron job to patch a horizontal pod autoscaler, but the job returns horizontalpodautoscalers.autoscaling "my-web-service" is forbidden: User "system:serviceaccount:staging:sa-cron-runner" cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"
I've tried setting up all the role permissions as below:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: staging
  name: cron-runner
rules:
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cron-runner
  namespace: staging
subjects:
  - kind: ServiceAccount
    name: sa-cron-runner
    namespace: staging
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: staging
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scale-up-job
  namespace: staging
spec:
  schedule: "16 20 * * 1-6"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
            - name: scale-up-job
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl patch hpa my-web-service --patch '{"spec":{"minReplicas":6}}'
          restartPolicy: OnFailure
kubectl auth can-i patch horizontalpodautoscalers -n staging --as sa-cron-runner also returns no, so the permissions can't be set up correctly.
How can I debug this? I can't see how this is setup incorrectly
I think the problem is that the CronJob needs to first get the resource and then patch it. So you need to explicitly specify the permission for get in the Role spec.
The error also points to an authorization problem when getting the resource:
horizontalpodautoscalers.autoscaling "my-web-service" is forbidden: User "system:serviceaccount:staging:sa-cron-runner" cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"
Try modifying the verbs part of the Role as:
verbs: ["patch", "get"]
You could also include list and watch, depending on the requirements of the script that is run inside the CronJob.
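Put together, a minimal sketch of the adjusted Role from the question would be:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: staging
  name: cron-runner
rules:
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["get", "patch"]   # add "list"/"watch" here if the script needs them

Also note that kubectl auth can-i needs the full ServiceAccount identity, e.g. --as=system:serviceaccount:staging:sa-cron-runner; with a bare --as sa-cron-runner it checks a plain user of that name instead of the ServiceAccount.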

Receiving error when calling Kubernetes API from the Pod

I have a simple .NET Standard (4.7.2) application that is containerized. It has a method to list all namespaces in a cluster. I used the csharp kubernetes client to interact with the API. According to the official documentation, the default credentials for the API server are made available inside a pod and used to communicate with the API server, but while calling the Kubernetes API from the pod, I'm getting the following error:
Operation returned an invalid status code 'Forbidden'
My deployment yaml is very minimal:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-dotnetstdk8stest
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: cmd-dotnetstdk8stest
      image: eddyuk/dotnetstdk8stest:1.0.8-cmd
      ports:
        - containerPort: 80
I think you have RBAC activated inside your cluster. You need to assign a ServiceAccount to your pod that is bound to a Role which allows the ServiceAccount to get a list of all namespaces. When no ServiceAccount is specified in the pod template, the namespace's default ServiceAccount will be assigned to the pods running in this namespace.
First, you should create the Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: <YOUR-NAMESPACE>
  name: namespace-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["namespaces"] # Resource is namespaces
    verbs: ["get", "list"] # Allows this role to get and list namespaces
Create a new ServiceAccount inside your Namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-sa
  namespace: <YOUR-NAMESPACE>
Bind the newly created Role to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-namespace-listing
  namespace: <YOUR-NAMESPACE>
subjects:
  - kind: ServiceAccount
    name: application-sa # Your newly created ServiceAccount
    namespace: <YOUR-NAMESPACE>
roleRef:
  kind: Role
  name: namespace-reader # Your newly created Role
  apiGroup: rbac.authorization.k8s.io
Assign the new Role to your Pod by adding a ServiceAccount to your Pod Spec:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: <YOUR-NAMESPACE>
spec:
  serviceAccountName: application-sa
You can read more about RBAC in the official docs. You may also prefer to use kubectl commands instead of YAML definitions.
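As a rough sketch, the same objects can be created imperatively with kubectl (the namespace placeholder and names simply mirror the YAML above):

kubectl create serviceaccount application-sa -n <YOUR-NAMESPACE>
kubectl create role namespace-reader --verb=get --verb=list \
  --resource=namespaces -n <YOUR-NAMESPACE>
kubectl create rolebinding allow-namespace-listing --role=namespace-reader \
  --serviceaccount=<YOUR-NAMESPACE>:application-sa -n <YOUR-NAMESPACE>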

secrets can be seen as a env variable even if RBAC policies are configured

I set up a Secret and two ServiceAccounts (access-sa and no-access-sa) in the test namespace in Kubernetes.
Then I bound them via RoleBindings to the appropriate roles (access-cr and no-access-cr), where one has access to secrets in the test namespace and the other does not.
I created two pods (access-pod and no-access-pod), one using access-sa and the other using no-access-sa, each with a shell script passed to command that prints the environment variables.
The question is why the pod logs show the secret for no-access-pod even when the RBAC policy is configured to not have access to secrets.
apiVersion: v1
kind: Secret
metadata:
  namespace: test
  name: api-access-secret
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=
---
# Service account for preventing API access
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: test
  name: no-access-sa
---
# Service account for accessing secrets API
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: test
  name: secret-access-sa
---
# A role with no access
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: no-access-cr
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: [""]
    verbs: [""]
---
# A role for reading/listing secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: secret-access-cr
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["secrets", "pods"]
    verbs: ["get", "watch", "list"]
---
# The role binding to combine the no-access service account and role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: no-access-rb
subjects:
  - kind: ServiceAccount
    name: no-access-sa
roleRef:
  kind: Role
  name: no-access-cr
  apiGroup: rbac.authorization.k8s.io
---
# The role binding to combine the secret-access service account and role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: secret-access-rb
subjects:
  - kind: ServiceAccount
    name: secret-access-sa
roleRef:
  kind: Role
  name: secret-access-cr
  apiGroup: rbac.authorization.k8s.io
---
# Create a pod with the no-access service account
kind: Pod
apiVersion: v1
metadata:
  namespace: test
  name: no-access-pod
spec:
  serviceAccountName: no-access-sa
  containers:
    - name: no-access-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c" ]
      args:
        - while true; do
          env;
          done
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            secretKeyRef:
              name: api-access-secret
              key: username
---
# Create a pod with the secret-access service account
kind: Pod
apiVersion: v1
metadata:
  namespace: test
  name: secret-access-pod
spec:
  serviceAccountName: secret-access-sa
  containers:
    - name: access-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c" ]
      args:
        - while true; do
          env;
          done
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            secretKeyRef:
              name: api-access-secret
              key: username
In both cases I am able to see the value of SPECIAL_LEVEL_KEY as admin:
SPECIAL_LEVEL_KEY=admin
Please be aware that the authorization process (RBAC) applies first to you, as a cluster operator using a client tool (kubectl). It simply verifies whether you are authorized to create the resource objects specified in the manifest files you shared. In your case that includes checking whether you can perform the 'get' action on the 'secrets' resource, you, not any of the ServiceAccounts you declared. The secret value injected as an environment variable is resolved by the kubelet when it starts the pod, so it never goes through the pod ServiceAccount's RBAC rules.
If you want to verify whether your RBAC policies are working as intended from inside the pod, just follow the instructions on accessing the Kubernetes API from inside the cluster and query the following API URI: '/api/v1/namespaces/test/secrets'
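A minimal sketch of such a check from inside either pod, assuming an image that ships curl (the busybox image above would need wget or another image) and using the ServiceAccount token that Kubernetes mounts automatically:

# Query the secrets API with the pod's own ServiceAccount credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/test/secrets

Run from secret-access-pod this should list the secret; from no-access-pod it should return 403 Forbidden, which is where the RBAC rules actually apply.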

.kube/config how to make it available to a rest service deployed in kubernetes

What's the best approach to provide a .kube/config file to a REST service deployed on Kubernetes?
This will enable my service to (for example) use the kubernetes client API.
Create service account:
kubectl create serviceaccount example-sa
Create a role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
Create role binding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
  namespace: default
subjects:
  - kind: "ServiceAccount"
    name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
Create a pod using example-sa:
kind: Pod
apiVersion: v1
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
    - name: secret-access-container
      image: example-image
The most important line in the pod definition is serviceAccountName: example-sa. After creating the service account and adding this line to your pod's definition, you will be able to access your API access token at /var/run/secrets/kubernetes.io/serviceaccount/token.
Here you can find a little bit more detailed version of the above example.
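In fact, with the ServiceAccount credentials mounted at that path, the pod usually doesn't need a .kube/config at all: most Kubernetes client libraries (and kubectl itself) fall back to this in-cluster configuration automatically. A quick sketch, assuming the container image ships kubectl:

# Inside example-pod: with no ~/.kube/config present, kubectl picks up the mounted
# ServiceAccount token and CA plus the KUBERNETES_SERVICE_HOST/PORT env vars.
kubectl get pods -n default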