I'm trying to deploy a Java Spring project to my local Minikube using a GitLab CI pipeline, but I keep getting:
ERROR: Job failed (system failure): prepare environment: setting up credentials: secrets is forbidden: User "system:serviceaccount:maverick:default" cannot create resource "secrets" in API group "" in the namespace "maverick". Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
I've installed gitlab-runner in the "maverick" namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-runner
  namespace: maverick
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: maverick
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "get", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["pods/attach"]
    verbs: ["list", "get", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list", "get", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["list", "get", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner
  namespace: maverick
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: maverick
subjects:
  - namespace: maverick
    kind: ServiceAccount
    name: gitlab-runner
and these values for the runner's Helm chart:
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: ".... my token .... "
runners:
  privileged: false
  tags: k8s
  serviceAccountName: gitlab-runner
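(Note that the error above reports system:serviceaccount:maverick:default, i.e. the default service account, not gitlab-runner. A quick way to check which service account the runner's pods actually use and what it may do; a sketch, assuming direct kubectl access to the Minikube cluster:)

# Which service account do the runner pods actually run as?
kubectl get pods -n maverick -o custom-columns=NAME:.metadata.name,SA:.spec.serviceAccountName

# Can that account create secrets in the namespace?
kubectl auth can-i create secrets -n maverick --as=system:serviceaccount:maverick:default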
My .gitlab-ci.yml is like this:
docker-build-job:
  stage: docker-build
  image: $MAVEN_IMAGE
  script:
    - mvn jib:build -Djib.to.image=${CI_REGISTRY_IMAGE}:latest -Djib.to.auth.username=${CI_REGISTRY_USER} -Djib.to.auth.password=${CI_REGISTRY_PASSWORD}

deploy-job:
  image: alpine/helm:3.2.1
  stage: deploy
  tags:
    - k8s
  script:
    - helm upgrade ${APP_NAME} ./charts --install --values=./charts/values.yaml --namespace ${APP_NAME}
  rules:
    - if: $CI_COMMIT_BRANCH == 'master'
      when: always
And the charts folder has the deployment.yaml like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maverick
  namespace: maverick
spec:
  replicas: 1
  selector:
    matchLabels:
      app: maverick
  template:
    metadata:
      labels:
        app: maverick
    spec:
      containers:
        - name: maverick
          image: registry.gitlab.com/gfalco77/maverick:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8001
      imagePullSecrets:
        - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: maverick
spec:
  ports:
    - name: maverick
      port: 8001
      targetPort: 8001
      protocol: TCP
  selector:
    app: maverick
There's also a registry-credentials Secret, which I created according to https://chris-vermeulen.com/using-gitlab-registry-with-kubernetes/ and installed in the maverick namespace:
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
  namespace: maverick
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: .. base64 creds ..
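(For reference, a Secret of this type can also be generated with kubectl instead of hand-encoding the dockerconfigjson; a sketch with placeholder credentials:)

kubectl create secret docker-registry registry-credentials \
  --namespace maverick \
  --docker-server=registry.gitlab.com \
  --docker-username="<deploy-token-user>" \
  --docker-password="<deploy-token-password>"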
I can see the gitlab-runner role has create permission on API group "" for secrets, but it still seems it can't download the image from the registry, and I'm not sure what is wrong.
Thanks in advance
Problem solved by adding the following ClusterRole and ClusterRoleBinding, especially the second binding, whose subject is the "default" service account.
After this the job in GitLab continues and runs as the user system:serviceaccount:maverick:gitlab-runner, but it fails on something else I still need to figure out:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # Use a dedicated name; "cluster-admin" would clash with the built-in ClusterRole.
  name: gitlab-runner-admin
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "get", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["pods/attach"]
    verbs: ["list", "get", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list", "watch", "get", "create", "delete", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["list", "get", "watch", "create", "delete", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-runner-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-runner
    namespace: maverick
roleRef: # referring to the ClusterRole above
  kind: ClusterRole
  name: gitlab-runner-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # Must not reuse the name of the binding above, or the second apply overwrites the first.
  name: gitlab-runner-admin-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gitlab-runner-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: maverick
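With these in place, the grants can be verified by impersonating each service account with kubectl auth can-i (a sanity check, not part of the fix itself):

# Both should print "yes" once the ClusterRoleBindings are applied.
kubectl auth can-i create secrets -n maverick --as=system:serviceaccount:maverick:default
kubectl auth can-i create pods -n maverick --as=system:serviceaccount:maverick:gitlab-runner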
Related
Hi, I am trying to write a simple pipeline to delete some ECR images that clutter the repo. I want Jenkins to do it. I get this error:
An error occurred (AccessDeniedException) when calling the BatchDeleteImage operation: User: arn:aws:sts::~:assumed-role/~cluster-nodegr-NodeInstanceRole-~/i-~ is not authorized to perform: ecr:BatchDeleteImage on resource: arn:aws:ecr:~:~:repository/~ because no identity-based policy allows the ecr:BatchDeleteImage action
Jenkins is running on k8s. I used YAML similar to the following, in addition to other manifests, to get up and running:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
The pipeline looks like this:
pipeline {
    agent {
        kubernetes {
            inheritFrom 'jenkins-slave'
        }
    }
    stages {
        stage('test') {
            steps {
                sh '''aws ecr batch-delete-image \
                    --repository-name <repo-name> \
                    --image-ids imageDigest=<img digest>
                '''
            }
        }
    }
}
I tried to add this:
- apiGroups: ["ecr"]
  resources: ["*"]
  verbs: ["batchDeleteImage"]
  resourceNames:
    - "*"
but it didn't work.
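That rule can't work: a Kubernetes Role only governs the Kubernetes API, while ecr:BatchDeleteImage is an AWS IAM action, which is exactly what the AccessDeniedException above says ("no identity-based policy allows..."). The fix belongs on the IAM side. A sketch with the AWS CLI, where the role name is a placeholder for the truncated NodeInstanceRole in the error message:

# Placeholder: use the NodeInstanceRole name from the error message.
NODE_ROLE="<cluster-nodegroup-NodeInstanceRole>"

# Attach an inline policy allowing the delete action (scope Resource to the
# specific repository ARN rather than "*" where possible).
aws iam put-role-policy \
  --role-name "$NODE_ROLE" \
  --policy-name allow-ecr-batch-delete \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "ecr:BatchDeleteImage",
      "Resource": "*"
    }]
  }'

IAM roles for service accounts (IRSA) would be a cleaner alternative to widening the node role, since that scopes the permission to the Jenkins pod instead of every pod on the node.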
I'm very confused: I used an EC2 instance to bootstrap an EKS cluster and everything worked completely fine yesterday. I deleted that cluster last night and just spun up a new one, and now I'm getting this error when trying to build my Jenkins pod:
Error: stat /mnt/jenkins-store: no such file or directory
I find it strange that this error didn't show up yesterday, since I set everything up the exact same way today. That error is what I got when I described my Jenkins pod.
Here's my jenkins.yaml file for reference:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods","services"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
---
# This cluster role binding allows the jenkins service account to manage persistent volumes.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-crb
subjects:
  - kind: ServiceAccount
    namespace: default
    name: jenkins
roleRef:
  kind: ClusterRole
  name: jenkinsclusterrole
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: jenkinsclusterrole
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: default
spec:
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var
              subPath: jenkins_home
            - name: docker-sock-volume
              mountPath: "/var/run/docker.sock"
          imagePullPolicy: Always
      volumes:
        # This allows jenkins to use the docker daemon on the host, for running builds
        # see https://stackoverflow.com/questions/27879713/is-it-ok-to-run-docker-from-inside-docker
        - name: docker-sock-volume
          hostPath:
            path: /var/run/docker.sock
        - name: jenkins-home
          hostPath:
            path: /mnt/jenkins-store
      serviceAccountName: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: default
spec:
  type: NodePort
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
      nodePort: 31000
    - name: jnlp
      port: 50000
      targetPort: 50000
  selector:
    app: jenkins
Similar to this answer, check whether the EC2 instance spawned for your new cluster includes the volumes that jenkins.yaml expects on the host.
The lack of such a volume (here, the /mnt/jenkins-store directory on the node) would explain the Error: stat /mnt/jenkins-store message.
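A minimal way to confirm and fix this, assuming SSH access to the worker node (the hostname is a placeholder):

NODE="ec2-user@<worker-node-dns>"   # placeholder

# Does the directory the hostPath volume points at exist on the node?
ssh "$NODE" 'ls -ld /mnt/jenkins-store'

# If not, create it, or set "type: DirectoryOrCreate" on the hostPath in
# jenkins.yaml so the kubelet creates it automatically.
ssh "$NODE" 'sudo mkdir -p /mnt/jenkins-store'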
I have access to only one namespace inside the cluster, and even that is restricted.
kind: Role
kind: ClusterRole
kind: RoleBinding
kind: ClusterRoleBinding
are forbidden to me, so I'm not able to create the Kubernetes dashboard from the recommended YAML.
How do I get around this?
It's not possible to achieve this unless you ask someone with enough rights to create the objects for you.
Here is a sample manifest used to apply the dashboard to a cluster. As you can see, you have to be able to manage Role, ClusterRole, RoleBinding and ClusterRoleBinding objects to apply it.
So it's impossible to create the dashboard with the rights you have, as those kinds are essential in this case.
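You can confirm this yourself with kubectl auth can-i (assuming the dashboard's usual kubernetes-dashboard namespace):

# Each of these must return "yes" for the dashboard manifest to apply cleanly.
kubectl auth can-i create role -n kubernetes-dashboard
kubectl auth can-i create rolebinding -n kubernetes-dashboard
kubectl auth can-i create clusterrole
kubectl auth can-i create clusterrolebinding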
Here is the part affected by lack of your rights:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
I set up a Ceph cluster and mounted it manually with the sudo mount -t command, following the official documentation, and I checked the status of my Ceph cluster: no problems there. Now I am trying to mount my CephFS on Kubernetes, but my pod is stuck in ContainerCreating when I run the kubectl create command because the mount is failing. I have looked at many related problems/solutions online, but nothing works.
For reference, I am following this guide: https://medium.com/velotio-perspectives/an-innovators-guide-to-kubernetes-storage-using-ceph-a4b919f4e469
My setup consists of 5 AWS instances, and they are as follows:
Node 1: Ceph Mon
Node 2: OSD1 + MDS
Node 3: OSD2 + K8s Master
Node 4: OSD3 + K8s Worker1
Node 5: CephFS + K8s Worker2
Is it okay to stack K8s on the same instances as Ceph? I am pretty sure that is allowed, but if it is not, please let me know.
In the describe pod logs, this is the error/warning:
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30 --scope -- mount -t ceph -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=AQC4whxeqQ9ZERADD2nUgxxOktLE1OIGXThBmw== 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30
Output: Running scope as unit run-2382233.scope.
couldn't finalize options: -34
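(For comparison, the manual mount mentioned above has this general form, using the same monitor address that appears in the mount arguments; the admin key is a placeholder:)

sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 172.31.15.110:6789:/ /mnt/cephfs -o "name=admin,secret=<admin-key>"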
These are my .yaml files:
Provisioner:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
  - kind: ServiceAccount
    name: test-provisioner-dt
    namespace: test-dt
roleRef:
  kind: ClusterRole
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-pv
  namespace: test-dt
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.31.15.110:6789
  adminId: admin
  adminSecretName: ceph-secret-admin-dt
  adminSecretNamespace: test-dt
  claimRoot: /pvc-volumes
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-dt
spec:
  storageClassName: postgres-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
The output of kubectl get pv and kubectl get pvc shows the volumes are bound and claimed, with no errors.
The provisioner pod logs all show success, no errors.
Please help!
I want to access the Kubernetes Deployment objects via the API server.
I have a service account file, shown below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: name
  namespace: namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: name
  namespace: namespace
rules:
  - apiGroups: [""]
    resources: ["deployment"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["deployment/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["deployment/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: name
  namespace: namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: name
subjects:
  - kind: ServiceAccount
    name: name
I'm getting a 403 Forbidden error with the token owned by this service account while accessing the endpoint
/apis/apps/v1beta1/namespaces/namespace/deployments
All of your role's rules are for the core (empty) API group. However, the URL you're trying to access, /apis/apps/v1beta1, is in the "apps" API group (the part of the path after /apis). So, to access that particular API path, you need to change the role definition to:
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]   # resource names in RBAC rules are plural
    verbs: ["create","delete","get","list","patch","update","watch"]
  # and likewise for the other deployment subresources
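Once the updated Role is bound, you can verify it via impersonation and then hit the same API path directly (namespace and name are the placeholders from the question):

# Does the service account now have list access to Deployments in the apps group?
kubectl auth can-i list deployments.apps -n namespace --as=system:serviceaccount:namespace:name

# Same path the question uses, through kubectl's raw API client
# (on current clusters apps/v1beta1 is gone; use /apis/apps/v1 instead).
kubectl get --raw /apis/apps/v1beta1/namespaces/namespace/deployments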