I have an application that uses a database. I want to set up a GitLab CI/CD pipeline to deploy my app to a Kubernetes cluster. My issue right now is that I can't seem to get persistent storage to work. My thought process is as follows:
Create a PersistentVolume -> Create a PersistentVolumeClaim -> Mount that PVC into the pod running my database
I am running into the issue that a PV is a cluster-wide object, so GitLab can't seem to create one. Even if I create a PV before deployment, GitLab only allows me to work with objects within a specific namespace, which means the PVC won't see the PV I created when my pipeline runs.
manifest.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sql0001
  labels:
    type: amazoneEBS
spec:
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: <volume ID>
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sql-pvc
  labels:
    type: amazoneEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  selector:
    matchLabels:
      type: "amazoneEBS"
kubectl Error
kubectl apply -f manifest.yaml
persistentvolumeclaim/sql-pvc created
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "pv-sql0001", Namespace: ""
from server for: "manifest.yaml": persistentvolumes "pv-sql0001" is forbidden: User "system:serviceaccount:namespace:namespace-service-account" cannot get resource "persistentvolumes" in API group "" at the cluster scope
I tried what was recommended in Rakesh Gupta's post, but I am still getting the same error, unless I am misunderstanding something.
eddy@DESKTOP-1MHAKBA:~$ kubectl describe ClusterRole stateful-site-26554211-CR --namespace=stateful-site-26554211-pr
Name:         stateful-site-26554211-CR
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                      Non-Resource URLs  Resource Names  Verbs
  ---------                      -----------------  --------------  -----
  namespaces                     []                 []              [list watch create]
  nodes                          []                 []              [list watch create]
  persistentvolumes              []                 []              [list watch create]
  storageclasses.storage.k8s.io  []                 []              [list watch create]
eddy@DESKTOP-1MHAKBA:~$ kubectl describe ClusterRoleBinding stateful-site-26554211-CRB --namespace=stateful-site-26554211-production
Name:         stateful-site-26554211-CRB
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  stateful-site-26554211-CR
Subjects:
  Kind            Name                                                Namespace
  ----            ----                                                ---------
  ServiceAccount  stateful-site-26554211-production-service-account  stateful-site-26554211-production
Any insight into how I should do this would be appreciated. I might just be doing this all wrong, and maybe there is a better way. I will be around to answer any questions.
You need to create a ServiceAccount, a ClusterRole and a ClusterRoleBinding, because PVs, Nodes and StorageClasses are cluster-scoped objects (a PVC, by contrast, is namespaced).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <name of your cluster role>
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - persistentvolumes
      - namespaces
    verbs: ["list", "watch", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources:
      - storageclasses
    verbs: ["list", "watch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <name of your cluster role binding>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <name of your cluster role which should be matched with the previous one>
subjects:
  - kind: ServiceAccount
    name: <service account name>
    namespace: <namespace of the service account>   # required for ServiceAccount subjects
Reference: https://stackoverflow.com/a/60617584/2777988
If this does not work, you may try removing the PersistentVolume section from your YAML. It looks like your setup doesn't allow PersistentVolume creation; however, the PVC may in turn create a PV for you via dynamic provisioning.
Related
I want to create a Kubernetes CronJob that deletes resources (Namespace, ClusterRole, ClusterRoleBinding) that may be left over. Initially, the criteria will be "has label=Something" and "is older than 30 minutes". (Each namespace contains resources for a test run.)
I created the CronJob, a ServiceAccount, a ClusterRole, a ClusterRoleBinding, and assigned the service account to the pod of the cronjob.
The cronjob uses an image that contains kubectl, and some script to select the correct resources.
My first draft looks like this:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  labels:
    app: my-app
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-app
  namespace: default
  labels:
    app: my-app
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    # job spec
    spec:
      template:
        # pod spec
        spec:
          serviceAccountName: my-app
          restartPolicy: Never
          containers:
            - name: my-app
              image: image-with-kubectl
              env:
                - name: MINIMUM_AGE_MINUTES
                  value: '2'
              command: [sh, -c]
              args:
                # final script is more complex than this
                - |
                  kubectl get namespaces
                  kubectl get clusterroles
                  kubectl get clusterrolebindings
                  kubectl delete Namespace,ClusterRole,ClusterRoleBinding --all-namespaces --selector=bla=true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-app
  labels:
    app: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-app
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: default
    apiGroup: ""
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-app
  labels:
    app: my-app
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - clusterroles
      - clusterrolebindings
    verbs: [list, delete]
The cronjob is able to list and delete namespaces, but not cluster roles or cluster role bindings. What am I missing?
(Actually, I'm testing this with a Job first, before moving to a CronJob):
NAME              STATUS   AGE
cattle-system     Active   16d
default           Active   16d
fleet-system      Active   16d
gitlab-runner     Active   7d6h
ingress-nginx     Active   16d
kube-node-lease   Active   16d
kube-public       Active   16d
kube-system       Active   16d
security-scan     Active   16d
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
You need to change your ClusterRole like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-app
  labels:
    app: my-app
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs: [list, delete]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterroles
      - clusterrolebindings
    verbs: [list, delete]
The resources are now in the right apiGroup.
So, like you said, I created another pod of kind: Job and included the script.sh.
In the script.sh file, I run "kubectl exec" against the main pod to run a few commands.
The script gets executed, but I get the error "cannot create resource pods/exec in API group".
So I created a ClusterRole with resources: ["pods/exec"] and bound it to the default service account using a ClusterRoleBinding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-account-role-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
In the pod, which is of kind: Job, I include the service account as shown below:
restartPolicy: Never
serviceAccountName: default
But I still get the same error. What am I doing wrong here?
Error from server (Forbidden): pods "mongo-0" is forbidden: User "system:serviceaccount:default:default" cannot create resource "pods/exec" in API group "" in the namespace "default"
If this is something that needs to be run regularly for maintenance, look into the Kubernetes DaemonSet object.
I have a simple .NET Standard (4.7.2) application that is containerized. It has a method to list all namespaces in a cluster. I used the C# Kubernetes client to interact with the API. According to the official documentation, default credentials for the API server are mounted into the pod and used to communicate with the API server, but when calling the Kubernetes API from the pod I get the following error:
Operation returned an invalid status code 'Forbidden'
My deployment yaml is very minimal:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-dotnetstdk8stest
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: cmd-dotnetstdk8stest
      image: eddyuk/dotnetstdk8stest:1.0.8-cmd
      ports:
        - containerPort: 80
I think you have RBAC activated in your cluster. You need to assign a ServiceAccount to your pod that is bound to a Role which allows it to list namespaces. When no ServiceAccount is specified in the pod template, the namespace's default ServiceAccount is assigned to the pods running in that namespace.
First, create the Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: <YOUR NAMESPACE>
  name: namespace-reader
rules:
  - apiGroups: [""]             # "" indicates the core API group
    resources: ["namespaces"]   # the resource is namespaces
    verbs: ["get", "list"]      # allow this role to get and list namespaces
Create a new ServiceAccount inside your namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-sa
  namespace: <YOUR-NAMESPACE>
Bind your newly created Role to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-namespace-listing
  namespace: <YOUR-NAMESPACE>
subjects:
  - kind: ServiceAccount
    name: application-sa        # your newly created ServiceAccount
    namespace: <YOUR-NAMESPACE>
roleRef:
  kind: Role
  name: namespace-reader        # your newly created Role
  apiGroup: rbac.authorization.k8s.io
Assign the new Role to your pod by adding the ServiceAccount to your pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: <YOUR-NAMESPACE>
spec:
  serviceAccountName: application-sa
You can read more about RBAC in the official docs. You may prefer to use kubectl commands instead of YAML definitions.
I'm trying to create an RBAC Role / rules for a service that needs a persistent volume, and it's still failing with a forbidden error.
Here is my role config:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-full-access
  namespace: logdrop
rules:
  - apiGroups: ["", "extensions", "apps", "autoscaling"]
    resources: ["*"]
    verbs: ["*"]
  - apiGroups: ["batch"]
    resources:
      - jobs
      - cronjobs
    verbs: ["*"]
And this is my cut down PersistentVolume manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logdrop-pv
  namespace: logdrop
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: logdrop
    name: logdrop-pvc
  hostPath:
    path: /efs/logdrop/logdrop-pv
When I try to apply it I get a forbidden error.
$ kubectl --kubeconfig ~/logdrop/kubeconfig-logdrop.yml apply -f pv-test.yml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "logdrop-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"logdrop-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"10Gi"] "claimRef":map["name":"logdrop-pvc" "namespace":"logdrop"] "hostPath":map["path":"/efs/logdrop/logdrop-pv"] "persistentVolumeReclaimPolicy":"Retain"]]}
from server for: "pv-test.yml": persistentvolumes "logdrop-pv" is forbidden: User "system:serviceaccount:logdrop:logdrop-user" cannot get resource "persistentvolumes" in API group "" at the cluster scope
On the last line it specifically says resource "persistentvolumes" in API group "" - that's what I have allowed in the rules!
I can create the PV with admin credentials from the same yaml file and I can create any other resources (pods, services, etc) with the logdrop permissions. Just the PersistentVolume doesn't work for some reason. Any idea why?
I'm using Kubernetes 1.15.0.
Update:
This is my role binding as requested:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-view
  namespace: logdrop
subjects:
  - kind: ServiceAccount
    name: logdrop-user
    namespace: logdrop
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: logdrop-user-full-access
It's not a ClusterRoleBinding as my intention is to give the user access only to one namespace (logdrop), not to all namespaces across the cluster.
PVs, namespaces, nodes and storage classes are cluster-scoped objects. As a best practice, to be able to list/watch those objects, you need to create a ClusterRole and bind it to a ServiceAccount via a ClusterRoleBinding. As an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <name of your cluster role>
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - persistentvolumes
      - namespaces
    verbs: ["list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources:
      - storageclasses
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <name of your cluster role binding>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <name of your cluster role which should be matched with the previous one>
subjects:
  - kind: ServiceAccount
    name: <service account name>
    namespace: <namespace of the service account>   # required for ServiceAccount subjects
I see a potential problem here.
PersistentVolumes are cluster-scoped resources. They are expected to be provisioned by the administrator without any namespace.
PersistentVolumeClaims, however, can be created by users within a particular namespace, as they are namespaced resources.
That's why it works when you use admin credentials but returns an error with the logdrop service account.
Please let me know if that makes sense.
The new role needs to be granted to a user, or group of users, with a rolebinding, e.g.:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logdrop-rolebinding
  namespace: logdrop
subjects:
  - kind: User
    name: logdrop-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logdrop-user-full-access
  apiGroup: rbac.authorization.k8s.io
So I have 3 namespaces. When I deployed Prometheus on Kubernetes, I see the error below in the logs; it is unable to monitor all the namespaces.
Error:
\"system:serviceaccount:development:default\" cannot list endpoints at the cluster scope"
level=error ts=2018-06-28T21:22:07.390161824Z caller=main.go:216 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:268: Failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:devops:default\" cannot list endpoints at the cluster scope"
You'd better use a dedicated service account to access Kubernetes, and give that service account the specific privileges that Prometheus needs, like the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: kube-system
This presumes that you deploy Prometheus in the kube-system namespace. You also need to specify the service account (serviceAccountName: prometheus) in your Prometheus deployment file.