Well, I'm using minikube v1.26.0 and kubectl to manage Kubernetes on my local machine. When I try to create a ConfigMap with kubectl apply -f ConfigMapFile.yaml, I get the error: no matches for kind "configmap" in version "apps/v1"
ConfigMapFile.yaml
apiVersion: apps/v1
kind: ConfigMap
metadata:
  name: test-config-map
data:
  .........
It seems like ConfigMap is not allowed or has been deprecated in apps/v1, but I cannot find any solution or tips that would help me with this problem.
You need to use apiVersion: v1 for a ConfigMap. You can also check the API version of any resource using:
kubectl api-resources | grep -i configmap
configmaps   cm   v1   true   ConfigMap
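For reference, a corrected manifest for the question's ConfigMap simply swaps the apiVersion (the key/value pair below is a hypothetical placeholder, since the original data was elided):
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config-map
data:
  example.key: "example-value"   # hypothetical placeholder, replace with your own data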
What will be the equivalent ConfigMap YAML code for the following command line?
kubectl create configmap mongo-initdb --from-file=init-mongo.js
There is actually a kubectl command that lets you get the generated YAML for an existing ConfigMap. In your case:
kubectl get configmaps mongo-initdb -o yaml
You can use the command suggested by Mo Xue; however, here is the equivalent YAML.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-initdb
data:
  init-mongo.js: |
    <Content>
Read more: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-directories
Create the ConfigMap from a file.
Just create a sample file, for example ui.properties:
cat ui.properties
name=x
rollno=y
Command to create a ConfigMap from the above file:
kubectl create configmap ui-configmap --from-file=ui.properties
Verify the data.
kubectl get configmap ui-configmap -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: <>
  name: ui-configmap
  namespace: default
data:
  ui.properties: |
    name=x
    rollno=y
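As a side note, here is a minimal sketch of a Pod consuming this ConfigMap as a mounted file (the Pod name and image are just illustrative assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: ui-test-pod                # hypothetical Pod name, for illustration only
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/ui/ui.properties && sleep 3600"]
    volumeMounts:
    - name: ui-config
      mountPath: /etc/ui           # ui.properties shows up as a file in this directory
  volumes:
  - name: ui-config
    configMap:
      name: ui-configmap           # the ConfigMap created above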
I have created a service account "serviceacc" in a namespace xyz and gave it the needed permissions. Yet it cannot list pods. Here are the steps I followed.
$ kubectl create namespace xyz
$ kubectl apply -f objects.yaml
where the content of objects.yaml is:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: xyz
  name: listpodser
rules:
- apiGroups: [""]
  resources: ["pod"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: xyz
  name: service-listpodser
subjects:
- kind: ServiceAccount
  name: serviceacc
  apiGroup: ""
roleRef:
  kind: Role
  name: listpodser
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceacc
  namespace: xyz
Then I checked if the service account has permission to list pods:
$ kubectl auth can-i get pods --namespace xyz --as system:serviceaccount:xyz:serviceacc
no
$ kubectl auth can-i list pods --namespace xyz --as system:serviceaccount:xyz:serviceacc
no
As we can see from the output of the above commands, it cannot get/list pods.
Simple naming confusion. Use pods instead of pod in the resource list.
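For clarity, the corrected Role from the question would look like this (only the resource name changes):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: xyz
  name: listpodser
rules:
- apiGroups: [""]
  resources: ["pods"]   # plural "pods"; "pod" is not a recognized resource name
  verbs: ["get", "list"]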
You can do it in a simpler and easier way:
kubectl create sa serviceacc -n xyz
kubectl create role listpodser --verb=get,list --resource=po -n xyz
kubectl create -n xyz rolebinding service-listpodser --role=listpodser --serviceaccount=xyz:serviceacc
Note that the short name for pods, po, is accepted here; you can use the short name for any API object.
Short names: if you need to list the short names of all objects, run kubectl api-resources. In the same way, you can use the short names of other objects, e.g. pv instead of persistentvolume.
This way you can put these three lines into a shell script to create everything in one shot, as in the sketch below.
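A possible sketch of such a script, using the same names as above (the set -e line is just a defensive addition):
#!/usr/bin/env bash
set -e
# Namespace from the question; skip this line if it already exists
kubectl create namespace xyz
kubectl create sa serviceacc -n xyz
kubectl create role listpodser --verb=get,list --resource=po -n xyz
kubectl create -n xyz rolebinding service-listpodser --role=listpodser --serviceaccount=xyz:serviceacc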
I was wondering if anyone has any ideas on the best way to create 200 namespaces within a cluster.
Ideally a simple bash loop running kubectl create namespace would be good.
You can dynamically create a YAML file using any programming language you are most comfortable with (bash or Python), consisting of a list of Kubernetes namespaces in the following format (a bash sketch that generates such a file follows the example below):
$ cat namespaces-list.yaml
---
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Namespace
  metadata:
    name: namespace-list1
- apiVersion: v1
  kind: Namespace
  metadata:
    name: namespace-list2
- apiVersion: v1
  kind: Namespace
  metadata:
    name: namespace-list3
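A small bash sketch that could generate such a file for 200 namespaces (the namespace-list prefix just follows the example above; adjust the count and names as needed):
#!/usr/bin/env bash
# Write namespaces-list.yaml with one Namespace item per iteration
{
  echo "---"
  echo "apiVersion: v1"
  echo "kind: List"
  echo "items:"
  for i in $(seq 200); do
    echo "- apiVersion: v1"
    echo "  kind: Namespace"
    echo "  metadata:"
    echo "    name: namespace-list${i}"
  done
} > namespaces-list.yaml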
Then execute the following command to create them all in one shot!
kubectl apply -f namespaces-list.yaml
Hope this helped!
I eventually came up with something as simple as this:
#!/usr/bin/env bash
for i in $(seq 100); do
kubectl create namespace test-${i}
done
I have a problem in my Kubernetes cluster that suddenly appeared two weeks ago. The ClusterRoles I create are not visible when the RBAC rules for a given ServiceAccount are resolved. Here is a minimal set to reproduce the problem.
Create the relevant ClusterRole, ClusterRoleBinding and a ServiceAccount in the default namespace, so that this SA has the rights to see Endpoints.
# test.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-cr
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-cr
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: default
$ kubectl apply -f test.yaml
serviceaccount/test-sa created
clusterrole.rbac.authorization.k8s.io/test-cr created
clusterrolebinding.rbac.authorization.k8s.io/test-crb created
All objects, in particular the ClusterRole, are visible if requested directly.
$ kubectl get serviceaccount test-sa
NAME SECRETS AGE
test-sa 1 57s
$ kubectl get clusterrolebinding test-crb
NAME AGE
test-crb 115s
$ kubectl get clusterrole test-cr
NAME AGE
test-cr 2m19s
However, when I try to resolve the effective rights for this ServiceAccount, here is the error I get back:
$ kubectl auth can-i get endpoints --as=system:serviceaccount:default:test-sa
no - RBAC: clusterrole.rbac.authorization.k8s.io "test-cr" not found
The RBAC rules created before the breakage are still working properly. For instance, here is the result for the ServiceAccount of my etcd-operator that I deployed with Helm several months ago:
$ kubectl auth can-i get endpoints --as=system:serviceaccount:etcd:etcd-etcd-operator-etcd-operator
yes
The version of Kubernetes in this cluster is 1.17.0-0.
I am also seeing very slow deployments of new Pods lately; they can take up to 5 minutes to start being deployed after they have been created by a StatefulSet or a Deployment, if this helps.
Do you have any insight into what is going on, or even what I could do about it? Please note that my Kubernetes cluster is managed, so I do not have any control over the underlying system; I just have cluster-admin privileges as a customer. But it would greatly help anyway if I could give any direction to the administrators.
Thanks in advance!
Thanks a lot for your answers!
It turned out that we will probably never have the final word about what happened. The cluster provider simply restarted the kube-apiserver, and this fixed the issue.
I suppose that something went wrong, like caching or some other transient failure that cannot be pinned down as a reproducible error.
To give a little more data for a future reader, the error occurred on a Kubernetes cluster managed by OVH, whose specificity is to run the control plane itself as pods deployed in a master Kubernetes cluster on their side.
I have installed Helm 2.6.2 on a Kubernetes 1.8 cluster. helm init worked fine, but when I run helm list it gives this error:
helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
How do I fix this RBAC error message?
Once these commands:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
were run, the issue was solved.
More Secure Answer
The accepted answer gives full admin access to Helm, which is not the best solution security-wise. With a little more work, we can restrict Helm's access to a particular namespace. More details in the Helm documentation.
$ kubectl create namespace tiller-world
namespace "tiller-world" created
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created
Define a Role that allows Tiller to manage all resources in tiller-world like in role-tiller.yaml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
Then run:
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
In rolebinding-tiller.yaml,
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
Then run:
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
Afterwards you can run helm init to install Tiller in the tiller-world namespace.
$ helm init --service-account tiller --tiller-namespace tiller-world
Now prefix all commands with --tiller-namespace tiller-world or set TILLER_NAMESPACE=tiller-world in your environment variables.
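For example, with placeholder release and chart names (the stable/mysql chart here is just an assumed example), either of the following works:
helm install --name my-release --tiller-namespace tiller-world stable/mysql
or
export TILLER_NAMESPACE=tiller-world
helm install --name my-release stable/mysql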
More Future Proof Answer
Stop using Tiller. Helm 3 removes the need for Tiller completely. If you are using Helm 2, you can use helm template to generate the YAML from your Helm chart and then run kubectl apply to apply the objects to your Kubernetes cluster.
helm template --name foo --namespace bar --output-dir ./output ./chart-template
kubectl apply --namespace bar --recursive --filename ./output -o yaml
Helm runs with the "default" service account. You should grant it permissions.
For read-only permissions:
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
For admin access (e.g. to install packages):
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
The default serviceaccount does not have API permissions. Helm likely needs to be assigned a service account, and that service account given API permissions. See the RBAC documentation for granting permissions to service accounts: https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
kubectl apply -f your-config-file-name.yaml
and then update the Helm installation to use the service account:
helm init --service-account tiller --upgrade
I got this error while trying to install Tiller in offline mode. I thought the tiller service account didn't have enough rights, but as it turns out, a network policy was blocking the communication between Tiller and the api-server.
The solution was to create a network policy for Tiller allowing all of its egress communication, as sketched below.
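A minimal sketch of such a policy, assuming Tiller runs in kube-system with the default tiller-deploy labels (app: helm, name: tiller); adjust the namespace and selector to your setup:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tiller-allow-egress        # hypothetical policy name
  namespace: kube-system           # assumed Tiller namespace
spec:
  podSelector:
    matchLabels:
      app: helm                    # assumed labels of the tiller-deploy pod
      name: tiller
  policyTypes:
  - Egress
  egress:
  - {}                             # allow all egress traffic from Tiller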
export TILLER_NAMESPACE=<your-tiller-namespace> solved it for me, if <your-tiller-namespace> is not kube-system. This points the Helm client to the right Tiller namespace.
If you are using an EKS cluster from AWS and are facing the forbidden issue (e.g. forbidden: User ... cannot list resource "jobs" in API group "batch" in the namespace "default"), then this worked for me:
Solution:
Ensure you have configured the AWS CLI credentials.
Ensure that the configured user has permission to access the cluster.
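For instance, something along these lines (the cluster name and region are placeholders):
# Refresh the kubeconfig entry for the cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1
# Check which IAM identity kubectl is calling the API as
aws sts get-caller-identity
# Verify that this identity is mapped in the aws-auth ConfigMap
kubectl describe configmap aws-auth -n kube-system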