How can I create a ServiceAccount as a one-liner using kubectl create serviceaccount test-role, and then how do I pass the metadata?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-role
  namespace: utility
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxx:role/rolename
kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"archive", BuildDate:"1980-01-01T00:00:00Z", GoVersion:"go1.17.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-eks-ffeb93d", GitCommit:"96e7d52c98a32f2b296ca7f19dc9346cf79915ba", GitTreeState:"clean", BuildDate:"2022-11-29T18:43:31Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
If by one line you mean one command, you can use a heredoc:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-role
  namespace: utility
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxx:role/rolename
EOF
Using the imperative kubectl commands requires running two commands:
kubectl -n utility create serviceaccount test-role
kubectl -n utility annotate serviceaccount test-role eks.amazonaws.com/role-arn=arn:aws:iam::xxx:role/rolename
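If you really want a single imperative invocation, one option is to generate the object client-side, annotate it locally, and pipe it to apply (a sketch, assuming your kubectl supports --dry-run=client and kubectl annotate --local):
kubectl -n utility create serviceaccount test-role --dry-run=client -o yaml \
  | kubectl annotate --local -f - eks.amazonaws.com/role-arn=arn:aws:iam::xxx:role/rolename -o yaml \
  | kubectl apply -f -
You can then read the object back to confirm the annotation landed:
kubectl -n utility get serviceaccount test-role -o yaml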
Related
I have Kubernetes version 1.24.3, and I created a new service account named "deployer", but when I check it, it shows that it doesn't have any secrets.
This is how I created the service account:
kubectl apply -f - << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer-role
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources:
      - deployments
    verbs: ["list", "get", "describe", "apply", "delete", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployer-crb
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: default
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: token-secret
  annotations:
    kubernetes.io/service-account.name: deployer
EOF
When I check it, it shows that it doesn't have secrets:
cyber@manager1:~$ kubectl get sa deployer
NAME       SECRETS   AGE
deployer   0         4m32s
cyber@manager1:~$ kubectl get sa deployer -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"deployer","namespace":"default"}}
  creationTimestamp: "2022-10-13T08:36:54Z"
  name: deployer
  namespace: default
  resourceVersion: "2129964"
  uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
And this is the secret that should be associated with the above service account:
cyber@manager1:~$ kubectl get secrets token-secret -o yaml
apiVersion: v1
data:
  ca.crt: <REDACTED>
  namespace: ZGVmYXVsdA==
  token: <REDACTED>
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"deployer"},"name":"token-secret","namespace":"default"},"type":"kubernetes.io/service-account-token"}
    kubernetes.io/service-account.name: deployer
    kubernetes.io/service-account.uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
  creationTimestamp: "2022-10-13T08:36:54Z"
  name: token-secret
  namespace: default
  resourceVersion: "2129968"
  uid: d960c933-5e7b-4750-865d-e843f52f1b48
type: kubernetes.io/service-account-token
What can be the reason?
Update:
The answer helps, but for the record, it doesn't matter: the token works even though the service account shows 0 secrets:
kubectl get pods --token `cat ./token` -s https://192.168.49.2:8443 --certificate-authority /home/cyber/.minikube/ca.crt --all-namespaces
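For completeness, a sketch of how such a token file can be produced from the long-lived secret created above (the data fields are base64-encoded, and base64 -d decodes them):
kubectl get secret token-secret -o jsonpath='{.data.token}' | base64 -d > ./token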
Other Details:
I am working on Kubernetes version 1.24:
cyber@manager1:~$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
You can delete it by running:
kubectl delete clusterroles deployer-role
kubectl delete clusterrolebindings deployer-crb
kubectl delete sa deployer
kubectl delete secrets token-secret
Reference to Kubernetes 1.24 changes:
Change log 1.24
Creating a secret through the documentation
Based on the changelog, the auto-generation of tokens is no longer available for every service account.
The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide.
See the TokenRequest API (token-request-v1), which stops auto-generation of legacy tokens because they are less secure, and the documented work-around.
Or you can use kubectl create token SERVICE_ACCOUNT_NAME, which requests a service account token:
kubectl create token deployer
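A sketch of requesting a longer-lived token and using it (kubectl create token supports a --duration flag; the one-hour value here is just an example, and the deployments call matches the role granted above):
kubectl create token deployer --duration=1h > ./token
kubectl get deployments --token "$(cat ./token)"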
Shouldn't the roleRef reference the deployer ClusterRole, which has the name deployer-role? I would try to replace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sdr
with
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role
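Note that roleRef is immutable on an existing binding, so if the binding really points at the wrong role, a sketch of the fix (reusing the names from the question) is to recreate it:
kubectl delete clusterrolebinding deployer-crb
kubectl create clusterrolebinding deployer-crb --clusterrole=deployer-role --serviceaccount=default:deployer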
I am trying to set up a PodSecurityPolicy on my Kubernetes cluster, and I followed the official manual from here.
It doesn't work: I completed all the steps on my Kubernetes cluster, but I can't get a Forbidden message.
My Kubernetes cluster:
nks@comp:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Steps in my case (I marked with "(?!)" the places where I should have gotten the Forbidden message but didn't):
nks@comp:~$ cat psp.yml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nksrole
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames:
      - example
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nkscrb
roleRef:
  kind: ClusterRole
  name: nksrole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:serviceaccounts
---
nks@comp:~$ kubectl apply -f psp.yml
clusterrole.rbac.authorization.k8s.io/nksrole created
clusterrolebinding.rbac.authorization.k8s.io/nkscrb created
nks@comp:~$ kubectl create namespace psp-example
namespace/psp-example created
nks@comp:~$ kubectl create serviceaccount -n psp-example fake-user
serviceaccount/fake-user created
nks@comp:~$ kubectl create rolebinding -n psp-example fake-editor --clusterrole=edit --serviceaccount=psp-example:fake-user
rolebinding.rbac.authorization.k8s.io/fake-editor created
nks@comp:~$ alias kubectl-admin='kubectl -n psp-example'
nks@comp:~$ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n psp-example'
nks@comp:~$ cat example-psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
nks@comp:~$ kubectl-admin create -f example-psp.yaml
podsecuritypolicy.policy/example created
nks@comp:~$ kubectl-user create -f- <<EOF
> apiVersion: v1
> kind: Pod
> metadata:
>   name: pause
> spec:
>   containers:
>     - name: pause
>       image: k8s.gcr.io/pause
> EOF
pod/pause created
nks@comp:~$ kubectl-user auth can-i use podsecuritypolicy/example
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
yes
(?!)
nks@comp:~$ kubectl-admin create role psp:unprivileged \
> --verb=use \
> --resource=podsecuritypolicy \
> --resource-name=example
role.rbac.authorization.k8s.io/psp:unprivileged created
nks@comp:~$ kubectl-admin create rolebinding fake-user:psp:unprivileged \
> --role=psp:unprivileged \
> --serviceaccount=psp-example:fake-user
rolebinding.rbac.authorization.k8s.io/fake-user:psp:unprivileged created
nks@comp:~$ kubectl-user auth can-i use podsecuritypolicy/example
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
yes
nks@comp:~$ kubectl-user create -f- <<EOF
> apiVersion: v1
> kind: Pod
> metadata:
>   name: privileged
> spec:
>   containers:
>     - name: pause
>       image: k8s.gcr.io/pause
>       securityContext:
>         privileged: true
> EOF
pod/privileged created
(?!)
Can you help me, please? I have no idea what is wrong.
Your cluster version is v1.17.4 and the feature is beta in v1.18; try again after upgrading your cluster.
Also make sure the admission controller is enabled for Pod Security Policies. You need to enable PSP support in the admission controller:
[master]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.100.50:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - command:
        - kube-apiserver
        - --advertise-address=192.168.100.50
        - --allow-privileged=true
        - --authorization-mode=Node,RBAC
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy  ## Added PodSecurityPolicy
        - --enable-bootstrap-token-auth=true
        - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
        - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
        - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
        - --etcd-servers=https://127.0.0.1:2379
...
[master]# systemctl restart kubelet
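Once the static pod has restarted, one way to confirm the flag took effect (a sketch, relying on the component=kube-apiserver label from the manifest above):
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins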
With that in place, PSP is enforced:
[master]# kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause
      securityContext:
        privileged: true
EOF
Error from server (Forbidden): error when creating "STDIN": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
This is my case on Kubernetes v1.18; I can't try it on Kubernetes v1.17 right now.
I have just created a GKE cluster on Google Cloud Platform. I have installed Helm in the Cloud Console:
$ helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
I have also created the necessary serviceaccount and clusterrolebinding objects:
$ cat helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
$ kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
However trying to initialise tiller gives me the following error:
$ helm init --service-account tiller --history-max 300
Error: unknown flag: --service-account
Why is that?
However trying to initialise tiller gives me the following error:
Error: unknown flag: --service-account
Why is that?
Helm 3 is a major upgrade. The Tiller component is now obsolete.
There is no helm init command anymore, so the --service-account flag has been removed as well.
The internal implementation of Helm 3 has changed considerably from Helm 2. The most apparent change is the removal of Tiller.
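With Helm 3 there is nothing to initialise; you add a repository and install charts directly. A minimal sketch (the repo URL and chart name are illustrative, not taken from the question):
helm repo add stable https://charts.helm.sh/stable
helm install my-nginx stable/nginx-ingress --namespace kube-system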
I created a deployment.yaml to create a Kubernetes Deployment.
Here are my tries:
apiVersion: apps/v1
got error: unable to recognize "./slate-master/deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
apiVersion: extensions/v1beta1 and apiVersion: apps/v1beta1
both of them got Error from server (BadRequest): error when creating "./slate-master/deployment.yaml": Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment: ...
Here is my Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-05-12T04:12:12Z", GoVersion:"go1.9.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
So why does creating the Deployment fail?
Check in "env" section, for apiVersion:
apps/v1
apps/v1beta1
apps/v1beta2
All the env variables should be a string, add the quote: e.g
- name: POSTGRES_PORT
value: {{ .Values.db.env.POSTGRES_PORT | quote }}
Replace apiVersion: apps/v1 with:
apiVersion: extensions/v1beta1
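Since the server is v1.8.7, a minimal manifest it should accept looks like this (a sketch; the name, labels, and image are placeholders, not from the question):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: slate
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: slate
    spec:
      containers:
        - name: slate
          image: nginx:1.13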
I deployed heketi/gluster on a Kubernetes 1.6 cluster. Then I followed the guide to create a StorageClass for dynamic persistent volumes, but no PVs are created when I create a PVC.
The heketi and glusterfs pods are running and work as expected if I use heketi-cli and create PVs manually; those PVs are also claimed by PVCs.
It feels like I'm missing a step, but I don't know which one. I followed the guides and assumed that dynamic persistent volumes would "just work".
install heketi-cli and glusterfs-client
use ./gk-deploy -g
create StorageClass
create PVC
Did I miss a step?
StorageClass
$ kubectl get storageclasses
NAME      TYPE
slow      kubernetes.io/glusterfs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: 2017-06-07T06:54:35Z
  name: slow
  resourceVersion: "82741"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/slow
  uid: 2aab0a5c-4b4e-11e7-9ee4-001a4a3f1eb3
parameters:
  restauthenabled: "false"
  resturl: http://10.2.35.3:8080/
  restuser: ""
  restuserkey: ""
provisioner: kubernetes.io/glusterfs
PVC
$ kubectl -nkube-system get pvc
NAME                          STATUS    VOLUME               CAPACITY   ACCESSMODES   STORAGECLASS   AGE
gluster1                      Bound     glusterfs-b427d1f1   1Gi        RWO                          15m
influxdb-persistent-storage   Pending                                                 slow           14h
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"slow"},"labels":{"k8s-app":"influxGrafana"},"name":"influxdb-persistent-storage","namespace":"kube-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}
    volume.beta.kubernetes.io/storage-class: slow
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2017-06-06T16:48:46Z
  labels:
    k8s-app: influxGrafana
  name: influxdb-persistent-storage
  namespace: kube-system
  resourceVersion: "87638"
  selfLink: /api/v1/namespaces/kube-system/persistentvolumeclaims/influxdb-persistent-storage
  uid: 021b69c4-4ad8-11e7-9ee4-001a4a3f1eb3
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status:
  phase: Pending
Sources:
https://github.com/gluster/gluster-kubernetes
http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/
Environment:
$ kubectl cluster-info
Kubernetes master is running at https://andrea-master-0.muellerpublic.de:443
KubeDNS is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
$ heketi-cli cluster list
Clusters:
24dca142f655fb698e523970b33238a9
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4+coreos.0", GitCommit:"8996efde382d88f0baef1f015ae801488fcad8c4", GitTreeState:"clean", BuildDate:"2017-05-19T21:11:20Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The problem was the slash in the StorageClass resturl.
resturl: http://10.2.35.3:8080/ must be resturl: http://10.2.35.3:8080
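For clarity, the corrected StorageClass (the same fields as the one shown above, only without the trailing slash):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  restauthenabled: "false"
  resturl: http://10.2.35.3:8080
  restuser: ""
  restuserkey: ""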
PS: o.O ....