I created a deployment.yaml to create a Kubernetes Deployment.
Here is what I tried.

With apiVersion: apps/v1 I get the error:

unable to recognize "./slate-master/deployment.yaml": no matches for kind "Deployment" in version "apps/v1"

With apiVersion: extensions/v1beta1 and apiVersion: apps/v1beta1 I get:

Error from server (BadRequest): error when creating "./slate-master/deployment.yaml": Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment: ...
Here is my Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-05-12T04:12:12Z", GoVersion:"go1.9.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
So why does creating the Deployment fail?
There are two separate problems here.

For the "cannot be handled as a Deployment" error, check the "env" section. This applies to any of the apiVersions:

apps/v1
apps/v1beta1
apps/v1beta2

All the env variable values must be strings, so add the quote, e.g.

- name: POSTGRES_PORT
  value: {{ .Values.db.env.POSTGRES_PORT | quote }}

For the "no matches for kind" error: your server is 1.8, which does not serve Deployment under apps/v1 (that API version was only added in 1.9). Change apiVersion: apps/v1 to:

apiVersion: extensions/v1beta1
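For reference, here is a minimal Deployment manifest that a 1.8 server should accept. The app name and image are placeholders, so adapt them to your chart:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.13     # placeholder image
          env:
            - name: POSTGRES_PORT
              value: "5432"     # quoted: env values must be strings
          ports:
            - containerPort: 80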
How can I create a service account as a one-liner using kubectl create serviceaccount test-role, and then how do I attach the metadata below?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-role
  namespace: utility
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxx:role/rolename
kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"archive", BuildDate:"1980-01-01T00:00:00Z", GoVersion:"go1.17.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-eks-ffeb93d", GitCommit:"96e7d52c98a32f2b296ca7f19dc9346cf79915ba", GitTreeState:"clean", BuildDate:"2022-11-29T18:43:31Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
If by one line you mean one command, you can use a heredoc:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-role
  namespace: utility
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxx:role/rolename
EOF
Using the imperative kubectl commands requires running two commands, and the annotate command needs the service account name:

kubectl -n utility create serviceaccount test-role
kubectl -n utility annotate serviceaccount test-role eks.amazonaws.com/role-arn=arn:aws:iam::xxx:role/rolename
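Either way, you can verify that the annotation was applied by inspecting the result:

kubectl -n utility get serviceaccount test-role -o yaml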
I have Kubernetes version 1.24.3, and I created a new service account named "deployer", but when I check it, it shows that it has no secrets.
This is how I created the service account:
kubectl apply -f - << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer-role
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources:
      - deployments
    verbs: ["list", "get", "describe", "apply", "delete", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployer-crb
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: default
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: token-secret
  annotations:
    kubernetes.io/service-account.name: deployer
EOF
When I checked it, it shows that it doesn't have secrets:
cyber@manager1:~$ kubectl get sa deployer
NAME       SECRETS   AGE
deployer   0         4m32s
cyber@manager1:~$ kubectl get sa deployer -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"deployer","namespace":"default"}}
  creationTimestamp: "2022-10-13T08:36:54Z"
  name: deployer
  namespace: default
  resourceVersion: "2129964"
  uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
And this is the secret that should be associated with the above service account:
cyber@manager1:~$ kubectl get secrets token-secret -o yaml
apiVersion: v1
data:
  ca.crt: <REDACTED>
  namespace: ZGVmYXVsdA==
  token: <REDACTED>
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"deployer"},"name":"token-secret","namespace":"default"},"type":"kubernetes.io/service-account-token"}
    kubernetes.io/service-account.name: deployer
    kubernetes.io/service-account.uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
  creationTimestamp: "2022-10-13T08:36:54Z"
  name: token-secret
  namespace: default
  resourceVersion: "2129968"
  uid: d960c933-5e7b-4750-865d-e843f52f1b48
type: kubernetes.io/service-account-token
What can be the reason?
Update:
The answer helped, but for the record it doesn't matter: the token works even though the service account shows 0 secrets:
kubectl get pods --token `cat ./token` -s https://192.168.49.2:8443 --certificate-authority /home/cyber/.minikube/ca.crt --all-namespaces
Other Details:
I am working on Kubernetes version 1.24:
cyber@manager1:~$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
You can delete the resources you created by running:
kubectl delete clusterroles deployer-role
kubectl delete clusterrolebindings deployer-crb
kubectl delete sa deployer
kubectl delete secrets token-secret
References for the Kubernetes 1.24 changes:
Change log 1.24
Creating a secret for a service account token (from the documentation)

Based on the change log, tokens are no longer auto-generated for every service account:

The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide.

In other words, the TokenRequest API (token-request-v1) replaces auto-generation of legacy tokens because they are less secure. As a work-around you can create the Secret manually, as you already did, or you can use:

kubectl create token SERVICE_ACCOUNT_NAME

For example:

kubectl create token deployer

This requests a short-lived service account token.
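If your kubectl is recent enough (1.24+), kubectl create token also takes a --duration flag when you need a longer-lived (but still expiring) token, e.g.:

kubectl create token deployer --duration=24h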
Shouldn't the roleRef reference the deployer ClusterRole, which is named deployer-role? I would try to replace

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sdr

with

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role
I've installed the Kubernetes dashboard, and created a service account user with the appropriate permissions, however logging in with a token fails for some reason.
I see the following logs:
2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/login request from 10.244.0.0:34914: {}
2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Outcoming response to 10.244.0.0:34914 with 200 status code
2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Incoming HTTP/2.0 POST /api/v1/login request from 10.244.0.0:34914: {
"kubeConfig": "",
"password": "",
"token": "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSnJkV0psTFhONWMzUmxiU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUpoWkcxcGJpMTFjMlZ5TFhSdmEyVnVMV2RrZG5oM0lpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVibUZ0WlNJNkltRmtiV2x1TFhWelpYSWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzUxYVdRaU9pSmtaVEF4TnpRNU15MWhNakE0TFRFeFpUZ3RPRGxrWmkwd09EQXdNamRoTURobFpHTWlMQ0p6ZFdJaU9pSnplWE4wWlcwNmMyVnlkbWxqWldGalkyOTFiblE2YTNWaVpTMXplWE4wWlcwNllXUnRhVzR0ZFhObGNpSjkucHhfMDEwUTBYU2tPMmNhVi1ZYlRDYlllSTNVMVlmcGh3UFZ4TXBOYmF6dWpSM1gtOGVBTUZmbm1GNHlYWHFZWGw5eWlVYmRvQ3lBSl9YcHF5bTlLQThRaWx6MFU3eWZ1WV9BbUg4NmtDNE9hYW5aem1xSmp2N3ZObDY1MU1OeWF0dU5nR0JmU21GZXRCMnoxUkdYRmlIVF9UczljMjh1ZkZiSXNZNkRMVml4Y2JhUS0za2JxOW9PbzZ3NV8zc3ZRQ3dmNjNiTVNaSEpzdkgyUndwVkhkbFJnM3Rmbl9RRUxGcWtJYzZycERibFlUbXZJcVdVaWJjQVdHcXhDRVR6NU5vUGlnbndMaVpuVi1lZFpKZDRpbUJZNU5Ia3FLM0Q0TDgyTnp1NzJkUVU3M3B4T3F5Q3FVSlNhQ3IyVU52eVVucHRENTZTemdtSTBaM0JqUVkyTjFB",
"username": ""
}
2018/08/17 14:26:06 Non-critical error occurred during resource retrieval: the server has asked for the client to provide credentials
2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Outcoming response to 10.244.0.0:34914 with 200 status code
2018/08/17 14:26:24 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Kubernetes version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
As floreks wrote on GitHub:
NOTE: Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in. Nothing will happen after clicking Sign in button on login page.
Also, as chrissound wrote:
I've worked around this by giving cluster admin permission to the dashboard user and just clicking 'skip' at the login prompt:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
NOTE: Dashboard should not be exposed publicly using kubectl proxy command as it only allows HTTP connection. For domains other than localhost and 127.0.0.1 it will not be possible to sign in. Nothing will happen after clicking Sign in button on login page.
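For local debugging, kubectl proxy does still work, since you sign in over localhost. The usual URL for a dashboard running in kube-system looks like the following; the service name and port may differ in your install:

kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/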
You can change the kubernetes-dashboard service to the NodePort type, then access the dashboard on the specified NodePort. For example:
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31115
  selector:
    k8s-app: kubernetes-dashboard
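Alternatively, instead of editing the manifest, you can patch the existing service in place; the nodePort is then auto-assigned unless you set one explicitly:

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'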
I deployed heketi/gluster on a Kubernetes 1.6 cluster. Then I followed the guide to create a StorageClass for dynamic persistent volumes, but no PV is created when I create a PVC.
The heketi and glusterfs pods are running and work as expected if I use heketi-cli manually and create PVs manually; those PVs are also claimed by the PVCs.
It feels like I'm missing a step, but I don't know which one. I followed the guides and assumed that dynamic persistent volumes would "just work":
install heketi-cli and glusterfs-client
use ./gk-deploy -g
create StorageClass
create PVC
Did I miss a step?
StorageClass
$ kubectl get storageclasses
NAME   TYPE
slow   kubernetes.io/glusterfs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: 2017-06-07T06:54:35Z
  name: slow
  resourceVersion: "82741"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/slow
  uid: 2aab0a5c-4b4e-11e7-9ee4-001a4a3f1eb3
parameters:
  restauthenabled: "false"
  resturl: http://10.2.35.3:8080/
  restuser: ""
  restuserkey: ""
provisioner: kubernetes.io/glusterfs
PVC
$ kubectl -nkube-system get pvc
NAME                          STATUS    VOLUME               CAPACITY   ACCESSMODES   STORAGECLASS   AGE
gluster1                      Bound     glusterfs-b427d1f1   1Gi        RWO                          15m
influxdb-persistent-storage   Pending                                                 slow           14h
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"slow"},"labels":{"k8s-app":"influxGrafana"},"name":"influxdb-persistent-storage","namespace":"kube-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}
    volume.beta.kubernetes.io/storage-class: slow
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2017-06-06T16:48:46Z
  labels:
    k8s-app: influxGrafana
  name: influxdb-persistent-storage
  namespace: kube-system
  resourceVersion: "87638"
  selfLink: /api/v1/namespaces/kube-system/persistentvolumeclaims/influxdb-persistent-storage
  uid: 021b69c4-4ad8-11e7-9ee4-001a4a3f1eb3
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status:
  phase: Pending
Sources:
https://github.com/gluster/gluster-kubernetes
http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/
Environment:
$ kubectl cluster-info
Kubernetes master is running at https://andrea-master-0.muellerpublic.de:443
KubeDNS is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
$ heketi-cli cluster list
Clusters:
24dca142f655fb698e523970b33238a9
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4+coreos.0", GitCommit:"8996efde382d88f0baef1f015ae801488fcad8c4", GitTreeState:"clean", BuildDate:"2017-05-19T21:11:20Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The problem was the trailing slash in the StorageClass resturl:

resturl: http://10.2.35.3:8080/ must be resturl: http://10.2.35.3:8080
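For reference, the fixed StorageClass from the question, with only the trailing slash removed, would be:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
parameters:
  restauthenabled: "false"
  resturl: http://10.2.35.3:8080
  restuser: ""
  restuserkey: ""
provisioner: kubernetes.io/glusterfs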
PS: o.O ....
Docker allows executing commands as another user with docker exec -u when USER something is set in the Dockerfile.
It is helpful to enter superuser mode to debug issues when you are running your CMD as a system user in the Dockerfile.
How can I execute commands as another user on Kubernetes?
My kubectl version output is
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
You can check the spec schema to see what you can add to a pod, replication controller, or other object: https://cloud.google.com/container-engine/docs/spec-schema
You have runAsUser for what you want:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      securityContext:
        runAsUser: 41
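Assuming the pod starts successfully with that user, a current kubectl can check the effective UID inside the container:

kubectl exec nginx -- id
# expected to print something like: uid=41 gid=0(root) ...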
This is not currently supported, but there is an open feature request for it: https://github.com/kubernetes/kubernetes/issues/30656