kubectl doesn't show username/password of secret? - kubernetes

So, I set up a few secrets in my cluster, but when I want to see them, the response contains no data:
a@b:~/ kubectl create secret generic test-sc --username=test --password='tested'
secret/test-sc created
a@b:~/ kubectl describe secrets/test-sc
Name: test-sc
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====

Your secret was not created correctly: kubectl create secret generic does not take --username or --password flags. Since you are supplying literal text values, you should use --from-literal.
For your example:
kubectl create secret generic test-sc --from-literal=username=test --from-literal=password='tested'
This is explained in the docs.
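To double-check what actually got stored, you can describe the secret again (it should now list the username and password keys with their byte sizes) and decode a value; a quick check, assuming the secret name from above:
kubectl describe secret test-sc
kubectl get secret test-sc -o jsonpath='{.data.password}' | base64 --decode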

Related

kubernetes service account secret is not listed

I created a secret of type service-account using the manifest below. The secret got created, but when I run kubectl get secrets the service-account secret is not listed. Where am I going wrong?
apiVersion: v1
kind: Secret
metadata:
  name: secret-sa-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  extra: YmFyCg==

kubectl create -f sa-secret.yaml
secret/secret-sa-sample created
It might have been created in the default namespace. Specify the namespace explicitly using the -n $NS argument to kubectl, both when creating the Secret and when listing it.
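A quick way to confirm where the Secret ended up is to search every namespace and then query the right one; for example, using the Secret name from the manifest above and <your-namespace> as a placeholder:
kubectl get secrets --all-namespaces | grep secret-sa-sample
kubectl get secret secret-sa-sample -n <your-namespace>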

How can I determine whether Kubernetes is using authentication for a image repository?

I'm trying to investigate why a pod has a status of ImagePullBackOff.
When I kubectl describe the pod I see an event listed:
Warning  Failed  5m42s (x4 over 7m2s)  kubelet  Failed to pull image "**********************": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
This is not expected, as I have docker authentication set up for the default service account via a secret, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-image-pull-secret-to-service-account
How can I determine whether it's using the correct authentication so I can further debug this issue?
Not really an answer to the question, but a solution in my case:
There seems to be something wrong with kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}', as it does not seem to do anything...
Instead I edited the service account resource directly - no patch...
Demonstrated here:
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl describe serviceaccount default
Name: default
Namespace: app-1
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-tqp58
Tokens: default-token-tqp58
Events: <none>
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl get serviceaccount -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2020-09-17T15:50:34Z"
    name: default
    namespace: app-1
    resourceVersion: "111538"
    selfLink: /api/v1/namespaces/app-1/serviceaccounts/default
    uid: 5fe21574-67bf-485c-b9aa-d09c1fe3350c
  secrets:
  - name: default-token-tqp58
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch -n app-1 serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
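For what it's worth, the patch in the transcript above most likely did nothing because the JSON key is case-sensitive: the ServiceAccount field is imagePullSecrets, not imagepullsecrets, so the misspelled key is silently dropped. A patch along these lines (reusing the namespace and secret name from the transcript) should take effect:
kubectl patch serviceaccount default -n app-1 -p '{"imagePullSecrets": [{"name": "gcp-cr-read-access"}]}'
kubectl get serviceaccount default -n app-1 -o yaml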

One liner command to get secret name and secret's token

What's a one-liner command that replaces the two commands below to get a Kubernetes secret's token? An example use case is getting the token from the kubernetes-dashboard-admin secret in order to log in to and view kubernetes-dashboard.
Command example:
$ kubectl describe serviceaccount default
Name: default
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-zvxf4
Tokens: default-token-zvxf4
Events: <none>
$ kubectl describe secret default-token-zvxf4
Name: default-token-zvxf4
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: 809835e7-2564-439f-82f3-14762688ca80
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: TOKENHERE
The answer I discovered is below. It uses jsonpath to retrieve the secret name, xargs to pass it to the second command, and base64 at the end to decode the base64-encoded token.
$ kubectl get serviceaccount default -o=jsonpath='{.secrets[0].name}' | xargs kubectl get secret -ojsonpath='{.data.token}' | base64 --decode
TOKENHERE%
The trailing % is not part of the token (it's just the shell indicating that the output has no final newline).
This works on macOS without installing an additional tool such as jq, which could do the same job. Hope this is helpful for others.
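Note that on newer clusters (Kubernetes and kubectl v1.24+) token Secrets are no longer auto-created for service accounts, so the jsonpath above may return nothing; in that case you can request a token directly. A sketch, where the dashboard namespace and account name are assumptions based on the use case above:
kubectl create token default
kubectl -n kubernetes-dashboard create token kubernetes-dashboard-admin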
You generally don't need to run either command. Kubernetes will automatically mount the credentials to /var/run/secrets/kubernetes.io/serviceaccount/token in a pod declared using that service account, and the various Kubernetes SDKs know to look for credentials there. Accessing the API from a Pod in the Kubernetes documentation describes this setup in more detail.
Configure Service Accounts for Pods describes the Pod-level setup that's possible to do, though there are reasonable defaults for these.
apiVersion: v1
kind: Pod # or a pod spec embedded in a Deployment &c.
spec:
  serviceAccountName: my-service-account # defaults to "default"
  automountServiceAccountToken: true # defaults to true
I wouldn't try to make requests from outside the cluster as a service account. User permissions are better suited for this use case. As a user you could launch a Job with service-account permissions if you needed to.
Example using kubectl describe instead of kubectl get and adding the namespace definition:
kubectl -n kube-system describe secret $(kubectl -n kube-system describe sa default | grep 'Mountable secrets' | awk '{ print $3 }') | grep 'token:' | awk '{ print $2 }'
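If you just want to confirm that the token really is mounted inside a running pod, you can read the projected file directly; a minimal check, where my-pod is a placeholder for a pod using that service account:
kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token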

Why k8s pod can't find key in ConfigMap?

I'm having an issue with a Kubernetes pod that uses a ConfigMap. My pod fails to start, with the following error:
Warning Failed 10s (x7 over 2m16s) kubelet, docker-desktop Error: Couldn't find key URL in ConfigMap default/env-config
I created my ConfigMap as follows:
kubectl create configmap env-config --from-file env-config.yaml
This is my ConfigMap:
NAME DATA AGE
env-config 1 5m38s
Nates-MacBook-Pro:k8s natereed$ kubectl describe configmap env-config
Name: env-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
env-config.yaml:
----
apiVersion: v1
kind: ConfigMap
data:
  AWS_BUCKET: mybucket
  AWS_PROFILE: dev
  AWS_REGION: us-east-2
  JWT_SECRET: foo
  POSTGRESS_DB: <mydb>
  POSTGRESS_HOST: <my host>
  URL: http://localhost:8100
metadata:
  name: env-config
It looks like the command to create the ConfigMap is wrong? It's not clear to me why it created a map with a single key, "env-config.yaml".
The YAML file looks like this:
apiVersion: v1
kind: ConfigMap
data:
  AWS_BUCKET: mybucket
  AWS_PROFILE: dev
  AWS_REGION: us-east-2
  JWT_SECRET: foo
  POSTGRESS_DB: mydb
  POSTGRESS_HOST: postgreshost
  URL: http://localhost:8100
metadata:
  name: env-config
  namespace: default
I'd say the issue occurred because you are passing a ConfigMap YAML definition as the parameter of --from-file.
You could simply create it using:
kubectl create -f env-config.yaml
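With the ConfigMap created that way, the keys sit at the top level of its data, which is what a configMapKeyRef lookup expects. A sketch of the kind of pod spec entry that would then resolve the URL key (the pod name, container name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: env-config-demo
spec:
  containers:
  - name: app
    image: busybox
    env:
    - name: URL
      valueFrom:
        configMapKeyRef:
          name: env-config
          key: URL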
Besides that, if you would like to create it using --from-file, you can define a file containing only the parameters you need, something like:
File name: env-config
AWS_PROFILE: dev
AWS_REGION: us-east-2
JWT_SECRET: foo
POSTGRESS_DB: <mydb>
POSTGRESS_HOST: <my host>
URL: http://localhost:8100
And then you can create the ConfigMap in the way you were doing before:
kubectl create configmap env-config --from-file env-config
This would create a ConfigMap like this (kubectl describe configmap env-config):
Name: env-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
env-config:
----
AWS_BUCKET: mybucket
AWS_PROFILE: dev
AWS_REGION: us-east-2
JWT_SECRET: foo
POSTGRESS_DB: <mydb>
POSTGRESS_HOST: <my host>
URL: http://localhost:8100
Events: <none>
Here you can find some useful information:
Create ConfigMaps from files
Define container environment variables using ConfigMap data
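If the goal is for each line of the file to become its own ConfigMap key (so that lookups such as URL succeed), --from-env-file is the flag for that. A sketch, assuming the values are rewritten in KEY=VALUE form in a file named env-config.env (the file name is a placeholder):
env-config.env:
AWS_REGION=us-east-2
JWT_SECRET=foo
URL=http://localhost:8100

kubectl create configmap env-config --from-env-file=env-config.env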
So you've got things a little mixed up. What you have there is a ConfigMap with one key named env-config.yaml, the value of which is a string containing YAML data for a ConfigMap with a bunch of keys, including URL. I'm guessing you used kubectl create cm --from-file when you meant kubectl apply -f?

kubernetes configmaps for binary file

My Kubernetes version is 1.10.4.
I am trying to create a ConfigMap for Java keystore files:
kubectl create configmap key-config --from-file=server-keystore=/home/ubuntu/ssl/server.keystore.jks --from-file=server-truststore=/home/ubuntu/ssl/server.truststore.jks --from-file=client--truststore=/home/ubuntu/ssl/client.truststore.jks --append-hash=false
It says configmap "key-config" created.
But when I describe the ConfigMap, I get no data:
$ kubectl describe configmaps key-config
Name: key-config
Namespace: prod-es
Labels: <none>
Annotations: <none>
Data
====
Events: <none>
I know my Kubernetes version supports binary data in ConfigMaps and Secrets, but I am not sure what is wrong with my approach.
Any input on this is highly appreciated.
kubectl describe does not show binary data in ConfigMaps at the moment (kubectl version v1.10.4); also the DATA column of the kubectl get configmap output does not include the binary elements:
$ kubectl get cm
NAME DATA AGE
key-config 0 1m
But the data is there, it's just a poor UI experience at the moment. You can verify that with:
kubectl get cm key-config -o json
Or you can use this friendly command to check that the ConfigMap can be mounted and the projected contents matches your original files:
kubectl run cm-test --image=busybox --rm --attach --restart=Never --overrides='{"spec":{"volumes":[{"name":"cm", "configMap":{"name":"key-config"}}], "containers":[{"name":"cm-test", "image":"busybox", "command":["sh","-c","md5sum /cm/*"], "volumeMounts":[{"name":"cm", "mountPath":"/cm"}]}]}}'
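If you want to pull one of the binary entries back out (for example to diff it against the original file), the values live under binaryData and are base64-encoded; a sketch using the key name from the create command above:
kubectl get cm key-config -o go-template='{{index .binaryData "server-keystore"}}' | base64 --decode > /tmp/server.keystore.jks
md5sum /tmp/server.keystore.jks /home/ubuntu/ssl/server.keystore.jks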