Secret volumes do not work on multinode docker setup - kubernetes

I have set up a multi-node Kubernetes 1.0.3 cluster using the instructions from https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode.md.
I create a secret volume using the following spec in the myns namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: myns
  labels:
    name: mysecret
data:
  myvar: "bUNqVlhCVjZqWlZuOVJDS3NIWkZHQmNWbXBRZDhsOXMK"
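For reference, values under data must be base64-encoded. A minimal sketch of producing and verifying such a value (the plaintext below is illustrative, not the original secret):
$ echo -n "my-secret-value" | base64                                  # encode a literal for the data field
$ echo "bUNqVlhCVjZqWlZuOVJDS3NIWkZHQmNWbXBRZDhsOXMK" | base64 --decode   # decode to verify what was stored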
Create secret volume:
$ kubectl create -f mysecret.yml --namespace=myns
Check to see if secret volume exists:
$ kubectl get secrets --namespace=myns
NAME       TYPE     DATA
mysecret   Opaque   1
Here is the Pod spec of the consumer of the secret volume:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: myns
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
    volumeMounts:
    - name: mysecret
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: mysecret
    secret:
      secretName: mysecret
Create the Pod:
$ kubectl create -f busybox.yml --namespace=myns
Now, when I exec into the Docker container and inspect the contents of the /etc/mysecret directory, I find it to be empty.

What namespace are your pod and secret in? They must be in the same namespace. Would you post a gist or pastebin of the Kubelet log? That contains information that can help us diagnose this.
Also, are you running the Kubelet on your host directly or in a container?
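For example, the following commands can gather that information (a sketch; the kubelet container ID is a placeholder and depends on how docker-multinode started it):
$ kubectl exec busybox --namespace=myns -- ls -la /etc/mysecret   # check the mount from inside the pod
$ docker ps | grep kubelet                                        # find the kubelet container, if containerized
$ docker logs <kubelet-container-id> 2>&1 | grep -i secret        # look for secret-volume errors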

Related

How to use a key with a hyphen in a Kubernetes secret?

I want to inject the following secret key/value pairs into pods: test-with=1 and testwith=0. First I create the secret:
kubectl create secret generic test --from-literal=test-with=1 --from-literal=testwith=0
Then I create a YAML file for a pod with the following specification:
containers:
- ...
  envFrom:
  - secretRef:
      name: test
The pod is running, but the output of the env command inside the container only shows:
...
TERM=xterm
testwith=0
...
The test-with=1 does not show up. How can I declare the secret so that I can see the key/value?
Variables with delimiters (such as hyphens) in their names are set; when viewed with printenv they are displayed at the top of the output.
Checked:
$ kubectl create secret generic test --from-literal=test-with=1 --from-literal=testwith=0
$ kubectl get secret/test -o yaml
apiVersion: v1
data:
  test-with: MQ==
  testwith: MA==
kind: Secret
metadata:
...
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: check-env
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - secretRef:
        name: test
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  hostNetwork: true
  dnsPolicy: Default
EOF
$ kubectl exec -it check-env -- printenv | grep test
test-with=1
testwith=0
GKE v1.18.16-gke.502
I don't see any issue. Here is how I replicated it:
kubectl create secret generic test --from-literal=test-with=1 --from-literal=testwith=0
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: cent
  name: cent
spec:
  containers:
  - image: centos:7
    name: cent
    command:
    - sleep
    - "9999"
    envFrom:
    - secretRef:
        name: test
EOF
$ kubectl exec -it cent -- bash
[root@cent /]# env | grep test
test-with=1
testwith=0
Most probably this is an image issue.
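One likely mechanism behind such image differences: names containing hyphens are valid in the process environment but are not valid shell identifiers, and some shells (for example a dash-based /bin/sh) drop them from their own copy of the environment. Reading the environment directly from /proc bypasses the shell entirely (a sketch against the cent pod above):
$ kubectl exec cent -- sh -c 'tr "\0" "\n" < /proc/1/environ | grep test'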

kubernetes deployment mounts secret as a folder instead of a file

I have a config file stored as a secret in Kubernetes and I want to mount it into a specific location inside the container. The problem is that what gets created inside the container is a folder instead of a file with the content of the secret in it. Is there any way to fix it?
My deployment looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: jetty
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jetty
  template:
    metadata:
      labels:
        app: jetty
    spec:
      containers:
      - name: jetty
        image: quay.io/user/jetty
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: config-properties
          mountPath: "/opt/jetty/config.properties"
          subPath: config.properties
        - name: secrets-properties
          mountPath: "/opt/jetty/secrets.properties"
        - name: doc-path
          mountPath: /mnt/storage/
        resources:
          limits:
            cpu: '1000m'
            memory: '3000Mi'
          requests:
            cpu: '750m'
            memory: '2500Mi'
      volumes:
      - name: config-properties
        configMap:
          name: jetty-config-properties
      - name: secrets-properties
        secret:
          secretName: jetty-secrets
      - name: doc-path
        persistentVolumeClaim:
          claimName: jetty-docs-pvc
      imagePullSecrets:
      - name: rcc-quay
Secrets vs ConfigMaps
Secrets let you store and manage sensitive information (e.g. passwords, private keys) and ConfigMaps are used for non-sensitive configuration data.
As you can see in the Secrets and ConfigMaps documentation:
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
Mounting Secret as a file
It is possible to create a Secret and pass it as one or more files to Pods.
I've created a simple example for you to illustrate how it works.
Below you can see sample Secret manifest file and Deployment that uses this Secret:
NOTE: I used subPath with Secrets and it works as expected.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: secrets-files
          mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in the "/mnt" directory
          subPath: secret.file1
        - name: secrets-files
          mountPath: "/mnt/secret.file2" # "secret.file2" file will be created in the "/mnt" directory
          subPath: secret.file2
      volumes:
      - name: secrets-files
        secret:
          secretName: my-secret # name of the Secret
Note: the Secret should be created before the Deployment.
After creating Secret and Deployment, we can see how it works:
$ kubectl get secret,deploy,pod
NAME               TYPE     DATA   AGE
secret/my-secret   Opaque   2      76s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           76s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7c67965687-ph7b8   1/1     Running   0          76s
$ kubectl exec nginx-7c67965687-ph7b8 -- ls /mnt
secret.file1
secret.file2
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file1
secretFile1
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file2
secretFile2
Projected Volume
I think a better way to achieve your goal is to use a projected volume.
A projected volume maps several existing volume sources into the same directory.
In the Projected Volume documentation you can find a detailed explanation, but additionally I created an example that might help you understand how it works.
Using a projected volume, I mounted secret.file1 and secret.file2 from the Secret and config.file1 from the ConfigMap as files into the Pod.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.file1: |
    configFile1
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: all-in-one
      mountPath: "/config-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: my-secret
          items:
          - key: secret.file1
            path: secret-dir1/secret.file1
          - key: secret.file2
            path: secret-dir2/secret.file2
      - configMap:
          name: my-config
          items:
          - key: config.file1
            path: config-dir1/config.file1
We can check how it works:
$ kubectl exec nginx -- ls /config-volume
config-dir1
secret-dir1
secret-dir2
$ kubectl exec nginx -- cat /config-volume/config-dir1/config.file1
configFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir1/secret.file1
secretFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir2/secret.file2
secretFile2
If this response doesn't answer your question, please provide more details about your Secret and what exactly you want to achieve.

Get value of configMap from mountPath

I created a ConfigMap this way:
kubectl create configmap some-config --from-literal=key4=value1
After that I created a pod which mounts the ConfigMap as a volume at /some/path.
I connect to this pod this way:
k exec -it nginx-configmap -- /bin/sh
I found the folder /some/path, but I couldn't get the value of key4.
If you refer to your ConfigMap in your Pod this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: some-config
it will be available in your Pod as a file /var/www/html/key4 with the content of value1.
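A quick check (a sketch, assuming the mypod Pod above is running):
$ kubectl exec mypod -- ls /var/www/html         # lists one file per ConfigMap key
$ kubectl exec mypod -- cat /var/www/html/key4   # should print: value1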
If you rather want it to be available as an environment variable you need to refer to it this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    envFrom:
    - configMapRef:
        name: some-config
As you can see, you don't need any volumes or volume mounts for it.
Once you connect to such Pod by running:
kubectl exec -ti mypod -- /bin/bash
You will see that your environment variable is defined:
root@mypod:/# echo $key4
value1

Can anyone please guide me on how to pull private images in Kubernetes?

kubectl create secret docker-registry private-registry-key --docker-username="devopsrecipes" --docker-password="xxxxxx" --docker-email="username@example.com" --docker-server="https://index.docker.io/v1/"
secret "private-registry-key" created
This command is used for accessing private Docker repos.
As referenced: http://blog.shippable.com/kubernetes-tutorial-how-to-pull-private-docker-image-pod
But I am not able to pull the image.
When I tried to access "https://index.docker.io/v1/" in a browser, it gave a "page not found" error.
Please guide me.
You also need to reference the imagePullSecrets in the pod/deployment spec you create:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: private-registry-key
Read more about imagePullSecrets here.
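If the pull still fails, one way to double-check the stored credentials is to decode the secret and confirm the server, username, and password are what you expect (a sketch; the backslash escapes the dot in the key name):
$ kubectl get secret private-registry-key -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode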
I just tried creating the same on my cluster.
$ kubectl create secret docker-registry private-registry-key --docker-username="xx" --docker-password="xx" --docker-email="xx" --docker-server="https://index.docker.io/v1/"
Output:
secret/private-registry-key created
My YAML file looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: vaibhavjain882/ubuntubase:latest
    command: ["sleep", "30"]
  imagePullSecrets:
  - name: private-registry-key
NAME          READY   STATUS    RESTARTS   AGE
private-reg   1/1     Running   0          35s
Note: just verify that you are passing the correct Docker image name. In my case it's "vaibhavjain882/ubuntubase:latest".
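If the pod never reaches Running, the pull error is usually spelled out in the pod's events, e.g. ErrImagePull or ImagePullBackOff (a sketch):
$ kubectl describe pod private-reg   # see the Events section at the bottom for the exact pull error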

How does K8s handle multiple remote Docker registries in a Pod definition using the imagePullSecrets list?

I would like to access multiple remote registries to pull images.
In the k8s documentation they say:
(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)
and so the Pod definition should be something like this:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: ...
  imagePullSecrets:
  - name: secret1
  - name: secret2
  - ...
  - name: secretN
Now I am not sure how K8s will pick the right secret for each image. Will all secrets be verified one by one each time? How will K8s handle failed retries? And could a certain number of unauthorized retries lead to some lock state in K8s or the Docker registries?
Thanks
You can use the following script to add two authentications in one secret:
#!/bin/bash
# Base64-encode the user:password pair for each registry
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)

u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)

# Build a .docker/config.json with one auth entry per registry
cat <<EOF > docker_config.json
{
  "auths": {
    "repo1_name_here": {
      "auth": "$auth1"
    },
    "repo2_name_here": {
      "auth": "$auth2"
    }
  }
}
EOF

# The whole config file must itself be base64-encoded for the Secret
base64 -w0 docker_config.json > docker_config_b64.json

# Create a kubernetes.io/dockerconfigjson Secret from it
cat <<EOF | kubectl apply -f -
apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
EOF
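To actually use the secret created by this script, reference it from a Pod spec via imagePullSecrets (a sketch; the names mirror the script's placeholders, and the image path is hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: specify_namespace_here
spec:
  imagePullSecrets:
  - name: specify_secret_name_here
  containers:
  - name: app
    image: repo1_name_here/some-image:tag   # hypothetical image path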
Kubernetes isn't going to try all secrets until it finds the correct one. When you create the secret, you declare that it's for a Docker registry:
$ kubectl create secret docker-registry user1-secret --docker-server=https://index.docker.io/v1/ --docker-username=user1 --docker-password=PASSWORD456 --docker-email=user1@email.com
$ kubectl create secret docker-registry user2-secret --docker-server=https://index.docker.io/v1/ --docker-username=user2 --docker-password=PASSWORD123 --docker-email=user2@email.com
$ kubectl get secrets user1-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-01-13T13:15:52Z"
  name: user1-secret
  namespace: default
  resourceVersion: "1515301"
  selfLink: /api/v1/namespaces/default/secrets/user1-secret
  uid: d2f3bb0c-3606-11ea-a202-42010a8000ad
type: kubernetes.io/dockerconfigjson
As you can see, the type kubernetes.io/dockerconfigjson tells Kubernetes to treat this secret differently.
So, when you reference your container image as magic.example.com/magic-image in your YAML, Kubernetes has enough information to connect the dots and use the right secret to pull it.
apiVersion: v1
kind: Pod
metadata:
  name: busyboxes
  namespace: default
spec:
  imagePullSecrets:
  - name: user1-secret
  - name: user2-secret
  containers:
  - name: jenkins
    image: user1/jenkins
    imagePullPolicy: Always
  - name: busybox
    image: user2/busybox
    imagePullPolicy: Always
So, as this example shows, it's possible to have two or more Docker registry secrets with the same --docker-server value. Kubernetes takes care of it seamlessly.
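Conceptually, the kubelet merges every secret listed under imagePullSecrets into one virtual .docker/config.json keyed by registry host, and the host part of each image reference selects the matching entry; when several credentials match the same host, they are tried in turn until a pull succeeds. An illustrative sketch of the merged result (not actual kubelet output; magic.example.com stands in for a second registry):
{
  "auths": {
    "https://index.docker.io/v1/": { "auth": "<base64 of user1:PASSWORD456>" },
    "magic.example.com":           { "auth": "<base64 of user2:PASSWORD123>" }
  }
}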