How to add encryption-provider-config option to kube-apiserver? - kubernetes

I am using Kubernetes version 1.15.7.
I am trying to follow https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration to enable the 'encryption-provider-config' option on 'kube-apiserver'.
I edited the file '/etc/kubernetes/manifests/kube-apiserver.yaml' and added the option below:
- --encryption-provider-config=/home/rtonukun/secrets.yaml
But after that, all kubectl commands (like 'kubectl get no') fail with the error below:
The connection to the server 171.69.225.87:6443 was refused - did you specify the right host or port?
Mainly, how do I perform the two steps below?
3. Set the --encryption-provider-config flag on the kube-apiserver to point to the location of the config file.
4. Restart your API server.

I've reproduced your scenario exactly, and I'll try to explain how I fixed it.
Reproducing the same scenario
Create the encryption config file at /home/koopakiller/encryption.yaml (the same path used in the flag and the error log below):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: r48bixfj02BvhhnVktmJJiuxmQZp6c0R60ZQBFE7558=
      - identity: {}
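For reference, a random 32-byte key like the one above can be generated and base64-encoded with (as the documentation suggests):

head -c 32 /dev/urandom | base64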
Edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and set the --encryption-provider-config flag:
- --encryption-provider-config=/home/koopakiller/encryption.yaml
Save the file and exit.
When I checked the pod status, I got the same error:
$ kubectl get pods -A
The connection to the server 10.128.0.62:6443 was refused - did you specify the right host or port?
Troubleshooting
Since kubectl was not working anymore, I looked directly at the running containers using the docker command, and saw that the kube-apiserver container had recently been recreated:
$ docker ps
CONTAINER ID   IMAGE                  COMMAND    CREATED              STATUS              PORTS   NAMES
54203ea95e39   k8s.gcr.io/pause:3.1   "/pause"   About a minute ago   Up About a minute           k8s_POD_kube-apiserver-lab-1_kube-system_015d9709c9881516d6ecf861945f6a10_0
...
Kubernetes stores the logs of created pods in the /var/log/pods directory. I checked the kube-apiserver log file there and found a valuable piece of information:
{"log":"Error: error opening encryption provider configuration file "/home/koopakiller/encryption.yaml": open /home/koopakiller/encryption.yaml: no such file or directory\n","stream":"stderr","time":"2020-01-22T13:28:46.772768108Z"}
Explanation
Taking a look at the manifest file kube-apiserver.yaml, you can see that the kube-apiserver command runs inside a container, so the container needs to have the encryption.yaml file mounted into it.
If you check the volumeMounts in this file, you can see that only the paths below are mounted into the container by default:
/etc/ssl/certs
/etc/ca-certificates
/etc/kubernetes/pki
/usr/local/share/ca-certificates
/usr/share/ca-certificates
...
volumeMounts:
- mountPath: /etc/ssl/certs
  name: ca-certs
  readOnly: true
- mountPath: /etc/ca-certificates
  name: etc-ca-certificates
  readOnly: true
- mountPath: /etc/kubernetes/pki
  name: k8s-certs
  readOnly: true
- mountPath: /usr/local/share/ca-certificates
  name: usr-local-share-ca-certificates
  readOnly: true
- mountPath: /usr/share/ca-certificates
  name: usr-share-ca-certificates
  readOnly: true
...
Based on the facts above, we can conclude that the apiserver failed to start because /home/koopakiller/encryption.yaml isn't actually mounted into the container.
How to solve
I can see two ways to solve this issue:
1st - Copy the encryption file to /etc/kubernetes/pki (or any of the paths above) and change the path in /etc/kubernetes/manifests/kube-apiserver.yaml:
- --encryption-provider-config=/etc/kubernetes/pki/encryption.yaml
Save the file and wait for the apiserver to restart.
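The copy step here might look like this (a sketch, assuming the file is still in the home directory used above):

$ sudo cp /home/koopakiller/encryption.yaml /etc/kubernetes/pki/encryption.yaml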
2nd - Add a new volumeMount in the kube-apiserver.yaml manifest to mount a custom directory from the node into the container.
Let's create a new directory at /etc/kubernetes/secret (the home folder isn't a good place to keep config files =)).
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
...
    - --encryption-provider-config=/etc/kubernetes/secret/encryption.yaml
...
    volumeMounts:
    - mountPath: /etc/kubernetes/secret
      name: secret
      readOnly: true
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/secret
      type: DirectoryOrCreate
    name: secret
...
After you save the file, kubelet will mount the node path /etc/kubernetes/secret at the same path inside the apiserver container. Wait for it to start up completely, then try to list your nodes again.
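Once the apiserver is back up, you can verify that newly created secrets are actually stored encrypted by reading one directly from etcd, as the documentation suggests (a sketch; the etcd certificate paths below are the kubeadm defaults and may differ on your cluster):

$ kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
$ ETCDCTL_API=3 etcdctl \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    get /registry/secrets/default/secret1 | hexdump -C
# the stored value should be prefixed with k8s:enc:aescbc:v1:key1 rather than plain text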
Please let me know if that helped!

Related

CreateContainerConfigError - Kubernetes occasionally failing to prepare subPath for volumeMount of container

The error:
CreateContainerConfigError: failed to prepare subPath for volumeMount "myVolumeMount" of container "myContainer"
Relevant extract from YAML:
volumeMounts:
- name: myVolumeMount
  mountPath: /var/data/crash
  subPath: files/.cores
  readOnly: false
I occasionally see this failure when deploying; the fact that it is intermittent is what makes it confusing. Is this a potential bug? Using Kubernetes versions: client (0.22) and server (1.22).
If you want to use the path /var/data/crash/files/.cores as the subPath, you need to define the mountPath as /var/data/crash/files/.cores too, and define the subPath as just .cores. The final block should look like this:
volumeMounts:
- name: myVolumeMount
  mountPath: /var/data/crash/files/.cores
  subPath: .cores
  readOnly: false
This is what the documentation specifies as well.
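For illustration, a complete minimal pod using that corrected block could look like this (names lowercased to be valid DNS-1123 labels; the pod name and the emptyDir volume are assumptions for demonstration):

apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: mycontainer
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: myvolumemount
      mountPath: /var/data/crash/files/.cores
      subPath: .cores
      readOnly: false
  volumes:
  - name: myvolumemount
    emptyDir: {}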

Copy file inside Kubernetes pod from another container

I need to copy a file into my pod at creation time, and I don't want to use ConfigMaps or Secrets. I am trying to create a volumeMount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
  - name: init-myservice
    image: bitnami/kubectl
    command: ['kubectl','cp','./test.json','init-myservice:./data']
    volumeMounts:
    - name: my-storage
      mountPath: data
  - name: init-myservices
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: data
  volumes:
  - name: my-storage
    emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. With kubectl cp, your local machine is where the file exists, and you copy it to another machine (the pod). Here you are trying to run that copy from inside the pod, where the file does not exist, so it cannot work.
One thing you can do instead is build your own Docker image for the init container, copying the file you want to store into it before building the image; the init container can then copy that file into the shared volume where you want it stored (a sketch follows).
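A minimal sketch of that approach (the image name my-init-image is hypothetical, and the shared volume is assumed to be mounted at /data, as in the manifest above):

# Dockerfile for the init container image: the file is baked in at build time
FROM busybox
COPY test.json /test.json

# in the Pod spec, the init container copies the baked-in file to the shared volume
initContainers:
- name: copy-file
  image: my-init-image
  command: ['sh', '-c', 'cp /test.json /data/test.json']
  volumeMounts:
  - name: my-storage
    mountPath: /data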
I agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside of the container. I also think there are some resources that could be added to show how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify the above example to fit your specific needs.
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod, you will need the required permissions to access the Kubernetes API. You can get them by using a serviceAccount with appropriate permissions. More can be found in these links (a minimal sketch follows after them):
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
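For orientation, a minimal sketch of such permissions (all names here are hypothetical; kubectl cp needs at least get on pods and create on pods/exec):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-copier
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]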
Your bitnami/kubectl container will run into CrashLoopBackOff errors because you're passing a single command that runs to completion. After that, the container reports status Completed and is restarted, resulting in the aforementioned CrashLoopBackOff. To avoid that, you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with the previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also consider it important for this particular setup that you include the reason why Secrets and ConfigMaps cannot be used.

Why does the path not get mounted?

I've created a manifest file that looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  volumes:
  - name: "kuard-data"
    hostPath:
      path: "/home/developer/kubernetes/exercises"
  containers:
  - image: gcr.io/kuar-demo/kuard-amd64:1
    name: kuard
    volumeMounts:
    - mountPath: "/data"
      name: "kuard-data"
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
As you can see, the hostPath is:
path: "/home/developer/kubernetes/exercises"
and the mountPath is:
mountPath: "/data"
I've created a hello.txt file in the folder /home/developer/kubernetes/exercises, but when I enter the pod via kubectl exec -it kuard ash I cannot find the file hello.txt.
Where is the file?
kind uses Docker containers to simulate Kubernetes nodes, so when you create files on your host (your Ubuntu machine), the containers will not automatically have access to them.
(This gets even more complicated on macOS or Windows, where Docker runs in a separate virtual machine...)
I assume there are some shared folders visible inside the kind Docker nodes, but I could not find this documented.
You can inspect the filesystem of the Docker node from inside the container using docker exec -it kind-control-plane /bin/sh and then work with the usual tools.
If you need to make content from your development machine available, you might want to have a look at ksync: https://github.com/vapor-ware/ksync
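Alternatively, a host directory can be mapped into the kind node container at cluster creation time via extraMounts in the kind cluster config, e.g. (a sketch, assuming kind's v1alpha4 config API; the cluster must be recreated for this to take effect):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/developer/kubernetes/exercises
    containerPath: /home/developer/kubernetes/exercises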

Kubernetes can not mount a volume to a folder

I am following these docs on how to set up a sidecar proxy for my Cloud SQL database. It refers to a manifest on GitHub that (as I find it all over the place in GitHub repos etc.) seems to work for 'everyone', but I run into trouble: the proxy container cannot mount /secrets/cloudsql, it seems, as it cannot start successfully. When I run kubectl logs [mypod] cloudsql-proxy:
invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
So the secret seems to be the problem.
Relevant part of the manifest:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=pqbq-224713:europe-west4:osm=tcp:5432",
            "-credential_file=/secrets/cloudsql/mysecret.json"]
  securityContext:
    runAsUser: 2
    allowPrivilegeEscalation: false
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true
volumes:
- name: cloudsql-instance-credential
  secret:
    secretName: mysecret
To test/debug the secret, I mounted the volume into another container that does start, but the path and file /secrets/cloudsql/mysecret.json did not exist there either. However, when I mount the secret into an already existing folder, I find in that folder not the mysecret.json file (as I expected...) but, in my case, the two secrets it contains, so I find /existingfolder/password and /existingfolder/username (apparently this is how it works!? When I cat these secrets they show the proper strings, so they seem fine).
So it looks like the path cannot be created by the system; is this a permissions issue? I tried simply mounting in the proxy container at the root ('/'), i.e. no folder, but that gives an error saying it is not allowed. As the image gcr.io/cloudsql-docker/gce-proxy:1.11 is from Google and I cannot get it running, I cannot see what folders it has.
My questions:
Is the mountPath created from the manifest, or does it have to already exist in the container?
How can I get this working?
I solved it. I was using the same secret on the cloudsql-proxy as the one used by the app (env), but it needs to be a key you generate from a service account, turned into a secret. Then it works. This tutorial helped me through it.
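For reference, such a key file is usually turned into a secret like this (a sketch using the names from the manifest above; the local path to the downloaded service-account key is an assumption):

$ kubectl create secret generic mysecret \
    --from-file=mysecret.json=./my-service-account-key.json

This also explains the username/password observation: each key of a secret becomes its own file in the mounted folder, so a secret created with --from-file=mysecret.json=... yields exactly one file named mysecret.json.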

Writing from container to host in Kubernetes

I currently have a job that runs a script. This script needs to create a file on the host file system.
To do so I make use of a hostPath volume with:
volumeMounts:
- mountPath: /var/logs/test
  name: joblogs
volumes:
- hostPath:
    path: /root/test
    type: DirectoryOrCreate
  name: joblogs
I used
chcon -Rt svirt_sandbox_file_t /root/test
to allow writing to this directory, but even though files are created in /var/logs/test inside the container, they do not appear in /root/test on the host.
EDIT: The pod itself runs on the same node I am talking about.
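Since hostPath volumes are node-local, a quick sanity check when files seem to go missing is to confirm which node the job's pod actually ran on, e.g.:

$ kubectl get pods -o wide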