I am trying to configure SSL for the Kubernetes Dashboard. Unfortunately I receive the following error:
2020/07/16 11:25:44 Creating in-cluster Sidecar client
2020/07/16 11:25:44 Error while loading dashboard server certificates. Reason: open /certs/tls.crt: no such file or directory
volumeMounts:
  - name: certificates
    mountPath: /certs
    # Create on-disk volume to store exec logs
I think that /certs should be mounted, but where should it be mounted?
Certificates are stored as secrets. The secret can then be mounted into a deployment.
So in your example it would look something like this:
...
volumeMounts:
  - name: certificates
    mountPath: /certs
    # Create on-disk volume to store exec logs
...
volumes:
  - name: certificates
    secret:
      secretName: certificates
...
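For completeness, the secret referenced above could be created from existing certificate files with something like this (just a sketch; the file names and the kubernetes-dashboard namespace are assumptions on my side):
kubectl create secret generic certificates \
  --from-file=tls.crt=./dashboard.crt \
  --from-file=tls.key=./dashboard.key \
  -n kubernetes-dashboard
The key names matter: the Dashboard looks for /certs/tls.crt and /certs/tls.key, which is exactly what your error message refers to.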
This is just a short snippet of the whole process of setting up Kubernetes Dashboard v2.0.0 with recommended.yaml.
If you used the recommended.yaml, then the certs are created automatically and stored in memory. The deployment is created with the arg --auto-generate-certificates.
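For reference, the relevant part of the container spec in recommended.yaml looks roughly like this (quoted from memory, so treat it as a sketch rather than a verbatim copy):
containers:
  - name: kubernetes-dashboard
    image: kubernetesui/dashboard:v2.0.0
    args:
      - --auto-generate-certificates
      - --namespace=kubernetes-dashboard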
I also recommend reading How to expose your Kubernetes Dashboard with cert-manager as it might be helpful to you.
There was already an issue submitted with a similar problem to yours, Couldn't read CA certificate: open : no such file or directory #2518, but it's regarding Kubernetes v1.7.5.
If you have any more issues, let me know; I'll update the answer if you provide more details.
I've used the Bitnami Helm chart to install SCDF into a k8s cluster generated by kOps in AWS.
I'm trying to add my development SCDF stream apps into the installation using a file URI and cannot figure out where the shared Skipper & Server mount point is. exec'ing into either instance, there is no /home/cnb, and I'm not seeing anything shared via mount. As best I can tell, the Bitnami installation uses the MariaDB instance for shared "storage".
Is there a recommended way of installing local/dev Stream apps into the cluster?
There are a couple of parameters under the deployer section that allow you to mount volumes (link):
deployer:
  ## @param deployer.volumeMounts Streaming applications extra volume mounts
  ##
  volumeMounts: {}
  ## @param deployer.volumes Streaming applications extra volumes
  ##
  volumes: {}
see https://github.com/bitnami/charts/tree/master/bitnami/spring-cloud-dataflow#deployer-parameters.
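For example, a values file that makes a hostPath directory available to the deployed stream apps might look something like this (a sketch only; the volume name and paths are placeholders, and the exact rendering into the deployer config depends on the chart version):
deployer:
  volumeMounts:
    - name: application-volume
      mountPath: /applications
  volumes:
    - name: application-volume
      hostPath:
        path: /cdf
        type: Directory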
Then, the mounted volume is used in the ConfigMaps (both server and skipper):
Server
https://github.com/bitnami/charts/blob/c351211a5501bb44b5e065a5e3a7d4b7414f84f3/bitnami/spring-cloud-dataflow/templates/server/configmap.yaml#L60
Skipper
https://github.com/bitnami/charts/blob/c351211a5501bb44b5e065a5e3a7d4b7414f84f3/bitnami/spring-cloud-dataflow/templates/skipper/configmap.yaml#L72
Apart from that, there are also server.extraVolumes and server.extraVolumeMounts to be set on the Dataflow Server Pod, and skipper.extraVolumes and skipper.extraVolumeMounts to be set on the Skipper Pod, in case that's useful for your use case.
Building on the previous answer, here is what I came up with:
Create an EBS Volume
Mount it on each EC2 instance in the cluster at the same location (/cdf)
Install CDF using the Bitnami chart and this config file:
server:
  extraVolumeMounts:
    # Location in container
    - mountPath: /applications
      # Refer to the volume below
      name: application-volume
  extraVolumes:
    - name: application-volume
      hostPath:
        # Location in host filesystem
        path: /cdf
        # this field is optional
        type: Directory
skipper:
  extraVolumeMounts:
    # Location in container
    - mountPath: /applications
      # Refer to the volume below
      name: application-volume
  extraVolumes:
    - name: application-volume
      hostPath:
        # Location in host filesystem
        path: /cdf
        # this field is optional
        type: Directory
Then I can copy my jars into /cdf on the host file system and install the applications using a file URI of file:///applications/<jar-file-name> and everything works.
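Once the jar is visible inside the containers at /applications, the app can be registered from the SCDF shell (or the dashboard) with a file URI, for example (the app name, type and jar name are placeholders):
app register --name my-processor --type processor --uri file:///applications/my-processor.jar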
I'm installing Component Pack 6.5.0.0 for HCL Connections. Orient Me works, but after deploying the Customizer, my mw-proxy pods are stuck at ContainerCreating. They show the following error in the event log:
MountVolume.SetUp failed for volume "appregistry-mw-proxy-secret-vol" : secrets "appregistry-mw-proxy-secret" not found
I had never heard of this secret and looked inside the chart. mw-proxy-cloud-deployment.yaml tries to mount it:
volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: customizernfsclaim
  - name: appregistry-mw-proxy-secret-vol
    secret:
      secretName: appregistry-mw-proxy-secret
The problem is that I could not find any information about what this secret is for and how it should be created. The documentation only requires the bootstrap, connections-env and infrastructure charts, and all of them were installed. I just tried creating some file as a secret:
echo Test123 > pwd-test
k create secret generic appregistry-mw-proxy-secret --from-file=pwd-test
After deleting all the pods, they came up running. But I don't know what this secret is for and what the customizer expects. Maybe this breaks some functionality of the application.
My questions are:
What is this secret for?
How do I create it correctly? (User, password, certificate, whatever)
Is there any documentation about it?
Have you tried adding the parameter
env.force_regenerate=true
to the bootstrap helm chart?
There's also the createSecret=true option in the connections-env helm chart.
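Both are ordinary chart values, so they would be passed the usual Helm way, for example (a sketch; the release names and chart paths are placeholders):
helm upgrade --install bootstrap <path-to-bootstrap-chart> --set env.force_regenerate=true
helm upgrade --install connections-env <path-to-connections-env-chart> --set createSecret=true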
If you used this documentation, the order of the helm deployments given there is wrong. The infrastructure deployment creates the secret "appregistry-mw-proxy-secret", so deploy infrastructure first and then mw-proxy, and the pods will start.
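You can verify that the secret exists before deploying mw-proxy (the connections namespace is an assumption; adjust it to whatever namespace you deployed into):
kubectl get secret appregistry-mw-proxy-secret -n connections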
The environment I'm working with is a secure cluster running cockroach/gke.
I have an approved default.client.root certificate which allows me to access the DB as root, but I can't work out how to generate new certificate requests for additional users. I've read the CockroachDB docs over and over; they explain how to manually generate a user certificate in a standalone config where the ca.key location is accessible, but not specifically how to do it in the context of Kubernetes.
I believe the image cockroachdb/cockroach-k8s-request-cert:0.3 is the starting point, but I cannot figure out the pattern for how to use it.
Any pointers would be much appreciated. Ultimately I'd like to be able to use this certificate from an API in the same Kubernetes cluster which uses the pg client. Currently, it's in insecure mode, using just username and password.
The request-cert job is used as an init container for the pod. It will request a client or server certificate (the server certificates are requested by the CockroachDB nodes) using the K8S CSR API.
You can see an example of a client certificate being requested and then used by a job in client-secure.yaml. The init container is run before your normal container:
initContainers:
  # The init-certs container sends a certificate signing request to the
  # kubernetes cluster.
  # You can see pending requests using: kubectl get csr
  # CSRs can be approved using: kubectl certificate approve <csr name>
  #
  # In addition to the client certificate and key, the init-certs entrypoint will symlink
  # the cluster CA to the certs directory.
  - name: init-certs
    image: cockroachdb/cockroach-k8s-request-cert:0.3
    imagePullPolicy: IfNotPresent
    command:
      - "/bin/ash"
      - "-ecx"
      - "/request-cert -namespace=${POD_NAMESPACE} -certs-dir=/cockroach-certs -type=client -user=root -symlink-ca-from=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    volumeMounts:
      - name: client-certs
        mountPath: /cockroach-certs
This sends a CSR using the K8S API, waits for approval, and places all resulting files (client certificate, key for client certificate, CA certificate) in /cockroach-certs. If the certificate already exists as a K8S secret, it just grabs it.
You can request a certificate for any user by just changing -user=root to the username you wish to use.
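For example, requesting a client certificate for a hypothetical user myapp (a placeholder name) only changes the last argument of the command shown above:
- "/request-cert -namespace=${POD_NAMESPACE} -certs-dir=/cockroach-certs -type=client -user=myapp -symlink-ca-from=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
The resulting CSR then shows up under kubectl get csr (following the <namespace>.client.<username> naming you already saw with default.client.root) and can be approved with kubectl certificate approve as noted in the comments above.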
I have a Kubernetes cluster up and running on AWS. Now when I try to attach an AWS EBS volume to a pod, I get a "special device does not exist" problem.
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxxxxx does not exist
I did some research and found that the correct AWS EBS device path should be in this format:
/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx
My suspicion is that this might be because I set up the Kubernetes cluster according to this tutorial and did not set the cloud provider, and therefore the AWS device "does not exist". I wonder if that is correct, and if so, how to set the cloud provider after the cluster is already up and running.
You need to set the cloud provider to properly mount an EBS volume. To do that after the fact, set --cloud-provider=aws in the following services:
controller-manager
apiserver
kubelet
Restart everything and try mounting again.
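Where exactly the flag goes depends on how the cluster was brought up. As a rough sketch for a systemd-managed kubelet (the drop-in path and variable name are assumptions along kubeadm lines; adjust for your setup):
# /etc/systemd/system/kubelet.service.d/20-aws.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws"
Then run systemctl daemon-reload && systemctl restart kubelet, and add the same --cloud-provider=aws flag to the kube-apiserver and kube-controller-manager startup flags (for static pods, in their manifests under /etc/kubernetes/manifests/).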
An example pod which mounts an EBS volume explicitly may look like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
    - image: gcr.io/google_containers/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-ebs
          name: test-volume
  volumes:
    - name: test-volume
      # This AWS EBS volume must already exist.
      awsElasticBlockStore:
        volumeID: <volume-id>
        fsType: ext4
The Kubernetes version is an important factor here. EBS mounting was experimental in 1.2.x; I tried it then without success. I haven't tried it again in the latest releases, but be sure to check the IAM roles on the k8s VMs to make sure they have the rights to provision EBS disks.
I want to use Minikube for local development. It needs to access my company's internal Docker registry, which is signed with a 3rd-party certificate.
Locally, I would copy the cert and run update-ca-trust extract or update-ca-certificates depending on the OS.
For the Minikube vm, how do I get the cert installed, registered, and the docker daemon restarted so that docker pull will trust the server?
I had to do something similar recently. You should be able to just hop on the machine with minikube ssh and then follow the directions here
https://docs.docker.com/engine/security/certificates/#understanding-the-configuration
to place the CA in the appropriate directory (/etc/docker/certs.d/[registry hostname]/). You shouldn't need to restart the daemon for it to work.
Well, minikube has a feature that copies all the contents of the ~/.minikube/files directory into its VM filesystem. So you can place your certificates under
~/.minikube/files/etc/docker/certs.d/<docker registry host>:<docker registry port>
and these files will be copied into the proper destination on minikube startup automagically.
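In practice that is just (a sketch; the registry host/port and the CA file name are placeholders):
mkdir -p ~/.minikube/files/etc/docker/certs.d/registry.example.com:5000
cp company-root-ca.crt ~/.minikube/files/etc/docker/certs.d/registry.example.com:5000/ca.crt
minikube start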
Shell into Minikube.
Copy your certificates to:
/etc/docker/certs.d/<docker registry host>:<docker registry port>
Ensure that the permissions on the certificate are correct; it must be at least readable.
Restart Docker (systemctl restart docker)
Don't forget to create a secret if your Docker Registry uses basic authentication:
kubectl create secret docker-registry service-registry --docker-server=<docker registry host>:<docker registry port> --docker-username=<name> --docker-password=<pwd> --docker-email=<email>
Have you checked ImagePullSecrets?
You can create a secret with your cert and let your pod use it.
By starting up the minikube with the following :
minikube start --insecure-registry=internal-site.dev:5244
It will start the Docker daemon with the --insecure-registry option:
/usr/local/bin/docker daemon -D -g /var/lib/docker -H unix:// -H tcp://0.0.0.0:2376 --label provider=virtualbox --insecure-registry internal-site.dev:5244 --tlsverify --tlscacert=/var/lib/boot2docker/ca.pem --tlscert=/var/lib/boot2docker/server.pem --tlskey=/var/lib/boot2docker/server-key.pem -s aufs
but this expects the connection to be HTTP. Unlike what the Docker registry documentation suggests, Basic auth does work, but it needs to be placed in an imagePullSecret as described in the Kubernetes docs.
I would also recommend reading "Adding imagePullSecrets to a service account" (link on the page above) to get the secret added to all pods as they are deployed. Note that this will not impact already-deployed pods.
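For reference, attaching the secret to the default service account can be done with a patch like the following (the secret name is a placeholder for whatever you created with kubectl create secret docker-registry):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-registry-secret"}]}'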
One option that works for me is to run a k8s job to copy the cert to the minikube host...
This is what I used to trust the Harbor registry I deployed into my minikube:
cat > update-docker-registry-trust.yaml << END
apiVersion: batch/v1
kind: Job
metadata:
  name: update-docker-registry-trust
  namespace: harbor
spec:
  template:
    spec:
      containers:
        - name: update
          image: centos:7
          command: ["/bin/sh", "-c"]
          args: ["find /etc/harbor-certs; find /minikube; mkdir -p /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io; cp /etc/harbor-certs/ca.crt /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io/ca.crt; find /minikube"]
          volumeMounts:
            - name: harbor-harbor-ingress
              mountPath: "/etc/harbor-certs"
              readOnly: true
            - name: docker-certsd-volume
              mountPath: "/minikube/etc/docker/"
              readOnly: false
      restartPolicy: Never
      volumes:
        - name: harbor-harbor-ingress
          secret:
            secretName: harbor-harbor-ingress
        - name: docker-certsd-volume
          hostPath:
            # directory location on host
            path: /etc/docker/
            # this field is optional
            type: Directory
  backoffLimit: 4
END
kubectl apply -f update-docker-registry-trust.yaml
You should copy your root certificate to $HOME/.minikube/certs and restart minikube with the --embed-certs flag.
For more details please refer to minikube handbook: https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/
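In other words, something like this (the certificate file name is a placeholder):
cp company-root-ca.pem $HOME/.minikube/certs/
minikube start --embed-certs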
As best as I can tell, there is no way to do this. The next best option is to use the insecure-registry option at startup.
minikube start --insecure-registry=foo.com:5000