Cannot install Kubernetes Metrics Server - kubernetes

I would like to install Kubernetes Metrics Server and try the Metrics API by following this recipe (from Kubernetes Handbook). I currently have a Kubernetes 1.13 cluster that was installed with kubeadm.
The recipe's section Enable API Aggregation recommends changing several settings in /etc/kubernetes/manifests/kube-apiserver.yaml. The current settings are as follows:
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
The suggested new settings are as follows:
--requestheader-client-ca-file=/etc/kubernetes/certs/proxy-ca.crt
--proxy-client-cert-file=/etc/kubernetes/certs/proxy.crt
--proxy-client-key-file=/etc/kubernetes/certs/proxy.key
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
If I install metrics-server without these changes its log contains errors like this:
unable to fully collect metrics: ... unable to fetch metrics from
Kubelet ... x509: certificate signed by unknown authority
Where do these credentials come from and what do they entail? I currently do not have a directory /etc/kubernetes/certs.
UPDATE I've now tried adding the following at suitable places inside metrics-server-deployment.yaml; however, the issue still persists (in the absence of --kubelet-insecure-tls):
command:
  - /metrics-server
  - --client-ca-file
  - /etc/kubernetes/pki/ca.crt
volumeMounts:
  - mountPath: /etc/kubernetes/pki/ca.crt
    name: ca
    readOnly: true
volumes:
  - hostPath:
      path: /etc/kubernetes/pki/ca.crt
      type: File
    name: ca
UPDATE Here is probably the reason why mounting the CA certificate into the container apparently did not help.

About Kubernetes Certificates:
Take a look on to how to Manage TLS Certificates in a Cluster:
Every Kubernetes cluster has a cluster root Certificate Authority
(CA). The CA is generally used by cluster components to validate the
API server’s certificate, by the API server to validate kubelet client
certificates, etc. To support this, the CA certificate bundle is
distributed to every node in the cluster and is distributed as a
secret attached to default service accounts.
And also PKI Certificates and Requirements:
Kubernetes requires PKI certificates for authentication over TLS. If
you install Kubernetes with kubeadm, the certificates that your
cluster requires are automatically generated.
By default, kubeadm creates the Kubernetes certificates in the /etc/kubernetes/pki/ directory.
About the metrics-server error:
It looks like metrics-server is trying to validate the kubelet serving certificates, but they are not signed by the main Kubernetes CA. Installation tools like kubeadm may not set up the kubelet serving certificates properly.
This problem can also happen if your server's names/addresses changed after the Kubernetes installation, which causes a mismatch between the apiserver.crt Subject Alternative Names and your current names/addresses. Check it with:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | grep DNS
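If you want to see what that SAN check looks like without touching a real cluster, here is a self-contained sketch: it generates a throwaway certificate with made-up hostnames and inspects its Subject Alternative Names the same way you would inspect apiserver.crt on a control-plane node.

```shell
# Generate a throwaway certificate with a few SANs (requires OpenSSL 1.1.1+ for -addext).
# The CN, DNS names, and IP below are purely illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,DNS:master.example.com,IP:10.96.0.1" \
  -days 1 -out /tmp/demo.crt

# Inspect the SANs, analogous to checking apiserver.crt
openssl x509 -in /tmp/demo.crt -text -noout | grep -A1 "Subject Alternative Name"
```

If your node's current hostname or IP is missing from the real apiserver.crt list, kubeadm can regenerate the certificate with additional SANs.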
The fastest/easiest way to overcome this error is to use the --kubelet-insecure-tls flag for metrics-server. Something like this:
# metrics-server-deployment.yaml
[...]
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
    - /metrics-server
    - --kubelet-insecure-tls
Note that this has security implications. If you are just running tests, that's fine, but for production the best approach is to identify and fix the certificate issues (take a look at this metrics-server issue for more information: #146).
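For the non-insecure route, metrics-server also accepts a --kubelet-certificate-authority flag pointing at the CA that signed the kubelet serving certificates. A sketch of what that could look like in the deployment; note this only works if the kubelet serving certificates actually are signed by the cluster CA (on kubeadm clusters they are self-signed by default, which is likely why the --client-ca-file attempt above did not help):

```yaml
# metrics-server-deployment.yaml (sketch, not a complete manifest)
[...]
      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server-amd64:v0.3.1
          command:
            - /metrics-server
            - --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
          volumeMounts:
            - name: ca
              mountPath: /etc/kubernetes/pki/ca.crt
              readOnly: true
      volumes:
        - name: ca
          hostPath:
            path: /etc/kubernetes/pki/ca.crt
            type: File
```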

Related

Gitlab CI Kubernetes Agent Self Signed Certificate

I have the following .gitlab-ci.yml config:
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - echo "Hello, Rules!"
    - kubectl config get-contexts
    - kubectl config use-context OurGroup/our-repo:agent-0
    - kubectl get pods
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: manual
      allow_failure: true
    - if: '$CI_COMMIT_REF_NAME == "develop"'
      when: manual
      allow_failure: true
  tags:
    - docker
This fails on following error:
Unable to connect to the server: x509: certificate signed by unknown authority
We are running a self-hosted GitLab instance with a self-signed certificate. The issue is that bitnami/kubectl:latest is a non-root Docker container, and the official GitLab docs describe using it here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#update-your-gitlab-ciyml-file-to-run-kubectl-commands
I have tried echo "$CA_CERTIFICATE" > /usr/local/share/ca-certificates/my-ca.crt && update-ca-certificates to inject a certificate, but that fails due to lack of privileges, and sudo does not exist in this container.
kubectl get certificates fails because it cannot connect to localhost:8080.
Any pointers on how to get a self-signed certificate working with kubectl and agent authentication, or what is perhaps considered a secure way of making this work?
Thankfully, GitLab provides a way to pass a path to a CA certificate file via a flag when starting the agent.
Broadly the process for loading a self-signed certificate is:
create a file containing the public key of your self-signed certificate, making sure to concatenate any other public keys involved in the signing chain
if the file contains only public keys, you can apply it as a K8S ConfigMap; if it also includes the private key, create it as a K8S Secret, as the documentation below indicates
modify the K8S Deployment to mount the ConfigMap/Secret into the agent container, then add a new line to the container args to point at the file: --ca-cert-file=/certs/ca.crt
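The Secret from those steps could be created with a manifest along these lines (a sketch: the secret name ca and the gitlab-agent namespace are assumptions that must match your agent deployment):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ca
  namespace: gitlab-agent
type: Opaque
stringData:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...your self-signed certificate, plus any chain certificates...
    -----END CERTIFICATE-----
```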
GitLab Kubernetes Agent: provide custom certificates to the Agent Issue #280518 (thank you Philipp Hahn)
In the kind: Deployment section, add the following things:
In spec:template:spec:containers:args, append - --ca-cert-file=/certs/${YOUR_CA}.crt (expand ${YOUR_CA} here manually)
In spec:template:spec:containers:volumeMounts, add a new block:
- name: custom-certs
  readOnly: true
  mountPath: /certs
In spec:template:spec:volumes, add a new block:
- name: custom-certs
  secret:
    secretName: ca
Official Docs:
Providing a custom certificate for accessing GitLab
You can provide a Kubernetes Secret to the GitLab Runner Helm Chart, which will be used to populate the container’s /home/gitlab-runner/.gitlab-runner/certs directory.
Each key name in the Secret will be used as a filename in the directory, with the file content being the value associated with the key:
The key/file name used should be in the format <gitlab.hostname>.crt, for example gitlab.your-domain.com.crt.
Any intermediate certificates need to be concatenated to your server certificate in the same file.
The hostname used should be the one the certificate is registered for.
Gitlab Agent Helm Chart Deployment
- --ca-cert-file=/etc/agentk/config/ca.crt

Create TLS self-signed certificate for MinIO in Kubernetes cluster

My goal now is to create a TLS certificate for MinIO in my k8s cluster.
Link to MinIO requirements for TLS connection - up to date.
MinIO is reached through a port-forward to the service in the cluster.
There is a cert-manager chart installed in the cluster via Terraform, which I want to use for this.
I would be happy to get the full picture of how to actually create the certificate, check it, assign it, and understand the core concepts of a TLS-secured connection; many of the guides I have read/watched so far left me a bit confused.
Our k8s setup is managed with Helm charts overall, so please be aware not to rely on purely local commands.
These certificates are supposed to be the simplest ones to create and assign. They will be self-signed, which means the CA will be part of the cluster itself and not a third-party CA.
The MinIO service expects public.crt and private.key inside this path:
/etc/minio/certs/
or this path:
${HOME}/.minio/certs
values.yaml snippet of TLS configuration:
## TLS Settings for MinIO
tls:
  enabled: true
  ## Create a secret with private.key and public.crt files and pass that here. Ref:
  ## https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  certSecret: "tls-minio"
  publicCrt: public.crt
  privateKey: private.key

## Trusted Certificates Settings for MinIO. Ref:
## https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls#install-certificates-from-third-party-cas
## Bundle multiple trusted certificates into one secret and pass that here. Ref:
## https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
## When using self-signed certificates, remember to include MinIO's own certificate in the bundle with key public.crt.
## If certSecret is left empty and tls is enabled, this chart installs the public certificate from .Values.tls.certSecret.
trustedCertsSecret: ""
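Since cert-manager is already installed, one way to get a self-signed certificate into a secret that the chart's certSecret can point at is a SelfSigned Issuer plus a Certificate. This is a sketch, not a tested setup: the names, namespace, and dnsNames are assumptions, and note that cert-manager writes the secret keys as tls.crt/tls.key, so the chart's publicCrt/privateKey values would need to be set to those key names.

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: minio
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: minio-tls
  namespace: minio
spec:
  secretName: tls-minio          # matches certSecret in the values above
  dnsNames:
    - minio.minio.svc.cluster.local
    - minio.example.com
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```

With the tls-minio secret populated this way, set publicCrt: tls.crt and privateKey: tls.key in the values snippet above.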
Ask me for any more info about this.
Thanks!

Kubernetes: Open /certs/tls.crt: no such file or directory

I am trying to configure SSL for the Kubernetes Dashboard. Unfortunately I receive the following error:
2020/07/16 11:25:44 Creating in-cluster Sidecar client
2020/07/16 11:25:44 Error while loading dashboard server certificates. Reason: open /certs/tls.crt: no such file or directory
volumeMounts:
  - name: certificates
    mountPath: /certs
    # Create on-disk volume to store exec logs
I think that /certs should be mounted, but where should it be mounted?
Certificates are stored as secrets. A secret can then be mounted in a deployment.
So in your example it would look something like this:
...
volumeMounts:
  - name: certificates
    mountPath: /certs
    # Create on-disk volume to store exec logs
...
volumes:
  - name: certificates
    secret:
      secretName: certificates
...
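For completeness, the certificates secret referenced in that snippet could be created along these lines (a sketch; the tls.crt/tls.key key names match the filenames the dashboard error message expects, and kubernetes-dashboard is the usual namespace from recommended.yaml):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: certificates
  namespace: kubernetes-dashboard
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

Equivalently: kubectl create secret tls certificates --cert=tls.crt --key=tls.key -n kubernetes-dashboard.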
This is just a short snippet of the whole process of setting up Kubernetes Dashboard v2.0.0 with recommended.yaml.
If you used the recommended.yaml, then certs are created automatically and stored in memory; the Deployment is created with the arg --auto-generate-certificates.
I also recommend reading How to expose your Kubernetes Dashboard with cert-manager as it might be helpful to you.
There already was an issue submitted with a similar problem to yours, Couldn't read CA certificate: open : no such file or directory #2518, but it's regarding Kubernetes v1.7.5.
If you have any more issues let me know I'll update the answer if you provide more details.

How to create a Kubernetes client certificate signing request for Cockroachdb

The environment I'm working with is a secure cluster running cockroach/gke.
I have an approved default.client.root certificate which allows me to access the DB using root, but I can't understand how to generate new certificate requests for additional users. I've read the cockroachDB docs over and over, and it is explained how to manually generate a user certificate in a standalone config where the ca.key location is accessible, but not specifically how to do it in the context of Kubernetes.
I believe that the image cockroachdb/cockroach-k8s-request-cert:0.3 is the starting point, but I cannot figure out the pattern for how to use it.
Any pointers would be much appreciated. Ultimately I'd like to be able to use this certificate from an API in the same Kubernetes cluster which uses the pg client. Currently, it's in insecure mode, using just username and password.
The request-cert job is used as an init container for the pod. It will request a client or server certificate (the server certificates are requested by the CockroachDB nodes) using the K8S CSR API.
You can see an example of a client certificate being requested and then used by a job in client-secure.yaml. The init container is run before your normal container:
initContainers:
  # The init-certs container sends a certificate signing request to the
  # kubernetes cluster.
  # You can see pending requests using: kubectl get csr
  # CSRs can be approved using: kubectl certificate approve <csr name>
  #
  # In addition to the client certificate and key, the init-certs entrypoint will symlink
  # the cluster CA to the certs directory.
  - name: init-certs
    image: cockroachdb/cockroach-k8s-request-cert:0.3
    imagePullPolicy: IfNotPresent
    command:
      - "/bin/ash"
      - "-ecx"
      - "/request-cert -namespace=${POD_NAMESPACE} -certs-dir=/cockroach-certs -type=client -user=root -symlink-ca-from=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    volumeMounts:
      - name: client-certs
        mountPath: /cockroach-certs
This sends a CSR using the K8S API, waits for approval, and places all resulting files (client certificate, key for client certificate, CA certificate) in /cockroach-certs. If the certificate already exists as a K8S secret, it just grabs it.
You can request a certificate for any user by just changing -user=root to the username you wish to use.
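For example, to request a client certificate for a hypothetical user myuser instead of root, the init container's command from the snippet above would change to (myuser is a made-up name):

```yaml
command:
  - "/bin/ash"
  - "-ecx"
  - "/request-cert -namespace=${POD_NAMESPACE} -certs-dir=/cockroach-certs -type=client -user=myuser -symlink-ca-from=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
```

Remember the user must also exist in CockroachDB itself (CREATE USER myuser;), and the CSR still has to be approved with kubectl certificate approve.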

How to install a CA in Minikube so image pulls are trusted

I want to use Minikube for local development. It needs to access my company's internal Docker registry, which is signed with a 3rd-party certificate.
Locally, I would copy the cert and run update-ca-trust extract or update-ca-certificates depending on the OS.
For the Minikube vm, how do I get the cert installed, registered, and the docker daemon restarted so that docker pull will trust the server?
I had to do something similar recently. You should be able to just hop on the machine with minikube ssh and then follow the directions here
https://docs.docker.com/engine/security/certificates/#understanding-the-configuration
to place the CA in the appropriate directory (/etc/docker/certs.d/[registry hostname]/). You shouldn't need to restart the daemon for it to work.
Well, minikube has a feature that copies all the contents of the ~/.minikube/files directory into its VM filesystem. So you can place your certificates under the
~/.minikube/files/etc/docker/certs.d/<docker registry host>:<docker registry port> path
and these files will be copied into the proper destination on minikube startup automagically.
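As a concrete sketch of that layout (the registry host/port is made up, and the CA here is a throwaway self-signed one generated just so the commands are runnable end to end; replace it with your registry's real CA certificate):

```shell
# Generate a placeholder CA cert for illustration only
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=example-registry-ca" -days 1 -out my-ca.crt

# Place it where minikube will copy it into the VM on startup
REG="registry.example.com:5000"   # hypothetical registry host:port
mkdir -p "$HOME/.minikube/files/etc/docker/certs.d/$REG"
cp my-ca.crt "$HOME/.minikube/files/etc/docker/certs.d/$REG/ca.crt"
```

After this, a minikube start (or restart) picks the file up at /etc/docker/certs.d/registry.example.com:5000/ca.crt inside the VM.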
Shell into Minikube.
Copy your certificates to:
/etc/docker/certs.d/<docker registry host>:<docker registry port>
Ensure that your permissions are correct on the certificate, they must be at least readable.
Restart Docker (systemctl restart docker)
Don't forget to create a secret if your Docker Registry uses basic authentication:
kubectl create secret docker-registry service-registry --docker-server=<docker registry host>:<docker registry port> --docker-username=<name> --docker-password=<pwd> --docker-email=<email>
Have you checked ImagePullSecrets?
You can create a secret with your cert and let your pod use it.
By starting up the minikube with the following :
minikube start --insecure-registry=internal-site.dev:5244
It will start the docker daemon with the --insecure-registry option :
/usr/local/bin/docker daemon -D -g /var/lib/docker -H unix:// -H tcp://0.0.0.0:2376 --label provider=virtualbox --insecure-registry internal-site.dev:5244 --tlsverify --tlscacert=/var/lib/boot2docker/ca.pem --tlscert=/var/lib/boot2docker/server.pem --tlskey=/var/lib/boot2docker/server-key.pem -s aufs
but this expects the connection to be HTTP. Unlike what the Docker registry documentation says, basic auth does work, but it needs to be placed in an imagePullSecret, as described in the Kubernetes docs.
I would also recommend reading "Adding imagePullSecrets to a service account" (link on the page above) to get the secret added to all pods as they are deployed. Note that this will not impact already-deployed pods.
One option that works for me is to run a k8s job to copy the cert to the minikube host...
This is what I used to trust the harbor registry I deployed into my minikube
cat > update-docker-registry-trust.yaml << END
apiVersion: batch/v1
kind: Job
metadata:
  name: update-docker-registry-trust
  namespace: harbor
spec:
  template:
    spec:
      containers:
        - name: update
          image: centos:7
          command: ["/bin/sh", "-c"]
          args: ["find /etc/harbor-certs; find /minikube; mkdir -p /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io; cp /etc/harbor-certs/ca.crt /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io/ca.crt; find /minikube"]
          volumeMounts:
            - name: harbor-harbor-ingress
              mountPath: "/etc/harbor-certs"
              readOnly: true
            - name: docker-certsd-volume
              mountPath: "/minikube/etc/docker/"
              readOnly: false
      restartPolicy: Never
      volumes:
        - name: harbor-harbor-ingress
          secret:
            secretName: harbor-harbor-ingress
        - name: docker-certsd-volume
          hostPath:
            # directory location on host
            path: /etc/docker/
            # this field is optional
            type: Directory
  backoffLimit: 4
END
kubectl apply -f update-docker-registry-trust.yaml
You should copy your root certificate to $HOME/.minikube/certs and restart minikube with the --embed-certs flag.
For more details please refer to minikube handbook: https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/
As best as I can tell, there is no way to do this. The next best option is to use the insecure-registry option at startup.
minikube start --insecure-registry=foo.com:5000