How to create a Kubernetes client certificate signing request for Cockroachdb - kubernetes

The environment I'm working with is a secure cluster running cockroach/gke.
I have an approved default.client.root certificate which allows me to access the DB as root, but I can't work out how to generate certificate requests for additional users. I've read the CockroachDB docs over and over; they explain how to manually generate a user certificate in a standalone configuration where the ca.key location is accessible, but not specifically how to do it in the context of Kubernetes.
I believe that the image cockroachdb/cockroach-k8s-request-cert:0.3 is the starting point, but I cannot figure out the pattern for how to use it.
Any pointers would be much appreciated. Ultimately I'd like to be able to use this certificate from an API in the same Kubernetes cluster which uses the pg client. Currently, it's in insecure mode, using just username and password.

The request-cert job is used as an init container for the pod. It will request a client or server certificate (the server certificates are requested by the CockroachDB nodes) using the K8S CSR API.
You can see an example of a client certificate being requested and then used by a job in client-secure.yaml. The init container is run before your normal container:
initContainers:
# The init-certs container sends a certificate signing request to the
# kubernetes cluster.
# You can see pending requests using: kubectl get csr
# CSRs can be approved using: kubectl certificate approve <csr name>
#
# In addition to the client certificate and key, the init-certs entrypoint will symlink
# the cluster CA to the certs directory.
- name: init-certs
  image: cockroachdb/cockroach-k8s-request-cert:0.3
  imagePullPolicy: IfNotPresent
  command:
  - "/bin/ash"
  - "-ecx"
  - "/request-cert -namespace=${POD_NAMESPACE} -certs-dir=/cockroach-certs -type=client -user=root -symlink-ca-from=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
  env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  volumeMounts:
  - name: client-certs
    mountPath: /cockroach-certs
This sends a CSR using the K8S API, waits for approval, and places all resulting files (client certificate, key for client certificate, CA certificate) in /cockroach-certs. If the certificate already exists as a K8S secret, it just grabs it.
You can request a certificate for any user by changing -user=root to the username you wish to use.
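The approval step mentioned in the init-container comments can be done from any machine with cluster access. A minimal sketch; the CSR name default.client.maxroach is an assumption (request-cert names CSRs after the namespace and user, as with default.client.root above), so substitute whatever name kubectl get csr actually shows:

```shell
# List pending certificate signing requests submitted by the init container
kubectl get csr

# Approve the client certificate for the hypothetical user "maxroach"
# in the "default" namespace
kubectl certificate approve default.client.maxroach
```

Once approved, the init container exits and the main container starts with the certificate, key, and CA available under /cockroach-certs.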

Related

Gitlab CI Kubernetes Agent Self Signed Certificate

I have following .gitlab-ci.yml config:
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - echo "Hello, Rules!"
    - kubectl config get-contexts
    - kubectl config use-context OurGroup/our-repo:agent-0
    - kubectl get pods
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: manual
      allow_failure: true
    - if: '$CI_COMMIT_REF_NAME == "develop"'
      when: manual
      allow_failure: true
  tags:
    - docker
This fails on following error:
Unable to connect to the server: x509: certificate signed by unknown authority
We are running a self-hosted GitLab instance with a self-signed certificate. The issue is that bitnami/kubectl:latest is a non-root Docker container, and the official GitLab docs describe using it here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#update-your-gitlab-ciyml-file-to-run-kubectl-commands
I have tried echo "$CA_CERTIFICATE" > /usr/local/share/ca-certificates/my-ca.crt && update-ca-certificates to inject a certificate, but that fails because the container lacks the privileges and sudo does not exist in this container.
kubectl get certificates fails because it cannot connect to localhost:8080.
Any pointers on how to get a self-signed certificate to work with connection with kubectl and agent authentication, or what is perhaps considered a secure way of making this work?
Thankfully GitLab provides a way to pass a path to a CA certificate file via a flag during start-up of the agent.
Broadly the process for loading a self-signed certificate is:
create a file containing the public certificate of your self-signed CA, concatenating any other public certificates involved in the signing chain
if the file contains only public certificates, you can apply it as a K8S ConfigMap; if it also includes the private key, create it as a K8S Secret instead, as the documentation below indicates
modify the K8S Deployment to mount the ConfigMap/Secret into the agent container, then add a new line to the container args pointing at the file: --ca-cert-file=/certs/ca.crt
GitLab Kubernetes Agent: provide custom certificates to the Agent Issue #280518 (thank you Philipp Hahn)
In the kind: Deployment section add the following things:
In spec:template:spec:containers:args append - --ca-cert-file=/certs/${YOUR_CA}.crt - expand ${YOUR_CA} here manually
In spec:template:spec:containers:volumeMounts add a new block:
- name: custom-certs
  readOnly: true
  mountPath: /certs
In spec:template:spec:volumes add a new block:
- name: custom-certs
  secret:
    secretName: ca
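The Secret referenced by secretName: ca above can be created directly from the CA file. A sketch, where the namespace gitlab-kubernetes-agent and the local filename my-ca.crt are assumptions; the key name ca.crt must match the filename used in the --ca-cert-file arg:

```shell
# Create the secret holding the self-signed CA bundle for the agent
kubectl create secret generic ca \
  --namespace gitlab-kubernetes-agent \
  --from-file=ca.crt=./my-ca.crt
```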
Official Docs:
Providing a custom certificate for accessing GitLab
You can provide a Kubernetes Secret to the GitLab Runner Helm Chart, which will be used to populate the container’s /home/gitlab-runner/.gitlab-runner/certs directory.
Each key name in the Secret will be used as a filename in the directory, with the file content being the value associated with the key:
The key/file name used should be in the format <gitlab.hostname>.crt, for example gitlab.your-domain.com.crt.
Any intermediate certificates need to be concatenated to your server certificate in the same file.
The hostname used should be the one the certificate is registered for.
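Following the Runner docs quoted above, the Secret can be created with one key per hostname and handed to the GitLab Runner Helm chart via its certsSecretName value. A sketch, where the gitlab namespace and the hostname gitlab.your-domain.com are assumptions:

```shell
# One key per host, named <gitlab.hostname>.crt, containing the server
# certificate plus any intermediates concatenated into the same file
kubectl create secret generic gitlab-runner-certs \
  --namespace gitlab \
  --from-file=gitlab.your-domain.com.crt=./gitlab.your-domain.com.crt

# Point the gitlab-runner chart at the secret
helm upgrade --install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab \
  --set certsSecretName=gitlab-runner-certs
```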
Gitlab Agent Helm Chart Deployment
- --ca-cert-file=/etc/agentk/config/ca.crt

Create TLS self-signed certificate for MinIO in Kubernetes cluster

My goal now is to create a TLS certificate for MinIO in my k8s cluster.
Link to MinIO requirements for TLS connection - up to date.
MinIO is reached via port-forward to its service in the cluster.
There is a cert-manager chart installed via Terraform in the cluster, which I want to use for this.
I would be happy to get all the info on how to actually create the certificate, check it, assign it, and understand the core concepts of a TLS secure connection; many of the guides I have read/watched so far have left me a bit confused.
Our k8s setup is managed entirely through Helm charts, so please avoid approaches that rely on local commands.
These certificates are supposed to be the simplest ones to create and assign. They will be self-signed, which means the CA will be part of the cluster itself rather than a third-party CA.
The MinIO service expects public.crt and private.key inside this path:
/etc/minio/certs/
or this path:
${HOME}/.minio/certs
values.yaml snippet of the TLS configuration:
## TLS Settings for MinIO
tls:
  enabled: true
  ## Create a secret with private.key and public.crt files and pass that here. Ref: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  certSecret: "tls-minio"
  publicCrt: public.crt
  privateKey: private.key
  ## Trusted Certificates Settings for MinIO. Ref: https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls#install-certificates-from-third-party-cas
  ## Bundle multiple trusted certificates into one secret and pass that here. Ref: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  ## When using self-signed certificates, remember to include MinIO's own certificate in the bundle with key public.crt.
  ## If certSecret is left empty and tls is enabled, this chart installs the public certificate from .Values.tls.certSecret.
  trustedCertsSecret: ""
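Since cert-manager is already installed, one way to produce the tls-minio Secret referenced by certSecret is a self-signed Issuer plus a Certificate resource. A sketch; the namespace, DNS names, and resource names are assumptions. Note that cert-manager writes the keys as tls.crt/tls.key (plus ca.crt), so you would either set publicCrt: tls.crt and privateKey: tls.key in the values above, or re-create the secret with public.crt/private.key key names:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: minio-selfsigned
  namespace: minio
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: minio-tls
  namespace: minio
spec:
  secretName: tls-minio            # matches certSecret in values.yaml
  dnsNames:
    - minio.minio.svc.cluster.local
    - localhost                    # for port-forward access
  issuerRef:
    name: minio-selfsigned
    kind: Issuer
```

You can then inspect the result with kubectl describe certificate minio-tls -n minio to check it was issued.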
Ask me for any more info about this.
Thanks!

How can we use the kafka connect truststore password in an abstract way in the KafkaConnector resource?

We have a Connect cluster of 3 nodes. We need a couple of certificates in our Connect cluster truststore. We have installed those certificates in the following way.
...
spec:
  tls:
    trustedCertificates:
    - certificate: ca.crt
      secretName: my-cluster-cluster-ca-cert
    - secretName: root-cer
      certificate: RootCA.crt
    - certificate: IntermediateCA.crt
      secretName: inter-cer
    - secretName: solace-broker-secret
      certificate: secure-solace-broker.crt
...
As you know, after the three Connect pods spin up, the certificates are installed into the truststore /tmp/kafka/cluster.truststore.p12. The randomly generated truststore password can be found in the file /tmp/strimzi-connect.properties.
We reference the truststore path and the truststore password in the KafkaConnector resource file.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: solace-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: com.solace.connector.kafka.connect.source.SolaceSourceConnector
  tasksMax: 1
  config:
    value.converter: org.apache.kafka.connect.converters.ByteArrayConverter
    key.converter: org.apache.kafka.connect.storage.StringConverter
    kafka.topic: solace-test
    sol.host: tcps://msdkjskdjsdfrdfjdffdhxu3n.messaging.solace.cloud:55443
    sol.username: my-solace-cloud-username
    sol.password: password
    sol.vpn_name: solaceservice
    sol.topics: try-me
    sol.message_processor_class: com.solace.connector.kafka.connect.source.msgprocessors.SolSampleSimpleMessageProcessor
    sol.ssl_trust_store: /tmp/kafka/cluster.truststore.p12
    sol.ssl_trust_store_password: HARDCODED_RANDOM_PASSWORD
Right now we exec into one of the Connect pods, read the password from the /tmp/strimzi-connect.properties file, and then use it in the sol.ssl_trust_store_password field.
My question:
Is there any way to parametrize the password? Any encapsulated way to use it, so that we do not need to get inside the pod to learn the password? The expectation is that the KafkaConnector resource would fetch the password from the /tmp/strimzi-connect.properties file on whichever pod it is running.
I have got the answer from the Slack channel by Jakub Scholz.
The tls configuration you are using and the truststore are supposed to be used for communication between Connect and Kafka, not for the connectors. I think you have two options for how to provide a truststore for the connector:
1. You can use the same truststore as you are using now, but load the password using the FileConfigProvider - I think that should load the right password on each Connect node.
2. You can just create your own secret with the truststore for the connector and load it into Connect using this: https://strimzi.io/docs/operators/latest/full/using.html#assembly-kafka-connect-external-configuration-deployment-configuration-kafka-connect
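Option 1 can be sketched as follows: enable Kafka's stock FileConfigProvider on the KafkaConnect resource, then have the connector resolve the password from the properties file at runtime on each node. The property key ssl.truststore.password inside /tmp/strimzi-connect.properties is an assumption - check the actual file for the exact key name:

```yaml
# KafkaConnect resource: enable the file config provider
spec:
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
---
# KafkaConnector resource: the placeholder is resolved on whichever
# Connect node the task runs on
spec:
  config:
    sol.ssl_trust_store: /tmp/kafka/cluster.truststore.p12
    sol.ssl_trust_store_password: ${file:/tmp/strimzi-connect.properties:ssl.truststore.password}
```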
And this is how I have implemented it:
Creating a custom truststore containing my certificates:
keytool -import -file RootCA.crt -alias root -keystore myTrustStore
Creating a Kubernetes secret with the trust store:
kubectl create secret generic my-trust-store --from-file=myTrustStore
loading the secret into the connect resource file:
spec:
  ...
  externalConfiguration:
    volumes:
      - name: my-trust-store
        secret:
          secretName: my-trust-store
After the Connect cluster pod spins up, the truststore will be available at /opt/kafka/external-configuration/my-trust-store/
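With that mount in place, the connector config can point at the mounted truststore instead of the internal one. A sketch; the password here is whatever you chose when running the keytool command above, not the Strimzi-generated one:

```yaml
# KafkaConnector resource, using the externally mounted truststore
spec:
  config:
    sol.ssl_trust_store: /opt/kafka/external-configuration/my-trust-store/myTrustStore
    sol.ssl_trust_store_password: the-password-set-when-creating-myTrustStore
```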

Kubernetes: Open /certs/tls.crt: no such file or directory

I am trying to configure SSL for the Kubernetes Dashboard. Unfortunately I receive the following error:
2020/07/16 11:25:44 Creating in-cluster Sidecar client
2020/07/16 11:25:44 Error while loading dashboard server certificates. Reason: open /certs/tls.crt: no such file or directory
volumeMounts:
- name: certificates
  mountPath: /certs
  # Create on-disk volume to store exec logs
I think that /certs should be mounted, but where should it be mounted?
Certificates are stored as Secrets. The Secret can then be mounted into a Deployment.
So in your example it would look something like this:
...
volumeMounts:
- name: certificates
  mountPath: /certs
  # Create on-disk volume to store exec logs
...
volumes:
- name: certificates
  secret:
    secretName: certificates
...
This is just a short snippet of the whole process of setting up Kubernetes Dashboard v2.0.0 with recommended.yaml.
If you used the recommended.yaml, then certs are created automatically and stored in memory: the Deployment is created with the arg --auto-generate-certificates.
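If you want the dashboard to serve your own certificate instead of an auto-generated one, you can populate the secret it mounts at /certs and tell it the filenames via args. A sketch, assuming the standard kubernetes-dashboard namespace and a cert/key pair on disk (dashboard.crt/dashboard.key are placeholders):

```shell
# Replace the (empty, auto-created) certs secret with your own pair
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl create secret generic kubernetes-dashboard-certs \
  --namespace kubernetes-dashboard \
  --from-file=tls.crt=./dashboard.crt \
  --from-file=tls.key=./dashboard.key
```

Then in the Deployment, replace --auto-generate-certificates with --tls-cert-file=tls.crt and --tls-key-file=tls.key so the container reads the mounted files.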
I also recommend reading How to expose your Kubernetes Dashboard with cert-manager as it might be helpful to you.
There is already a submitted issue with a similar problem to yours, Couldn't read CA certificate: open : no such file or directory #2518, but it's regarding Kubernetes v1.7.5.
If you have any more issues let me know I'll update the answer if you provide more details.

Cannot install Kubernetes Metrics Server

I would like to install Kubernetes Metrics Server and try the Metrics API by following this recipe (from Kubernetes Handbook). I currently have a Kubernetes 1.13 cluster that was installed with kubeadm.
The recipe's section Enable API Aggregation recommends changing several settings in /etc/kubernetes/manifests/kube-apiserver.yaml. The current settings are as follows:
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
The suggested new settings are as follows:
--requestheader-client-ca-file=/etc/kubernetes/certs/proxy-ca.crt
--proxy-client-cert-file=/etc/kubernetes/certs/proxy.crt
--proxy-client-key-file=/etc/kubernetes/certs/proxy.key
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
If I install metrics-server without these changes its log contains errors like this:
unable to fully collect metrics: ... unable to fetch metrics from Kubelet ... x509: certificate signed by unknown authority
Where do these credentials come from and what do they entail? I currently do not have a directory /etc/kubernetes/certs.
UPDATE I've now tried adding the following at suitable places inside metrics-server-deployment.yaml, however the issue still persists (in the absence of --kubelet-insecure-tls):
command:
- /metrics-server
- --client-ca-file
- /etc/kubernetes/pki/ca.crt
volumeMounts:
- mountPath: /etc/kubernetes/pki/ca.crt
  name: ca
  readOnly: true
volumes:
- hostPath:
    path: /etc/kubernetes/pki/ca.crt
    type: File
  name: ca
UPDATE Here is probably the reason why mounting the CA certificate into the container apparently did not help.
About Kubernetes Certificates:
Take a look on to how to Manage TLS Certificates in a Cluster:
Every Kubernetes cluster has a cluster root Certificate Authority
(CA). The CA is generally used by cluster components to validate the
API server’s certificate, by the API server to validate kubelet client
certificates, etc. To support this, the CA certificate bundle is
distributed to every node in the cluster and is distributed as a
secret attached to default service accounts.
And also PKI Certificates and Requirements:
Kubernetes requires PKI certificates for authentication over TLS. If
you install Kubernetes with kubeadm, the certificates that your
cluster requires are automatically generated.
kubeadm, by default, creates the Kubernetes certificates in the /etc/kubernetes/pki/ directory.
About the metrics-server error:
It looks like the metrics-server is trying to validate the kubelet serving certs, which are not signed by the main Kubernetes CA. Installation tools like kubeadm may not set up these certificates properly.
This problem can also happen if your server's names/addresses changed after the Kubernetes installation, which causes a mismatch between the apiserver.crt Subject Alternative Name and your current names/addresses. Check it with:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | grep DNS
The fastest/easy way to overcome this error is by using the --kubelet-insecure-tls flag for metrics-server. Something like this:
# metrics-server-deployment.yaml
[...]
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
  - /metrics-server
  - --kubelet-insecure-tls
Note that this has security implications. If you are running tests, that's fine, but for production the best approach is to identify and fix the certificate issues (take a look at this metrics-server issue for more information: #146)
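For a kubeadm cluster, the usual proper fix is to have the kubelets request serving certificates signed by the cluster CA, so metrics-server can verify them without --kubelet-insecure-tls. A sketch of the process; the CSR name shown by kubectl get csr varies per node:

```shell
# In the kubelet configuration (the kubelet-config ConfigMap for kubeadm,
# or /var/lib/kubelet/config.yaml on each node) set:
#   serverTLSBootstrap: true
# then restart the kubelet on each node and approve the serving CSRs:
kubectl get csr
kubectl certificate approve <csr-name-for-each-node>
```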