GitLab CI Kubernetes Agent Self-Signed Certificate - kubernetes

I have the following .gitlab-ci.yml config:
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - echo "Hello, Rules!"
    - kubectl config get-contexts
    - kubectl config use-context OurGroup/our-repo:agent-0
    - kubectl get pods
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: manual
      allow_failure: true
    - if: '$CI_COMMIT_REF_NAME == "develop"'
      when: manual
      allow_failure: true
  tags:
    - docker
This fails with the following error:
Unable to connect to the server: x509: certificate signed by unknown authority
We are running a self-hosted GitLab instance with a self-signed certificate. The issue is that bitnami/kubectl:latest is a non-root Docker container, and the official GitLab documentation describes it as the image to use here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#update-your-gitlab-ciyml-file-to-run-kubectl-commands
I have tried echo "$CA_CERTIFICATE" > /usr/local/share/ca-certificates/my-ca.crt && update-ca-certificates to inject a certificate, but that fails because the container doesn't have the privileges and sudo doesn't exist in it.
kubectl get certificates fails because it cannot connect to localhost:8080.
Any pointers on how to get a self-signed certificate working with kubectl and agent authentication, or what is considered a secure way of making this work?

Thankfully GitLab provides a way to pass a path to a CA certificate file via a flag at agent start-up.
Broadly the process for loading a self-signed certificate is:
create a file containing the public key of your self-signed certificate, making sure to concatenate any other public keys involved in the signing chain
if you can do the above, it's possible to apply this file as a K8S ConfigMap; if not (i.e. you include the private key), then create it as a K8S Secret as the documentation below indicates (see the example command after this list)
modify the K8S Deployment to mount the ConfigMap/Secret into the agent container, then add a new line to the container args to point to the file: --ca-cert-file=/certs/ca.crt
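A minimal sketch of creating that Secret from a PEM file (the namespace gitlab-agent and the file name my-ca.crt are assumptions; the secret name ca matches the snippet further down):
kubectl create secret generic ca --namespace gitlab-agent --from-file=ca.crt=my-ca.crt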
GitLab Kubernetes Agent: provide custom certificates to the Agent Issue #280518 (thank you Philipp Hahn)
In the kind: Deployment section, add the following (a combined sketch of the result follows after these snippets):
In spec:template:spec:containers:args append a new item - --ca-cert-file=/certs/${YOUR_CA}.crt (expand ${YOUR_CA} manually)
In spec:template:spec:containers:volumeMounts add a new block:
- name: custom-certs
  readOnly: true
  mountPath: /certs
In spec:template:spec:volumes add a new block:
- name: custom-certs
  secret:
    secretName: ca
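Taken together, the patched part of the agent Deployment would look roughly like this (a sketch only; the container name and the secret name ca are assumptions, and your existing image and args stay as they are, with only the new flag appended):
spec:
  template:
    spec:
      containers:
        - name: agent            # your existing agentk container
          args:
            # ...existing args stay unchanged; append:
            - --ca-cert-file=/certs/ca.crt
          volumeMounts:
            - name: custom-certs
              readOnly: true
              mountPath: /certs
      volumes:
        - name: custom-certs
          secret:
            secretName: ca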
Official Docs:
Providing a custom certificate for accessing GitLab
You can provide a Kubernetes Secret to the GitLab Runner Helm Chart, which will be used to populate the container’s /home/gitlab-runner/.gitlab-runner/certs directory.
Each key name in the Secret will be used as a filename in the directory, with the file content being the value associated with the key:
The key/file name used should be in the format <gitlab.hostname>.crt, for example gitlab.your-domain.com.crt.
Any intermediate certificates need to be concatenated to your server certificate in the same file.
The hostname used should be the one the certificate is registered for.
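For example, a hedged sketch of creating such a Secret and pointing the GitLab Runner chart at it via its certsSecretName value (the secret name, namespace, and certificate file name are placeholders):
kubectl create secret generic gitlab-runner-certs \
  --namespace gitlab-runner \
  --from-file=gitlab.your-domain.com.crt=gitlab.your-domain.com.crt
# then in the chart's values.yaml:
# certsSecretName: gitlab-runner-certs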
GitLab Agent Helm Chart Deployment:
- --ca-cert-file=/etc/agentk/config/ca.crt

Related

ADO Pipeline Environment Kubernetes On-Prem Resource Connection failing with x509: certificate signed by unknown authority

I am trying to set up a multi-stage ADO pipeline using the ADO pipeline Environment feature.
Stage 1: Builds the Spring Boot-based Java microservice using Maven.
Stage 2: Deploys the above using Helm 3. The HelmDeploy@0 task uses an Environment which has a Resource called tools-dev (a Kubernetes namespace) where I want this service to be deployed using a Helm chart.
It fails at the last step with this error:
/usr/local/bin/helm upgrade --install --values /azp/agent/_work/14/a/values.yaml --wait --set ENV=dev --set-file appProperties=/azp/agent/_work/14/a/properties.yaml --history-max 2 --stderrthreshold 3 java-rest-template k8s-common-helm/rest-template-helm-demo
Error: Kubernetes cluster unreachable: Get "https://rancher.msvcprd.windstream.com/k8s/clusters/c-gkffz/version?timeout=32s": x509: certificate signed by unknown authority
##[error]Error: Kubernetes cluster unreachable: Get "https://rancher.msvcprd.windstream.com/k8s/clusters/c-gkffz/version?timeout=32s": x509: certificate signed by unknown authority
Finishing: Helm Deploy
I created the Kubernetes resource in the Environment using the kubectl commands specified in the settings section.
Deploy stage pipeline excerpt:
- stage: Deploy
  displayName: kubernetes deployment
  dependsOn: Build
  condition: succeeded('Build')
  jobs:
  - deployment: deploy
    pool: $(POOL_NAME)
    displayName: Deploy
    environment: dev-az-s-central-k8s2.tools-dev
    strategy:
      runOnce:
        deploy:
          steps:
          - bash: |
              helm repo add \
                k8s-common-helm \
                http://nexus.windstream.com/repository/k8s-helm/
              helm repo update
            displayName: 'Add and Update Helm repo'
            failOnStderr: false
          - task: HelmDeploy@0
            inputs:
              command: 'upgrade'
              releaseName: '$(RELEASE_NAME)'
              chartName: '$(HELM_CHART_NAME)'
              valueFile: '$(Build.ArtifactStagingDirectory)/values.yaml'
              arguments: '--set ENV=$(ENV) --set-file appProperties=$(Build.ArtifactStagingDirectory)/properties.yaml --history-max 2 --stderrthreshold 3'
            displayName: 'Helm Deploy'
Environment Settings:
Name: dev-az-s-central-k8s2
Resource: tools-dev (Note: this is an on-prem k8s cluster that I am trying to connect to).
Can you please let me know what additional configuration is required to resolve this x509 certificate issue?
Check this documentation:
The issue is that your local Kubernetes config file must have the
correct credentials.
When you create a cluster on GKE, it will give you credentials,
including SSL certificates and certificate authorities. These need to
be stored in a Kubernetes config file (Default: ~/.kube/config) so
that kubectl and helm can access them.
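For example, on GKE the credentials are typically written into ~/.kube/config with (cluster name and zone are placeholders):
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE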
Also, check the answer to Helm 3: x509 error when connecting to local Kubernetes:
Helm looks for kubeconfig at this path $HOME/.kube/config.
Please run this command
microk8s.kubectl config view --raw > $HOME/.kube/config
This will save the config at the required path in your home directory and it should work.

Kubernetes: Open /certs/tls.crt: no such file or directory

I am trying to configure SSL for the Kubernetes Dashboard. Unfortunately I receive the following error:
2020/07/16 11:25:44 Creating in-cluster Sidecar client
2020/07/16 11:25:44 Error while loading dashboard server certificates. Reason: open /certs/tls.crt: no such file or directory
volumeMounts:
  - name: certificates
    mountPath: /certs
    # Create on-disk volume to store exec logs
I think that /certs should be mounted, but where should it be mounted?
Certificates are stored as Secrets. The Secret can then be mounted in a Deployment.
So in your example it would look something like this:
...
volumeMounts:
  - name: certificates
    mountPath: /certs
    # Create on-disk volume to store exec logs
...
volumes:
  - name: certificates
    secret:
      secretName: certificates
...
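The certificates Secret referenced above can be created from existing key and certificate files, for instance (the file names and the namespace are assumptions):
kubectl create secret generic certificates \
  --namespace kubernetes-dashboard \
  --from-file=tls.crt=dashboard.crt \
  --from-file=tls.key=dashboard.key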
This is just a short snippet of the whole process of setting up Kubernetes Dashboard v2.0.0 with recommended.yaml.
If you used the recommended.yaml, the certs are created automatically and stored in memory. The Deployment is created with the arg --auto-generate-certificates.
I also recommend reading How to expose your Kubernetes Dashboard with cert-manager as it might be helpful to you.
There was already an issue submitted with a similar problem to yours, Couldn't read CA certificate: open : no such file or directory #2518, but it's regarding Kubernetes v1.7.5.
If you have any more issues, let me know; I'll update the answer if you provide more details.

Azure Devops kubernetes service connection for "kubeconfig" option does not appear to work against an AAD openidconnect integrated AKS cluster

When using the "kubeconfig" option, I get the following error when I click on "verify connection":
Error: TFS.WebApi.Exception: No user credentials found for cluster in KubeConfig content. Make sure that the credentials exist and try again.
The kubeconfig I pasted in, and selected the correct context from, is a direct copy-paste of what is in my ~/.kube/config file, and this works fine with kubectl:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxx
    server: https://aks-my-stage-cluster-xxxxx.hcp.eastus.azmk8s.io:443
  name: aks-my-stage-cluster-xxxxx
contexts:
- context:
    cluster: aks-my-stage-cluster-xxxxx
    user: clusterUser_aks-my-stage-cluster-xxxxx_aks-my-stage-cluster-xxxxx
  name: aks-my-stage-cluster-xxxxx
current-context: aks-my-stage-cluster-xxxxx
kind: Config
preferences: {}
users:
- name: clusterUser_aks-my-stage-cluster-xxxxx_aks-my-stage-cluster-xxxxx
  user:
    auth-provider:
      config:
        access-token: xxxxx.xxx.xx-xx-xx-xx-xx
        apiserver-id: xxxx
        client-id: xxxxx
        environment: AzurePublicCloud
        expires-in: "3599"
        expires-on: "1572377338"
        refresh-token: xxxx
        tenant-id: xxxxx
      name: azure
Azure DevOps has an option to save the service connection without verification:
Even though the verification fails when editing the service connection, pipelines that use the service connection do work in my case.
Depending on the pasted KubeConfig, you might encounter a second problem where the Azure DevOps GUI for the service connection doesn't save or close, but also doesn't give you any error message. By inspecting the network traffic in e.g. Firefox's developer tools, I found out that the problem was the KubeConfig value being too long. Only about 20,000 characters are allowed. After removing irrelevant entries from the config, it worked.
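One way to shrink the pasted config is to keep only the current context and inline the files it references (a sketch; --minify drops entries the current context doesn't use, --flatten embeds referenced certificate files as data):
kubectl config view --minify --flatten > ado-kubeconfig.yaml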
PS: Another workaround is to run kubelogin in a script step in your pipeline.
It seems it's not enough just to use a kubeconfig converted with kubelogin. The plugin is required for kubectl to make a test connection, and it's probably not used by the Azure DevOps service connection configuration.
As a workaround that can work on a self-hosted build agent, you can install kubectl, kubelogin, and whatever other software you need to work with your AKS cluster, and use shell script steps like:
export KUBECONFIG=~/.kube/config
kubectl apply -f deployment.yaml
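For completeness, a hedged sketch of the kubelogin conversion that would typically run before the kubectl call (azurecli is just one of the login modes kubelogin supports):
kubelogin convert-kubeconfig -l azurecli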
You can try running the command below to get the KubeConfig, and then copy the content of the ~/.kube/config file into the service connection to try again.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
After running the above command and copying the config from ~/.kube/config on my local machine, I successfully added my Kubernetes connection using the kubeconfig option.
You can also refer to the steps here.

Cannot install Kubernetes Metrics Server

I would like to install Kubernetes Metrics Server and try the Metrics API by following this recipe (from Kubernetes Handbook). I currently have a Kubernetes 1.13 cluster that was installed with kubeadm.
The recipe's section Enable API Aggregation recommends changing several settings in /etc/kubernetes/manifests/kube-apiserver.yaml. The current settings are as follows:
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
The suggested new settings are as follows:
--requestheader-client-ca-file=/etc/kubernetes/certs/proxy-ca.crt
--proxy-client-cert-file=/etc/kubernetes/certs/proxy.crt
--proxy-client-key-file=/etc/kubernetes/certs/proxy.key
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
If I install metrics-server without these changes its log contains errors like this:
unable to fully collect metrics: ... unable to fetch metrics from
Kubelet ... x509: certificate signed by unknown authority
Where do these credentials come from and what do they entail? I currently do not have a directory /etc/kubernetes/certs.
UPDATE I've now tried adding the following at suitable places inside metrics-server-deployment.yaml, however the issue still persists (in the absence of --kubelet-insecure-tls):
command:
  - /metrics-server
  - --client-ca-file
  - /etc/kubernetes/pki/ca.crt
volumeMounts:
  - mountPath: /etc/kubernetes/pki/ca.crt
    name: ca
    readOnly: true
volumes:
  - hostPath:
      path: /etc/kubernetes/pki/ca.crt
      type: File
    name: ca
UPDATE Here is probably the reason why mounting the CA certificate into the container apparently did not help.
About Kubernetes Certificates:
Take a look on to how to Manage TLS Certificates in a Cluster:
Every Kubernetes cluster has a cluster root Certificate Authority
(CA). The CA is generally used by cluster components to validate the
API server’s certificate, by the API server to validate kubelet client
certificates, etc. To support this, the CA certificate bundle is
distributed to every node in the cluster and is distributed as a
secret attached to default service accounts.
And also PKI Certificates and Requirements:
Kubernetes requires PKI certificates for authentication over TLS. If
you install Kubernetes with kubeadm, the certificates that your
cluster requires are automatically generated.
kubeadm, by default, creates the Kubernetes certificates in the /etc/kubernetes/pki/ directory.
About the metrics-server error:
It looks like the metrics-server is trying to validate the kubelet serving certs without them being signed by the main Kubernetes CA. Installation tools like kubeadm may not set up these certificates properly.
This problem can also happen if your server has changed names/addresses after the Kubernetes installation, which causes a mismatch between the apiserver.crt Subject Alternative Names and your current names/addresses. Check it with:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | grep DNS
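If the SANs turn out to be wrong, one hedged way to regenerate the API server certificate on a kubeadm (v1.13+) cluster is roughly the following; the backup path and the extra SAN are placeholders, and the kube-apiserver pod must be restarted afterwards:
# back up the old cert/key, then re-issue with the current name/address
mkdir -p /root/pki-backup
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
kubeadm init phase certs apiserver --apiserver-cert-extra-sans=<current-hostname-or-ip>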
The fastest/easiest way to overcome this error is by using the --kubelet-insecure-tls flag for metrics-server. Something like this:
# metrics-server-deployment.yaml
[...]
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
    - /metrics-server
    - --kubelet-insecure-tls
Note that this has security implications. If you are just running tests, that's fine, but for production the best approach is to identify and fix the certificate issues (take a look at this metrics-server issue for more information: #146).
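If you want to keep TLS verification instead, metrics-server also has a --kubelet-certificate-authority flag. A hedged sketch, assuming your kubelet serving certificates are actually signed by the cluster CA at /etc/kubernetes/pki/ca.crt and that the node running metrics-server has that file (neither holds on every kubeadm install, as noted above):
# metrics-server-deployment.yaml (sketch, not a drop-in replacement)
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
    - /metrics-server
    - --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
  volumeMounts:
    - name: ca
      mountPath: /etc/kubernetes/pki/ca.crt
      readOnly: true
# and at the pod spec level:
volumes:
  - name: ca
    hostPath:
      path: /etc/kubernetes/pki/ca.crt
      type: File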

How to create a Kubernetes client certificate signing request for Cockroachdb

The environment I'm working with is a secure cluster running cockroach/gke.
I have an approved default.client.root certificate which allows me to access the DB using root, but I can't understand how to generate new certificate requests for additional users. I've read the cockroachDB docs over and over, and it is explained how to manually generate a user certificate in a standalone config where the ca.key location is accessible, but not specifically how to do it in the context of Kubernetes.
I believe that the image cockroachdb/cockroach-k8s-request-cert:0.3 is the start point but I cannot figure out the pattern for how to use it.
Any pointers would be much appreciated. Ultimately I'd like to be able to use this certificate from an API in the same Kubernetes cluster which uses the pg client. Currently, it's in insecure mode, using just username and password.
The request-cert job is used as an init container for the pod. It will request a client or server certificate (the server certificates are requested by the CockroachDB nodes) using the K8S CSR API.
You can see an example of a client certificate being requested and then used by a job in client-secure.yaml. The init container is run before your normal container:
initContainers:
  # The init-certs container sends a certificate signing request to the
  # kubernetes cluster.
  # You can see pending requests using: kubectl get csr
  # CSRs can be approved using: kubectl certificate approve <csr name>
  #
  # In addition to the client certificate and key, the init-certs entrypoint will symlink
  # the cluster CA to the certs directory.
  - name: init-certs
    image: cockroachdb/cockroach-k8s-request-cert:0.3
    imagePullPolicy: IfNotPresent
    command:
      - "/bin/ash"
      - "-ecx"
      - "/request-cert -namespace=${POD_NAMESPACE} -certs-dir=/cockroach-certs -type=client -user=root -symlink-ca-from=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    volumeMounts:
      - name: client-certs
        mountPath: /cockroach-certs
This sends a CSR using the K8S API, waits for approval, and places all resulting files (client certificate, key for client certificate, CA certificate) in /cockroach-certs. If the certificate already exists as a K8S secret, it just grabs it.
You can request a certificate for any user by simply changing -user=root to the username you wish to use.
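For example, after switching the init container to -user=myapp (a hypothetical username), the pending request can be inspected and approved with (the CSR name shown is an assumption; check kubectl get csr for the actual name):
kubectl get csr
kubectl certificate approve default.client.myapp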