How to integrate Kubernetes with GitLab

I'm trying to integrate a Kubernetes cluster with GitLab in order to use the GitLab Review Apps feature.
The Kubernetes cluster was created via Rancher 1.6.
Running kubectl get all from the Kubernetes shell gives:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/my-service LoadBalancer x.x.144.67 x.x.13.89 80:32701/TCP 30d
svc/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 30d
On the GitLab CI/CD > Kubernetes page, we mainly need to enter three fields:
API URL
CA Certificate
Token
API URL
If I'm not wrong, we can get the Kubernetes API URL from Rancher Dashboard > Kubernetes > CLI > Generate Config and copy the server URL under cluster:
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    insecure-skip-tls-verify: true
    server: "https://x.x.122.197:8080/r/projects/1a7/kubernetes:6443"
CA Certificate & Token?
Now, the question is, where to get the CA Certificate (pem format) and the Token?
I tried all the ca.crt and token values from all the namespaces in the Kubernetes dashboard, but I'm getting this error in GitLab when trying to install the Helm Tiller application:
Something went wrong while installing Helm Tiller
Can't start installation process
Here is what my secrets page looks like:

I've also been struggling with Kubernetes and GitLab. I've created a couple of single-node "clusters" for testing, one with minikube and another via kubeadm.
I answered this question on the GitLab forum but I'm posting my solution below:
API URL
According to the official documentation, the API URL is just https://hostname:port, without a trailing slash.
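If you already have a working kubeconfig, the same URL can be read with kubectl (a quick sketch; either command works):
$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
$ kubectl cluster-info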
List secrets
First, I listed the secrets as usual:
$ kubectl get secrets
NAME TYPE DATA AGE
default-token-tpvsd kubernetes.io/service-account-token 3 2d
k8s-dashboard-sa-token-XXXXX kubernetes.io/service-account-token 3 1d
Get the service token
$ kubectl -o json get secret k8s-dashboard-sa-token-XXXXX | jq -r '.data.token' | base64 -d
eyJhbGci ... sjcuNA8w
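If jq is not available, the same token can be read with a plain jsonpath selector (an equivalent sketch, same secret name assumed):
$ kubectl get secret k8s-dashboard-sa-token-XXXXX -o jsonpath='{.data.token}' | base64 -d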
Get the CA certificate
Then I got the CA certificate directly from the JSON output via jq with a custom selector:
$ kubectl -o json get secret k8s-dashboard-sa-token-XXXXX | jq -r '.data."ca.crt"' | base64 -d - | tee ca.crt
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
... ... ... ... ... ...
FT55iMtPtFqAOnoYBCiLH6oT6Z1ACxduxPZA/EeQmTUoRJG8joczI0V1cnY=
-----END CERTIFICATE-----
Verify the CA certificate
With the CA certificate on hand you can verify as usual:
$ openssl x509 -in ca.crt -noout -subject -issuer
subject= /CN=kubernetes
issuer= /CN=kubernetes
$ openssl s_client -showcerts -connect 192.168.100.20:6443 < /dev/null &> apiserver.crt
$ openssl verify -verbose -CAfile ca.crt apiserver.crt
apiserver.crt: OK
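As a side note on the original Helm Tiller error: the token pasted into GitLab needs enough privileges to install applications. If the dashboard token keeps failing, a common workaround is a dedicated service account bound to cluster-admin; a sketch, where the gitlab-admin name and the kube-system namespace are my own choices rather than anything GitLab requires:
$ kubectl create serviceaccount gitlab-admin -n kube-system
$ kubectl create clusterrolebinding gitlab-admin --clusterrole=cluster-admin --serviceaccount=kube-system:gitlab-admin
$ kubectl -n kube-system get secret $(kubectl -n kube-system get serviceaccount gitlab-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d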

Related

Microk8s : Generating Auth Certificates

I'm trying to generate another kubeconfig for a microk8s cluster. For this I chose the certificates approach, and I'm using the following script to generate the certificates, create the certificate signing request, and populate the kubeconfig file.
rm -rf ./certs_dir || true
mkdir ./certs_dir
sleep 5
openssl genrsa -out ./certs_dir/$USER_NAME.key 2048
openssl req -new -key ./certs_dir/$USER_NAME.key -out ./certs_dir/$USER_NAME.csr -subj "/CN=$USER_NAME"
CERT_S_REQ="
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: user-$USER_NAME-csr
spec:
  groups:
  - system:authenticated
  request: $(cat $USER_NAME.csr | base64)
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 864000000
  usages:
  - digital signature
  - key encipherment
  - client auth
"
export KUBECONFIG=../output/$NAME-kubeconfig.yaml
echo -e "$CERT_S_REQ" > ./certs_dir/user_csr.yaml
kubectl apply -f ./certs_dir/user_csr.yaml
kubectl get csr
kubectl certificate approve user-$USER_NAME-csr
sleep 10
kubectl get csr user-$USER_NAME-csr -o jsonpath='{.status.certificate}' | base64 -D > ./certs_dir/$USER_NAME.crt
kubectl create rolebinding user-$USER_NAME --clusterrole=cluster-admin --user=$USER_NAME
APISERVER=$(kubectl config view --raw -o 'jsonpath={..cluster.server}')
unset KUBECONFIG
kubectl config set-credentials "$USER_NAME" \
  --client-certificate="./certs_dir/$USER_NAME.crt" \
  --client-key="./certs_dir/$USER_NAME.key" \
  --kubeconfig=../output/$USER_NAME.yaml \
  --embed-certs=true
kubectl config set-cluster $CLUSTER_NAME --server=$APISERVER --kubeconfig=../output/$USER_NAME.yaml
kubectl config set-context default --user=$USER_NAME --cluster=$CLUSTER_NAME --kubeconfig=../output/$USER_NAME.yaml
kubectl config use-context default --kubeconfig=../output/$USER_NAME.yaml
Everything seems to work, but when I try to use the new kubeconfig file with the embedded certs, every kubectl command fails with the following error:
error: tls: private key does not match public key
Did I miss something?
I'm on macOS, running the microk8s cluster via multipass.
The microk8s cluster has the following addons enabled: ingress, storage, dns, rbac; I also installed the dashboard from https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
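One way to narrow down that error is to compare the public key embedded in the signed certificate with the one derived from the private key; if the two hashes differ, the approved CSR was not generated from that key (for example because the request: field picked up a wrong or stale .csr file). A diagnostic sketch, assuming the files produced by the script above:
# hash of the public key inside the signed certificate
openssl x509 -in ./certs_dir/$USER_NAME.crt -noout -pubkey | openssl md5
# hash of the public key derived from the private key; the two should match
openssl rsa -in ./certs_dir/$USER_NAME.key -pubout | openssl md5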

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find out how to do this but so far have found nothing; I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server. Is this possible? And if so, can someone perhaps give me a link?
OK, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what docs say about CN and FQDN: k8s docs:
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a serviceaccount token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow these instructions: access-cluster-api from docs to try to access the cluster with sa token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
Note: I am testing it with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain you should be able to access it directly (no $(minikube ip) necessary).
As you should see, it worked! We got a valid response from api server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
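If kubectl complains about the certificate at this point, note that the kubeconfig above carries no CA data for https://foo.bar.com; you can embed the ingress certificate (the tls.crt used for the my-tls secret) or, for throwaway testing only, skip verification. A hedged sketch:
kubectl config set-cluster mini --kubeconfig my-config --certificate-authority=tls.crt --embed-certs=true
# or, for testing only:
kubectl config set-cluster mini --kubeconfig my-config --insecure-skip-tls-verify=true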
Yes, you can use your own certificate and set it in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific node directory:
{
  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Then you can create the systemd service for the API server and point it at those certificates.
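A trimmed sketch of what that service unit looks like in kubernetes-the-hard-way, kept only to show where the custom certificates are plugged in (the real unit file carries many more flags):
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF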
Note: the example above specifically assumes GCP instances, so you might have to change some commands, such as:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IP manually instead of getting it from the GCP instance metadata API if you are not on GCP.
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
There you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, along with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way

Parameters to access multiple clusters

I want to access an external k8s cluster which is running in a private cloud. Any idea how I can get these parameters? What should I do in order to generate them?
${CLIENT_CERTIFICATE_DATA}
fake-cert-file
fake-key-file
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_ENDPOINT}
  name: ${CLUSTER_NAME}
users:
- name: ${USER}
  user:
    client-certificate-data: ${CLIENT_CERTIFICATE_DATA}
contexts:
- context:
    cluster: ${CLUSTER_NAME}
    user:
      client-certificate: fake-cert-file
      client-key: fake-key-file
  name: ${USER}-${CLUSTER_NAME}
current-context: ${USER}-${CLUSTER_NAME}
The steps to allow access for a "bob" user are the following:
Create a new CSR via openssl
openssl req -new -newkey rsa:4096 -nodes -keyout bob-k8s.key -out bob-k8s.csr -subj "/CN=bob/O=devops"
Create Kubernetes CertificateSigningRequest object
use
kubectl create --edit -f k8s-csr.yaml
and you should input the following
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: bob-k8s-access
spec:
  groups:
  - system:authenticated
  request: # replace with output from shell command: cat bob-k8s.csr | base64 | tr -d '\n'
  usages:
  - client auth
Verify your CSR object
kubectl get csr
Approve your certificate
kubectl certificate approve bob-k8s-access
Retrieve Bob's signed certificate
kubectl get csr bob-k8s-access -o jsonpath='{.status.certificate}' | base64 --decode > bob-k8s-access.crt
Retrieve the cluster CA certificate
kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > k8s-ca.crt
Setup Bob's kubeconfig file
$ kubectl config set-cluster $(kubectl config view -o jsonpath='{.clusters[0].name}') --server=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') --certificate-authority=k8s-ca.crt --kubeconfig=bob-k8s-config --embed-certs
After this command, a bob-k8s-config file should be created with Bob's .kube configuration.
Setup Bob's credential accesses
kubectl config set-credentials bob --client-certificate=bob-k8s-access.crt --client-key=bob-k8s.key --embed-certs --kubeconfig=bob-k8s-config
Create a context in your config
kubectl config set-context bob --cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}') --namespace=<ns-for-bob> --user=bob --kubeconfig=bob-k8s-config
Assign roles within the namespace
kubectl create rolebinding bob-admin --namespace=<ns-for-bob> --clusterrole=admin --user=bob
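To confirm the binding grants what you expect, you can dry-run Bob's permissions as the cluster admin, or exercise the new kubeconfig directly (same namespace placeholder as above):
kubectl auth can-i list pods --namespace=<ns-for-bob> --as=bob
kubectl get pods --namespace=<ns-for-bob> --kubeconfig=bob-k8s-config --context=bob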
For more information about permissions, please look at the Kubernetes configuration page.
I've written these instructions starting from this guide, which is more exhaustive!

How do I get the K8s `ca.crt` and `ca.key` running K8s on a service provider (GKE)

Note: I am not running locally on Minikube or similar, but on GKE (though it could be any provider).
I want to be able to create users/contexts in K8s with openssl:
openssl x509 -req -in juan.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out juan.crt -days 500
How do I get the K8s ca.crt and ca.key? I found this for ca.crt, but is this the right way, and I'm still missing the ca.key:
kubectl get secret -o jsonpath="{.items[?(@.type==\"kubernetes.io/service-account-token\")].data['ca\.crt']}" | base64 --decode
And is there another way than logging into the master node and reading /etc/kubernetes/pki/?
I would suggest viewing the following documentation on how to generate a ca.key and ca.crt for your Kubernetes cluster. Please keep in mind this is not an official Google document; however, it may help you achieve what you are looking for.
Here are the commands found in the document.
Generate ca.key: openssl genrsa -out ca.key 2048
Generate ca.crt: openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
EDIT
I found two unsupported documents [1] [2] on generating a certificate and key with openssl; they should be applicable to Kubernetes.
You don't. Google does not expose the keys, as per the documentation.
Specifically and I quote:
An internal Google service manages root keys for this CA, which are non-exportable. This service accepts certificate signing requests, including those from the kubelets in each GKE cluster. Even if the API server in a cluster were compromised, the CA would not be compromised, so no other clusters would be affected.
Using Kubernetes v1.19 and higher you can sign your CSR using the Kubernetes API itself as referenced here.
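This assumes a CSR already exists; if not, a minimal way to produce the key pair and request for the mia user (the /O=developers group is only an example) is:
$ openssl genrsa -out mia.key 2048
$ openssl req -new -key mia.key -out mia.csr -subj "/CN=mia/O=developers"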
Encode your CSR: $ ENCODED=$(cat mia.csr | base64 | tr -d "\n")
Then post it to k8s:
$ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: mia
spec:
  request: $ENCODED
  signerName: kubernetes.io/kube-apiserver-client
  # expirationSeconds: 86400 # Only supported on >=1.22
  usages:
  - client auth
EOF
And then approve the CSR:
$ kubectl certificate approve mia
certificatesigningrequest.certificates.k8s.io/mia approved
Then download the signed certificate:
kubectl get csr mia -o jsonpath='{.status.certificate}'| base64 -d > mia.crt
I wrote an example of the end-to-end flow here.
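For completeness, a sketch of wiring the signed certificate into a kubeconfig; mia.key is assumed to be the private key the CSR was generated from, and the cluster entry is taken from the current config:
$ kubectl config set-credentials mia --client-certificate=mia.crt --client-key=mia.key --embed-certs=true
$ kubectl config set-context mia --cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}') --user=mia
$ kubectl config use-context mia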

How to replace the "Kubernetes fake certificate" with a wildcard certificate (on a bare-metal private cloud) using Nginx Ingress and cert-manager

We have set up a Kubernetes cluster on our bare-metal server.
We deploy our application where each namespace is an application for an end customer, i.e. customer1.mydomain.com -> namespace: cust1.
We keep on getting the Kubernetes Ingress Controller Fake Certificate.
We have purchased our own wildcard certificates *.mydomain.com
#kubectl create secret tls OUR-SECRET --key /path/private.key --cert /path/chain.crt -n ingress-nginx
#kubectl create secret tls OUR-SECRET --key /path/private.key --cert /path/chain.crt -n kube-system
ingress.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ourcloud
  namespace: cert-manager
spec:
  secretName: oursecret
  issuerRef:
    name: letsencrypt-prod
  commonName: '*.mydomain.com'
  acme:
    config:
    - dns01:
        provider: cf-dns-prod
      domains:
      - '*.mydomain.com'
kubectl apply -f ingress.yaml
certificate.certmanager.k8s.io/ourcloud created
https://cust1.mydomain.com connects with Kubernetes Ingress Controller Fake Certificate
I found the problem: I had the wrong filename in my YAML for the certificate files. It's always good to look at the ingress logs:
kubectl logs nginx-ingress-controller-689498bc7c-tf5 -n ingress-nginx
kubectl get -o yaml ingress --all-namespaces
Try to recreate the secret from the files and see if it works.
kubectl delete -n cust4 SECRETNAME
kubectl -n cust4 create secret tls SECRETNAME --key key.key --cert cert.crt
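The fake certificate is served whenever the Ingress in a namespace has no tls section, or its secretName doesn't match an existing secret in that same namespace. A hedged example for one customer namespace; the host, service, and resource names here are made up and need to be adapted:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cust4-app
  namespace: cust4
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - customer4.mydomain.com
    secretName: SECRETNAME
  rules:
  - host: customer4.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cust4-app
            port:
              number: 80
EOF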
If you are using Helm and cert-manager, make sure each ingress resource has a different certificate name; these values are usually set from the values file in a Helm chart:
tls:
- secretName: <give certificate name>
  hosts:
  - example.com
Once your ingress resources are deployed, you can list the available certificates to avoid name collisions:
kubectl get certificates