HashiCorp Vault: Is it possible to make edits to a pre-existing server configuration file? - kubernetes

I have a Kubernetes cluster that uses Vault secrets. I am attempting to modify the conf.hcl that was used to set up Vault. I exec'd into the pod running Vault and appended:
max_lease_ttl = "999h"
default_lease_ttl = "999h"
I attempted to apply the changes using the only server option available according to the documentation, but it failed because the server is already running:
vault server -config conf.hcl
Error initializing listener of type tcp: listen tcp4 0.0.0.0:8200: bind: address already in use

You can't re-run the server inside the pod since the port is already bound in the container (Vault is already running there).
You need to restart the pod/deployment with the new config. It's not clear how your Vault deployment is set up, but the config could be baked into the container image, live in a mounted volume, or come from a ConfigMap.
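If the config is mounted from a ConfigMap, a minimal sketch of the update could look like the following; the namespace, ConfigMap name, and the assumption that Vault runs as a StatefulSet are guesses you would adapt to your deployment:
# Recreate the ConfigMap from the edited conf.hcl (names are assumptions)
kubectl -n vault create configmap vault-config --from-file=conf.hcl \
  --dry-run=client -o yaml | kubectl apply -f -
# Restart the workload so Vault starts with the new config
kubectl -n vault rollout restart statefulset/vault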

Related

How to configure the cluster to provide signing

I have deployed DigitalOcean Chatwoot. Now I need to add the certificate created in the cluster to the config so that it can be accessed using HTTPS.
I am following this documentation, to manage TLS Certificates in the Kubernetes Cluster.
After following the steps mentioned in the documentation, everything completed successfully.
But, as mentioned in the last point of the documentation, I need to add the certificate keys to the Kubernetes controller manager. To do that I used this command:
kube-controller-manager --cluster-signing-cert-file ca.pem --cluster-signing-key-file ca-key.pem
But I am getting the error below:
I0422 16:02:37.871541 1032467 serving.go:348] Generated self-signed cert in-memory
W0422 16:02:37.871592 1032467 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0422 16:02:37.871598 1032467 client_config.go:622] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Can anyone please tell me how I can configure the cluster so that it can be accessed securely over HTTPS?
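For reference, on a kubeadm-provisioned cluster the controller manager runs as a static pod, so these flags are normally set in its manifest rather than by invoking the binary by hand. The excerpt below is only a sketch using the kubeadm default CA paths, which may differ on your cluster:
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt, paths assumed)
spec:
  containers:
  - command:
    - kube-controller-manager
    # ... existing flags left unchanged ...
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
The kubelet restarts the static pod automatically when the manifest changes.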

Terraform dial tcp 192.xx.xx.xx:443: i/o timeout error

I am trying to implement CI/CD using GitLab + Terraform against a K8s cluster; the K8s control plane (master node) was set up on CentOS.
However, the pipeline job fails with the following error:
Error: Failed to get existing workspaces: Get "https://192.xx.xx.xx/api/v1/namespaces/default/secrets?labelSelector=tfstate%3Dtrue": dial tcp 192.xx.xx.xx:443: i/o timeout
From the error mentioned above (default/secrets?labelSelector=tfstate%3Dtrue), I assume the error is related to a missing Terraform state secret in the default namespace.
Example (Terraform state secret as seen from my Windows machine):
PS C:\> kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-7mzv6     kubernetes.io/service-account-token   3      27d
tfstate-default-state   Opaque                                1      15h
However, I am not sure which process creates that tfstate secret, or whether we should create it manually.
Kindly let me know if my understanding is wrong and whether I have missed anything else.
EDIT
The issue mentioned above occurred because the existing GitLab runner was on a different subnet (e.g. 172.xx.xx.xx instead of 192.xx.xx.xx).
I was asked to use a different GitLab runner that runs on the same subnet, and now it throws the following error:
Error: Failed to get existing workspaces: Get "https://192.xx.xx.xx:6443/api/v1/namespaces/default/secrets?labelSelector=tfstate%3Dtrue": x509: certificate signed by unknown authority
Now I am a bit confused whether the certificate issue is between the GitLab runner and the GitLab server, between the GitLab server and the K8s cluster, or something else.
You have configured Kubernetes as the remote state backend for your Terraform configuration. The error is that the backend is trying to query existing secrets to determine what workspaces are configured. The x509: certificate signed by unknown authority indicates that the KUBECONFIG the remote state backend uses does not match the CA of the API server you're connecting to.
If the runners are K8s pods themselves, make sure you provide a KUBECONFIG that matches your target cluster, and that the remote state does not configure itself as in-cluster by reading the service account token every K8s pod has, which in most cases will only work for the cluster the pod is running on.
You don't provide enough information to be more specific, but, big picture, you have to configure both the state backend and any providers that connect to K8s. In theory, the state backend secrets and the K8s resources do not have to be on the same cluster, meaning you may need different configurations for the state backend and the K8s providers.
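For illustration, a Kubernetes remote state backend block looks roughly like the sketch below. With secret_suffix = "state" the default workspace is stored in a secret named tfstate-default-state, which matches the output above; the namespace and config_path values are assumptions to adapt to the runner's environment:
# Terraform Kubernetes state backend (sketch)
terraform {
  backend "kubernetes" {
    secret_suffix = "state"       # secrets are named tfstate-<workspace>-<suffix>
    namespace     = "default"
    # Must point at a kubeconfig whose CA matches the target API server,
    # otherwise you get the x509 "unknown authority" error shown above.
    config_path   = "~/.kube/config"
  }
}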

Set authentication-token-webhook-config-file

My goal is to set up the AppsCode Guard application.
In order to do so I need to set the value of the authentication-token-webhook-config-file flag on the Kubernetes API server.
How do I do that?
If you are looking for a way to add an option to the kube-apiserver pod on an existing cluster, you just need to edit the file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node.
After saving this file, the kube-apiserver pod will be restarted by the kubelet automatically.
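For example, the flag is appended to the container's command list in that manifest; the webhook config path below is an assumption:
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags ...
    - --authentication-token-webhook-config-file=/etc/kubernetes/webhook-auth.yaml
    # note: the file must be readable from inside the apiserver pod,
    # e.g. placed under an already-mounted hostPath or added as a new one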
Considering that the flag you've mentioned takes the name of a configuration file as its parameter, ensure the file exists on the master node's file system.
--authentication-token-webhook-config-file string
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
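That configuration file uses kubeconfig syntax, roughly along these lines; the webhook service URL and certificate paths are placeholders:
# Webhook token-authentication config in kubeconfig format (sketch)
apiVersion: v1
kind: Config
clusters:
- name: guard-server                     # the remote webhook service
  cluster:
    certificate-authority: /etc/kubernetes/webhook-ca.crt
    server: https://guard.example.svc/tokenreviews
users:
- name: kube-apiserver                   # credentials the apiserver presents
  user:
    client-certificate: /etc/kubernetes/webhook-client.crt
    client-key: /etc/kubernetes/webhook-client.key
contexts:
- name: webhook
  context:
    cluster: guard-server
    user: kube-apiserver
current-context: webhook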
The directory for the manifests is defined by the kubelet option --pod-manifest-path and can be found using the command:
$ ps aux | grep kubelet
You can find more information about the life cycle of such static pods in the Kubernetes documentation.

Get kubeconfig by SSHing into the cluster

If I am able to SSH into the master or any nodes in the cluster, is it possible for me to get 1) the kubeconfig file or 2) all information necessary to compose my own kubeconfig file?
You can find the configuration on the master node under /etc/kubernetes/admin.conf (on v1.8+).
On some versions of Kubernetes, this can also be found under ~/.kube.
I'd be interested in hearing the answer to this as well. But I think it depends on how the authentication is set up. For example,
Minikube uses "client certificate" authentication. If it stores the client.key on the cluster as well, you might construct a kubeconfig file by combining it with the cluster’s CA public key.
GKE (Google Kubernetes Engine) uses authentication on a frontend that's separate from the Kubernetes cluster (masters are hosted separately). You can't ssh into the master, but if it was possible, you still might not be able to construct a token that works against the API server.
However, by default Pods have a service account token that can be used to authenticate to the Kubernetes API. So if you SSH into a node and run docker exec into a container managed by Kubernetes, you will see this:
/ # ls run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
You can combine ca.crt and token to construct a kubeconfig file that will authenticate to the Kubernetes master.
So the answer to your question is yes, if you SSH into a node, you can then jump into a Pod and collect information to compose your own kubeconfig file. (See this question on how to disable this. I think there are solutions to disable it by default as well by forcing RBAC and disabling ABAC, but I might be wrong.)
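A rough sketch of stitching those two files into a kubeconfig with kubectl (the API server address and output file name are assumptions):
# Run inside the container, or after copying ca.crt and token out of it
SA=/run/secrets/kubernetes.io/serviceaccount
# --server is the in-cluster address here; use the master's external address if running outside
kubectl config set-cluster local --server=https://kubernetes.default.svc \
  --certificate-authority=$SA/ca.crt --embed-certs=true --kubeconfig=my.kubeconfig
kubectl config set-credentials sa-user --token="$(cat $SA/token)" --kubeconfig=my.kubeconfig
kubectl config set-context default --cluster=local --user=sa-user --kubeconfig=my.kubeconfig
kubectl config use-context default --kubeconfig=my.kubeconfig
What that service account is actually allowed to do then depends on the RBAC rules bound to it.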

How does Kubectl connect to the master

I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
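A stripped-down version of that file looks roughly like this; the names, server address, and the use of client-certificate auth are assumptions that depend on how the cluster was brought up:
# ~/.kube/config (sketch)
apiVersion: v1
kind: Config
clusters:
- name: vagrant-cluster
  cluster:
    server: https://<master-ip>:443
    certificate-authority-data: <base64-encoded CA cert>
users:
- name: vagrant-admin
  user:
    client-certificate-data: <base64-encoded client cert>
    client-key-data: <base64-encoded client key>
contexts:
- name: vagrant
  context:
    cluster: vagrant-cluster
    user: vagrant-admin
current-context: vagrant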
In addition to what Robert said: the connection between your local CLI and the cluster is controlled through kubectl config set, see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.