How to configure the cluster to provide signing - kubernetes

I have deployed Chatwoot on DigitalOcean. Now I need to add the certificate created in the cluster config so that the app can be accessed over HTTPS.
I am following this documentation to manage TLS certificates in the Kubernetes cluster.
After following the steps mentioned in the documentation, I got these.
But, as mentioned in the last point of the documentation, I need to add the certificate keys to the Kubernetes controller manager. To do that, I used this command:
kube-controller-manager --cluster-signing-cert-file ca.pem --cluster-signing-key-file ca-key.pem
But I am getting the error below:
I0422 16:02:37.871541 1032467 serving.go:348] Generated self-signed cert in-memory
W0422 16:02:37.871592 1032467 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0422 16:02:37.871598 1032467 client_config.go:622] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Can anyone please tell me how I can configure the cluster to provide signing so that it can be accessed over HTTPS?
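For reference, the error itself is about kube-controller-manager not knowing how to reach the API server: it needs --kubeconfig (or --master) in addition to the signing flags. A minimal invocation might look like this (a sketch; the kubeconfig path is an assumption, and on a managed DigitalOcean cluster the control plane, including the controller manager, is typically not something you can run or modify yourself):
kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --cluster-signing-cert-file=ca.pem \
  --cluster-signing-key-file=ca-key.pem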

Related

Couchbase Operator tool for Kubernetes (cbopctl) does not support "oidc" authentication

I have a Kubernetes cluster running on IBM Cloud and I'm trying to deploy the Couchbase operator.
When running the command:
cbopctl apply --kubeconfig /home/jenkins/.bluemix/cluster.yml -f couchbase-autonomous-operator-kubernetes_1.0.0-linux_x86_64/couchbase-cluster.yaml
I get the following error.
panic: No Auth Provider found for name "oidc"
goroutine 1 [running]:
github.com/couchbase/couchbase-operator/pkg/client.MustNew(0xc4201e2e00, 0xc4201e2e00, 0x0)
/var/tmp/foo/goproj/src/github.com/couchbase/couchbase-operator/pkg/client/client.go:21 +0x71
main.(*ApplyContext).Run(0xc4207e8570)
How do I authenticate this service?
Looks like you have your ~/.kube/config file configured to use OpenID Connect with the oidc authenticator. The ~/.kube/config file is what the client-go library uses to authenticate, and cbopctl uses the client-go library.
This explains how to set it up in Kubernetes. If you are using an IBM Cloud managed Kubernetes cluster, it's probably already configured on the kube-apiserver, and you would have to follow this.
To manually configure kubectl, you would have to do something like this.
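Roughly, that manual kubectl setup looks like this (a sketch; the issuer URL, client ID/secret, and tokens are placeholders you get from your identity provider, and newer kubectl versions use exec credential plugins instead of the built-in oidc auth provider):
kubectl config set-credentials my-oidc-user \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://issuer.example.com \
  --auth-provider-arg=client-id=my-client-id \
  --auth-provider-arg=client-secret=my-client-secret \
  --auth-provider-arg=refresh-token=my-refresh-token \
  --auth-provider-arg=id-token=my-id-token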
The other answers are correct. To provide the IBM Cloud-specific steps, you can download your config file by using ibmcloud ks cluster-config <cluster-name>. That will give you the KUBECONFIG variable to export by copying and pasting. It will also give you the path that you can use to target the config in your couchbase command.
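Roughly, that flow looks like this (a sketch; the printed path differs per account and cluster):
ibmcloud ks cluster-config <cluster-name>
# copy and paste the export line the command prints, which looks something like:
export KUBECONFIG=~/.bluemix/plugins/container-service/clusters/<cluster-name>/kube-config-<zone>-<cluster-name>.yml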

Hashicorp Vault: Is it possible to make edits to pre-existing server configuration file?

I have a Kubernetes cluster which uses Vault secrets. I am attempting to modify the conf.hcl that was used to set up Vault. I went into the pod that runs Vault and appended:
max_lease_ttl = "999h"
default_lease_ttl = "999h"
I attempted to apply the changes using the only server option available according to the documentation, but it failed because the server is already running:
vault server -config conf.hcl
Error initializing listener of type tcp: listen tcp4 0.0.0.0:8200: bind: address already in use
You can't re-run the server in the pod since the port is already bound in the container (Vault is already running there).
You need to restart the pod/deployment with the new config. I'm not sure how your Vault deployment is configured, but the config could be baked into the container itself, kept in a mounted volume, or perhaps stored in a ConfigMap.
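For example, if the config happens to come from a ConfigMap, applying the change would look roughly like this (vault-config and vault are hypothetical names):
# edit the HCL stored in the (hypothetical) ConfigMap
kubectl edit configmap vault-config
# restart the workload so Vault starts with the updated config
kubectl rollout restart deployment vault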

How to change Kubernetes configuration key file from controller-manager and apiserver to point to the same x509 RSA key?

I am hitting the Tiller panic crash detailed in the Helm FAQ (https://docs.helm.sh/using_helm/) under the question
Q: Tiller crashes with a panic:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation...
The FAQ answer suggests
To fix this, you will need to change your Kubernetes configuration.
Make sure that --service-account-private-key-file from
controller-manager and --service-account-key-file from apiserver point
to the same x509 RSA key.
I've tried searching online and reading the docs at https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/, which state
You must pass a service account private key file to the token
controller in the controller-manager by using the
--service-account-private-key-file option. The private key will be used to sign generated service account tokens. Similarly, you must
pass the corresponding public key to the kube-apiserver using the
--service-account-key-file option. The public key will be used to verify the tokens during authentication.
and the docs at https://kubernetes.io/docs/reference/access-authn-authz/authentication/
The docs explain the concepts well, but give no specifics on actually changing the config.
How do I change my Kubernetes configuration as the FAQ answer suggests?
Make sure that --service-account-private-key-file from
controller-manager and --service-account-key-file from apiserver point
to the same x509 RSA key.
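Concretely, "point to the same x509 RSA key" means both flags reference the two halves of one RSA key pair, something like this (paths are assumptions; on a kops cluster the key lives wherever kops provisioned it):
# generate the pair once (or reuse the key the cluster already has)
openssl genrsa -out /srv/kubernetes/service-account.key 4096
openssl rsa -in /srv/kubernetes/service-account.key -pubout -out /srv/kubernetes/service-account.pub
# kube-controller-manager flag:
--service-account-private-key-file=/srv/kubernetes/service-account.key
# kube-apiserver flag (it accepts the public key, or the same private key file):
--service-account-key-file=/srv/kubernetes/service-account.pub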
Details:
using kops and a gossip-based k8s cluster
I have found through trial and error that helm ls seems to have a bug for my setup, or maybe helm ls is expected to show the error if there are no releases to be shown.
Right after helm init, helm ls would show the error panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation...,
but if you just went ahead and used helm,
e.g. helm install stable/<package>, the chart is deployed successfully.
And after the charts are deployed, calling helm ls no longer returns the error, and correctly shows the deployed charts (releases).
I have confirmed with kubectl get pods, kubectl get services, and kubectl get deployments that the released objects are all running.

Get kubeconfig by ssh into cluster

If I am able to SSH into the master or any nodes in the cluster, is it possible for me to get 1) the kubeconfig file or 2) all information necessary to compose my own kubeconfig file?
You can find the configuration on the master node under /etc/kubernetes/admin.conf (on v1.8+).
On some versions of Kubernetes, this can be found under ~/.kube.
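For example (a sketch; the host, user, and local path are placeholders):
# on a kubeadm-style master
sudo cat /etc/kubernetes/admin.conf
# or copy it to your workstation and use it as your kubeconfig
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config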
I'd be interested in hearing the answer to this as well. But I think it depends on how the authentication is set up. For example,
Minikube uses "client certificate" authentication. If it stores the client.key on the cluster as well, you might construct a kubeconfig file by combining it with the cluster’s CA public key.
GKE (Google Kubernetes Engine) uses authentication on a frontend that's separate from the Kubernetes cluster (masters are hosted separately). You can't SSH into the master, but even if it were possible, you still might not be able to construct a token that works against the API server.
However, by default Pods have a service account token that can be used to authenticate to the Kubernetes API. So if you SSH into a node and run docker exec into a container managed by Kubernetes, you will see this:
/ # ls run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
You can combine ca.crt and token to construct a kubeconfig file that will authenticate to the Kubernetes master.
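For example, after copying those two files off the Pod, something like this would assemble a kubeconfig (a sketch; the API server address and the cluster/user/context names are placeholders):
kubectl config set-cluster mycluster --server=https://<apiserver-host>:6443 --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials sa-user --token="$(cat token)"
kubectl config set-context mycontext --cluster=mycluster --user=sa-user
kubectl config use-context mycontext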
So the answer to your question is yes, if you SSH into a node, you can then jump into a Pod and collect information to compose your own kubeconfig file. (See this question on how to disable this. I think there are solutions to disable it by default as well by forcing RBAC and disabling ABAC, but I might be wrong.)
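One common way to turn off that default token (so a Pod can't be used this way) is the automountServiceAccountToken field, for example (a minimal sketch):
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false  # don't mount the service account token into this Pod
  containers:
  - name: app
    image: nginx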

How does Kubectl connect to the master

I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
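For reference, that file is a small YAML document along these lines (all values below are placeholders):
apiVersion: v1
kind: Config
clusters:
- name: vagrant-cluster
  cluster:
    server: https://10.245.1.2:443           # apiserver address the cluster scripts set up
    certificate-authority: /path/to/ca.crt   # CA used to verify the apiserver's certificate
users:
- name: vagrant-admin
  user:
    client-certificate: /path/to/admin.crt   # client credentials kubectl presents
    client-key: /path/to/admin.key
contexts:
- name: vagrant
  context:
    cluster: vagrant-cluster
    user: vagrant-admin
current-context: vagrant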
In addition to what Robert said: the connection between your local CLI and the cluster is configured through kubectl config set; see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.