Set authentication-token-webhook-config-file - kubernetes

My goal is to set up the AppsCode Guard application.
In order to do so, I need to set the value of the authentication-token-webhook-config-file flag on the Kubernetes API server.
How do I do that?

If you are looking for a way to add an option to the kube-apiserver pod on an existing cluster, you just need to edit the file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node.
After saving this file, the kube-apiserver pod will be restarted by the kubelet service automatically.
Since the flag you've mentioned takes the name of a configuration file as its parameter, make sure that file exists on the master node's file system.
--authentication-token-webhook-config-file string
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
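For illustration, here is a minimal sketch of how the flag ends up in the manifest, assuming you keep the webhook kubeconfig at /etc/kubernetes/webhook-auth.kubeconfig on the master (the file name and the extra mount are placeholders; keep all of your existing flags and volumes intact):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt, other settings omitted)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... existing flags stay as they are ...
    - --authentication-token-webhook-config-file=/etc/kubernetes/webhook-auth.kubeconfig
    volumeMounts:
    - name: webhook-auth
      mountPath: /etc/kubernetes/webhook-auth.kubeconfig
      readOnly: true
  volumes:
  - name: webhook-auth
    hostPath:
      path: /etc/kubernetes/webhook-auth.kubeconfig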
The directory for the manifests is defined by the kubelet option --pod-manifest-path and can be found using the command:
$ ps aux | grep kubelet
You can find more information about the life cycle of such pods in the Kubernetes documentation.
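For reference, the file the flag points to is an ordinary kubeconfig describing the webhook endpoint. A sketch, with placeholder names, server address, and certificate paths (for Guard the server would be the address of the guard service):
# /etc/kubernetes/webhook-auth.kubeconfig (illustrative values)
apiVersion: v1
kind: Config
clusters:
- name: guard-server                  # the remote authentication webhook
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://10.96.10.96:443/tokenreviews
users:
- name: kube-apiserver                # how the API server authenticates to the webhook
  user:
    client-certificate: /etc/kubernetes/pki/apiserver.crt
    client-key: /etc/kubernetes/pki/apiserver.key
contexts:
- name: webhook
  context:
    cluster: guard-server
    user: kube-apiserver
current-context: webhook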

Related

How to manage kubectl from another user

Hi, I have a server configured with Kubernetes (without using Minikube), and I can execute kubectl commands without problems, like kubectl get all, kubectl delete pod, kubectl apply ...
I would like to know how to allow another user on my server to execute kubectl commands, because if I switch to another user and try to execute kubectl get all -s localhost:8443 I get:
Error from server (BadRequest): the server rejected our request for an unknown reason
I have read the Kubernetes Authorization documentation, but I'm not sure if it is what I'm looking for.
This is happening because there is no kubeconfig file for that user. You need to provide the same kubeconfig file for the other user, either in the default location $HOME/.kube/config or in a location pointed to by the KUBECONFIG environment variable.
You can copy the existing kubeconfig file from the working user to one of those locations for the non-working user.
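A minimal sketch of that on the server, assuming the working user is named admin and the other user is named bob (adjust usernames and paths to your setup):
# run as root (or with sudo); usernames and paths are placeholders
mkdir -p /home/bob/.kube
cp /home/admin/.kube/config /home/bob/.kube/config
chown -R bob:bob /home/bob/.kube
# alternatively, point the other user's shell at a copy of the file:
# export KUBECONFIG=/path/to/copied/kubeconfig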

Kubernetes - Where does it store secrets and how does it use those secrets on multiple nodes?

Not really a programming question, but I'm quite curious to know how Kubernetes or Minikube manages secrets and uses them on multiple nodes/pods.
Let's say I create a secret to pull an image with kubectl as below:
$ kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=$USERNM --docker-password=$PASSWD --docker-email=vivekyad4v#gmail.com
What processes will occur in the backend, and how will k8s or Minikube use those secrets on multiple nodes/pods?
All data in Kubernetes is managed by the API Server component, which performs CRUD operations on the data store (currently the only option is etcd).
When you submit a secret with kubectl to the API Server, it stores the resource and data in etcd. It is recommended to enable encryption for secrets in the API Server (by setting the right flags) so that the data is encrypted at rest; otherwise anyone with access to etcd will be able to read your secrets in plain text.
When the secret is needed for either mounting in a Pod or in your example for pulling a Docker image from a private registry, it is requested from the API Server by the node-local kubelet and kept in tmpfs so it never touches any hard disk unencrypted.
Here another security recommendation comes into play, which is called Node Authorization (again set up by setting the right flags and distributing certificates to API Server and Kubelets). With Node Authorization enabled you can make sure that a kubelet can only request resources (incl. secrets) that are meant to be run on that specific node, so a hacked node just exposes the resources on that single node and not everything.
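As a rough sketch of the encryption-at-rest part (the key below is a placeholder you generate yourself; on older Kubernetes versions the flag is --experimental-encryption-provider-config and the apiVersion/kind differ slightly):
# file passed to the API server via --encryption-provider-config
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # e.g. head -c 32 /dev/urandom | base64
  - identity: {}   # allows reading data written before encryption was enabled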
Secrets are stored in the only datastore a Kubernetes cluster has: etcd.
Like all other resources, they're retrieved when needed by the kubelet executable (which runs on every node) by querying the Kubernetes API server.
If you are wondering how to actually access the secrets (the stored files):
kubectl -n kube-system exec -it <etcd-pod-name> -- ls -l /etc/kubernetes/pki/etcd
You will get a list of all keys (the system default keys). You can simply view them using the cat command (if they are encrypted, you won't see much).
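If the goal is just to inspect a secret's contents rather than dig through etcd, it is usually easier to read it back through the API and decode it; for example, for the regsecret created above (note the secret data is base64-encoded in the API object, not encrypted):
kubectl get secret regsecret -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode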

Get kubeconfig by ssh into cluster

If I am able to SSH into the master or any node in the cluster, is it possible for me to get 1) the kubeconfig file, or 2) all information necessary to compose my own kubeconfig file?
You can find the configuration on the master node under /etc/kubernetes/admin.conf (on v1.8+).
On some versions of Kubernetes, it can be found under ~/.kube.
I'd be interested in hearing the answer to this as well. But I think it depends on how the authentication is set up. For example,
Minikube uses "client certificate" authentication. If it stores the client.key on the cluster as well, you might construct a kubeconfig file by combining it with the cluster’s CA public key.
GKE (Google Kubernetes Engine) uses authentication on a frontend that's separate from the Kubernetes cluster (masters are hosted separately). You can't ssh into the master, but if it was possible, you still might not be able to construct a token that works against the API server.
However, by default Pods have a service account token that can be used to authenticate to the Kubernetes API. So if you SSH into a node and run docker exec into a container managed by Kubernetes, you will see this:
/ # ls run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
You can combine ca.crt and token to construct a kubeconfig file that will authenticate to the Kubernetes master.
So the answer to your question is yes, if you SSH into a node, you can then jump into a Pod and collect information to compose your own kubeconfig file. (See this question on how to disable this. I think there are solutions to disable it by default as well by forcing RBAC and disabling ABAC, but I might be wrong.)
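A rough sketch of turning ca.crt and the token into a kubeconfig from inside such a Pod, assuming kubectl is available there (the cluster, user, and context names are placeholders; KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT are injected into every Pod):
# run inside the container; the service account volume provides ca.crt and token
SA=/run/secrets/kubernetes.io/serviceaccount
SERVER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
kubectl config set-cluster mycluster --server=$SERVER --certificate-authority=$SA/ca.crt --embed-certs=true
kubectl config set-credentials sa-user --token=$(cat $SA/token)
kubectl config set-context mycontext --cluster=mycluster --user=sa-user
kubectl config use-context mycontext
kubectl get pods   # now authenticated with the service account's permissions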

k8s API server is down due to misconfiguration, how to bring it up again?

I was trying to add a command line flag to the API server. In my setup, it was running as a daemon set inside the k8s cluster, so I got the daemon set manifest using kubectl, updated it, and executed kubectl apply -f apiserver.yaml (I know, this was not a good idea).
Of course, the new yaml file I wrote had an error, so the API server is not starting anymore and I can't use kubectl to update it. I have an SSH connection to the node where it was running, and I can see how the kubelet is trying to run the apiserver pod every few seconds with the ill-formed command. I am trying to configure the kubelet service to use the correct api-server command but have not been able to do so.
Any ideas?
The API server definition usually lives in /etc/kubernetes/manifests. Edit the configuration there rather than at the API level.
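A rough outline of the recovery over SSH, assuming a kubeadm-style master where the static pod manifest is named kube-apiserver.yaml (your file name and container runtime may differ):
# on the master node
cd /etc/kubernetes/manifests
sudo vi kube-apiserver.yaml        # fix or remove the bad flag here
# the kubelet watches this directory and recreates the pod on its own;
# check that the container comes back up (use crictl ps on newer runtimes)
sudo docker ps | grep kube-apiserver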

How does Kubectl connect to the master

I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
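You can inspect what that generated file contains (with the certificate and key data redacted) with:
# show the clusters, users, and contexts kubectl will use
kubectl config view
# show which context is currently active
kubectl config current-context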
In addition to what Robert said: the connection between your local CLI and the cluster is configured through kubectl config set; see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.