Get kubeconfig by ssh into cluster - kubernetes

If I am able to SSH into the master or any nodes in the cluster, is it possible for me to get 1) the kubeconfig file or 2) all information necessary to compose my own kubeconfig file?

You can find the configuration on the master node under /etc/kubernetes/admin.conf (on v1.8+).
On some versions of Kubernetes, it can also be found under ~/.kube.
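For example, a quick sketch of using it from your workstation (assuming SSH access to the master and the default kubeadm path; user@master is a placeholder):
scp user@master:/etc/kubernetes/admin.conf ./admin.conf
# point kubectl at the copied file, or merge it into ~/.kube/config
kubectl --kubeconfig=./admin.conf get nodes
Note that the server: address inside admin.conf may be a private IP, so you might need to edit it if the API server is not reachable at that address from your machine.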

I'd be interested in hearing the answer to this as well. But I think it depends on how the authentication is set up. For example,
Minikube uses "client certificate" authentication. If it stores the client.key on the cluster as well, you might construct a kubeconfig file by combining it with the cluster’s CA public key.
GKE (Google Kubernetes Engine) uses authentication on a frontend that's separate from the Kubernetes cluster (masters are hosted separately). You can't ssh into the master, but if it was possible, you still might not be able to construct a token that works against the API server.
However, by default Pods have a service account token that can be used to authenticate to the Kubernetes API. So if you SSH into a node and run docker exec into a container managed by Kubernetes, you will see this:
/ # ls run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
You can combine ca.crt and token to construct a kubeconfig file that will authenticate to the Kubernetes master.
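As a rough sketch of that step (assuming kubectl is available in the container, or that you copy ca.crt and the token out first, and that the API server is reachable at https://<master-ip>:6443, which you'd replace with your cluster's address), you can assemble the kubeconfig with kubectl config commands:
# run inside the container; these are the default service account mount paths
SA=/run/secrets/kubernetes.io/serviceaccount
kubectl config set-cluster mycluster --server=https://<master-ip>:6443 \
  --certificate-authority=$SA/ca.crt --embed-certs=true --kubeconfig=./kubeconfig
kubectl config set-credentials sa-user --token=$(cat $SA/token) --kubeconfig=./kubeconfig
kubectl config set-context default --cluster=mycluster --user=sa-user \
  --namespace=$(cat $SA/namespace) --kubeconfig=./kubeconfig
kubectl config use-context default --kubeconfig=./kubeconfig
What that kubeconfig is actually allowed to do then depends on the RBAC rules bound to the service account.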
So the answer to your question is yes, if you SSH into a node, you can then jump into a Pod and collect information to compose your own kubeconfig file. (See this question on how to disable this. I think there are solutions to disable it by default as well by forcing RBAC and disabling ABAC, but I might be wrong.)

Related

Generating a kubeconfig file and authenticating for google cloud

I have a Kubernetes cluster. Inside my cluster is a Django application which needs to connect to my Kubernetes cluster on GKE. Upon my Django start up (inside my Dockerfile), I authenticate with Google Cloud by using:
gcloud auth activate-service-account $GKE_SERVICE_ACCOUNT_NAME --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud config set project $GKE_PROJECT_NAME
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_ZONE
I am not really sure if I need to do this every time my Django container starts, and I am not sure I understand how authentication to Google Cloud works. Could I perhaps just generate my kubeconfig file, store it somewhere safe and use it all the time instead of authenticating?
In other words, is a Kubeconfig file enough to connect to my GKE cluster?
If your service is running in a Pod inside the GKE cluster you want to connect to, use a Kubernetes service account to authenticate.
Create a Kubernetes service account and attach it to your Pod. If your Pod already has a Kubernetes service account, you may skip this step.
Use Kubernetes RBAC to grant the Kubernetes service account the correct permissions.
The following example grants edit permissions in the prod namespace:
kubectl create rolebinding yourserviceaccount \
--clusterrole=edit \
--serviceaccount=yournamespace:yourserviceaccount \
--namespace=prod
At runtime, when your service invokes kubectl, it automatically receives the credentials you configured.
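For completeness, a sketch of creating the service account and attaching it to a Pod (my-app-sa, my-app and the image are placeholder names, not from the question):
kubectl create serviceaccount my-app-sa --namespace prod
# attach it to the Pod via spec.serviceAccountName (then bind roles as shown above)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: prod
spec:
  serviceAccountName: my-app-sa
  containers:
  - name: app
    image: gcr.io/your-project/your-image:latest
EOF
The token for my-app-sa is mounted into the Pod automatically, which is what kubectl and the client libraries pick up at runtime.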
You can also store the credentials as a Secret and mount it on your Pod so that it can read them from there.
To use a Secret with your workloads, you can specify environment variables that reference the Secret's values, or mount a volume containing the Secret.
You can create a Secret using the command-line or a YAML file.
Here is an example using the command line:
kubectl create secret SECRET_TYPE SECRET_NAME DATA
SECRET_TYPE: the Secret type, which can be one of the following:
generic: Create a Secret from a local file, directory, or literal value.
docker-registry: Create a dockercfg Secret for use with a Docker registry. Used to authenticate against Docker registries.
tls: Create a TLS secret from the given public/private key pair. The public/private key pair must already exist. The public key certificate must be PEM-encoded and match the given private key.
For most Secrets, you use the generic type.
SECRET_NAME: the name of the Secret you are creating.
DATA: the data to add to the Secret, which can be one of the following:
A path to a directory containing one or more configuration files, indicated using the --from-file or --from-env-file flags.
Key-value pairs, each specified using --from-literal flags.
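For instance, a minimal sketch using the generic type with a literal key-value pair and mounting it into a Pod (the secret, key and Pod names are illustrative):
kubectl create secret generic my-credentials --from-literal=api-token=s3cr3t
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/creds/api-token && sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: my-credentials
EOF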
If you need more information about kubectl create, you can check the reference documentation.

Where to find the ca.key on kubernetes helm installation on EKS

Probably super simple, but my lack of experience with both Kubernetes and CockroachDB makes it a bit difficult.
I have set up an EKS cluster of 2 nodes where I am running 2 replicas of CockroachDB. I have followed the documentation to install CockroachDB with Helm. Now I am trying to access the database from my local machine by making an external connection.
When I run the cockroach-secure-client pod, bash into it and try to create a new user certificate, it asks me for my ca.key file. I have no clue where to find this. In '/cockroach-certs' you can find the ca.crt file, but I need the ca.key file.
Where am I supposed to find this?
You'll have to use the Kubernetes CA key. See here: https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#initialize-the-cluster
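In that setup the client certificates are signed through the Kubernetes certificates API rather than with a ca.key you hold yourself, so the usual workflow is to create and then approve a certificate signing request (a sketch; the CSR name default.client.root is the example used in the linked docs and may differ in your deployment):
kubectl get csr
# approve the pending CockroachDB client certificate request
kubectl certificate approve default.client.root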

Airflow KubePodOperator pull image from private repository

How can Apache Airflow's KubernetesPodOperator pull docker images from a private repository?
The KubernetesPodOperator has an image_pull_secrets parameter to which you can pass a Secrets object to authenticate with the private repository. But the Secrets object can only represent an environment variable or a volume, neither of which fits my understanding of how Kubernetes uses secrets to authenticate with private repos.
Using kubectl you can create the required secret with something like
$ kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
But how can you create the authentication secret in Airflow?
According to the Kubernetes documentation, there is a Secret type named docker-registry which can be used to authenticate to a private repository.
As you mentioned in your question, you can use kubectl to create a Secret of the docker-registry type and then try to pass it with image_pull_secrets.
However, depending on the platform you are using, this might have limited or no use at all, according to the Kubernetes documentation:
Configuring Nodes to Authenticate to a Private Registry
Note: If you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.
Note: If you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.
Note: This approach is suitable if you can control node configuration. It will not work reliably on GCE, and any other cloud provider that does automatic node replacement.
Note: Kubernetes as of now only supports the auths and HttpHeaders section of docker config. This means credential helpers (credHelpers or credsStore) are not supported.
Making this work on the platforms mentioned above is possible, but it would require automated scripts and third-party tools.
For example, with Amazon ECR, the Amazon ECR Docker Credential Helper would be needed to periodically refresh AWS credentials in the Docker registry configuration, plus another script to update the Kubernetes docker-registry Secrets.
As for Airflow itself, I don't think it has functionality to create its own docker-registry Secrets.
You can request functionality like that in Apache Airflow JIRA.
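As a workaround sketch (my assumption, not an Airflow feature): once the docker-registry Secret exists in the namespace where KubernetesPodOperator launches its pods, you can attach it to that namespace's default service account, so every Pod created there can pull from the private registry regardless of what the operator passes along (the airflow namespace below is an assumption):
# $SECRET_NAME is the secret created earlier with kubectl create secret docker-registry
kubectl patch serviceaccount default --namespace airflow \
  -p "{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"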
P.S.
If you still have issues with your K8s cluster, you might want to create a new question on Stack Overflow addressing them.

Kubernetes - Where does it store secrets and how does it use those secrets on multiple nodes?

Not really a programming question, but I am quite curious to know how Kubernetes or Minikube manages secrets and uses them on multiple nodes/pods.
Let's say I create a secret to pull an image with kubectl as below:
$ kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=$USERNM --docker-password=$PASSWD --docker-email=vivekyad4v#gmail.com
What processes will occur in the backend and how will k8s or Minikube use those on multiple nodes/pods?
All data in Kubernetes is managed by the API Server component, which performs CRUD operations on the data store (currently the only option is etcd).
When you submit a secret with kubectl to the API Server, it stores the resource and data in etcd. It is recommended to enable encryption for secrets in the API Server (by setting the right flags) so that the data is encrypted at rest; otherwise anyone with access to etcd will be able to read your secrets in plain text.
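A sketch of what enabling that looks like (the file path and key are placeholders; depending on your Kubernetes version the apiVersion may be an earlier alpha/beta variant):
cat <<EOF > /etc/kubernetes/enc/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
EOF
# then start kube-apiserver with:
#   --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml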
When the secret is needed for either mounting in a Pod or in your example for pulling a Docker image from a private registry, it is requested from the API Server by the node-local kubelet and kept in tmpfs so it never touches any hard disk unencrypted.
Here another security recommendation comes into play, which is called Node Authorization (again set up by setting the right flags and distributing certificates to API Server and Kubelets). With Node Authorization enabled you can make sure that a kubelet can only request resources (incl. secrets) that are meant to be run on that specific node, so a hacked node just exposes the resources on that single node and not everything.
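For reference, a sketch of the API server flags involved (kubelets additionally need client certificates in the system:nodes group):
kube-apiserver \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction
# (plus the rest of your usual apiserver flags)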
Secrets are stored in the only datastore a Kubernetes cluster has: etcd.
Like all other resources, they're retrieved when needed by the kubelet (which runs on every node) by querying the k8s API server.
If you are wondering how to actually access the secrets (the stored files):
kubectl -n kube-system exec -it <etcd-pod-name> -- ls -l /etc/kubernetes/pki/etcd
You will get a list of the keys (the system default keys). You can simply view them using the cat command (if they are encrypted you won't see much).
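If you want to look at the stored Secret objects themselves rather than etcd's own PKI files, here is a sketch using etcdctl from a control-plane node (paths assume kubeadm defaults and will differ on other setups; regsecret is the secret from the question):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/regsecret
# without encryption at rest the value is readable; with it enabled you only
# see an opaque k8s:enc:... blob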

How does Kubectl connect to the master

I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
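For example, you can inspect what your current context points at with:
# show the cluster, user and namespace of the active context
kubectl config view --minify
# show the API server endpoint kubectl talks to
kubectl cluster-info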
In addition to what Robert said: the connection between your local CLI and the cluster is controlled through kubectl config set, see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.