How to access kubeconfig file inside containers - kubernetes

I have a container based on the bitnami/kubectl image.
Now I want to run a few kubectl commands inside that container.
How can the kubectl container be made aware of my kubeconfig file?
I know that I can mount the local kubeconfig file into the container and use it.
But is there any other way to give it access, without using a volume mount?
I went through the documentation of RBAC in Kubernetes.
Is configuring a Role and RoleBinding alone enough to run kubectl apply and kubectl delete commands successfully, even without mounting a kubeconfig file?
It would be really helpful if someone could help me with this.
Thanks in advance!

Now I want to run a few kubectl commands inside that container.
Why do you need it inside the container?
kubectl is the CLI you use to communicate with the cluster: commands are sent to the kube-apiserver, where they are authenticated, validated (for example by admission controllers), and executed.
It is not clear why you need to run kubectl commands inside the container, since kubectl uses your kubeconfig file for that communication (it reads the certificate path and certificate data from it) and can then connect to your cluster.
How do you query the K8S API from your container?
The appropriate solution is to run an API query inside your container.
Every pod has its ServiceAccount token mounted into it, which allows you to query the API.
Here is a script I use to query the API:
https://github.com/nirgeier/KubernetesLabs/blob/master/Labs/21-KubeAPI/api_query.sh
#!/bin/sh
#################################
## Access the internal K8S API ##
#################################
# Point to the internal API server hostname
API_SERVER_URL=https://kubernetes.default.svc
# Path to ServiceAccount token
# The service account is mapped by the K8S API server in the pods
SERVICE_ACCOUNT_FOLDER=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace if required
# NAMESPACE=$(cat ${SERVICE_ACCOUNT_FOLDER}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT_FOLDER}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICE_ACCOUNT_FOLDER}/ca.crt
# Explore the API with TOKEN and the Certificate
curl -X GET \
  --cacert ${CACERT} \
  --header "Authorization: Bearer ${TOKEN}" \
  ${API_SERVER_URL}/api
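As a usage example, the same token and CA also let you query namespaced resources. This sketch lists the pods in the pod's own namespace, assuming the ServiceAccount has been granted list permission on pods via RBAC:
# List the pods in this Pod's namespace (hypothetical follow-up to the script above)
NAMESPACE=$(cat ${SERVICE_ACCOUNT_FOLDER}/namespace)
curl --cacert ${CACERT} \
  --header "Authorization: Bearer ${TOKEN}" \
  ${API_SERVER_URL}/api/v1/namespaces/${NAMESPACE}/pods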

You can use kubectl without your kubeconfig file. Your pod is launched with a service account, and all kubectl commands will be executed with that service account's privileges. So you have to use RBAC to grant access rights to that service account first, as sketched below.
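A minimal sketch of that RBAC setup; the namespace, service account, and role names here are placeholders, not anything defined in the question:
# Hypothetical names; adjust to your namespace and service account
kubectl create role deploy-manager \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments,services \
  --namespace=my-namespace
kubectl create rolebinding deploy-manager-binding \
  --role=deploy-manager \
  --serviceaccount=my-namespace:my-serviceaccount \
  --namespace=my-namespace
With this in place, kubectl run inside a pod using that service account can apply and delete the listed resource types in my-namespace without any kubeconfig, because kubectl falls back to the in-cluster service account credentials automatically.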

Related

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with a proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with the relevant parameters.
To do that I'm running a docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver <cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-as-user <my_gke_cluster_username> \
  --kube-token <token>
Unfortunately it returns:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built in to kubectl itself, but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig (${HOME}/.kube/config) file into the container; the container should (!) then authenticate as if it were kubectl, which will not only use the access token correctly but will renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE It appears there are several other local paths (.helm, .config, and .cache) that may be required too.
Problem solved! A more experienced colleague found the solution.
I should have used the address including the "https://" protocol specification. That alone, however, still kept returning the "Kubernetes cluster unreachable" error, with "unknown" details instead.
I had also been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. However, that alone did not resolve the error either.
Finally, the service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.
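Putting those three fixes together, the working invocation looks roughly like this (a sketch: kubectl create token requires kubectl 1.24+, and the placeholders match the ones above; the bearer token itself authenticates as system:serviceaccount:<namespace>:<service_account>, so --kube-as-user should not be needed):
# Mint a short-lived token for the service account (kubectl >= 1.24)
TOKEN=$(kubectl create token <service_account> --namespace <namespace>)
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver https://<cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-token "$TOKEN"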

DO Kubernetes Cluster + GCP Container Registry

I have a Kubernetes cluster in DigitalOcean, and I want to pull images from a private repository in GCP.
I tried to create a secret that would allow me to pull the images, following this article https://blog.container-solutions.com/using-google-container-registry-with-kubernetes
Basically, these are the steps:
In the GCP account, create a service account key with a JSON credential.
Execute:
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/json-key-file.json)" \
--docker-email=any@valid.email
In the deployment YAML, reference the secret:
imagePullSecrets:
- name: gcr-json-key
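For context, the relevant part of the pod spec looks like this (a sketch; the image name is taken from the error below):
spec:
  containers:
  - name: backend
    image: gcr.io/myapp/backendnodeapi:latest
  imagePullSecrets:
  - name: gcr-json-key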
I don't understand why I am getting 403, whether there are restrictions on using the registry outside Google Cloud, or whether I missed some configuration.
Failed to pull image "gcr.io/myapp/backendnodeapi:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/myapp/backendnodeapi:latest": failed to resolve reference "gcr.io/myapp/backendnodeapi:latest": unexpected status code [manifests latest]: 403 Forbidden
Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has permissions to access Container Registry.
Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group.
This documentation has details on the prerequisites for Container Registry.
Note:
Ensure that your version of kubectl is the latest.
I tried replicating this by following the document you provided and it worked at my end, so ensure that all the prerequisites are met.
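For example, the checks above could be done with gcloud like this (a sketch; the project and service account names are placeholders, and pulling from GCR requires the roles/storage.objectViewer role on the registry's underlying storage):
# Hypothetical project and service account; adjust to your own
gcloud services enable containerregistry.googleapis.com
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:gcr-puller@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer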
That JSON string is not a password.
The documentation suggests to either activate the service account:
gcloud auth activate-service-account [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com --key-file=~/service-account.json
Or add the configuration to $HOME/.docker/config.json
and then run docker-credential-gcr configure-docker.
Kubernetes seems to demand a service-account token secret,
and this requires the annotation kubernetes.io/service-account.name.
Also see Configure Service Accounts for Pods.
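For reference, such a service-account token Secret looks like this (a sketch; the names are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  annotations:
    kubernetes.io/service-account.name: my-sa
type: kubernetes.io/service-account-token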

Generating a kubeconfig file and authenticating for google cloud

I have a Kubernetes cluster. Inside my cluster is a Django application that needs to connect to my Kubernetes cluster on GKE. On Django startup (inside my Dockerfile), I authenticate with Google Cloud by using:
gcloud auth activate-service-account $GKE_SERVICE_ACCOUNT_NAME --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud config set project $GKE_PROJECT_NAME
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_ZONE
I am not really sure if I need to do this every time my Django container starts, and I am not sure I understand how authentication to Google Cloud works. Could I perhaps just generate my kubeconfig file, store it somewhere safe, and use it all the time instead of authenticating?
In other words, is a Kubeconfig file enough to connect to my GKE cluster?
If your service is running in a Pod inside the GKE cluster you want to connect to, use a Kubernetes service account to authenticate.
Create a Kubernetes service account and attach it to your Pod. If your Pod already has a Kubernetes service account, you may skip this step.
Use Kubernetes RBAC to grant the Kubernetes service account the correct permissions.
The following example grants edit permissions in the prod namespace:
kubectl create rolebinding yourserviceaccount \
--clusterrole=edit \
--serviceaccount=yournamespace:yourserviceaccount \
--namespace=prod
At runtime, when your service invokes kubectl, it automatically receives the credentials you configured.
You can also store the credentials as a Secret and mount it on your pod so that it can read them from there.
To use a Secret with your workloads, you can specify environment variables that reference the Secret's values, or mount a volume containing the Secret.
You can create a Secret using the command-line or a YAML file.
Here is an example using Command-line
kubectl create secret SECRET_TYPE SECRET_NAME DATA
SECRET_TYPE: the Secret type, which can be one of the following:
generic: Create a Secret from a local file, directory, or literal value.
docker-registry: Create a dockercfg Secret for use with a Docker registry; used to authenticate against Docker registries.
tls: Create a TLS Secret from the given public/private key pair. The public/private key pair must already exist, and the public key certificate must be PEM-encoded and match the given private key.
For most Secrets, you use the generic type.
SECRET_NAME: the name of the Secret you are creating.
DATA: the data to add to the Secret, which can be one of the following:
A path to a directory containing one or more configuration files, indicated using the --from-file or --from-env-file flags.
Key-value pairs, each specified using --from-literal flags.
If you need more information about kubectl create, you can check the reference documentation.
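Coming back to the kubeconfig question specifically, a minimal sketch (the secret name, mount path, and image are placeholders):
kubectl create secret generic my-kubeconfig \
  --from-file=config=$HOME/.kube/config
The pod spec can then mount the Secret and point KUBECONFIG at it:
spec:
  volumes:
  - name: kubeconfig
    secret:
      secretName: my-kubeconfig
  containers:
  - name: django
    image: my-django-image
    env:
    - name: KUBECONFIG
      value: /etc/kubeconfig/config
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubeconfig
      readOnly: true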

Does Kubernetes liveness probes support user authentication with PKIs?

I am trying to access some of our REST endpoints to check that our API container is up and running. If I can specify a PKI, I can access our endpoints, which are currently all behind authentication. Is this possible?
If not I will have to add a new endpoint.
Step 1: Add curl to your container image (hint: modify the Dockerfile to include curl).
Step 2: In the Kubernetes deployment, configure the resource to mount the certs needed to query (GET request) the REST endpoint (hint: follow the way service account credentials are mounted into a pod).
Step 3: Now use those mounted certs in the liveness probe to curl the endpoint, as sketched below.
At this point, if the curl succeeds with status code 200, the command exits with code 0, which leads to a successful liveness check; otherwise the pod will be restarted.
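A sketch of what that probe could look like (the cert paths, port, and URL path are placeholders; --fail makes curl exit non-zero on HTTP errors so the probe actually fails):
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - >-
      curl --fail
      --cacert /mounted/secret/ca.pem
      --key /mounted/secret/key.pem
      --cert /mounted/secret/cert.pem
      https://localhost:8443/healthz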
You can try to implement it with an external curl script in a liveness command probe, adding the certs as Secrets, mounting them, and exec-ing curl like:
curl -v --cacert /mounted/secret/ca/ca.pem \
  --key /mounted/secret/key/key.pem --cert /mounted/secret/cert/admin.pem \
  https://liveness/probe/url
Regards.

What's the recommended way to locate the apiserver from an openshift pod?

From the Kubernetes docs (Accessing the API from a Pod):
The recommended way to locate the apiserver within the pod is with the kubernetes DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.
However, this 'kubernetes' DNS name did not appear to exist when I was in a shell inside an OpenShift pod. I expected it to exist by default due to the Kubernetes running underneath, but am I mistaken? This was on OpenShift Container Platform 3.7.
Is there a standard way to access the apiserver short of passing it in as an environment variable or something?
In OpenShift, you can use:
https://openshift.default.svc.cluster.local
You could also use the values from the environment variables:
KUBERNETES_SERVICE_PORT
KUBERNETES_SERVICE_HOST
as in:
#!/bin/sh
# Build the API server URL from the injected environment variables
SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
URL="$SERVER/oapi/v1/users/~"
# -k skips TLS verification; prefer --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl -k -H "Authorization: Bearer $TOKEN" $URL
Note that the default service account that containers run as will not have REST API access. The best thing to do is to create a new service account in the project and grant it the rights to use the REST API endpoints for the actions it needs.
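A minimal sketch of that with the oc CLI (the service account name is a placeholder):
# Create a dedicated service account and grant it read access in the current project
oc create serviceaccount api-robot
oc policy add-role-to-user view -z api-robot
Pods that should use it then set serviceAccountName: api-robot in their spec.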