I am trying to access some of our REST endpoints to check that our API container is up and running. If I can specify a PKI certificate, I can access our endpoints, which are currently all behind authentication. Is this possible?
If not, I will have to add a new endpoint.
Step 1: Add curl to your container image. Hint: modify the Dockerfile to include curl.
Step 2: In the Kubernetes Deployment, configure the resource to mount the certs needed to query (GET request) the REST endpoint. Hint: follow the way ServiceAccount credentials are mounted into a pod.
Step 3: Use those certs, which are now mounted into your container, in the liveness probe to curl the endpoint as shown below.
If the curl succeeds with status code 200, the command exits with code 0 and the liveness check passes; otherwise the pod will be restarted (use curl's --fail flag so that a non-2xx response produces a non-zero exit code).
You can implement it with a liveness probe that runs a curl command: add the certs as a Secret, mount it into the container, and exec curl like this:
curl -v --fail --cacert /mounted/secret/ca/ca.pem \
--key /mounted/secret/key/key.pem --cert /mounted/secret/cert/admin.pem \
https://liveness/probe/url
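For reference, here is a minimal sketch of how steps 2 and 3 could look in the Deployment manifest. The Secret name (api-client-certs), the mount path, the container port, and the probe path are assumptions; also, the Secret's keys land as files directly under the mount path, so the layout is flatter than in the curl example above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:latest    # must contain curl (step 1)
          volumeMounts:
            - name: client-certs
              mountPath: /mounted/secret             # certs appear as files here (step 2)
              readOnly: true
          livenessProbe:                             # step 3
            exec:
              command:
                - sh
                - -c
                - >
                  curl --fail --cacert /mounted/secret/ca.pem
                  --key /mounted/secret/key.pem
                  --cert /mounted/secret/admin.pem
                  https://localhost:8443/liveness/probe/url
            initialDelaySeconds: 10
            periodSeconds: 30
      volumes:
        - name: client-certs
          secret:
            secretName: api-client-certs             # assumed Secret holding ca.pem, key.pem, admin.pem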
Regards.
I have a container where I used a bitnami/kubectl image.
Now I want to run a few kubectl commands inside that container.
How can the kubectl container be made aware of my kubeconfig file?
I know that I can mount the local kubeconfig file into containers and use it.
But is there any other way possible to access kubeconfig without using it as a volume mount?
I went through the documentation of RBAC in Kubernetes.
Is configuring a Role and RoleBinding alone enough to run kubectl apply and kubectl delete commands successfully, even without mounting a kubeconfig file?
It would be really helpful if someone helps me with this.
Thanks in advance!
Now I want to run a few kubectl commands inside that container.
Why do you need it inside the container?
kubectl is your CLI for "communicating" with the cluster: the commands are turned into requests to the kube-apiserver, which authenticates and authorizes them (passing them through admission controllers) before executing them.
It is not clear why you need to run kubectl commands inside the container, since kubectl uses your kubeconfig file for that communication (it reads the certificate paths or the embedded certificate data) and will be able to connect to your cluster.
How to query the K8S API from your container?
The appropriate solution is to run an API query inside your container.
Every pod has a ServiceAccount token mounted into it, which will allow you to query the API.
Use the following script, which I use to query the API:
https://github.com/nirgeier/KubernetesLabs/blob/master/Labs/21-KubeAPI/api_query.sh
#!/bin/sh
#################################
## Access the internal K8S API ##
#################################
# Point to the internal API server hostname
API_SERVER_URL=https://kubernetes.default.svc
# Path to ServiceAccount token
# The service account is mapped by the K8S API server in the pods
SERVICE_ACCOUNT_FOLDER=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace if required
# NAMESPACE=$(cat ${SERVICE_ACCOUNT_FOLDER}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT_FOLDER}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICE_ACCOUNT_FOLDER}/ca.crt
# Explore the API with TOKEN and the Certificate
curl -X GET \
--cacert ${CACERT} \
--header "Authorization: Bearer ${TOKEN}" \
${API_SERVER_URL}/api
You can use kubectl without your kubeconfig file. Your pod is launched with a service account, and all kubectl commands will be executed with that service account's privileges. So you have to use RBAC to grant the required access rights to that service account first.
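As a rough sketch of that RBAC setup, assuming a dedicated ServiceAccount named kubectl-runner in the default namespace (the names, resources, and verbs here are assumptions; scope them to whatever your kubectl apply and kubectl delete commands actually manage):
# Sketch: grant the pod's ServiceAccount the rights that kubectl apply/delete will need.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubectl-runner            # assumed name; reference it via spec.serviceAccountName in the pod
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy-manager
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "create", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-manager-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deploy-manager
subjects:
  - kind: ServiceAccount
    name: kubectl-runner
    namespace: default
With no kubeconfig mounted, kubectl inside the pod falls back to the in-cluster ServiceAccount token and CA automatically.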
I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with relevant parameters.
To do that I'm running a docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
unfortunately it returns :
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built into kubectl itself but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig file (${HOME}/.kube/config) into the container; it should (!) then authenticate just as kubectl would, which will not only use the access token correctly but also renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE: It appears several other local paths (.helm, .config and .cache) may be required too.
Problem solved! A more experienced colleague has found the solution.
I should have used the address including the "https://" protocol specification. That however still kept returning the "Kubernetes cluster unreachable" error, with "unknown" details instead.
I had been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. However, that alone did not resolve the error either.
The service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.
I wish to access the Kubernetes APIs from my local machine. I'm trying to get the list of pods using the Kubernetes REST API.
I've created a kubernetes cluster and some pods on Google Cloud.
On my local Windows machine, I've installed gcloud sdk and kubectl component with it.
I connected to my cluster using:
gcloud container clusters get-credentials my-cluster --region us-central1 --project my-project
I can get the list of pods using kubectl get pods
However, I want to get the pod list using the Kubernetes REST API.
GET https://kubernetes.default/api/v1/namespaces/default/pods
Authorization: Bearer my_access_token
But I think the request is not going through.
In Postman, I get the error:
Error: tunneling socket could not be established, cause=socket hang up
Or in Python using requests library (from my local machine), I get the error
HTTPSConnectionPool(host='kubernetes.default', port=443): Max retries exceeded with url: /api/v1/namespaces/default/pods (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x00000277DCD04D90>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
What am I missing here?
The endpoint https://kubernetes.default only works if you want to access the Kubernetes REST API from inside the cluster, i.e. from another pod. To access the Kubernetes REST API from outside the cluster, i.e. from your local machine, you need to use the API server IP or hostname that is externally accessible, i.e. the one in your kubeconfig file.
There are three ways of accessing it from outside the cluster, i.e. from your local machine, per the docs here:
Run kubectl in proxy mode (recommended). This method is recommended, since it uses the stored apiserver location and verifies the identity of the API server using a self-signed cert. No man-in-the-middle (MITM) attack is possible using this method.
kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/default/pods
It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:
Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
Point to the API server, referencing the cluster name:
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
Get the token value:
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
Explore the API with TOKEN
curl -X GET $APISERVER/api/v1/namespaces/default/pods --header "Authorization: Bearer $TOKEN" --insecure
Using a client library
To use the Python client, run the following command: pip install kubernetes. See the Python Client Library page for more installation options.
The Python client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the API server. See this example:
from kubernetes import client, config
config.load_kube_config()
v1=client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
You can also do it the way you are doing, without the client loading the kubeconfig file, but it is more work and you still need to take the API server IP or hostname from the kubeconfig file.
Use the kubectl command below to start a proxy to the Kubernetes API server:
kubectl proxy --port=8080
Get the API versions:
curl http://localhost:8080/api/
The output should look similar to this:
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "10.0.2.15:8443"
}
]
}
Your API server address is not correct for external REST access.
Get the address like this.
kubectl config view
Find your cluster name in the list and get the API server address.
Here is the cURL command (without the real IP or the token) which worked on my local PC.
curl --location --request GET 'https://nnn.nnn.nnnn.nnn/api/v1/namespaces/develop/pods' \
--header 'Authorization: bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
If you run it in Postman, you might have to disable certificate verification.
I have the following question: let's say I have a Kubernetes pod A with n=1 replicas, and I want it to create a new pod B of type "Job" every x minutes. I can create a Job pod from kubectl without problems, but how can I make a pod be instantiated from another pod?
I could try to use kubectl in the parent pod, but I don't think that is the most elegant way to do it.
In the pod you could use any supported Kubernetes client library to call the REST API exposed by the Kubernetes API server to create a pod.
The client library needs to be authenticated to call the Kubernetes API. A service account can be used by the client library for that, and the service account needs the RBAC permissions to create a pod by calling the Kubernetes API server; a sketch of that RBAC follows below.
Internally, kubectl also calls the REST API exposed by the Kubernetes API server when it is used to create a pod.
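As a hedged sketch of that RBAC side, assuming the Jobs are created in the same namespace that pod A runs in (the Role name and the ServiceAccount name are placeholders):
# Sketch: allow the ServiceAccount used by pod A to create Jobs (and inspect their pods) via the API.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
  namespace: default
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-creator-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator
subjects:
  - kind: ServiceAccount
    name: default               # assumed; a dedicated ServiceAccount is preferable
    namespace: default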
In my opinion you have 2 options here:
As suggested in the previous answer, using a client library.
Using the ambassador container pattern: ambassador containers proxy a local connection to the outside world; you can read more about this pattern here.
How will this solve your issue:
Instead of talking to the API server directly from your pod (as you would using kubectl) you can run kubectl proxy in an ambassador container alongside the main container and communicate with the API server through it.
Instead of talking to the API server directly, the app in the main container can connect to the ambassador through HTTP (instead of HTTPS) and let the ambassador
proxy handle the HTTPS connection to the API server, taking care of security transparently. It does this by using the files from the default token’s secret volume (see the script below).
Because all containers in a pod share the same loopback network interface, your app can access the proxy through a port on localhost.
How to build such container?
Dockerfile (uses v1.8):
FROM alpine
RUN apk update && apk add curl && curl -L -O https://dl.k8s.io/v1.8.0/kubernetes-client-linux-amd64.tar.gz && tar zvxf kubernetes-client-linux-amd64.tar.gz kubernetes/client/bin/kubectl && mv kubernetes/client/bin/kubectl / && rm -rf kubernetes && rm -f kubernetes-client-linux-amd64.tar.gz
ADD kubectl-proxy.sh /kubectl-proxy.sh
ENTRYPOINT /kubectl-proxy.sh
Where kubectl-proxy.sh is the following script:
#!/bin/sh
API_SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
CA_CRT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
/kubectl proxy --server="$API_SERVER" --certificate-authority="$CA_CRT" --token="$TOKEN" --accept-paths='^.*'
All that's left for you to do is build this image (push it to a registry), add it as a container to your app pod, and talk to it directly through loopback.
By default, kubectl proxy binds to port 8001, and because both containers in the pod share the same network interfaces, including loopback, you can point your requests to localhost:8001.
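To make the wiring concrete, here is a hedged sketch of the pod spec with the ambassador running alongside the main container (the image names are placeholders; the proxy image is the one built from the Dockerfile above):
# Sketch: the main container calls http://localhost:8001/..., while the ambassador
# runs kubectl proxy and handles HTTPS and authentication to the API server.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
    - name: main
      image: registry.example.com/my-app:latest       # placeholder for your application image
    - name: ambassador
      image: registry.example.com/kubectl-proxy:1.8   # placeholder; image built from the Dockerfile above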
Credit for this goes to Kubernetes in Action book (which is awesome!)
From the Kubernetes docs (Accessing the API from a Pod):
The recommended way to locate the apiserver within the pod is with the kubernetes DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.
However, this 'kubernetes' DNS name did not appear to exist when I was in the shell of an OpenShift pod. I expected it to exist by default due to the Kubernetes running underneath, but am I mistaken? This was on OpenShift Container Platform version 3.7.
Is there a standard way to access the apiserver short of passing it in as an environment variable or something?
In OpenShift, you can use:
https://openshift.default.svc.cluster.local
You could also use the values from the environment variables:
KUBERNETES_SERVICE_PORT
KUBERNETES_SERVICE_HOST
as in:
#!/bin/sh
SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`
URL="$SERVER/oapi/v1/users/~"
curl -k -H "Authorization: Bearer $TOKEN" $URL
Note that the default service account that containers run as will not have REST API access. The best thing to do is to create a new service account in the project and grant it the rights to use the REST API endpoints for the actions it needs.