Accessing Kubernetes APIs from a local machine

I wish to access the Kubernetes API from my local machine. I'm trying to get the list of pods using the Kubernetes REST API.
I've created a Kubernetes cluster and some pods on Google Cloud.
On my local Windows machine, I've installed the gcloud SDK and the kubectl component with it.
I connected to my cluster using:
gcloud container clusters get-credentials my-cluster --region us-central1 --project my-project
I can get the list of pods using kubectl get pods
However, I want to get the pod list using the Kubernetes REST API:
GET https://kubernetes.default/api/v1/namespaces/default/pods
Authorization: Bearer my_access_token
But I think the request is not going through.
In Postman, I get the error:
Error: tunneling socket could not be established, cause=socket hang up
Or in Python using the requests library (from my local machine), I get the error:
HTTPSConnectionPool(host='kubernetes.default', port=443): Max retries exceeded with url: /api/v1/namespaces/default/pods (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x00000277DCD04D90>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
What am I missing here?

The endpoint https://kubernetes.default only works if you want to access the Kubernetes REST API from inside the cluster, i.e. from another pod. For accessing the Kubernetes REST API from outside the cluster, i.e. from your local machine, you need to use the API server IP or host which is externally accessible, i.e. the one in your kubeconfig file.
For accessing it from outside the cluster, i.e. from your local machine, there are three ways, referring to the docs here:
Run kubectl in proxy mode (recommended). This method uses the stored apiserver location and verifies the identity of the API server using a self-signed cert, so no man-in-the-middle (MITM) attack is possible.
kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/default/pods
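Since you were trying this from Python as well, the same request works through the proxy; a minimal sketch with the requests library, assuming kubectl proxy is running locally on port 8080:
import requests

# kubectl proxy handles authentication and TLS, so plain HTTP to localhost works.
resp = requests.get("http://localhost:8080/api/v1/namespaces/default/pods")
resp.raise_for_status()
for pod in resp.json()["items"]:
    print(pod["metadata"]["name"])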
It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:
Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
Select the name of the cluster you want to interact with from the above output:
export CLUSTER_NAME="some_server_name"
Point to the API server, referencing the cluster name:
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
Get the token value:
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 --decode)
Explore the API with the token:
curl -X GET $APISERVER/api/v1/namespaces/default/pods --header "Authorization: Bearer $TOKEN" --insecure
Using a client library
To use the Python client, run the following command: pip install kubernetes. See the Python Client Library page for more installation options.
The Python client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the API server. See this example:
from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
You can also do it the way you are doing, without using the kubeconfig file, but it's more work, and you need to use the Kubernetes API server IP or hostname from the kubeconfig file.
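For illustration, a minimal Python sketch of that direct approach, assuming the APISERVER and TOKEN values extracted above are exported in the environment (verify=False mirrors curl's --insecure; point verify at the cluster CA certificate instead for real use):
import os
import requests

apiserver = os.environ["APISERVER"]  # externally accessible API server address
token = os.environ["TOKEN"]          # bearer token extracted above

resp = requests.get(
    apiserver + "/api/v1/namespaces/default/pods",
    headers={"Authorization": "Bearer " + token},
    verify=False,  # like --insecure; use the cluster CA cert in real use
)
for pod in resp.json()["items"]:
    print(pod["metadata"]["name"])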

Using the kubectl command below, start a proxy to the Kubernetes API server:
kubectl proxy --port=8080
Get the API versions:
curl http://localhost:8080/api/
The output should look similar to this:
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.2.15:8443"
    }
  ]
}

Your API server address is not correct for external REST access.
Get the address like this:
kubectl config view
Find your cluster name in the list and get the API server address.
Here is the cURL command (without the real IP or token) which worked on my local PC:
curl --location --request GET 'https://nnn.nnn.nnnn.nnn/api/v1/namespaces/develop/pods' \
--header 'Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
If you run it in Postman, you might have to disable certificate verification.

Related

How to access kubeconfig file inside containers

I have a container where I used a bitnami/kubectl image.
Now I want to run a few kubectl commands inside that container.
How can the kubectl container be made aware of my kubeconfig file?
I know that I can mount the local kubeconfig file into containers and use it.
But is there any other way possible to access kubeconfig without using it as a volume mount?
I went through the documentation of RBAC in Kubernetes.
Is configuring a Role and RoleBinding alone enough to run kubectl apply and kubectl delete commands successfully, even without mounting a kubeconfig file?
It would be really helpful if someone helps me with this.
Thanks in advance!
Now I want to run a few kubectl commands inside that container.
Why do you need it inside the container?
kubectl is your CLI for "communicating" with the cluster; the commands are sent to the kube-apiserver, which parses and executes them (usually passing them through admission controllers).
It's not clear why you need to run kubectl commands inside the container, since kubectl uses your kubeconfig file for the communication (it reads the certificate path or the certificate data from it) and will be able to connect to your cluster.
How to call the K8S API from your container?
The appropriate solution is to run an API query inside your container.
Every pod internally stores the ServiceAccount token, which will allow you to query the API.
Use the following script, which I'm using to query the API:
https://github.com/nirgeier/KubernetesLabs/blob/master/Labs/21-KubeAPI/api_query.sh
#!/bin/sh
#################################
## Access the internal K8S API ##
#################################
# Point to the internal API server hostname
API_SERVER_URL=https://kubernetes.default.svc
# Path to ServiceAccount token
# The service account is mapped by the K8S API server in the pods
SERVICE_ACCOUNT_FOLDER=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace if required
# NAMESPACE=$(cat ${SERVICE_ACCOUNT_FOLDER}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT_FOLDER}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICE_ACCOUNT_FOLDER}/ca.crt
# Explore the API with TOKEN and the Certificate
curl -X GET \
--cacert ${CACERT} \
--header "Authorization: Bearer ${TOKEN}" \
${API_SERVER_URL}/api
You can use kubectl without your kubeconfig file. Your pod is launched with a service account, and all kubectl commands will be executed with that service account's privileges. So you have to use RBAC to grant access rights to that service account first.

Kubernetes: how to create a new pod from an existing pod

I have the following doubt: let's say I have a Kubernetes pod A with n=1 replicas, and I want it to create a new pod B of type "Job" every x minutes. I can create a Job pod from kubectl without problems, but how can I make a pod be instantiated from another pod?
I could try to use kubectl in the parent pod, but I don't think it's the most elegant way to do it.
In the pod, you could use any supported Kubernetes client library to call the REST API exposed by the Kubernetes API server to create a pod.
The client library needs to be authenticated to call the Kubernetes API. A service account can be used by the client library for that, and the service account needs RBAC permissions to be able to create a pod by calling the Kubernetes API server.
Internally, kubectl also calls the REST API exposed by the Kubernetes API server when it is used to create a pod.
In my opinion you have 2 options here:
As suggested in the previous answer, using a client library.
Using an ambassador container pattern: ambassador containers proxy a local connection to the world; you can read more about this pattern here.
How will this solve your issue:
Instead of talking to the API server directly from your pod (as you would using kubectl) you can run kubectl proxy in an ambassador container alongside the main container and communicate with the API server through it.
Instead of talking to the API server directly, the app in the main container can connect to the ambassador through HTTP (instead of HTTPS) and let the ambassador proxy handle the HTTPS connection to the API server, taking care of security transparently. It does this by using the files from the default token's secret volume (see the script below).
Because all containers in a pod share the same loopback network interface, your app can access the proxy through a port on localhost.
How to build such a container?
Dockerfile (uses kubectl v1.8):
FROM alpine
RUN apk update && apk add curl && \
    curl -L -O https://dl.k8s.io/v1.8.0/kubernetes-client-linux-amd64.tar.gz && \
    tar zvxf kubernetes-client-linux-amd64.tar.gz kubernetes/client/bin/kubectl && \
    mv kubernetes/client/bin/kubectl / && \
    rm -rf kubernetes && rm -f kubernetes-client-linux-amd64.tar.gz
ADD kubectl-proxy.sh /kubectl-proxy.sh
ENTRYPOINT /kubectl-proxy.sh
Where kubectl-proxy.sh is the following script:
#!/bin/sh
API_SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
CA_CRT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
/kubectl proxy --server="$API_SERVER" --certificate-authority="$CA_CRT" --token="$TOKEN" --accept-paths='^.*'
All that's left for you to do is build this image (push it to a registry), add it as a container to your app pod, and talk to it directly through loopback.
By default, kubectl proxy binds to port 8001, and because both containers in the pod share the same network interfaces, including loopback, you can point your requests to localhost:8001, as in the sketch below.
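For example, the app in the main container could create the Job for pod B by POSTing plain JSON through the ambassador; a minimal sketch, assuming the proxy listens on the default port 8001 (the Job name and image are placeholders):
import requests

# Plain HTTP: the ambassador container handles TLS and authentication.
job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "my-job"},  # placeholder name
    "spec": {
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{"name": "worker", "image": "my-image"}],  # placeholder image
            }
        }
    },
}
resp = requests.post("http://localhost:8001/apis/batch/v1/namespaces/default/jobs", json=job)
resp.raise_for_status()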
Credit for this goes to Kubernetes in Action book (which is awesome!)

Accessing the Kubernetes REST endpoints using a bearer token

Requirement: We need to access the Kubernetes REST endpoints from our Java code. Our basic operations using the REST endpoints are to create/update/delete/get deployments.
We have downloaded the kubectl and configured the kubeconfig file of the cluster in our Linux machine.
We can perform operations on that cluster using kubectl. We got the bearer token of that cluster by running the command 'kubectl get pods -v=8'. We are using this bearer token in our REST calls to perform the required operations.
Questions:
What is the best way to get the bearer token?
Will the bearer token change during the lifecycle of the cluster?
Follow this simple approach:
kubectl proxy --port=8080 &
curl http://localhost:8080/api/
From Java code, use the approach below:
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select the name of the cluster you want to interact with from the above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server, referencing the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Get the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 -d)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
Q: What is the best way to get the bearer token?
A: Since you have configured access to the cluster, you might use
kubectl describe secrets
Q: Will the bearer token change during the lifecycle of the cluster?
A: Static tokens do not expire.
Please see Accessing Clusters and Authenticating for more details.
If your token comes from a ServiceAccount, the best way should be to add this service account to the Deployment, StatefulSet, or DaemonSet where your Java code runs. This will lead to the token being written in your pod at "/var/run/secrets/kubernetes.io/serviceaccount/token". The configuration is explained here.
Most Kubernetes client libraries know how to discover a service account linked to a running context and "automatically" use it. I'm not an expert, but it seems the Java client does; see GitHub.
This token will not be automatically refreshed by Kubernetes unless someone forces a rolling restart.

Proxy K8S app delegating authentication of requests from other pods

Background
I have a K8S cluster with a number of different pods that have their own specific service accounts, cluster roles, and cluster role bindings, so that they can execute various read/write requests directly with the K8S REST API. There are some complicated requests that can be issued, and I'd like to make a function to wrap the complex logic. However, the various services in the cluster are written in multiple (i.e. 6+) programming languages, and there does not (yet) seem to be a trivial way to allow all these services to directly re-use this code.
I'm considering creating a "proxy" micro-service, that exposes its own REST API, and issues the necessary requests and handles the "complex logic" on behalf of the client.
Problem
The only problem is that, with the current deployment model, a client could request that the proxy micro-service execute an HTTP request that the client itself isn't authorized to make.
Question
Is there a trivial/straightforward way for one pod, for example, to identify the client pod, and execute some kind of query/result-of-policy operation (i.e. by delegating the authentication to the K8S cluster authentication mechanism itself) to determine if it should honor the request from the client pod?
The Kubernetes authentication model describes how a particular user or service account is identified in the k8s cluster, while authorization methods determine whether a request from a cluster visitor, aimed at some action on cluster resources/objects, has sufficient permissions to make that possible.
Since you've used a specific service account for each Pod across the cluster and granted them specific RBAC rules, it might be possible to use the SelfSubjectAccessReview API to inspect requests to the k8s REST API and determine whether the client Pod's service account has the appropriate permission to perform any action in the target Pod's namespace.
That is achievable using the kubectl auth can-i subcommand by submitting the essential information for user impersonation.
I assume you might also be able to query the k8s authorization API group over HTTP and parse the structured JSON/YAML response, as in the example below:
A regular kubectl auth can-i command to check whether the default SA can retrieve data about Pods in the default namespace:
kubectl auth can-i get pod --as system:serviceaccount:default:default
The equivalent method via an HTTP call to the k8s REST API, using JSON content and Bearer token authentication:
curl -k \
-X POST \
-d @- \
-H "Authorization: Bearer $MY_TOKEN" \
-H 'Accept: application/json' \
-H "Impersonate-User: system:serviceaccount:default:default" \
-H 'Content-Type: application/json' \
https://<API-Server>/apis/authorization.k8s.io/v1/selfsubjectaccessreviews <<'EOF'
{
  "kind": "SelfSubjectAccessReview",
  "apiVersion": "authorization.k8s.io/v1",
  "spec": {"resourceAttributes": {"namespace": "default", "verb": "get", "resource": "pods"}}
}
EOF
Output:
.... "status": {
"allowed": true,
"reason": "RBAC: allowed by RoleBinding ....

What's the recommended way to locate the apiserver from an openshift pod?

From the Kubernetes docs (Accessing the API from a Pod):
The recommended way to locate the apiserver within the pod is with the kubernetes DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.
However, this 'kubernetes' DNS name did not appear to exist when I was in the shell of an OpenShift pod. I expected it to exist by default due to the Kubernetes running underneath, but am I mistaken? This was using OpenShift Container Platform version 3.7.
Is there a standard way to access the apiserver short of passing it in as an environment variable or something?
In OpenShift, you can use:
https://openshift.default.svc.cluster.local
You could also use the values from the environment variables:
KUBERNETES_SERVICE_PORT
KUBERNETES_SERVICE_HOST
as in:
#!/bin/sh
SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`
URL="$SERVER/oapi/v1/users/~"
curl -k -H "Authorization: Bearer $TOKEN" $URL
Note that the default service account that containers run as will not have REST API access. The best thing to do is to create a new service account in the project and grant it the rights to use the REST API endpoints for the actions it needs.
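For comparison, a minimal Python sketch of the same lookup using the environment variables and the mounted service account files (assuming it runs inside a pod with the default token mount):
import os
import requests

server = "https://{}:{}".format(
    os.environ["KUBERNETES_SERVICE_HOST"], os.environ["KUBERNETES_SERVICE_PORT"]
)
with open("/var/run/secrets/kubernetes.io/serviceaccount/token") as f:
    token = f.read()

resp = requests.get(
    server + "/oapi/v1/users/~",  # OpenShift-specific endpoint from the script above
    headers={"Authorization": "Bearer " + token},
    verify="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
)
print(resp.json())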