How to get the namespace from inside a pod in OpenShift? - kubernetes

I would like to access the OpenShift and Kubernetes APIs from inside a pod to query and modify objects in the application the pod belongs to.
In the documentation (https://docs.openshift.org/latest/dev_guide/service_accounts.html) I found this description on how to access the api:
$ TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
"https://openshift.default.svc.cluster.local/oapi/v1/users/~" \
-H "Authorization: Bearer $TOKEN"
The problem is that when I, for example, want to access a pod, I need to know the namespace I'm in:
https://openshift.default.svc.cluster.local/oapi/v1/namespaces/${namespace}/pods
The only way I have found so far is to pass the namespace in as an environment variable, but I would prefer not to require the user to enter that information.

At least in Kubernetes 1.5.3 I can also see the namespace in /var/run/secrets/kubernetes.io/serviceaccount/namespace.
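For example, a minimal sketch that reads the namespace from that file and substitutes it into the URL (note that pods are served from the Kubernetes /api/v1 path rather than /oapi):
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://openshift.default.svc.cluster.local/api/v1/namespaces/${NAMESPACE}/pods"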

You can get the namespace of your pod automatically populated as an environment variable using the downward API.
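A minimal sketch of the relevant pod spec fragment (the variable name POD_NAMESPACE is an arbitrary choice for illustration):
# Container spec fragment: expose the pod's namespace via the downward API
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace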

I found a solution by tracing what the web console does.
I am allowed to ask for the project list without having cluster-admin rights on the following URL:
https://openshift.default.svc.cluster.local/oapi/v1/projects
Only the projects I have rights to are listed, and from those it is possible to determine the current project, whose name is also the namespace.
I would love it if there were an easier solution, but this works.
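For reference, a sketch of that call from inside a pod, reusing the service account token from the question:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://openshift.default.svc.cluster.local/oapi/v1/projects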

Related

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with relevant parameters.
To do that I'm running a Docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
Unfortunately it returns:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built in to kubectl itself but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig (${HOME}/.kube/config) file into the container; it should (!) then authenticate as if it were kubectl, which will not only use the access token correctly but also renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE: It appears several other local paths (.helm, .config and .cache) may be required too.
Problem solved! A more experienced colleague has found the solution.
I should have used the address including the protocol specification ("https://"). That alone, however, still kept returning the "Kubernetes cluster unreachable:" error, with "unknown" details instead.
I had been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. That alone did not fix the error either.
The service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.
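Putting the three fixes together, a sketch of the corrected invocation (every bracketed value is a placeholder; the token is the service account's token):
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver https://<cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-as-user system:serviceaccount:<namespace>:<service_account> \
  --kube-token <service_account_token>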

Accessing the Kubernetes REST end points using bearer token

Requirement: We need to access the Kubernetes REST endpoints from our Java code. Our basic operations using the REST endpoints are to create/update/delete/get deployments.
We have downloaded kubectl and configured the kubeconfig file of the cluster on our Linux machine.
We can perform operations in that cluster using kubectl. We obtained the bearer token of that cluster by running the command 'kubectl get pods -v=8'. We are using this bearer token with our REST endpoints to perform the required operations.
Questions:
What is the better way to get the bearer token?
Will the bearer token change during the lifecycle of the cluster?
Follow this simple approach:
kubectl proxy --port=8080 &
curl http://localhost:8080/api/
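With the proxy running, namespaced resources are reachable the same way; for example (assuming the default namespace and a cluster recent enough to serve apps/v1):
curl http://localhost:8080/api/v1/namespaces/default/pods
curl http://localhost:8080/apis/apps/v1/namespaces/default/deployments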
From Java code, use the approach below:
# Check all possible clusters, as your kubeconfig may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select the name of the cluster you want to interact with from the above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server referring to the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Get the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 -d)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
Q: What is the better way to get the bearer token?
A: Since you have configured access to the cluster, you might use
kubectl describe secrets
Q: Will the bearer token change during the lifecycle of the cluster?
A: Static tokens do not expire.
Please see Accessing Clusters and Authenticating for more details.
If your token comes from a ServiceAccount, the best way is to add this service account to the Deployment, StatefulSet, or DaemonSet where your Java code runs. This leads to the token being written in your pod at /var/run/secrets/kubernetes.io/serviceaccount/token. The configuration is explained here.
Most Kubernetes client libraries know how to discover a service account linked to the running context and use it automatically. I am not an expert, but it seems the Java client does too; see GitHub.
This token will not be automatically refreshed by Kubernetes unless someone forces a rolling restart.
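As an illustration, the mounted token can be used directly from inside the pod; kubernetes.default.svc is the standard in-cluster API address:
# Use the mounted service account credentials to list deployments in our namespace
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert "$SA/ca.crt" \
  -H "Authorization: Bearer $(cat $SA/token)" \
  "https://kubernetes.default.svc/apis/apps/v1/namespaces/$(cat $SA/namespace)/deployments"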

What's the recommended way to locate the apiserver from an openshift pod?

From the Kubernetes docs (Accessing the API from a Pod):
The recommended way to locate the apiserver within the pod is with the kubernetes DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.
However, this 'kubernetes' DNS name did not appear to exist when I was in the shell of an OpenShift pod. I expected it to exist by default due to the Kubernetes running underneath, but am I mistaken? This was using OpenShift Container Platform version 3.7.
Is there a standard way to access the apiserver short of passing it in as an environment variable or something?
In OpenShift, you can use:
https://openshift.default.svc.cluster.local
You could also use the values from the environment variables:
KUBERNETES_SERVICE_PORT
KUBERNETES_SERVICE_HOST
as in:
#!/bin/sh
SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`
URL="$SERVER/oapi/v1/users/~"
curl -k -H "Authorization: Bearer $TOKEN" $URL
Note that the default service account that containers run as will not have REST API access. The best thing to do is to create a new service account in the project and grant it the rights to use the REST API endpoints for the actions it needs.
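A sketch of that setup with the oc CLI (the account name "robot" and the edit role are illustrative choices; pick the narrowest role that covers the actions you need):
# Create a dedicated service account in the current project
oc create serviceaccount robot
# Grant it rights within the project (-z refers to a service account)
oc policy add-role-to-user edit -z robot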

Kubernetes REST API - Create deployment

I was looking at the Kubernetes API endpoints listed here. I'm trying to create a deployment, which can be done from the terminal using kubectl run CLUSTER-NAME IMAGE-NAME PORT. However, I can't seem to find any endpoint for this command in the link I posted above. I can create a pod using curl -X POST /api/v1/namespaces/{namespace}/pods and then delete it using curl -X DELETE http://localhost:8080/api/v1/namespaces/default/pods/pod-name, where the pod name HAS to be a single pod (if there are 100 pods, each has to be deleted individually). Is there an API endpoint for creating and deleting deployments?
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path, such as /api/v1 or /apis/extensions/v1beta1. To extend the Kubernetes API, API groups are implemented.
Currently there are several API groups in use:
the core group (oftentimes called legacy, due to not having an explicit group name), which is at REST path /api/v1 and is not specified as part of the apiVersion field, e.g. apiVersion: v1.
the named groups are at REST path /apis/$GROUP_NAME/$VERSION, and use apiVersion: $GROUP_NAME/$VERSION (e.g. apiVersion: batch/v1). Full list of supported API groups can be seen in Kubernetes API reference.
To manage extensions resources such as Ingress, Deployments, and ReplicaSets refer to Extensions API reference.
As described in the reference, to create a Deployment:
POST /apis/extensions/v1beta1/namespaces/{namespace}/deployments
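For example, a minimal sketch against kubectl proxy on localhost:8001 (the names and image are placeholders; extensions/v1beta1 matches the API version discussed above):
# Write a minimal Deployment manifest in JSON
cat > nginx-deployment.json <<'EOF'
{
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": { "name": "nginx-deployment" },
  "spec": {
    "replicas": 2,
    "template": {
      "metadata": { "labels": { "app": "nginx" } },
      "spec": {
        "containers": [
          { "name": "nginx", "image": "nginx:1.13", "ports": [{ "containerPort": 80 }] }
        ]
      }
    }
  }
}
EOF
# POST it to the deployments endpoint
curl -X POST -H "Content-Type: application/json" \
  --data @nginx-deployment.json \
  http://localhost:8001/apis/extensions/v1beta1/namespaces/default/deployments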
I debugged this by running kubectl with verbose logging: kubectl --v=9 update -f dev_inventory.yaml.
It showed the use of an API call like this one:
curl -i http://localhost:8001/apis/extensions/v1beta1/namespaces/default/deployments
Note that the first path element is apis, not the normal api. I don't know why it's like this, but the command above works.
I might be too late to help in this question, but here is what I tried on v1.9 to deploy a StatefulSet:
curl -kL -XPOST -H "Accept: application/json" -H "Content-Type: application/json" \
-H "Authorization: Bearer <*token*>" --data #statefulset.json \
https://<*ip*>:6443/apis/apps/v1/namespaces/eng-pst/statefulsets
I converted the statefulset.yaml to JSON because I saw that the data format the API used when doing the POST was JSON.
I ran this command to find out the API call I needed to make for my k8s object:
kubectl --v=10 apply -f statefulset.yaml
(you might not need verbosity level 10, but I wanted as much info as I could get)
The Kubernetes REST API documentation is quite sophisticated, but unfortunately the Deployment documentation is missing.
Since the REST schema is identical to that of other resources, you can figure out the remaining calls:
GET: retrieve a deployment by name:
curl -H "Authorization: Bearer ${KEY}" ${API_URL}/apis/extensions/v1beta1/namespaces/${namespace}/deployments/${NAME}
POST: create a new deployment:
curl -X POST -d @deployment-definition.json -H "Content-Type: application/json" -H "Authorization: Bearer ${KEY}" ${API_URL}/apis/extensions/v1beta1/namespaces/${namespace}/deployments
You should be able to use the calls right away once you fill in the placeholders for your:
API key ${KEY}
API url ${API_URL}
Deployment name ${NAME}
Namespace ${namespace}
Have you tried the analogous URL?
http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-name

Kubernetes get endpoints

I have a set of pods providing nsqlookupd service.
Now I need each nsqd container to have a list of all the nsqlookupd servers to connect to simultaneously (whereas the service will point to a different one each time). I can get something similar with
kubectl describe service nsqlookupd
...
Endpoints: ....
but I want to have it in a variable within my deployment definition, or somehow from within the nsqd container.
Sounds like you would need an extra service running either in your nsqd container or in a separate container in the same pod. The role of that service would be to poll the API regularly in order to fetch the list of endpoints.
Assuming that you enabled Service Accounts (enabled by default), here is a proof of concept on the shell using curl and jq from inside a pod:
# Read token and CA cert from Service Account
CACERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Replace the namespace ("kube-system") and service name ("kube-dns")
ENDPOINTS=$(curl -s --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
https://kubernetes.default.svc/api/v1/namespaces/kube-system/endpoints/kube-dns \
)
# Filter the JSON output
echo "$ENDPOINTS" | jq -r .subsets[].addresses[].ip
# output:
# 10.100.42.3
# 10.100.67.3
Take a look at the source code of Kube2sky for a good implementation of that kind of service in Go.
This could also be done with a StatefulSet: stable names + stable storage.
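A sketch of what that buys you: with a StatefulSet and a matching headless Service both named nsqlookupd (illustrative names), each pod gets a stable DNS name that can be handed to nsqd directly:
# Each nsqlookupd pod is reachable at a predictable address, so nsqd can be
# started with the full list (4160 is nsqlookupd's standard TCP port):
nsqd --lookupd-tcp-address=nsqlookupd-0.nsqlookupd.default.svc.cluster.local:4160 \
     --lookupd-tcp-address=nsqlookupd-1.nsqlookupd.default.svc.cluster.local:4160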