Kubernetes: How to get other pods' names from within a pod?

I'd like to find out the names of the other pods running in the same single-host cluster. All pods are single-application containers. I have pod A (written in Java) that acts as a TCP/IP server, and a few other pods (written in C++) connect to it.
In pod A, I can get the IP addresses of the clients (the other pods). How do I get their pod names? I can't run kubectl commands because pod A has no kubectl installed.
Thanks,

You can directly call kube-apiserver with cURL.
First you need a ServiceAccount bound to a ClusterRole so it is allowed to send requests to the apiserver.
kubectl create clusterrole listpods --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding listpods --clusterrole=listpods --serviceaccount=default:default
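Optionally, you can verify the binding works before going into the container by impersonating the default ServiceAccount:
kubectl auth can-i list pods --as=system:serviceaccount:default:default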
Get a shell inside a container
kubectl exec -it deploy/YOUR_DEPLOYMENT -- sh
Define the necessary parameters for cURL by running the commands below inside the container
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
Send the pod list request to the apiserver
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' $APISERVER/api/v1/pods | jq "[ .items[] | .metadata.name ]"
Done! It will return a JSON array of the pod names you would otherwise get from kubectl get pods.
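Since you already have the client IP addresses in pod A, you can narrow the same request down to a single pod with the status.podIP field selector. A minimal sketch, where 10.244.1.23 is a placeholder for an address you saw on the server side:
CLIENT_IP=10.244.1.23
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' "$APISERVER/api/v1/pods?fieldSelector=status.podIP%3D$CLIENT_IP" | jq "[ .items[] | .metadata.name ]"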
For more examples, you can check the OpenShift REST API reference. Also, if you are planning to do something programmatic, I advise you to check out the official kubernetes-clients.
Credits for jq improvement to #moonkotte

Related

how to "stop" and "start" kubernetes API deployments /pod

I have a requirement to stop a deployment by label name and start it again, via the API.
I also need to do that for a group of deployments, so I added a label to each of them;
I know how to filter the deployments by the desired label. But I found that if I want to stop a deployment from running, I need to scale it down by changing the replica count to 0.
Is there any other option to do that via the API? Because now I need to keep the replica count around for the start (scale-up again), and that is a parameter that is not easy to keep through the lifecycle of a service.
So for now, the best option I have found is something like:
PAYLOAD='[{"op":"replace","path":"/spec/replicas","value":3}]'
curl -X PATCH -d "$PAYLOAD" -H 'Content-Type: application/json-patch+json' $API_URL
But I am asking if there is something else, e.g. a group "stop/start" like in Docker Swarm, where you can just run docker stack rm, for example.
If you would like to run kubectl scale deployments mydeployment --replicas=0 via API call, you can run below command
$ curl -k \
-X PUT \
-d @- \
-H "Authorization: Bearer $TOKEN" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
https://$ENDPOINT/apis/apps/v1/namespaces/$NAMESPACE/deployments/$NAME/scale <<'EOF'
{
"kind": "Scale",
"apiVersion": "apps/v1beta1",
...
}
EOF
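To read the current replica count before you overwrite it (so you can restore it later), you can GET the same scale subresource. A minimal sketch reusing $TOKEN, $ENDPOINT, $NAMESPACE and $NAME from above, with jq assumed to be available:
$ curl -s -k \
-H "Authorization: Bearer $TOKEN" \
https://$ENDPOINT/apis/apps/v1/namespaces/$NAMESPACE/deployments/$NAME/scale | jq .spec.replicas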
More examples can be found in the OpenShift REST API documentation.
How about a solution where you store the number of replicas in an annotation:
export DEPLOYMENT_NAME=xxx
kubectl annotate deployments $DEPLOYMENT_NAME replicas-before=$(kubectl get deployments.apps $DEPLOYMENT_NAME -ojsonpath="{.spec.replicas}")
kubectl scale deployment --replicas 0 $DEPLOYMENT_NAME
kubectl scale deployment --replicas $(kubectl get deployments.apps $DEPLOYMENT_NAME -ojsonpath="{.metadata.annotations.replicas-before}") $DEPLOYMENT_NAME
This does not require additionally saving the state externally. You save the current state in an annotation, in this example called replicas-before, and then scale the deployment down to 0. If you want to restore the number of replicas, just read it back from the annotation and scale the deployment up to that value.
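Since the question is about the API: the same save-and-restore idea can be sketched with two PATCH requests, one writing the annotation on the Deployment and one hitting its scale subresource. This reuses $TOKEN, $ENDPOINT, $NAMESPACE and $NAME from the earlier answer, uses merge-patch, and the "3" is a placeholder for the value you read beforehand:
# Save the current replica count in an annotation (annotation values must be strings; "3" is a placeholder)
curl -k -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H 'Content-Type: application/merge-patch+json' \
-d '{"metadata":{"annotations":{"replicas-before":"3"}}}' \
https://$ENDPOINT/apis/apps/v1/namespaces/$NAMESPACE/deployments/$NAME
# Stop: scale down to 0 via the scale subresource
curl -k -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H 'Content-Type: application/merge-patch+json' \
-d '{"spec":{"replicas":0}}' \
https://$ENDPOINT/apis/apps/v1/namespaces/$NAMESPACE/deployments/$NAME/scale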
I know you asked for a solution using the Kubernetes API. Just run the kubectl command with -v=10 and see what API requests are being sent, as in the example below.
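For example, to capture the exact requests behind a scale-down (mydeployment is a placeholder name):
kubectl scale deployment mydeployment --replicas=0 -v=10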

Can I query kube-apiserver from kube-proxy pod?

I've got no access to the kube-apiserver pod directly, but I do have access to the kube-proxy pod. Can I run curl https://localhost:6443/healthz as a health probe for kube-apiserver, or something similar?
All Pods can have access to the API Server: this is granted by the ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.
Before proceeding, you have to ensure your Pod is allowed to reach the API Server, i.e. is not blocked by a NetworkPolicy: your question doesn't mention one, so I'll assume this is not the case.
The said token is used to perform actions against the API Server, such as CRUD operations on resources protected by RBAC.
If you just need to check the health of the API Server, you can cURL it using the CA certificate mounted at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt and the environment variable KUBERNETES_SERVICE_HOST, already injected at runtime into your Pod and pointing to the API Server, along with KUBERNETES_SERVICE_PORT, although 443 should be the default value.
Example
# kubectl run -it --image curlimages/curl curl --command -- sh
If you don't see a command prompt, try pressing enter.
/ $ env | grep -i kubernetes
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "172.20.0.2:6443"
    }
  ]
}
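To answer the probe part directly: the same credentials also work against the API Server's health endpoints, e.g. /healthz (or the newer /livez and /readyz). A minimal check that should print a plain ok when the API Server is healthy:
/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/healthz
ok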

Rollout restart statefulset using kubectl proxy

I have started kubectl proxy from within my pods and am able to access kubernetes APIs. I have a need to restart my statefulset.
Using kubectl, I would have done this:
kubectl rollout restart statefulset my-statefulset
However, I would like to do this using the REST APIs. For instance, I can delete my pods, using this:
curl -XDELETE localhost:8080/api/v1/namespaces/default/pods
Is there any equivalent REST endpoint that I can use to rollout restart a statefulset?
I ran your command kubectl rollout restart statefulset my-statefulset --v 10 and inspected the output logs.
I figured out that kubectl makes a PATCH request when I run the above command, and I am able to make the same PATCH request using curl, like this:
curl -k --data '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date +%Y-%m-%dT%T%z)"'"}}}}}' \
-XPATCH -H "Accept: application/json, */*" -H "Content-Type: application/strategic-merge-patch+json" \
localhost:8080/apis/apps/v1/namespaces/default/statefulsets/my-statefulset
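The same restartedAt patch should work for other workload kinds by changing the resource path, e.g. for a Deployment (my-deployment is a placeholder name):
curl -k --data '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date +%Y-%m-%dT%T%z)"'"}}}}}' \
-XPATCH -H "Content-Type: application/strategic-merge-patch+json" \
localhost:8080/apis/apps/v1/namespaces/default/deployments/my-deployment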

Kubernetes Engine API delete pod

I need to delete a Pod on my GCP Kubernetes cluster. However, in the Kubernetes Engine API documentation I can only find REST APIs for projects.locations.clusters.nodePools, and nothing for Pods.
The GKE API is used to manage the cluster itself on an infrastructure level. To manage Kubernetes resources, you'd have to use the Kubernetes API. There are clients for various languages, but of course you can also directly call the API.
Deleting a Pod from within another or the same Pod:
PODNAME=ubuntu-xxxxxxxxxx-xxxx
curl https://kubernetes/api/v1/namespaces/default/pods/$PODNAME \
-X DELETE -k \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
From outside, you'd have to use the public Kubernetes API server URL and a valid token. Here's how you get those using kubectl:
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
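Putting those together, the external variant of the same delete is just (a sketch; $PODNAME as above):
curl $APISERVER/api/v1/namespaces/default/pods/$PODNAME \
-X DELETE -k \
-H "Authorization: Bearer $TOKEN"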
Here's more official information on accessing the Kubernetes API server.

Running dashboard inside play-with-kubernetes

I'm trying to start a dashboard inside play-with-kubernetes
Commands I'm running:
start admin node
kubeadm init --apiserver-advertise-address $(hostname -i)
start network
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
allow pods to be scheduled on the master(?)
kubectl taint nodes --all node-role.kubernetes.io/master-
Wait until DNS is up
kubectl get pods --all-namespaces
join node (copy from admin startup, not from here)
kubeadm join --token 43d52c.d72308004d523ac4 10.0.21.3:6443
download and run dashboard
curl -L -s https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml | sed 's/targetPort: 8443/targetPort: 8443\n type: NodePort/' | \
kubectl apply -f -
Unfortunately, the dashboard is not available.
What should I do to correctly deploy it inside play-with-kubernetes?
You need Heapster for the dashboard to work, so execute these as well:
kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/rbac/heapster-rbac.yaml
kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/influxdb/heapster.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
Also, unless you want to fiddle with authentication you need to grant dashboard admin privileges with something like this:
kubectl create clusterrolebinding insecure-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
Eventually a port link (30xxx) will appear, but you will need to change the URL scheme from http to https, and convince your browser that you don't care about the insecure certificate.
You should have a working dashboard now. Piece of cake ;)