Kubernetes - list ingress for all namespaces by REST call

I want to list all ingress URLs on a Kubernetes cluster for every namespace.
I know it's possible with:
kubectl -> kubectl get ingress
numerous clients, e.g. for Python: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/ExtensionsV1beta1Api.md#list_ingress_for_all_namespaces
For my current situation, a simple REST call would be the best solution, but I can't find any documentation that points me in the right direction. Is there a REST endpoint to access the above-mentioned information on a Kubernetes cluster?
Thanks in advance.

Yes, you can call the API server to retrieve all ingress rules:
https://kubernetes/apis/extensions/v1beta1/ingresses
This URL works from within your cluster environment (the short kubernetes hostname resolves in the default namespace; kubernetes.default.svc works from any namespace). Replace it with a public IP/domain when calling it from outside. Note that on current clusters, where extensions/v1beta1 has been removed, the equivalent path is /apis/networking.k8s.io/v1/ingresses.
You will need to authenticate using a Bearer token. That token is usually mounted inside your Pods at /var/run/secrets/kubernetes.io/serviceaccount/token (there are some exceptions; for example, the Terraform Kubernetes backend defaults to not mounting this token). To get the token for external use, you can export it using:
# Grab the default ServiceAccount's token from its secret
TOKEN=$(kubectl describe secret $(kubectl get secrets \
  | grep ^default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d " ")
Here's some more info (not about ingress, but other REST API calls): https://stackoverflow.com/a/50797128/9423721
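As a minimal sketch, assuming ${APISERVER} is a placeholder for the reachable address of your API server (e.g. https://<master-ip>:6443) and ${TOKEN} was exported as above, the call would look like:
curl -s --insecure -H "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/apis/extensions/v1beta1/ingresses
# --insecure skips TLS verification; prefer --cacert with the cluster CA.
# On clusters where extensions/v1beta1 is gone, query instead:
#   ${APISERVER}/apis/networking.k8s.io/v1/ingresses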

Related

Kubernetes - How to get Service Name of a Pod

I would like to know how to find the Service name from a Pod name in Kubernetes.
Can you suggest a way?
Services (spec.selector) and Pods (metadata.labels) are bound through shared labels.
So you want to find all Services whose selectors match (some of) the Pod's labels.
kubectl get services \
  --selector=${KEY_1}=${VALUE_1},${KEY_2}=${VALUE_2},... \
  --namespace=${NAMESPACE}
Where ${KEY_n} and ${VALUE_n} are the Pod's label keys and values.
This is challenging, though, because a Service's selector labels can differ from a Pod's labels. You don't want an empty intersection, but a Service's selector may well be just a subset of any given Pod's labels.
The following isn't quite what you want, but you may be able to extend it. Given the above, it enumerates the Services in a Namespace and, using each Service's selector labels, enumerates the Pods that each Service selects:
NAMESPACE="..."
SERVICES="$(\
  kubectl get services \
  --namespace=${NAMESPACE} \
  --output=name)"
for SERVICE in ${SERVICES}
do
  # Flatten the Service's selector map into key=value,key=value
  SELECTOR=$(\
    kubectl get ${SERVICE} \
    --namespace=${NAMESPACE} \
    --output=jsonpath="{.spec.selector}" \
    | jq -r '.|to_entries|map("\(.key)=\(.value)")|@csv' \
    | tr -d '"')
  PODS=$(\
    kubectl get pods \
    --selector=${SELECTOR} \
    --namespace=${NAMESPACE} \
    --output=name)
  printf "%s: %s\n" "${SERVICE}" "${PODS}"
done
NOTE This requires jq because I'm unsure whether kubectl's JSONPath can range over a Service's selector labels and reformat them as needed. Even using jq, my command is messy; it has to:
Get the Service's selector as {"k1":"v1","k2":"v2",...}
Convert this to "k1=v1","k2=v2",...
Trim the surplus " characters (a worked example follows this list)
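Running the jq pipeline on a sample selector shows each step concretely:
echo '{"k1":"v1","k2":"v2"}' \
  | jq -r '.|to_entries|map("\(.key)=\(.value)")|@csv' \
  | tr -d '"'
# prints: k1=v1,k2=v2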
If you want to do this for all Namespaces, you can wrap everything in:
# --output=name yields "namespace/<name>", so strip the prefix
NAMESPACES=$(kubectl get namespaces --output=name | cut -f2 -d '/')
for NAMESPACE in ${NAMESPACES}
do
  ...
done
You can get information about a Pod's Services from its environment variables (ref: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables). Note these variables are only set for Services that existed when the Pod started:
kubectl exec <pod_name> -- printenv | grep SERVICE
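For example, for a hypothetical Service named redis-master (the example used in the linked docs), the output would include lines like:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379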

Using Kubectl API, read container start time from within pod

To know the container start time, we generally describe the pod using:
kubectl describe pod <pod-name>. I need to access the container's start time via the Kubernetes API, as a timestamp or in any other format. Does this exist in the API?
Effectively you could grab this via the status and its state transitions. With kubectl it would look like this:
kubectl get pod $PODNAME -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}'
would yield 2021-05-25T15:57:03Z right now for me.
You could give the Pod API access, but that would be tricky (there is no easy policy way to say "access only to itself"). There is the Downward API volume system, but I don't think it includes this field.
I have written a shell script that solves this problem by fetching the Pod info from inside the Pod via the Kubernetes API and parsing out the required lastTransitionTime field:
APISERVER=https://kubernetes.default.svc.cluster.local
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# List the Pods in this namespace and keep the last lastTransitionTime
curl -s --cacert ${SERVICEACCOUNT}/ca.crt -H "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods \
  | grep "lastTransitionTime" | tail -1
This script outputs the last restart time of the Pod (tail -1) for me.
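A hedged variant of the same call, assuming jq is available in the image and that the Pod's name equals $HOSTNAME (the default for Pods); it narrows the result to this Pod's own Ready transition:
curl -s --cacert ${SERVICEACCOUNT}/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods/${HOSTNAME} \
  | jq -r '.status.conditions[] | select(.type=="Ready") | .lastTransitionTime'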

"There was a problem authenticating with your cluster" when integrating GitLab with a k8s cluster

I created a k8s cluster in AWS using kops.
I entered the Kubernetes cluster name: test.fuzes.io
API URL: https://api.test.fuzes.io/api/v1
I filled the CA Certificate field with the result of:
kubectl get secret {secret_name} -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
and finally I filled the Service Token field with the result of:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
But when I save the changes, I get the message:
There was a problem authenticating with your cluster. Please ensure your CA Certificate and Token are valid.
And I can't install Helm Tiller; I get a Kubernetes error: 404.
I really don't know what I did wrong. Please help me.
As @fuzes confirmed, cluster re-creation can be a workaround for this issue.
This was also described on a GitLab Issues - Kubernetes authentication not consistent
In short:
Using the same Kubernetes cluster integration configuration in multiple projects authenticates correctly on one but not on the other.
Another suggestion is to work around this by setting CI variables (KUBE_NAMESPACE and KUBECONFIG) instead of using the Kubernetes integration.
Hope this is helpful for future reference.
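A minimal sketch of that workaround in a CI job's script section, assuming KUBECONFIG is provided as a file-type CI variable and KUBE_NAMESPACE is set for the environment:
# Hypothetical CI job step: use the manually set variables instead of the integration
kubectl --kubeconfig="$KUBECONFIG" --namespace="$KUBE_NAMESPACE" get pods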
Adjust the API URL to https://api.test.fuzes.io:6443 (6443 is the default port the kube master listens on for the api-server; if you have changed it, use the custom one).
Use this command to validate the port:
kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
It prints the api-server URL, which you can add directly in the field asked for.
Next, for your CA certificate, make sure you copy the whole command output, including the BEGIN CERTIFICATE and END CERTIFICATE lines.
With this you will be able to add the cluster.
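As a quick, hedged check ({secret_name} as in the question), the decoded certificate should begin and end with the PEM markers:
kubectl get secret {secret_name} -o jsonpath="{['data']['ca\.crt']}" \
  | base64 --decode | sed -n '1p;$p'
# expected:
# -----BEGIN CERTIFICATE-----
# -----END CERTIFICATE-----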
In my case,
kubectl cluster-info | \
  grep 'Kubernetes master' | \
  awk '/http/ {print $NF}'
returned https://control.pomazan.xyz/k8s/clusters/c-t7qr5, but the API URL to use was of the form https://80.211.195.192:6443.
{"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
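As a hedged follow-up check, attaching the Service Token as a Bearer token should turn that 403 into a normal response (assuming ${TOKEN} holds the token pasted into GitLab):
curl -k -H "Authorization: Bearer ${TOKEN}" https://api.test.fuzes.io:6443/version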
This issue appears in many people's environments, but it can finally be resolved!

Kubernetes get endpoints

I have a set of Pods providing an nsqlookupd service.
Now I need each nsqd container to have a list of all nsqlookupd servers to connect to simultaneously (whereas the Service would point to a different one each time). I get something similar with:
kubectl describe service nsqlookupd
...
Endpoints: ....
but I want to have it in a variable within my deployment definition, or somehow from within the nsqd container.
Sounds like you would need an extra service running either in your nsqd container or in a separate container in the same Pod. The role of that service would be to poll the API regularly to fetch the list of endpoints.
Assuming that you have Service Accounts enabled (they are by default), here is a proof of concept on the shell using curl and jq from inside a Pod:
# Read token and CA cert from Service Account
CACERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Replace the namespace ("kube-system") and service name ("kube-dns")
ENDPOINTS=$(curl -s --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
https://kubernetes.default.svc/api/v1/namespaces/kube-system/endpoints/kube-dns \
)
# Filter the JSON output
echo "$ENDPOINTS" | jq -r .subsets[].addresses[].ip
# output:
# 10.100.42.3
# 10.100.67.3
Take a look at the source code of Kube2sky for a good implementation of that kind of service in Go.
This could also be done with a StatefulSet: stable names + stable storage.
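A hedged sketch of that direction, assuming a headless Service (clusterIP: None) named nsqlookupd in the default namespace:
# With a headless Service, every Pod IP is published in DNS,
# so nsqd can resolve all nsqlookupd instances without polling the API:
nslookup nsqlookupd.default.svc.cluster.local
# The same list via the API:
kubectl get endpoints nsqlookupd --namespace=default \
  --output=jsonpath='{.subsets[*].addresses[*].ip}'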

Kubernetes Endpoints with TTL

I have a Kubernetes service without a selector for which I would like to manually manage the Endpoints by having the endpoint servers register/heartbeat themselves.
Is there a way to specify a TTL for Endpoints when I POST them to the Kubernetes API server, so that they will timeout and be deleted automatically if my endpoint server terminates and stops heartbeating?
If not, would it be reasonable to add the Endpoints to the registry by POSTing directly to the underlying etcd instead of going through the Kubernetes API, or will that cause other problems?
You do not need to modify Kubernetes to do this.
Here is how to do it yourself:
Add an annotation to each object that you want to have a TTL; the annotation can say when the object should expire. You can pick the name and format of this annotation.
Update the annotation each time you update the object.
Run another process that repeatedly lists all the objects of a given type and deletes the ones that have expired.
Here are specific commands to do this for endpoints.
Add an annotation to an endpoint with an expiration time one minute from now:
#!/bin/bash
# BSD/macOS date syntax; on GNU/Linux use: date -d '+60 seconds' +%s
expiretime=$(date -v+60S +%s)
kubectl annotate endpoints/somename expires-at=$expiretime
Script to list endpoints and delete those whose expires-at lies before now:
#!/bin/bash
while true
do
  for NS in $(kubectl get namespaces -o name | cut -f 2 -d "/")
  do
    for NAME in $(kubectl --namespace=$NS get endpoints -o name)
    do
      exp=$(kubectl get --namespace=$NS $NAME \
        -o jsonpath='{.metadata.annotations.expires-at}' 2> /dev/null)
      if [[ -n $exp && $exp -lt $(date +%s) ]]
      then
        echo "Deleting expired endpoints $NAME in $NS"
        kubectl delete --namespace=$NS $NAME
      fi
    done
  done
  sleep 10   # avoid hammering the API server
done
A Pod is a great place to run the above script: it will have automatic access to the API and, with a replication controller, it will run forever.
There is no TTL or heartbeat built into the Endpoints API objects. You really don't want to write directly to etcd though; that will bite you eventually.