kubectl get nodes from pod (NetworkPolicy) - kubernetes

I am trying to use Python to run kubectl and get the cluster's nodes from inside a pod.
How should I set up a NetworkPolicy for this pod?
I tried allowing traffic from my namespace to the kube-system namespace, but it did not work.
Thanks.

As per Accessing the API from a Pod:
The recommended way to locate the apiserver within the pod is with the
kubernetes.default.svc DNS name, which resolves to a Service IP which
in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service
account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
All you need is a service account with enough privileges, and to use the API server DNS name as stated above. Example:
# Export the token value
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use kubectl to talk internally with the API server
kubectl --insecure-skip-tls-verify=true \
--server="https://kubernetes.default.svc:443" \
--token="${TOKEN}" \
get nodes
A restrictive NetworkPolicy could prevent this type of call; however, by default (with no policy selecting the pod), the above should work.
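Since the question mentions Python, here is a minimal sketch of the same call using the official kubernetes Python client, assuming the script runs inside a pod whose ServiceAccount is bound to a ClusterRole that allows list on nodes:
from kubernetes import client, config

# Load the in-cluster configuration: the API server address and the mounted ServiceAccount token
config.load_incluster_config()
v1 = client.CoreV1Api()
# Listing nodes requires RBAC permission to "list" the cluster-scoped "nodes" resource
for node in v1.list_node().items:
    print(node.metadata.name)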

Related

How to get another pod's name from its IP?

I have a pod that exposes a port (Server). Other pods (Clients) can communicate with it.
The server can see the remote IP and port of the socket when a client connects to it. I am looking for a way to get the client's pod name from its IP and port.
I saw a bunch of questions/answers about getting pod names via kubectl; however, I am not sure whether I can run kubectl from within the cluster itself.
I am trying to figure out what is available for something running on the cluster. It's ok if it requires some special privileges. It's more complicated if it requires authentication.
List all the Pods with the List Pods API operation and parse the JSON response for the podIP field (e.g. with jq or some other JSON parsing tool) to find the JSON object of the Pod that has your desired IP address. Then, extract the metadata.name field from this JSON object to get the name of the Pod.
You can do this by either directly using the Kubernetes API (e.g. with curl) or with kubectl (e.g. kubectl get pods -o json | jq ...). In any case, you must include in this request the ServiceAccount token of the ServiceAccount used by the Pod from which you are issuing the request (if you use the Kubernetes API directly, as a Bearer token in the Authorization header, and if you use kubectl with the --token command-line flag).
Regarding authorisation, you need a Role allowing the list verb on the pods resource and a RoleBinding that binds this Role to the ServiceAccount that your Pod is using (by default, Pods use a ServiceAccount named default in their namespace, but you can specify a custom ServiceAccount with the serviceAccountName field of the Pod).
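For illustration, a minimal sketch of this lookup with the kubernetes Python client, run from inside the server Pod (the target IP value below is just an example):
from kubernetes import client, config

def pod_name_for_ip(target_ip):
    # Uses the ServiceAccount token mounted into the Pod
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    # List all Pods and match on the podIP reported in their status
    for pod in v1.list_pod_for_all_namespaces().items:
        if pod.status.pod_ip == target_ip:
            return pod.metadata.namespace, pod.metadata.name
    return None

print(pod_name_for_ip("10.42.0.17"))  # the client IP seen on the server's socket (example value)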
Whatever can be done via kubectl can be done via direct requests to the API server as well. All you need is a properly set-up ServiceAccount for your pod. Once you have it, you can use plenty of libraries dedicated to communicating with the k8s API server.

Kubernetes - kubectl get pods being called from node

Has anyone run into the need to call kubectl commands from a node?
What I'm trying to do in this example is to access the master from the node and get some useful information.
bafontainha ~ kubectl get pods --server https://localhost:6443 --insecure-skip-tls-verify=true
Please enter Username: service-account
Please enter Password: error: the server doesn't have a resource type "pods"
kubectl needs either a bearer token passed inline or a kubeconfig file with a token or a certificate to authenticate the client to the Kubernetes API Server.
Passing --insecure-skip-tls-verify=true only tells kubectl not to verify the API Server endpoint's authenticity; it does not mean you can call the Kubernetes API without a valid bearer token or a client certificate, which is what proves the identity of the client calling the API Server.
I think the easiest option would be to just copy a working kubeconfig file to the node VM into the .kube/config location and then run kubectl get pods.
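Once a working kubeconfig is in place on the node, the same credentials work for any client, not just kubectl. A minimal sketch with the kubernetes Python client, assuming the file was copied to the default ~/.kube/config location:
from kubernetes import client, config

# Load the kubeconfig copied to ~/.kube/config (the same file kubectl uses)
config.load_kube_config()
v1 = client.CoreV1Api()
# Equivalent of "kubectl get pods" in the default namespace
for pod in v1.list_namespaced_pod("default").items:
    print(pod.metadata.name)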

Get IP addresses of all k8s pods from within a container

I have a service called my-service with an endpoint called refreshCache. my-service is hosted on multiple servers, and occasionally I want an event in my-service on one of the servers to trigger refreshCache on my-service on all servers. To do this I manually maintain a list of all the servers that host my-service, pull that list, and send a REST request to <server>/.../refreshCache for each server.
I'm now migrating my service to k8s. Similarly to before, where I was running refreshCache on all servers that hosted my-service, I now want to be able to run refreshCache on all the pods that host my-service. Unfortunately I cannot manually maintain a list of pod IPs, as my understanding is that IPs are ephemeral in k8s, so I need to be able to dynamically get the IPs of all pods in a node, from within a container in one of those pods. Is this possible?
Note: I'm aware this information is available with kubectl get endpoints ..., however kubectl will not be available within my container.
The best way to achieve this is to use the in-cluster Kubernetes configuration from inside the pod.
The Kubernetes Python client can help here. Below is an example Python script that can be used to get pods and their metadata from inside a pod.
from kubernetes import client, config

def trigger_refresh_cache():
    # it works only if this script is run by K8s as a POD
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
    for i in ret.items:
        print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
    # Rest of the logic goes here to trigger endpoint
Here the method load_incluster_config() is used, which loads the in-cluster configuration via the ServiceAccount attached to that pod.
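To complete the script, one option (a sketch, not part of the original answer) is to call the refreshCache endpoint on each collected pod IP with the requests library; the port and path below are placeholders, since the question elides the exact URL:
import requests

def refresh_all(pod_ips, port=8080, path="/refreshCache"):
    # port and path are placeholders/assumptions; adjust them to what my-service actually exposes
    for ip in pod_ips:
        try:
            requests.post("http://%s:%d%s" % (ip, port, path), timeout=5)
        except requests.RequestException as exc:
            print("refreshCache failed for %s: %s" % (ip, exc))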
You don't need kubectl to access the Kubernetes API. You can do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named my-service from within a container in the cluster, you can do:
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/my-service
Note: replace {namespace} with the namespace of the my-service Endpoints resource.
And to extract the IP addresses from the returned JSON, you could pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
Note that the Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request is denied.
You can do this with a ClusterRole, ClusterRoleBinding, and Service Account (you need to set this up only once):
kubectl create sa endpoint-reader
kubectl create clusterrole endpoint-reader --verb=get,list --resource=endpoints
kubectl create clusterrolebinding endpoint-reader --serviceaccount=default:endpoint-reader --clusterrole=endpoint-reader
Then, use the endpoint-reader ServiceAccount for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
Granting permissions for any other API operations (i.e. combinations of verbs and resources) works in the same way.
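The same Endpoints lookup can also be done with the kubernetes Python client instead of curl and jq; a minimal sketch, assuming the Pod runs with the endpoint-reader ServiceAccount set up above:
from kubernetes import client, config

config.load_incluster_config()
v1 = client.CoreV1Api()
# Equivalent of the curl + jq pipeline: read the Endpoints object and collect the IPs
# (use the namespace of the my-service Endpoints resource instead of "default" if it differs)
endpoints = v1.read_namespaced_endpoints(name="my-service", namespace="default")
ips = [addr.ip for subset in (endpoints.subsets or []) for addr in (subset.addresses or [])]
print(ips)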

Kubernetes Dashboard does not accept service account's token over HTTP: Authentication failed. Please try again

I have installed Kubernetes Dashboard on a Kubernetes 1.13 cluster as described here:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I've also configured the dashboard to serve its content insecurely because I want to expose it via an Ingress at https://my-cluster/dashboard where the TLS connection will be terminated at the ingress controller.
I have edited service/kubernetes-dashboard in namespace kube-system and changed ports from {port:443, targetPort:8443} to {port:80, protocol:TCP, targetPort:9090}.
I have edited deployment/kubernetes-dashboard in namespace kube-system and changed
ports from {containerPort: 8443, protocol:TCP} to {containerPort: 9090, protocol:TCP} (and the livenessProbe analogously). I have also changed args from [ --auto-generate-certificates ] to [ --enable-insecure-login ].
This allows me to contact the dashboard from a cluster node via HTTP at the service's cluster IP address and port 80 (no Ingress is configured yet).
I have also created a sample user as explained here and extracted its token. The token works e.g. in kubectl --token $token get pod --all-namespaces, so it apparently possesses cluster-admin privileges. However, if I enter the same token into the dashboards' login screen I get "Authentication failed. Please try again.".
What could be the reason? How can I further diagnose and solve the issue? (The dashboard's log does not provide any help at this point.)
UPDATE: If I keep the dashboard's standard configuration (i.e. secure access over HTTPS), the same token is accepted.

How to configure kubectl in kubernetes cluster

I have provisioned a Kubernetes cluster using this saltstack repo:
https://github.com/valentin2105/Kubernetes-Saltstack
Now, I am not able to configure my kubectl CLI to access the cluster.
Is there a way to reset the credentials?
Is there a way to properly configure .kube/config with the right context, user, credentials, and cluster name by retrieving the info from the servers?
I am new to kubernetes, so maybe I am missing something here.
To set up your cluster you can do as follows:
kubectl config set-cluster k8s-cluster --server=${CLUSTER} [--insecure-skip-tls-verify=true]
--server=${CLUSTER}, where ${CLUSTER} is your cluster address
--insecure-skip-tls-verify=true is used if you want to skip verification of the API server's TLS certificate
Then you need to set your context (depending on your Kubernetes configuration):
kubectl config set-context k8s-context --cluster=k8s-cluster --namespace=${NS}
--namespace=${NS} specifies the default namespace (which lets you skip -n when typing kubectl commands for that namespace)
If you are using RBAC, you might need to specify your user and pass your token or your login password.
For this advanced usage, see the docs: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration
Finally, to use your context you only have to:
kubectl config use-context k8s-context