Kubernetes: link a pod to kube-proxy

As I understand it, kube-proxy runs on every Kubernetes node (it is started on the master and on the worker nodes).
If I understand correctly, it is also the 'recommended' way to access the API (see: https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod)
So, since kube-proxy is already running on every node, is the 'recommended' way to start each pod with a new kube-proxy container in it, or is it possible to 'link' somehow to the running kube-proxy container?
Originally I was using the URL with $KUBERNETES_SERVICE_HOST and the credentials passed as a Secret, on GKE,
calling
curl https://$USER:$PASSWORD@${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SELECTOR}
and parsing the results; but on K8s deployed on a CoreOS cluster I only seem to be able to authenticate through TLS and certs, and the linked proxy seems like a better way.
So, I'm looking for the most efficient / easiest way to connect to the API from a pod to look up the IP of another pod referred to by a Service.
Any suggestion/input?

There are a couple of options here, as noted in the doc link you provided.
The preferred method is using Service Accounts to access the API:
The short description is that your service would read the service-account secrets (token / CA-cert) that are mounted into the pod, then inject the token into the http header and validate the apiserver cert using the CA-cert. This somewhat simplifies the description of service accounts, but the above link can provide more detail.
Example using curl and the service-account data inside a pod:
curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
     --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     https://kubernetes/api/v1/namespaces
Another option, mentioned in the link you provided, is to run a side-car container running a "kubectl proxy" in the same pod as your application.
A note of clarification: "kube-proxy" and "kubectl proxy" do not refer to the same thing. kube-proxy is responsible for routing "service" requests; kubectl proxy is a CLI command which opens a local proxy to the Kubernetes API.
What is happening under the covers when running kubectl proxy is that the kubectl command already knows how to use the service-account data, so it will extract the token/CA-cert and establish a connection to the API server for you, then expose an interface locally in the pod (which you can use without any auth/TLS).
This might be an easier approach, as it likely requires no changes to your existing application, short of pointing it at the local kubectl proxy container running in the same pod.
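As a sketch of that side-car approach (the pod name, app image, and kubectl image below are my assumptions, not from the original answer):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy            # hypothetical pod name
spec:
  containers:
  - name: app
    image: my-app:latest          # hypothetical application image
  - name: kubectl-proxy
    image: bitnami/kubectl:latest # any image that ships kubectl will do
    command: ["kubectl", "proxy", "--port=8001"]
EOF
Because containers in a pod share a network namespace, the app container can then reach the API without any auth/TLS:
curl http://localhost:8001/api/v1/namespaces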
One other side-note: I'm not sure of your exact use-case, but generally it would be preferable to use the Service IP / Service DNS name and allow Kubernetes to handle service discovery, rather than extracting the pod IP itself (the pod IP will change if the pod gets scheduled to a different machine).
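For instance (service name and port are hypothetical), the in-cluster DNS name makes the pod-IP lookup unnecessary:
# Resolves to the Service's stable ClusterIP, which survives pod rescheduling
curl http://my-service.default.svc.cluster.local:9000/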

Related

Deleting kube-apiserver from kubernetes-master (just for testing and understanding)

Deleting kube-apiserver from kubernetes-master does not prevent kubectl from querying pods. I had always understood that kube-apiserver is responsible for communication with the master.
My question: how is kubectl still able to query pods while kube-apiserver is restarting? Is there any official documentation that covers this behavior?
Your understanding is correct. The Kubernetes API server validates and configures data for the API objects, which include pods, services, replication controllers, and others. The API server services REST operations and provides the frontend to the cluster's shared state through which all other components interact. So if your api-server pod encounters some issue, your client will not be able to communicate with it.
What is happening is that when you delete the api-server pod, it is immediately recreated (it is a static pod managed directly by the kubelet), hence your client is able to connect and fetch the data.
To provide an example, I simulated an api-server pod failure by fiddling a bit with the kube-apiserver.yaml file in /etc/kubernetes/manifests:
➜ manifests pwd
/etc/kubernetes/manifests
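The answer doesn't show the exact edit, but one common way to simulate the failure (my assumption of what "fiddling" means here) is to move the manifest out of the watched directory, since the kubelet stops a static pod whose manifest disappears:
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# ...observe the refused connections below, then restore it:
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/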
Immediately after I did that, I was no longer able to connect to the api-server:
➜ manifests kubectl get pods -A
The connection to the server 10.128.15.230:6443 was refused - did you specify the right host or port?
Getting at those manifests in Docker Desktop can be tricky, depending on where you run it. Please have a look at this case, where the answer shows a solution for that.

How to debug a Kubernetes service endpoint that isn't serving correctly?

I have set up a Kubernetes cluster. The cluster contains, among other things, a service and deployment surfacing an API webservice (based on the subway-explorer-gmaps-proxy container).
I've deployed the service externally, using the LoadBalancer service type (this is on GCP):
$ kubectl get svc subway-explorer-gmaps-proxy-service
NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
subway-explorer-gmaps-proxy-service   LoadBalancer   10.35.252.232   35.224.78.225   9000:31396/TCP   19h
My understanding (and correct me if I'm wrong!) is that this service should now be queryable outside of the cluster, by visiting http://35.224.78.225 in the browser.
When running the Docker container locally, I can verify things are working correctly by navigating to the following URL:
http://localhost:49161/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
Looking at the kubectl get output, I expect visiting the following URL in the browser will serve me the content I'm looking for:
http://35.224.78.225:31396/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
But when I visit this URL, nothing gets served.
I suspect there is a non-fatal error in the deployment configuration. What is an effective way of debugging this problem? Are there access logs or a stdout stream somewhere I can check to see what's wrong?
You can try running through the official docs on debugging services: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Beyond that, have you confirmed you're querying the load balancer on the right port? While I don't deploy on GCP, when launching a load balancer for a kubernetes service on AWS it'll accept traffic on port 80/443 and forward it to the NodePort of the service, which I'm guessing is 31396 for your case. What are the ports listed in kubectl get svc subway-explorer-gmaps-proxy-service -o yaml?
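A few typical checks from that guide (the label selector below is an assumption; substitute your deployment's labels):
# Does the service have endpoints, i.e. matching and ready pods?
kubectl get endpoints subway-explorer-gmaps-proxy-service
# Which ports is the service exposing and forwarding to?
kubectl get svc subway-explorer-gmaps-proxy-service -o yaml
# What are the pods writing to stdout? (answers the access-log question)
kubectl logs -l app=subway-explorer-gmaps-proxy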
What I didn't realize is that Google Cloud has a separate firewall system, which is distinct from the connection settings managed by Kubernetes. In order to expose the application to the outside world (e.g. a web browser), I need to also modify the Google Cloud firewall rules (see for example this answer as to how).
To test that the application is working on the Kubernetes side, you need not modify cloud firewall rules. Instead, run wget, curl, or some similar data retrieval command from a different pod on the cluster, pointed at the internal IP address and port number of the pod of interest.
For example, the "hello world" pod used by the Kubernetes documentation is the busybox pod (defined here). By creating this pod in my cluster, and then running the following:
kubectl exec busybox -c busybox -- wget "10.35.249.23:9000"
I was able to confirm that the service is functioning correctly within Kubernetes. You can also use any other pod whose underlying OS provides wget; I just used busybox because all of my other pods use Google's Container-Optimized OS, which doesn't include it.
Finally, for the purposes of debugging, I went ahead and added a /status endpoint to my API application service which serves {"status": "OK"} when the core service is working. I recommend following this pattern with other applications as well, as it gives a simple endpoint you can test to make sure that, at a minimum, the webserver is responding to input. In my case, I discovered that the /status page is OK but the API calls are failing, which allowed me to narrow the issue down to unresolved Promises caused by a bad credentials secret.
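A quick way to exercise such an endpoint from inside the cluster, reusing the busybox pattern above (IP and port taken from this example):
# Expect {"status": "OK"} if the webserver itself is up
kubectl exec busybox -c busybox -- wget -qO- "10.35.249.23:9000/status"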

How do I find out the external IP of a Load Balancer service?

I am using Kubernetes Engine on the Google Cloud Platform. I have a pod running a process in a Docker scratch container. I also have a load balancer service that gives me access to the pod from the outside world.
The process running in the pod needs to know what its external IP address is. How can I get this?
Prior to using Kubernetes Engine I was using Compute Engine and could find the external IP address by the following:
curl -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
Are there any internal tools I can use that would be available to my process? Or would I need the process to call an external site that can mirror back the IP address?
Every Pod (unless configured not to) has valid Kubernetes credentials in /var/run/secrets/kubernetes.io/serviceaccount/token, as described here. So the answer is to use the Kubernetes API to ask the Service in front of the Pod(s) for its status:loadBalancer:ingress:ip:, as described here; I have every reason to believe GKE keeps that field up-to-date with any changes to the load balancer. The Kubernetes API is always(?) located at https://kubernetes (that's normally enough; https://kubernetes.default.svc.cluster.local is its full name), so there should be very little configuration the Pod needs in order to carry out the lookup.
The asterisk to that answer is that one must provide the name of the Service to the Pod(s) sitting behind it, because (for the most part) there is no way for a Pod to know how many Services point to it.
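A minimal sketch of that lookup from inside a pod ($NAMESPACE and $SERVICE_NAME are the values you would have to supply):
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl -s --cacert $SA/ca.crt \
     -H "Authorization: Bearer $(cat $SA/token)" \
     "https://kubernetes/api/v1/namespaces/$NAMESPACE/services/$SERVICE_NAME"
# The external IP appears under .status.loadBalancer.ingress[0].ip in the JSON response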

REST API for Kubernetes apiserver proxy?

I have a K8s cluster built with kubeadm v1.8.4, running on a virtual machine. Now I want to access this K8s cluster using the REST API from my laptop/workstation.
One way I am able to do this is by running the command "kubectl proxy --address= --accept-hosts '.*'". But I have to run this command manually in order to access my cluster from the laptop, and I don't want that.
While going through the docs I found that there is another proxy available, the apiserver proxy. I am trying to use it by following this link (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls), but I get the error "could not get any response" in Postman.
So I was wondering whether I am using this apiserver proxy properly or not. Or is there any other way to send REST requests from my laptop to my cluster on the VMs without manually running "kubectl proxy"?
What kubectl proxy does for you is essentially two things.
First, obviously, it proxies your traffic from a localhost port to the Kubernetes API.
Second, it also authenticates you against the cluster, so that the proxied calls do not need any authentication information.
To hit the API directly, it's as simple as pointing your client to the right IP:PORT of your VM, but... you need to either ignore TLS issues (not advised) or trust the kube CA cert. You also still need to authenticate, so you need appropriate client credentials (i.e. a bearer token).
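A sketch of such a direct call (address, port, token, and CA file are placeholders; kubeadm's API server listens on 6443 by default):
APISERVER=https://<vm-ip>:6443
TOKEN=<bearer-token>   # e.g. a service-account token you extracted from the cluster
curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" \
     "$APISERVER/api/v1/namespaces/default/pods"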
"Manually constructing apiserver proxy URLs" refers to a different kind of beast, which allows you to proxy traffic to services deployed in your Kubernetes cluster by accessing a particular path on the kube API server. So to use it, you need to have access to the API already.
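The URL shape for such a proxy request, per the linked docs, looks like this (service name and port name are hypothetical; variables reused from the sketch above):
# Proxies through the API server to a service running inside the cluster
curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" \
     "$APISERVER/api/v1/namespaces/default/services/http:my-service:8080/proxy/"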

Kubernetes - Unable to hit the server

We deployed a containerized app by pulling a public Docker image from Docker Hub and were able to get a pod running a server at 172.30.105.44. Hitting this IP from a REST client, or curling/pinging the IP, gives no response. Can someone please guide us where we are going wrong?
Firstly, find out the IP of your node by executing the command
kubectl get nodes
Get the information related to the service by executing the command kubectl describe services <service-name>
Make a note of the field NodePort from here.
To access your service that is already running, hit the endpoint - nodeIP:NodePort.
You can now access your service successfully!
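Put together (names and addresses are placeholders; this assumes the service is of type NodePort, or LoadBalancer, which also allocates a NodePort):
kubectl get nodes -o wide              # note a node's EXTERNAL-IP (or INTERNAL-IP)
kubectl describe services my-service   # note the "NodePort:" line, e.g. 30080
curl http://<node-ip>:30080/           # should now reach the service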
I am not sure where you have deployed (AWS, GKE, bare metal), but you should make sure you have the following:
https://kubernetes.io/docs/user-guide/ingress/
https://kubernetes.io/docs/user-guide/services/
Ingress will work out of the box on GKE, but with an AWS installation, you may need to make sure you have nginx-ingress pods running.
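For reference, a minimal Ingress manifest looks roughly like this (names are hypothetical; on clusters as old as the ones in these questions the API group is extensions/v1beta1 and the schema differs slightly):
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # hypothetical Service to route traffic to
            port:
              number: 80
EOF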