REST API for Kubernetes apiserver proxy?

I have a K8s cluster created with kubeadm v1.8.4, running on virtual machines. Now I want to access this K8s cluster using its REST API from my laptop/workstation.
One way I am able to do this is by running the command "kubectl proxy --address= --accept-hosts '.*'". But I have to run this command manually in order to access my cluster from the laptop, and I don't want that.
While going through the docs I found that there is another proxy available, the apiserver proxy. I am trying to use it by following this link (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls), but I get the error "could not get any response" in Postman.
So I was wondering whether I am using this apiserver proxy properly or not. Or is there any other way to send REST requests from my laptop to my cluster on the VMs without manually running the "kubectl proxy" command?

What kubectl proxy does for you is essentially two things.
First, obviously, it proxies your traffic from a localhost port to the Kubernetes API.
Second, it also authenticates you against the cluster, so that the proxied calls do not need any authentication information.
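As a minimal sketch (the port and the resource path here are arbitrary choices, not from the question):

kubectl proxy --port=8001 &
# The proxy injects credentials and handles TLS, so plain HTTP works locally:
curl http://localhost:8001/api/v1/namespaces/default/pods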
To hit the API directly, it's as simple as pointing your client at the right IP:PORT of your VM, but... you need to either ignore TLS issues (not advised) or trust the cluster CA cert. Also, you still need to authenticate, so you need to use appropriate client credentials (i.e. a bearer token).
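As a rough sketch, assuming the apiserver listens on the VM at port 6443 (the kubeadm default) and you have pulled a bearer token and the CA cert out of your kubeconfig; <VM-IP> and the file paths are placeholders:

curl --cacert /path/to/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://<VM-IP>:6443/api/v1/namespaces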
Manually constructing apiserver proxy URLs refers to a different kind of beast, which allows you to proxy traffic to services deployed in your Kubernetes cluster by means of accessing a particular path on the kube API server. So to use that, you need to have access to the API already.
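For illustration, following the format in the linked docs, a proxy URL for a hypothetical service my-service exposing port 8080 in the default namespace would look something like this (the same authentication as above still applies):

curl --cacert /path/to/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://<VM-IP>:6443/api/v1/namespaces/default/services/my-service:8080/proxy/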

Related

How can I access my services through an exposed NodePort when I enable mTLS with Istio?

I previously had a bunch of microservices running fine without mTLS enabled: I could access my frontend at http://192.168.99.100:31001/, with the backend(s) and db running on various other NodePorts.
For the next stage of my project I need to enable mTLS to secure my services via a JWT token controlled by Istio. But when I use istio-auth-demo instead of istio-demo, I can no longer access my services via their endpoints. What do I need to do to fix this? I have written a Gateway, VirtualService, and DestinationRules that I thought might clear up the issue.
Just looking for someone to point me in the right direction.
I am not sure what the issue was here. Maybe it was because I was running it on minikube and some config wasn't supported. I just enabled mTLS on the pods I wanted and ran the regular version of Istio.
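For reference, a sketch of what per-service mTLS looked like with the v1alpha1 authentication Policy API of that Istio generation; the service name my-backend and the namespace are placeholders, not from the question:

kubectl apply -f - <<'EOF'
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: my-backend-mtls
  namespace: default
spec:
  targets:
  - name: my-backend   # enable mTLS only for this service
  peers:
  - mtls: {}
EOF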

Access api-server Kubernetes

I have a Kubernetes api-server IP address, and I'm trying to check whether it's possible to use kubectl to access the environment when you have only the api-server IP address and not the master/node addresses.
Thanks!
It isn't possible to connect straight to the api-server using kubectl without going through the master. Also, in order to log in you must have the user, password, and certificate in most cases.
The scenario will be: connect to the environment using the master, and then every command executed with kubectl will automatically query the api-server.
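For completeness, if you do have valid credentials, pointing kubectl directly at an api-server address is just a kubeconfig entry; a sketch with placeholder names, files, and the usual 6443 port:

kubectl config set-cluster my-cluster --server=https://<api-server-ip>:6443 --certificate-authority=ca.crt
kubectl config set-credentials my-user --client-certificate=client.crt --client-key=client.key
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context
kubectl get nodes   # should now talk to the api-server directly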

How to enable Kubernetes api server instances to connect to external networks via a proxy

The goal is to enable Kubernetes api server to connect to resources on the internet when it is on a private network on which internet resources can only be accessed through a proxy.
Background:
A Kubernetes cluster is spun up using kubespray, containing two apiserver instances that run on two VMs and are controlled via a manifest file. Azure AD is being used as the identity provider for authentication. In order for this to work, the API server needs to initialize its OIDC component by connecting to Microsoft and downloading some keys that are used to verify tokens issued by Azure AD.
Since the Kubernetes cluster is on a private network and needs to go through a proxy before reaching the internet, one approach was to set https_proxy and no_proxy in the kube-apiserver container environment by adding them to the manifest file. The problem with this approach is that, when using Istio to manage access to APIs, no_proxy needs to be updated whenever a new service is added to the cluster. One solution could have been to add a suffix to every service name and set *.suffix in no_proxy. However, it appears that using wildcards in the no_proxy configuration is not supported.
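To make the mechanism concrete, here is the same idea at the shell level; the apiserver container honours these variables the same way (the proxy address and no_proxy list are placeholders; the keys URL is the Azure AD discovery endpoint):

# Only the call to Microsoft goes through the proxy; cluster suffixes are excluded:
https_proxy=http://proxy.internal:3128 \
no_proxy=localhost,.svc,.cluster.local \
curl https://login.microsoftonline.com/common/discovery/keys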
Is there any alternate way for the Kubernetes API server to reach Microsoft without interfering with other functionality?
Please let me know if any additional information or clarifications are needed.
I'm not sure how you would have Istio manage the egress traffic for your Kubernetes masters where your kube-apiservers run, so I wouldn't recommend it. As far as I understand, Istio is generally used to manage (ingress/egress/lb/metrics/etc.) the actual workloads in your cluster, and those workloads generally run on your nodes, not your masters. I mean, the kube-apiserver actually manages the CRDs that Istio uses.
Most people run Docker on their masters, so you can use the proxy environment variables for your containers like you mentioned.
We tried a couple of solutions to avoid having to set http(s)_proxy and no_proxy env variables in the kube-apiserver and constantly whitelist new services in the cluster:
1. Introduce a self-managed proxy server which determines what traffic is forwarded to an internet-connected proxy and what traffic is not proxied (see the sketch after this list):
- Squid seemed to do the trick by defining some ACLs. One issue we had was that node names were not resolved by kube-dns, so we had to add manual entries to the hosts files of containers (not sure how these were handled by default).
- We also tried writing a proxy using Node.js, but it had trouble with HTTPS in some scenarios.
2. Introduce a self-managed identity provider between Azure and our k8s cluster, configured to use the internet-connected proxy, thus avoiding having to configure the proxy in the kube-apiserver.
We ended up going with option 2 as it gave us more flexibility in the long term.
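A hypothetical sketch of the Squid ACL idea from option 1, dropping a config fragment into place (the domains, CIDR, and upstream proxy address are all placeholders):

# Send everything except cluster-internal destinations to the upstream proxy:
cat <<'EOF' >/etc/squid/conf.d/k8s-egress.conf
acl cluster_local dstdomain .cluster.local
acl cluster_local dst 10.0.0.0/8
cache_peer upstream-proxy.example.com parent 3128 0 no-query default
always_direct allow cluster_local
never_direct allow all
EOF
systemctl reload squid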

ejabberd on kubernetes gke doesn't pass healthcheck

I am trying to run ejabberd on Google Kubernetes Engine. As I am using a DaemonSet as the Kubernetes resource to deploy and manage the ejabberd pods, I need to set up a custom health check for the ejabberd container (the check must receive status code 200 to be successful). :5280/admin doesn't work as there is basic auth there; :5222 and :5269 send responses that libcurl cannot parse, so neither of them works.
I tried to configure the API and set a custom health check on an API URL, but that is actually not secure and means more configuration to be done.
Has anyone run into this problem, and what solution can be applied here?

Kubernetes: link a pod to kube-proxy

As I understand it, kube-proxy runs on every Kubernetes node (it is started on the master and on the worker nodes).
If I understand correctly, it is also the 'recommended' way to access the API (see: https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod)
So, since kube-proxy is already running on every node, is the 'recommended' way to start each pod with a new kube-proxy container in it, or is it possible to somehow 'link' to the already-running kube-proxy container?
Originally I was using the URL with $KUBERNETES_SERVICE_HOST and the credentials passed as a Secret, on GKE,
calling
curl https://$USER:$PASSWORD@${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SELECTOR}
and parsing the results; but on K8s deployed on a CoreOS cluster I only seem to be able to authenticate through TLS and certs, and the linked proxy seems like a better way.
So, I'm looking for the most efficient / easiest way to connect to the API from a pod to look up the IP of another pod referred to by a Service.
Any suggestion/input?
There are a couple options here, as noted in the doc link you provided.
The preferred method is using Service Accounts to access the API:
The short description is that your service reads the service-account secrets (token / CA cert) that are mounted into the pod, then injects the token into the HTTP Authorization header and validates the apiserver cert using the CA cert. This somewhat simplifies the description of service accounts, but the link above can provide more detail.
Example using curl and the service-account data inside a pod:
curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
     --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     https://kubernetes/api/v1/namespaces
Another option, mentioned in the link you provided, is to run a side-car container running a "kubectl proxy" in the same pod as your application.
A note of clarification: "kube-proxy" and "kubectl proxy" are not the same thing. kube-proxy is responsible for routing "service" traffic, while kubectl proxy is a CLI command that opens a local proxy to the Kubernetes API.
What happens under the covers when running kubectl proxy is that the kubectl command already knows how to use the service-account data, so it extracts the token/CA cert and establishes the connection to the API server for you, then exposes an interface locally in the pod (which you can use without any auth/TLS).
This might be an easier approach, as it likely requires no changes to your existing application, short of pointing it at the local kubectl proxy container running in the same pod.
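A sketch of what that looks like in practice, assuming the sidecar runs kubectl proxy on its default port 8001 (containers in a pod share localhost):

# In the kubectl proxy sidecar container:
kubectl proxy --port=8001
# From the application container in the same pod: plain HTTP, no credentials needed:
curl http://localhost:8001/api/v1/namespaces/default/endpoints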
One other side note: I'm not sure of your exact use case, but generally it is preferable to use the Service IP / Service DNS name and let Kubernetes handle service discovery, rather than extracting the pod IP itself (the pod IP will change if the pod gets scheduled onto a different machine).
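For example, instead of looking up endpoint IPs through the API at all, a client can usually just call the service by its cluster DNS name (the service name, namespace, and port here are placeholders):

curl http://my-service.default.svc.cluster.local:8080/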