Accessing Kubernetes API via Kubernetes Dashboard Host

So the idea is that the Kubernetes Dashboard accesses the Kubernetes API to give us beautiful visualizations of the different 'kinds' running in the cluster, and the way we reach the Dashboard is through the proxy mechanism of the Kubernetes API, which can then be exposed on a public host for public access.
My question is: is there any possibility that we can use the Kubernetes API proxy mechanism to reach some other service inside the cluster via that publicly exposed address of the Kubernetes Dashboard?

Sure you can. So after you set up your proxy with kubectl proxy, you can access the services with this format:
http://localhost:8001/api/v1/namespaces/kube-system/services/<service-name>:<port-name>/proxy/
For example for http-svc and port name http:
http://localhost:8001/api/v1/namespaces/default/services/http-svc:http/proxy/
Note: this is not necessarily for public access, but rather a proxy that lets you connect from your own machine (say your laptop) to a private Kubernetes cluster.
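Putting it together, a minimal usage sketch with the http-svc example above (service and port names are just the ones used in this example):
# start the API server proxy locally (listens on 127.0.0.1:8001 by default)
kubectl proxy
# in another terminal, reach the service through the proxy
curl http://localhost:8001/api/v1/namespaces/default/services/http-svc:http/proxy/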

You can do it by changing your service to NodePort:
$ kubectl -n kube-system edit service kubernetes-dashboard
You should see the YAML representation of the service. Change type: ClusterIP to type: NodePort and save the file.
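The relevant part of the edited manifest would then look roughly like this (other fields and the exact ports depend on your Dashboard deployment):
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # changed from ClusterIP
  ports:
  - port: 443
    targetPort: 8443    # typical Dashboard container port; keep whatever your manifest already has
    # nodePort: 30443   # optionally pin the node port (must be in the 30000-32767 range)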
Note: Accessing the Dashboard this way is only possible if you choose to install your user certificates in the browser. The certificates used by the kubeconfig file to contact the API server can be used for this.
Please check the following articles and URLs for better understanding:
Stackoverflow thread
Accessing Dashboard 1.7.X and above
Deploying a publicly accessible Kubernetes Dashboard
How to access kubernetes dashboard from outside cluster
Hope it will help you!

Exposing the Kubernetes Dashboard is not secure at all, but your answer is about a K8s API server that needs to be accessible by external services.
The right answer differs according to your platform and infrastructure, but as general points:
[Network Security] Limit public IP reachability to the K8s API server(s) / load balancer (if one exists) with an allowlist mechanism
[Network Security] Private-to-private reachability is better, e.g. a VPN or AWS PrivateLink
[API Security] Limit privileges with a ClusterRole/Role to enforce RBAC; it is better to keep it to the read-only verbs { Get, List } (see the sketch after this list)
[API Security] Enable audit logging for the K8s components to keep track of events and actions
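For illustration, a minimal read-only ClusterRole along those lines might look like the following (the name and resource list are placeholders to adapt):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-read-only                   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods", "services", "nodes"]   # limit to what actually needs to be visible
  verbs: ["get", "list"]                     # read-only verbs only
Bind it to the external identity with a ClusterRoleBinding (or a RoleBinding to scope it to a single namespace).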

Related

Accessing pods through ClusterIP

I want to create a cluster of RESTful web APIs in AWS EKS and be able to access them through a single IP (allowing Kubernetes to load balance requests to each). I have followed the procedure explained in this link and have set up an example nginx deployment as shown in the following image:
The problem is that when I access the example nginx deployment via 172.31.22.183 it works just fine, but when I try to use the cluster IP 10.100.145.181 it does not yield any response; it seems to be unreachable.
What's the purpose of that cluster ip then and how can I use it to achieve what I need?
What's the purpose of that cluster ip then and how can I use it to achieve what I need?
ClusterIP is a local IP that is used internally in the cluster; you can use it to access the application from within the cluster.
The endpoint IP that you got, on the other hand, I think might be external, so you can use it to access the application from outside.
AWS EKS and be able to access them through a single IP (allowing Kubernetes to load balance requests to each)
For this, the best practice is to use an ingress, an API gateway, or a service mesh.
An ingress is the single point where all your requests come in; it load balances and forwards the traffic internally inside the cluster, as sketched below.
Think of the ingress as a load balancer: a single entry point into the cluster.
Ingress : https://kubernetes.io/docs/concepts/services-networking/ingress/
AWS Example : https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/
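As a rough illustration, a minimal Ingress that acts as that single entry point might look like this (names, host, and port are placeholders; it assumes an nginx ingress controller is installed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com          # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-nginx    # the ClusterIP service behind the ingress
            port:
              number: 80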
ClusterIP is an IP that is only accessible inside the cluster. You cannot hit it from outside the cluster unless you use kubectl port-forward.
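For instance (assuming a hypothetical service my-svc listening on port 80 in the default namespace):
kubectl port-forward service/my-svc 8080:80
# the application is then reachable at http://localhost:8080 from your own machine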

What is node/proxy subresource in kubernetes?

You can find mentions of that resource in the following Questions: 1, 2. But I am not able to figure out what is the use of this resource.
Yes, it's true, the link to the documentation provided in the comments might be confusing, so let me try to clarify this for you.
As per the official documentation the apiserver proxy:
is a bastion built into the apiserver
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
runs in the apiserver processes
client to proxy uses HTTPS (or http if apiserver so configured)
proxy to target may use HTTP or HTTPS as chosen by proxy using available information
can be used to reach a Node, Pod, or Service
does load balancing when used to reach a Service
So, answering your question: setting the node/proxy resource in a ClusterRole allows Kubernetes services to access kubelet endpoints on a specific node and path.
As per the official documentation:
There are two primary communication paths from the control plane
(apiserver) to the nodes. The first is from the apiserver to the
kubelet process which runs on each node in the cluster. The second is
from the apiserver to any node, pod, or service through the
apiserver's proxy functionality.
The connections from the apiserver to the kubelet are used for:
fetching logs for pods
attaching (through kubectl) to running pods
providing the kubelet's port-forwarding functionality
Here are also a few running examples of using the node/proxy resource in a ClusterRole:
How to Setup Prometheus Monitoring On Kubernetes Cluster
Running Prometheus on Kubernetes
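For illustration, a ClusterRole in the spirit of those Prometheus setups might grant something like this (the name is a placeholder):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus               # hypothetical name
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy                  # the proxy sub-resource discussed here
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]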
It is hard to find any information about this sub-resource in the official Kubernetes documents.
In the context of RBAC, the format node/proxy can be used to grant access to the sub-resource named proxy of the node resource. The same access can also be granted for pods and services.
We can see it in the output of the available resources from the Kubernetes API server (API Version: v1.21.0):
===/api/v1===
...
nodes/proxy
...
pods/proxy
...
services/proxy
...
Detailed information about usage of the proxy sub-resource can be found in The Kubernetes API reference (depending on the version you use), in the section Proxy Operations for every mentioned resource: pods, nodes, services.
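As a rough example of such a proxy operation (assuming kubectl proxy is running locally on port 8001, <node-name> is replaced with a real node from kubectl get nodes, and your identity has the nodes/proxy permission):
# fetch the kubelet metrics of a node through the apiserver's proxy sub-resource
curl http://localhost:8001/api/v1/nodes/<node-name>/proxy/metrics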

Expose pods in AKS to internet with existing setup

We have a request to expose certain pods in an AKS environment to the internet for 3rd party use.
Currently we have a private AKS cluster with a managed standard SKU load balancer in front using the advanced azure networking (basically Calico) where each Pod gets its own private IP from the Vnet IP space. All private IPs currently route through a firewall via user defined route in order to reach the internet, and vice versa. Traffic between on prem routes over a VPN connection through the azure virtual wan. I don’t want to change any existing routing behavior unless 100% necessary.
My question is, how do you expose specific Pods of an existing private AKS cluster to be accessible from the internet? The entire cluster does not need to be exposed. The issue I foresee is the ephemeral Pods and ever-changing IPs making simple NATing in the firewalls not an option. I've also thought about simply making a new AKS cluster with a public load balancer. The issue here though is security, as traffic must still go through the firewalls, and it likely could with the existing user defined routes.
What is the recommended way to set up the architecture so that certain Pods in AKS are accessible over the internet, while still allowing those Pods to reach the other Pods over the private network? I want to avoid exposing all Pods to the internet.
There are a couple of options that you can use in order to expose your application outside your network, such as a Service of type:
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Also, there is another option, which is to use an Ingress; IMO this is the best way to expose HTTP applications externally, because you can create rules by path and host, which gives you much more flexibility than Services. Ingress only supports HTTP/HTTPS; if you need TCP, then go with Services.
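For example, a minimal sketch of a Service of type LoadBalancer (names and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-api                   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-api                  # must match your pods' labels
  ports:
  - port: 80                     # port exposed by the cloud load balancer
    targetPort: 8080             # port the container listens on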
I'd recommend you take a look at these links to understand in depth how Services and Ingress work:
Kubernetes Services
Kubernetes Ingress
NGINX Ingress
AKS network concepts
Deploy the nginx ingress controller and bind the ingress controller service to a public load balancer. Define Ingress rules for the Kubernetes services that you want to access from the internet. Note that the ingress controller provides the entry point to the services running inside Kubernetes.
Several years later, and I wanted to post an update.
We did successfully implement a scalable ingress option in our private AKS cluster using NGINX as the ingress. The basic flow was:
Public IP > NAT to frontend private IP of NGINX > NGINX path rules that point to your pod/service
Taking a URL for a microservice such as www.example.com/service1 as an example, the public DNS entry you create is what resolves www.example.com to the public IP that you will NAT to the private IP of NGINX. Then, the rules you create within NGINX take the specific /service1 path of the URL and use it to route to the specific service you pointed it at. It behaves much like URL switching in other load balancers; that is really all NGINX is doing for you. In NGINX syntax, this involves specifying a host name (URL) and an associated rule with a backend path and service name. The service name in this example is service1 and the path is / because service1 sits just behind the root.
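A rough sketch of that rule as a Kubernetes Ingress (assuming the nginx ingress controller and a Service actually named service1 on port 80; depending on whether the app expects the /service1 prefix, a rewrite annotation may also be needed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-com              # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80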
Something like this saves cost by using fewer public IPs. For example, you can use a subdomain to easily NAT traffic to a separate test environment. www.test.example.com and www.example.com can point to separate public IPs, which you can NAT to separate AKS clusters running NGINX. In this way, your NGINX rules can be identical because it's only looking for /service1, which hopefully you've mirrored across test and prod environments.
Many ways to do this but a few recommendations from lessons learned
use subdomains to break out multiple environments
standardize your NGINX private front-end IP across environments (make them all end in .100, as an example)
create a standard NGINX ingress template where you really only need to modify the serviceName; your hostName should be static within an environment
have your devs include this and deploy their microservices with Helm rather than relying on an infrastructure team to update NGINX services; otherwise it sort of defeats the DevOps mentality and speed gains

How would I set up Kubernetes ingress for VPN-only access?

I've got a Kubernetes cluster with an nginx ingress set up for public endpoints. That works great, but I have one service that I don't want to expose to the public, but I do want to expose to people who have VPC access via VPN. The people who will need to access this route will not have kubectl set up, so they can't use port-forward to send it to localhost.
What's the best way to setup ingress for a service that will be restricted to only people on the VPN?
Edit: thanks for the responses. As a few people guessed I'm running an EKS cluster in AWS.
It depends a lot on your Ingress Controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer service rather than a public LB, and then set that service and/or ingress to only allow traffic from the IP of the VPN pods.
Since you are talking about "VPC" and assuming you have your cluster in AWS, you probably need to do what #coderanger said.
Deploy a new ingress controller with "LoadBalancer" as the service type and add the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "true".
Check here what are the possible annotations that you can add to a Load Balancer in AWS: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancers
You can also create a security group, for example, and add it to the load balancer with service.beta.kubernetes.io/aws-load-balancer-security-groups.
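As a rough sketch, the ingress controller's Service with those annotations might look like this (name, namespace, ports, and the security group ID are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"                        # provision an internal ELB instead of a public one
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-0123456789abcdef0" # placeholder security group
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443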

Connecting Kube cluster through proxy and clusterIP?

Various Google articles (example: this blog) state that this method (connecting to a Kube cluster through proxy and ClusterIP) isn't suitable for a production environment, but is useful for development.
My question is: why is it not suitable for production? Why is connecting through a NodePort service better than proxy and ClusterIP?
Let's distinguish between three scenarios where connecting to the cluster is required:
Connecting to Kubernetes API Server
Connecting to the API server is required for administrative purposes. The users of your application have no business with it.
The following options are available
Connect directly to the master IP via HTTPS
Kubectl proxy: use kubectl proxy to make the Kubernetes API available on your localhost.
Connecting external traffic to your applications running in the Kubernetes cluster. Here you want to expose your applications to your users. You'll need to configure a Service, and it can be of the following types:
NodePort: Only accessible on the NodeIPs and ports > 30000
ClusterIP: Internal Only. External traffic cannot hit a service of type ClusterIP directly. Requires ingress resource & ingress controller to receive external traffic.
LoadBalancer: Allows you to receive external traffic to one and only one service
Ingress: This isn't a type of service, it is another type of Kubernetes resource. By configuring NGINX Ingress for example, you can handle traffic to multiple ClusterIP services with only one external LoadBalancer.
A developer needs to troubleshoot a pod/service: kubectl port-forward (Port forwarding example). This requires kubectl to be configured on the system, hence it cannot be used by all users of the application.
As you can see from the above explanation, the proxy and port-forwarding options aren't viable for connecting external traffic to the running applications, because they require kubectl to be installed and configured with a valid kubeconfig which grants access into your cluster.