Enable access to Kubernetes Dashboard without kubectl proxy - kubernetes

If I put the relevant config file in place and run kubectl proxy, I can access the Kubernetes dashboard through this URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
However, if I try to access the node directly, without kubectl proxy, I get a 403 Forbidden:
http://dev-master:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Our Kubernetes clusters are hidden inside a private network that users need to VPN into; furthermore, only some of us can talk to the master node of each of our clusters after authenticating to the VPN. As such, running kubectl proxy is a redundant step, and choosing the appropriate config file for each cluster is an additional pain, especially when we want to compare the state of different clusters.
What needs to be changed to allow "anonymous" HTTP access to the dashboard of these already-secured kubernetes master nodes?

You would want to set up a Service (either NodePort or LoadBalancer) for the dashboard pod(s) to expose it to the outside world (well, outside from the PoV of the cluster, which is still an internal network for you).
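For example, a minimal NodePort sketch (assuming the standard dashboard deployment in kube-system; the label, target port and chosen nodePort below are assumptions to adjust for your install):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport   # hypothetical name for this sketch
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard       # label used by the stock dashboard pods (verify with kubectl get pods --show-labels)
  ports:
  - port: 443
    targetPort: 8443                    # HTTPS port of recent dashboard versions
    nodePort: 30443                     # any free port in the 30000-32767 range

After kubectl apply -f, the dashboard would be reachable at https://dev-master:30443/ from inside your VPN; note that the dashboard's own login screen still applies unless you reconfigure it.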

Related

How to allow nodes of one GKE cluster to connect to another GKE cluster

I have a GKE setup with two clusters, say dev and stg, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands against that cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but the IPs keep changing,
so my question is: how can I add the ever-changing default-pool node IPs of the other cluster to the master authorized networks?
EDIT: I think I found the solution: it's not the node IPs but the NAT IP that I have to add to the authorized networks. Assuming that doesn't change, I just need to add the NAT, I guess, unless someone knows a better solution?
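A sketch of what that would look like with gcloud (the cluster name and NAT CIDR are placeholders; note that the flag replaces the whole list, so include any CIDRs you already rely on):

# Add the other cluster's NAT egress IP to the dev master's authorized networks.
gcloud container clusters update dev-cluster \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.7/32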
I'm not sure you are going about this the right way. In Kubernetes, communication happens between services, which represent pods deployed on one or several nodes.
When you communicate with the outside world, you reach an endpoint (an API or a specific port). The endpoint is materialized by a load balancer that routes the traffic.
Only the Kubernetes master cares about nodes, as providers of resources (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a node of a cluster directly instead of going through the standard channels.
At most, you can reach a NodePort service at NodeIP:NodePort.
What you really need to do is configure kubectl in the Jenkins pipeline to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The master IP is available in the Kubernetes Engine console, along with the certificate authority certificate. A good approach is to use a service account token to authenticate with the master. See how to log in to GKE via a service account with a token.
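A hedged sketch of wiring kubectl up that way (the master IP, file names and context names below are placeholders; the kubectl config subcommands themselves are standard):

# Point kubectl at the GKE master using its endpoint and CA certificate,
# then authenticate with a service account token.
kubectl config set-cluster dev-gke \
  --server=https://203.0.113.10 \
  --certificate-authority=dev-ca.crt \
  --embed-certs=true
kubectl config set-credentials dev-deployer --token="$(cat dev-sa-token.txt)"
kubectl config set-context dev-gke --cluster=dev-gke --user=dev-deployer
kubectl config use-context dev-gke
kubectl get pods   # quick smoke test against the dev master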

How to install Kubernetes dashboard on external IP address?

Is there any tutorial for this?
You can expose services and pods in several ways:
expose the internal ClusterIP service through Ingress, if you have that set up.
change the service type to use 'type: LoadBalancer', which will try to create an external load balancer.
If you have external IP addresses on your kubernetes nodes, you can also expose the ports directly on the node hosts; however, I would avoid these unless it's a small, test cluster.
change the service type to 'type: NodePort', which will utilize a port above 30000 on all cluster machines.
expose the pod directly by setting 'hostPort' on a container port in the pod spec.
Depending on your cluster type (kops-created, GKE, EKS, AKS and so on), some of these variants may not be available. Hosted clusters typically support and recommend LoadBalancers, which they charge for, but may or may not have support for NodePort/HostPort.
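As a concrete sketch of the LoadBalancer option (assuming the stock dashboard Service in kube-system; on a hosted cluster this provisions a billable cloud load balancer):

kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "LoadBalancer"}}'
# Wait for the EXTERNAL-IP column to be populated.
kubectl -n kube-system get service kubernetes-dashboard --watch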
Another, more important note: you must ensure you protect the dashboard. Running an unprotected dashboard is a sure way of getting your cluster compromised; this recently happened to Tesla. A decent writeup on various ways to protect yourself was written by Joe Beda of Heptio.

Frontend communication with API in Kubernetes cluster

Inside a Kubernetes cluster I am running one node with two deployments: a React front end and a .NET Core app. I also have a LoadBalancer service for the front-end app. (All working: I can port-forward and see the backend deployment responding.)
Question: I'm trying to get the front end and API to communicate. I know I can do that with an external facing load balancer but is there a way to do that using the clusterIPs and not have an external IP for the back end?
The reason we are interested in this is that it adds one more layer of security: by keeping the API on the vnet only, we remove one more entry point.
If it helps, we are deploying on Azure with AKS. I know it sometimes has some deployment quirks.
Pods running on the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic:
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both pods (frontend and API) are running in the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running in different namespaces, the frontend needs to use a fully qualified domain name to be able to talk to the backend.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works on kubernetes in the docs.
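For instance, from inside the frontend pod (the service name backend-api and namespace api-ns are hypothetical):

# Same namespace: the bare service name resolves.
curl http://backend-api/healthz
# Different namespace: qualify with the namespace, or use the full FQDN.
curl http://backend-api.api-ns/healthz
curl http://backend-api.api-ns.svc.cluster.local/healthz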
The problem with this configuration is the assumption that the frontend app will be reaching the API from inside the cluster. It will not: my app runs in the client's browser and cannot reach services and pods in my cluster.
My cluster will need something like nginx or another external load balancer to allow my client-side API calls to reach my API.
You could alternatively use your front-end app as a proxy, but that is highly inadvisable!
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server, first set up a service account and token for the front-end pod to communicate with the Kubernetes API server by following the steps here, here and here.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible and more secure if external access is not needed for the service. Service type ClusterIP will not have an ExternalIP and the pods can talk to each other using ClusterIP:Port within the cluster.
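A minimal sketch of such a ClusterIP Service for the backend (the name, label and ports are assumptions, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: backend-api        # hypothetical name; this is what the frontend pods would resolve
spec:
  type: ClusterIP          # the default; no external IP is allocated
  selector:
    app: dotnet-backend    # must match the labels on the .NET Core pods
  ports:
  - port: 80               # port other pods call: http://backend-api
    targetPort: 5000       # port the .NET Core container listens on (assumption)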

How does a Kubernetes service work?

Of all the concepts in Kubernetes, I find the way services work the most difficult to understand.
Here is what I imagine right now:
kube-proxy on each node listens for new services/endpoints from the master API server
If there is a new service/endpoint, it adds a rule to that node's iptables
For a NodePort service, an external client has to access the new service through one of the nodes' IPs and the NodePort. The node then forwards the request to the service IP
Is this correct? There are still a few things I'm not clear about:
Do services live within the nodes? If so, can we ssh into nodes and inspect how services work?
Are service IPs virtual IPs and only accessible within nodes?
Most of the diagrams I see online draw services as spanning all nodes, which makes it even harder to picture.
kube-proxy on each node listens for new services/endpoints from the master API server
Kubernetes uses etcd to share the current cluster configuration information across all nodes (including pods, services, deployments, etc.).
If there is a new service/endpoint, it adds a rule to that node's iptables
Internally, Kubernetes has a so-called Endpoints Controller that is responsible for modifying the DNS configuration of the virtual cluster network so that service endpoints are available via DNS (and environment variables).
For a NodePort service, an external client has to access the new service through one of the nodes' IPs and the NodePort. The node then forwards the request to the service IP
Depending on the service type, additional action is taken: e.g. for type NodePort a port is made available on the nodes (backed by an automatically created ClusterIP service), or an external load balancer is created with the cloud provider, etc.
Do services live within the nodes? If so, can we ssh into nodes and inspect how services work?
As explained, services are manifested in the cluster configuration, in the endpoints controller, and in additional pieces such as the ClusterIP services, load balancers, etc. I cannot see a need to ssh into nodes to inspect services; typically, interacting with the cluster API is sufficient to investigate/update the service configuration.
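That said, if you are curious, the rules kube-proxy programs (in its default iptables mode) are visible on any node; a sketch, with my-service as a placeholder:

# List the NAT rules kube-proxy maintains for services.
sudo iptables -t nat -L KUBE-SERVICES -n | head
# Or dump the NAT table and grep for one service's chain.
sudo iptables-save -t nat | grep my-service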
Are service IPs virtual IPs and only accessible within nodes?
Service IPs, like pod IPs, are virtual and accessible from within the cluster network. There is a global allocation map in etcd that maintains the complete list and allows allocating unique new ones. For more information on the networking model, read this blog.
For more detailed information see the docs for kubernetes components and services.

Should the Kubernetes API server be accessible as https://kubernetes:443 from any pod in the cluster?

According to the Kubernetes docs:
The kubernetes service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
For some reason I can't access kubernetes from a non-default namespace, unless I manually create the service there (or use kubernetes.default). Looking at the code, I see the kubernetes service is created in the default namespace. Is it also available in other namespaces? If so, how is that accomplished? How might I debug it?
I've been finding it difficult to Google this, since "kubernetes service" is not really a great search keyword.
For the record, I'm using GKE.
The kubernetes Service is only available in the default namespace.
If you want to access the API server using this service from another namespace, you need to use kubernetes.default.
Services are assigned a DNS A record for a name of the form
my-svc.my-namespace.svc.cluster.local
This resolves to the cluster IP of the Service.
That means you need to use kubernetes.default.svc.cluster.local.
You can skip svc.cluster.local, so to access the kubernetes Service you need to provide kubernetes.default.
If you are accessing it from the default namespace, you can skip the namespace part as well.
See the details here.
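You can verify the resolution from any pod, for example with a throwaway busybox pod (the image choice is just a convenient one for nslookup):

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default
# Should resolve to the cluster IP of the kubernetes service in the default namespace.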
Also,
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster.
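A sketch of using those mounted credentials from inside any pod (the paths are the standard mount locations; RBAC must allow whatever you call):

# The service account token and cluster CA are auto-mounted at this path.
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $(cat "$SA/token")" \
     https://kubernetes.default.svc/api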