ThingsBoard running on k8s - Kubernetes

I am attempting to set up ThingsBoard on a Google Kubernetes Engine (GKE) cluster following the documentation here.
Everything is set up and running, but I can't figure out which IP I should use to reach the login page. None of the external IPs I can find appear to work.

Public access is set up using an Ingress here https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/thingsboard.yml#L571-L607
By default I think GKE sets up ingress-gce, which uses Google Cloud Load Balancer rules to implement the Ingress system, so you would need to find the IP of the load balancer it provisioned. That said, the Ingress doesn't specify a hostname-based routing rule, so it might not work well if you have other Ingresses in play.
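A quick way to find the address GKE assigned is to query the Ingress status. The resource name below is an assumption; adjust it (and the namespace) to match whatever the linked manifest created:

    # list all Ingresses and the addresses assigned to them
    kubectl get ingress --all-namespaces
    # or extract just the load balancer IP of a specific Ingress
    kubectl get ingress thingsboard -o jsonpath='{.status.loadBalancer.ingress[0].ip}'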

Related

What do I need to provide to make calls from my k8s cluster?

I have a Kubernetes cluster with my application running inside of it, and I have a host machine that my application needs to access.
All the infrastructure is located inside a VPN network.
How can I set up egress to let my application send requests from the cluster to this host machine? (Are Kubernetes Network Policies an appropriate way to handle this, and would they actually solve the problem?)
(Sorry if this is too obvious a question; I haven't found any working solutions for it yet.)
I'm not sure if I get your question right, but by default no network connectivity is blocked by Kubernetes. Assuming you haven't set up any NetworkPolicies, all ingress and egress communication is open and nothing will block access, at least from the Kubernetes perspective.
However, if you have only deployed your application but haven't exposed it yet (with an Ingress or a Service of type LoadBalancer), you will not be able to reach it from outside the cluster. If you're running on-prem you will need to install MetalLB or a similar component that allows you to create Services of type LoadBalancer. The same goes for Ingress, as the Ingress controller needs some way to be reached in the first place.
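For completeness: if you later introduce NetworkPolicies that restrict egress, a policy along these lines would re-allow traffic to the host machine. The label and CIDR below are placeholders, and the policy only takes effect with a CNI plugin that enforces NetworkPolicies (e.g. Calico):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-to-vpn-host
    spec:
      podSelector:
        matchLabels:
          app: my-app              # hypothetical label on your application pods
      policyTypes:
        - Egress
      egress:
        # note: once a pod is selected by an egress policy, all other egress
        # (including DNS) must be allowed explicitly
        - to:
            - ipBlock:
                cidr: 10.0.0.5/32  # hypothetical VPN address of the host machine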

Connecting to many kubernetes services from local machine

From my local machine I would like to be able to port forward to many services in a cluster.
For example, I have services named serviceA-type1, serviceA-type2, serviceA-type3, etc. None of these services are accessible externally, but they can be accessed using the kubectl port-forward command. However, there are so many services that port-forwarding to each one is unfeasible.
Is it possible to create some kind of proxy service in Kubernetes that would allow me to connect to any of the serviceA-typeN services by specifying them in a URL? I would like to port-forward to the proxy service from my local machine and have it forward requests to the serviceA-typeN services.
So for example, if I have set up a port forward on 8080 to this proxy, then the URL to access the serviceA-type1 service might look like:
http://localhost:8080/serviceA-type1/path/to/endpoint?a=1
I could maybe create a small application that would do this but does kubernetes provide this functionality already?
The kubectl proxy command provides this functionality.
Read more here: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
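As a sketch, assuming the services live in the default namespace and expose a port named or numbered 80:

    # open a proxy to the API server on local port 8080
    kubectl proxy --port=8080
    # in another terminal, reach a service through the apiserver proxy URL scheme
    curl 'http://localhost:8080/api/v1/namespaces/default/services/serviceA-type1:80/proxy/path/to/endpoint?a=1'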
A good option is to use an Ingress to achieve this.
Read more about what Ingress is.
Main concepts are:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In Kubernetes there are four types of Services, and the default type is ClusterIP, which means the service is only reachable from within the cluster. Ingress exposes your services outside the cluster, so it acts as the entry point into your cluster.
If you plan to move to the cloud with an Ingress-based setup (and I assume you will, since most applications are heading to the cloud), it will be compatible with managed cloud load balancers, which will eventually save time and make migrating from your local environment easier.
To start with ingress you need to install an Ingress controller first.
There are different ingress controllers which you can use.
You can start with the most common one, ingress-nginx, which is supported by the Kubernetes community.
If you're using minikube, it can be enabled as an addon - see here.
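On minikube, enabling the addon is a single command:

    minikube addons enable ingress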
Once you have an ingress controller installed in your cluster, you need to create Ingress rules for it to route traffic. Simple fanout is an example with two services and path-based routing to them.
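A fanout for the services from this question might look like the sketch below. The service names, ports, and the ingress-nginx rewrite annotation are assumptions to adjust to your setup:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: servicea-fanout
      annotations:
        # strip the service prefix before forwarding (ingress-nginx specific)
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      ingressClassName: nginx
      rules:
        - http:
            paths:
              - path: /servicea-type1(/|$)(.*)
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: servicea-type1   # hypothetical; service names must be lowercase
                    port:
                      number: 80
              - path: /servicea-type2(/|$)(.*)
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: servicea-type2   # hypothetical; service names must be lowercase
                    port:
                      number: 80

With a port-forward to the ingress controller on 8080, a request to http://localhost:8080/servicea-type1/path/to/endpoint?a=1 would then reach servicea-type1 as /path/to/endpoint?a=1.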

How to enable Kubernetes api server instances to connect to external networks via a proxy

The goal is to enable the Kubernetes API server to connect to resources on the internet when it is on a private network from which internet resources can only be accessed through a proxy.
Background:
A Kubernetes cluster is spun up using Kubespray, containing two apiserver instances that run on two VMs and are controlled via a manifest file. Azure AD is used as the identity provider for authentication. For this to work, the API server needs to initialize its OIDC component by connecting to Microsoft and downloading keys that are used to verify tokens issued by Azure AD.
Since the Kubernetes cluster is on a private network and needs to go through a proxy before reaching the internet, one approach was to set https_proxy and no_proxy in the Kubernetes API server container environment by adding them to the manifest file. The problem with this approach is that, when using Istio to manage access to APIs, no_proxy needs to be updated whenever a new service is added to the cluster. One solution could have been to add a suffix to every service name and set *.suffix in no_proxy. However, it appears that wildcards are not supported in the no_proxy configuration.
Is there an alternate way for the Kubernetes API server to reach Microsoft without interfering with other functionality?
Please let me know if any additional information or clarifications are needed.
I'm not sure how you would have Istio manage the egress traffic for the Kubernetes masters where your kube-apiservers run, so I wouldn't recommend it. As far as I understand, Istio is generally used to manage (ingress/egress/load balancing/metrics/etc.) actual workloads in your cluster, and those workloads generally run on your nodes, not your masters. After all, the kube-apiserver is what manages the CRDs that Istio uses.
Most people use Docker on their masters, so you can set the proxy environment variables for those containers like you mentioned.
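As a sketch, the environment section of the kube-apiserver static pod manifest might look like the excerpt below; the proxy address and no_proxy entries are placeholders:

    # excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative)
    spec:
      containers:
        - name: kube-apiserver
          env:
            - name: HTTPS_PROXY
              value: "http://proxy.internal.example:3128"   # hypothetical proxy
            - name: NO_PROXY
              value: "127.0.0.1,localhost,10.0.0.0/8,.svc,.cluster.local"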
We tried a couple of solutions to avoid having to set http(s)_proxy and no_proxy env variables in the kube-apiserver and constantly whitelist new services in the cluster...
Introduce a self-managed proxy server which determines which traffic is forwarded to an internet-connected proxy and which traffic is not proxied:
Squid seemed to do the trick by defining some ACLs (see the config sketch at the end of this answer). One issue we had was that node names were not resolved by kube-dns, so we had to add manual entries to the hosts files of containers (not sure how these were handled by default).
We also tried writing a proxy using Node.js, but it had trouble with HTTPS in some scenarios.
Introduce a self-managed identity provider between Azure and our k8s cluster, configured to use the internet-connected proxy, thus avoiding having to configure the proxy in the kube-apiserver.
We ended up going with option 2, as it gave us more flexibility in the long term.
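For reference, the Squid split described in option 1 can be expressed with a handful of directives; the hostnames and domains below are placeholders:

    # /etc/squid/squid.conf (sketch)
    # domains that should be reached directly, without the upstream proxy
    acl internal dstdomain .cluster.local .corp.example
    # the internet-connected upstream proxy
    cache_peer upstream-proxy.corp.example parent 3128 0 no-query default
    # internal traffic goes direct; everything else through the upstream peer
    always_direct allow internal
    never_direct allow all
    http_access allow all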

Whitelist traffic to MySQL from a Kubernetes service

I have a Cloud SQL MySQL instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container into the pod with your application that allows for traffic to be passed to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
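A minimal sketch of the sidecar pattern, using the v1 proxy image from the GKE guides; the application image, instance connection name, and credentials setup are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: app
              image: my-app:latest            # hypothetical application image
              env:
                - name: DB_HOST
                  value: "127.0.0.1"          # the proxy listens on localhost
                - name: DB_PORT
                  value: "3306"
            - name: cloudsql-proxy
              image: gcr.io/cloudsql-docker/gce-proxy:1.17
              command:
                - /cloud_sql_proxy
                # hypothetical instance connection name
                - -instances=my-project:us-central1:my-instance=tcp:3306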
If you prefer something a little more hands-on, this codelab will walk you through taking an app from running locally to running on a Kubernetes cluster.
I am using Google Cloud Platform, so my solution was to add the external IP of the Google Compute Engine VM instance (the cluster node) to the whitelist.
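If you go this route, one way to list the nodes' external IPs with kubectl (assuming you have cluster access) is:

    kubectl get nodes -o wide
    # or just the external addresses
    kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'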

How to connect an AWS Load Balancer to nginx ingress controller with Kubernetes deployed on EC2 with kubeadm?

I have installed a Kubernetes cluster on AWS using kubeadm. I understand that this will not fall under an AWS-managed deployment; I am trying to follow the bare-metal fashion of installing Kubernetes.
Everything works fine with NodePort. I want to know if I can connect an AWS load balancer to this setup, and if yes, how?
I've thoroughly researched online and found this solution wherein we can specify an external IP address for a service. But AWS load balancers do not have fixed IP addresses.
I am using the NGINX Ingress Controller, and everything works fine with ClusterIP. How can I expose the application using an AWS load balancer?
Can anyone help?
Use the following approach to integrate an AWS load balancer with the nginx-ingress-controller running in a Kubernetes cluster created on EC2.
Step 1: Set up/install the nginx-ingress-controller.
Step 2: Create an extra Service to expose the nginx-ingress-controller on fixed ports using NodePort.
The extra Service exposing the nginx-ingress-controller externally will look like the following if you installed the nginx-ingress-controller in the kube-system namespace using Helm.
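The original answer showed this Service as a screenshot; a sketch of it might look like this, with the nodePort values and selector labels as assumptions to match your Helm release:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller-nodeport
      namespace: kube-system
    spec:
      type: NodePort
      selector:
        app: nginx-ingress        # hypothetical labels; match your controller pods
        component: controller
      ports:
        - name: http
          port: 80
          targetPort: 80
          nodePort: 30080
        - name: https
          port: 443
          targetPort: 443
          nodePort: 30443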
Step 3: Go to the Load Balancers section of EC2 in the AWS Console, then click Create Load Balancer; for simplicity, choose the Classic Load Balancer.
Set up the port and protocol configuration so that the load balancer's listeners on ports 80 and 443 forward to the NodePort values from Step 2 (30080 and 30443 in the sketch above).
Then assign your security group, configure the health check for the instances, add your cluster instances from the list, add tags, review, and click Create. Done.
Visit the load balancer's DNS name and you will find your nginx-ingress-controller exposed.