Kubernetes LoadBalancer with both HTTPS and TCP traffic

I have an HTTP LoadBalancer on Google Kubernetes Engine that is configured with nginx-ingress to serve website traffic. I would now also like to expose a database (PostgreSQL) on port 5432. How do I do that without the cost of a separate LoadBalancer? nginx-ingress seems to only support HTTP traffic.

EDIT:
Actually, never mind; see https://github.com/nginxinc/kubernetes-ingress/blob/c525f568e5b2c5fb234706c67c9a453d4248ee9f/examples/customization/nginx-config.yaml#L35 for how to add a main snippet via the NGINX Ingress ConfigMap. Look up how to use NGINX as a TCP proxy and put that snippet there.
Configuration snippets available via annotations don't let you add a whole server block, which means the standard ingress controller deployment can't do what you are asking. You'll have to put together a custom deployment yourself to add another server snippet.
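For reference, a rough sketch of what that could look like with the nginxinc controller's ConfigMap, assuming its main-snippets key (the ConfigMap name/namespace and the postgres Service address are placeholders, and you still have to expose port 5432 on the controller's pod and Service):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  main-snippets: |
    # plain NGINX stream (TCP) proxy: listen on 5432 and pass to the database Service
    stream {
        upstream postgres {
            server postgres.default.svc.cluster.local:5432;
        }
        server {
            listen 5432;
            proxy_pass postgres;
        }
    }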

Related

AWS EKS websocket based app - good approach?

I've just deployed a websocket-based echo-server on AWS EKS. It's running stably, but while searching for implementation details I only found articles about the nginx ingress controller or the AWS Application Load Balancer, and a lot of trouble with them.
Do I miss anything in my current, vanilla config? Do I need the AWS ALB or nginx ingress controller?
Thank you for all the replies.
All the best.
Do I miss anything in my current, vanilla config?
You probably exposed your echo-server app using a ClusterIP or NodePort service, which is fine if you only need to access your app from inside the cluster (ClusterIP) or via your node's IP address (NodePort).
Do I need the AWS ALB or nginx ingress controller?
They are different things, but they share a common goal: to make your websocket app available externally and distribute traffic based on defined L7 routing rules. They are a good solution if you have multiple deployments, so you need to decide whether you need some kind of Ingress Controller. If you are planning to deploy your application into production you should consider using them, but a service of type LoadBalancer may well be enough.
EDIT:
If you are already using a service of type LoadBalancer, your app is already available externally. An ingress controller provides additional possibilities to configure L7 traffic routing into your cluster (ingress controllers often use a LoadBalancer under the hood). Check this answer for more details about the differences between LoadBalancer and Ingress.
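If a plain LoadBalancer Service turns out to be enough, a minimal sketch could look like this (the names, labels and ports are assumptions about your echo-server deployment):

apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  type: LoadBalancer        # EKS provisions an external load balancer for this Service
  selector:
    app: echo-server        # must match your deployment's pod labels
  ports:
  - name: ws
    port: 80                # port exposed on the load balancer
    targetPort: 8080        # port your websocket app actually listens on (assumed)

WebSocket connections start as ordinary HTTP upgrades, so a TCP (L4) load balancer like this generally passes them through without extra configuration.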
Also check:
Choosing the Right Load Balancer on Amazon: AWS Application Load Balancer vs. NGINX Plus
Configuring Kubernetes Ingress on AWS? Don’t Make These Mistakes
WebSocket - Deploy to Kubernetes
LoadBalancer vs Ingress

Kubernetes ingress controller expose to specific port

My institution has firewall settings that block most external ports. Currently I have an internal Linux virtual machine, for example http://abc.xyz:5555 (this link can only be accessed from the internal network), and a NetScaler set up by the admin forwards the internal link to a publicly available link: https://def.edu.
Now I have multiple web servers that use ports like 5556, 5557, and 5558. I want to set up a Kubernetes ingress so that all traffic goes to the ingress controller first, and the ingress then forwards it to my multiple web services.
I only have port 5555 available, but all Ingress tutorials seem to support only HTTP on port 80 and HTTPS on port 443. My question is: can I set up the Ingress controller host as http://abc.xyz:5555? Or should I go for another approach, as the docs say: "An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer." If so, what terms/techniques should I use?
I suggest using an ingress, since each LoadBalancer gets its own external IP assigned. You can specify a custom port and protocols (TCP, UDP, HTTP). I worked with nginx, but the documentation seemed outdated (last checked last week), so we are currently using Traefik. The web dashboard was also a big help in debugging it.
How we solved it:
Install Traefik via Helm with custom values so it listens on other ports besides 80 and 443: add custom entryPoints in your values.yaml (a sketch follows these steps) and install Traefik with:
helm install --values values.yaml stable/traefik
Install your ingress http/tcp/udp routes
Port-forward the web dashboard and go to http://localhost:9000/dashboard
Please see the official docs for more detailed steps: https://docs.traefik.io/getting-started/install-traefik/#use-the-helm-chart
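A minimal sketch of such a values.yaml, assuming the Traefik 2 Helm chart (the entry-point name and port 5556 are only examples, and key names differ slightly for the older stable/traefik chart):

# values.yaml - keep the default web/websecure entry points and add a custom one
ports:
  web:
    port: 8000
    exposedPort: 80
  websecure:
    port: 8443
    exposedPort: 443
  backend5556:              # custom entry point, name is arbitrary
    port: 5556              # port Traefik listens on inside the pod
    exposedPort: 5556       # port exposed on the Traefik Service
    protocol: TCP
    expose: true            # publish it on the Service

Your IngressRoute/IngressRouteTCP resources can then reference the backend5556 entry point.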

Kubernetes nginx ingress accessible outside the cluster without using a service

Apologies if this has been answered before, but I am a little confused about how Ingress Nginx works together with services.
I am trying to implement an nginx ingress in my Kubernetes environment.
So far I have an ingress-nginx-controller-deployment set up, as well as a deployment and service for the default backend. I still need to create my actual Ingress resources, the ingress-nginx-controller-service, and my backend.
curl <NodeIP>
returns "default backend 404" on port 80 for the Node which the ingress-nginx-controller-deployment is deployed on.
However, my understanding is that exposing anything outside the cluster requires a service (NodePort/LoadBalancer), which is the duty of the ingress-nginx-controller-service.
My question is how is this possible, that I can access port 80 for my Node on my browser, which is outside the cluster?
Could I then deploy my backend app on port 80 the same way the above is done?
I feel like I am misunderstanding a key concept here.
default backend image: gcr.io/google_containers/defaultbackend:1.0
nginx-controller image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
I think you missed a really good article about how nginx-ingress exposes itself to the world!
In short:
If you're using hostNetwork: true, then you bypass the Kubernetes service network (kube-proxy). Simply put, you bypass the container and orchestration network and use the host network directly, so the node running the nginx-ingress container exposes port 80 to the world.
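As an illustration, a hostNetwork-based controller DaemonSet might look roughly like this (using the image version from the question; names and the default backend reference are assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true                    # bind straight to the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working with hostNetwork
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80                # now reachable as <NodeIP>:80 without any Service
        - containerPort: 443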
There are other ways to expose the nginx port outside the cluster (NodePort, or a network load balancer such as MetalLB).

What's the exact flow of an outside request coming into a k8s pod via Ingress?

Hi all,
I understand k8s' NodePort and ClusterIP service types well.
But I am very confused about the Ingress way: how does a request reach a pod in k8s via Ingress?
Suppose the K8s master IP is 1.2.3.4; after the Ingress is set up, it can connect to a backend service (e.g. myservice) on a port (e.g. 9000).
Now, how can I reach this myservice:9000 from outside, i.e. through 1.2.3.4? There is no corresponding entry port open on the 1.2.3.4 machine.
Many docs say to visit this via 'foo.com' as configured in the Ingress YAML file. But that is confusing, because foo.com obviously needs DNS; it's not magic that lets you invent any xxx.com you like and have it map to your machine as a real website!
The key part of the picture is the Ingress Controller. It's an instance of a proxy (could be nginx or haproxy or another ingress type) and runs inside the cluster. It acts as an entrypoint and lets you add more sophisticated routing rules. It reads Ingress Resources that are deployed with apps and which define the routing rules. This allows each app to say what the Ingress Controller needs to do for routing to it.
Because the controller runs inside the cluster, it needs to be exposed to the outside world. You can do this with NodePort, but if you're using a cloud provider then it's more common to use LoadBalancer. This gives you an external IP and port that reaches the Ingress controller, and you can point DNS entries at that. If you do point DNS at it, you then have the option to use routing rules based on DNS (such as using different subdomains for different apps).
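For example, a minimal Ingress resource telling the controller to route a hostname to myservice:9000 could look like this (the hostname is a placeholder you would point DNS at):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
  - host: myservice.example.com       # DNS for this name points at the controller's external IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myservice           # existing Service inside the cluster
            port:
              number: 9000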
The article 'Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?' has some good explanations and diagrams, including one for Ingress.

Accessing a webpage hosted on a pod

I have a deployment that hosts a website on port 9001 and a service attached to it. I want to allow anyone (from outside the cluster) to connect to that site.
Any help would be appreciated.
I want to allow anyone (from outside cluster) to be able to connect to that site
There are many ways to do this using Kubernetes services to expose port 9001 of the website to the outside world (a minimal sketch follows the list):
Service type LoadBalancer if you have an external, cloud-provider's load-balancer.
ExternalIPs. The website can be hit at ExternalIP:Port.
Service type NodePort if the cluster's nodes are reachable from the users. The website can be hit at NodeIP:NodePort.
Ingress controller and ingress resource.
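A minimal sketch of the LoadBalancer variant (the name, selector and ports are assumptions; switch type to NodePort if no cloud load balancer is available):

apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  type: LoadBalancer        # or NodePort if there is no external load balancer
  selector:
    app: website            # must match the deployment's pod labels
  ports:
  - port: 9001              # port exposed externally
    targetPort: 9001        # port the pod's web server listens on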
As you wrote that this is not a cloud deployment, you need to consider how to expose this to the world in a decent fashion. First and foremost, create a NodePort type service for your deployment. With this, your nodes will expose that service on a high port.
Depending on your network, at this point you either need to configure a load balancer in your network to forward traffic for some IP:80 to your nodes' high NodePort, or, for example, deploy HAProxy in a DaemonSet with hostNetwork: true that proxies port 80 to your NodePort, as sketched below.
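A rough sketch of that HAProxy approach (all names, the NodePort value 30080 and the HAProxy version are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-edge-config
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 5s
      timeout client  60s
      timeout server  60s
    frontend http_in
      bind *:80
      default_backend website_nodeport
    backend website_nodeport
      # 127.0.0.1 works because the pod shares the node's network namespace;
      # if your kube-proxy does not expose NodePorts on localhost, use the node IP instead
      server local 127.0.0.1:30080
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy-edge
spec:
  selector:
    matchLabels:
      app: haproxy-edge
  template:
    metadata:
      labels:
        app: haproxy-edge
    spec:
      hostNetwork: true               # bind directly to port 80 on every node
      containers:
      - name: haproxy
        image: haproxy:2.8
        volumeMounts:
        - name: config
          mountPath: /usr/local/etc/haproxy   # default config location in the official image
      volumes:
      - name: config
        configMap:
          name: haproxy-edge-config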
A bit more complexity can be added by deploying an Nginx Ingress Controller (exposed as above) and using Ingress resources so the Ingress Controller exposes all your services, without having to fiddle with NodePort/LB/HAProxy for each of them individually any more.