hostNetwork pod - only one container should be exposed to the internet - Kubernetes

These are my first steps in the Kubernetes world, so excuse me if my terms are not used quite right.
I am running a single-node Kubernetes setup without an external load balancer, and I have deployed a pod with two containers: a MySQL database and PowerDNS.
PowerDNS should expose port 53 to the internet, while MySQL should expose its port only within the cluster.
Therefore I set the following (sketched below):
"hostNetwork: true" for the pod
"hostPort" for the PowerDNS container and not for MySQL
a Service for port 3306 with "type: ClusterIP"
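A minimal sketch of roughly what I set up (image names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: powerdns
spec:
  hostNetwork: true          # the whole pod shares the host's network namespace
  containers:
  - name: powerdns
    image: powerdns          # placeholder image name
    ports:
    - containerPort: 53
      hostPort: 53
  - name: mysql
    image: mysql:5.7
    ports:
    - containerPort: 3306    # no hostPort here, but it is still reachable from outside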
Now everything is running. PowerDNS can connect to MySQL and is exposed on port 53 to the internet.
But contrary to my assumption, the MySQL database is exposed to the internet too.
Could anyone give me a hint as to what I am doing wrong?

Using hostNetwork: true puts your whole pod (all containers in it) into the host's network namespace, so every port any container listens on is bound on the host, which is exactly what you identified as problematic.
First of all, you should consider moving the mysql container out of your pod. Multiple containers in one pod are meant to group containers that work as one unit (e.g. an application and a background process closely communicating with each other).
Think in services. Your PowerDNS service is a service consumer itself, as it requires a database, something the PowerDNS application doesn't provide. So you want another service for MySQL. Take a look at the documentation (one, two) for StatefulSets, as it uses MySQL as an example (running databases on Kubernetes is one of the more complex tasks).
Create a ClusterIP service for this. ClusterIP services are only available from within the cluster (your database is an internal service, so that's what you want).
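A minimal sketch of such a Service, assuming the MySQL pod carries the label app: mysql (the name and label are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP            # the default; only reachable from within the cluster
  selector:
    app: mysql               # assumed label on the MySQL pod
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
PowerDNS can then reach the database at mysql:3306 through the cluster DNS.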
This way, your PowerDNS pod will only contain one container that you can bind to your host network. But using hostNetwork: true is not a good idea in general: you won't be able to create multiple instances of your application (in case PowerDNS needs to scale). It's fine for first steps, though. A load balancer in front of your setup would be better; you can use NodePort services to make your service available on a high-numbered port which the load balancer proxies connections to.
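For illustration, the PowerDNS pod would then shrink to a single container bound to the host network (image name is a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: powerdns
spec:
  hostNetwork: true    # now only PowerDNS binds ports on the host
  containers:
  - name: powerdns
    image: powerdns    # placeholder image name
    ports:
    - containerPort: 53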

Related

Accessing pods directly VS ClusterIP service to access all exposed ports

I would like to expose one pod that has a lot of ports (including a big port range, thousands of them) to the cluster members (namely as a ClusterIP service). Manually listing them in the service definition is not really possible (Kubernetes does not support exposing port ranges yet).
The container in the pod will run Samba AD DC (here I am just showing that there really are a lot of ports): https://wiki.samba.org/index.php/Samba_AD_DC_Port_Usage
I have been trying to find out how to expose the whole pod (like a DMZ on a service), if that is possible at all. I am not sure this is the best approach to the goal I have (exposing the whole pod to the internal cluster network).
To summarize the question: is there any way to expose all the ports (or the whole pod, let's say) to the internal cluster network (or to any network of choice) using a Service?
I am not sure if I am missing something that can be done better in this regard.

How can I directly access stateful pods' ports on localhost Kubernetes (for example Cassandra) - what routing is needed?

I want to build a testing environment using Kubernetes on localhost (can be Docker Desktop, minikube, ...). I want to connect my client to 3 instances of Cassandra inside a localhost K8s cluster. Cassandra is just an example; it could equally be etcd, redis, ... or any StatefulSet.
I created a StatefulSet with 3 replicas on the same port on localhost Kubernetes.
I created Services to expose each pod.
What should I do next to route traffic using three different names (cassandra-0, cassandra-1, cassandra-2) and the same port? This is required by the driver; I cannot forward individual ports, since the driver requires all instances to run on the same port.
So it should be cassandra-0:9042, cassandra-1:9042, cassandra-2:9042.
To show this, I made a drawing to explain it graphically.
I want to achieve the red-line connection using something ... I do not know what to use in K8s - maybe Services.
I would say you should define a NodePort service and send your requests to localhost:<nodePort>:
spec:
  type: NodePort      # required for nodePort to take effect
  ports:
  - protocol: TCP
    port: 8081        # port of the service inside the cluster
    targetPort: 8080  # port the container listens on
    nodePort: 32000   # port opened on every node (30000-32767 by default)
Just change your ports so they fit your needs.
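Since the driver needs each instance under its own name, one possible sketch is one NodePort service per pod, selecting on the pod-name label that the StatefulSet controller sets automatically (the service name and port numbers are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0-external                         # hypothetical name; repeat for cassandra-1 and cassandra-2
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-0  # label set by the StatefulSet controller
  ports:
  - protocol: TCP
    port: 9042
    targetPort: 9042
    nodePort: 32000                                  # pick a distinct nodePort per instance
Note that the three instances then sit on three different host ports, so to present them all as name:9042 you would still need name resolution plus port mapping (e.g. /etc/hosts entries and a local proxy) in front of them.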
If you have already created a service with exposed ports, get all endpoints and try to turn traffic towards them:
kubectl get endpoints -A

How to access another container in a pod

I have set up a multi-container pod consisting of multiple interrelated microservices. With docker-compose, if I wanted to access another container in the compose file, I just used the name of the service.
I am trying to do the same thing with Kube, without having to create a pod per microservice.
I tried the name of the container, and the name suffixed with .local; neither worked, and I got an UnknownHostException.
My preference is also to have all the microservices running on port 80, but in case that does not work within a single pod, I also tried having each microservice run on its own port and using localhost. That didn't work either; it simply said connection refused (as opposed to UnknownHostException).
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication
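As a minimal sketch (image names, ports and the environment variable are placeholders), two containers in one pod can reach each other over localhost as long as their ports don't clash:
apiVersion: v1
kind: Pod
metadata:
  name: microservices
spec:
  containers:
  - name: service-a
    image: my-service-a:latest         # placeholder image
    ports:
    - containerPort: 8080
  - name: service-b
    image: my-service-b:latest         # placeholder image
    ports:
    - containerPort: 8081              # must differ from 8080; the port space is shared
    env:
    - name: SERVICE_A_URL              # hypothetical variable read by service-b
      value: "http://localhost:8080"   # same network namespace, so localhost works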

How to access Kubernetes pod in local cluster?

I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an exemplary endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kubectl proxy is the way to go. However, when I run kubectl proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straightforward solution and/or a good tutorial. It seems to be a very typical use case that lacks a clear path, though.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
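A minimal sketch of such a NodePort service, assuming the deployment's pods carry the label app: hello and listen on 10001:
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello            # assumed pod label from the deployment
  ports:
  - protocol: TCP
    port: 10001
    targetPort: 10001
    nodePort: 30001       # omit this line to let Kubernetes auto-assign a port
The endpoint is then reachable at http://<any-node-ip>:30001/hello.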
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).
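For the Ingress route, a minimal sketch, assuming a Service named hello on port 10001 and an external DNS name hello.example.com pointing at the node IPs:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
  - host: hello.example.com    # assumed external DNS name (A/AAAA records to the nodes)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello        # assumed Service in front of the deployment
            port:
              number: 10001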

Hitting an endpoint of HeadlessService - Kubernetes

We wanted pod names to be resolved to IPs in order to configure the seed nodes in an Akka cluster. This was happening by using the concept of a headless service and StatefulSets in Kubernetes. But how do I expose a headless service externally, so I can hit an endpoint from outside?
It is hard to expose a Kubernetes headless service to the outside, since this would require some complex TCP proxies. The reason is that a headless service is only a DNS record with an IP for each pod, and these IPs are only reachable from within the cluster.
One solution is to expose the pods via node ports, which means the ports are opened on the host itself. Unfortunately this makes service discovery harder, because you don't know which host has a pod scheduled on it.
You can set up node ports via:
the services: https://kubernetes.io/docs/user-guide/services/#type-nodeport
or directly in the Pod by defining spec.containers[].ports[].hostPort
Another alternative is to use a LoadBalancer, if your cloud provider supports one. Unfortunately you cannot address each instance individually, since they share the same IP. This might not be suitable for your application.
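As a hedged sketch, a regular NodePort service can sit next to the headless one and select the same pods (the name, label and ports here are assumptions; 2552 is the classic Akka remoting default):
apiVersion: v1
kind: Service
metadata:
  name: akka-external    # hypothetical; the headless service stays untouched
spec:
  type: NodePort
  selector:
    app: akka            # assumed label shared with the headless service's pods
  ports:
  - protocol: TCP
    port: 2552
    targetPort: 2552
    nodePort: 32552      # hypothetical node port
The headless service keeps handling internal seed-node discovery, while the NodePort service provides an externally reachable entry point (without per-pod addressing, as noted above).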