How to access another container in a pod - kubernetes

I have set up a multi-container pod consisting of multiple interrelated microservices. In docker-compose, if I wanted to access another container in the compose file, I could just use the name of the service.
I am trying to do the same thing with Kubernetes, without having to create a pod per microservice.
I tried the name of the container, and also suffixing it with .local; neither worked and I got an UnknownHostException.
My preference is also to have all the microservices running on port 80, but in case that does not work within a single pod, I also tried giving each microservice its own port and using localhost. That didn't work either: it simply said connection refused (as opposed to UnknownHostException).

The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication
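As a concrete sketch, such a pod could look like the following (the pod name, images, and ports are placeholders, not from the question). Both containers share one network namespace, so each reaches the other via localhost, and they must pick distinct ports:

```yaml
# Hypothetical two-container pod: "web" can reach "api" at localhost:8081,
# and "api" can reach "web" at localhost:8080, because they share an IP.
apiVersion: v1
kind: Pod
metadata:
  name: my-app               # placeholder name
spec:
  containers:
  - name: web
    image: my-web:latest     # placeholder image
    ports:
    - containerPort: 8080
  - name: api
    image: my-api:latest     # placeholder image
    ports:
    - containerPort: 8081    # must not collide with web's port
```

This is why running every microservice on port 80 inside a single pod cannot work: the containers share the port space.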

Related

How can I directly access stateful pods' ports from localhost in a local Kubernetes cluster (for example Cassandra) - what routing is needed?

I want to build a testing environment using Kubernetes on localhost (it can be Docker Desktop, minikube, ...). I want to connect my client to 3 instances of Cassandra inside the local K8s cluster. Cassandra is just an example; it could equally be etcd, redis, ... or any StatefulSet.
I created a StatefulSet with 3 replicas on the same ports on localhost Kubernetes.
I created Services to expose each pod.
What should I do next to route traffic to three different names, cassandra-0, cassandra-1, cassandra-2, on the same port? This is required by the driver - I cannot forward individual ports, since the driver requires all instances to run on the same port.
So it should be cassandra-0:9042, cassandra-1:9042, cassandra-2:9042.
To show this I made a drawing to explain it graphically.
I want to achieve the red-line connection using something ... - I do not know what to use in K8s - maybe services.
I would say you should define a NodePort service and send your request to localhost:&lt;nodePort&gt;:
spec:
  type: NodePort       # required for the nodePort field below to take effect
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080
    nodePort: 32000
Just change your ports so they fit your needs.
If you already created a service with ports exposed, get all endpoints and try routing traffic towards them:
kubectl get endpoints -A
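To get the per-pod names the driver needs, one sketch of the NodePort approach is a Service per pod. This relies on the statefulset.kubernetes.io/pod-name label that Kubernetes adds to StatefulSet pods (the NodePort values here are placeholders):

```yaml
# Hypothetical per-pod NodePort Service: selects exactly one StatefulSet pod
# via the pod-name label that Kubernetes puts on StatefulSet pods.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-0
  ports:
  - protocol: TCP
    port: 9042
    targetPort: 9042
    nodePort: 32000    # repeat the Service with 32001/32002 for cassandra-1/-2
```

Note that NodePorts must be unique per node, so presenting all three instances on the very same client-side port would still need something in front (e.g. hosts-file entries plus a local proxy).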

Access an application running on a random port in Kubernetes

My application is running within a pod container in a Kubernetes cluster. Every time it is started in the container it allocates a random port. I would like to access my application from outside (from another pod or node, for example), but since it allocates a random port I cannot create a service (NodePort or LoadBalancer) to map the application port to a specific port to be able to access it.
What are the options to handle this case in a kubernetes cluster?
Not supported; check out the issue here. Even with Docker, if your port range is overly broad, you can hit issues as well.
One option to solve this would be to use a dynamically configured sidecar proxy that takes care of routing the pods traffic from a fixed port to the dynamic port of the application.
This has the upside that the service can be scaled even if all instances have different ports.
But this also has the potentially big downside that every request has an added overhead due to the extra network step and even if it's just some milliseconds this can have quite an impact depending on how the application works.
Regarding the dynamic configuration of the proxy you could e.g. use shared pod volumes.
Edit:
Containers within a pod are a little bit special. They are able to share predefined volumes and their network namespace.
For shared volumes this means: you are able to define a volume, e.g. of type emptyDir, and mount it in both containers. The result is that you can share information between both containers by writing it into that specific volume in the first container and reading it in the second.
For networking this makes communication between containers of one pod easier, because you can use the loopback interface to communicate between your containers. In your case this means the sidecar proxy container can call your service at localhost:&lt;dynamic-port&gt;.
For further information take a look at the official docs.
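A minimal sketch of such a pod, combining both mechanisms (names, images, and the mount path are placeholders; the Envoy image tag is an assumption):

```yaml
# Hypothetical pod: the app writes its generated proxy config into /shared,
# and the sidecar proxy reads it from the same emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: shared-config
    emptyDir: {}               # lives as long as the pod does
  containers:
  - name: app
    image: my-app:latest       # placeholder image
    volumeMounts:
    - name: shared-config
      mountPath: /shared
  - name: proxy
    image: envoyproxy/envoy:v1.28-latest   # tag is an assumption
    ports:
    - containerPort: 8080      # the fixed, service-facing port
    volumeMounts:
    - name: shared-config
      mountPath: /shared
```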
So how is this going to help you?
You can use a proxy like envoy and configure it to listen to dynamic configuration changes. The source for the dynamic configuration should be a shared volume between both your application container and the sidecar proxy container.
After your application has started and allocated the dynamic port, you can automatically generate the configuration for the Envoy proxy and save it in the shared volume - the same source Envoy is watching, per the aforementioned configuration.
Envoy itself acts as a reverse proxy and can listen statically on port 8080 (or any other port you like) and routes the incoming network traffic to your application with dynamic port by calling localhost:<dynamic-port>.
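A rough sketch of what the generated configuration could look like, using Envoy's v3 TCP proxy filter (the dynamic port value 31245 is a placeholder the application would write in; a real setup would feed this through Envoy's file-based xDS rather than a static config, so it can change at runtime):

```yaml
# Hypothetical Envoy config: listen on the fixed port 8080 and forward
# all traffic to the application's dynamically chosen port on loopback.
static_resources:
  listeners:
  - name: fixed_port
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: app
          cluster: app
  clusters:
  - name: app
    type: STATIC
    connect_timeout: 1s
    load_assignment:
      cluster_name: app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 31245 }  # dynamic port goes here
```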
Unfortunately I don't have any predefined configuration for this use case but if you need some inspiration you can take a look at istio - they use something quite similar.
As gohm'c mentioned, there is no built-in way to do this.
As a workaround you can run a script to adjust the service:
Start the application and retrieve the chosen random port from the application
Modify the Kubernetes service, load balancer, etc. with the new port
The downside of this is that if the port changes, there will be a short delay before the service is updated afterwards.
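The service-modification step could be sketched with kubectl patch (the service name and port value are placeholders, and this assumes the service's first port entry is the one to change):

```shell
# Hypothetical: point the existing Service at the newly chosen port 31245.
kubectl patch service my-app --type='json' \
  -p='[{"op": "replace", "path": "/spec/ports/0/targetPort", "value": 31245}]'
```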
How about this:
Deploy and start your application in a Kubernetes cluster.
run kubectl exec <pod name here> -- netstat -tulpn | grep "application name" to identify the port number associated with your application.
run kubectl port-forward <pod name here> <port you like>:<random application port> to access the pod of your application.
Would this work for you?
Edit: This is very similar to @Lukas Eichler's response.

hostnetwork pod - only 1 container should expose to the internet

These are my first steps to the kubernetes world so excuse me if my terms are not used right etc.
I am running a single-node Kubernetes setup without an external load balancer, and I have deployed a pod with two containers: one MySQL database and a PowerDNS server.
Powerdns should expose port 53 to the internet while mysql should expose its port only in the cluster.
Therefore I set the following:
"hostNetwork: true" for the pod
"hostPort" for the powerdns container and not for mysql
Service for port 3306 with "type: ClusterIP"
Now everything is running. PowerDNS can connect to MySQL and is exposed on port 53 to the internet.
But contrary to my assumption the mysql database is exposed to the internet too.
Could anyone give me a hint to what I am doing wrong?
Using hostNetwork: true allows your whole pod (all containers in it) to bind ports on the host, which you already identified as problematic.
First of all, you should consider moving the mysql container out of your pod. Multiple containers in one pod are meant to group containers working as one unit (e.g. an application and a background process closely communicating with each other).
Think in services. Your PowerDNS service is a service consumer itself, as it requires a database - something the application PowerDNS doesn't provide. You want another service for MySQL. Take a look at the documentation (one, two) for StatefulSets, as they use MySQL as an example (running databases on Kubernetes is one of the more complex tasks).
Create a ClusterIP service for this. ClusterIP services are only available from within the cluster (your database is an internal service, so that's what you want).
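A minimal sketch of such a ClusterIP service (the selector label is a placeholder for whatever label your MySQL pod carries):

```yaml
# Hypothetical ClusterIP Service: MySQL is reachable inside the cluster only,
# e.g. from PowerDNS as mysql:3306, but not from the internet.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP        # the default; stated here for clarity
  selector:
    app: mysql           # placeholder label on the MySQL pod(s)
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
```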
This way, your PowerDNS pod will only have one container that you can bind to your host network. But using hostNetwork: true is not good practice in general: you won't be able to create multiple instances of your application (in case PowerDNS scales). It's fine for first steps, though a load balancer in front of your setup would be better. You can use NodePort services to make your service available on a high-numbered port which your load balancer proxies connections to.

OpenShift and hostnetwork=true

I have deployed two pods with hostNetwork set to true. When the pods are deployed on the same OpenShift node, everything works fine, since they can discover each other using the node IP.
When the pods are deployed on different OpenShift nodes, they can't discover each other; I get "no route to host" if I try to point one pod at another using the node IP. How do I fix this?
The uswitch/kiam (https://github.com/uswitch/kiam) service is a good example of a use case.
It has an agent process that runs on the host network of all worker nodes, because it modifies a firewall rule to intercept API requests (from containers running on the host) to the AWS API.
It also has a server process that runs on the host network to access the AWS API, since the AWS API is on a subnet that is only available to the host network.
Finally, the agent talks to the server using gRPC, which connects directly to one of the IP addresses that are returned when looking up kiam-server.
So you have pods of the agent deployment running on the host network of node A trying to connect to the kiam server running on the host network of node B... which just does not work.
Furthermore, this is a private service; it should not be available from outside the network.
If you want the two containers to share the same physical machine and take advantage of loopback for quick communication, then you would be better off defining them together as a single Pod with two containers.
If the two containers are meant to float over a larger cluster and be more loosely coupled, then I'd recommend taking advantage of the Service construct within Kubernetes (under OpenShift) and using that for the appropriate discovery.
Services are documented at https://kubernetes.io/docs/concepts/services-networking/service/, and along with an internal DNS service (if implemented - common in Kubernetes 1.4 and later) they provide a means to let Kubernetes manage where things are, updating an internal DNS entry in the form of <servicename>.<namespace>.svc.cluster.local. So for example, if you set up a Pod with a service named "backend" in the default namespace, the other Pod could reference it as backend.default.svc.cluster.local. The Kubernetes documentation on the DNS portion of this is available at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
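Following that example, a sketch of such a service (the selector label and ports are placeholders):

```yaml
# Hypothetical Service "backend" in the default namespace; other pods can
# then reach it as backend.default.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend        # placeholder label on the backing pod(s)
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080    # placeholder container port
```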
This also avoids the "hostnetwork=true" complication, and lets OpenShift (or specifically Kubernetes) manage the networking.
If you absolutely have to use hostNetwork, you should create a router and then use it for communication between pods. You can create an HAProxy-based router in OpenShift; reference here - https://docs.openshift.com/enterprise/3.0/install_config/install/deploy_router.html

Is it possible to open new ports on the fly in Kube pod?

I am trying to run software inside Kubernetes that opens more ports at runtime based on various operations. Is it possible to open more ports on the fly in a Kubernetes pod? It does not seem to be possible at the Docker level (Exposing a port on a live Docker container), which means Kubernetes can't do it either (I guess?)
Each Pod in Kubernetes gets its own IP address. So a container (application) in a Pod can use any port as long as that port is not used by any other container within the same Pod.
Now if you want to expose those dynamic ports, it will require additional configuration though. Ports are exposed using Services, and service configuration has to be updated to expose those dynamic ports.