Kubernetes: Multiple containers that have to communicate + exposed nodePort

In my setup, there is a set of containers that were initially built to run with docker-compose. After moving to Kubernetes, I'm facing the following challenges:
docker-compose managed to provide some kind of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes? What I found so far:
they could all be part of one pod and therefore communicate via localhost
they could all have a common label with matching key:value pairs and a service, but how does one handle ports?
I need to expose an internal port as a certain NodePort, as it has to be publicly available. What does such a service config look like? What I found so far:
something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-nodeport
spec:
  type: NodePort
  ports:
  - name: "3000-30001"
    port: 3000
    nodePort: 30001
  selector:
    app: frontend
status:
  loadBalancer: {}

Docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes?
As you found in your research, you can indeed take two approaches:
If your containers are to be scaled together, place them inside the same pod and let them communicate via localhost over separate ports. This is less likely your case, since this approach suits containerized apps that behave more like processes on one physical box than like separate services/servers.
If your containers are to be scaled separately, which is more probably your case, use a Service. With Services, in place of localhost (from the previous point) you use either the service name as-is (if the pods are in the same namespace) or the FQDN (servicename.namespace.svc.cluster.local) when services are accessed across namespaces. As opposed to the previous point, where you had to use different ports for your containers (since you address localhost), here you can reuse the same port across multiple services, since each service:port pair must be unique. A Service can also remap ports from containers if you wish.
Since you asked this as an introductory question, two words of caution:
Service resolution works from the standpoint of the pod/container. To test it you actually need to exec into an actual container (or proxy from a host), and this is a common point of confusion. To be on the safe side, test service:port accessibility from within an actual container, not from the master (see the sketch after these notes).
Finally, just to mimic a docker-compose setup for inter-container networking, you don't need to expose a NodePort or anything else: the Service layer in Kubernetes takes care of the DNS handling. NodePort has a different purpose.
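For instance, a minimal check from inside the cluster, using the service name and port from the examples further below (adjust to your own):

# start a throwaway pod and test resolution/reachability from within the cluster
kubectl run tmp --rm -it --image=busybox --restart=Never -- sh
/ # nslookup svc-my-api
/ # wget -qO- http://svc-my-api:8080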
I need to expose an internal port to a certain NodePort. What does such a service config look like?
You are on a good track. Here is a nice overview to get you started, and a reference relevant to your question is given below:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP
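Once applied, the service should be reachable from outside the cluster on port 30036 of any node, e.g. (the node address is a placeholder):

curl http://<node-ip>:30036/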
Edit: Could you please provide an example of how a service.yaml would look if the containers are scaled separately?
The first one is, say, an API server; we'll call it svc-my-api. It will use pod(s) labeled app: my-api, talk to the pod's port 80, and be reachable by other pods (in the same namespace) as host: svc-my-api and port: 8080.
apiVersion: v1
kind: Service
metadata:
  name: svc-my-api
  labels:
    app: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
The second one is, say, a MySQL server; we'll call it svc-my-database. Supposing that containers from the API pods (covered by the previous service) want to access the database, they will use host: svc-my-database and port: 3306.
apiVersion: v1
kind: Service
metadata:
  name: svc-my-database
  labels:
    app: my-database
spec:
  selector:
    app: my-database
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
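As a usage sketch, a container in one of the API pods could then connect with the standard MySQL client (user and credentials are placeholders):

mysql -h svc-my-database -P 3306 -u myuser -p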

1.- You can add some parameters to your pod resource (or any other resource that creates a pod), as follows:
...
spec:
  hostname: foo-{1..4} # keep in mind this line
  subdomain: bar # and this line
  containers:
  - image: busybox
...
Note: now imagine you just created 4 pods, with hostnames foo-1, foo-2, foo-3 and foo-4. These are separate pods, and you can't literally write foo-{1..4}; this is just for demo purposes.
If you now create a service with the same name as the subdomain, you will be able to reach the pods from anywhere in the cluster as hostname.service-name.namespace.svc.cluster.local.
Example:
apiVersion: v1
kind: Service
metadata:
  name: bar # my subdomain is called "bar", so is this service
spec:
  clusterIP: None # note: the Kubernetes DNS spec ties per-pod hostname records to a headless Service
  selector:
    app: my-app
  ports:
  - name: foo
    port: 1234
    targetPort: 1234
Now, say I have the label app: my-app in my pods, so the service is targeting them correctly.
At this point, look what happens (from any pod, within the cluster):
/ # nslookup foo-1.bar.my-namespace.svc.cluster.local
Server: 10.63.240.10
Address 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.bar.my-namespace.svc.cluster.local
Address 1: 10.60.1.24 foo-1.bar.my-namespace.svc.cluster.local
2.- The second part of your question is almost correct. This is a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: my-app
  type: NodePort
This service listens on port 80, so it is reachable on port 80 from within the cluster. Kubernetes also maps it to a port above 30000 on every node, so the same service is available from the outside world on, say, port 30001 of any node. Finally, it forwards the requests to port 8080 of the container.
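To make those three ports concrete, a quick sketch (the node address and the allocated node port are placeholders):

# from inside the cluster: service name + service port
curl http://svc-nodeport:80/
# from outside the cluster: any node's IP + the allocated NodePort
curl http://<node-ip>:30001/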

Related

How do I access Kubernetes pods through a single IP?

I have a set of pods running based on the following fleet:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
  name: bungee
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: bungee
    spec:
      ports:
      - name: default
        containerPort: 25565
        protocol: TCP
      template:
        spec:
          containers:
          - name: bungee
            image: a/b:test
I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones.
My goal is to be able to connect to these pods through a single IP, meaning I have to add some sort of load balancer. I tried using this service of type LoadBalancer, but I can't connect to any of the pods with it.
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local
Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?
Edit: External IP field says pending while checking the service status. I am running Kubernetes on bare-metal.
Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address. Trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
LoadBalancer Services only work in clusters that integrate with the API of the cloud provider hosting the Kubernetes nodes (via the cloud-controller-manager component). Since that is not your case, you're looking for a NodePort Service.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
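If you prefer a stable port over a randomly allocated one, you can also pin it yourself, as long as it falls within the cluster's NodePort range (30000-32767 by default); a sketch with a hypothetical port:

apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
  - port: 25565
    nodePort: 30565 # hypothetical fixed port; must be within the NodePort range
    protocol: TCP
  selector:
    run: bungee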
Having created that service, you can check its description - or yaml/json representation:
# kubectl describe svc xxx
Type: NodePort
IP: 10.233.24.89 <- ip within SDN
Port: tcp-8080 8080/TCP <- ports within SDN
TargetPort: 8080/TCP <- port on your container
NodePort: tcp-8080 31655/TCP <- port exposed on your nodes
Endpoints: 10.233.108.232:8080 <- pod:port ...
Session Affinity: None
Now I know port 31655 was allocated to my NodePort Service. NodePorts are unique across your cluster and are picked from a range (30000-32767 by default, depending on your cluster configuration).
I can connect to my service, accessing any Kubernetes node IP, on the port that was allocated to my NodePort service.
curl http://k8s-worker1.example.com:31655/
As a sidenote: a LoadBalancer Service extends a NodePort Service. While the external IP will never show up in your case, note that your Service was still allocated its own node port, like any NodePort Service; that port is meant to receive traffic from whichever load balancer would have been configured on behalf of your cluster by the cloud infrastructure it integrates with.
And... I have to say I'm not familiar with Agones. You say "I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones". Are you sure ports are allocated on a per-pod basis and bound to a given node? Or could they already be using a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster?

Do services send traffic to local pods first?

I have a DaemonSet with a service pointing to it.
When a pod accesses the ClusterIP of my service, will it get the local pod running on the same node, or any pod behind the service?
Is there any way to achieve this? My understanding is that it would be the same as externalTrafficPolicy: Local, but for internal traffic.
By default, traffic sent to a ClusterIP or NodePort Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route "external" traffic to the Pods running on the Node that received the traffic, but this is not supported for ClusterIP Services, and more complex topologies — such as routing zonally — have not been possible. The Service Topology feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.
You need to use Service Topology.
An example of a service which prefers local pods:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  topologyKeys:
  - "kubernetes.io/hostname"
  - "*"
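Note that topologyKeys required the ServiceTopology feature gate (alpha in v1.17, and since deprecated in favor of topology-aware routing), so kube-apiserver and kube-proxy must run with it enabled, e.g.:

--feature-gates=ServiceTopology=true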
UPD 1:
There is another option to make sure that requests sent to a port of some particular node are handled on the same node: hostPort.
An example:
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
The above pod exposes container port 80 on hostPort 55555. If you run those pods with a DaemonSet, you can be sure they run on each node, and each request will be handled on the node which received it.
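A quick check, assuming such a DaemonSet is running (the node address is a placeholder):

# each node answers on its own address; the request stays on that node
curl http://<node-ip>:55555/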
But please be careful using it, and read this first: Configuration Best Practices

Load balancing to multiple containers of same app in a pod

I have a scenario where I need to have two instances of an app container run within the same pod.
I have them set up to listen on different ports. Below is how the Deployment manifest looks.
The Pod launches just fine with the expected number of containers.
I can even connect to both ports on the podIP from other pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: app1-service
  name: app1-dep
  namespace: exp
spec:
  selector:
    matchLabels:
      service: app1-service
  template:
    metadata:
      labels:
        service: app1-service
    spec:
      containers:
      - image: app1:1.20
        name: app1
        ports:
        - containerPort: 9000
          protocol: TCP
      - image: app1:1.20
        name: app1-s1
        ports:
        - containerPort: 9001
          protocol: TCP
I can even create two different Services one for each port of the container, and that works great as well.
I can individually reach both Services and end up on the respective container within the Pod.
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: app1-s1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
I want both instances of the container behind a single service that round-robins between the two containers.
How can I achieve that? Is it possible within the realm of services? Or would I need to explore ingress for something like this?
Kubernetes services support three proxy modes: userspace, iptables (the default) and IPVS.
userspace: the older mode; it distributes requests round-robin, and that is the only strategy it offers.
iptables: the default; it selects a backend pod at random and sticks with it.
IPVS: has multiple ways to distribute traffic, but first you have to make it available on your nodes, for example on a CentOS node with the command yum install ipvsadm.
Like I said, the Kubernetes service default (iptables) does no round-robin.
To activate IPVS you have to add two parameters to kube-proxy:
--proxy-mode=ipvs
--ipvs-scheduler=rr (to select round robin)
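On kubeadm-provisioned clusters, a sketch of the same change through the kube-proxy ConfigMap (this assumes the standard kube-proxy DaemonSet and its usual labels):

kubectl -n kube-system edit configmap kube-proxy
# in config.conf, set:
#   mode: "ipvs"
#   ipvs:
#     scheduler: "rr"
# then restart kube-proxy so it picks up the change:
kubectl -n kube-system delete pod -l k8s-app=kube-proxy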
One can expose multiple ports using a single service. In a Kubernetes Service manifest, spec.ports[] is an array, so one can specify multiple ports in it. For example, see below:
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  - name: http-s1
    port: 81
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
Now the hostname is the same for both backends; only the port differs. And by default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
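From another pod in the exp namespace, the two backends are then reachable as the same host on two ports, e.g.:

curl http://app1/    # service port 80 -> container port 9000
curl http://app1:81/ # service port 81 -> container port 9001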
What I would do is separate the app into two different deployments, with one container in each deployment. I would set the same labels on both deployments and attach them both to one single service (a sketch follows below).
This way, you don't even have to run them on different ports.
Later on, if you want one of them to receive more traffic, just play with the number of replicas of each deployment.
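A minimal sketch of that layout, reusing the names from the question (the instance label is hypothetical, added so the two Deployments don't select each other's pods, while the Service selects only the shared label):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-a
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
      instance: a
  template:
    metadata:
      labels:
        service: app1-service
        instance: a
    spec:
      containers:
      - name: app1
        image: app1:1.20
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-b
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
      instance: b
  template:
    metadata:
      labels:
        service: app1-service
        instance: b
    spec:
      containers:
      - name: app1
        image: app1:1.20
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  selector:
    service: app1-service # matches pods from both Deployments
  ports:
  - port: 80
    targetPort: 9000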

Why don't my Kubernetes services publish on the port I specify?

I've been tinkering with Kubernetes on and off for the past few years and I am not sure if this has always been the case (maybe this behavior changed recently) but I cannot seem to get Services to publish on the ports I intend - they always publish on a high random port (>30000).
For instance, I'm going through this walkthrough on Ingress and I create the following Deployment and Service objects per the instructions:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: "gokul93/hello-world:latest"
        imagePullPolicy: Always
        name: hello-world-container
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 9376
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world
  type: NodePort
According to this, I should have a Service that's listening on port 8080, but instead it's a high, random port:
~$ kubectl describe svc hello-world-svc
Name: hello-world-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=hello-world
Type: NodePort
IP: 10.109.24.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31669/TCP
Endpoints: 10.40.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I also verified that none of my nodes are listening on 8080, but they are listening on 31669.
This isn't super ideal - especially considering that the Ingress portion will need to know what servicePort is being used (the walkthrough references this at 8080).
By the way, when I create the Ingress controller, this behavior is the same - rather than listening on 80 and 443 like a good load balancer, it's listening on high random ports.
Am I missing something? Am I doing it all wrong?
Matt,
The reason a random port is being allocated is because you are creating a service of type NodePort.
K8s documentation explains NodePort here
Based on your config, the service is exposed on port 9376 (and the backend port is 8080). So hello-world-svc should be available at 10.109.24.16:9376. Essentially this service can be reached in one of the following ways:
Service ip/port :- 10.109.24.16:9376
Node ip/port :- [your compute node ip]:31669 <-- this is created because your service is of type NodePort
You can also query the pod directly to test that the pod is in-fact exposing a service.
Pod ip/port: 10.40.0.4:8080
Since your eventual goal is to use an ingress controller for external reachability to your service, type: ClusterIP might suffice for your needs.
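For example, the same service as the default ClusterIP type, which is enough for an ingress controller to route to (no node port gets allocated):

apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  type: ClusterIP # the default; omitting type has the same effect
  ports:
  - port: 9376
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world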

Kubernetes local port for deployment in Minikube

I'm trying to expose my Deployment to a port which I can access through my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
The deployment and the docker container image both expose port 8000, and the Pod is tagged with app:bot.
The first results in a service whose provisioning never finishes: the external IP never gets assigned.
The second results in ports bot:8000 TCP and bot:0 TCP in my dashboard, and when I try minikube service bot nothing happens. The same happens if I type kubectl expose service bot.
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service type is meant for cloud providers and is not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30356
    protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.
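For example (the printed URL will vary with your minikube IP and the allocated node port):

> minikube service bot --url
http://192.168.99.100:30356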