Do services send traffic to local pods first?

I have a DaemonSet with a service pointing to it.
When a pod accesses the ClusterIP of my service, will it reach the local pod running on the same node, or any pod backing the service?
Is there any way to achieve this? My understanding is that it would be the same thing as externalTrafficPolicy: Local, but for internal traffic.

By default, traffic sent to a ClusterIP or NodePort Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route "external" traffic to the Pods running on the Node that received the traffic, but this is not supported for ClusterIP Services, and more complex topologies — such as routing zonally — have not been possible. The Service Topology feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.
You need to use Service Topology.
An example service which prefers local pods:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "*"
UPD 1:
There is another option to make sure that requests sent to a port on a particular node are handled on that same node: hostPort.
An example:
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
    - name: testapicontainer
      image: myprivaterepo/testapi:latest
      ports:
        - name: web
          hostPort: 55555
          containerPort: 80
          protocol: TCP
The above pod exposes container port 80 on hostPort 55555. If you run these pods as a DaemonSet, you can be sure they run on every node and that each request is handled on the node that received it.
But please be careful using it and read this: Configuration Best Practices

Related

How to send packets from an outside world IP to a Pod, as I am receiving from the Pod to the outside world IP?

service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: gnb-service
spec:
  selector:
    app: 5ggnb
  type: ClusterIP
  ports:
    - port: 58242
      name: 5gsvc
      protocol: UDP
      targetPort: 58242
  externalIPs:
    - 198.168.11.188 # Node1-IP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gnb-endpoints
subsets:
  - addresses:
      - ip: 192.168.30.61
    ports:
      - port: 58242
        protocol: UDP
The most common way to expose a Pod to clients outside your cluster SDN is to use an Ingress Controller (haproxy, nginx, traefik, ...). There may already be one of those deployed in your cluster, at which point you would create an Ingress object.
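For illustration, a minimal Ingress object looks roughly like the sketch below. All names, the host and the backend port here are hypothetical placeholders; also note that a plain Ingress only carries HTTP/HTTPS, so it would not cover the raw UDP traffic from the question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                  # hypothetical name
spec:
  rules:
    - host: app.example.com         # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-http-service   # hypothetical backend Service
                port:
                  number: 80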
Another way to do this would be to use a Service with type=NodePort. Although this is usually not recommended - production Kubernetes clusters would rarely allow direct connections to such ports: ensuring Ingresses are the only point of entry to applications is one of the first steps in hardening Kubernetes clusters.
In clusters that are integrated with a cloud provider (aws, azure, openstack, ...), and clusters including some MetalLB deployment, you may be able to use a Service with type=LoadBalancer.
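As a rough sketch of that last option, assuming such an integration (or MetalLB) is present, the Service from the question could be exposed like this instead of relying on externalIPs:
apiVersion: v1
kind: Service
metadata:
  name: gnb-service
spec:
  type: LoadBalancer        # an external IP is then allocated by the cloud provider / MetalLB
  selector:
    app: 5ggnb
  ports:
    - name: 5gsvc
      port: 58242
      protocol: UDP
      targetPort: 58242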
Do not set one of your node IP addresses as externalIPs on a Service.

How do I access Kubernetes pods through a single IP?

I have a set of pods running based on the following fleet:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: bungee
spec:
replicas: 2
template:
metadata:
labels:
run: bungee
spec:
ports:
- name: default
containerPort: 25565
protocol: TCP
template:
spec:
containers:
- name: bungee
image: a/b:test
I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones.
My goal is to be able to connect to these pods through a single IP, meaning I have to add some sort of load balancer. I tried using this service of type LoadBalancer, but I can't connect to any of the pods with it.
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
    - port: 25565
      protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local
Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?
Edit: External IP field says pending while checking the service status. I am running Kubernetes on bare-metal.
Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address. Trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
LoadBalancer Services could have worked in clusters that integrate with the API of the cloud provider hosting your Kubernetes nodes (the cloud-controller-manager component). Since this is not your case, you're looking for a NodePort Service.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
    - port: 25565
      protocol: TCP
  selector:
    run: bungee
Having created that service, you can check its description - or yaml/json representation:
# kubectl describe svc xxx
Type: NodePort
IP: 10.233.24.89 <- ip within SDN
Port: tcp-8080 8080/TCP <- ports within SDN
TargetPort: 8080/TCP <- port on your container
NodePort: tcp-8080 31655/TCP <- port exposed on your nodes
Endpoints: 10.233.108.232:8080 <- pod:port ...
Session Affinity: None
Now, I know that port 31655 was allocated to my NodePort Service. NodePorts are unique across your cluster; they are picked within a range that depends on your cluster configuration (30000-32767 by default).
I can connect to my service, accessing any Kubernetes node IP, on the port that was allocated to my NodePort service.
curl http://k8s-worker1.example.com:31655/
As a sidenote: a LoadBalancer Service extends a NodePort Service. Even though the external IP will never show up in your case, your Service was still allocated its own node port, like any NodePort Service; that port is meant to receive traffic from whichever load balancer would have been configured on behalf of your cluster, in the cloud infrastructure it integrates with.
And ... I have to say I'm not familiar with Agones. When you say "I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones", are you sure ports are allocated on a per-pod basis and bound to a given node? Or could it be they're already using a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster?

Why don't my Kubernetes services publish on the port I specify?

I've been tinkering with Kubernetes on and off for the past few years and I am not sure if this has always been the case (maybe this behavior changed recently) but I cannot seem to get Services to publish on the ports I intend - they always publish on a high random port (>30000).
For instance, I'm going through this walkthrough on Ingress and I create the following Deployment and Service objects per the instructions:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - image: "gokul93/hello-world:latest"
          imagePullPolicy: Always
          name: hello-world-container
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
    - port: 9376
      protocol: TCP
      targetPort: 8080
  selector:
    app: hello-world
  type: NodePort
According to this, I should have a Service that's listening on port 8080, but instead it's a high, random port:
~$ kubectl describe svc hello-world-svc
Name: hello-world-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=hello-world
Type: NodePort
IP: 10.109.24.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31669/TCP
Endpoints: 10.40.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I also verified that none of my nodes are listening on 8080, but they are listening on 31669.
This isn't super ideal - especially considering that the Ingress portion will need to know what servicePort is being used (the walkthrough references this at 8080).
By the way, when I create the Ingress controller, this behavior is the same - rather than listening on 80 and 443 like a good load balancer, it's listening on high random ports.
Am I missing something? Am I doing it all wrong?
Matt,
The reason a random port is being allocated is because you are creating a service of type NodePort.
K8s documentation explains NodePort here
Based on your config, the service is exposed on port 9376 (and the backend port is 8080). So hello-world-svc should be available at 10.109.24.16:9376. Essentially, this service can be reached in one of the following ways:
Service ip/port :- 10.109.24.16:9376
Node ip/port :- [your compute node ip]:31669 <-- this is created because your service is of type NodePort
You can also query the pod directly to test that the pod is in-fact exposing a service.
Pod ip/port: 10.40.0.4:8080
Since your eventual goal is to use an ingress controller for external reachability to your service, "type: ClusterIP" might suffice.
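A ClusterIP variant of your Service would look like the sketch below (ClusterIP is also the default when type is omitted); an Ingress can then use it as its backend:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  type: ClusterIP
  ports:
    - port: 9376
      protocol: TCP
      targetPort: 8080
  selector:
    app: hello-world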

Kubernetes: Multiple containers that have to communicate + exposed nodePort

In my setup, there is a set of containers that were initially built to run with docker-compose. After moving to Kubernetes I'm facing the following challenges:
docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes? What I found so far:
they could all be part of one pod and therefore communicate via localhost
they could all have a common label with matching key:value pairs and a service, but how does one handle Ports?
I need to expose an internal port to a certain NodePort, as it has to be publicly available. What does such a service config look like? What I found so far:
something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-nodeport
spec:
  type: NodePort
  ports:
    - name: "3000-30001"
      port: 3000
      nodePort: 30001
  selector:
    app: frontend
status:
  loadBalancer: {}
Docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes?
As you researched, you can indeed take two approaches:
IF your containers are to be scaled together, then place them inside the same pod and let them communicate over localhost on separate ports (a minimal sketch is shown right after this list). This is less likely your case, since this approach is more suitable when the containerized app resembles processes on one physical box rather than separate services/servers.
IF your containers are to be scaled separately, which is more probably your case, then use Services. With Services, in place of localhost (from the previous point) you will use either the service name as-is (if the pods are in the same namespace) or its FQDN (servicename.namespace.svc.cluster.local) if services are accessed across namespaces. As opposed to the previous point, where you had to use different ports for your containers (since you address localhost), here you can have the same port across multiple services, since it is service:port that must be unique. With a Service you can also remap container ports if you wish.
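A minimal sketch of the first approach (the names, images and ports below are hypothetical): two containers in one pod reach each other over localhost, as long as they listen on different ports.
apiVersion: v1
kind: Pod
metadata:
  name: my-app               # hypothetical name
  labels:
    app: my-app
spec:
  containers:
    - name: frontend         # can call the API at http://localhost:8080
      image: my-frontend:latest
      ports:
        - containerPort: 3000
    - name: api
      image: my-api:latest
      ports:
        - containerPort: 8080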
Since you asked this as an introductory question, two words of caution:
Service resolution works from the standpoint of a pod/container. To test it you actually need to exec into an actual container (or proxy from the host); this is a common point of confusion. Just to be on the safe side, test service:port accessibility from within an actual container, not from the master.
Finally, just to mimic the docker-compose setup for inter-container networking, you don't need to expose a NodePort at all. The Service layer in Kubernetes takes care of DNS handling; NodePort has a different purpose.
I need to expose an internal port to a certain NodePort. What does such a service config look like?
You are on a good track; here is a nice overview to get you started, and the reference relevant to your question is given below:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30036
      protocol: TCP
Edit: Could you please provide an example of what a service.yaml would look like if the containers are scaled separately?
The first one is, say, an API server; we'll call it svc-my-api. It will use pod(s) labeled app: my-api, talk to the pod's port 80, and be accessible to other pods (in the same namespace) as host: svc-my-api and port: 8080.
apiVersion: v1
kind: Service
metadata:
  name: svc-my-api
  labels:
    app: my-api
spec:
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
The second one is, say, a MySQL server; we'll call it svc-my-database. Supposing that containers from the API pods (covered by the previous service) want to access the database, they will use host: svc-my-database and port: 3306.
apiVersion: v1
kind: Service
metadata:
  name: svc-my-database
  labels:
    app: my-database
spec:
  selector:
    app: my-database
  ports:
    - name: http
      protocol: TCP
      port: 3306
      targetPort: 3306
1.- You can add some parameters to your pod resource (or any other resource that creates pods), as follows:
...
spec:
  hostname: foo-{1..4} # keep in mind this line
  subdomain: bar       # and this line
  containers:
    - image: busybox
...
Note: imagine you just created 4 pods with hostnames foo-1, foo-2, foo-3 and foo-4. These are separate pods; you can't actually write foo-{1..4}, so this is just for demo purposes.
If you now create a service with the same name as the subdomain, you would be able to reach the pod from anywhere in the cluster by hostname.service-name.namespace.svc.cluster.local.
Example:
apiVersion: v1
kind: Service
metadata:
  name: bar # my subdomain is called "bar", so is this service
spec:
  selector:
    app: my-app
  ports:
    - name: foo
      port: 1234
      targetPort: 1234
Now, say I have the label app: my-app in my pods, so the service is targeting them correctly.
At this point, look what happens (from any pod, within the cluster):
/ # nslookup foo-1.bar.my-namespace.svc.cluster.local
Server: 10.63.240.10
Address 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.bar.my-namespace.svc.cluster.local
Address 1: 10.60.1.24 foo-1.bar.my-namespace.svc.cluster.local
2.- The second part of your question is almost correct. This is a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: my-app
  type: NodePort
This service listens on port 80, so it is reachable on port 80 from within the cluster. It maps that port to a random port above 30000 on the nodes, so the same service is also available on, say, port 30001 of the nodes from the outside world. Finally, it forwards the requests to port 8080 of the container.

How to make service TCP/UDP ports externally accessible in Kubernetes?

I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.
I don't need load balancing capabilities.
The approach should expose an IP address that is externally available with a dedicated port for each tenant
I don't want to expose the nodes directly to the internet
I have the following service so far:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
    - port: 8111
      targetPort: 8111
      protocol: UDP
      name: my-udp
    - port: 8222
      targetPort: 8222
      protocol: TCP
      name: my-tcp
  selector:
    app: my-app
What is the way to go?
Deploy an NGINX ingress controller on your AWS cluster
Change your service my-service type from NodePort to ClusterIP
Edit the ConfigMap tcp-services in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
Same for the ConfigMap udp-services:
data:
  "8111": your-namespace/my-service:8111
Now you can access your application externally using the nginx controller IP: <ip>:8222 (TCP) and <ip>:8111 (UDP).
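For reference, a complete tcp-services ConfigMap could look roughly like this (the ingress-nginx namespace and your-namespace are assumptions; the udp-services ConfigMap follows the same pattern with the "8111" entry):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # assumes the controller is deployed in this namespace
data:
  "8222": your-namespace/my-service:8222
Depending on how the controller is deployed, you may also need to add the 8222/8111 ports to the controller's own Service.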
The description provided by @ffledgling is what you need.
But I have to mention that if you want to expose ports, you have to use a load balancer or expose nodes to the Internet. For example, you can expose a node to the Internet and allow access only to some necessary ports.