I have a set of pods running based on the following fleet:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: bungee
spec:
replicas: 2
template:
metadata:
labels:
run: bungee
spec:
ports:
- name: default
containerPort: 25565
protocol: TCP
template:
spec:
containers:
- name: bungee
image: a/b:test
I can access these pods from outside the cluster at <node-IP>:<port>, where the port is randomly assigned per pod by Agones.
My goal is to be able to connect to these pods through a single IP, meaning I have to add some sort of load balancer. I tried using this service of type LoadBalancer, but I can't connect to any of the pods with it.
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local
Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?
Edit: The External IP field says pending when I check the service status. I am running Kubernetes on bare metal.
Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address, and trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
A LoadBalancer Service could have worked in a cluster that integrates with the API of the cloud provider hosting your Kubernetes nodes (the cloud-controller-manager component). Since this is not your case, you're looking for a NodePort Service.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
Having created that service, you can check its description - or yaml/json representation:
# kubectl describe svc xxx
Type:              NodePort
IP:                10.233.24.89          <- IP within the SDN
Port:              tcp-8080 8080/TCP     <- port within the SDN
TargetPort:        8080/TCP              <- port on your container
NodePort:          tcp-8080 31655/TCP    <- port exposed on your nodes
Endpoints:         10.233.108.232:8080   <- pod:port ...
Session Affinity:  None
Now, I know that port 31655 was allocated to my NodePort Service. NodePorts are unique across your cluster; they are picked from a range that depends on your cluster configuration (30000-32767 by default, set with the kube-apiserver --service-node-port-range flag).
I can connect to my service, accessing any Kubernetes node IP, on the port that was allocated to my NodePort service.
curl http://k8s-worker1.example.com:31655/
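If you prefer to fetch the allocated port programmatically, a jsonpath query works (using the bungee-svc name from the manifest above):
kubectl get svc bungee-svc -o jsonpath='{.spec.ports[0].nodePort}'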
As a sidenote: a LoadBalancer Service extends a NodePort Service. While the external IP will never show up on your bare-metal cluster, note that your Service was still allocated its own node port, like any NodePort Service. That port is meant to receive traffic from whatever load balancer would have been configured on behalf of your cluster, in the cloud infrastructure it integrates with.
And ... I have to say I'm not familiar with Agones. When you say "I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones": are you sure ports are allocated on a per-pod basis and bound to a given node? Or could it be that they're already using a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster?
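If Agones is involved, one way to check is to list its GameServer objects directly. Assuming the Agones CRDs are installed, the status shows which address and port each game server was allocated, and on which node it runs (exact columns vary by Agones version):
kubectl get gameservers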
Related
How to send packets from an outside-world IP to a Pod, as I am receiving packets from the Pod to the outside-world IP?
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: gnb-service
spec:
  selector:
    app: 5ggnb
  type: ClusterIP
  ports:
  - port: 58242
    name: 5gsvc
    protocol: UDP
    targetPort: 58242
  externalIPs:
  - 198.168.11.188   # Node1-IP
apiVersion: v1
kind: Endpoints
metadata:
  name: gnb-endpoints
subsets:
- addresses:
  - ip: 192.168.30.61
  ports:
  - port: 58242
    protocol: UDP
The most common way to expose a Pod to clients outside your cluster SDN is to use an Ingress Controller (HAProxy, NGINX, Traefik, ...). There may already be one deployed in your cluster, in which case you would create an Ingress object.
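For HTTP traffic, a minimal Ingress object could look like the sketch below; the hostname, service name, and port are placeholders, and it assumes an ingress controller is already deployed (note a plain Ingress only covers HTTP/HTTPS, not raw TCP or UDP):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc
            port:
              number: 8080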
Another way to do this would be to use a Service with type=NodePort. This is usually not recommended, though: production Kubernetes clusters rarely allow direct connections to such ports, since ensuring Ingresses are the only point of entry to applications is one of the first steps in hardening a Kubernetes cluster.
In clusters that are integrated with a cloud provider (AWS, Azure, OpenStack, ...), and in clusters running a MetalLB deployment, you may be able to use a Service with type=LoadBalancer.
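As a sketch of the MetalLB option (assuming MetalLB v0.13+ with its CRD-based configuration, and an address range you actually control; the names and range below are placeholders):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # placeholder range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: my-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - my-pool
With this in place, a type=LoadBalancer Service gets an external IP from the pool instead of staying pending.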
Do not set one of your node IP addresses as externalIPs on a Service.
I have a DaemonSet with a service pointing to it.
When a pod accesses the ClusterIP of my service, will it get the local pod running on the same node, or any pod backing the service?
Is there any way to achieve this? My understanding is that it would be the same thing as externalTrafficPolicy: Local, but for internal traffic.
By default, traffic sent to a ClusterIP or NodePort Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route "external" traffic to the Pods running on the Node that received the traffic, but this is not supported for ClusterIP Services, and more complex topologies — such as routing zonally — have not been possible. The Service Topology feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.
You need to use Service Topology (enabled via the ServiceTopology feature gate).
An example service which prefers local pods:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  topologyKeys:
  - "kubernetes.io/hostname"
  - "*"
UPD 1:
There is another option to make sure that requests sent to a port of a particular node will be handled on that same node: hostPort.
An example:
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
The above pod exposes container port 80 on hostPort 55555. If you run those pods in a DaemonSet, you can be sure they run on every node and that each request is handled by the node that received it.
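As a sketch, a DaemonSet wrapping the same container could look like this (reusing the hypothetical image from above):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test-api
spec:
  selector:
    matchLabels:
      app: test-api
  template:
    metadata:
      labels:
        app: test-api
    spec:
      containers:
      - name: testapicontainer
        image: myprivaterepo/testapi:latest
        ports:
        - name: web
          hostPort: 55555      # same host port on every node
          containerPort: 80
          protocol: TCP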
But, please be careful using it and read this: Configuration Best Practices
I am working with a Minecraft server image to create a cluster of StatefulSets that should all have a random external port. I was told a NodePort would do the job, but not exactly how that is done. Looking at NodePort, it seems you would need to specify the exact port.
I need each replica in the cluster to have either a random external IP or a random external port on the same IP. Is that possible, or do I need to create a service for every single port/IP?
You need to create a NodePort Service for each instance of the Minecraft server.
A NodePort Service opens a port on every node (picked from the 30000-32767 range by default) and forwards it to an internal set of pods selected by its selector.
For instance, let's say there is one instance of the Minecraft server with the following resource:
apiVersion: v1
kind: Pod
metadata:
  name: minecraft-instance1
  labels:
    instance: minecraft-1
spec:
  ...
This is the NodePort Service description to reach it on port 30007 (on every node of the cluster):
apiVersion: v1
kind: Service
metadata:
  name: service-minecraft-1
spec:
  type: NodePort
  selector:
    instance: minecraft-1
  ports:
  - port: 25565
    targetPort: 25565
    nodePort: 30007
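Once applied, each instance is reachable on its fixed port through any node of the cluster, for example (placeholder node IP):
nc -vz <node-IP> 30007
Note that an explicitly chosen nodePort must fall within the cluster's node port range (30000-32767 by default), so 30007 is valid.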
I'm trying to set up a ZooKeeper cluster (3 replicas), but the hosts can't connect to one another and I really don't know where the problem is.
It's creating 3 pods successfully with names like
zookeeper-0.zookeeper-internal.default.svc.cluster.local
zookeeper-1.zookeeper-internal.default.svc.cluster.local
zookeeper-2.zookeeper-internal.default.svc.cluster.local
but when I connect to one of them and try to reach another on the open port, I get an Unknown host message:
zookeeper#zookeeper-0:/opt$ nc -z zookeeper-1.zookeeper-internal.default.svc.cluster.local 2181
zookeeper-1.zookeeper-internal.default.svc.cluster.local: forward host lookup failed: Unknown host
My YAML file is here
I really appreciate any help.
Did you create the headless Service you mentioned in your YAML (serviceName: zookeeper-internal)?
You need to create this Service (updating the port to yours) for zookeeper-0.zookeeper-internal.default.svc.cluster.local to resolve:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-dev
    name: zookeeper
  name: zookeeper-internal
spec:
  ports:
  - name: zookeeper-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: zookeeper
  clusterIP: None
  type: ClusterIP
A Service is required, but it does not expose anything outside the cluster; it is only reachable within the cluster. Any pod can access this Service from inside the cluster, so you cannot access it from your browser unless you expose it via NodePort / LoadBalancer / Ingress!
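To verify that the headless Service resolves, you can run a throwaway pod and look the name up (a sketch; the busybox image tag is arbitrary):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup zookeeper-0.zookeeper-internal.default.svc.cluster.local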
I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.
I don't need load balancing capabilities.
The approach should expose an IP address that is externally available with a dedicated port for each tenant
I don't want to expose the nodes directly to the internet
I have the following service so far:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - port: 8111
    targetPort: 8111
    protocol: UDP
    name: my-udp
  - port: 8222
    targetPort: 8222
    protocol: TCP
    name: my-tcp
  selector:
    app: my-app
What is the way to go?
Deploy an NGINX ingress controller on your AWS cluster.
Change your Service my-service type from NodePort to ClusterIP.
Edit the ConfigMap tcp-services in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
Do the same for the ConfigMap udp-services:
data:
  "8111": your-namespace/my-service:8111
Now you can access your application externally through the NGINX controller IP: <ip>:8222 (TCP) and <ip>:8111 (UDP).
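Two caveats, both depending on how your ingress-nginx was installed: the controller must run with the --tcp-services-configmap and --udp-services-configmap flags pointing at those ConfigMaps, and the controller's own Service must expose the extra ports. A sketch of the relevant excerpt from the controller Service spec:
ports:
- name: my-tcp
  port: 8222
  targetPort: 8222
  protocol: TCP
- name: my-udp
  port: 8111
  targetPort: 8111
  protocol: UDP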
The description provided by @ffledgling is what you need.
But I have to mention that if you want to expose ports, you have to use a load balancer or expose the nodes to the Internet. For example, you can expose a node to the Internet and allow access only to the necessary ports.