I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.
I don't need load balancing capabilities.
The approach should expose an externally reachable IP address with a dedicated port per tenant.
I don't want to expose the nodes directly to the internet.
I have the following service so far:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - port: 8111
    targetPort: 8111
    protocol: UDP
    name: my-udp
  - port: 8222
    targetPort: 8222
    protocol: TCP
    name: my-tcp
  selector:
    app: my-app
What is the way to go?
Deploy a NGINX ingress controller on your AWS cluster
Change your service my-service type from NodePort to ClusterIP
Edit the tcp-services ConfigMap in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
Do the same for the udp-services ConfigMap:
data:
  "8111": your-namespace/my-service:8111
Now you can access your application externally through the nginx controller's IP: <ip>:8222 (TCP) and <ip>:8111 (UDP).
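For reference, the two ConfigMaps could look roughly like this (a sketch assuming the controller runs in the ingress-nginx namespace and your Service lives in your-namespace; depending on how the controller was installed you may also need to pass its --tcp-services-configmap / --udp-services-configmap flags and publish ports 8222/8111 on the controller's own Service):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 8222 -> namespace/service:port
  "8222": your-namespace/my-service:8222
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # external port 8111 -> namespace/service:port
  "8111": your-namespace/my-service:8111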
The description provided by #ffledgling is what you need.
But I have to mention that if you want to expose ports externally, you have to use either a load balancer or expose the nodes to the Internet. For example, you could expose a node to the Internet and allow access only to the necessary ports; on AWS the load-balancer route looks roughly like the sketch below.
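A sketch of that load-balancer route on AWS (the Service name and the NLB annotation are assumptions, not something from the question; classic ELBs don't handle UDP, and mixing TCP and UDP in one LoadBalancer Service is only accepted on recent Kubernetes versions):
apiVersion: v1
kind: Service
metadata:
  name: my-service-lb   # hypothetical per-tenant Service name
  annotations:
    # ask AWS for a Network Load Balancer; UDP is not supported by classic ELBs
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: my-udp
    port: 8111
    targetPort: 8111
    protocol: UDP
  - name: my-tcp
    port: 8222
    targetPort: 8222
    protocol: TCP
Each tenant would get its own Service of this kind, and the nodes themselves stay closed to the internet.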
Related
How can I send packets from an outside-world IP to a Pod, the same way I can send from the Pod to an outside-world IP?
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: gnb-service
spec:
  selector:
    app: 5ggnb
  type: ClusterIP
  ports:
  - port: 58242
    name: 5gsvc
    protocol: UDP
    targetPort: 58242
  externalIPs:
  - 198.168.11.188   # Node1-IP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gnb-endpoints
subsets:
- addresses:
  - ip: 192.168.30.61
  ports:
  - port: 58242
    protocol: UDP
The most common way to expose a Pod to clients outside your cluster SDN is to use an Ingress Controller (haproxy, nginx, traefik, ...). There may already be one deployed in your cluster, in which case you would create an Ingress object.
Another way to do this would be to use a Service with type=NodePort. This is usually not recommended: production Kubernetes clusters rarely allow direct connections to such ports, and ensuring Ingresses are the only point of entry to applications is one of the first steps in hardening a Kubernetes cluster.
In clusters that are integrated with a cloud provider (aws, azure, openstack, ...), and in clusters running a MetalLB deployment, you may be able to use a Service with type=LoadBalancer.
Do not set one of your node IP addresses as externalIPs on a Service.
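For the UDP case above, a minimal NodePort sketch (the names and ports follow the question's manifest; the nodePort value is an assumption and must fall within the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: gnb-service
spec:
  type: NodePort
  selector:
    app: 5ggnb
  ports:
  - name: 5gsvc
    port: 58242
    targetPort: 58242
    nodePort: 30242   # assumed value within the default NodePort range
    protocol: UDP
Clients then reach the pod at <any-node-IP>:30242/UDP, and no manually managed Endpoints object is needed, since the selector populates the endpoints.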
I have a set of pods running based on the following fleet:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: bungee
spec:
replicas: 2
template:
metadata:
labels:
run: bungee
spec:
ports:
- name: default
containerPort: 25565
protocol: TCP
template:
spec:
containers:
- name: bungee
image: a/b:test
I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones.
My goal is to be able to connect to these pods through a single IP, meaning I have to add some sort of load balancer. I tried using this service of type LoadBalancer, but I can't connect to any of the pods with it.
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local
Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?
Edit: External IP field says pending while checking the service status. I am running Kubernetes on bare-metal.
Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address. Trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
LoadBalancer Services could have worked in clusters that integrate with the API of the cloud provider hosting the Kubernetes nodes (the cloud-controller-manager component). Since this is not your case, you're looking for a NodePort Service.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
Having created that service, you can check its description - or yaml/json representation:
# kubectl describe svc xxx
Type: NodePort
IP: 10.233.24.89 <- ip within SDN
Port: tcp-8080 8080/TCP <- ports within SDN
TargetPort: 8080/TCP <- port on your container
NodePort: tcp-8080 31655/TCP <- port exposed on your nodes
Endpoints: 10.233.108.232:8080 <- pod:port ...
Session Affinity: None
Now I know that port 31655 was allocated to my NodePort Service. NodePorts are unique across the cluster; they are picked from a range that depends on your cluster configuration (30000-32767 by default).
I can connect to my service, accessing any Kubernetes node IP, on the port that was allocated to my NodePort service.
curl http://k8s-worker1.example.com:31655/
As a side note: a LoadBalancer Service extends a NodePort Service. While the external IP will never show up in your case, your Service was still allocated its own node port, just like any NodePort Service; that port is what would receive traffic from whichever load balancer the cloud integration would have configured on behalf of your cluster.
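To illustrate (assuming the Service above is named bungee-svc), you can read back the allocated node port even while the external IP is stuck in pending:
# show the node port allocated to the (pending) LoadBalancer Service
kubectl get service bungee-svc -o jsonpath='{.spec.ports[0].nodePort}'
# then connect to <any-node-IP>:<that-port>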
And ... I have to say I'm not familiar with Agones. You say "I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones". Are you sure ports are allocated on a per-pod basis and bound to a given node, or could it be that they're already using a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster?
I have a DaemonSet with a service pointing to it.
When a pod accesses the ClusterIP of my service, will it get the local pod running on the same node, or any pod backing the service?
Is there any way to achieve this? My understanding is that it would be the same thing as externalTrafficPolicy: Local, but for internal traffic.
By default, traffic sent to a ClusterIP or NodePort Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route "external" traffic to the Pods running on the Node that received the traffic, but this is not supported for ClusterIP Services, and more complex topologies — such as routing zonally — have not been possible. The Service Topology feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.
You need to use Service Topology.
An example service which prefers local pods:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  topologyKeys:
  - "kubernetes.io/hostname"
  - "*"
UPD 1:
There is another option to make sure that requests sent to a port of a particular node are handled on that same node: hostPort.
An example:
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
The above pod exposes container port 80 on hostPort 55555. If you run those pods as a DaemonSet, you can be sure that they run on every node and that each request is handled on the node that received it.
But please be careful using it, and read this first: Configuration Best Practices
Is there a way to keep the same external IP that the current load balancer has, even when I make a new deployment?
When I delete the deployment connected to the load balancer, the load balancer still stays, so is it possible to connect a new deployment to that existing load balancer?
Yes, you can pin the IP by setting loadBalancerIP in the Service object's YAML file.
Try something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  loadBalancerIP: 78.11.24.19
  type: LoadBalancer
Please refer to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer for more info on this
I'm trying to expose my Deployment on a port which I can access from my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
The deployment and the docker container image both expose port 8000, and the Pod is tagged with app:bot.
The first results in a service that never finishes provisioning; the external IP never gets assigned.
The second results in ports bot:8000 TCP and bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I type "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service type is meant for cloud providers and is not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30356
    protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.
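For example, assuming the service is named bot as above (the printed URL is illustrative):
> minikube service bot --url
http://192.168.99.100:30356   # example output; use this URL with curl or your client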