ClusterIP with externalIPs - can't be accessed from outside the cluster - kubernetes

In the past I've tried a NodePort service, and if I add a firewall rule for the corresponding Node, it works like a charm:
type: NodePort
ports:
- nodePort: 30000
  port: 80
  targetPort: 5000
I can access my service from outside, as long as the node has an external IP (which it does by default in GKE).
However, the service can only be assigned a port in the 30000-32767 range, which is not very convenient.
By the way, the Service looks as follows:
kubectl get service -o=wide
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
web-engine-service   NodePort   10.43.244.110   <none>        80:30000/TCP   11m   app=web-engine-pod
Recently, I've come across a different configuration option that is documented here.
I've tried it, as it seems quite promising and should allow me to expose my service on any port I want.
The configuration is as follows:
ports:
- name: web-port
  port: 80
  targetPort: 5000
externalIPs:
- 35.198.163.215
After the service is updated, I can see that an External IP is indeed assigned to it:
$ kubectl get service -o=wide
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)   AGE   SELECTOR
web-engine-service   ClusterIP   10.43.244.110   35.198.163.215   80/TCP    19m   app=web-engine-pod
(where 35.198.163.215 is the Node's external IP in GKE)
And yet, my app is not available on the Node's IP, unlike in the first scenario (I did add firewall rules for all the ports I'm working with, including 80, 5000, and 30000).
What's the point of externalIPs configuration then? What does it actually do?
Note: I'm creating a demo project, so please don't tell me about the LoadBalancer type; I'm well aware of it and will get to it a bit later.

In the API documentation, externalIPs is documented as (emphasis mine):
externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.
So you can put any IP address you want there, and it will show up in kubectl get service output, but it doesn't mean the cluster will actually accept traffic there.
To accept inbound traffic from outside the cluster, you need at minimum a NodePort service; in a cloud environment a LoadBalancer service or an Ingress is a more common setup. You can't really short-cut around these. Conversely, a LoadBalancer isn't especially advanced or difficult: just change type: LoadBalancer in the configuration you already show and GKE will create the endpoint for you. The GKE documentation has a more complete example.
("Inside the cluster" and "outside the cluster" are different networks, and like other NAT setups pods can generally make outbound calls but you need specific setup to accept inbound calls. That's what a NodePort service does, and in the standard setup a LoadBalancer service builds on top of that.)

I wanted to give you more insight on:
How you can manage to make it work.
Why it's not working in your example.
More information about exposing traffic on GKE.
How you can manage to make it work?
You will need to put the internal IP of your node (or nodes) into the service definition, in the externalIPs field.
Example:
apiVersion: v1
kind: Service
metadata:
  name: hello-external
spec:
  selector:
    app: hello
    version: 2.0.0
  ports:
  - name: http
    protocol: TCP
    port: 80          # port to send the traffic to
    targetPort: 50001 # port that the pod responds on
  externalIPs:
  - 10.156.0.47
  - 10.156.0.48
  - 10.156.0.49
Why it's not working in your example?
I've prepared an example to show you why it doesn't work.
Assuming that you have:
A VM in GCP with:
any operating system that lets you run tcpdump
an internal IP of 10.156.0.51
an external IP of 35.246.207.189
traffic allowed to enter this VM on port 1111
You can run the command below (on the VM) to capture the traffic coming to port 1111:
$ tcpdump port 1111 -nnvvS
-nnvvS - don't resolve DNS or Port names, be more verbose when printing info, print the absolute sequence numbers
You will need to send a request to the external IP 35.246.207.189 of your VM on port 1111:
$ curl 35.246.207.189:1111
You will get a connection refused message, but the packet will be captured. You will get output similar to this:
tcpdump: listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
12:04:25.704299 IP OMITTED
YOUR_IP > 10.156.0.51.1111: Flags [S], cksum 0xd9a8 (correct), seq 585328262, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1282380791 ecr 0,sackOK,eol], length 0
12:04:25.704337 IP OMITTED
10.156.0.51.1111 > YOUR_IP: Flags [R.], cksum 0x32e3 (correct), seq 0, ack 585328263, win 0, length 0
From that example you can see the destination IP address of the packet coming to the VM. As shown above, it's the internal IP of your VM, not the external one. That's why putting the external IP in your YAML definition doesn't work.
This example also works on GKE. For simplicity, you can create a GKE cluster with Ubuntu as the base image and do the same as shown above.
You can read more about IP addresses by following the link below:
Cloud.google.com: VPC: Docs: IP addresses
More about exposing traffic on GKE
What's the point of externalIPs configuration then? What does it actually do?
In simple terms, it allows traffic to enter your cluster. A request sent to your cluster must have a destination IP matching the externalIPs parameter in your service definition in order to be routed to the corresponding service.
This method requires you to track the IP addresses of your nodes and is prone to issues when a node's IP address becomes unavailable (node autoscaling, for example).
I recommend exposing your services/applications by following the official GKE documentation:
Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps
As mentioned before, a LoadBalancer type of service will automatically take into account changes made to the cluster, such as autoscaling that increases or decreases the number of nodes. With the service shown above (with externalIPs), this would require manual changes.
Please let me know if you have any questions about that.

Related

GKE 1 load balancer with multiple apps on different assigned ports

I want to be able to deploy several single-pod apps and access them on a single IP address, leaning on Kubernetes to assign the ports as it does when you use a NodePort service.
Is there a way to use NodePort with a load balancer?
Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes. There don't seem to be firewall controls like on their unmanaged VMs.
Here's a service if we need something to base an answer on. In this case, I want to deploy 10 of these services, each a different application, on the same IP, each publicly accessible on a different port, and each proxying port 80 of its nginx container.
---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
  - name: foo
    protocol: TCP
    port: 80
  type: NodePort
GKE seems to block direct access to the nodes.
GCP allows creating FW rules that allow incoming traffic either to 'All Instances in the Network' or to 'Specified Target Tags/Service Account' in your VPC Network.
Rules are persistent unless the opposite is specified under the organization's policies.
Node's external IP address can be checked at Cloud Console --> Compute Engine --> VM Instances or with kubectl get nodes -o wide.
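As a hedged example, such a rule could be created as follows (the rule name and target tag are placeholders; check your nodes' actual network tags, which on GKE typically start with gke-):

$ gcloud compute firewall-rules create allow-gke-nodeports \
    --network=default \
    --allow=tcp:30000-32767 \
    --target-tags=gke-example-node  # hypothetical tag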
I run GKE (managed k8s) and can access all my assets externally.
I have opened all the needed ports in my setup; the quickest example of it is below:
$ kubectl get nodes -o wide
NAME        AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
gke--mnnv   43d   v1.14.10-gke.27   10.156.0.11   34.89.x.x
gke--nw9v   43d   v1.14.10-gke.27   10.156.0.12   35.246.x.x

$ kubectl get svc -o wide
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         SELECTOR
knp-np   NodePort   10.0.11.113   <none>        8180:30008/TCP,8180:30009/TCP   app=server-go

$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test
That is why it looks like a bunch of NodePort type Services would be sufficient (each one serving requests for a particular selector).
If for some reason it's not possible to set up the FW rules to allow traffic directly to your Nodes, it's possible to configure a GCP TCP LoadBalancer:
Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.
There you can select your GKE Nodes (or node pool) as the 'Backend' and specify all the needed ports for the 'Frontend'. For the Frontend you can reserve a static IP right during the configuration and specify the 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to be forwarded to your node pool). Additionally, you can create multiple 'Frontends' if needed.
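For reference, a rough gcloud equivalent of those console steps might look like the sketch below (the pool name, region, zone, and port range are assumptions based on the node list above):

$ gcloud compute target-pools create gke-np-pool --region=europe-west3
$ gcloud compute target-pools add-instances gke-np-pool \
    --instances=gke--mnnv,gke--nw9v --instances-zone=europe-west3-a
$ gcloud compute forwarding-rules create gke-np-frontend \
    --region=europe-west3 --target-pool=gke-np-pool --ports=30008-30009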
I hope that helps.
Is there a way to use NodePort with a load balancer?
The Kubernetes LoadBalancer type service builds on top of NodePort. So internally a LoadBalancer uses a NodePort, meaning that when a LoadBalancer type service is created, it is automatically mapped to a NodePort. Although tricky, it is possible to create a NodePort type service and manually configure the Google-provided load balancer to point at the NodePorts.
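A hedged sketch of the NodePort side of that setup, pinning nodePort so the manually configured load balancer has a stable target (targetPort: 80 assumes the nginx container from the question listens there):

apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: foo
    protocol: TCP
    port: 80         # ClusterIP port inside the cluster
    targetPort: 80   # container port (assumption)
    nodePort: 30080  # fixed node port for the external load balancer to target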

Kubernetes: how to access service if nodePort is random?

I'm new to K8s and am currently using Minikube to play around with the platform. How do I configure a public (i.e. outside the cluster) port for the service? I followed the nginx example, and K8s service tutorials. In my case, I created the service like so:
kubectl expose deployment/mysrv --type=NodePort --port=1234
The service's port is 1234 for anyone trying to access it from INSIDE the cluster. The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:
kubectl describe service mysrv | grep NodePort
...
NodePort: <unset> 32387/TCP
# curl "http://`minikube ip`:32387/"
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port. The nginx examples describe something about using the LoadBalancer service kind, but they don't even specify ports there...
Any ideas how to fix the external port for the entire service?
The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:
When you create a service object of type NodePort with the $ kubectl expose command, you cannot choose your NodePort port. To choose it, you will need to create a YAML definition of the service.
You can manually specify the port in a service object of type NodePort as in the example below:
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: hello        # selector for the deployment
  ports:
  - name: example-port
    protocol: TCP
    port: 1234        # ClusterIP port
    targetPort: 50001 # pod port the application is running on
    nodePort: 32222   # HERE!
You can apply the above YAML definition by invoking:
$ kubectl apply -f FILE_NAME.yaml
The service object above will be created only if the nodePort port is available for use.
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port.
In clusters managed by cloud providers (for example GKE), you can use a service object of type LoadBalancer, which will have a fixed external IP and a fixed port.
Clusters that have nodes with public IPs can use a service object of type NodePort to direct traffic into the cluster.
In a minikube environment you can use a service object of type LoadBalancer, but it will have some caveats, described in the last paragraph.
A little bit of explanation:
NodePort
NodePort exposes the service on each node's IP at a static port. It allows external traffic to enter through the NodePort port. The port is automatically assigned from the range 30000 to 32767.
You can change the default NodePort port range by following this manual.
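For self-managed clusters, that manual boils down to a kube-apiserver flag along these lines (the range value is illustrative; managed offerings like GKE don't let you set apiserver flags directly):

kube-apiserver --service-node-port-range=20000-22767 ...  # plus your other apiserver flags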
You can check what is exactly happening when creating a service object of type NodePort by looking on this answer.
Imagine that:
Your nodes have IPs:
  192.168.0.100
  192.168.0.101
  192.168.0.102
Your pods respond on port 50001 with hello and they have IPs:
  10.244.1.10
  10.244.1.11
  10.244.1.12
Your Services are:
  NodePort (port 32222) with:
    ClusterIP:
      IP: 10.96.0.100
      port: 7654
      targetPort: 50001
A word about targetPort: it defines the port on the pod that, for example, a web server listens on.
According to the above example, you will get a hello response from:
NodeIP:NodePort (any of the pods could respond with hello):
  192.168.0.100:32222
  192.168.0.101:32222
  192.168.0.102:32222
ClusterIP:port (any of the pods could respond with hello):
  10.96.0.100:7654
PodIP:targetPort (only the pod the request is sent to can respond with hello):
  10.244.1.10:50001
  10.244.1.11:50001
  10.244.1.12:50001
You can check access with curl command as below:
$ curl http://NODE_IP:NODEPORT
In the example you mentioned:
$ kubectl expose deployment/mysrv --type=NodePort --port=1234
What will happen:
It will assign a random port from the range 30000 to 32767 on your minikube instance, directing traffic entering that port to the pods.
Additionally, it will create a ClusterIP with port 1234.
In the example above there was no targetPort parameter. If targetPort is not provided, it is the same as port in the command.
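As a sketch, the command shown produces roughly the Service below (the selector is an assumption here, copied from whatever labels deployment/mysrv carries; the nodePort is picked by the cluster at creation time):

apiVersion: v1
kind: Service
metadata:
  name: mysrv
spec:
  type: NodePort
  selector:
    app: mysrv        # assumption: the deployment's pod labels
  ports:
  - port: 1234        # --port from the command
    targetPort: 1234  # defaults to port when not given
    # nodePort is assigned randomly from 30000-32767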
Traffic entering a NodePort will be routed directly to pods and will not go to the ClusterIP.
From the minikube perspective, a NodePort is a port on your minikube instance. Its IP address depends on the hypervisor used. Exposing it outside your local machine is heavily dependent on the operating system.
LoadBalancer
There is a difference between a service object of type LoadBalancer(1) and an external LoadBalancer(2):
A service object of type LoadBalancer(1) allows you to expose a service externally using a cloud provider's LoadBalancer(2). It's a service within the Kubernetes environment that, through the service controller, can schedule the creation of an external LoadBalancer(2).
An external LoadBalancer(2) is a load balancer provided by the cloud provider. It operates at Layer 4.
Example definition of a service of type LoadBalancer(1):
apiVersion: v1
kind: Service
metadata:
  name: example-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 1234        # LOADBALANCER PORT
    targetPort: 50001 # pod port the application is running on
    nodePort: 32222   # port on the node
Applying the above YAML will create a service of type LoadBalancer(1).
Take a specific look at:
ports:
- port: 1234 # LOADBALANCER PORT
This definition will simultaneously:
specify external LoadBalancer(2) port as 1234
specify ClusterIP port as 1234
Imagine that:
Your external LoadBalancer(2) has:
  ExternalIP: 34.88.255.5
  port: 7654
Your nodes have IPs:
  192.168.0.100
  192.168.0.101
  192.168.0.102
Your pods respond on port 50001 with hello and they have IPs:
  10.244.1.10
  10.244.1.11
  10.244.1.12
Your Services are:
  NodePort (port 32222) with:
    ClusterIP:
      IP: 10.96.0.100
      port: 7654
      targetPort: 50001
According to the above example, you will get a hello response from:
ExternalIP:port (any of the pods could respond with hello):
  34.88.255.5:7654
NodeIP:NodePort (any of the pods could respond with hello):
  192.168.0.100:32222
  192.168.0.101:32222
  192.168.0.102:32222
ClusterIP:port (any of the pods could respond with hello):
  10.96.0.100:7654
PodIP:targetPort (only the pod the request is sent to can respond with hello):
  10.244.1.10:50001
  10.244.1.11:50001
  10.244.1.12:50001
ExternalIP can be checked with command: $ kubectl get services
Flow of the traffic:
Client -> LoadBalancer:port(2) -> NodeIP:NodePort -> Pod:targetPort
Minikube: LoadBalancer
Note: This feature is only available for cloud providers or environments which support external load balancers.
-- Kubernetes.io: Create external LoadBalancer
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
-- Kubernetes.io: Hello minikube
Minikube can create a service object of type LoadBalancer(1), but it will not create an external LoadBalancer(2).
The EXTERNAL-IP shown by $ kubectl get services will stay in <pending> status.
To work around the missing external LoadBalancer(2), you can invoke $ minikube tunnel, which creates a route from the host into the minikube environment so the ClusterIP CIDR can be accessed directly.
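A quick usage sketch, assuming the example-loadbalancer service from above (tunnel keeps running in its own terminal and may ask for sudo to create the route):

$ minikube tunnel
$ kubectl get service example-loadbalancer   # EXTERNAL-IP moves from <pending> to an address
$ curl http://EXTERNAL_IP:1234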
There is a small mistake in Dawid Kruk's answer:
Traffic entering a NodePort will be routed directly to pods and will not go to the ClusterIP.
But as the k8s documentation states here:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Traffic entering a NodePort does in fact go to the ClusterIP.

What does the colon mean in the list of ports when running kubectl get services

If I run kubectl get services for a simple demo service I get the following response:
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demo-service   LoadBalancer   10.104.48.115   <pending>     80:32264/TCP   18m
What does the : mean in the port list?
External access to the demo-service will happen via port 32264, which connects to port 80 on the docker container.
The meaning of 80:32264/TCP is this: your demo-service points port 80 at your pod, and 32264/TCP means you can use a NodeIP to access the application running in the pod from an external network (outside the cluster). The : separates the two so you can tell apart the internal and the external port for accessing the pod.
This means that your service demo-service can be reached on port 80 from other containers and on the NodePort 32264 from the "outer" world.
In this particular case it will be accessed by Load Balancer which is provisioned/managed by some sort of Kubernetes controller.
Though this is old, I want to write a different answer.
For a LoadBalancer type of service, the port before the : is the port your service exposes, usually specified by the admin in the service YAML file. The port after the : is a random NodePort on the node, usually assigned by the system.
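In Service YAML terms, the two numbers map to fields like these (a sketch; the targetPort is an assumption, since it isn't visible in the kubectl output):

ports:
- port: 80        # before the colon: the Service/LoadBalancer port
  targetPort: 80  # container port (assumption)
  nodePort: 32264 # after the colon: the port opened on every node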

Kubernetes Service not being assigned an (external) IP address

There are various answers to very similar questions around SO that all show what I expect my deployment to look like; however, mine does not.
I am running Minikube 0.25, with Kubernetes 1.9, on Windows 10.
I have successfully created a node, a replication controller, and a single pod template has been replicated 10 times.
The node is Minikube, and is assigned the IP address 10.49.106.251
The dashboard is available at 10.49.106.251:30000
I am deploying a service with a YAML file, but the service is never assigned an external IP; the result is the same if I happen to use kubectl expose.
The YAML file that I am using:
kind: Service
apiVersion: v1
metadata:
name: hello-service
spec:
type: NodePort
selector:
app: hello-world
ports:
- protocol: TCP
port: 8080
I can also use the YAML file to assign an external IP - I assign it the same value as the node IP address. Either way results in no possible connection to the service. I should also point out that the 10 replicated pods all match the selector.
The result of running kubectl get svc for the default, and after updating the external IP are below:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-service   NodePort   10.108.61.233   <none>          8080:32406/TCP   1m
hello-service   NodePort   10.108.61.233   10.49.106.251   8080:32406/TCP   1m
The tutorial I have been following, and the other answers on SO show a result similar to:
hello-service   NodePort   10.108.61.233   <nodes>   8080:32406/TCP   1m
Where the difference is that the external IP is set to <nodes>
I have encountered a number of issues when running locally - is this just another case of doing so, or has someone else identified a way to get around the external IP assignment issue?
For local development purposes, I have also run into the problem of exposing a 'public IP' for my local development cluster.
Fortunately, I found a kubectl command that can help:
kubectl port-forward service/service-name 9092
Where 9092 is the container port to expose, so that I can access applications inside the cluster from my local development environment.
The important note is that it is not a 'production' grade solution; it works well as a temporary hack to get at the cluster's insides.
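If the local port needs to differ from the service port, kubectl port-forward also accepts a LOCAL:REMOTE pair, e.g.:

$ kubectl port-forward service/service-name 8080:9092   # local 8080 -> service port 9092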
Using NodePort means it will open a port on all nodes of your cluster. In your example above, the port exposed to the outside world is 32406.
In order to access hello-service (if it is HTTP), the URL is http://[the node IP]:32406/. This will hit your minikube, and the request will be routed to your pods in round-robin fashion.
Same problem here when trying to deploy a simple helloworld image locally with Kubernetes v1.9.2.
After two weeks of attempts, it seems that Kubernetes exposes all nginx web server applications internally on port 80, not 8080.
So this should work: kubectl expose deployment hello-service --type=NodePort --port=80

How to assign a static IP to a pod using Kubernetes

Currently Kubernetes assigns an IP address to a pod, and that address is shared within the pod by all its containers.
I am trying to assign a static IP address to a pod, i.e. one in the same network range as the ones Kubernetes assigns, using the following deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: rediscont
    spec:
      containers:
      - name: redisbase
        image: localhost:5000/demo/redis
        ports:
        - containerPort: 6379
          hostIP: 172.17.0.1
          hostPort: 6379
On the Docker host where it's deployed, I see the following:
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS                       NAMES
4106d81a2310   localhost:5000/demo/redis            "/bin/bash -c '/root/"   28 seconds ago   Up 27 seconds                               k8s_redisbase.801f07f1_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_1b598062
71b03cf0bb7a   gcr.io/google_containers/pause:2.0   "/pause"                 28 seconds ago   Up 28 seconds   172.17.0.1:6379->6379/tcp   k8s_POD.99e70374_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_8c381981
iptables-save gives the following output:
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379
Even with this, from other pods the IP 172.17.0.1 is not accessible.
Basically the question is how to assign static IP to a pod so that 172.17.0.3 doesn't get assigned to it
Generally, assigning a Pod a static IP address is an anti-pattern in Kubernetes environments. There are a couple of approaches you may want to explore instead. Using a Service to front-end your Pods (or to front-end even just a single Pod) will give you a stable network identity, and allow you to horizontally scale your workload (if the workload supports it). Alternately, using a StatefulSet may be more appropriate for some workloads, as it will preserve startup order, host name, PersistentVolumes, etc., across Pod restarts.
I know this doesn't necessarily directly answer your question, but hopefully it provides some additional options or information that proves useful.
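As a sketch of the Service approach, the definition below fronts the pod created by the Deployment from the question (matching its run: rediscont label) and gives it a stable ClusterIP and DNS name, independent of whichever pod IP gets assigned:

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    run: rediscont   # matches the Deployment's pod template labels
  ports:
  - port: 6379       # stable Service port
    targetPort: 6379 # containerPort from the pod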
Assigning static IP addresses to Pods is not possible in OSS Kubernetes, but it can be configured via some CNI plugins. For instance, Calico provides a way to override IPAM and use fixed addresses by annotating the pod. The address must be within a configured Calico IP pool and not currently in use.
https://docs.projectcalico.org/networking/use-specific-ip
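Based on the linked Calico documentation, the override is a pod annotation along these lines (a sketch only: it requires Calico IPAM, and the address must sit inside a configured Calico IP pool):

apiVersion: v1
kind: Pod
metadata:
  name: redisbase
  annotations:
    cni.projectcalico.org/ipAddrs: "[\"10.0.0.1\"]"  # must be within a Calico IP pool and unused
spec:
  containers:
  - name: redisbase
    image: localhost:5000/demo/redis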
When you created the Deployment with one replica and defined hostIP and hostPort, you basically bound the hostIP and hostPort of your host machine to your pod's IP and container port, so that traffic is routed from hostIP:port to podIP:port.
The created pod (and the container inside it) was assigned an IP address from the IP range available to it. Basically, the IP range depends on the CNI networking plugin used and how it allocates an IP range to each node. For instance flannel, by default, provides a /24 subnet to hosts, from which the Docker daemon allocates IPs to containers. So the hostIP: 172.17.0.1 option in the spec has nothing to do with assigning an IP address to the pod.
Basically, the question is how to assign a static IP to a pod so that 172.17.0.3 doesn't get assigned to it
As far as I know, all major networking plugins provide a range of IPs to hosts, so that a pod's IP will be assigned from that range.
You can explore different networking plugins and look at how each of them deals with IPAM (IP Address Management); maybe some plugin provides that functionality or offers tweaks to implement it, but overall its usefulness would be quite limited.
Below is useful info on hostIP and hostPort from the official K8s docs:
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
Avoid using hostNetwork, for the same reasons as hostPort.
Original info link to config best practices.