I'm new to K8s and am currently using Minikube to play around with the platform. How do I configure a public (i.e. outside the cluster) port for the service? I followed the nginx example, and K8s service tutorials. In my case, I created the service like so:
kubectl expose deployment/mysrv --type=NodePort --port=1234
The service's port is 1234 for anyone trying to access it from INSIDE the cluster. The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:
kubectl describe service mysrv | grep NodePort
...
NodePort: <unset> 32387/TCP
# curl "http://`minikube ip`:32387/"
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port. The nginx examples describe something about using the LoadBalancer service kind, but they don't even specify ports there...
Any ideas how to fix the external port for the entire service?
The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:
When you create a service object of type NodePort with the $ kubectl expose command you cannot choose your nodePort. To choose the nodePort you will need to create a YAML definition of the service.
You can manually specify the port in a service object of type NodePort as in the example below:
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: hello # selector for deployment
  ports:
    - name: example-port
      protocol: TCP
      port: 1234        # CLUSTERIP PORT
      targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
      nodePort: 32222   # HERE!
You can apply the above YAML definition by invoking:
$ kubectl apply -f FILE_NAME.yaml
The service will only be created if the requested nodePort is still available.
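Once applied, you can confirm that the nodePort is the fixed one you asked for (the ClusterIP below is just illustrative):
$ kubectl get svc example-nodeport
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
example-nodeport   NodePort   10.96.120.15   <none>        1234:32222/TCP   5s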
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port.
In clusters managed by cloud providers (for example GKE) you can use a service object of type LoadBalancer, which will have a fixed external IP and a fixed port.
Clusters that have nodes with public IPs can use a service object of type NodePort to direct traffic into the cluster.
In a minikube environment you can use a service object of type LoadBalancer, but it has some caveats described in the last paragraph.
A little bit of explanation:
NodePort
NodePort exposes the service on each node's IP at a static port. It allows external traffic to enter the cluster through that port, which is automatically assigned from the range 30000 to 32767.
You can change the default NodePort port range by following this manual.
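For reference, the range is controlled by a kube-apiserver flag; on clusters where you manage the control plane it could be set roughly like below (the 20000-40000 range is just an example, and the minikube form passes the same flag via extra-config):
kube-apiserver --service-node-port-range=20000-40000
# or, on minikube:
minikube start --extra-config=apiserver.service-node-port-range=20000-40000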
You can check what is exactly happening when creating a service object of type NodePort by looking on this answer.
Imagine that:
Your nodes have IPs:
  192.168.0.100
  192.168.0.101
  192.168.0.102
Your pods respond on port 50001 with "hello" and have IPs:
  10.244.1.10
  10.244.1.11
  10.244.1.12
Your Service is a NodePort (nodePort: 32222) with:
  ClusterIP: 10.96.0.100
  port: 7654
  targetPort: 50001
A word about targetPort: it is the port on the pod on which the application (for example a web server) is actually listening.
According to the above example you will get a "hello" response from:
NodeIP:NodePort (any of the pods could respond):
  192.168.0.100:32222
  192.168.0.101:32222
  192.168.0.102:32222
ClusterIP:port (any of the pods could respond):
  10.96.0.100:7654
PodIP:targetPort (only the pod the request is sent to will respond):
  10.244.1.10:50001
  10.244.1.11:50001
  10.244.1.12:50001
You can check access with a curl command as below:
$ curl http://NODE_IP:NODEPORT
In the example you mentioned:
$ kubectl expose deployment/mysrv --type=NodePort --port=1234
What will happen:
It will assign a random port from the 30000-32767 range on your minikube instance and direct traffic entering this port to the pods.
Additionally, it will create a ClusterIP with port 1234.
In the command above no targetPort was provided; when targetPort is omitted it defaults to the same value as port.
Traffic entering a NodePort will be routed directly to pods and will not go to the ClusterIP.
From the minikube perspective a NodePort is a port on your minikube instance. Its IP address depends on the hypervisor used, and exposing it outside your local machine is heavily dependent on the operating system.
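On minikube you can also let it resolve the node IP and nodePort for you; a quick check could look like this (the printed URL is illustrative):
$ minikube service mysrv --url
http://192.168.99.100:32387
$ curl "$(minikube service mysrv --url)"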
LoadBalancer
There is a difference between a service object of type LoadBalancer(1) and an external LoadBalancer(2):
A service object of type LoadBalancer(1) allows you to expose a service externally using a cloud provider's LoadBalancer(2). It is a service within the Kubernetes environment that, through the service controller, can schedule the creation of an external LoadBalancer(2).
An external LoadBalancer(2) is a load balancer provided by the cloud provider. It operates at Layer 4.
Example definition of service of type LoadBalancer(1):
apiVersion: v1
kind: Service
metadata:
  name: example-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 1234        # LOADBALANCER PORT
      targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
      nodePort: 32222   # PORT ON THE NODE
Applying the above YAML will create a service of type LoadBalancer(1).
Take a specific look at:
ports:
  - port: 1234 # LOADBALANCER PORT
This definition simultaneously:
  specifies the external LoadBalancer(2) port as 1234
  specifies the ClusterIP port as 1234
Imagine that:
Your external LoadBalancer(2) has:
  ExternalIP: 34.88.255.5
  port: 7654
Your nodes have IPs:
  192.168.0.100
  192.168.0.101
  192.168.0.102
Your pods respond on port 50001 with "hello" and have IPs:
  10.244.1.10
  10.244.1.11
  10.244.1.12
Your Service is a NodePort (nodePort: 32222) with:
  ClusterIP: 10.96.0.100
  port: 7654
  targetPort: 50001
According to the above example you will get a "hello" response from:
ExternalIP:port (any of the pods could respond):
  34.88.255.5:7654
NodeIP:NodePort (any of the pods could respond):
  192.168.0.100:32222
  192.168.0.101:32222
  192.168.0.102:32222
ClusterIP:port (any of the pods could respond):
  10.96.0.100:7654
PodIP:targetPort (only the pod the request is sent to will respond):
  10.244.1.10:50001
  10.244.1.11:50001
  10.244.1.12:50001
The ExternalIP can be checked with: $ kubectl get services
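For instance, for the example-loadbalancer above the output could look like this (the addresses are illustrative):
$ kubectl get svc example-loadbalancer
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
example-loadbalancer   LoadBalancer   10.96.0.100   34.88.255.5   1234:32222/TCP   2m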
Flow of the traffic:
Client -> LoadBalancer:port(2) -> NodeIP:NodePort -> Pod:targetPort
Minikube: LoadBalancer
Note: This feature is only available for cloud providers or environments which support external load balancers.
-- Kubernetes.io: Create external LoadBalancer
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
-- Kubernetes.io: Hello minikube
Minikube can create a service object of type LoadBalancer(1), but it will not create an external LoadBalancer(2).
The EXTERNAL-IP column of $ kubectl get services will stay in <pending> state.
To work around the missing external LoadBalancer(2) you can invoke $ minikube tunnel, which creates a route from the host to the minikube environment so that the ClusterIP CIDR can be reached directly.
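A minimal sketch of that workflow (the EXTERNAL-IP assigned by the tunnel will differ per setup):
# terminal 1: keep the tunnel running
$ minikube tunnel
# terminal 2: EXTERNAL-IP changes from <pending> to a routable address
$ kubectl get svc example-loadbalancer
$ curl http://<EXTERNAL-IP>:1234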
There is a small mistake in Dawid Kruk’s answer,
Traffic entering a NodePort will be routed directly to pods and will not go to the ClusterIP.
But as k8s documented here:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Traffic entering a NodePort does go through the ClusterIP Service.
Related
I want to be able to deploy several, single pod, apps and access them on a single IP address leaning on Kubernetes to assign the ports as they are when you use a NodePort service.
Is there a way to use NodePort with a load balancer?
Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes. There don't seem to be firewall controls like on their unmanaged VMs.
Here's a service if we need something to base an answer on. In this case, I want to deploy 10 of these services, each a different application, on the same IP, each publicly accessible on a different port, and each proxying port 80 of its nginx container.
---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80
  type: NodePort
GKE seems to block direct access to the nodes.
GCP allows creating the FW rules that allow incoming traffic either to 'All Instances in the Network' or 'Specified Target Tags/Service Account' in your VPC Network.
Rules are persistent unless the opposite is specified under the organization's policies.
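As an illustration, a rule opening the default NodePort range could be created with gcloud; the rule name, source range and target tag below are placeholders and should be narrowed down for anything but testing:
$ gcloud compute firewall-rules create allow-nodeports \
    --allow=tcp:30000-32767 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=my-gke-node-tag   # placeholder tag on your GKE nodes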
Node's external IP address can be checked at Cloud Console --> Compute Engine --> VM Instances or with kubectl get nodes -o wide.
I run GKE (managed k8s) and can access all my assets externally.
I have opened all the needed ports in my setup; below is the quickest example of what it looks like:
$ kubectl get nodes -o wide
NAME AGE VERSION INTERNAL-IP EXTERNAL-IP
gke--mnnv 43d v1.14.10-gke.27 10.156.0.11 34.89.x.x
gke--nw9v 43d v1.14.10-gke.27 10.156.0.12 35.246.x.x
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) SELECTOR
knp-np NodePort 10.0.11.113 <none> 8180:30008/TCP 8180:30009/TCP app=server-go
$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test
That is why it looks like a bunch of NodePort type Services would be sufficient (each one serving requests for a particular selector).
If for some reason it's not possible to set up the FW rules to allow traffic directly to your nodes, it's possible to configure a GCP TCP LoadBalancer instead.
Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.
There you can select your GKE Nodes (or pool of nodes) as a 'Backend' and specify all the needed ports for the 'Frontend'. For the Frontend you can Reserve Static IP right during the configuration and specify 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to be forwarded to your node pool). Additionally, you can create multiple 'Frontends' if needed.
I hope that helps.
Is there a way to use NodePort with a load balancer?
A Kubernetes LoadBalancer type service builds on top of NodePort. Internally, a LoadBalancer uses a NodePort, meaning that when a LoadBalancer type service is created it automatically maps to a NodePort. Although it's tricky, it is possible to create a NodePort type service and manually configure the Google-provided load balancer to point to the NodePorts.
Kubernetes newbie here. Just want to get my fundamental understanding correct. Minikube is known for local development, but is it possible for connections from outside (not just outside the cluster) to access the pods I have deployed in minikube?
I am running my minikube in ec2 instance so I started my minikube with command minikube start --vm-driver=none, which means running minikube with Docker, no VM provisioned. My end goal is to allow connection outside to reach my pods inside the cluster and perform POST request through the pod (for example using Postman).
If yes, I also have my service resource applied using kubectl apply -f into my minikube, using NodePort in the YAML file. Also, I wish to understand port, nodePort, and targetPort correctly: port is the port number assigned to that particular service, nodePort is the port number on the node (in my case the node is my ec2 instance with its private IP), and targetPort is the port number equivalent to the containerPort I've assigned in the YAML of my deployment. Correct me if I am wrong in this statement.
Thanks.
Yes, you can do that, as you have started minikube with:
minikube start --vm-driver=none
nodePort is the port that a client outside of the cluster will "see". The nodePort is opened on every node in your cluster via kube-proxy. You can use the nodePort to access the application from the outside world, like http://<NodeIP>:<nodePort>.
port is the port your service listens on inside the cluster. Let's take this example:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: NodePort # required; nodePort is only valid for NodePort/LoadBalancer services
  ports:
    - port: 8080
      targetPort: 8070
      nodePort: 31222
      protocol: TCP
  selector:
    component: test-service-app
From inside the k8s cluster this service will be reachable via http://test-service.default.svc.cluster.local:8080 (service-to-service communication inside your cluster), and any request reaching it is forwarded to a running pod on targetPort 8070.
targetPort also defaults to the same value as port if not specified otherwise.
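To see that in-cluster resolution in action you can run a throwaway pod and curl the service from inside the cluster (the pod name and image here are just an example):
$ kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- \
    curl http://test-service.default.svc.cluster.local:8080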
I got the following service defined:
apiVersion: v1
kind: Service
metadata:
  name: customerservice
spec:
  type: NodePort
  selector:
    app: customerapp
  ports:
    - protocol: TCP
      port: 31004
      nodePort: 31004
      targetPort: 8080
Current situation: I am able to hit the pod via the service IP.
Now my goal is to reach the customerservice via the name of the service, which does not work right now. So I would simply type http://customerservice:31004 instead of http://<IP>:31004.
DNS resolution of services is ONLY available within the cluster, provided by CoreDNS/KubeDNS.
Should you wish to have access to this locally on your machine, you'd need to use another tool. One such tool is kubefwd:
https://github.com/txn2/kubefwd
A slightly simpler solution is to use port-forward, which is a very simple way to access a single service locally:
kubectl port-forward --namespace=whatever svc/service-name port
EDIT:// I've made the assumption that you want to use the service DNS locally, i.e. that by saying
I would simply type http://customerservice:31004
you mean in the context of your web browser.
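For example, using the service from your question (the service port 31004 is taken from your manifest):
$ kubectl port-forward svc/customerservice 31004:31004
# in another terminal
$ curl http://localhost:31004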
Normal Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service. This DNS entry is present inside the kubernetes cluster only and hence you're able to access the service by name from inside the kubernetes pod.
Now, if you want to access your Kubernetes service by name from one of the nodes, you need to modify the node's /etc/resolv.conf to add the cluster search domains and the cluster DNS nameserver, so that <svc_name>.<namespace>.svc.cluster.local resolves; please have a look at the following /etc/resolv.conf:
search ec2.internal default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
The nameserver is the clusterIP of the kube-dns service; you can find it using kubectl get svc kube-dns -n kube-system.
Now you will be able to curl your service: curl ui.default.svc.cluster.local:80
There are various answers for very similar questions around SO that all show what I expect my deployment to look like, however mine does not.
I am running Minikube 0.25, with Kubernetes 1.9, on Windows 10.
I have successfully created a node, a replication controller, and a single pod template has been replicated 10 times.
The node is Minikube, and is assigned the IP address 10.49.106.251
The dashboard is available at 10.49.106.251:30000
I am deploying a service with a YAML file, but the service is never assigned an external IP - the result is the same if I happen to use kubectl expose.
The YAML file that I am using:
kind: Service
apiVersion: v1
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 8080
I can also use the YAML file to assign an external IP - I assign it the same value as the node IP address. Either way results in no possible connection to the service. I should also point out that the 10 replicated pods all match the selector.
The result of running kubectl get svc for the default, and after updating the external IP are below:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-service NodePort 10.108.61.233 <none> 8080:32406/TCP 1m
hello-service NodePort 10.108.61.233 10.49.106.251 8080:32406/TCP 1m
The tutorial I have been following, and the other answers on SO show a result similar to:
hello-service NodePort 10.108.61.233 <nodes> 8080:32406/TCP 1m
Where the difference is that the external IP is set to <nodes>
I have encountered a number of issues when running locally - is this just another case of doing so, or has someone else identified a way to get around the external IP assignment issue?
For local development purposes, I have also run into the problem of exposing a 'public IP' for my local development cluster.
Fortunately, I have found a kubectl command which can help:
kubectl port-forward service/service-name 9092
Where 9092 is the service port to expose, so that I can access applications inside the cluster from my local development environment.
The important note is that it is not a 'production' grade solution.
Works well as a temporary hack to get to the cluster insides.
Using NodePort means it will open a port on all nodes of your cluster. In your example above, the port exposed to the outside world is 32406.
In order to access hello-service (if it is HTTP) it will be http://[the node IP]:32406/. This will hit your minikube and the request will be routed to one of your pods in a round-robin fashion.
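With minikube you can also ask for the ready-made URL instead of assembling the node IP and port yourself; given the node IP and nodePort from your output it should print something like:
$ minikube service hello-service --url
http://10.49.106.251:32406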
I had the same problem when trying to deploy a simple hello-world image locally with Kubernetes v1.9.2.
After two weeks of attempts, it turned out that the nginx web server listens inside the container on port 80, not 8080.
So this should work: kubectl expose deployment hello-service --type=NodePort --port=80
I've setup a NodePort service using the following config:
wordpress-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: wordpress
Is this sufficient to access the service externally? If so, how can I now access the service? What details do I need, and how do I determine them, for example the node IP?
For Kubernetes on GCE:
We had the same question regarding services of type NodePort: How do we access node port services from our own host?
@ivan.sim's answer (nodeIp:nodePort) is on the mark; however, you still wouldn't be able to access your service unless you add a firewall ingress (inbound to Google Cloud) traffic rule on the VPC network console to allow your host to reach your compute node.
The above rule is dangerous and should be used only during development.
You can find the node port using either the Google Cloud console or subsequent kubectl commands that find the node running the pod with your container, i.e. kubectl get pods, kubectl describe pod your-pod-name, kubectl describe node node-that-runs-your-pod; .status.addresses has your ExternalIP.
It would be great if we could extract the node IP running our container in the pod using only a label/selector and a few lines of commands, so here is what we did (in this case our selector is app: your-label):
$ nodename=$(kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.app=="your-label")].spec.nodeName}')
$ nodeIp=$(kubectl get nodes -o jsonpath='{.items[?(@.metadata.name=="'$(echo $nodename)'")].status.addresses[?(@.type=="ExternalIP")].address}')
$ echo $nodeIp
notice: we used json path to extract the information we desired, for more on json path see: json path
You could certainly turn this into a script that takes a label/selector as input and outputs the external IP of the node running your container, for example:
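A rough sketch of such a script, reusing the jsonpath queries above (the fixed label key app and the single-matching-pod assumption are simplifications):
#!/usr/bin/env bash
# Usage: ./node-external-ip.sh <app-label-value>
# Prints the ExternalIP of the node running the pod labelled app=<app-label-value>.
label="$1"
nodename=$(kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.app=="'"$label"'")].spec.nodeName}')
kubectl get nodes -o jsonpath='{.items[?(@.metadata.name=="'"$nodename"'")].status.addresses[?(@.type=="ExternalIP")].address}'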
To get the nodePort just type:
$ kubectl get services
Under the PORT(S) column you will see something like port:nodePort (for example 8080:32406/TCP); this nodePort is what you want.
nodeIp:nodePort
When you define a service as type NodePort, every node in your cluster will proxy that port to your service. If your nodes are reachable from outside the Kubernetes cluster, you should be able to access the service at nodeIP:nodePort.
To determine the nodeIP of a particular node, you can use either kubectl get no <node> -o yaml or kubectl describe no <node>. The status.addresses field will be of interest; generally you will see entries like Hostname, ExternalIP and InternalIP there.
To determine the nodePort of your service, you can use either kubectl get svc wordpress -o yaml or kubectl describe svc wordpress. The spec.ports[].nodePort is the port you need.
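If you prefer a one-liner, the nodePort can also be read directly with jsonpath (assuming the service has a single port entry):
$ kubectl get svc wordpress -o jsonpath='{.spec.ports[0].nodePort}'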
A service defined like this gets assigned a high port number and is exposed on all your cluster nodes on that port (probably something like 3xxxx). It's hard to tell the rest without proper knowledge of how your cluster is provisioned; kubectl get nodes should give you some knowledge about your nodes.
Although I assume you want to expose the service to the outside world. In the long run I suggest getting familiar with LoadBalancer type services and Ingress / IngressController; a minimal Ingress sketch follows below.
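For completeness, a minimal Ingress sketch (it assumes an ingress controller such as ingress-nginx is already installed in the cluster; the host name is a placeholder):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
spec:
  rules:
    - host: wordpress.example.com # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80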