What does the colon mean in the list of ports when running kubectl get services - kubernetes

If I run kubectl get services for a simple demo service I get the following response:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-service LoadBalancer 10.104.48.115 <pending> 80:32264/TCP 18m
What does the : mean in the port list?

External access to demo-service happens via port 32264 on any node, which forwards to port 80 of the service and from there to the container's target port.

The meaning of 80:32264/TCP is this:
demo-service forwards its port 80 to your pod, and 32264/TCP is the NodePort, meaning you can use a node IP on that port to reach the application running in the pod from outside the cluster. The colon simply separates the two so you can tell which is the internal (service) port and which is the external (node) port for reaching the pod.

This means that your service demo-service can be reached on port 80 from other containers and on the NodePort 32264 from the "outer" world.
In this particular case it would also be reachable through a load balancer, which is provisioned/managed by some sort of Kubernetes controller (here the EXTERNAL-IP is still <pending>).

Though this is old, I want to add a different answer.
For a LoadBalancer-type service, the port before the : is the port your service exposes, usually specified by the admin in the service YAML file. The port after the : is a NodePort on the node, usually assigned randomly by the system.
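To make the mapping concrete, here is a minimal sketch of a Service manifest that would produce a PORT(S) entry like 80:32264/TCP. The nodePort is normally auto-assigned and is pinned here only to mirror the output above; the selector is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  type: LoadBalancer
  selector:
    app: demo            # assumed pod label
  ports:
  - port: 80             # the service port (before the colon)
    targetPort: 80       # the container port the service forwards to
    nodePort: 32264      # the node port (after the colon); auto-assigned if omitted
    protocol: TCP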

Related

expose Istio-gateway on port 80

I'm running a bare metal Kubernetes cluster with 1 master node and 3 worker Nodes. I have a bunch of services deployed inside with Istio as an Ingress-gateway.
Everything works fine since I can access my services from outside using the ingress-gateway NodePort.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.106.9.2 <pending> 15021:32402/TCP,80:31106/TCP,443:31791/TCP 2d23h
istiod ClusterIP 10.107.220.130 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d23h
In our case that port is 31106.
The issue is, I don't want my customers to access my service on port 31106; that's not user-friendly. So is there a way to expose port 80 to the outside?
In other words, instead of typing http://example.com:31106/, I want them to be able to type http://example.com/
Any solution could help.
Based on official documentation:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
This is in line with what David Maze wrote in the comment:
A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up metallb for an implementation of that. The NodePort port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.
In your situation you need to access the gateway using the NodePort, and then configure Istio accordingly. Everything is described step by step in this doc. You need to choose the instructions corresponding to NodePort and then set the ingress IP depending on the cluster provider. You can also find sample YAML files in the documentation.
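As a rough sketch of those NodePort instructions (assuming the default istio-system namespace and the http2/https port names of a stock istio-ingressgateway install), the lookup boils down to something like:

# NodePorts assigned to the ingress gateway's HTTP and HTTPS ports
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
# On bare metal, the ingress host is simply one of your node IPs
export INGRESS_HOST=<node-ip>

curl -s "http://$INGRESS_HOST:$INGRESS_PORT/"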

GKE 1 load balancer with multiple apps on different assigned ports

I want to be able to deploy several single-pod apps and access them on a single IP address, leaning on Kubernetes to assign the ports as it does when you use a NodePort service.
Is there a way to use NodePort with a load balancer?
Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes. There don't seem to be firewall controls like on their unmanaged VMs.
Here's a service if we need something to base an answer on. In this case, I want to deploy 10 of these services, which are different applications, on the same IP, each publicly accessible on a different port, and each proxying port 80 of its nginx container.
---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
  - name: foo
    protocol: TCP
    port: 80
  type: NodePort
GKE seems to block direct access to the nodes.
GCP allows creating the FW rules that allow incoming traffic either to 'All Instances in the Network' or 'Specified Target Tags/Service Account' in your VPC Network.
Rules are persistent unless the opposite is specified under the organization's policies.
Node's external IP address can be checked at Cloud Console --> Compute Engine --> VM Instances or with kubectl get nodes -o wide.
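The post doesn't spell out the exact rule, but a hedged sketch of opening the whole NodePort range to your GKE nodes with gcloud could look like this; the rule name, source range and node target tag are placeholders:

gcloud compute firewall-rules create allow-k8s-nodeports \
  --allow tcp:30000-32767 \
  --source-ranges 0.0.0.0/0 \
  --target-tags <your-gke-node-tag>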
I run GKE (managed k8s) and can access all my assets externally.
I have opened all the needed ports in my setup. Below you can find my setup and the quickest example:
$ kubectl get nodes -o wide
NAME AGE VERSION INTERNAL-IP EXTERNAL-IP
gke--mnnv 43d v1.14.10-gke.27 10.156.0.11 34.89.x.x
gke--nw9v 43d v1.14.10-gke.27 10.156.0.12 35.246.x.x
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) SELECTOR
knp-np NodePort 10.0.11.113 <none> 8180:30008/TCP 8180:30009/TCP app=server-go
$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test
That is why it looks like a bunch of NodePort-type Services would be sufficient (each one serves requests for a particular selector).
If for some reason it's not possible to set up the FW rules to allow traffic directly to your Nodes, it's possible to configure a GCP TCP LoadBalancer.
Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.
There you can select your GKE Nodes (or pool of nodes) as a 'Backend' and specify all the needed ports for the 'Frontend'. For the Frontend you can Reserve Static IP right during the configuration and specify 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to be forwarded to your node pool). Additionally, you can create multiple 'Frontends' if needed.
I hope that helps.
Is there a way to use NodePort with a load balancer?
The Kubernetes LoadBalancer service type builds on top of NodePort. So internally a LoadBalancer uses NodePort, meaning that when a LoadBalancer-type service is created it automatically maps to a NodePort. Although it's tricky, it is also possible to create a NodePort-type service and manually configure the Google-provided load balancer to point to the NodePorts.
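To illustrate, a hypothetical LoadBalancer variant of the Service above still gets a node port allocated automatically, which is what appears after the colon in the PORT(S) column:

apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  type: LoadBalancer     # builds on NodePort: a node port is still allocated
  selector:
    app: nginx
  ports:
  - name: foo
    protocol: TCP
    port: 80             # shows up as 80:<allocated-node-port>/TCP in kubectl get svc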

How to access kubernetes services externally on bare-metal cluster

I have an api service with type ClusterIP which is working fine and is accessible on the node with the cluster IP. I want to access it externally. It's a bare-metal installation with kubeadm. I cannot use LoadBalancer or NodePort.
If I use nginx-ingress, that too I will use as ClusterIP, so how do I get the service externally accessible in either the api service or the nginx-ingress case?
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api ClusterIP 10.97.48.17 <none> 80/TCP 41s
ingress-nginx ClusterIP 10.107.76.178 <none> 80/TCP 3h49m
Changes to solve the issue:
nginx configuration on node
in /etc/nginx/sites-available
upstream backend {
    server node1:8001;
    server node2:8001;
    server node3:8001;
}
server {
    listen 80;
    server_name _;
    location / {
        # Forward every request to the backend nodes defined above.
        proxy_pass http://backend;
    }
}
Ran my two services as DaemonSets.
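The post doesn't include the workload manifests, but a minimal sketch of one of those DaemonSets with a hostPort (so that every node listens on 8001, matching the upstream block above) could look like this; the name and image are placeholders:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: example/api:latest     # placeholder image
        ports:
        - containerPort: 8001
          hostPort: 8001              # each node listens on 8001 for nginx's upstream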
ClusterIP services are accessible only within the cluster.
For bare-metal clusters, you can use any of the following approaches to make a service available externally, ordered from most recommended to least recommended:
Use MetalLB to implement LoadBalancer service type support - https://metallb.universe.tf/ (see the configuration sketch after this list). You will need a pool of IP addresses for MetalLB to hand out. It also supports an IP-sharing mode where you can use the same IP for multiple LoadBalancer services.
Use a NodePort service. You can access your service at any <node IP>:<node port> address. A NodePort service selects a random port in the node port range by default; you can choose a custom port in that range using the spec.ports.nodePort field in the service specification.
Disadvantage: the node port range is 30000-32767 by default, so you cannot bind to an arbitrary custom port like 8080. Although you can change the node port range with the --service-node-port-range flag of kube-apiserver, it is not recommended to use it with low port ranges.
Use hostPort to bind a port on the node.
Disadvantage: you don't have a fixed IP address, because you don't know which node your pod gets scheduled to unless you use node affinity. You can make your pod a DaemonSet if you want it to be accessible from all nodes on the given port.
If you are dealing with HTTP traffic, another option is installing an Ingress controller like nginx or Traefik and using an Ingress resource. As part of their installation, they use one of the approaches mentioned above to make themselves available externally.
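As a sketch of the MetalLB option mentioned first above: newer MetalLB releases (0.13+) configure the address pool with CRDs roughly like the following (older releases used a ConfigMap instead). The pool name and address range are placeholders for addresses on your node network that you actually control:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250    # placeholder range; must be routable to your nodes
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool

With this in place, LoadBalancer services get an address from the pool instead of staying <pending>.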
Well, as you can guess by reading the name, ClusterIP is only accessible from inside the cluster.
To make a service accessible from outside the cluster, you have 3 options:
NodePort Service type
LoadBalancer Service type (you still have to manage your LoadBalancer manually though)
Ingress
There is a fourth option, hostPort (which is not a service type), but I'd rather keep it for the special case where you're absolutely sure that your pod will always be located on the same node (or for debugging).
Having said this, since LoadBalancer and NodePort are ruled out here, that leaves us with only one solution offered by Kubernetes: Ingress.
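A hedged sketch of such an Ingress for the api service from the question, assuming the nginx Ingress controller is installed and a hostname of your choosing:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx          # assumes the nginx ingress controller is in place
  rules:
  - host: api.example.com          # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api              # the ClusterIP service from the question
            port:
              number: 80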

Kubernetes is giving junk external ip

Kubernetes is giving a junk external IP; check the output of the command below:
$ kubectl get svc frontend -n web-console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 100.68.90.01 a55ea503bbuddd... 80:31161/TCP 5d
Please help me understand what this external IP means.
According to this: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
It seems you selected the LoadBalancer type, so your cloud provider provisioned a load balancer for you, and that EXTERNAL-IP value is the load balancer's DNS name (truncated in the output).
The options that allow you to expose your application for access from outside the cluster are:
Kubernetes Service of type LoadBalancer
Kubernetes Service of type ‘NodePort’ + Ingress
A Service in Kubernetes is an abstraction defining a logical set of Pods and an access policy and it can be exposed in different ways by specifying a type (ClusterIP, NodePort, LoadBalancer) in the service spec. The LoadBalancer type is the simplest approach.
Once the service is created, it has an external IP address as in your output:
$ kubectl get svc frontend -n web-console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 100.68.90.01 a55ea503bbuddd... 80:31161/TCP 5d
Now, the service 'frontend' can be accessed from outside the cluster without the need for additional components like an Ingress.
To test the external IP run this curl command from your machine:
$ curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
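Since the EXTERNAL-IP column truncates long load balancer hostnames, one way (as a sketch) to print the full value is to read it from the Service status; providers that hand out an IP instead populate .ip rather than .hostname:

kubectl get svc frontend -n web-console \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'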
An ExternalIP gives you the possibility to access services from outside the cluster (the ExternalIP acts as an extra endpoint). A ClusterIP-type service with an ExternalIP can still be accessed inside the cluster using its service.namespace DNS name, but now it can also be accessed from its external endpoint, too.

what is the use of external IP address with a ClusterIP service type in kubernetes

What is the use of the external IP address option in a Kubernetes service when the service is of type ClusterIP?
An ExternalIP is just an endpoint through which services can be accessed from outside the cluster, so a ClusterIP type service with an ExternalIP can still be accessed inside the cluster using its service.namespace DNS name, but now it can also be accessed from its external endpoint, too. For instance, you could set the ExternalIP to the IP of one of your k8s nodes, or create an ingress to your cluster on that IP.
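As a brief sketch, the externalIPs field sits on an otherwise ordinary ClusterIP Service; the name, selector, ports and address below are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # type defaults to ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 203.0.113.10       # an address that routes to one of your nodes; not managed by Kubernetes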
ClusterIP is the default service type in Kubernetes which allows you to reach your service only within the cluster.
If your service type is set to LoadBalancer or NodePort, a ClusterIP is automatically created, and the LoadBalancer or NodePort service will route to that ClusterIP address.
New external IP addresses are only allocated with the LoadBalancer type.
You can also use the nodes' external IP addresses when you set your service as NodePort. But in this case you will need extra firewall rules for your nodes to allow ingress traffic to your exposed node ports.
When you are using a service with type: ClusterIP it has only a cluster IP and no external IP address (<none>).
A ClusterIP is a unique IP given from the IP pool to a service; to reach the pods behind this service, the cluster IP can be used only inside the cluster. ClusterIP is the default service type in Kubernetes.
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
The above example will create a service with both an external IP and a cluster IP.
In the case of LoadBalancer and NodePort services, the service can also be accessed from outside the cluster through the external IP.
Just to add to coolinuxoid's answer: I'm working with GKE and, based on their documentation, when you add a Service of type ClusterIP they suggest accessing it like this:
Accessing your Service
List your running Pods:
kubectl get pods
In the output, copy one of the Pod names that begins with
my-deployment.
NAME READY STATUS RESTARTS AGE
my-deployment-dbd86c8c4-h5wsf 1/1 Running 0 2m51s
Get a shell into one of your running containers:
kubectl exec -it pod-name -- sh
where pod-name is the name of one of the Pods in my-deployment.
In your shell, install curl:
apk add --no-cache curl
In the container, make a request to your Service by using your cluster
IP address and port 80. Notice that 80 is the value of the port field
of your Service. This is the port that you use as a client of the
Service.
curl cluster-ip:80
So, while you might find a way to route to such a Service from outside, it's not a recommended/ordinary way to do it.
As a better alternative either use:
LoadBalancer if you are working on a project with a lot of services and detailed requirements.
NodePort if you are working on something smaller where cloud-native load balancing is not needed and you don't mind using your node's IP directly to map it to the Service. Incidentally, the same document does suggest an option to do so (tested it personally; works like a charm):
If the nodes in your cluster have external IP addresses, find the
external IP address of one of your nodes:
kubectl get nodes --output wide
The output shows the external IP addresses of your nodes:
NAME STATUS ROLES AGE VERSION EXTERNAL-IP
gke-svc-... Ready none 1h v1.9.7-gke.6 203.0.113.1
Not all clusters have external IP addresses for nodes. For example,
the nodes in private clusters do not have external IP addresses.
Create a firewall rule to allow TCP traffic on your node port:
gcloud compute firewall-rules create test-node-port --allow
tcp:node-port
where: node-port is the value of the nodePort field of your Service.
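Putting those two steps together, a rough sketch (the service name is a placeholder, and the node IP is the example address from the output above) would be:

# look up the allocated node port of your Service
NODE_PORT=$(kubectl get service my-service \
  -o jsonpath='{.spec.ports[0].nodePort}')

# open the firewall for it, then hit a node's external IP on that port
gcloud compute firewall-rules create test-node-port --allow tcp:$NODE_PORT
curl http://203.0.113.1:$NODE_PORT/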