How is containerPort different from targetPort for a container in Kubernetes?
Are they used interchangeably, and if so, why?
I came across the code snippet below, where containerPort is used to denote the port on a Pod in Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres-pod
      app: demo-voting-app
  template:
    metadata:
      name: postgres-pod
      labels:
        name: postgres-pod
        app: demo-voting-app
    spec:
      containers:
        - name: postgres
          image: postgres:9.4
          ports:
            - containerPort: 5432
In the above code snippet, 5432 is given for the containerPort parameter (in the last line). So how is this containerPort different from targetPort?
As far as I know, the term port in general refers to the port on the Service (in Kubernetes). Correct me if I'm wrong.
In a nutshell: targetPort and containerPort basically refer to the same port (so if both are used they are expected to have the same value) but they are used in two different contexts and have entirely different purposes.
They cannot be used interchangeably, as each belongs to the specification of a distinct Kubernetes resource/object: the Service and the Pod, respectively. While containerPort can be treated as purely informational, targetPort is what the Service that exposes a set of Pods uses to direct traffic to them.
It's important to understand that declaring containerPort with a specific value in your Pod/Deployment specification does not make your Pod actually expose that port. E.g. if you declare in the containerPort field that your nginx Pod exposes port 8080 instead of the default 80, you still need to configure the nginx server in your container to listen on that port.
Declaring containerPort in the Pod specification is optional. Even without it, your Service will know where to direct requests, based on the targetPort declared in its own spec.
It's good to remember that it's not required to declare targetPort in the Service definition. If you omit it, it defaults to the value you declared for port (which is the port of the Service itself).
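To make the relationship concrete, here is a minimal sketch of a Service that could expose the postgres Pod from the snippet above (the name postgres-service is made up for illustration); targetPort points at the port postgres actually listens on, which is the same value declared as containerPort:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service          # hypothetical name, not from the question
spec:
  selector:
    name: postgres-pod
    app: demo-voting-app
  ports:
    - port: 5432        # the port of the Service itself
      targetPort: 5432  # the port the container listens on; could be omitted since it equals port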
containerPort in the Pod spec:
List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed
targetPort in the Service spec:
Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map).
Hence targetPort in the Service needs to match the containerPort in the Pod spec (more precisely, the port the container actually listens on), because that's how the Service knows which container port to forward the traffic to.
containerPort is the port on which the app inside the container can be reached.
targetPort is the port to which the Service forwards traffic inside the Pod; through it, the Service connects the Pod to other services or users.
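One way to see this matching in practice (assuming the hypothetical postgres-service sketched above exists) is to inspect the Endpoints object Kubernetes maintains for the Service; it lists the Pod IPs paired with the resolved target port:
# The Service and the endpoints it forwards to (<pod-ip>:<targetPort> pairs)
kubectl get service postgres-service -o wide
kubectl get endpoints postgres-service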
Excuse my relative networking ignorance, but I've read a lot of docs and still have trouble understanding this (perhaps due to lack of background in networks).
Given this Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package.json /code/
WORKDIR /code
RUN npm install
COPY server.js /code/
EXPOSE 3000
CMD ["node", "server.js"]
...this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      containers:
        - name: web
          image: kahunacohen/hello-k8s
          ports:
            - containerPort: 3000
              protocol: TCP
and this service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
My understanding is that:
The app in my container is exposing itself to the outside world on 3000
my deployment yaml is saying, "the container is listening on 3000"
my service is saying map 3000 internally to port 80, which is the default port, so you don't have to add the port to the host.
I'm using the NodePort type because, on local clusters like Docker Desktop, it works out of the box, unlike LoadBalancer. It opens up a random port in the 30000-32767 range on every node (pod?) in the cluster to the outside. That node port is how I access my app from outside, e.g. localhost:30543.
Are my assumptions correct? I am unclear why I can't access my app at localhost:80, or just localhost, if the service makes the mapping between the container port and the outside world? What's the point of the mapping between 3000 and 80 in the service?
In short, why do I need NodePort?
There are two networking layers, which we could call "inside the cluster" and "outside the cluster". The Pod and the Service each have their own IP address, but these are only inside the cluster. You need the NodePort to forward a request from outside the cluster to inside the cluster.
In a "real" Kubernetes cluster, you'd make a request...
...to http://any-kubernetes-node.example.com:31245/, with a "normal" IP address in the way you'd expect a physical system to have, connecting to the NodePort port, which forwards...
...to http://web-service.default.svc.cluster.local:80/, with a cluster-internal IP address and the service port, which looks at the pods it selects and forwards...
...to http://10.20.30.40:3000/, using the cluster-internal IP address of any of the matching pods and the target port from the service.
The containerPort: in the pod spec isn't strictly required (but if you give it name: http then you can have the service specify targetPort: http without knowing the specific port number). EXPOSE in the Dockerfile means pretty much nothing in this sequence.
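As a sketch of that named-port variant (the name http is just illustrative), the Pod template names its port and the Service refers to the name rather than the number:
# In the Deployment's Pod template
ports:
  - name: http
    containerPort: 3000
    protocol: TCP
# In the Service
ports:
  - port: 80
    targetPort: http   # resolved against the named containerPort above
    protocol: TCP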
This sequence also gives you some flexibility in not needing to know where things are running. Say you have 100 nodes and 3 replicas of your pod: the initial connection can go to any node, and the Service will forward it to one of the target pods, without the caller needing to know any of these details.
(For completeness, a LoadBalancer type service requests that a load balancer be created outside the cluster; for example, an AWS ELB. This forwards to any of the cluster nodes as in step 1 above. If you're not in a cloud environment and the cluster doesn't know how to create the external load balancer automatically, it's the same as NodePort.)
If we reduce this to a local Kubernetes installation (Docker Desktop, minikube, kind) the only real difference is that there's only one node; the underlying infrastructure is still built as though it were a multi-node distributed cluster. How exactly you access a service differs across these installations. In Docker Desktop, from the host system, you can use localhost as the "normal" "external" node IP address in the first step.
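On Docker Desktop, for example, you could look up the randomly assigned node port for web-service from the question and then reach the app via localhost (a sketch; the actual port number will differ):
# Show the Service; the node port appears in the PORT(S) column as something like 80:3xxxx/TCP
kubectl get service web-service
# Or extract it directly and curl the app from the host
curl http://localhost:$(kubectl get service web-service -o jsonpath='{.spec.ports[0].nodePort}')/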
Is there a way to map a port directly from the node to a pod running on the same node, bypassing services, load balancers and ingress? So basically, if a pod is deployed on node X, then node X needs to listen on port 1234 and map that port directly into a pod running on node X, also on port 1234. So basically there would be no cross-node connections. And whatever node Kubernetes decides to deploy the pod on, that node becomes the new host for the external connections.
I am fully aware that this goes against all the design principles of a Kubernetes cluster. But I am trying to host an old custom-built cloud app that was built for a one-off custom cloud solution, and see if I can host it on Kubernetes. Each pod in the stateful set needs a dedicated public IP assigned to it, because the public IP gets sent to external devices to redirect them to the correct pod. The protocol is also custom, so there is no Layer 7 load balancer for it. So the only solution I can come up with is a direct port mapping from the node into the pod.
You can use hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet in the Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 8080
With hostNetwork: true the Pod's processes bind directly to the host's network namespace, so whatever port the application inside actually listens on (8080 in this example) is reachable via nodeip:8080 without creating a Service or Ingress.
Have you considered using a Service of type NodePort? This will give you a "static" port on every node in the cluster through which the pod can be reached. The port number will be high (30000+) but reliable. This is described in the narrative documentation.
I have a minikube cluster with WordPress running in one deployment and MySQL in another. Both deployments have corresponding services. The definition of the WordPress service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
  type: LoadBalancer
The service works fine, and minikube service gives me a nice path, with an address of the minikube ip and a random high port. The problem is that WordPress needs a full URL as the site name. I'd rather not change it every single time, and would like to have a local DNS name for the cluster.
Is there a way to expose the LoadBalancer on an arbitrary port in minikube? I'll be fine with any port, as long as the port is decided by me and not by minikube itself.
Keep in mind that Minikube is unable to provide a real load balancer the way different cloud providers do; it merely simulates one by using a simple NodePort Service instead.
You can have full control over the port that is used. First of all, you can specify it manually in the NodePort Service specification (remember it should be within the default range: 30000-32767):
If you want a specific port number, you can specify a value in the
nodePort field. The control plane will either allocate you that port
or report that the API transaction failed. This means that you need to
take care of possible port collisions yourself. You also have to use a
valid port number, one that’s inside the range configured for NodePort
use.
Your example may look as follows:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30000
  type: NodePort
You can also change this default range by providing a custom value via the --service-node-port-range flag when starting your kube-apiserver.
When you use a Kubernetes cluster set up by the kubeadm tool (Minikube also uses it as its default bootstrapper), you need to edit the /etc/kubernetes/manifests/kube-apiserver.yaml file and provide the required flag with your custom port range.
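For example, the relevant excerpt of that static Pod manifest would look roughly like this (the range 20000-22767 is only an illustration):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-apiserver
        - --service-node-port-range=20000-22767   # custom NodePort range
        # ...the remaining flags stay unchanged...
Minikube users can typically pass the same setting at start time, e.g. minikube start --extra-config=apiserver.service-node-port-range=20000-22767.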
I have an application (let's say Integration-protocol-api),
and this application wants to talk to another application, but that application is located on another network (let's call it Another-Integration-Protocol).
The problem is that on the another-integration-protocol side there is a whitelist, which only allows connections from selected IP addresses.
But my integration-protocol-api is Dockerized and running on a Kubernetes cluster, so the IP address changes every time I restart my pod.
How can I assign a public and static IP to my Kubernetes Pod?
But my integration-protocol-api is Dockerized and running on a Kubernetes cluster, so the IP address changes every time I restart my pod. How can I assign a public and static IP to my Kubernetes Pod?
There are several approaches, depending on your actual setup/needs and I'll try to give some options here:
Tie the pod to a specific node and expose that node's IP address through a Service. This would be something along these lines:
# Quick deployment/pod manifest node selector (affinity is better)
...
spec:
  nodeSelector:
    kubernetes.io/hostname: my-node-name
...
# Service manifest
apiVersion: v1
kind: Service
metadata:
  name: svc-myservice
  labels:
    app: myapp
    tier: frontend
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
    - name: tcpserviceport
      protocol: TCP
      port: 8080
      targetPort: 80
  externalIPs:
    - 111.222.222.111
The pod should be in the same namespace, tied to that node via either a node selector or affinity rules, and have the same labels as in the Service's selector so the Service can pick it up. The IP address of the cluster node named my-node-name should be 111.222.222.111 in this example, and the application would then be reachable on that IP address through port 8080.
If applicable, expose the Service through an Ingress and whitelist only the Ingress's public IP. Depending on your namespace separation, you'll reference your pod (wherever it might run) in the Ingress through the corresponding Service, using either the service name (within the same namespace) or an FQDN such as:
<service-name>.<namespace-name>.svc.cluster.local
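For example, from any pod in the cluster (or from an Ingress controller running in it), the Service sketched above could be reached through its cluster DNS name, assuming it lives in the default namespace:
# From inside any pod in the cluster
curl http://svc-myservice.default.svc.cluster.local:8080/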
Here is a good overview of some of these methods, to make it more illustrative, from the Kubernetes docs: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/
A pod makes any request with its node's IP address as the source. So you could whitelist your cluster nodes' IP addresses, and it should work.
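A quick way to collect those node addresses for the whitelist (a sketch; which address types appear depends on your cluster) is:
# Shows INTERNAL-IP / EXTERNAL-IP columns for every node
kubectl get nodes -o wide
# Or print only the external IPs
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'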
I understand how to set targetPort as an integer value when defining a Service in k8s.
However, I'm a little confused about how to set targetPort with a string value.
Is there any example of this?
Thanks,
This Service is for Prometheus. In the following manifest, you first have to define a port named web in the Deployment (in its Pod template's container ports) before you can refer to it as a string in targetPort.
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  ports:
    - name: web
      nodePort: 30900
      port: 9090
      protocol: TCP
      targetPort: web
  selector:
    prometheus: k8s
  type: NodePort
To address the comment by @sfgroups:
Port number should be an integer. Is there a reason you want to set a string value?
I actually don't ever use numbers in my targetPort, because from the point of view of a Service that is the contract you have with the Pods: as in Eugene's snippet, the Service will provide "web" content on port 9090 to the outside, and will do so using the exposed (key word there) port named "web" on the Pod. It is then up to the Pod to map its "web" port to the integer port in its containers. So whether they want to use nginx on :80, tomcat on :8080, node on :3000, and so on, is up to the Pod and its containers, and should not be a concern of the Service.
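To make that contract concrete, here is a sketch of the Pod side that the prometheus-k8s Service above would select (the image and surrounding structure are illustrative): the container declares a port named web, and it is free to bind whatever integer port its process actually listens on (9090, Prometheus' default, in this case):
# Pod template of the matching Deployment/StatefulSet (excerpt)
template:
  metadata:
    labels:
      prometheus: k8s            # matches the Service selector
  spec:
    containers:
      - name: prometheus
        image: quay.io/prometheus/prometheus
        ports:
          - name: web            # the name that targetPort: web refers to
            containerPort: 9090  # the integer port the process actually listens on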