Kubernetes cluster, two containers (different pods) running on the same port - kubernetes

Can I create two pods whose containers listen on the same port in one Kubernetes cluster, given that I will create a separate Service for each?
Something like this:
-- Deployment 1
kind: Deployment
spec:
  containers:
  - name: <name>
    image: <image>
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
-- Service 1
kind: Service
spec:
  type: LoadBalancer
  ports:
  - port: 8081
    targetPort: 8080
-- Deployment 2
kind: Deployment
spec:
  containers:
  - name: <name>
    image: <image>
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
-- Service 2
kind: Service
spec:
  type: LoadBalancer
  ports:
  - port: 8082
    targetPort: 8080
But this approach is not working.

Sure you can. Every Pod (the basic workload unit in Kubernetes) is isolated from the others in terms of networking (as long as you don't mess with advanced networking options), so you can have as many Pods as you want that bind the same port. You can't have two containers inside the same Pod that bind the same port, though.

Yes, they are different containers in different pods, so there shouldn't be any conflict between them.
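For reference, a complete manifest pair along these lines might look like the sketch below (the names and image are placeholders). One likely reason the snippets in the question do not work as written is that each Service needs a selector matching its Deployment's Pod labels, and a Deployment's containers belong under spec.template.spec:

```yaml
# Deployment 1 and its Service; the second pair would be identical apart
# from names, labels, and the Service port (8082 instead of 8081).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one              # placeholder name
spec:
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
      - name: app-one
        image: <image>       # placeholder image
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-one
spec:
  type: LoadBalancer
  selector:
    app: app-one             # must match the Deployment's Pod labels
  ports:
  - port: 8081
    targetPort: 8080
```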

Related

How to get kubernetes service external ip dynamically inside manifests file?

We are creating a Deployment whose command needs the IP of a pre-existing Service pointing to a StatefulSet. Below is the manifest file for the Deployment. Currently we enter the Service's external IP into this manifest manually; we would like it to be populated automatically at runtime. Is there a way to achieve this dynamically, using environment variables or otherwise?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-api
  namespace: app-api
spec:
  selector:
    matchLabels:
      app: app-api
  replicas: 1
  template:
    metadata:
      labels:
        app: app-api
    spec:
      containers:
      - name: app-api
        image: asia-south2-docker.pkg.dev/rnd20/app-api/api:09
        command: ["java","-jar","-Dallow.only.apigateway.request=false","-Dserver.port=8084","-Ddedupe.searcher.url=http://10.10.0.6:80","-Dspring.cloud.zookeeper.connect-string=10.10.0.6:2181","-Dlogging$.file.path=/usr/src/app/logs/springboot","/usr/src/app/app_api/dedupe-engine-components.jar",">","/usr/src/app/out.log"]
        livenessProbe:
          httpGet:
            path: /health
            port: 8084
            httpHeaders:
            - name: Custom-Header
              value: ""
          initialDelaySeconds: 60
          periodSeconds: 60
        ports:
        - containerPort: 4016
        resources:
          limits:
            cpu: 1
            memory: "2Gi"
          requests:
            cpu: 1
            memory: "2Gi"
NOTE: The IP in question is the internal load balancer IP, i.e. the external IP of the Service, and the Service is in a different namespace. Below is its manifest:
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
  - name: container
    port: 80
    targetPort: 8080
    protocol: TCP
You could use the following command instead:
command:
- /bin/bash
- -c
- |-
  set -exuo pipefail
  ip=$(dig +search +short servicename.namespacename)
  exec java -jar -Dallow.only.apigateway.request=false -Dserver.port=8084 -Ddedupe.searcher.url=http://$ip:80 -Dspring.cloud.zookeeper.connect-string=$ip:2181 -Dlogging$.file.path=/usr/src/app/logs/springboot /usr/src/app/app_api/dedupe-engine-components.jar > /usr/src/app/out.log
It first resolves the IP address using dig (if you don't have dig in your image, you need to substitute it with something else you have), then execs your original java command.
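If the image lacks dig but has Python, the same lookup can be done with the standard library; a hedged sketch, where servicename.namespacename stands for your actual Service DNS name:

```python
import socket

def resolve_service_ip(name: str) -> str:
    """Resolve a DNS name, e.g. 'servicename.namespacename', to an IPv4 address."""
    return socket.gethostbyname(name)

# Demonstrated on localhost only so the call shape is clear;
# inside the cluster you would pass the Service's DNS name.
print(resolve_service_ip("localhost"))
```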
As of today I'm not aware of any "native" kubernetes way to provide IP meta information directly to the pod.
If you are sure the Services exist before the Pod starts, and you deploy into the same namespace, you can read them from environment variables. This is documented here: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables.
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see makeLinkVariables) that are compatible with Docker Engine's "legacy container links" feature.
For example, the Service redis-master which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment variables:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
Note: those won't update after the container is started.
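The naming rule above (Service name upper-cased, dashes converted to underscores) can be sketched as follows; the host and port values are of course only known at runtime:

```python
def service_env_var_names(svc_name: str) -> tuple:
    """Derive the {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT
    variable names that kubelet injects for a Service."""
    prefix = svc_name.upper().replace("-", "_")
    return (f"{prefix}_SERVICE_HOST", f"{prefix}_SERVICE_PORT")

print(service_env_var_names("redis-master"))
# → ('REDIS_MASTER_SERVICE_HOST', 'REDIS_MASTER_SERVICE_PORT')
```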

How to bind multiple ports in OpenShift pod YAML config?

How to bind multiple ports of a pod to make them visible on the pod IP?
Something analogous to Docker's docker run -p 1234:5555 -p 6789:9999 my_image
The only example of a YAML definition I've found in the documentation and tutorials uses a single port without binding:
spec:
  containers:
  - name: my_container
    image: 'my_image'
    ports:
    - containerPort: 8080
Could you give a link to the documentation describing the case or a short example of binding multiple ports?
spec.containers[].ports is an array, which means you can specify multiple ports in your Pod definition, like so:
apiVersion: v1
kind: Pod
metadata:
  name: pod-multiple-ports
  labels:
    app: pod-multiple-ports
spec:
  containers:
  - name: my-container
    image: myexample:latest
    ports:
    - containerPort: 80
    - containerPort: 443

How to access a pod by its hostname from within another pod of the same namespace?

Is there a way to access a pod by its hostname?
I have a pod with hostname my-pod-1 that needs to connect to another pod with hostname my-pod-2.
What is the best way to achieve this without Services?
From your description, a headless Service is what you are looking for. With a headless Service you can reach a Pod at <podName>.<serviceName>.<namespace>.svc.cluster.local.
Or access the Pod by its IP address.
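For reference, a headless Service is simply one with clusterIP: None; together with hostname and subdomain on the Pod it yields a DNS name of the form <hostname>.<service>.<namespace>.svc.cluster.local. A minimal sketch (names and image are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None              # headless: DNS returns Pod IPs directly
  selector:
    app: my-app
  ports:
  - port: 5672
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-1
  labels:
    app: my-app
spec:
  hostname: my-pod-1
  subdomain: my-headless-svc   # must match the Service name
  containers:
  - name: main
    image: <image>
```

my-pod-1 is then reachable as my-pod-1.my-headless-svc.<namespace>.svc.cluster.local.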
To connect from one pod to another by name (and not by IP), replace the other pod's IP with the name of the Service that points at it.
For example, if my-pod-1 (172.17.0.2) is running rabbitmq, and my-pod-2 (172.17.0.4) is running a rabbitmq consumer (say, in Python), then in my-pod-2, instead of running:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py","-p","5672","-s","172.17.0.2"]
Use:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py","-p","5672","-s","rabbitmq-svc"]
Where rabbitmq_service.yaml is:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-svc
  namespace: rabbitmq-ns
spec:
  selector:
    app: rabbitmq
  ports:
  - name: rabbit-main
    protocol: TCP
    port: 5672
    targetPort: 5672
Shlomi

Load balancing to multiple containers of same app in a pod

I have a scenario where I need two instances of an app container running within the same pod, set up to listen on different ports. Below is what the Deployment manifest looks like.
The Pod launches fine with the expected number of containers, and I can even connect to both ports on the pod IP from other pods.
kind: Deployment
metadata:
  labels:
    service: app1-service
  name: app1-dep
  namespace: exp
spec:
  template:
    spec:
      containers:
      - image: app1:1.20
        name: app1
        ports:
        - containerPort: 9000
          protocol: TCP
      - image: app1:1.20
        name: app1-s1
        ports:
        - containerPort: 9001
          protocol: TCP
I can even create two different Services, one for each port of the container, and that works great as well.
I can reach each Service individually and end up at the respective container within the Pod.
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: app1-s1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
I want both instances of the container behind a single Service that round-robins between them.
How can I achieve that? Is it possible within the realm of Services, or would I need to explore Ingress for something like this?
Kubernetes Services have three proxy modes: iptables (the default), userspace, and IPVS.
Userspace: the older way; it distributes in round-robin as its only mode.
Iptables: the default; it selects a pod at random and sticks with it.
IPVS: has multiple ways to distribute traffic, but first you have to install it on your nodes, for example on a CentOS node with yum install ipvsadm, and then make it available.
As I said, a Kubernetes Service by default does not round-robin. To activate IPVS you have to add parameters to kube-proxy:
--proxy-mode=ipvs
--ipvs-scheduler=rr (to select round robin)
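On clusters where kube-proxy is driven by a KubeProxyConfiguration file (for example, the kube-proxy ConfigMap created by kubeadm), the equivalent settings would look roughly like this:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round robin
```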
One can expose multiple ports using a single Service. In the Service manifest, spec.ports[] is an array, so one can specify multiple ports in it. For example, see below:
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  - name: http-s1
    port: 81
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
Now the hostname is the same for both, only the port differs, and by default kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
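Round-robin selection itself just cycles through the backends in order; conceptually:

```python
import itertools

# Hypothetical endpoints of the two containers in the Pod.
backends = ["10.0.0.5:9000", "10.0.0.5:9001"]
rr = itertools.cycle(backends)

# Each new connection is handed the next backend in turn.
picks = [next(rr) for _ in range(4)]
print(picks)  # alternates: 9000, 9001, 9000, 9001
```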
What I would do is separate the app into two different Deployments, with one container in each, set the same labels on both Deployments, and attach them both to one single Service.
This way, you don't even have to run them on different ports.
Later, if you want one of them to receive more traffic, just play with the number of replicas of each Deployment.
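A sketch of that layout, with names chosen to match the question's manifests (app1-a / app1-b are illustrative Deployment names): both Deployments carry the same service: app1-service label, which is all the single Service selects on.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  selector:
    service: app1-service      # matches Pods of both Deployments
  ports:
  - name: http
    port: 80
    targetPort: 9000           # both Deployments can now use the same port
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-a
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: app1-a
  template:
    metadata:
      labels:
        deployment: app1-a
        service: app1-service  # shared label the Service selects
    spec:
      containers:
      - image: app1:1.20
        name: app1
        ports:
        - containerPort: 9000
# app1-b would be identical apart from its name and `deployment:` label.
```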

Discovering Kubernetes Pod without specifying port number

I have a single Kubernetes Service called MyServices which holds four deployments. Each deployment runs as a single pod, and each pod has its own port number.
As mentioned, all the pods run behind one Kubernetes Service.
I am able to call the services through the external IP address of that Service plus the port number.
Example: 92.18.1.1:3011/MicroserviceA or 92.18.1.1:3012/MicroserviceB
I am now trying to develop an orchestration layer that calls these services and gets a response from them. However, I am trying to figure out a way in which I do NOT need to specify every micro-service's port number, and can instead call them through their endpoint/ServiceName. Example: 192.168.1.1/MicroserviceA
How can I achieve that?
From an architecture perspective, is it a good idea to deploy all microservices behind a single Kubernetes Service (my current approach), or does each micro-service need its own Service?
Below is the Kubernetes deployment file (I removed the manifests for micro-services C and D since they are identical to A and B):
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: microservice
  ports:
  - name: microserviceA
    protocol: TCP
    port: 3011
    targetPort: 3011
  - name: microserviceB
    protocol: TCP
    port: 3012
    targetPort: 3012
  - name: microserviceC
    protocol: TCP
    port: 3013
    targetPort: 3013
  - name: microserviceD
    protocol: TCP
    port: 3014
    targetPort: 3014
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: microserviceAdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
      - image: dockerhub.com/myimage:v1
        name: microservice
        ports:
        - containerPort: 3011
      imagePullSecrets:
      - name: regcred
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: microserviceBdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
      - image: dockerhub.com/myimage:v1
        name: microservice
        ports:
        - containerPort: 3012
There is a way to discover all the ports of Kubernetes Services.
You could use kubectl get svc, as seen in "Source IP for Services with Type=NodePort":
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services <yourService>)
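The same jsonpath idea extends to all ports; for example, given the JSON that a kubectl get svc <name> -o json call returns, the name-to-port mapping could be pulled out like this (the svc dict below is a trimmed, hypothetical Service object):

```python
# Trimmed, hypothetical Service object, as `kubectl get svc myservice -o json`
# would return it (only the fields used here).
svc = {
    "spec": {
        "ports": [
            {"name": "microserviceA", "port": 3011},
            {"name": "microserviceB", "port": 3012},
        ]
    }
}

# Map each named port of the Service to its port number.
ports = {p["name"]: p["port"] for p in svc["spec"]["ports"]}
print(ports)  # {'microserviceA': 3011, 'microserviceB': 3012}
```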
"I am trying to figure out a way in which I do NOT need to specify every micro-service port number, instead I can call them through their endpoint/ServiceName."
Then you need to expose those services through one entry point, typically a reverse proxy like NGiNX.
The idea is to expose said services on the default ports (80 or 443) and reverse-proxy them to the actual URL and port number.
Check "Service Discovery in a Microservices Architecture" for the general idea.
And "Service Discovery for NGINX Plus with etcd" for an implementation (using NGiNX plus, so could be non-free).
Or "Setting up Nginx Ingress on Kubernetes" for a more manual approach.
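In-cluster, that same path-based routing can also be expressed with an Ingress (served by an ingress controller such as ingress-nginx). A hedged sketch, reusing the Service name and ports from the question's manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - http:
      paths:
      - path: /MicroserviceA
        pathType: Prefix
        backend:
          service:
            name: myservice
            port:
              number: 3011
      - path: /MicroserviceB
        pathType: Prefix
        backend:
          service:
            name: myservice
            port:
              number: 3012
```

With this in place, clients hit the ingress controller on port 80 and the path alone selects the micro-service.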