How to bind multiple ports in OpenShift pod YAML config? - kubernetes

How to bind multiple ports of a pod to make them visible on the pod IP?
Something analogous to Docker's docker run -p 1234:5555 -p 6789:9999 my_image
The only example of a YAML definition I've found in the documentation and tutorials uses a single port without any binding:
spec:
  containers:
  - name: my_container
    image: 'my_image'
    ports:
    - containerPort: 8080
Could you give a link to the documentation describing this case, or a short example of binding multiple ports?

spec.containers[].ports is an array, which means you can specify multiple ports in your Pod definition like so:
apiVersion: v1
kind: Pod
metadata:
  name: pod-multiple-ports
  labels:
    app: pod-multiple-ports
spec:
  containers:
  - name: my-container
    image: myexample:latest
    ports:
    - containerPort: 80
    - containerPort: 443
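If you also want a Service in front of the pod, each container port can be listed under the Service's spec.ports as well (when a Service exposes more than one port, every entry must have a name). A minimal sketch, assuming a Service named pod-multiple-ports and the port names http/https, which are not part of the original answer:
apiVersion: v1
kind: Service
metadata:
  name: pod-multiple-ports
spec:
  selector:
    app: pod-multiple-ports
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443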

Related

How to access a pod by its hostname from within another pod in the same namespace?

Is there a way to access a pod by its hostname?
I have a pod with hostname: my-pod-1 that needs to connect to another pod with hostname: my-pod-2.
What is the best way to achieve this without using Services?
Based on your description, a headless Service is what you're looking for. With a headless Service (and the pod's subdomain set to the Service name), you can reach a pod at <pod-hostname>.<service-name>.<namespace>.svc.cluster.local.
Alternatively, you can access the pod directly by its IP address.
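A minimal sketch of such a headless Service (the my-pods name and the app: my-pods selector are illustrative, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: my-pods            # pods become resolvable as <pod-hostname>.my-pods.<namespace>.svc.cluster.local
spec:
  clusterIP: None          # headless: no virtual IP, DNS returns the pod IPs directly
  selector:
    app: my-pods
For the per-pod DNS names to be created, each pod also needs hostname: (e.g. my-pod-1) and subdomain: my-pods set in its spec.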
To connect from one pod to another by name (rather than by IP), replace the other pod's IP with the name of a Service that points at it.
For example, suppose my-pod-1 (172.17.0.2) is running RabbitMQ and my-pod-2 (172.17.0.4) is running a RabbitMQ consumer (say, in Python).
In my-pod-2, instead of:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py","-p","5672","-s","172.17.0.2"]
Use:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py","-p","5672","-s","rabbitmq-svc"]
where rabbitmq_service.yaml is:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-svc
  namespace: rabbitmq-ns
spec:
  selector:
    app: rabbitmq
  ports:
  - name: rabbit-main
    protocol: TCP
    port: 5672
    targetPort: 5672
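Note that this Service lives in the rabbitmq-ns namespace, so if the consumer pod runs in a different namespace it needs the namespace-qualified name instead of the short rabbitmq-svc; for example (this line is just the args entry above, adjusted):
args: ["/app/consumer.py","-p","5672","-s","rabbitmq-svc.rabbitmq-ns.svc.cluster.local"]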

Kubernetes pod can't access other pods exposed by a service

New to Kubernetes.
To build our testing environment, I'm trying to set up a PostgreSQL instance in Kubernetes that's accessible to other pods in the testing cluster.
The pod and service are both syntactically valid and running. Both show in the output from kubectl get [svc/pods]. But when another pod tries to access the database, it times out.
Here's the specification of the pod:
# this defines the postgres server
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  hostname: postgres
  restartPolicy: OnFailure
  containers:
  - name: postgres
    image: postgres:9.6.6
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 5432
      protocol: TCP
And here is the definition of the service:
# this defines a "service" that makes the postgres server publicly visible
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  type: ClusterIP
  ports:
  - port: 5432
    protocol: TCP
I'm certain that something is wrong with at least one of those, but I'm not sufficiently familiar with Kubernetes to know which.
If it's relevant, we're running on Google Kubernetes Engine.
Help appreciated!
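One way to narrow this down (not part of the original question) is to check whether the Service actually has the pod as an endpoint, and whether the database answers on the cluster DNS name; a rough sketch, assuming everything is in the default namespace:
# an empty ENDPOINTS column means the Service selector doesn't match the pod's labels
kubectl get endpoints postgres
# try reaching the database from a throwaway pod inside the cluster
kubectl run -it --rm pg-test --image=postgres:9.6.6 --restart=Never -- \
  psql -h postgres.default.svc.cluster.local -U postgres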

How do I talk to a pod from a sidecar container in Kubernetes?

I cannot talk to a pod from a sidecar container... any help will be appreciated!
Here's my deployment
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sidecar-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sidecar
    spec:
      containers:
      - name: sidecar1
        image: sidecar
        args:
        - /sidecar
        - --port=32000
        - --path=/sidecar1
        ports:
        - containerPort: 32000
      - name: sidecar2
        image: sidecar
        args:
        - /sidecar
        - --port=32001
        - --path=/sidecar2
        ports:
        - containerPort: 32001
And here's my service to the pod
---
apiVersion: v1
kind: Service
metadata:
  name: sidecar-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 32001
    protocol: TCP
  selector:
    app: sidecar
  type: NodePort
After deploying an Ingress, I can connect to the service and to sidecar2, because sidecar2 is exposed via the service:
# this works
curl -L http://ADDR/sidecar2
But I was also expecting to be able to curl the other sidecar container, and I can't.
This is what I did: I exec into the sidecar container and curl the colocated container via localhost:
kubectl exec -it sidecar2 -- /bin/bash
# this doesn't work
curl -L http://localhost:32000/sidecar1
Can somebody help me on this?
Thanks!
If your sidecar image actually exposes the port (recheck your Dockerfile), you should be able to connect with curl localhost:<port>/sidecar1, since containers in the same pod share a network namespace.
If you have problems connecting from inside the container through the Service, it may be related to hairpin_mode.
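For example (using the pod and container names from the deployment above), you can exec into one container and probe the other over localhost:
# pick the pod created by the deployment
POD=$(kubectl get pods -l app=sidecar -o jsonpath='{.items[0].metadata.name}')
# open a shell in the sidecar2 container specifically
kubectl exec -it "$POD" -c sidecar2 -- /bin/sh
# then, from inside the container:
curl -v http://localhost:32000/sidecar1
If that still fails, sidecar1 is probably not listening on port 32000 at all, which points back at the image or its args rather than at Kubernetes networking.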

Kubernetes cluster, two containers (in different pods) are running on the same port

Can I create two pods whose containers listen on the same port in one Kubernetes cluster, given that I will create a separate service for each?
Something like this:
-- Deployment 1
kind: Deployment
spec:
  containers:
  - name: <name>
    image: <image>
    imagePullPolicy: Always
    ports:
    - containerPort: 8080

-- Service 1
kind: Service
spec:
  type: LoadBalancer
  ports:
  - port: 8081
    targetPort: 8080

-- Deployment 2
kind: Deployment
spec:
  containers:
  - name: <name>
    image: <image>
    imagePullPolicy: Always
    ports:
    - containerPort: 8080

-- Service 2
kind: Service
spec:
  type: LoadBalancer
  ports:
  - port: 8082
    targetPort: 8080
but this approach is not working.
Sure you can. Every Pod (the basic workload unit in Kubernetes) gets its own network namespace, isolated from the others (as long as you don't use advanced networking options such as hostNetwork), so you can have as many pods as you want that bind the same port. You can't have two containers inside the same Pod that bind the same port, though.
Yes, they are different containers in different pods, so there shouldn't be any conflict between them.
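For reference, a structurally complete version of one of the Deployment/Service pairs could look like this (the app-one names and labels are illustrative; the second pair would be identical apart from the names and the Service port):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
      - name: app-one
        image: <image>
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-one-svc
spec:
  type: LoadBalancer
  selector:
    app: app-one
  ports:
  - port: 8081
    targetPort: 8080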

Using a pod without using the node IP

I have a Postgres pod running locally on a CoreOS VM.
I am able to access Postgres using the IP of the minion it is on, but I'm trying to set it up so that I don't have to know exactly which minion the pod is on and can still use Postgres.
Here is my pod
apiVersion: v1
kind: Pod
metadata:
  name: postgresql
  labels:
    role: postgres-client
spec:
  containers:
  - image: postgres:latest
    name: postgres
    ports:
    - containerPort: 5432
      hostPort: 5432
      name: pg-port
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    nfs:
      server: nfs.server
      path: /
and here is a service I tried to set up, but it doesn't seem correct:
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgres-client
I'm guessing that the selector for your service is not finding any matching backends.
Try changing
app: postgres-client
to
role: postgres-client
in the service definition (or vice versa in the pod definition above).
The label selector has to match both the key and value (i.e. role and postgres-client). See the Labels doc for more details.
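In other words, the Service from the question only needs its selector changed to match the label that is already on the pod (a sketch of the suggested fix, nothing else changed):
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    role: postgres-client   # must match the pod's label key and value exactly
You can confirm the selector matches with kubectl get endpoints postgres-client; once it lists the pod's IP, the Service will route traffic to it.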