IP address assignment for the application running inside a pod in Kubernetes

I run my application in one pod and the Mongo database in another pod.
For my application to start up successfully, it needs to know the IP address where Mongo is running.
I have the questions below:
How do I find out the Mongo pod's IP address so that I can configure it in my application?
My application will run on some IP & port, and this is provided as part of some configuration file. But as these are containerized and Kubernetes assigns the Pod IP address, how can my application pick this IP address as its own IP?

You need to expose MongoDB using a Kubernetes Service. With the help of Services, an application does not need to know the actual IP address of the Pod; you can use the service name to resolve MongoDB.
Reference: https://kubernetes.io/docs/concepts/services-networking/service/
An example using mysql:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    name: mysql
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: wppassword
    ports:
    - containerPort: 3306
      name: mysql
If there is an application container in the same namespace trying to use the mysql container, it can connect directly via mysql:3306 without using the Pod IP address, and via mysql.namespace_name:3306 if the app is in a different namespace.
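For the second part of the question (how the application can learn its own Pod IP), one common option is the Downward API, which can inject the Pod's IP into an environment variable. A minimal sketch, assuming the application reads its listen address from an environment variable; the variable name MY_POD_IP and the image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest          # placeholder image
    env:
    - name: MY_POD_IP             # placeholder name; the app reads it at startup
      valueFrom:
        fieldRef:
          fieldPath: status.podIP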

Related

Can an `ExternalName` service point to the host machine?

I'm working locally (within Docker for Mac) on a Kubernetes cluster that will eventually be deployed to the cloud. We plan to use a database service in that environment. To simulate that, I'd like to have the services in the cluster connect to a database running outside the cluster on my laptop.
Can I do that? Here's what I thought I'd try.
Define a Service with type: ExternalName and externalName: somedb.local
Add 127.0.0.1 somedb.local to /etc/hosts on the laptop
Is that correct? Is there a better way?
After talking with some colleagues, I found a solution.
In Docker for Mac, host.docker.internal points to the host machine, and that lets me connect to the db running there, even from containers running in the K8s cluster.
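One way to keep the in-cluster configuration stable is an ExternalName Service that resolves to that address, so workloads keep using a fixed name. A minimal sketch, assuming somedb is the name the applications already use (ExternalName only returns a CNAME, so clients still specify the port themselves):
apiVersion: v1
kind: Service
metadata:
  name: somedb
spec:
  type: ExternalName
  externalName: host.docker.internal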
You can have a Service pointing to an address outside of your SDN by creating an Endpoints object with a matching name.
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: my-namespace
spec:
  # no selector: the Endpoints object below is managed manually
  ports:
  - name: exporter-3306
    port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
  namespace: my-namespace
subsets:
- addresses:
  - ip: 10.42.253.110
  ports:
  - name: exporter-3306
    port: 3306
You may also add host overrides in your Deployment definition:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      ...
      hostAliases:
      - ip: 10.42.253.110
        hostnames:
        - external-db
It seems the Kubernetes docs provide instructions on how to achieve this.
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
A note says endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
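As a quick sanity check from inside the cluster (assuming a busybox image is available to pull), you can verify that the Service name resolves:
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup external-db.my-namespace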

Access Postgres host from Tableau using kubernetes cluster as a kind of router

Scenario:
Tableau application;
Postgres on a cloud;
Kubernetes on another cloud, running an application based on Alpine image (different cloud than Postgres).
What I need:
Access Postgres from Tableau using Kubernetes as a kind of router.
So I need to send a request from Tableau to my Kubernetes cluster, my Kubernetes cluster needs to redirect the request to my Postgres host, Postgres must answer back to my Kubernetes cluster, and after that my Kubernetes cluster must send the answer from Postgres to Tableau.
Important restrictions:
Tableau can access my Kubernetes cluster but cannot access my Postgres host directly;
My Kubernetes cluster can access my Postgres host.
Next steps
Now I was able to make it work by using Thomas' answer, with the following code:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
  - port: 5432
    targetPort: 5432
    nodePort: 30004
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 111.111.111.111 # <-- need to change this to a hostname
  ports:
  - port: 5432
Everything works fine with a numerical IP, but I need to put my Postgres DNS name there instead, something like:
subsets:
- addresses:
  - ip: mypostgres.com
  ports:
  - port: 5432
You can achieve this by creating a Service object without selectors and then manually creating the Endpoints for it. The Service needs to be exposed externally, either via the NodePort or the LoadBalancer type:
apiVersion: v1
kind: Service
metadata:
  name: my-service # Name of the service must match the name of the endpoints
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007
Services don't link to pods directly; there is another object in between called Endpoints. Because of this, you are able to define the endpoints manually.
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service # Name of the Endpoints object must match the name of the service
subsets:
- addresses:
  - ip: 172.217.212.100 # This is the IP of the endpoint that the service will forward connections to.
  ports:
  - port: 80
Since you are going to expose your Postgres instance, some security measures have to be taken in order to secure it, e.g. IP whitelisting.
For more reading, please visit Services without selectors: https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
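With the Service and Endpoints above in place, an external client such as Tableau connects to any cluster node on the NodePort; for example with psql (the node address and user are placeholders):
psql -h <node-ip> -p 30007 -U postgres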

How to access a pod by its hostname from within another pod of the same namespace?

Is there a way to access a pod by its hostname?
I have a pod with hostname my-pod-1 that needs to connect to another pod with hostname my-pod-2.
What is the best way to achieve this without services?
Based on your description, a headless Service is what you want. With a headless Service whose name matches the pods' spec.subdomain, you can reach a pod at <pod-hostname>.<service-name>.<namespace>.svc.cluster.local.
Or access the pod by its pod IP address.
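A minimal sketch of that approach, assuming a headless Service named my-pods and pods that set spec.hostname and spec.subdomain to match it (all names here are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-pods
spec:
  clusterIP: None            # headless service
  selector:
    app: my-pods
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-2
  labels:
    app: my-pods
spec:
  hostname: my-pod-2
  subdomain: my-pods         # must match the headless service name
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
my-pod-1 can then reach this pod at my-pod-2.my-pods.<namespace>.svc.cluster.local.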
In order to connect from one pod to another by name (and not by IP),
replace the other pod's IP with the name of a Service that points to it.
for example,
If my-pod-1 (172.17.0.2) is running rabbitmq,
And my-pod-2 (172.17.0.4) is running a rabbitmq consumer (let's say in python).
In my-pod-2 instead of running:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py", "-p", "5672", "-s", "172.17.0.2"]
Use:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py", "-p", "5672", "-s", "rabbitmq-svc"]
Where rabbitmq_service.yaml is,
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-svc
  namespace: rabbitmq-ns
spec:
  selector:
    app: rabbitmq
  ports:
  - name: rabbit-main
    protocol: TCP
    port: 5672
    targetPort: 5672
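Note that the Service above lives in the rabbitmq-ns namespace; if the consumer pod runs in a different namespace, the short name rabbitmq-svc will not resolve and the namespace-qualified name is needed, e.g.:
args: ["/app/consumer.py", "-p", "5672", "-s", "rabbitmq-svc.rabbitmq-ns"]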

How to use Kubernetes endpoint object?

I have a MongoDB server hosted outside GCP, and I want to connect to it using a Kubernetes Endpoints-based Service, as shown here: https://www.youtube.com/watch?v=fvpq4jqtuZ8. How can I do that? Can you write a sample YAML file for it?
Use a static Kubernetes Service when you have the internal IP and port number of the externally hosted service.
kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
As there are no Pod selectors for this Service, no Endpoints will be created automatically, so we create an Endpoints object manually.
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
- addresses:
  - ip: 10.240.0.4 # Replace with your IP
  ports:
  - port: 27017
Make sure that the Service and the Endpoints object have the same name (for example, mongo).
If the IP address changes in the future, you can update the Endpoints object with the new IP address, and your applications won't need to make any changes (see: mapping external services).
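To confirm the wiring, you can check that the Service and the manually created Endpoints line up (names as in the example above):
kubectl get service mongo
kubectl get endpoints mongo
In-cluster applications can then use a connection string such as mongodb://mongo:27017 (or mongodb://mongo.<namespace>:27017 from another namespace).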

Kubernetes - Get the public URL of a service into my ConfigMap

I have a Kubernetes Service whose URL I'm getting in my minikube installation using:
minikube service postgres --url
which returns a URL like: http://192.xxx.xx.xxxx:3xx62
However, I want this URL to be used in my ConfigMap as the pghost and pgport, so for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: firsttest
  labels:
    app: firsttest
data:
  pgdatabase: "first_test"
  pguser: "postgresql_user"
  pghost: ""
  pgport: ""
  pgpool_size: "5"
  auth_user: "unique"
And the service looks like:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  ports:
  - name: pgql
    port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    app: postgres
Is this possible?
Any help appreciated.
Thanks.
I will just compile a list of some crucial things about interacting with Services:
You can create the Service object before creating the pods it selects; that is fine.
To access the service from inside the cluster, you should use its ClusterIP.
You cannot use variables such as the name of a service to be substituted automatically, but you can use the FQDN of the service, and its ClusterIP, which is static and remains available until you delete the Service.
The Service's IP and port are what you need to access the pods behind it.
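Putting those points together, a sketch of the ConfigMap filled in with the Service's in-cluster DNS name instead of the minikube NodePort URL (assuming the postgres Service from the question lives in the default namespace):
apiVersion: v1
kind: ConfigMap
metadata:
  name: firsttest
  labels:
    app: firsttest
data:
  pgdatabase: "first_test"
  pguser: "postgresql_user"
  pghost: "postgres.default.svc.cluster.local"  # or simply "postgres" from the same namespace
  pgport: "5432"                                # the Service port, not the NodePort
  pgpool_size: "5"
  auth_user: "unique"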