Connect to local database from inside minikube cluster - kubernetes

I'm trying to access a MySQL database hosted inside a Docker container on localhost from inside a minikube pod, with little success. I tried the solution described in "Minikube expose MySQL running on localhost as service" but to no effect. I have modelled my solution on the service we use on AWS, but it does not appear to work with minikube. My service reads as follows:
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  type: ExternalName
  externalName: 172.17.0.2
...where I try to connect to my database from inside a pod using "mysql-db-svc" on port 3306, but to no avail. If I try to curl the address "mysql-db-svc" from inside a pod, it cannot resolve the host name.
Can anybody please advise a frustrated novice?

I'm using Ubuntu with minikube, and my database runs outside of minikube inside a Docker container and can be accessed from localhost at 172.17.0.2. My Kubernetes service for my external MySQL container reads as follows:
kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  type: ExternalName
  externalName: 10.0.2.2
Then inside my .env for the project, my DB_HOST is defined as
mysql-db-svc.external.svc
...that is, the name of the service "mysql-db-svc", followed by its namespace "external", followed by "svc".
Hope that makes sense.
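If you want to verify that the name resolves from inside the cluster, a quick hedged check with a throwaway pod (the busybox image is an arbitrary choice):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup mysql-db-svc.external.svc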

If I'm not mistaken, you should also create an Endpoints object for this service, as it is external.
In your case, the Endpoints definition should be as follows:
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: mysql-db-svc
namespace: external
subsets:
- addresses:
- ip: "10.10.1.1"
ports:
port: 3306
You can read about external sources in the "Defining a Service" section of the Kubernetes docs.
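Note that for these Endpoints to be picked up, the Service they pair with has to be a selector-less ClusterIP Service with the same name and namespace, not type ExternalName (which ignores Endpoints). A minimal sketch of such a companion Service:

kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306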

This is because your service type is ExternalName, which only fits cloud environments such as AWS and GKE. To run your service locally, change the service type to NodePort, which assigns a static port in the 30000-32767 range. If you want to pin the port yourself, so that minikube won't pick a random one for you, set it in the ports section of your service definition, like this: nodePort: 32002.
Also, I don't see any selector pointing to your MySQL deployment in your service definition. Include the corresponding selector key-value pair (e.g. app: mysql-server) under the spec section; that selector must match the selector you have defined in your MySQL deployment definition.
So your service definition should look like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  selector:
    app: mysql-server
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
      nodePort: 32002
  type: NodePort
After you deploy the service, you can reach MySQL at {minikube ip}:32002. Replace {minikube ip} with the actual minikube IP (printed by the minikube ip command).
Alternatively, you can get the access URL for the service with the following command:
minikube service <SERVICE_NAME> --url
Replace <SERVICE_NAME> with the actual name of the service; in your case it is mysql-db-svc.
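As a quick hedged check from the host, assuming the mysql client is installed and the credentials are your own:

mysql -h "$(minikube ip)" -P 32002 -u root -p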

I was also facing a similar problem, where I needed to connect a pod inside minikube to a SQL Server container on the host machine.
I noticed that minikube is itself a container in the local Docker environment, and during its setup it creates a local Docker network named minikube. I connected my local SQL Server container to this minikube Docker network using docker network connect minikube <SQL Server container name> --ip=<any valid IP on the minikube network subnet>.
I was then able to access the local SQL Server container from pods using its IP address on the minikube network.
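A hedged sketch of the commands; the container name and IP are assumptions, and the subnet should come from inspecting the network first:

# Find the minikube network's subnet (often 192.168.49.0/24, but check)
docker network inspect minikube --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# Attach the SQL Server container to it with a free IP from that subnet
docker network connect --ip 192.168.49.100 minikube my-sqlserver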

The above solutions somehow didn't work for me. What finally worked is the Terraform configuration below:
resource "kubernetes_service" "host" {
metadata {
name = "minikube-host"
labels = {
app = "minikube-host"
}
namespace = "default"
}
spec {
port {
name = "app"
port = 8082
}
cluster_ip = "None"
}
}
resource "kubernetes_endpoints" "host" {
metadata {
name = "minikube-host"
namespace = "default"
}
subset {
address {
// This ip comes from command: minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1'
ip = "192.168.65.2"
}
port {
name = "app"
port = 8082
}
}
}
Then I can access the local service (e.g. Postgres or MySQL) on my Mac from k8s pods, using the host minikube-host.default.svc.cluster.local.
The plain YAML version and more details can be found in the linked issue.
Details on the host.minikube.internal host access can be found here.
Alternatively, the raw IP address from the command minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1' (e.g. "192.168.65.2") can be used directly as the service host in code, instead of 127.0.0.1/localhost; in that case none of the configuration above is required.
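For reference, a hedged YAML sketch equivalent to the Terraform above (same names, port, and IP; substitute the IP your own minikube ssh command prints):

apiVersion: v1
kind: Service
metadata:
  name: minikube-host
  namespace: default
  labels:
    app: minikube-host
spec:
  clusterIP: None
  ports:
    - name: app
      port: 8082
---
apiVersion: v1
kind: Endpoints
metadata:
  name: minikube-host
  namespace: default
subsets:
  - addresses:
      - ip: 192.168.65.2
    ports:
      - name: app
        port: 8082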

As an add-on to @Crou's answer from 2018: in 2022, the Kubernetes docs say ExternalName takes a DNS name string, not an address. So, in case ExternalName doesn't work, you can also use the simpler option of services without selectors.
You can also refer to this Google Cloud Tech video for how the services-without-selectors concept works.

Related

Kubernetes: is there any way to get headless service endpoint info in container environment vars

I used Cloud Foundry a lot previously. There, when an app is bound to a service, all the service connection info is injected into the app's environment variables. In the Kubernetes world, I think this is the same for a normal service.
In my case, I'm trying to use a headless service to describe an external PostgreSQL instance, using the service YAML below.
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "postgresql"
spec:
  clusterIP: None
  ports:
    - protocol: "TCP"
      port: 5432
      targetPort: 5432
      nodePort: 0
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "postgresql"
subsets:
  - addresses:
      - ip: "10.29.0.123"
    ports:
      - port: 5432
After deploying the headless service to the cluster, the container does not have any environment variables for it; I guess that is because clusterIP is None.
The apps can reach it via DNS as postgresql:5432, but I wonder why Kubernetes does not inject the headless service and its endpoints into the app's environment variables, so that the app can get both IP and port from them.
Is there any way to do so?
Thanks!
kube-proxy does not manage headless services; a request made to such a service is forwarded directly to the endpoints behind it.
Kubernetes does not really acknowledge these endpoints (cf. https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
To pass the IP of your PostgreSQL DB, you will have to add an environment variable to your deployment, like this:
env:
  - name: POSTGRESQL_ADDR
    value: "10.29.0.123:5432"
I found the answer to the question. For a headless service, the service info will not show up in the pod's environment variables. If the service info is to be available in environment variables, you need to use a service without selectors: simply remove the clusterIP: None line.
The client pod can then use both DNS and environment variables for external service discovery.
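For a non-headless service named postgresql, Kubernetes injects variables following its documented naming convention, which a client pod can read directly. Note the variables are only populated for pods created after the service exists. A small sketch from inside such a pod:

# Injected by the kubelet for a service named "postgresql":
echo "$POSTGRESQL_SERVICE_HOST:$POSTGRESQL_SERVICE_PORT"
# Docker-link-style variables are injected too, e.g.:
echo "$POSTGRESQL_PORT_5432_TCP_ADDR"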

Kubernetes / Metallb single entrypoint

I'm building a K8s cluster for a school project.
It's bare metal and uses MetalLB as a load balancer.
Each service runs in a separate pod:
Nginx
WordPress
phpMyAdmin
MySQL (MariaDB)
In the phpMyAdmin config file, I need to point to my MySQL server with something like this:
$cfg['Servers'][$i]['host'] = "mysql-server-name";
I've tried to use the node's IP:
kubectl get node -o=custom-columns='DATA:status.addresses[0].address' | sed -n 2p
adding the port :3306, but I realised that none of my services could be reached through the browser with this method.
For instance, the node's IP:5050 should bring up my WordPress, but it doesn't.
Is there any way to get a single IP that I can use to make my pods communicate with each other?
I should add that each service works on its own when I use the svc IP instead of the node's.
Here's the ConfigMap I use for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.200
The reason the node IP doesn't expose your application to other apps is that the pods in the Kubernetes cluster don't listen to requests coming to the node by default. In other words, the port on the pod is not connected to the port on the node.
The Service resource is what makes that connection.
Services have different types. A service of type ClusterIP assigns the app an IP that is internal to the cluster. If you don't want to access your MySQL database directly from the internet, this is what you would want.
Here is an example ClusterIP service for your project:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: metallb-system
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3306
The selector selects pods that carry the label app=mysql.
port is the port the service will listen on.
targetPort is the port MySQL is listening on.
When you create the service, you can find its IP by running this command:
kubectl get services -n metallb-system
Under the CLUSTER-IP column, note the IP of the service you created.
So in this case, if MySQL is listening on 3306, you can reach it through this service on the service IP, on port 80.
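With cluster DNS you can also skip the raw IP and reference the service by name in the phpMyAdmin config; a hedged sketch mirroring the line from the question (the port is 80 here because the service maps 80 to 3306):

$cfg['Servers'][$i]['host'] = "mysql-service.metallb-system.svc.cluster.local";
$cfg['Servers'][$i]['port'] = "80";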
If you want to expose your WordPress app to the internet, use either the NodePort or LoadBalancer service type. Here is the reference for service types.
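For the WordPress side, a hedged sketch of a LoadBalancer service; MetalLB would hand it an external IP from the 192.168.99.100-192.168.99.200 pool configured above (the app: wordpress label and the ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
spec:
  type: LoadBalancer
  selector:
    app: wordpress
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80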

how to give service name and port in configmap yaml?

I have a service (ClusterIP) like the following, which exposes the ports of a backend pod:
apiVersion: v1
kind: Service
metadata:
  name: fsimulator
  namespace: myns
spec:
  type: ClusterIP
  selector:
    application: oms
  ports:
    - name: s-port
      port: 9780
    - name: b-port
      port: 8780
The frontend pod should be able to connect to the backend pod using the service. Should we replace the hostname with the service name to connect from the frontend pod to the backend pod?
I have to supply the service name and port to the frontend pod's container through environment variables.
The environment variables are set using a ConfigMap.
Is it enough to give the service name fsimulator as the hostname to connect to?
How do I refer to the service if it is created inside a namespace?
Thanks
Check out this documentation. The internal service PORT/IP pairs for active services are indeed passed into containers by default.
As the documentation also says, it is possible (and recommended) to use a DNS cluster add-on for service discovery. Accessing service.namespace from outside the namespace (or just service from inside it) resolves to the correct service route. This is usually the right path to take.
Built-in service discovery is a huge perk of using Kubernetes; use the available tools if at all possible!
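To answer the ConfigMap part concretely, a hedged sketch (the ConfigMap name and key names are made up; the service name, namespace, and ports come from your definition above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
  namespace: myns
data:
  BACKEND_HOST: fsimulator.myns.svc.cluster.local
  BACKEND_S_PORT: "9780"
  BACKEND_B_PORT: "8780"

The frontend pod can then consume these via envFrom, or via individual env entries referencing the ConfigMap.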

Calling an external service from within Minikube

I have a service (/deployment/pod) running in my minikube (installed on my Mac) that needs to call an external HTTP service running directly on my Mac (i.e. outside minikube). The domain name of that external service is defined in my Mac's /etc/hosts file. Yet my service within minikube cannot call that external service. Any idea what I need to configure, and where? Many thanks. C
Create Endpoints that forward traffic to your desired external IP address (your local machine). You can connect using the Endpoints directly, but according to Google Cloud best practice (doc) you should access them through a Service.
Create your Endpoints:
kind: Endpoints
apiVersion: v1
metadata:
  name: local-ip
subsets:
  - addresses:
      - ip: 10.240.0.4 # IP of your desired endpoint
    ports:
      - port: 27017 # port that you want to access
Then create your Service:
kind: Service
apiVersion: v1
metadata:
  name: local-ip
spec:
  type: ClusterIP
  ports:
    - port: 27017
      targetPort: 27017
Now you can call the external HTTP service using the Service name, in this case local-ip, like any other internal service of minikube.
Because your minikube is running in a virtual machine on your laptop, you just need to minikube ssh into that machine and add the address of your external service to the /etc/hosts file of that virtual machine.
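A hedged one-liner for that; the IP and hostname are assumptions to be replaced with your own, and note the entry does not survive a restart of the minikube VM:

minikube ssh "echo '192.168.64.1 my-external-service.local' | sudo tee -a /etc/hosts"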

Kubernetes ExternalName, exposing 10.0.2.2

I have created a mongo Service as follows:
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  type: ExternalName
  externalName: 10.0.2.2
  ports:
    - port: 27017
When starting the cluster with the following command:
minikube start --vm-driver=virtualbox
I would expect the VirtualBox loopback address (10.0.2.2) to be mapped to the local MongoDB instance that runs on my localhost machine.
However, when logging in to a pod and trying to ping 10.0.2.2, I experience 100% packet loss.
Is there something I'm missing here?
So, if you are trying to expose MongoDB as an external service and basically map mongo-svc to 10.0.2.2, then the first issue is that externalName must be the fully qualified domain name of the actual service, as per the Kubernetes docs:
"ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx..."
The second issue is that the whole point of an external service is to abstract the external dependency behind a stable name. Pods connect to the ExternalName service through mongo-svc.default.svc.cluster.local (use default if your mongo-svc is in the default namespace, which seems to be the case based on your YAML) rather than through the external service's own name and location, so that you can modify the service definition and point it at another service if needed.
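Given that, a hedged alternative that avoids ExternalName entirely: a selector-less Service plus Endpoints pointing at the VirtualBox host address, the same pattern used in the answers above (this only works if 10.0.2.2 is actually reachable from pods):

apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  ports:
    - port: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongo-svc
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - port: 27017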