Kubernetes ExternalName, exposing 10.0.2.2 - kubernetes

I have created a mongo Service as follows:
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  type: ExternalName
  externalName: 10.0.2.2
  ports:
  - port: 27017
When starting the cluster with the following command:
minikube start --vm-driver=virtualbox
I would expect the VirtualBox NAT address (10.0.2.2), which maps back to the host, to reach the local MongoDB instance that runs on my localhost machine.
However, when logging in to a pod and trying to ping 10.0.2.2, I experience 100% packet loss.
Is there something I'm missing here?

So if you are trying to expose MongoDB as an external service and basically map mongo-svc to 10.0.2.2, then the first issue is that externalName must be the fully qualified domain name of the actual service, as per the Kubernetes docs:
“ExternalName accepts an IPv4 address string, but only as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx.”
The second issue is that the whole point of an ExternalName service is to abstract the external service: pods connect to it through mongo-svc.default.svc.cluster.local (use default if your mongo-svc is in the default namespace, which seems to be the case based on your YAML) rather than through the external service's actual name and location, so that you can later modify the service definition and point it at another service if needed.
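Since externalName cannot hold a raw IP, a common alternative is a selector-less Service plus a manually managed Endpoints object. A minimal sketch, assuming the MongoDB host is reachable from the cluster at 10.0.2.2 (names and IP are illustrative for this scenario, not a verified working config for your driver):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  ports:
  - port: 27017
    targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongo-svc    # must match the Service name
subsets:
- addresses:
  - ip: 10.0.2.2     # the external MongoDB host
  ports:
  - port: 27017
```

Pods can then reach MongoDB at mongo-svc:27017, and only the Endpoints object needs updating if the external address changes.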

Related

How to access ExternalName service in kubernetes using minikube?

I understand that a service of type ExternalName will point to a specified deployment that is not exposed externally, using the specified external name as its DNS name. I am using minikube on my local machine with the docker driver. I created a deployment using a custom image. When I created a service with the default type (ClusterIP) and with LoadBalancer for that deployment, I was able to access it after port-forwarding to the local IP address. This was possible for service type ExternalName as well, but it was accessible via the IP address and not the specified external name.
According to my understanding, a service of type ExternalName should be accessible via the specified external name. But I wasn't able to do that. Can anyone say how to access an ExternalName service, and whether my understanding is correct?
This is the externalName.yaml file I used.
apiVersion: v1
kind: Service
metadata:
  name: k8s-hello-test
spec:
  selector:
    app: k8s-yaml-hello
  ports:
  - port: 3000
    targetPort: 3000
  type: ExternalName
  externalName: k8s-hello-test.com
After port-forwarding with kubectl port-forward service/k8s-hello-test 3000:3000, the deployment was accessible at http://127.0.0.1:3000.
But even after adding it to the /etc/hosts file, it cannot be accessed via http://k8s-hello-test.com.
According to my understanding service of type ExternalName should be
accessed when using the specified external name. But I wasn't able to
do it. Can anyone say how to access it using its external name?
Your understanding isn't quite right: an ExternalName service is for making outbound connections to external services. For example, if you are using a third-party geolocation API such as https://findmeip.com, you can leverage an ExternalName service.
An ExternalName Service is a special case of Service that does not
have selectors and uses DNS names instead. For more information, see
the ExternalName section later in this document.
For example
apiVersion: v1
kind: Service
metadata:
  name: geolocation-service
spec:
  type: ExternalName
  externalName: api.findmeip.com
So your application can connect to geolocation-service, which forwards requests to the external DNS name specified in the service.
As an ExternalName service has no selectors, you cannot use port-forwarding with it, since port-forwarding connects to a Pod and forwards requests to that Pod.
Read more at : https://kubernetes.io/docs/concepts/services-networking/service/#externalname
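To show how a workload consumes such a service, here is a sketch of a client Pod that refers only to the in-cluster name geolocation-service; the image and command are illustrative assumptions, not part of the original answer:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: geolocation-client
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: curlimages/curl   # illustrative image choice
    # Inside the cluster, geolocation-service resolves via a DNS CNAME
    # record to api.findmeip.com, so no IP or external hostname appears here.
    command: ["curl", "-s", "http://geolocation-service/"]
```

Because ExternalName works at the DNS level (a CNAME), the target API must accept requests addressed to its own hostname; APIs that validate the Host header may need it set explicitly.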

Mosquitto Broker - DNS name instead of IP address for MQTT clients to use

I am able to get the eclipse-mosquitto broker up and running, with the MQTT clients able to talk to the broker using the broker's IP address. However, as I am running these on Kubernetes, the broker IP keeps changing on restart. I would like to enable a DNS name for the broker so the clients can use a broker name instead of the IP. CoreDNS is running by default in Kubernetes.
Any suggestions on what can be done ?
$ nslookup kubernetes.default
Server: 10.43.0.10
Address: 10.43.0.10:53
** server can't find kubernetes.default: NXDOMAIN
** server can't find kubernetes.default: NXDOMAIN
You can achieve that using a headless service. You create one by setting the clusterIP field in the service spec to None. Once you do that, instead of returning a single DNS A record for the service IP, the DNS server returns multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment.
With this, your client can perform a single DNS A-record lookup to fetch the IPs of all the pods that are part of the service. A headless service is also often used as a service-discovery mechanism.
apiVersion: v1
kind: Service
metadata:
  name: your-headless-service
spec:
  clusterIP: None # <-- This makes the service headless!
  selector:
    app: your-mosquito-broker-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
You are also able to resolve the DNS name with a regular service. The difference is that with a headless service you talk to the pods directly, instead of having the service act as a load balancer or proxy.
Resolving the service through DNS is easy and follows this pattern:
backend-broker.default.svc.cluster.local
Here backend-broker corresponds to the service name, default stands for the namespace the service is defined in, and svc.cluster.local is a configurable cluster-domain suffix used in all cluster-local service names.
Note that if your client and broker are in the same namespace, you can omit the svc.cluster.local suffix and the namespace. You then refer to the service simply as:
backend-broker
I highly encourage you to read more about DNS in Kubernetes.
All,
Thanks for answering the query, especially Thomas for the code pointers. With your suggestions, once I created a Service for the pod, I was able to get DNS working, as CoreDNS was already running. After this I was also able to use the hostname in the MQTT broker.
opts.AddBroker(fmt.Sprintf("tcp://mqtt-broker:1883"))
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-02-01T19:08:46Z"
  labels:
    app: ipc
  name: mqtt-broker
  namespace: default
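The snippet above cuts off after the metadata; for completeness, a minimal spec for such a broker Service might look like the following (the selector label is an assumption inferred from the labels shown above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  namespace: default
spec:
  selector:
    app: ipc           # assumed to match the broker pod's labels
  ports:
  - name: mqtt
    protocol: TCP
    port: 1883         # standard MQTT port, matching tcp://mqtt-broker:1883
    targetPort: 1883
```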
BTW, I wasn't able to get the headless service working; I was getting the error below, so I continued with ClusterIP itself plus the exposed MQTT port 1883. Any suggestions, please?
`services "mqtt-broker" was not valid:`
`spec.clusterIPs[0]: Invalid value: []string{"None"}: may not change once set`

how to give service name and port in configmap yaml?

I have a service (ClusterIP) like the following, which exposes the ports of a backend pod.
apiVersion: v1
kind: Service
metadata:
  name: fsimulator
  namespace: myns
spec:
  type: ClusterIP
  selector:
    application: oms
  ports:
  - name: s-port
    port: 9780
  - name: b-port
    port: 8780
The frontend pod should be able to connect to the backend pod using the service. Should we use the service name as the hostname to connect from the frontend pod to the backend pod?
I have to supply the service name and port to the frontend pod's container through environment variables.
The environment variables are set using a ConfigMap.
Is it enough to give the service name fsimulator as the hostname to connect to?
How do I refer to the service, given that it is created inside a namespace?
Thanks
Check out this documentation. The internal service PORT/IP pairs for active services are indeed passed into the containers by default.
As the documentation also says, it is possible (and recommended) to use a DNS cluster add-on for service discovery. Accessing service.namespace from another namespace (or just service from inside the same namespace) will resolve to the correct service route. This is usually the right path to take.
Built-in service discovery is a huge perk of using Kubernetes, use the available tools if at all possible!
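To answer the ConfigMap part of the question concretely, a minimal sketch might look like this (the ConfigMap name and key names are illustrative assumptions; the service name and ports come from the YAML above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
  namespace: myns
data:
  # Fully qualified form; from inside myns, plain "fsimulator" also resolves.
  BACKEND_HOST: fsimulator.myns.svc.cluster.local
  BACKEND_S_PORT: "9780"
  BACKEND_B_PORT: "8780"
```

The frontend container can then consume these keys as environment variables via envFrom.configMapRef in its pod spec.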

Calling an external service from within Minikube

I have a service (/deployment/pod) running in my Minikube (installed on my Mac) that needs to call an external http service that runs directly on my Mac (i.e. outside Minikube). The domain name of that external service is defined into my Mac /etc/hosts file. Yet, my service within Minikube cannot call that external service. Any idea what I need to configure where? Many thanks. C
Create an Endpoints object that will forward traffic to your desired external IP address (your local machine). You can connect using the Endpoints directly, but according to Google Cloud best practice (doc), it is better to access it through a Service.
Create your Endpoints:
kind: Endpoints
apiVersion: v1
metadata:
  name: local-ip
subsets:
- addresses:
  - ip: 10.240.0.4 # IP of your desired endpoint
  ports:
  - port: 27017 # port that you want to access
Then create your Service:
kind: Service
apiVersion: v1
metadata:
  name: local-ip
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
Now you can call the external HTTP service using the Service name, in this case local-ip, like any other internal service of minikube.
Because your minikube is running in a virtual machine on your laptop, you just need to minikube ssh into that machine and add the address of your external service to the /etc/hosts file of that virtual machine.

Connect to local database from inside minikube cluster

I'm trying to access a MySQL database hosted inside a Docker container on localhost from inside a minikube pod, with little success. I tried the solution described in "Minikube expose MySQL running on localhost as service", but to no effect. I have modelled my solution on the service we use on AWS, but it does not appear to work with minikube. My service reads as follows:
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  type: ExternalName
  ExternalName: 172.17.0.2
...where I try to connect to my database from inside a pod using mysql-db-svc on port 3306, but to no avail. If I try to curl the address mysql-db-svc from inside a pod, it cannot resolve the hostname.
Can anybody please advise a frustrated novice?
I'm using Ubuntu with minikube, and my database runs outside of minikube inside a Docker container, reachable from localhost at 172.17.0.2. My Kubernetes service for my external MySQL container reads as follows:
kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  type: ExternalName
  externalName: 10.0.2.2
Then, inside my project's .env, DB_HOST is defined as
mysql-db-svc.external.svc
...that is, the name of the service (mysql-db-svc) followed by its namespace (external) and svc.
Hope that makes sense.
If I'm not mistaken, you should also create an Endpoints object for this service, as it's external.
In your case, the Endpoints definition should be as follows:
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
subsets:
- addresses:
  - ip: "10.10.1.1"
  ports:
  - port: 3306
You can read about external sources in the Kubernetes "Defining a Service" docs.
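Note that for those Endpoints to be used, the companion Service must be a selector-less ClusterIP Service (not type ExternalName) whose name and namespace match the Endpoints. A hedged sketch, with the port taken from the Endpoints above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-svc   # must match the Endpoints object's name
  namespace: external
spec:
  ports:               # no selector: Kubernetes routes to the manual Endpoints
  - port: 3306
    targetPort: 3306
```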
This is because your service type is ExternalName, which expects a DNS name and is typically used for managed services in cloud environments such as AWS and GKE. To run your service locally, change the service type to NodePort, which will assign a static port in the 30000-32767 range. If you want to assign a static port yourself, so that minikube won't pick a random one for you, define it in your service definition under the ports section, like this: nodePort: 32002.
Also, I don't see any selector pointing to your MySQL deployment in your service definition. Include the corresponding selector (e.g. app: mysql-server) under the spec section of your service definition. That selector should match the selector you have defined in the MySQL deployment definition.
So your service definition should be like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  selector:
    app: mysql-server
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
    nodePort: 32002
  type: NodePort
After you deploy the service, you can reach the MySQL service at {minikube ip}:32002. Replace {minikube ip} with the actual minikube IP.
Alternatively, you can get the access URL for the service with the following command:
minikube service <SERVICE_NAME> --url
Replace <SERVICE_NAME> with the actual name of the service; in your case it is mysql-db-svc.
I was also facing a similar problem, where I needed to connect a pod inside minikube to a SQL Server container on the machine.
I noticed that minikube is itself a container in the local Docker environment, and during its setup it creates a local Docker network named minikube. I connected my local SQL Server container to this minikube Docker network using docker network connect minikube <SQL Server Container Name> --ip=<any valid IP on the minikube network subnet>.
I was then able to access the local SQL Server container using its IP address on the minikube network.
The above solutions somehow didn't work for me. What finally worked is the following Terraform configuration:
resource "kubernetes_service" "host" {
  metadata {
    name = "minikube-host"
    labels = {
      app = "minikube-host"
    }
    namespace = "default"
  }
  spec {
    port {
      name = "app"
      port = 8082
    }
    cluster_ip = "None"
  }
}
resource "kubernetes_endpoints" "host" {
  metadata {
    name      = "minikube-host"
    namespace = "default"
  }
  subset {
    address {
      // This IP comes from: minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1'
      ip = "192.168.65.2"
    }
    port {
      name = "app"
      port = 8082
    }
  }
}
Then I can access the local service (e.g. Postgres or MySQL) on my Mac from k8s pods via the host minikube-host.default.svc.cluster.local.
The plain YAML version and more details can be found in the issue.
Details on the minikube host alias host.minikube.internal can be found here.
On the other hand, the raw IP address from the command minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1' (e.g. "192.168.65.2") can be used directly as the service host in code, instead of 127.0.0.1/localhost, in which case none of the above configuration is required.
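For readers not using Terraform, the same setup can be expressed as plain YAML. A hedged equivalent sketch of the Terraform above (the host IP is the one obtained from the minikube ssh command and may differ on your machine):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: minikube-host
  namespace: default
  labels:
    app: minikube-host
spec:
  clusterIP: None
  ports:
  - name: app
    port: 8082
---
apiVersion: v1
kind: Endpoints
metadata:
  name: minikube-host
  namespace: default
subsets:
- addresses:
  - ip: 192.168.65.2   # from: minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1'
  ports:
  - name: app
    port: 8082
```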
As an add-on to @Crou's answer from 2018: in 2022, the Kubernetes docs say ExternalName takes a string, not an address. So, in case ExternalName doesn't work, you can also use the simpler option of services without selectors.
You can also refer to this Google Cloud Tech video for how the services-without-selectors concept works.