How to access ExternalName service in Kubernetes using minikube?

I understand that a service of type ExternalName points to a specified deployment that is not exposed externally, using the specified external name as the DNS name. I am using minikube on my local machine with the Docker driver. I created a deployment using a custom image. When I created a service of the default type (ClusterIP) and of type LoadBalancer for that deployment, I was able to access it after port forwarding to the local IP address. This was also possible for a service of type ExternalName, but only via the IP address, not via the specified external name.
According to my understanding, a service of type ExternalName should be accessible via the specified external name. But I wasn't able to do that. Can anyone say how to access an ExternalName service, and whether my understanding is correct?
This is the externalName.yaml file I used.
apiVersion: v1
kind: Service
metadata:
  name: k8s-hello-test
spec:
  selector:
    app: k8s-yaml-hello
  ports:
    - port: 3000
      targetPort: 3000
  type: ExternalName
  externalName: k8s-hello-test.com
After port forwarding with kubectl port-forward service/k8s-hello-test 3000:3000, the deployment was accessible at http://127.0.0.1:3000.
But even after adding an entry to the /etc/hosts file, it cannot be accessed via http://k8s-hello-test.com.

According to my understanding, a service of type ExternalName should be
accessible via the specified external name. But I wasn't able to
do that. Can anyone say how to access it using its external name?
Your understanding is not quite right: an ExternalName service is for making connections to external services. For example, if you use a third-party geolocation API like https://findmeip.com, you can leverage an ExternalName service.
An ExternalName Service is a special case of Service that does not
have selectors and uses DNS names instead. For more information, see
the ExternalName section later in this document.
For example
apiVersion: v1
kind: Service
metadata:
  name: geolocation-service
spec:
  type: ExternalName
  externalName: api.findmeip.com
So your application can connect to geolocation-service, which forwards requests to the external DNS name specified in the service.
As an ExternalName service does not have selectors, you cannot use port-forwarding with it, since port-forwarding connects to a Pod and forwards requests to it.
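As a quick check (a sketch, assuming the geolocation-service above exists in the default namespace), you can verify from inside the cluster that the Kubernetes DNS server answers with a CNAME pointing at the external name:
# Run a throwaway pod with DNS tools; the lookup should return a
# CNAME from geolocation-service to api.findmeip.com.
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup geolocation-service.default.svc.cluster.local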
Read more at : https://kubernetes.io/docs/concepts/services-networking/service/#externalname

Related

How to redirect traffic from .svc.cluster.local to .svc.k8s.my-domain.com in Kubernetes?

My cluster has its own domain name k8s.my-domain.com.
Originally, when I deployed Dgraph, I hit an issue where its pods cannot talk to each other via dgraph-service.dgraph-namespace.svc.cluster.local.
If they talk to each other via
dgraph-service.dgraph-namespace
dgraph-service.dgraph-namespace.svc.k8s.my-domain.com
it works.
I fixed it by removing the .svc.cluster.local part from the Dgraph YAML and created a pull request at https://github.com/dgraph-io/dgraph/pull/7976.
Today, when deploying Yugabyte, I hit the same issue again. I created a ticket at https://github.com/yugabyte/yugabyte-operator/issues/38 and hopefully the Yugabyte team can fix it.
However, I am not sure this approach is a good one any more. I hope there is something I can do on my side.
Is there a way to redirect from .svc.cluster.local to .svc.k8s.my-domain.com?
Maybe in CoreDNS, so that I don't need to change the original Dgraph or Yugabyte YAML files? Thanks!
UPDATE 1:
Based on #CodeWizard's suggestion, and because I got a warning from my IDE, I tried both versions:
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: k8s.my-domain.com
and
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: cluster.local
After applying either YAML file, however, I still got the same error deploying Yugabyte to my cluster.
You can use ExternalName in your service.
kind: "Service"
apiVersion: "v1"
metadata:
name: "external-mysql-service"
spec:
type: ExternalName
externalName: example.domain.name
selector: {} # The selector field to leave blank.
Using an External Domain Name
Using external domain names makes it easier to manage an external service because you do not have to worry about the external service’s IP addresses changing.
You can use an ExternalName service to direct traffic to an external service.
Using an external domain name service tells the system that the DNS name in the externalName field (example.domain.name in the previous example) is the location of the resource that backs the service.
When a DNS request is made against the Kubernetes DNS server, it returns the externalName in a CNAME record telling the client to look up the returned name to get the IP address.
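Alternatively, to address the CoreDNS idea from the question directly: the rewrite plugin that ships with stock CoreDNS can map one DNS suffix onto another, so you would not have to touch the Dgraph or Yugabyte YAML at all. A minimal sketch, assuming the default kubeadm-style CoreDNS ConfigMap (trailing dots and the surrounding plugin options may need adjusting for your setup):
# kubectl -n kube-system edit configmap coredns
# Add the rewrite rule before the kubernetes plugin block, then let the
# coredns pods restart to pick it up.
.:53 {
    errors
    health
    # Rewrite queries for .svc.cluster.local to the real cluster domain,
    # and rewrite answers back so clients see the name they asked for.
    rewrite name suffix .svc.cluster.local. .svc.k8s.my-domain.com. answer auto
    kubernetes k8s.my-domain.com in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}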

How to get ip/address of service in k8s

I would like to create service A (redis instance) and service B (application).
Application would like to use service A (redis).
How can I automatically get the address/URL of service A inside the k8s cluster, without exposing it to the internet?
Something like:
redis://service-a-url:6379
I don't know which Kubernetes technique I should use.
So for example your redis service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    run: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP
  selector:
    run: redis
The service is of type ClusterIP (if you do not specify a service type in the YAML file, it defaults to ClusterIP), so the service is not accessible from outside the cluster. There are more types of service; find information here: services-kubernetes.
Take a look: connecting-app-service, app-service-redis.
Kubernetes supports two modes of finding a Service - environment variables and DNS.
Kubernetes has a specific DNS cluster addon Service that automatically assigns DNS names to other Services.
Every single Service created in the cluster has its own assigned DNS name. A client Pod's DNS search list will include the Pod's own namespace and the cluster's default domain by default. This is best illustrated by example:
Assume a Service named example in the Kubernetes namespace ns. A Pod running in namespace ns can look up this service by simply doing a DNS query for example. A Pod running in namespace test can look up this service by doing a DNS query for example.ns.
Find more here: Kubernetes DNS-Based Service Discovery, dns-name-service.
You will be able to access your service within the cluster using the following DNS name:
<service>.<namespace>.svc.<zone>
For example: redis.default.svc.cluster.local
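For instance, service B can receive that URL through an environment variable (a minimal sketch; the deployment name, image, and variable name are hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b   # hypothetical application deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
        - name: app
          image: my-app:latest   # hypothetical image
          env:
            # Resolved by the cluster DNS add-on; nothing is exposed
            # to the internet.
            - name: REDIS_URL
              value: redis://redis.default.svc.cluster.local:6379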

Exposing LoadBalancer service in minikube at arbitrary port?

I have a minikube cluster with a running WordPress in one deployment, and MySQL in another. Both of the deployments have corresponding services. The definition for WordPress service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
  type: LoadBalancer
The service works fine, and minikube service gives me a nice URL, with the minikube ip address and a random high port. The problem is that WordPress needs a full URL in the name of the site. I'd rather not change it every single time, and instead have a local DNS name for the cluster.
Is there a way to expose the LoadBalancer on an arbitrary port in minikube? I'll be fine with any port, as long as the port is decided by me and not by minikube itself.
Keep in mind that minikube cannot provide a real load balancer the way different cloud providers do; it merely simulates one by using a simple NodePort service instead.
You can have full control over the port that is used. First of all, you can specify it manually in the NodePort service specification (remember it should be within the default range: 30000-32767):
If you want a specific port number, you can specify a value in the
nodePort field. The control plane will either allocate you that port
or report that the API transaction failed. This means that you need to
take care of possible port collisions yourself. You also have to use a
valid port number, one that’s inside the range configured for NodePort
use.
Your example may look as follows:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30000
  type: NodePort
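With that applied, the node port is fixed, so (assuming the manifest above) WordPress is reachable at a URL that no longer changes between restarts:
# The minikube ip is stable for the lifetime of the cluster, so this
# URL can be set once in the WordPress site configuration.
curl http://$(minikube ip):30000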
You can also change this default range by providing a custom value for the --service-node-port-range flag when starting your kube-apiserver.
If you use a kubernetes cluster set up with the kubeadm tool (minikube also uses it as its default bootstrapper), you need to edit the /etc/kubernetes/manifests/kube-apiserver.yaml file and provide the required flag with your custom port range.
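For example (a sketch; the exact range is your choice, and the kube-apiserver static pod restarts automatically once the manifest is saved):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-apiserver
        # ...existing flags...
        - --service-node-port-range=20000-32767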

how to give service name and port in configmap yaml?

I have a service (ClusterIP) like the following, which exposes the ports of a backend Pod.
apiVersion: v1
kind: Service
metadata:
  name: fsimulator
  namespace: myns
spec:
  type: ClusterIP
  selector:
    application: oms
  ports:
    - name: s-port
      port: 9780
    - name: b-port
      port: 8780
The frontend Pod should be able to connect to the backend Pod using the service. Should we use the service name as the hostname to connect from the frontend Pod to the backend Pod?
I have to supply the service name and port to the frontend Pod's container through environment variables.
The environment variables are set using a ConfigMap.
Is it enough to give the service name fsimulator as the hostname to connect to?
How do I refer to the service if it is created inside a namespace?
Thanks
Check out this documentation. The internal service PORT / IP pairs for active services are indeed passed into the containers by default.
As the documentation also says, it is possible (and recommended) to use a DNS cluster add-on for service discovery. Accessing service.namespace from any namespace will resolve to the correct service route (or just service from inside the same namespace). This is usually the right path to take.
Built-in service discovery is a huge perk of using Kubernetes, use the available tools if at all possible!
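Applied to your case, a minimal sketch could look like this (the ConfigMap name, key names, and frontend image are hypothetical; within myns the short name fsimulator also works, while fsimulator.myns works from any namespace):
apiVersion: v1
kind: ConfigMap
metadata:
  name: fsimulator-config   # hypothetical name
  namespace: myns
data:
  BACKEND_HOST: fsimulator.myns   # "fsimulator" alone also works inside myns
  BACKEND_S_PORT: "9780"
  BACKEND_B_PORT: "8780"
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend   # hypothetical frontend Pod
  namespace: myns
spec:
  containers:
    - name: frontend
      image: my-frontend:latest   # hypothetical image
      envFrom:
        # Injects every key above as an environment variable.
        - configMapRef:
            name: fsimulator-config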

Kubernetes ExternalName, exposing 10.0.2.2

I have created a mongo Service as follows:
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  type: ExternalName
  externalName: 10.0.2.2
  ports:
    - port: 27017
When starting the cluster with the following command:
minikube start --vm-driver=virtualbox
I would expect the Virtualbox loopback address (10.0.2.2) to be mapped to the local Mongo DB instance that runs on my localhost machine.
However, when logging in to a pod and trying to ping 10.0.2.2, I experience 100% packet loss.
Is there something I'm missing here?
So, if you are trying to expose MongoDB as an external service and basically map mongo-svc to 10.0.2.2:
The first issue is that externalName must be the fully qualified domain name of the actual service, as per the Kubernetes docs:
“ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx…”
The second issue is conceptual: the whole point of an ExternalName service is to abstract away the external service's name and location, so that pods connect through mongo-svc.default.svc.cluster.local (use default if your mongo-svc is in the default namespace, which seems to be the case based on your YAML) instead of the external address itself; that way you can modify the service definition to point to another service if needed.
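Since externalName cannot carry a raw IP, the documented alternative when you only have an IP address is a selector-less Service backed by a manually created Endpoints object; a minimal sketch for your case:
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  ports:
    - port: 27017
      targetPort: 27017
---
# With no selector, Kubernetes will not manage Endpoints for this
# Service, so we point it at the host-side IP ourselves.
apiVersion: v1
kind: Endpoints
metadata:
  name: mongo-svc   # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - port: 27017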