k8s ExternalName endpoint not found - but working

I deployed a simple test ingress and an externalName service using kustomize.
The deployment works and I get the expected results, but when describing the test-ingress it shows the error: <error: endpoints "test-external-service" not found>.
It seems like a k8s bug. It shows this error, but everything is working fine.
Here is my deployment:
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform
resources:
- test-ingress.yaml
- test-service.yaml
generatorOptions:
  disableNameSuffixHash: true
test-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: test-external-service
  namespace: platform
spec:
  type: ExternalName
  externalName: "some-working-external-elasticsearch-service"
test-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-external
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache_bypass $http_upgrade;
spec:
  rules:
  - host: testapi.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-external-service
          servicePort: 9200
Here, I connected the external service to a working Elasticsearch server. When browsing to testapi.mydomain.com ("mydomain" was replaced with our real domain, of course), I get the well-known, expected Elasticsearch response:
{
"name" : "73b40a031651",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Xck-u_EFQ0uDHJ1MAho4mQ",
"version" : {
"number" : "7.10.1",
"build_flavor" : "oss",
"build_type" : "docker",
"build_hash" : "1c34507e66d7db1211f66f3513706fdf548736aa",
"build_date" : "2020-12-05T01:00:33.671820Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
So everything is working. But when describing the test-ingress, there is the following error:
test-external-service:9200 (<error: endpoints "test-external-service" not found>)
What is this error? Why am I getting it even though everything is working properly? What am I missing here?

This is how the kubectl describe ingress command works.
The kubectl describe ingress command calls the describeIngressV1beta1 function, which calls the describeBackendV1beta1 function to describe the backend.
As can be seen in the source code, the describeBackendV1beta1 function looks up the endpoints associated with the backend service; if it doesn't find appropriate endpoints, it generates an error message (as in your example):
func (i *IngressDescriber) describeBackendV1beta1(ns string, backend *networkingv1beta1.IngressBackend) string {
	endpoints, err := i.client.CoreV1().Endpoints(ns).Get(context.TODO(), backend.ServiceName, metav1.GetOptions{})
	if err != nil {
		return fmt.Sprintf("<error: %v>", err)
	}
	...
In the Integrating External Services documentation, you can find that ExternalName services do not have any defined endpoints:
ExternalName services do not have selectors, or any defined ports or endpoints, therefore, you can use an ExternalName service to direct traffic to an external service.
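You can see this for yourself: asking for the endpoints of an ExternalName service comes back empty (the exact output may vary slightly between kubectl versions):
kubectl get endpoints test-external-service -n platform
Error from server (NotFound): endpoints "test-external-service" not found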

A Service is a Kubernetes abstraction that uses labels to choose the pods to route traffic to.
Endpoints track the IP addresses of the objects the service sends traffic to; they are populated whenever the service selector matches a pod label.
This is the case with Kubernetes service with the type ClusterIP, NodePort or LoadBalancer.
In your case, you use a Kubernetes service of type ExternalName, whose backend is a server outside of your cluster (or in a different namespace), so no endpoints exist, and Kubernetes displays that error message when you describe the ingress.
Usually we do not create an ingress that points to a service of type ExternalName, because we are not supposed to externally expose a service that is already exposed. A Kubernetes ingress expects a service of type ClusterIP, NodePort or LoadBalancer, which is why you got that unexpected error when describing the ingress.
If you are only reaching that ExternalName from within the cluster, it would be better to skip the ingress and use the service URI instead (test-external-service.<namespace>.svc.cluster.local:9200).
Anyway, if you insist on using the Ingress, you can create a headless service without a selector and then manually create an Endpoints object with the same name as the service. Follow the example here; a sketch is also included below.
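A minimal sketch of that headless-service approach, assuming the external Elasticsearch is reachable at the hypothetical IP 10.20.30.40 (note that an Endpoints object accepts only IP addresses, not hostnames):
apiVersion: v1
kind: Service
metadata:
  name: test-external-service
  namespace: platform
spec:
  clusterIP: None  # headless, no selector
  ports:
  - port: 9200
    targetPort: 9200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: test-external-service  # must match the Service name exactly
  namespace: platform
subsets:
- addresses:
  - ip: 10.20.30.40  # hypothetical IP of the external Elasticsearch server
  ports:
  - port: 9200
With this pair in place, the describer finds an Endpoints object, and kubectl describe ingress no longer reports the error.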

Related

How to redirect traffic from .svc.cluster.local to .svc.k8s.my-domain.com in Kubernetes?

My cluster has its own domain name k8s.my-domain.com.
Originally, when I deployed Dgraph, I hit an issue where its pods cannot talk to each other via dgraph-service.dgraph-namespace.svc.cluster.local.
If they talk to each other via
dgraph-service.dgraph-namespace
dgraph-service.dgraph-namespace.svc.k8s.my-domain.com
it works.
I fixed it by removing the .svc.cluster.local part from the Dgraph yaml and created the pull request at https://github.com/dgraph-io/dgraph/pull/7976.
Today, when I deployed Yugabyte, I met the same issue again. I created a ticket at https://github.com/yugabyte/yugabyte-operator/issues/38 and hopefully the Yugabyte team can fix it.
However, I am not sure this approach is good anymore. I hope I can do something on my side.
Is there a way to redirect from .svc.cluster.local to .svc.k8s.my-domain.com?
Maybe in CoreDNS so that I don't need to change original Dgraph or Yugabyte YAML file? Thanks!
UPDATE 1:
Based on @CodeWizard's suggestion, and because I got a warning from my IDE, I tried both versions:
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: k8s.my-domain.com
and
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: cluster.local
However, after applying these yaml files, I still got the same error deploying Yugabyte to my cluster.
You can use ExternalName in your service.
kind: "Service"
apiVersion: "v1"
metadata:
name: "external-mysql-service"
spec:
type: ExternalName
externalName: example.domain.name
selector: {} # The selector field to leave blank.
Using an External Domain Name
Using external domain names makes it easier to manage an external service because you do not have to worry about the external service’s IP addresses changing.
You can use an ExternalName service to direct traffic to an external service.
Using an external domain name service tells the system that the DNS name in the externalName field (example.domain.name in the previous example) is the location of the resource that backs the service.
When a DNS request is made against the Kubernetes DNS server, it returns the externalName in a CNAME record telling the client to look up the returned name to get the IP address.
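Coming back to the original question about .svc.cluster.local lookups: if you would rather not touch the Dgraph or Yugabyte YAML at all, another option worth trying is a rewrite rule in the CoreDNS Corefile (the coredns ConfigMap in kube-system). This is an untested sketch and assumes your CoreDNS version's rewrite plugin supports suffix rewrites with answer auto:
# Inside the existing .:53 { ... } block of the coredns ConfigMap:
rewrite stop {
    # rewrite queries for *.svc.cluster.local to *.svc.k8s.my-domain.com
    name suffix .svc.cluster.local .svc.k8s.my-domain.com
    # rewrite the names in the responses back, so clients see what they asked for
    answer auto
}
After editing the ConfigMap, restart CoreDNS, e.g. with kubectl -n kube-system rollout restart deployment coredns.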

Exposing simple application in gcloud with Kubernetes

I have a simple app which I am following from a basic tutorial:
https://medium.com/@bhargavshah2011/hello-world-on-kubernetes-cluster-6bec6f4b1bfd
I create a cluster hello-world2-cluster
I "connect" to the cluster using :
gcloud container clusters get-credentials hello-world2-cluster --zone us-central1-c --project strange-vortex-286312
I perform a git clone of the "hello world" project, which is basically some kind of snake game:
$ git clone https://github.com/skynet86/hello-world-k8s.git
$ cd hello-world-k8s/
Next I run "create" on the YAML:
kubectl create -f hello-world.yaml
Deployments and services are created as expected.
Fine! All good!
But.... now what????
What do I do now?
I want to access this app from the outside world. Where is the URL where I can access the application?
How can I get my friend Bob in Timbuktu to call some kind of URL (no need for DNS) to access my application?
I know the answer has something to do with LoadBalancer nodes and Ingress, but there seems to be no proper documentation out there on how to do this for a simple hello world app.
GKE hello-world with Ingress Official Example: https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
The ingress controller is built into GKE; you just need a kind: Ingress object that points to your Service. Addressing your question:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 80
Once deployed, query kubectl get ingress hello-ingress -oyaml to find out the resulting load-balancer's IP in the status: field.
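The relevant part of that output looks roughly like this (the IP is illustrative):
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.12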
If you don't want to set up a DNS and/or an ingress, you can simply do it with what you already have, using the node external IP.
It seems that you already have the deployment, and the NodePort service is mapped to port 30081.
So, below are the remaining steps to make it work with what you have:
Get your node IP:
kubectl get nodes -o wide
Now that you have your node external IP, you need to allow TCP traffic to this port:
gcloud compute firewall-rules create allow-tcp-30081 --allow tcp:30081
That's it! You can just tell your friend Bob that the service should be accessible at node-ip:30081.
You can have a look at this page to see the other possibilities depending on your service type.
Also, when you use a NodePort, the port is mapped on all the nodes of your cluster, so you can use any node's IP. No need to look for the node where your replica is deployed.
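For example, Bob could simply run the following against any node's external IP (the IP here is illustrative):
curl http://203.0.113.25:30081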
If you don't want to use an Ingress or anything else, you can expose your deployment as a LoadBalancer; then, using the external IP, your friend Bob can access the application. To do so, run:
kubectl expose deployment hello-world-deployment --name=hello-world-deployment --type=LoadBalancer --port 80 --target-port 80
This will take around 1 minute to assign an external IP address to the service.
To get the external IP, use:
kubectl get services
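The output will look something like this (IPs and ports are illustrative):
NAME                     TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
hello-world-deployment   LoadBalancer   10.3.245.10   203.0.113.40   80:31557/TCP   1m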
Find the external IP address in front of your service, open your browser, and enter the IP address you copied.
Done! This is the easiest and simplest way to get started!
You can expose your Kubernetes services to the internet with an Ingress.
GKE allows you to use different kinds of Ingress (nginx based, and so on) but, by default, it will expose your service through the Cloud Load Balancing service.
As indicated by @MaxLobur, you can configure this default Ingress by applying a configuration file similar to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  backend:
    serviceName: hello-world
    servicePort: 80
Or, equivalently, if you want to prepare your Ingress for various backend services and configure it based on rules:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 80
This Ingress will be assigned an external IP address.
You can find it with the following command:
kubectl get ingress hello-world-ingress
This command will output something similar to the following:
NAME                  HOSTS   ADDRESS        PORTS   AGE
hello-world-ingress   *       203.0.113.12   80      2m
But if you give this IP address to your friend Bob in Timbuktu, he will call you back soon telling you that he can no longer access the application...
Now seriously: the external IP address assigned by GCP is temporary, an ephemeral one subject to change.
If you need a deterministic, fixed IP address for your application, you need to configure a static IP for your Ingress.
This will also allow you to configure, if necessary, a DNS record and SSL certificates for your application.
In GCP, the first step in this process is reserving a static IP address. You can configure it in the GCP console, or by issuing the following gcloud command:
gcloud compute addresses create hello-world-static-ip --global
This IP should be later assigned to your Ingress by modifying its configuration file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "hello-world-static-ip"
spec:
  backend:
    serviceName: hello-world
    servicePort: 80
This static IP approach adds no extra cost: as long as the address is associated with a Google Cloud network resource, you will not be charged for it.
All this process is documented in this GCP documentation page.

Ingress in Kubernetes

I was doing some research about ingress, and it seems I have to create a new ingress resource for each namespace. Is that correct?
I just created 2 separate ingress resources in different namespaces of my GKE cluster. They seem to use the same LB (which is great for cost), but I would think clashes are then possible (when using the same path). I just tried it: the first ingress I created still works on the path, while the newer one on the same path just doesn't.
Can someone explain me the correct setup for ingress?
As Kubernetes works today, an ingress controller won't pass a packet to a service that is in a different namespace from the ingress resource. So, if you create an ingress resource in the default namespace, all your services must be in the default namespace as well.
This is something that won't change. EVER. There was a feature request years ago, and the Kubernetes team announced that it's not going to happen: it would introduce a security hole if the ingress controller were able to cross namespace boundaries.
Now, what we do in these situations is actually pretty neat. You will have to do the following:
Say you have 2 services in the namespaces you need, e.g. service1.foo and service2.bar.
Create 2 headless services without selectors and 2 Endpoints objects pointing to the IP addresses of the services service1.foo and service2.bar, in the same namespace as the ingress resource. A headless service without selectors forces kube-dns (or CoreDNS) to look for either an ExternalName-type service or an Endpoints object. The only requirement here is that the headless service and the Endpoints object must have the same name.
Create your ingress resource pointing to the headless services.
It should look like this (for one service).
Say the IP address of service1.foo is 10.10.10.10. Your headless service and Endpoints object would be:
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bait-svc
subsets:
- addresses:
  - ip: 10.10.10.10
  ports:
  - port: 80
    protocol: TCP
and Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: ssl-certs
  rules:
  - host: site1.training.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bait-svc
          servicePort: 80
So, the Ingress points to the bait-svc, and bait-svc points to service1.foo. And you will do this for each service.
UPDATE
Thinking about it now, this might not work with the GKE Ingress Controller, as on GKE you need a NodePort-type service for the HTTP load balancer to reach the service. As you can see, in my example I've got the nginx Ingress Controller.
Whether or not it works, I would recommend using some other Ingress Controller. It's not that the GKE IC is not good; it is quite robust, but you almost always end up hitting some limitation. Other ICs are more flexible.
The behavior of conflicting Ingress routes is undefined and implementation dependent. In most cases it’s just last writer wins.

Ingress expose the service with the type clusterIP

Is it possible to expose the service by ingress with the type of ClusterIP?
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: my-service-port
    port: 4001
    targetPort: 4001
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service
          servicePort: 4001
I know the service can be exposed with type NodePort, but that may cost one more NAT hop. Could someone show me the fastest way to reach an internal service from the internet in the cloud?
No, clusterIP is only reachable from within the cluster. An Ingress is essentially just a set of layer 7 forwarding rules, it does not handle the layer 4 requirements of exposing the internals of your cluster to the outside world. At least 1 NAT step is required.
For Ingress to work, though, you need to have at least one service involved that exposes your workload externally, so nodePort or loadBalancer. Your ingress controller and the infrastructure of your cluster will determine which of the two services you will need to use.
In the case of Nginx ingress, you need to have a single LoadBalancer service which the ingress will use to bridge traffic from outside the cluster to inside it. After that, you can use clusterIP services for each of your workloads.
In your above example, as long as the nginx ingress controller is correctly configured (with a loadbalancer), then the config you are using should work fine.
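For illustration, the single LoadBalancer Service sitting in front of the nginx ingress controller typically looks something like this (the names, labels and namespace vary with the installation method, so treat it as a sketch):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer  # the only externally exposed service
  selector:
    app.kubernetes.io/name: ingress-nginx  # matches the controller pods
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
Everything behind it, your my-service included, can stay ClusterIP.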
In short: YES.
Now to the elaborate answer...
First things first, let's have a look at what the official documentation says:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
[...]
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer...
What's confusing here is the term load balancer. In the definition above, we are talking about the classic, well-known web load balancer.
This one has nothing to do with Kubernetes!
So, back to the definition: to use an Ingress and make it work, we need a Kubernetes resource called an IngressController. And that resource happens to be a load balancer! That's it.
However, you have to keep in mind that there is a difference between a load balancer in the outside world and a Kubernetes service of type: LoadBalancer.
So, in summary (and in order to redirect traffic from the outside world to your ClusterIP service):
Do you need a load balancer to make your kind: Ingress work? Yes, and that is the IngressController Kubernetes resource.
Do you need a Kubernetes service of type: LoadBalancer or type: NodePort to make your kind: Ingress work? Definitely not! A service of type: ClusterIP works just fine!

Kubernetes path based routing for multiple namespaces

The environment: I have a Kubernetes cluster set up with namespaces for "dev", "sit" and "prod". In each of these namespaces I have multiple services of type: LoadBalancer which target a specific deployment of a dockerised application (I have multiple applications), so I can access each of them by just using the exposed IP address of the service in whichever namespace I want. An example service looks like this and is very simple:
apiVersion: v1
kind: Service
metadata:
  name: application1
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
  type: LoadBalancer
  selector:
    app: application1
The problem: I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.) to allow users to migrate to the new version when they are ready, and I've been trying to implement path-based routing following this guide. I have managed to restructure my architecture so that I have ReplicationControllers and an ingress which looks at the path rules to route to the correct service.
This only seems to work if I have a single exposed service and a single namespace, because I only have DNS host names for the production environment and want to use the individual IP address of a service for the other environments, and I can't figure out how to specify ingress rules for a service which doesn't have a hostname.
I could just have a load balancer for every environment and use path-based routing to route to the different services for dev and sit, which is not ideal, because to access any service we'd now have to use something like ip/application1 and ip/application2 instead of directly using the service IP address of each application. But my biggest problem is that when I followed the guide and created the ingress, replicationController and a service in my SIT namespace, it started affecting the LoadBalancer services in my other two environments (as I understand it, Kubernetes would sometimes try to use the nginx controller from the SIT environment on my DEV services and therefore fail; other times it would use the GCE default configuration and work).
I tried adding the arg "- --watch-namespace=sit" to limit the scope of the ingress controller to only affect sit, but it does not seem to work.
I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.)
That is exactly what Ingress can do, but the problem is that you want to use IP addresses for routing, while Ingress uses DNS names for that.
I think the best way to implement this is to use an Ingress which will handle the requests. On GCE, Ingress uses the HTTP(S) load balancer. Yes, you will need a DNS name for that, but it will help you create the routing you need.
Also, I highly recommend using TLS encryption for connections.
You can check LetsEncrypt to get a free SSL certificate.
So, the solution should look like this:
1. Deploy your Services with type "ClusterIP" instead of "LoadBalancer". You can have more than one Service object per application, so you can do this in parallel with your current configuration.
2. Select any namespace (even a dedicated one), for instance "ingress-ns". We need to create Service objects there which point to your services in the other namespaces. Here is an example of such a service (let the new DNS name be "my.shiny.new.domain"):
kind: Service
apiVersion: v1
metadata:
  name: service-v1
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local  # the service name and namespace of your service with version v1
  ports:
  - port: 80
3. Now we have a namespace with several services pointing to different versions of your application in different namespaces, and we can create an Ingress object which will create an HTTP(S) Load Balancer on GCE with path-based routing:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
  - host: my.shiny.new.domain
    http:
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
Kubernetes will create a new HTTP(S) load balancer with the rules you set up in the Ingress object, and you will have a single entry point with cross-namespace, path-based routing, without needing multiple IP addresses.
Actually, you can also manage the primary version of an application with that ingress and use your primary domain with the "/" path to handle requests to your production version, as sketched below.
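For instance, extending the paths of the Ingress above (service-prod is a hypothetical ExternalName service, built exactly like service-v1, pointing at your production version):
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
      - path: /   # catch-all: requests to the bare domain go to production
        backend:
          serviceName: service-prod   # hypothetical service pointing at prod
          servicePort: 80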