Kubernetes service architecture

Within the same Kubernetes cluster:
Can I have multiple StatefulSets attached to one headless service, or should each StatefulSet have its own headless service? What are the pros and cons of each approach?
Can I mix standard and headless services in the same cluster? Specifically, I would like to use a LoadBalancer service to load balance headless services. Can I define a service of type LoadBalancer and have headless services (ClusterIP = None) attached to it? If yes, how can I achieve this?
Here is my intended architecture:
Load Balancer Service
- Headless Service (Database-service)
- MySql
- BlazeGraph
- Headless Service (Web / Tomcat)
- Web Service (RESTful / GraphQL)
Any advice and insight is appreciated.
My setup
My service and the statefulsets attached to it have different labels.
database-service: app=database
mysqlset: app=mysql
My pods
khteh@khteh-T580:~ 2007 $ k get pods -l app=mysql -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mysql-0 1/1 Running 1 18h 10.1.1.4 khteh-t580 <none>
khteh@khteh-T580:~ 2008 $ k get pods -l app=blazegraph -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
blazegraph-0 1/1 Running 1 18h 10.1.1.254 khteh-t580 <none>
khteh@khteh-T580:~ 2009 $ k describe service database-service
Name: database-service
Namespace: default
Labels: app=database
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"database"},"name":"database-service","namespace":"defaul...
Selector: app=database,tier=database
Type: ClusterIP
IP: None
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: <none>
Port: blazegraph 9999/TCP
TargetPort: 9999/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
Notice that the service's Endpoints are <none>. I am not sure this is the right setup.
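As an aside, a minimal sketch of what would make those Endpoints populate, assuming the intent is for the MySQL pods to back database-service (names and image below are illustrative): the pod template labels have to include every label in the Service selector (app=database, tier=database).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysqlset
spec:
  serviceName: database-service   # governed by the headless service
  replicas: 1
  selector:
    matchLabels:
      app: database
      tier: database
  template:
    metadata:
      labels:
        app: database             # must match the Service selector...
        tier: database            # ...or the Endpoints stay empty
    spec:
      containers:
      - name: mysql
        image: mysql:5.7          # illustrative image
        ports:
        - containerPort: 3306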

A headless Service is what you use whenever you want to discover all pods behind the service automatically, as opposed to a regular Service, where you get the ClusterIP instead. As an illustration, here is the difference between the DNS entries for a Service (with ClusterIP) and a headless Service (without ClusterIP):
With a standard Service you get the ClusterIP value:
kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 10.0.0.213
With a headless Service you get the IP of each pod:
kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.6
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.7
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.8
Now, if you attach two StatefulSets to a single headless Service, it will return the addresses of the pods from both StatefulSets, and there will be no way to tell which pod belongs to which application. To understand why headless Services are used, consider the following:
Headless Services let developers reduce coupling to the Kubernetes system by doing service discovery their own way. For such Services, no ClusterIP is allocated, kube-proxy does not handle them, and the platform does no load balancing or proxying for them. So if you set clusterIP: None in your Service, Kubernetes will not do any load balancing on its behalf.
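For reference, a minimal headless Service manifest of the kind described above (selector and names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None        # headless: no ClusterIP, no kube-proxy load balancing
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306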
Hope this helps.
EDIT:
I did a little experiment to answer your queries: I created two MySQL StatefulSets named mysql and mysql2, with 1 replica each. Each has its own PV and PVC, but both are bound to a single headless service.
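Roughly, both StatefulSets point at the same governing headless Service via serviceName; a sketch (not the exact manifests used, and abbreviated for brevity):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql      # the shared headless Service
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
# mysql2 is identical apart from its name and volume claims;
# it also sets serviceName: mysql, so both land behind the same Service.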
[root@ip-10-0-1-235 centos]# kubectl get pods -l app=mysql -o wide
NAME READY STATUS RESTARTS AGE IP NODE
mysql-0 1/1 Running 0 4m 192.168.13.21 ip-10-0-1-235.ec2.internal
mysql2-0 1/1 Running 0 3m 192.168.13.22 ip-10-0-1-235.ec2.internal
Now you can see the single headless service attached to both the pods
[root@ip-10-0-1-235 centos]# kubectl describe svc mysql
Name: mysql
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP: None
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.13.21:3306,192.168.13.22:3306
Session Affinity: None
Events: <none>
Now when you look up the service from some other pod, it returns the IP addresses of both pods:
[root@rtp-worker-0 /]# nslookup mysql
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 192.168.13.21
Name: mysql.default.svc.cluster.local
Address: 192.168.13.22
Now it is impossible to tell which address (pod) belongs to which StatefulSet. I then tried to identify the StatefulSet by its metadata name, but couldn't:
[root@rtp-worker-0 /]# nslookup mysql2.mysql.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mysql2.mysql.default.svc.cluster.local: NXDOMAIN
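For completeness, a sketch of the alternative this points to: give each StatefulSet its own headless Service whose selector only matches its own pods, so the two applications stay distinguishable (labels below are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql            # only the mysql StatefulSet's pods
  ports:
  - port: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql2
spec:
  clusterIP: None
  selector:
    app: mysql2           # only the mysql2 StatefulSet's pods
  ports:
  - port: 3306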
Hope it clarifies.

Related

Kubernetes: Unable to explicitly set endpoint for a service

Hi Kubernetes Experts,
I have an application cluster running in an Azure Kubernetes cluster. There are 3 pods inside the application cluster. The app is designed so that each pod listens on a different port. For example, pod 1 listens on 31090, pod 2 on 31091 and pod 3 on 31092.
This application needs to be reachable from outside the network. At this point, I need to create a separate load balancer service for each of the pods.
In the service, I cannot use the app name/label as the selector, because that would distribute traffic across all 3 pods in a round-robin way. As you can see above, a given port (say 31090) is served by only one pod, so external connections to the load balancer IP fail 2/3 of the time.
So I am trying to create 3 different load balancer services, one per pod, without specifying a selector, and later assigning an Endpoints object to each one individually.
The approach is explained here:
In Kubernetes, how does one select a pod by name in a service selector?
But after the endpoint is created, the service shows endpoint as blank. See below.
First I created only the service
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 31090
    targetPort: 31090
    name: b0
  type: LoadBalancer
After this, the service shows endpoint as "none". So far, so good.
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints: <none>**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3s service-controller Ensuring load balancer
Then I created the Endpoints object. I have made sure the names match between the service and the endpoint, including any spaces or tabs. But the endpoint in the service description shows "" (blank), and this is why I am unable to get to the app from the outside network. Telnet to the external IP and port just keeps trying.
---
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice
subsets:
- addresses:
  - ip: 10.240.1.32
  ports:
  - port: 31090
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
LoadBalancer Ingress: 20.124.49.192
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints:**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3m22s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 3m10s service-controller Ensured load balancer
Only this service (the one with no selector) is failing. All my other external load balancer services are working fine and reach their pods; they all use an app label as the selector.
Here is the pod IP. I have ensured the app is listening on port 31090 inside the pod.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ck21-cp-kafka-0 2/2 Running 2 (76m ago) 78m **10.240.1.32** aks-agentpool-26199219-vmss000013 <none> <none>
Can someone please help me here?
Thanks !
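For reference, a sketch of the selector-less Service plus manual Endpoints pattern being attempted here (not the asker's exact manifests). One detail worth checking, stated as an assumption: when the Service port carries a name (b0 above), the Endpoints port usually needs the same name for the two to be matched up.
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  ports:
  - name: b0              # named Service port
    protocol: TCP
    port: 31090
    targetPort: 31090
---
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice         # must match the Service name exactly
subsets:
- addresses:
  - ip: 10.240.1.32       # pod IP from kubectl get pods -o wide
  ports:
  - name: b0              # matching port name (assumption worth verifying)
    port: 31090
    protocol: TCP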

ClusterIP not reachable within the Cluster

I'm struggling with Kubernetes configuration. What I want is simply to reach a deployment within the cluster. The cluster is on my dedicated server and I'm deploying it using kubeadm.
My nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9d v1.19.3
k8s-worker1 Ready <none> 9d v1.19.3
I've a deployment running (nginx basic example)
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 29m
I've created a service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
my-service ClusterIP 10.106.109.94 <none> 80/TCP 20m
The YAML file for my service is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment
  ports:
  - protocol: TCP
    port: 80
Now I would expect that if I run curl 10.106.109.94:80 on my k8s-master I get an HTTP answer, but what I get is:
curl: (7) Failed to connect to 10.106.109.94 port 80: Connection refused
I've tried with NodePort as well, and with targetPort and nodePort, but the result is the same.
The ClusterIP is not reachable from outside the cluster. That means you will not get any response from the host machine that hosts your k8s cluster, because this IP is not bound to your machine or to any other machine; rather, it is a cluster IP used by your cluster's CNI network, such as Flannel or Weave.
So to make your services accessible from outside, or at least from the host machine, you have to change the service type to something like NodePort or LoadBalancer, or use kubectl port-forward.
If you change the service type to NodePort, you will get a response using any of your host machines' IPs together with the allocated node port.
For example, if your k8s-master is 192.168.x.x and the nodePort is 33303, then you can get a response with
curl http://192.168.x.x:33303
or
curl http://worker_node_ip:33303
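A minimal sketch of the NodePort variant of my-service described above (the nodePort value is illustrative; by default Kubernetes only allows the 30000-32767 range, or picks a port itself if the field is omitted):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx-deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080       # illustrative; default allowed range is 30000-32767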
If your cluster is installed locally, you can install MetalLB to get LoadBalancer functionality.
You can also use port-forward to make your service accessible from any host that has a kubectl client with access to the k8s cluster.
kubectl port-forward svc/my-service 80:80
kubectl -n namespace port-forward svc/service_name Port:Port

Map service on minikube to host IP

This is my first time running through the Kubernetes tutorial.
I installed Docker, Kubectl and Minikube on a headless Ubuntu server (18.04).
I ran Minikube like this -
minikube start --vm-driver=none
I have a local docker image that runs a RESTful service on port 9110. I created a deployment and exposed it like this -
kubectl run hello-node --image=dbtemplate --port=9110 --image-pull-policy=Never
kubectl expose deployment hello-node --type=NodePort
status of my service -
# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node NodePort 10.98.104.45 <none> 9110:32651/TCP 39m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h2m
# kubectl describe services hello-node
Name: hello-node
Namespace: default
Labels: run=hello-node
Annotations: <none>
Selector: run=hello-node
Type: NodePort
IP: 10.98.104.45
Port: <unset> 9110/TCP
TargetPort: 9110/TCP
NodePort: <unset> 32651/TCP
Endpoints: 172.17.0.5:9110
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# minikube ip
192.168.1.216
As you can see, the service is available on the internal IP of 172.17.0.5.
Is there some way for me to get this service mapped to/exposed on the IP of the parent host, which is 192.168.1.216? I would like my RESTful service to be reachable at 192.168.1.216:9110.
I think minikube tunnel might be what you're looking for. https://github.com/kubernetes/minikube/blob/master/docs/networking.md
Services of type LoadBalancer can be exposed via the minikube tunnel command.
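A rough sketch of how that could look for the hello-node deployment above, assuming it is re-exposed as a LoadBalancer (the external IP that tunnel provides will vary):
# Replace the existing NodePort service with a LoadBalancer one
kubectl delete service hello-node
kubectl expose deployment hello-node --type=LoadBalancer --port=9110
# In a separate terminal, create a route to LoadBalancer services
minikube tunnel
# The service should now show an EXTERNAL-IP instead of <pending>
kubectl get service hello-node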

Cannot access Kubernetes pods exposed external ip on google cloud

I have created a sample node.js app and other required files (deployment.yml, service.yml) but I am not able to access the external IP of the service.
#kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 23h
node-api LoadBalancer 10.7.254.32 35.193.227.250 8000:30164/TCP 4m37s
#kubectl get pods
NAME READY STATUS RESTARTS AGE
node-api-6b9c8b4479-nclgl 1/1 Running 0 5m55s
#kubectl describe svc node-api
Name: node-api
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=node-api
Type: LoadBalancer
IP: 10.7.254.32
LoadBalancer Ingress: 35.193.227.250
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 30164/TCP
Endpoints: 10.4.0.12:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 6m19s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 5m25s service-controller Ensured load balancer
When I try to curl the external IP, it gives connection refused:
curl 35.193.227.250:8000
curl: (7) Failed to connect to 35.193.227.250 port 8000: Connection refused
I have exposed port 8000 in Dockerfile also. Let me know if I am missing anything.
Looking at your description on this thread it seems everything is fine.
Here is what you can try:
SSH to the GKE node where the pod is running. You can get the node name by running the same command you used, with the "-o wide" flag:
$ kubectl get pods -o wide
After SSHing in, try to curl the cluster IP as well as the service IP to see whether you get a response.
Try to get a shell inside the pod:
$ kubectl exec -it <pod-name> -- /bin/bash
After that, curl localhost to see whether you get a response:
$ curl localhost
If you do get a response from the troubleshooting steps above, then it could be an underlying issue on the GKE side. You can file a defect report here.
If you do not get any response while trying the above steps, it is possible that you have misconfigured the cluster somewhere.
This seems to me a good starting point for troubleshooting your use case.
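A hedged consolidation of those steps as shell commands, using the pod name and IPs from the question (the node name and zone passed to gcloud are illustrative):
# Find the node and the pod IP
kubectl get pods -o wide
# SSH to the GKE node hosting the pod (name/zone are illustrative)
gcloud compute ssh gke-mycluster-default-pool-abc123 --zone us-central1-a
# From the node, try the pod IP and the service's cluster IP directly
curl 10.4.0.12:8000
curl 10.7.254.32:8000
# From inside the pod, check the app answers on localhost
# (requires curl to be present in the image)
kubectl exec -it node-api-6b9c8b4479-nclgl -- curl localhost:8000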

Kubernetes ExternalName service not visible in DNS

I'm trying to expose a single database instance as a service in two Kubernetes namespaces. Kubernetes version 1.11.3 running on Ubuntu 16.04.1. The database service is visible and working in the default namespace. I created an ExternalName service in a non-default namespace referencing the fully qualified domain name in the default namespace as follows:
kind: Service
apiVersion: v1
metadata:
  name: ws-mysql
  namespace: wittlesouth
spec:
  type: ExternalName
  externalName: mysql.default.svc.cluster.local
  ports:
  - port: 3306
The service is running:
eric$ kubectl describe service ws-mysql --namespace=wittlesouth
Name: ws-mysql
Namespace: wittlesouth
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ExternalName
IP:
External Name: mysql.default.svc.cluster.local
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
If I check whether the service can be found by name from a pod running in the wittlesouth namespace, this service name does not resolve, but other services in that namespace (e.g. Jira) do:
root@rs-ws-diags-8mgqq:/# nslookup mysql.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 10.99.120.208
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth.svc.cluster.local: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup jira.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: jira.wittlesouth.svc.cluster.local
Address: 10.105.30.239
Any thoughts on what might be the issue here? For the moment I've worked around it by updating applications that need to use the database to reference the fully qualified domain name of the service running in the default namespace, but I'd prefer to avoid that. My intent eventually is for the namespaces to have separate database instances, and I would like to deploy apps configured to work that way now, in advance of actually standing up the second instance.
This doesn't work for me either, with Kubernetes 1.11.2 using CoreDNS and Calico. It works only if you reference the service directly in the namespace where it runs:
$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
mysql-0 2/2 Running 0 17m
mysql-1 2/2 Running 0 16m
$ kubectl get pods -n wittlesouth
NAME READY STATUS RESTARTS AGE
ricos-dummy-pod 1/1 Running 0 14s
kubectl exec -it ricos-dummy-pod -n wittlesouth bash
root@ricos-dummy-pod:/# ping mysql.default.svc.cluster.local
PING mysql.default.svc.cluster.local (192.168.1.40): 56 data bytes
64 bytes from 192.168.1.40: icmp_seq=0 ttl=62 time=0.578 ms
64 bytes from 192.168.1.40: icmp_seq=1 ttl=62 time=0.632 ms
64 bytes from 192.168.1.40: icmp_seq=2 ttl=62 time=0.628 ms
^C--- mysql.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.578/0.613/0.632/0.025 ms
root@ricos-dummy-pod:/# ping ws-mysql
ping: unknown host
root@ricos-dummy-pod:/# exit
$ kubectl get svc mysql
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP None <none> 3306/TCP 45d
$ kubectl describe svc mysql
Name: mysql
Namespace: default
Labels: app=mysql
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP: None
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.1.40:3306,192.168.2.25:3306
Session Affinity: None
Events: <none>
According to the docs, the ExternalName service feature is only supported with kube-dns, while Kubernetes 1.11.x defaults to CoreDNS. You might want to try switching from CoreDNS to kube-dns, or possibly changing the config of your CoreDNS deployment. I expect this to be available at some point with CoreDNS.
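A rough sketch of how to check which DNS add-on is actually serving the cluster and to inspect the CoreDNS configuration before making that change (standard commands; output will vary):
# CoreDNS pods also carry the k8s-app=kube-dns label for compatibility
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
# Inspect the CoreDNS configuration (the Corefile)
kubectl -n kube-system get configmap coredns -o yaml
# Test resolution of the ExternalName service from a throwaway pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup ws-mysql.wittlesouth.svc.cluster.local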