Does the Google Container Engine support DNS based service discovery? - kubernetes

From the kubernetes docs I see that there is a DNS based service discovery mechanism. Does Google Container Engine support this. If so, what's the format of DNS name to discover a service running inside Container Engine. I couldn't find the relevant information in the Container Engine docs.

The DNS name for services is as follows: {service-name}.{namespace}.svc.cluster.local
Assuming you configured kubectl to work with your cluster, you should be able to get your service and namespace details by following the steps below.
Get your namespace
$ kubectl get namespaces
NAME          LABELS   STATUS
default       <none>   Active
kube-system   <none>   Active
You should ignore the kube-system entry, because that belongs to the cluster itself. All other entries are your namespaces. By default there is a single namespace called default.
Get your services
$ kubectl get services
NAME                  TYPE    LABELS                                                    SELECTOR                   IP(S)            PORT(S)
broker-partition0     name=broker-partition0,type=broker                                name=broker-partition0     10.203.248.95    5050/TCP
broker-partition1     name=broker-partition1,type=broker                                name=broker-partition1     10.203.249.91    5050/TCP
kubernetes            component=apiserver,provider=kubernetes                           <none>                     10.203.240.1     443/TCP
service-frontend      name=service-frontend,service=frontend                            name=service-frontend      10.203.246.16    80/TCP
                                                                                                                   104.155.61.198
service-membership0   name=service-membership0,partition=0,service=membership           name=service-membership0   10.203.246.242   80/TCP
service-membership1   name=service-membership1,partition=1,service=membership           name=service-membership1   10.203.248.211   80/TCP
This command lists all the services available in your cluster. So for example, if I want to get the IP address of the service-frontend I can use the following DNS: service-frontend.default.svc.cluster.local.
Verify DNS with busybox pod
You can create a busybox pod and use that pod to execute nslookup command to query the DNS server.
$ kubectl create -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
EOF
Now you can do an nslookup from the pod in your cluster.
$ kubectl exec busybox -- nslookup service-frontend.default.svc.cluster.local
Server:    10.203.240.10
Address 1: 10.203.240.10

Name:      service-frontend.default.svc.cluster.local
Address 1: 10.203.246.16
Here you see that the Address 1 entry is the IP of the service-frontend service, the same as the IP address listed by kubectl get services.
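For completeness, the FQDN is assembled mechanically from the service name and its namespace; a trivial shell sketch using the names from the example above:

```shell
# Compose the cluster-internal DNS name of a service from its parts
service=service-frontend
namespace=default
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"
# -> service-frontend.default.svc.cluster.local
```

From a pod in the same namespace, the short name (just ${service}) usually resolves as well, thanks to the search domains in the pod's /etc/resolv.conf.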

It should work the same way as mentioned in the doc you linked to. Have you tried that? (i.e. "my-service.my-ns")

Related

Kubernetes Service get Connection Refused

I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused when I try to access the service from a Kubernetes node.
The application just listens for HTTP on 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
      - name: prometheus
        image: 34342/hello_world
        ports:
        - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9897
    nodePort: 30001
After I apply this yaml file, the pod and the service are deployed correctly, and I am sure the pod itself works: when I log in to the pod with
kubectl exec -it exporter-test-* -- sh and run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080. So the application itself should work well.
However, when I try to access the service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or run curl nodeIp:30001 or curl clusterIp:8080 on a k8s node, I get Connection refused.
Has anyone had the same issue before? I'd appreciate any help. Thanks!
You are mixing two things here. The NodePort is the port on which the application is available from outside your cluster. Inside your cluster you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080
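To make the roles of the three ports explicit, here is the ports section of the Service above, annotated (the values are copied from the question):

```yaml
ports:
- port: 8080        # service port: reachable inside the cluster at
                    # exporter-test-service.datenlord-monitoring.svc:8080
  targetPort: 9897  # container port the traffic is forwarded to
  nodePort: 30001   # opened on every node's IP for access from outside
```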
Welcome to the community!
There is nothing wrong with the behaviour you observed.
In short, a Kubernetes cluster (Minikube in this case) has its own isolated network with internal DNS.
One way to access your service on the node: you specified a nodePort for your service, which made the service accessible on localhost:30001. You can check it by running this on your host:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
exporter-test-service   NodePort   10.111.191.159   <none>        8080:30001/TCP   2m45s
# Test:
curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
exporter-test-service   LoadBalancer   10.111.191.159   10.111.191.159   8080:30001/TCP   18m
# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
Why some of the options don't work:
Connection to the service by its DNS name + NodePort. The NodePort links the host IP and NodePort to the service port inside the Kubernetes cluster. The internal DNS is not accessible outside the cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS name with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. image curlimages/curl) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
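The throwaway curl pod used in these checks can be created with a manifest along these lines (a sketch; the pod name and the sleep command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: datenlord-monitoring
spec:
  containers:
  - name: curl
    image: curlimages/curl
    # Keep the container alive so we can kubectl exec into it
    command: ["sleep", "3600"]
```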
I've attached useful links which will help you understand this difference.
Edit: DNS inside deployed pod
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root@exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube
You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces, not just the loopback one.
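The difference is easy to reproduce outside Kubernetes; a sketch using Python's built-in HTTP server as a stand-in for the application (a hypothetical local experiment on port 9897, as in the question):

```shell
# Bound to loopback only: reachable via 127.0.0.1, but connections
# arriving on the pod's/node's external IP are refused.
python3 -m http.server 9897 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9897/
kill $SRV

# Bound to all interfaces: also reachable from other pods and nodes.
python3 -m http.server 9897 --bind 0.0.0.0 >/dev/null 2>&1 &
SRV=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9897/
kill $SRV
```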

Is there a command to check which pods have a service applied

I have made a service.yaml and have created the service.
kind: Service
apiVersion: v1
metadata:
  name: cass-operator-service
spec:
  type: LoadBalancer
  ports:
  - port: 9042
    targetPort: 9042
  selector:
    name: cass-operator
Is there a way to check which pods the service has been applied to?
Using the above service, I want to connect to a cluster in Google Cloud running Kubernetes/Cassandra on external_ip:9042, but I am not able to.
kubectl get svc shows
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
cass-operator-service   LoadBalancer   10.51.247.82   34.91.214.233   9042:31902/TCP   73s
So probably the service is listening on 9042 but forwarding to the pods on 31902. I want both ports to be 9042. Is that possible?
The best way is to follow labels and selectors.
Your pod has a labels section, and the service uses it in its selector section; there are some examples in:
https://kubernetes.io/docs/concepts/services-networking/service/
You can find the selectors of your service with:
kubectl describe svc cass-operator-service
You can list your labels with:
kubectl get pods --show-labels
You can get pods by querying with a selector, like the following:
kubectl get pods -l name=cass-operator
You can also list all pods which are serving traffic behind a kubernetes service by running
kubectl get ep <service name> -o=jsonpath='{.subsets[*].addresses[*].ip}' | tr ' ' '\n' | xargs -I % kubectl get pods -o=name --field-selector=status.podIP=%

Expose port from container in a pod minikube kubernetes

I'm new to K8s. I'm trying Minikube with 2 containers running in a pod, using this command:
kubectl apply -f deployment.yaml
and this deployment.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: site-home
spec:
  restartPolicy: Never
  volumes:
  - name: v-site-home
    emptyDir: {}
  containers:
  - name: site-web
    image: site-home:1.0.0
    ports:
    - containerPort: 80
    volumeMounts:
    - name: v-site-home
      mountPath: /usr/share/nginx/html/assets/quotaLago
  - name: site-cron
    image: site-home-cron:1.0.0
    volumeMounts:
    - name: v-site-home
      mountPath: /app/quotaLago
I have a shared volume, so if I understand correctly I cannot use a Deployment but only Pods (maybe a StatefulSet?).
In any case I want to expose port 80 from the container site-web in the pod site-home.
In the official docs I see this for deployments:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
but I cannot use for example:
kubectl expose pod site-web --type=LoadBalancer --port=8080
any idea?
but I cannot use for example:
kubectl expose pod site-web --type=LoadBalancer --port=8080
Of course you can, however exposing a single Pod via LoadBalancer Service doesn't make much sense. If you have a Deployment which typically manages a set of Pods between which the real load can be balanced, LoadBalancer does its job. However you can still use it just for exposing a single Pod.
Note that your container exposes port 80, not 8080 (containerPort: 80 in your container specification) so you need to specify it as target-port in your Service. Your kubectl expose command may look like this:
kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
If you provide only the --port=8080 flag to your kubectl expose command, it assumes that the target-port's value is the same as the value of --port. You can easily check it yourself by looking at the service you've just created:
kubectl get svc site-web -o yaml
and you'll see something like this in spec.ports section:
- nodePort: 32576
  port: 8080
  protocol: TCP
  targetPort: 8080
After exposing your Pod (or Deployment) properly i.e. using:
kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
you'll see something similar:
- nodePort: 31181
  port: 8080
  protocol: TCP
  targetPort: 80
After issuing kubectl get services you should see similar output:
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
site-web   ClusterIP   <Cluster IP>   <External IP>   8080:31188/TCP   4m42s
If then you go to http://<External IP>:8080 in your browser or run curl http://<External IP>:8080 you should see your website's frontend.
Keep in mind that this solution makes sense and will be fully functional in a cloud environment which can provide you with a real load balancer. Note that if you declare such a Service type in Minikube, it in fact creates a NodePort service, as Minikube is unable to provide a real load balancer. So your application will be available on your node's (your Minikube VM's) IP address on a randomly selected port in the range 30000-32767 (in my example it's port 31181).
As to your question about the volume:
I've a shared volume so if I understand I cannot use deployment but
only pods (maybe stateful set?)
Yes, if you want to use specifically an emptyDir volume, it cannot be shared between different Pods (even if they were scheduled on the same node); it is shared only between containers within the same Pod. If you want to use a Deployment you'll need to think about another storage solution such as a PersistentVolume.
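As an illustration, a Deployment could mount a PersistentVolumeClaim in place of the emptyDir; a minimal sketch (the claim name, access mode and size are assumptions, and sharing across several pods requires storage that supports it):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: v-site-home
spec:
  accessModes:
  - ReadWriteOnce   # single-node attach; one pod (or co-located pods) at a time
  resources:
    requests:
      storage: 1Gi
```

In the pod template's volumes section, emptyDir: {} would then be replaced by persistentVolumeClaim: {claimName: v-site-home}.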
EDIT:
In the first moment I didn't notice the error in your command:
kubectl expose pod site-web --type=LoadBalancer --port=8080
You're trying to expose non-existing Pod as your Pod's name is site-home, not site-web. site-web is a name of one of your containers (within your site-home Pod). Remember: we're exposing Pod, not containers via Service.
I changed 80 -> 8080 but I always get an error: kubectl expose pod site-home --type=LoadBalancer --port=8080 returns: error: couldn't retrieve selectors via --selector flag or introspection: the pod has no labels and cannot be exposed. See 'kubectl expose -h' for help and examples.
The key point here is: the pod has no labels and cannot be exposed
It looks like your Pod doesn't have any labels defined which are required so that the Service can select this particular Pod (or set of similar Pods which have the same label) from among other Pods in your cluster. You need at least one label in your Pod definition. Adding simple label name: site-web under Pod's metadata section should help. It may look like this in your Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: site-home
  labels:
    name: site-web
spec:
  ...
Now you may even provide this label as selector in your service however it should be handled automatically if you omit --selector flag:
kubectl expose pod site-home --type=LoadBalancer --port=8080 --target-port=80 --selector=name=site-web
Remember: in Minikube real load balancer cannot be created and instead of LoadBalancer NodePort type will be created. Command kubectl get svc will tell you on which port (in range 30000-32767) your application will be available.
and kubectl expose pod site-web --type=LoadBalancer --port=8080 returns: Error from server (NotFound): pods "site-web" not found. site-home is the pod, site-web is the container with the exposed port; what's the issue?
If you don't have a Pod with the name "site-web", you can expect such a message. Here you are simply trying to expose a non-existing Pod.
If I exposed a port from a container the port is automatically exposed
also for the pod ?
Yes, you have the port defined in Container definition. Your Pods automatically expose all ports that are exposed by the Containers within them.

kubectl expose pods using selector as a NodePort service via command-line

I have a requirement to expose pods using selector as a NodePort service from command-line. There can be one or more pods. And the service needs to include the pods dynamically as they come and go. For example,
NAMESPACE   NAME          READY   STATUS    RESTARTS   AGE
rakesh      rakesh-pod1   1/1     Running   0          4d18h
rakesh      rakesh-pod2   1/1     Running   0          4d18h
I can create a service that selects the pods I want using a service-definition yaml file -
apiVersion: v1
kind: Service
metadata:
  name: np-service
  namespace: rakesh
spec:
  type: NodePort
  ports:
  - name: port1
    port: 30005
    targetPort: 30005
    nodePort: 30005
  selector:
    abc.property: rakesh
However, I need to achieve the same via commandline. I tried the following -
kubectl -n rakesh expose pod --selector="abc.property: rakesh" --port=30005 --target-port=30005 --name=np-service --type=NodePort
and a few other variations without success.
Note: I understand that currently, there is no way to specify the node-port using command-line. A random port is allocated between 30000-32767. I am fine with that as I can patch it later to the required port.
Also note: These pods are not part of a deployment unfortunately. Otherwise, expose deployment might have worked.
So, it kind of boils down to selecting pods based on selector and exposing them as a service.
My kubernetes version: 1.12.5 upstream
My kubectl version: 1.12.5 upstream
You can do:
kubectl expose $(kubectl get po -l abc.property=rakesh -o name) --port 30005 --name np-service --type NodePort
You can create a NodePort service with a specified name using the create service nodeport command:
kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]
For example:
kubectl create service nodeport myservice --node-port=31000 --tcp=3000:80
You can check Kubectl reference for more:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-service-nodeport-em-
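Note that a service generated this way selects pods labeled app=<name>, not the abc.property=rakesh pods from the question, so the pods would need that label. A client-side dry run (kubectl create service nodeport myservice --node-port=31000 --tcp=3000:80 --dry-run=client -o yaml) produces roughly:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myservice
  name: myservice
spec:
  type: NodePort
  ports:
  - name: 3000-80
    port: 3000
    protocol: TCP
    targetPort: 80
    nodePort: 31000
  selector:
    app: myservice   # generated selector; pods must carry this label
```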

Kubernetes,k8s how to make service url?

I am learning k8s. My question is: how do I get a service URL in Kubernetes, the way the Minikube command "minikube service xxx --url" does?
I ask because when a pod goes down and is created/initiated again, clients visiting the service URL should not need to change the URL.
When I deploy a pod as NodePort, I can access the pod with the host IP and port, but if it is recreated, the port changes.
My case is illustrated below: I have
one master(172.16.100.91) and
one node(hostname node3, 172.16.100.96)
I create the pod and service as below; hellocomm is deployed as NodePort, and helloext as ClusterIP. hellocomm and helloext are both
Spring Boot hello-world applications.
docker build -t jshenmaster2/hellocomm:0.0.2 .
kubectl run hellocomm --image=jshenmaster2/hellocomm:0.0.2 --port=8080
kubectl expose deployment hellocomm --type NodePort
docker build -t jshenmaster2/helloext:0.0.1 .
kubectl run helloext --image=jshenmaster2/helloext:0.0.1 --port=8080
kubectl expose deployment helloext --type ClusterIP
[root@master2 shell]# kubectl get service -o wide
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
hellocomm   NodePort    10.108.175.143   <none>        8080:31666/TCP   8s    run=hellocomm
helloext    ClusterIP   10.102.5.44      <none>        8080/TCP         2m    run=helloext
[root@master2 hello]# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE
hellocomm-54584f59c5-7nxp4   1/1     Running   0          18m   192.168.136.2   node3
helloext-c455859cc-5zz4s     1/1     Running   0          21m   192.168.136.1   node3
In the above, my pod is deployed on node3 (172.16.100.96), so I can access hellocomm at 172.16.100.96:31666/hello.
In this scenario one can easily see that when node3 goes down and a new pod is created/initiated, the port changes as well,
so my client loses the connection. I do not want this solution.
My current question: since helloext is deployed as ClusterIP and is also a service as shown above, does that mean the ClusterIP
10.102.5.44 and port 8080 form the service URL, http://10.102.5.44:8080/hello?
Do I need to create the service again from a yaml file? What is the difference between a service created by command and one created from a yaml file?
How do I write the following yaml file if I have to create the service via yaml?
Below is the yaml definition template I need to fill in. How should I fill it?
apiVersion: v1
kind: Service
metadata:
  name: string helloext
  namespace: string default
  labels:
  - name: string helloext
  annotations:
  - name: string hello world
spec:
  selector: [] ?
  type: string ?
  clusterIP: string anything I could give?
  sessionAffinity: string ? (yes or no)
  ports:
  - name: string helloext
    protocol: string tcp
    port: int 8081? (port used by host machine)
    targetPort: int 8080? (spring boot uses 8080)
    nodePort: int ?
status: since I am not using loadBalancer in the deployment, I can forget this.
  loadBalancer:
    ingress:
      ip: string
      hostname: string
NodePort, as the name suggests, opens a port directly on the node (actually on all nodes in the cluster) so that you can access your service. By default it's random - that's why when a pod dies, it generates a new one for you. However, you can specify a port as well (3rd paragraph here) - and you will be able to access on the same port even after the pod has been re-created.
The clusterIP is only accessible inside the cluster, as it's a private IP. Meaning, in a default scenario you can access this service from another container / node inside the cluster. You can exec / ssh into any running container/node and try it out.
Yaml files can be version controlled, documented, templatized (Helm), etc.
Check https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core for details on each field.
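As an illustration, a minimal way the template could be filled in for helloext (the nodePort value is an arbitrary pick from the 30000-32767 range; the run=helloext selector comes from the kubectl get service -o wide output above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloext
  namespace: default
spec:
  selector:
    run: helloext        # label set by "kubectl run helloext"
  type: NodePort
  sessionAffinity: None
  ports:
  - name: http
    protocol: TCP
    port: 8080           # service port inside the cluster
    targetPort: 8080     # containerPort of the Spring Boot app
    nodePort: 30080      # optional fixed port (30000-32767)
```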
EDIT:
More detailed info on services here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
What about creating an Ingress and pointing it to the service, to access it from outside the cluster?