How to get the hostname of a service in Kubernetes?

I need the hostname of the service lensespostgres-postgresql, but I get an error:
$ kubectl get services -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP ........ <none> 443/TCP 20m
lensespostgres-postgresql ClusterIP ........ <none> 5432/TCP 14m
lensespostgres-postgresql-headless ClusterIP None <none> 5432/TCP 14m
$ ping lensespostgres-postgresql.default.svc.cluster.local
ping: lensespostgres-postgresql.default.svc.cluster.local: Name or service not known
Why?

The ping fails because names under svc.cluster.local are resolved by the cluster's internal DNS, which is only reachable from pods inside the cluster, not from your host machine.
To look up the hostname of the service from inside the cluster you can use a dnsutils pod.
If you also want dnsutils on your host, install it with:
sudo apt-get update
sudo apt-get install dnsutils
Create the yaml file for the dnsutils pod as mentioned in the link.
Apply the created yaml file using the below command:
kubectl apply -f dnsutils.yaml
To get the hostname of the service, use the below command:
kubectl exec -ti dnsutils -- nslookup <service-name>
For further information on the yaml file and the steps you can refer to this link.
I have tried this in my project and it worked for me.
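For reference, a minimal dnsutils.yaml sketch, modeled on the pod from the Kubernetes DNS-debugging docs (the image tag is an assumption; check the docs for the current one):
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
Once the pod is Running, the nslookup above should return the ClusterIP of lensespostgres-postgresql.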

Related

Kubernetes - resolve hostname of a service

I would like to perform a call to my echo-server but I cannot figure out the hostname of my service:
orion:webanalytics papaburger$ kubectl get services -n web-analytics
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-server ClusterIP 10.100.92.251 <none> 80/TCP 87m
web-api ClusterIP 10.100.92.250 <none> 8080/TCP 87m
I have tried to reach it using kubectl exec -it curl-curl0 -- curl http://web-analytics.echo-server.svc.cluster.local/heythere but it fails:
curl: (6) Couldn't resolve host 'web-analytics.echo-server.svc.cluster.local'
If I change web-analytics.echo-server.svc.cluster.local to the cluster IP, it works.
How can I make my pods (web-api) reach the echo server?
edit:
orion:webanalytics papaburger$ kubectl get ep -n web-analytics
NAME ENDPOINTS AGE
echo-server 172.16.187.247:80 95m
web-api 172.16.184.217:8080 95m
It should be like this; the DNS name of a service always follows this format:
<service-name>.<namespace-name>.svc.cluster.local
kubectl exec -it curl-curl0 -- curl http://echo-server.web-analytics.svc.cluster.local/heythere
Alternatively, you can curl the pod directly at POD_IP:80.
The DNS name is being referenced incorrectly; it follows this format:
my-svc.my-namespace.svc.cluster-domain.example
Based on the kubectl output, the DNS should be
echo-server.web-analytics.svc.cluster.local
The corresponding curl will be:
kubectl exec -it curl-curl0 -- curl http://echo-server.web-analytics.svc.cluster.local/heythere
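As a side note, if curl-curl0 itself runs in the web-analytics namespace (an assumption here), the short service name is enough, because <namespace>.svc.cluster.local is on the pod's DNS search path:
kubectl exec -it curl-curl0 -n web-analytics -- curl http://echo-server/heythere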

Kubespray : Netchecker connectivity check fails

I deployed a Kubernetes (v1.17.5) cluster on OpenStack instances using Kubespray. Those instances are CentOS 7.6.1811 qcow2 images imported in Glance.
The install was successful, and I can see my nodes and pods with kubectl commands.
I used the deploy_netchecker option to deploy NetChecker and test the network within my cluster, and set network_plugin="flannel".
I also tried kube_proxy_mode="iptables", but it doesn't seem to affect the result.
That's pretty much all the changes I did in the k8s-cluster.yml file.
All the pods are running, services too:
[centos@cl1-master-0 ~]$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 46h
default netchecker-service NodePort 10.233.13.213 <none> 8081:31081/TCP 46h
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 46h
kube-system dashboard-metrics-scraper ClusterIP 10.233.59.12 <none> 8000/TCP 46h
kube-system kubernetes-dashboard ClusterIP 10.233.63.20 <none> 443/TCP 46h
But the netchecker API gives the following answer:
[root@localhost ~]# curl http://X.X.X.X:31081/api/v1/connectivity_check
{"Message":"Connectivity check fails. Reason: there are absent or outdated pods; look up the payload","Absent":["netchecker-agent-hostnet-kk56x","netchecker-agent-hostnet-klldn","netchecker-agent-hostnet-r2vqs","netchecker-agent-hostnet-wqhjs"],"Outdated":["netchecker-agent-4jsgf","netchecker-agent-c9pcf","netchecker-agent-hostnet-jzbfv","netchecker-agent-vxgpf"]}
For an unknown reason, I cannot access the API from a cluster node with localhost, so I used a floating IP with OpenStack.
Here are some logs from the agent :
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-vjnwl_d8290268-3ea4-4e3c-acb4-295ab162a735/netchecker-agent/0.log
{"log":"I0701 13:04:01.814246 1 agent.go:135] Response status code: 200\n","stream":"stderr","time":"2020-07-01T13:04:01.81437579Z"}
{"log":"I0701 13:04:01.814272 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:04:01.814393199Z"}
{"log":"I0701 13:04:16.817398 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-vjnwl\n","stream":"stderr","time":"2020-07-01T13:04:16.817786735Z"}
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-hostnet-klldn_d5fa6e72-885f-44e1-97a6-880a25e6d6d6/netchecker-agent/0.log
{"log":"E0701 13:05:22.804428 1 agent.go:133] Error while sending info. Details: Post http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn: dial tcp 10.233.13.213:8081: i/o timeout\n","stream":"stderr","time":"2020-07-01T13:05:22.805138032Z"}
{"log":"I0701 13:05:22.804474 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:05:22.805190295Z"}
{"log":"I0701 13:05:37.807140 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn\n","stream":"stderr","time":"2020-07-01T13:05:37.807309111Z"}
Logs from the server do not indicate any error.
I tried to check DNS resolution with the following:
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- /bin/sh
/ $ nslookup kubernetes.default
Server: 169.254.25.10
Address 1: 169.254.25.10
nslookup: can't resolve 'kubernetes.default'
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.cluster.local svc.cluster.local cluster.local openstacklocal
options ndots:5
169.254.25.10 is the IP of nodelocaldns, but it doesn't seem to query the deployed coredns service.
When I use nslookup netchecker-service.default.svc.cluster.local 10.233.0.3, with the coredns IP, I get a correct answer.
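The explicit comparison commands, for reference (both IPs come from the outputs above):
kubectl exec -it netchecker-agent-4jsgf -- nslookup netchecker-service.default.svc.cluster.local 169.254.25.10
kubectl exec -it netchecker-agent-4jsgf -- nslookup netchecker-service.default.svc.cluster.local 10.233.0.3
Only the query against 10.233.0.3 (coredns) succeeds, so the problem seems to sit between the pods and nodelocaldns rather than in coredns itself.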
What can be wrong with my configuration?
Thanks in advance
UPDATE: There is a known issue with the Flannel plugin, and the issue report contains a fix to apply on all nodes of the cluster. Once done, the pods successfully report back to the netchecker server.
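(For readers hitting the same symptom: a frequently cited Flannel bug with exactly these i/o-timeout symptoms is the VXLAN checksum-offload issue. Whether that is the issue meant above is an assumption, since the original link is not preserved; the usual per-node workaround is:
sudo ethtool -K flannel.1 tx-checksum-ip-generic off
)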

k8s: Get access to pods

A newbie question related to k8s. I've just installed a k3d cluster.
I've deployed this helm chart:
$ helm install stable/docker-registry
It's been installed and the pod is running correctly.
Nevertheless, I can't quite figure out how to get access to this just-deployed service.
According to the documentation, it's listening on port 5000 and is using a ClusterIP. A service is also deployed.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 42h
docker-registry-1580212517 ClusterIP 10.43.80.185 <none> 5000/TCP 19m
EDIT
I've been able to tell the chart to create an ingress:
$ kubectl get ingresses.networking.k8s.io -n default
NAME HOSTS ADDRESS PORTS AGE
docker-registry-1580214408 chart-example.local 172.20.0.4 80 10m
Nevertheless, I'm still unable to push images to the registry:
$ docker push 172.20.0.4/feedly:v1
The push refers to repository [172.20.0.4/feedly]
Get https://172.20.0.4/v2/: x509: certificate has expired or is not yet valid
Since the service type is ClusterIP, you can't reach the service from the host system directly. You can run the below command to access the service from your host system.
kubectl port-forward --address 0.0.0.0 svc/docker-registry-1580212517 5000:5000 &
curl <host IP/name>:5000
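With the port-forward running, the registry answers on localhost:5000 over plain HTTP, and docker treats localhost as an insecure registry by default, so a push would look like this (image name taken from the question):
docker tag feedly:v1 localhost:5000/feedly:v1
docker push localhost:5000/feedly:v1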

Installing helm on minikube return an error

I have followed the steps as described in this link.
When I am on the helm install section (Step 2) and try to run:
helm install --name web ./demo
I am getting the following error:
Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
Expected Result: It should install and deploy the chart.
This issue relates to your kubernetes configuration, not to helm itself.
I assume you are also not able to see output from other helm commands like helm list, etc.
Lots of people have this issue because of a not properly configured CNI (typically this is calico), and sometimes it happens because the kubeconfig is absent.
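A quick sanity check before changing anything: verify that the API server is reachable with plain kubectl and that the tiller pod is up (standard commands, nothing helm-specific assumed):
kubectl cluster-info
kubectl -n kube-system get pods | grep tiller
If kubectl works from your machine but tiller still cannot reach 10.96.0.1 from inside the cluster, the pod network (CNI) is the usual suspect.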
Solutions are:
migrate from calico to flannel
Change the --pod-network-cidr for calico from 192.168.0.0/16 to 172.16.0.0/16 when using kubeadm to init the cluster, like kubeadm init --pod-network-cidr=172.16.0.0/16
You can find more related info in a similar github helm issue.
Simple single-machine example:
1) kubeadm init --pod-network-cidr=172.16.0.0/16
2) kubectl taint nodes --all node-role.kubernetes.io/master-
3) kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
4) Install helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
5) Create and install the chart
$ helm create demo
Creating demo
$ helm install --name web ./demo
NAME: web
LAST DEPLOYED: Tue Jul 16 10:44:15 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
web-demo 0/1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
web-demo-6986c66d7d-vctql 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-demo ClusterIP 10.106.140.176 <none> 80/TCP 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
6) Result
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-demo-6986c66d7d-vctql 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
service/web-demo ClusterIP 10.106.140.176 <none> 80/TCP 75s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web-demo 1/1 1 1 75s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-demo-6986c66d7d 1 1 1 75s
You can find more info on how to configure helm and kubernetes itself in the Get Started With Kubernetes Using Minikube article.

Kubernetes service external IP address remains pending with IBM Cloud (earlier called Bluemix)

I'm following an example from Kubernetes in Action to run a simple docker image in kubernetes:
$ bx login --apikey @apiKey.json -a https://api.eu-de.bluemix.net
$ bx cs cluster-config my_kubernetes
$ export KUBECONFIG=..my_kubernetes.yml
Next, run the container:
$ kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1
$ kubectl expose rc kubia --type=LoadBalancer --name kubia-http
$ kubectl get service
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.10.10.1 <none> 443/TCP 20h
kubia-http 10.10.10.12 <pending> 8080:32373/TCP 0m
Fifteen minutes later ...
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.10.10.1 <none> 443/TCP 20h
kubia-http 10.10.10.12 <pending> 8080:32373/TCP 15m
I don't have anything else running on the Kubernetes cluster.
To close out the thread here, LoadBalancer cannot be used in a lite (aka free) cluster tier. The differences between lite and standard clusters can be found here - https://console.bluemix.net/docs/containers/cs_planning.html#cs_planning.
Run the following to determine if there are any failure events.
kubectl describe svc kubia-http
Thanks to Chris Rosen's answer, I was able to find a workaround:
$ bx cs workers my_kubernetes
OK
ID Public IP Private IP Machine Type State Status
kube-par01-xxxxx 1.2.3.4 6.7.8.9 free normal Ready
Note the Public IP address: 1.2.3.4
Expose the service with NodePort:
$ kubectl expose rc kubia --type=NodePort --name kubia-http2
Check the NodePort details:
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.10.10.1 <none> 443/TCP 21h
kubia-http2 10.10.10.193 <nodes> 8080:31247/TCP 10s
Access the service using the exposed port on the worker Public IP address:
$ curl http://1.2.3.4:31247/
You've hit kubia-bjb59
Based on the posts above, I got the following steps to work:
Prerequisites: Create a free Kubernetes cluster in the IBM Cloud and follow the steps (you need to have the ibmcloud CLI and kubectl installed, and to connect to the remote cluster first).
kubectl get nodes
should return something like this
NAME STATUS ROLES AGE VERSION
10.76.197.55 Ready <none> 4h18m v1.18.10+IKS
Then,
kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
replicationcontroller/nginx created
kubectl expose rc nginx --type=NodePort
service/nginx exposed
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 172.21.19.73 80:30634/TCP 70s
Note down the port, 30634 in my case
kubectl describe nodes | grep ExternalIP (to find out the external IP)
Then call http://<ExternalIP>:<port>
Have fun!
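A scripted variant of the last two steps, assuming the first node exposes an ExternalIP and using the NodePort from above:
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
curl http://$NODE_IP:30634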
If your purpose is to test your application by making it accessible to the external world, I would suggest using a NodePort service, which is available in the free tier.
More info can be found here: Expose service to world