By default, CoreDNS is installed in Kubernetes 1.13.
Can you please tell me how to curl a service inside the cluster using its service name?
[root@master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 24h
[root@master ~]# kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 21h
tools nexus-svc NodePort 10.233.17.152 <none> 8081:31991/TCP,5000:31111/TCP,8083:31081/TCP,8082:31085/TCP 14h
[root@master ~]# kubectl describe services nexus-svc --namespace=tools
Name: nexus-svc
Namespace: tools
Labels: tools=nexus
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"tools":"nexus"},"name":"nexus-svc","namespace":"tools"},"spec"...
Selector: tools=nexus
Type: NodePort
IP: 10.233.17.152
Port: http 8081/TCP
.....
Curling the ClusterIP directly works and I get the correct answer:
[root@master ~]# curl http://10.233.17.152:8081
<!DOCTYPE html>
<html lang="en">
<head>
<title>Nexus Repository Manager</title>
....
But curling by the service name does not work:
[root@master ~]# curl http://nexus-svc.tools.svc.cluster.local
curl: (6) Could not resolve host: nexus-svc.tools.svc.cluster.local; Unknown error
[root@master ~]# curl http://nexus-svc.tools.svc.cluster.local:8081
curl: (6) Could not resolve host: nexus-svc.tools.svc.cluster.local; Unknown error
Thanks.
CoreDNS (or kube-dns) is meant to resolve a service name to its ClusterIP (for a normal service) or to the corresponding Pod IPs (for a headless service) inside the Kubernetes cluster, not outside it. You are trying to curl the service name on the node, not inside a pod, and hence it cannot resolve the service name to its ClusterIP.
You can go inside a pod and try the following:
kubectl exec -it <pod_name> -- bash
nslookup nexus-svc.tools.svc.cluster.local
It will return the ClusterIP, which means CoreDNS is working fine. If your pod has the curl utility, you can also curl the service by name (but only from inside the cluster).
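For example, a quick check from inside any pod that has curl (a sketch; <pod_name> is a placeholder and 8081 is the service port from the output above):
kubectl exec -it <pod_name> -- curl http://nexus-svc.tools.svc.cluster.local:8081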
If you want to access the service from outside the cluster, this service is already exposed as a NodePort, so you can access it using:
curl http://<node_ip>:31991
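The node IP can be looked up with kubectl (a sketch):
kubectl get nodes -o wide
and then use the INTERNAL-IP (or EXTERNAL-IP, if set) column.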
Hope this helps.
I'm struggling with Kubernetes configuration. All I want is to reach a deployment within the cluster. The cluster is on my dedicated server and I deployed it using kubeadm.
My nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9d v1.19.3
k8s-worker1 Ready <none> 9d v1.19.3
I have a deployment running (basic nginx example):
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 29m
I've created a service:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
my-service ClusterIP 10.106.109.94 <none> 80/TCP 20m
The YAML file for my service is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80
Now I would expect that running curl 10.106.109.94:80 on my k8s-master returns the HTTP response, but what I get is:
curl: (7) Failed to connect to 10.106.109.94 port 80: Connection refused
I've tried with NodePort as well, and with targetPort and nodePort, but the result is the same.
The ClusterIP is not reachable from outside your cluster, which means you will not get any response from the host machine that hosts your k8s cluster: this IP does not belong to your machine or any other machine. It is a virtual cluster IP handled by your cluster's CNI network, such as Flannel or Weave.
So to make your service accessible from outside, or at least from the host machine, you have to change the service type to NodePort or LoadBalancer, or use kubectl port-forward.
If you change the service type to NodePort, you will get a response using any of your nodes' IPs and the allocated node port.
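One way to switch the existing service to NodePort without rewriting the manifest is kubectl patch (a sketch; you can also edit the service YAML and re-apply it):
kubectl patch svc my-service -p '{"spec": {"type": "NodePort"}}'
kubectl get svc my-service
The second command shows the node port that got allocated.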
For example, if your k8s-master is 192.168.x.x and the nodePort is 33303, then you can get a response with
curl http://192.168.x.x:33303
or
curl http://worker_node_ip:33303
If your cluster is installed locally, you can install MetalLB to get LoadBalancer functionality.
You can also use port-forward to make your service accessible from any host that has the kubectl client and access to the cluster.
kubectl port-forward svc/my-service 80:80
kubectl -n <namespace> port-forward svc/<service_name> <local_port>:<service_port>
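Note that binding local port 80 usually requires elevated privileges, so forwarding to a higher local port is often easier (a sketch):
kubectl port-forward svc/my-service 8080:80
curl http://localhost:8080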
I would like to make a call to my echo-server, but I can't figure out the hostname of my service:
orion:webanalytics papaburger$ kubectl get services -n web-analytics
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-server ClusterIP 10.100.92.251 <none> 80/TCP 87m
web-api ClusterIP 10.100.92.250 <none> 8080/TCP 87m
I have tried to reach it using kubectl exec -it curl-curl0 -- curl http://web-analytics.echo-server.svc.cluster.local/heythere but it fails:
curl: (6) Couldn't resolve host 'web-analytics.echo-server.svc.cluster.local'
If I replace web-analytics.echo-server.svc.cluster.local with the ClusterIP, it works.
How can I make my pods (web-api) reach the echo server?
edit:
orion:webanalytics papaburger$ kubectl get ep -n web-analytics
NAME ENDPOINTS AGE
echo-server 172.16.187.247:80 95m
web-api 172.16.184.217:8080 95m
It should be like this. The service DNS name always has the form:
<service-name>.<namespace-name>.svc.cluster.local
kubectl exec -it curl-curl0 -- curl http://echo-server.web-analytics.svc.cluster.local/heythere
Alternatively, you can curl POD_IP:80 directly.
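The Pod IP can be looked up with the wide output of kubectl get pods (a sketch):
kubectl get pods -n web-analytics -o wide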
The DNS name is referenced incorrectly; it follows this format:
my-svc.my-namespace.svc.cluster-domain.example
Based on the kubectl output, the DNS should be
echo-server.web-analytics.svc.cluster.local
The corresponding curl will be:
kubectl exec -it curl-curl0 -- curl http://echo-server.web-analytics.svc.cluster.local/heythere
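To verify name resolution separately, you could run nslookup from the same pod (a sketch, assuming the curl-curl0 image also ships nslookup):
kubectl exec -it curl-curl0 -- nslookup echo-server.web-analytics.svc.cluster.local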
I installed MicroK8s on my Ubuntu machine based on the steps here: https://ubuntu.com/kubernetes/install#single-node
Then I followed the official Kubernetes tutorial and created and exposed a deployment like this:
microk8s.kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
microk8s.kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port 8083
This is my kubectl get services output
akila#ubuntu:~$ microk8s.kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 25h
kubernetes-bootcamp NodePort 10.152.183.11 <none> 8083:31695/TCP 17s
This is my kubectl get pods output
akila#ubuntu:~$ microk8s.kubectl get pods
NAME READY STATUS RESTARTS AGE
kubernetes-bootcamp-6f6656d949-rllgt 1/1 Running 0 51m
But I can't access the service from my browser using http://localhost:8083 or http://10.152.183.11:31695.
When I tried http://localhost:31695 I'm getting ERR_CONNECTION_REFUSED.
How can I access this "kubernetes-bootcamp" service from my browser?
Am I missing anything?
The IP 10.152.183.11 is the CLUSTER-IP and is not accessible from outside the cluster, i.e. from a browser. You should be using http://localhost:31695, where 31695 is the NodePort opened on the host system.
The container of the gcr.io/google-samples/kubernetes-bootcamp:v1 image needs to listen on port 8083 because you are exposing it on that port. Double-check that, because otherwise this will lead to the ERR_CONNECTION_REFUSED error.
If the container is listening on port 8080 instead, use the command below to expose that port:
microk8s.kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port 8080
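Note that kubectl expose refuses to overwrite an existing service, so you may need to delete the old one first and then use the newly allocated NodePort (a sketch; the port number will differ from 31695):
microk8s.kubectl delete service kubernetes-bootcamp
microk8s.kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port 8080
microk8s.kubectl get services
curl http://localhost:<new_nodeport>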
Try this
kubectl port-forward <pod_name> <local_port>:<pod_port>
then access http://localhost:<local_port>
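A concrete version with the pod name from the output above (a sketch, assuming the container listens on 8080):
microk8s.kubectl port-forward kubernetes-bootcamp-6f6656d949-rllgt 8080:8080
and then open http://localhost:8080.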
I deployed my service with the command kubectl expose deployment hotspot-deployment --type=LoadBalancer --port=8080
and the external-ip was pending:
hotspot-deployment LoadBalancer 10.104.104.81 <pending> 8080:31904/TCP 1m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18m
After searching for a solution, I assigned an external IP to my service with:
kubectl patch svc hotspot-deployment -p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.98.103"]}}'
and the ip got assigned:
hotspot-deployment LoadBalancer 10.106.137.71 192.168.98.103 8080:31354/TCP 12m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
Now, when I try to hit my service using the URL http://192.168.98.103:8080, the page doesn't open. I tried debugging with minikube tunnel, which shows:
machine: minikube
pid: 33058
route: 10.96.0.0/12 -> 192.168.99.105
minikube: Running
services: [hotspot-deployment]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Please help!
First of all, you cannot expose your service in Minikube with the LoadBalancer type like this, because that type is intended to use a cloud provider's load balancer.
You should rather use the minikube service command:
minikube service hotspot-deployment --url
Then open the URL in your browser.
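For illustration, the expected flow looks roughly like this (the URL printed by minikube service depends on your Minikube IP and the allocated NodePort, so the values below are hypothetical):
minikube service hotspot-deployment --url
# prints e.g. http://192.168.99.105:31354
curl http://192.168.99.105:31354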
I'm following an example from Kubernetes in Action to run a simple Docker image in Kubernetes:
$ bx login --apikey @apiKey.json -a https://api.eu-de.bluemix.net
$ bx cs cluster-config my_kubernetes
$ export KUBECONFIG=..my_kubernetes.yml
Next, run the container:
$ kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1
$ kubectl expose rc kubia --type=LoadBalancer --name kubia-http
$ kubectl get service
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.10.10.1 <none> 443/TCP 20h
kubia-http 10.10.10.12 <pending> 8080:32373/TCP 0m
Fifteen minutes later ...
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.10.10.1 <none> 443/TCP 20h
kubia-http 10.10.10.12 <pending> 8080:32373/TCP 15m
I don't have anything else running on the Kubernetes cluster.
To close out the thread here, LoadBalancer cannot be used in a lite (aka free) cluster tier. The differences between lite and standard clusters can be found here - https://console.bluemix.net/docs/containers/cs_planning.html#cs_planning.
Run the following to determine if there are any failure events.
kubectl describe svc kubia-http
Thanks to Chris Rosen's answer, I was able to find a workaround:
$ bx cs workers my_kubernetes
OK
ID Public IP Private IP Machine Type State Status
kube-par01-xxxxx 1.2.3.4 6.7.8.9 free normal Ready
Note the Public IP address: 1.2.3.4
Expose the service with NodePort:
$ kubectl expose rc kubia --type=NodePort --name kubia-http2
Check the NodePort details:
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.10.10.1 <none> 443/TCP 21h
kubia-http2 10.10.10.193 <nodes> 8080:31247/TCP 10s
Access the service using the exposed port on the worker Public IP address:
$ curl http://1.2.3.4:31247/
You've hit kubia-bjb59
Based on the posts above, I got the following steps to work.
Prerequisites: create a free Kubernetes cluster in IBM Cloud and follow the steps (you need to have the ibmcloud CLI and kubectl installed and connect to the remote cluster first).
kubectl get nodes
should return something like this
NAME STATUS ROLES AGE VERSION
10.76.197.55 Ready <none> 4h18m v1.18.10+IKS
Then,
kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
replicationcontroller/nginx created
kubectl expose rc nginx --type=NodePort
service/nginx exposed
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 172.21.19.73 80:30634/TCP 70s
Note down the port, 30634 in my case
kubectl describe nodes | grep ExternalIP (to find out the external IP)
Then call http://<external_IP>:<port>.
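For example, with the port from the output above (the IP is a placeholder):
curl http://<external_IP>:30634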
Have fun!
If your purpose is to test your application by making it accessible to the external world, I would suggest using a NodePort service, which can be used in the free cluster tier.
More info can be found here: Expose service to world