how to access service name from a pod - kubernetes

I have the below service running in my k8s cluster. I want to access and ping the service name "service-plt-mediator" from another pod. What needs to be added in the manifest file of the pod so that the service name appears in the /etc/hosts file and can be pinged from inside the pod?
/home/ravi>kubectl get svc | grep
NAMESPACE   NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
plt         service-plt-mediator   ClusterIP   10.108.188.147   <none>        4561/TCP,4562/TCP   3h47m
I tried to add an entry using "hostAliases" in the manifest file, but it needs a static IP as well, which I cannot provide as the service IP will change after a reboot.

You don't need to add a mapping in /etc/hosts. Your pod's /etc/resolv.conf is configured by the kubelet to send DNS queries to the CoreDNS service running in the cluster (you can see that default config in the pod spec as dnsPolicy: ClusterFirst). The DNS response will be the ClusterIP of the Service.
You can use <service-name>.<namespace> as the DNS name in the other pod, i.e. service-plt-mediator.plt in your case. Note that the ClusterIP is a virtual IP, so an actual ICMP ping to it will usually fail even when DNS resolution and TCP connections work fine.
You can debug DNS in the cluster as described in the official guide: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
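To verify resolution quickly, you can run nslookup from a throwaway pod; a minimal sketch (the busybox image/tag is just a commonly used choice for this, not something from the question):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup service-plt-mediator.plt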


How do I actually connect to botfront on kubernetes?

I tried deploying on EKS, and my config.yaml follows this suggested format:
botfront:
  app:
    # The complete external host of the Botfront application (eg. botfront.yoursite.com). It must be set even if running on a private or local DNS (it populates the ROOT_URL).
    host: botfront.yoursite.com
mongodb:
  enabled: true # disable to use an external mongoDB host
  # Username of the MongoDB user that will have read-write access to the Botfront database. This is not the root user
  mongodbUsername: username
  # Password of the MongoDB user that will have read-write access to the Botfront database. This is not the root user
  mongodbPassword: password
  # MongoDB root password
  mongodbRootPassword: rootpassword
And I ran this command:
helm install -f config.yaml -n botfront --namespace botfront botfront/botfront
and the deployment appeared successful with all pods listed as running.
But botfront.yoursite.com goes nowhere. I checked the ingress and it matches, but there are no external IP addresses or anything. I don't know how to actually access my Botfront site once it is deployed on Kubernetes.
What am I missing?
EDIT:
With an NGINX load balancer (ingress controller) installed, kubectl get ingresses -n botfront now returns:
NAME                   CLASS    HOSTS                ADDRESS                                                                         PORTS   AGE
botfront-app-ingress   <none>   botfront.cream.com   a182b0b24e4fb4a0f8bd6300b440e5fa-423aebd224ce20ac.elb.us-east-2.amazonaws.com   80      4d1h
and
kubectl get svc -n botfront returns:
NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.100.207.27   <none>        80:31723/TCP      4d1h
botfront-app-service        NodePort   10.100.26.173   <none>        80:30873/TCP      4d1h
botfront-duckling-service   NodePort   10.100.75.248   <none>        80:31989/TCP      4d1h
botfront-mongodb-service    NodePort   10.100.155.11   <none>        27017:30358/TCP   4d1h
If you run kubectl get svc -n botfront, it will show you all the Services that expose your Botfront deployment:
$ kubectl get svc -n botfront
NAME                        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.3.252.32    <none>        80:32077/TCP      63s
botfront-app-service        NodePort   10.3.249.247   <none>        80:31201/TCP      63s
botfront-duckling-service   NodePort   10.3.248.75    <none>        80:31209/TCP      63s
botfront-mongodb-service    NodePort   10.3.252.26    <none>        27017:31939/TCP   64s
Each of them is of type NodePort, which means it exposes your app on the external IP address of each of your EKS cluster nodes on a specific port.
So if your node1 IP happens to be 1.2.3.4, you can access botfront-api-service on 1.2.3.4:32077. Don't forget to allow access to this port in your firewall/security groups. If you have a registered domain, e.g. yoursite.com, you can configure a subdomain botfront.yoursite.com for it and point it at one of your EKS nodes. Then you'll be able to access it using your domain. This is the simplest way.
To access it in a more effective way than via a specific node's IP and a non-standard port, you may want to expose it via an Ingress, which will create an external load balancer, making your NodePort services available under one external IP address and the standard HTTP port.
Update: I see that this chart already comes with an ingress that exposes your app:
$ kubectl get ingresses -n botfront
NAME                   HOSTS                   ADDRESS   PORTS   AGE
botfront-app-ingress   botfront.yoursite.com             80      70m
If you retrieve its yaml definition by:
$ kubectl get ingresses -n botfront -o yaml
you'll see that it uses the following annotation:
kubernetes.io/ingress.class: nginx
which means you need the nginx-ingress controller installed on your EKS cluster. This might be one reason why it fails. As you can see in my example, the ingress doesn't get any external IP; that's because nginx-ingress wasn't installed on my GKE cluster. I'm not sure about EKS, but as far as I know it doesn't come with nginx-ingress preinstalled.
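If it is missing, a minimal sketch of installing it with Helm (repo URL and chart name as published by the upstream ingress-nginx project; verify against its current docs):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
On EKS the controller's own Service of type LoadBalancer should get an ELB hostname, which will then show up in your ingress ADDRESS column.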
One more thing: I assume that in your config.yaml you put some real domain name that you have registered instead of botfront.yoursite.com. Suppose your domain is yoursite.com and you successfully created the subdomain botfront.yoursite.com; you should redirect it to the IP of your load balancer (the one used by your ingress).
If you run kubectl get ingresses -n botfront and the ADDRESS is empty, you probably don't have nginx-ingress installed and the underlying load balancer cannot be created. If an external IP address shows up there, redirect your registered domain to that address.

Kubernetes is giving junk external ip

Kubernetes is giving a junk external IP; check the output of the below command:
$ kubectl get svc frontend -n web-console
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP         PORT(S)        AGE
frontend   LoadBalancer   100.68.90.01   a55ea503bbuddd...   80:31161/TCP   5d
Please help me understand what this external IP means.
According to this : https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
It seems you selected the LoadBalancer type, so your cloud provider provisioned a load balancer for you, and that external IP is the load balancer's DNS name.
The options that allow you to expose your application for access from outside the cluster are:
Kubernetes Service of type LoadBalancer
Kubernetes Service of type NodePort + Ingress
A Service in Kubernetes is an abstraction defining a logical set of Pods and an access policy and it can be exposed in different ways by specifying a type (ClusterIP, NodePort, LoadBalancer) in the service spec. The LoadBalancer type is the simplest approach.
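For illustration, a minimal sketch of such a Service, reusing the names from your output (the selector label and targetPort are hypothetical and must match your actual pods):
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: web-console
spec:
  type: LoadBalancer
  selector:
    app: frontend      # hypothetical label; must match your frontend pods
  ports:
  - port: 80           # port exposed by the load balancer
    targetPort: 8080   # hypothetical port your container listens on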
Once the service is created, it has an external IP address as in your output:
$ kubectl get svc frontend -n web-console
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP         PORT(S)        AGE
frontend   LoadBalancer   100.68.90.01   a55ea503bbuddd...   80:31161/TCP   5d
Now, service 'frontend' can be accessible from outside the cluster without the need for additional components like an Ingress.
To test the external IP run this curl command from your machine:
$ curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
An ExternalIP gives the possibility to access services from outside the cluster (the ExternalIP is an endpoint). A ClusterIP-type Service with an ExternalIP can still be accessed inside the cluster using its service.namespace DNS name, but it can now also be reached from its external endpoint.
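If you instead want to attach such an endpoint manually, the Service spec has an externalIPs field; a sketch with a hypothetical documentation-range address that you, not Kubernetes, are responsible for routing to a cluster node:
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend    # hypothetical label
  ports:
  - port: 80
  externalIPs:
  - 203.0.113.10     # example address; routing it to a node is the cluster administrator's job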

How to expose service outside k8s cluster?

I have run a Hello World application using the below command.
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
Created a service as below
kubectl expose deployment hello-world --type=NodePort --name=example-service
The pods are running
NAME                          READY   STATUS    RESTARTS   AGE
hello-world-68ff65cf7-dn22t   1/1     Running   0          2m20s
hello-world-68ff65cf7-llvjt   1/1     Running   0          2m20s
Service:
NAME              TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
example-service   NodePort   10.XX.XX.XX   <none>        8080:32023/TCP   66s
Here, I am able to test it through curl inside the cluster.
curl http://10.XX.XX.XX:8080
Hello Kubernetes!
How can I access this service outside my cluster? (example, through laptop browser)
You should try
http://IP_OF_KUBERNETES:32023
IP_OF_KUBERNETES can be your master IP or your worker IP.
When you expose a NodePort in Kubernetes, that port is exposed on every server in the cluster. Imagine you have two worker nodes with IP1 and IP2, and one pod running on the node with IP1 while worker2 runs no pods; you can still access your pod via either
http://IP1:32023
http://IP2:32023
You should be able to access it from outside the cluster using the assigned NodePort (32023). Paste the following http://<IP>:<Port> into your browser and you will be able to access your app:
http://<MASTER/WORKER_IP>:32023
There are answers already provided, but I felt like this topic needed some consolidation.
This seems to be fairly easy. NodePort, as the name says, exposes your application on a port of each node. So all you have to do is find the IP address of the node on which the pod runs. You can do that by running:
kubectl get pods -o wide
This shows the IP or name of the node on which each pod runs; then just follow what the previous answers state: http://<MASTER/WORKER_IP>:PORT
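For example, to find the node a pod landed on and that node's address:
kubectl get pods -o wide    # the NODE column shows where each pod runs
kubectl get nodes -o wide   # the INTERNAL-IP/EXTERNAL-IP columns show node addresses
Then open http://<NODE_IP>:32023 in your browser.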
There are more methods:
You can deploy an Ingress Controller and configure an Ingress so the application will be reachable through the internet (see the sketch after this list).
You can also use kubectl proxy to expose a ClusterIP service outside of the cluster, as in this example with the Dashboard.
Another way is to use the LoadBalancer type, which requires underlying cloud infrastructure.
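As for the Ingress option, a minimal sketch under the assumption that an ingress controller is already running (the host is hypothetical; the service name and port come from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
  - host: hello.example.com          # hypothetical domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service    # the NodePort service created above
            port:
              number: 8080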
If you are using minikube you can try to run minikube service list to check your exposed services and their IP.
You can access your service using the MasterIP or WorkerIP. If you are planning to use it in production or in a more reliable way, you should create a service of type LoadBalancer and use the load balancer's IP to access it.
If you are using any cloud env, make sure the firewall rules allow incoming traffic.
This will take care of redirecting requests to whichever node the pod is running on. Otherwise you will have to manually hit the MasterIP or WorkerIP depending on where the pod is running, and if the pod gets moved to a different node, you will have to change the IP you are hitting.

External IP assignment with Minikube ingress add-on enabled

For development purposes I try to use Minikube. I want to test how my application will catch an event of exposing a service and assigning an External-IP.
When I exposed a service in Google Container Engine quick start tutorial I could see an event of External IP assignment with:
kubectl get services --watch
I want to achieve the same with Minikube (if possible).
Here is how I try to set things up locally on my OSX development machine:
minikube start --vm-driver=xhyve
minikube addons enable ingress
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoserver --type="LoadBalancer"
kubectl get services --watch
I see the following output:
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
echoserver   LoadBalancer   10.0.0.138   <pending>     8080:31384/TCP   11s
kubernetes   ClusterIP      10.0.0.1     <none>        443/TCP          4m
The EXTERNAL-IP field never gets updated and stays in the pending state. Is it possible to achieve external IP assignment with Minikube?
On GKE or AWS installs, the external IP comes from the cloud integration, which reports back to the kube API the address assigned to the created LB.
To have the same on Minikube you'd have to run some kind of LB controller, e.g. an HAProxy-based one, but honestly, for Minikube it makes little sense: you have a single IP that you know in advance via minikube ip, so you can use NodePort with that knowledge. An LB solution would require setting up some IP range that can be mapped to particular NodePorts, as that is effectively what an LB does: take traffic from extIP:extPort and proxy it to minikubeIP:NodePort.
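For example, with the echoserver Service from the question (NodePort 31384 taken from its output):
curl http://$(minikube ip):31384
minikube service echoserver --url   # alternatively, let minikube print the URL for you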
Unless your use case prevents it, you should consider Ingress as the way of ingesting traffic into your Minikube.
If you want to emulate an external IP assignment event (like the one you can observe on GKE or AWS), this can be achieved by applying the following patch on your sandbox Kubernetes:
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
https://github.com/elsonrodriguez/minikube-lb-patch#assigning-external-ips

Do Kubernetes service IPs change?

I'm very new to kubernetes/docker, so apologies if this is a silly question.
I have a pod that is accessing a few services. In my container I'm running a python script and need to access the service. Currently I'm doing this using the services' IP addresses.
Is the service IP address stable or is it better to use environment variables? If so, some tips on doing that would be great.
The opening paragraph of the Services Documentation gives a motivation for services which implies stable IP addresses, but I never see it explicitly stated:
While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?
Enter Services.
My pod spec for reference:
kind: Pod
apiVersion: v1
metadata:
  name: fetchdataiso
  labels:
    name: fetchdataiso
spec:
  containers:
  - name: fetchdataiso
    image: 192.111.1.11:5000/ncllc/fetch_data
    command: ["python"]
    args: ["feed/fetch_data.py", "-hf", "10.222.222.51", "-pf", "8880", "-hi", "10.223.222.173", "-pi", "9101"]
The short answer is "Yes, the service IP can change"
$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.12.0.1       <none>        443/TCP   10d
test         10.12.172.156   <none>        80/TCP    6s
$ kubectl delete svc test
service "test" deleted
$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.12.0.1       <none>        443/TCP   10d
test         10.12.254.241   <none>        80/TCP    3s
The long answer is that if you use it right, you will have no problem with it. What is even more important in the scope of your question: ENV variables are far worse than DNS/IP coupling.
You should refer to your service by service name, or service.namespace, or even the full path, something along the lines of test.default.svc.cluster.local. This will get resolved to the Service's ClusterIP and, in contrast to your ENVs, it can get re-resolved to a new IP (which will probably never happen unless you explicitly delete and recreate the service), while the ENV of a running process will not be changed.
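Applied to the pod spec from the question, that means replacing the hard-coded IPs in args with service DNS names; a sketch with hypothetical service names feed-svc and ingest-svc in the default namespace:
# feed-svc and ingest-svc are hypothetical; use your actual Service names
args: ["feed/fetch_data.py", "-hf", "feed-svc.default.svc.cluster.local", "-pf", "8880", "-hi", "ingest-svc.default.svc.cluster.local", "-pi", "9101"]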
The service IP address is stable. You should only need to use environment variables if you don't have a better way of discovering the IP address (e.g. DNS).
If you use the DNS cluster addon within your cluster to access your services, and your service is called foo in namespace bar, you can also access it as foo.bar, which is likely more meaningful than a plain IP address.
See http://kubernetes.io/docs/user-guide/services/#dns