Do Kubernetes service IPs change?

I'm very new to Kubernetes/Docker, so apologies if this is a silly question.
I have a pod that accesses a few services. In my container I'm running a Python script that needs to reach those services. Currently I'm doing this using the services' IP addresses.
Are service IP addresses stable, or is it better to use environment variables? If so, some tips on doing that would be great.
The opening paragraph of the Services documentation gives a motivation for services that implies stable IP addresses, but I never see it explicitly stated:
While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?
Enter Services.
My pod spec for reference:
kind: Pod
apiVersion: v1
metadata:
  name: fetchdataiso
  labels:
    name: fetchdataiso
spec:
  containers:
  - name: fetchdataiso
    image: 192.111.1.11:5000/ncllc/fetch_data
    command: ["python"]
    args: ["feed/fetch_data.py", "-hf", "10.222.222.51", "-pf", "8880", "-hi", "10.223.222.173", "-pi", "9101"]

The short answer is "Yes, the service IP can change"
$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.12.0.1       <none>        443/TCP   10d
test         10.12.172.156   <none>        80/TCP    6s
$ kubectl delete svc test
service "test" deleted
$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.12.0.1       <none>        443/TCP   10d
test         10.12.254.241   <none>        80/TCP    3s
The long answer is that if you use it correctly, you will have no problem with it. What is even more important in the scope of your question: environment variables are far worse than DNS-based coupling.
You should refer to your service by name: service, service.namespace, or the fully qualified form such as test.default.svc.cluster.local. This resolves to the service's ClusterIP, and unlike your ENVs it can be re-resolved to a new IP (which will probably never happen unless you explicitly delete and recreate the service), while the environment of a running process will never change.
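Applied to the pod spec above, that means passing DNS names instead of hard-coded IPs. A minimal sketch, assuming the two services are named feed-service and iso-service in the default namespace (hypothetical names; substitute whatever your Services are actually called):
kind: Pod
apiVersion: v1
metadata:
  name: fetchdataiso
  labels:
    name: fetchdataiso
spec:
  containers:
  - name: fetchdataiso
    image: 192.111.1.11:5000/ncllc/fetch_data
    command: ["python"]
    # The names resolve through the cluster DNS addon to the services' ClusterIPs.
    # "feed-service" and "iso-service" are placeholders for your real Service names.
    args: ["feed/fetch_data.py", "-hf", "feed-service.default", "-pf", "8880", "-hi", "iso-service.default", "-pi", "9101"]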

The service IP address is stable. You should only need to use environment variables if you don't have a better way of discovering the IP address (e.g. DNS).

If you use the DNS cluster addon within your cluster to access your services, and your service is called foo in namespace bar, you can also access it as foo.bar, which is likely more meaningful than a plain IP address.
See http://kubernetes.io/docs/user-guide/services/#dns

Related

How to access a service by name from a pod

I have the service below running in my k8s cluster. I want to access and ping the service name "service-plt-mediator" from another pod. What needs to be added to the pod's manifest so that the service name ends up in /etc/hosts and can be pinged from inside the pod?
/home/ravi>kubectl get svc | grep
NAMESPACE   NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
plt         service-plt-mediator   ClusterIP   10.108.188.147   <none>        4561/TCP,4562/TCP   3h47m
I tried adding an entry using "hostAliases" in the pod manifest, but that requires a static IP, which I cannot provide since the service IP can change after a reboot.
You don't need to add a mapping in /etc/hosts. Your pod's /etc/resolv.conf is configured by the kubelet to send DNS queries to the CoreDNS service running in the cluster (you can see that default config in the pod spec as dnsPolicy: ClusterFirst). The DNS response will be the ClusterIP of the Service.
You can use <service-name>.<namespace> as the DNS name in the other pod.
You can debug your DNS in the cluster as described here.
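For a quick check, you can resolve the name from a throwaway pod; with the service and namespace from the question, the answer should be the ClusterIP (10.108.188.147). A sketch:
# busybox:1.28 ships an nslookup that works well for this kind of test
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup service-plt-mediator.plt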

How do I actually connect to botfront on kubernetes?

I tried deploying on EKS, and my config.yaml follows this suggested format:
botfront:
  app:
    # The complete external host of the Botfront application (eg. botfront.yoursite.com).
    # It must be set even if running on a private or local DNS (it populates the ROOT_URL).
    host: botfront.yoursite.com

mongodb:
  enabled: true # disable to use an external mongoDB host
  # Username of the MongoDB user that will have read-write access to the Botfront database. This is not the root user
  mongodbUsername: username
  # Password of the MongoDB user that will have read-write access to the Botfront database. This is not the root user
  mongodbPassword: password
  # MongoDB root password
  mongodbRootPassword: rootpassword
And I ran this command:
helm install -f config.yaml -n botfront --namespace botfront botfront/botfront
and the deployment appeared successful with all pods listed as running.
But botfront.yoursite.com goes nowhere. I checked the ingress and it matches, but there are no external IP addresses or anything. I don't know how to actually access my botfront site once it's deployed on Kubernetes.
What am I missing?
EDIT:
With an nginx load balancer installed, kubectl get ingresses -n botfront now returns:
NAME                   CLASS    HOSTS                ADDRESS                                                                         PORTS   AGE
botfront-app-ingress   <none>   botfront.cream.com   a182b0b24e4fb4a0f8bd6300b440e5fa-423aebd224ce20ac.elb.us-east-2.amazonaws.com   80      4d1h
and
kubectl get svc -n botfront returns:
NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.100.207.27   <none>        80:31723/TCP      4d1h
botfront-app-service        NodePort   10.100.26.173   <none>        80:30873/TCP      4d1h
botfront-duckling-service   NodePort   10.100.75.248   <none>        80:31989/TCP      4d1h
botfront-mongodb-service    NodePort   10.100.155.11   <none>        27017:30358/TCP   4d1h
If you run kubectl get svc -n botfront, it will show you all the Services that expose your botfront installation:
$ kubectl get svc -n botfront
NAME                        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.3.252.32    <none>        80:32077/TCP      63s
botfront-app-service        NodePort   10.3.249.247   <none>        80:31201/TCP      63s
botfront-duckling-service   NodePort   10.3.248.75    <none>        80:31209/TCP      63s
botfront-mongodb-service    NodePort   10.3.252.26    <none>        27017:31939/TCP   64s
Each of them is of type NodePort, which means it exposes your app on the external IP address of each of your EKS cluster nodes on a specific port.
So if your node1 IP happens to be 1.2.3.4, you can access botfront-api-service on 1.2.3.4:32077. Don't forget to allow access to this port in your firewall/security groups. If you have a registered domain, e.g. yoursite.com, you can configure a subdomain botfront.yoursite.com for it and point it at one of your EKS nodes. Then you'll be able to access it using your domain. This is the simplest way.
To access it in a more convenient way than via a specific node's IP and a non-standard port, you may want to expose it via an Ingress, which will create an external load balancer and make your NodePort services available under one external IP address and the standard HTTP port.
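For reference, a minimal sketch of what such an Ingress looks like, using only names that already appear in this question (the host, the app service, and the nginx ingress class mentioned below); the chart's own ingress amounts to roughly this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: botfront-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # requires the nginx-ingress controller to be installed
spec:
  rules:
  - host: botfront.yoursite.com
    http:
      paths:
      - backend:
          serviceName: botfront-app-service
          servicePort: 80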
Update: I see that this chart already comes with ingress that exposes your app:
$ kubectl get ingresses -n botfront
NAME                   HOSTS                   ADDRESS   PORTS   AGE
botfront-app-ingress   botfront.yoursite.com             80      70m
If you retrieve its yaml definition by:
$ kubectl get ingresses -n botfront -o yaml
you'll see that it uses the following annotation:
kubernetes.io/ingress.class: nginx
which means you need the nginx-ingress controller installed on your EKS cluster. This might be one reason why it fails. As you can see in my example, this ingress doesn't get any external IP; that's because nginx-ingress wasn't installed on my GKE cluster. Not sure about EKS, but as far as I know it doesn't come with nginx-ingress preinstalled.
One more thing: I assume that in your config.yaml you put some real domain name that you have registered instead of botfront.yoursite.com. Supposing your domain is yoursite.com and you have successfully created the subdomain botfront.yoursite.com, you should point it at the IP of your load balancer (the one used by your ingress).
If you run kubectl get ingresses -n botfront and the ADDRESS is empty, you probably don't have nginx-ingress installed and the underlying load balancer cannot be created; an install sketch follows. If you see an external IP address there instead, point your registered domain at that address.
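Installing the controller is a single Helm release. A sketch using the ingress-nginx chart (chart and repo names have changed over time, so check the project's documentation for your cluster version):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
On EKS this creates an ELB for the controller; once it's provisioned, the ingress ADDRESS field should fill in, as it did in the edit above.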

External IP always <none> or <pending> in Kubernetes

Recently I started building my very own Kubernetes cluster using a few Raspberry Pis.
I have gotten to the point where I have a cluster up and running!
Some background info on how I set up the cluster: I used this guide
But now, when I want to deploy and expose an application, I encounter some issues...
Following the Kubernetes tutorials, I have made a deployment of nginx, and it is running fine. When I do a port-forward I can see the default nginx page on my localhost.
Now the tricky part: creating a service and routing the traffic from the internet through an ingress to the service.
I have executed the following commands:
kubectl expose deployment/nginx --type="NodePort" --port 80
kubectl expose deployment/nginx --type="LoadBalancer" --port 80
And these result in the following.
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        25h
nginx        NodePort    10.103.77.5   <none>        80:30106/TCP   7m50s

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        25h
nginx        LoadBalancer   10.107.233.191   <pending>     80:31332/TCP   4s
The external IP address never shows up, which makes it impossible for me to access the application from outside the cluster by doing curl some-ip:80, which in the end is the whole reason for me to set up this cluster.
If any of you have clear guides or advice I can work with, it would be really appreciated!
Note:
I have read things about LoadBalancer; it is supposed to be provided by the cloud host. Since I run on RPi, I don't think this will work for me, but I believe NodePort should be just fine to route to with an ingress.
Also, I am aware that I should have an ingress controller of some sort for ingress to work.
Edit
So I now have the following, with NodePort 30168:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        26h
nginx        NodePort    10.96.125.112   <none>        80:30168/TCP   6m20s
and for the IP address I have either 192.168.178.102 or 10.44.0.1:
$ kubectl describe pod nginx-688b66fb9c-jtc98
Node: k8s-worker-2/192.168.178.102
IP: 10.44.0.1
But when I enter either of these IP addresses in the browser with the NodePort, I still don't see the nginx page. Am I doing something wrong?
Any of your worker nodes' IP addresses will work for a NodePort (or LoadBalancer) service. From the description of NodePort services:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service.
If you don't know those IP addresses kubectl get nodes can tell you; if you're planning on calling them routinely then setting up a load balancer in front of the cluster or configuring DNS (or both!) can be helpful.
In your example, say some node has the IP address 10.20.30.40 (you log into the Raspberry Pi directly and run ifconfig and that's the host's address); you can reach the nginx from the second example at http://10.20.30.40:31332.
The EXTERNAL-IP field will never fill in for a NodePort service, or when you're not in a cloud environment that can provide an external load balancer for you. That doesn't affect this case, for either of these service types you can still call the port on the node directly.
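Applied to the edit above, that means using the node's address (192.168.178.102), not the pod IP (10.44.0.1), which is typically only routable inside the cluster. A sketch:
# node IP + NodePort, both taken from the question's edit
curl http://192.168.178.102:30168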
Since you are not on a cloud provider, you need to use MetalLB for the LoadBalancer features to work.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible
The MetalLB setup is very easy:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
This will deploy MetalLB to your cluster, under the metallb-system namespace.
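You can check that the controller and the per-node speakers came up before continuing:
kubectl get pods -n metallb-system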
Next, you need to create a ConfigMap with the IP range you want MetalLB to hand out. Create a file named metallb-cf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # <= select the range you want
kubectl apply -f metallb-cf.yaml
That's all.
To use it with your services, just create them with type LoadBalancer and MetalLB will do the rest; a minimal sketch follows below. If you want to customize the configuration, see here
MetalLB will assign an IP to your service/ingress, but if you are on a NAT network you need to configure your router to forward requests for your ingress/service IP.
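A minimal Service of type LoadBalancer that MetalLB would pick up (the nginx name comes from the question; the label selector is an assumption about how the deployment's pods are labelled):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer   # MetalLB watches Services of this type and assigns an IP from its pool
  selector:
    app: nginx         # assumed pod label; match it to your deployment
  ports:
  - port: 80
    targetPort: 80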
EDIT:
If you have problems getting an external IP with MetalLB running on Raspberry Pi, try switching iptables to the legacy version:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
Reference: https://www.shogan.co.uk/kubernetes/building-a-raspberry-pi-kubernetes-cluster-part-2-master-node/
I hope that helps.

Kubernetes service showing External IP '<pending>'. How can I enable it?

Having trouble getting a WordPress Kubernetes service to listen on my machine so that I can access it with my web browser. It just says "External IP" is pending. I'm using the Kubernetes configuration from Docker Edge v18.06 on Mac, with advanced Kube config enabled (not Swarm).
Following this tutorial: https://www.youtube.com/watch?time_continue=65&v=jWupQjdjLN0
And using .yaml config files from https://github.com/kubernetes/examples/tree/master/mysql-wordpress-pd
MACPRO:mysql-wordpress-pd me$ kubectl get services
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP        48m
wordpress         LoadBalancer   10.99.205.222   <pending>     80:30875/TCP   19m
wordpress-mysql   ClusterIP      None            <none>        3306/TCP       19m
The commands to get things running, to see for yourself:
kubectl create -f local-volumes.yaml
kubectl create secret generic mysql-pass --from-literal=password=DockerCon
kubectl create -f mysql-deployment.yaml
kubectl create -f wordpress-deployment.yaml
kubectl get pods
kubectl get services
Start admin console to see more detailed config in your web browser:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
I'm hoping someone can clarify things for me here. Thank you.
For Docker for Mac, you should use your host's DNS name or IP address to access exposed services. The "external IP" field will never fill in here. (If you were in an environment like AWS or GCP where a LoadBalancer Kubernetes Service creates a cloud-hosted load balancer, the cloud provider integration would provide the load balancer's IP address here, but that doesn't make sense for single-host solutions.)
Note that I've had some trouble figuring out which port is involved; answers to that issue suggest you need to use the service port (80) but you might need to try other things.
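Concretely, given the 80:30875/TCP mapping in the service output above, these are the two things to try from the Mac (which one works depends on the Docker for Mac version):
# the service port, if Docker for Mac forwards the LoadBalancer service to the host
curl http://localhost:80
# the NodePort, taken from the 80:30875/TCP mapping
curl http://localhost:30875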

External IP assignment with Minikube ingress add-on enabled

For development purposes I am trying to use Minikube. I want to test how my application will catch the event of a service being exposed and an External-IP being assigned.
When I exposed a service in the Google Container Engine quick-start tutorial, I could see the event of External-IP assignment with:
kubectl get services --watch
I want to achieve the same with Minikube (if possible).
Here is how I try to set things up locally on my OSX development machine:
minikube start --vm-driver=xhyve
minikube addons enable ingress
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoserver --type="LoadBalancer"
kubectl get services --watch
I see the following output:
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
echoserver   LoadBalancer   10.0.0.138   <pending>     8080:31384/TCP   11s
kubernetes   ClusterIP      10.0.0.1     <none>        443/TCP          4m
The External-IP field never gets updated and stays in the pending phase. Is it possible to achieve external IP assignment with Minikube?
On GKE or AWS installs, the external IP comes from the cloud integration, which reports back to the kube API the address assigned to the load balancer it created.
To have the same on Minikube you'd have to run some kind of LB controller, e.g. an haproxy one, but honestly, for Minikube it makes little sense: you have a single IP that you know in advance via minikube ip, so you can use NodePort with that knowledge (see the sketch below). An LB solution would require setting up some IP range that can be mapped to particular NodePorts, as this is effectively what an LB does: take traffic from extIP:extPort and proxy it to minikubeIP:NodePort.
Unless your use case prevents it, you should consider Ingress as the way of ingesting traffic into your Minikube.
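A sketch of the NodePort route, using the echoserver service and the 8080:31384/TCP mapping from the question's output:
# the cluster's single node IP is known in advance
minikube ip
# hit the service via the NodePort from the output above
curl http://$(minikube ip):31384
# or let minikube work out the URL for you
minikube service echoserver --url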
If you want to emulate the external IP assignment event (like the one you can observe on GKE or AWS), this can be achieved by applying the following patch to your sandbox Kubernetes:
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
https://github.com/elsonrodriguez/minikube-lb-patch#assigning-external-ips