How to communicate between pods in a service? - kubernetes

Suppose I have a service containing two pods. One of the pods is an HTTP server, and the other pod needs to hit a REST endpoint on this pod. Is there a hostname that the second pod can use to address the first pod?

I'm assuming when you say "service" you aren't referring to the Kubernetes lexicon of a Service object, otherwise your two Pods in the Service would be identical, so let's start by teasing out what a "Service" means in Kubernetes land.
You will have to create an additional Kubernetes object called a Service to get your hostname for your HTTP server's Pod. When you create a Service you will define a .spec.selector that points to a set of labels on the HTTP server's Pod. For the sake of example, let's say the label is app: nginx. The name of that Service object will become the internal DNS record that can be queried by the second Pod.
A simplified example:
apiVersion: v1
kind: Pod
metadata:
  name: http-service
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: my-http-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Now your second Pod can make requests to the HTTP service by the Service name, my-http-service.
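For example, from a shell in the second Pod (assuming its image includes curl, and that both Pods run in the default namespace), either the short name or the fully qualified cluster DNS name should work:
curl http://my-http-service
curl http://my-http-service.default.svc.cluster.local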
It's also worth mentioning that Kubernetes best practice dictates that these Pods be managed by controllers such as Deployments or ReplicaSets for all sorts of reasons, including high availability of your applications.
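A minimal sketch of what that might look like, assuming you keep the same app: nginx label so the Service above still selects the replicas (the Deployment name is illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80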

Note that a service is a different concept in Docker than in K8s. The easiest way of getting what you want would be creating the two pods, say pod-1 and pod-2, with a yaml file similar to this one:
apiVersion: v1
kind: Pod
metadata:
  name: NAME
  labels:
    app: LABEL
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Say NAME and LABEL are nginx and nginx-1, so you now have two pods called nginx and nginx-1, with labels app: nginx and app: nginx-1. Actually, as only one of them is going to be exposed, the other label is irrelevant.
Now you expose the pod either with a yaml file or from command line.
Yaml file:
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
Command line:
kubectl expose pod nginx --port 80 --name server
If you now access the second pod (nginx-1) and curl the service directly, you would end up hitting the pod behind it (nginx):
nerus:~/workspace $ kubectl exec -it nginx-1 bash
root@nginx-1:/# curl -I server
HTTP/1.1 200 OK

You can expose your pod with kubectl expose deployment <name> --type=NodePort. Then kubectl describe service <name> will show you the port number, and you can access your pod at http://localhost:<port-number>. Hope it will help.
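A hedged sketch of that flow, assuming a local cluster whose node is your own machine (all names are placeholders):
kubectl expose deployment my-deployment --type=NodePort --port=80
kubectl describe service my-deployment   # note the NodePort value, e.g. 3xxxx
curl http://localhost:<node-port>         # works when the node is your local machine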

Ironically, you answered your own question: a Service is a stable name and IP that abstracts over the individual coming-and-going of the Pods to which it will route traffic, as described very well in the fine manual.
If the-http-pod needs to reach the-rest-pod, then create a Service that matches the labels on the PodSpec that created the-rest-pod, and from that point forward the-http-pod can always use ${serviceName}.${serviceNamespace}.svc.cluster.local to reach any Pod that has matching labels.
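A minimal sketch, assuming the-rest-pod carries a label such as app: the-rest (the label, Service name, and target port here are all illustrative):
apiVersion: v1
kind: Service
metadata:
  name: the-rest-service     # hypothetical name
  namespace: default
spec:
  selector:
    app: the-rest            # hypothetical label on the-rest-pod
  ports:
  - port: 80
    targetPort: 8080         # whatever port the REST container actually listens on
From that point on, the-http-pod can call http://the-rest-service.default.svc.cluster.local (or just http://the-rest-service from within the same namespace).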

Related

Container port pods vs container port service

I would like to understand the mapping between the service port and pod container port.
Do I need to define the container port as part of my pod and also as part of my service? Or it's ok just to expose it as part of the service?
containerPort as part of the pod definition is for informational purposes only. Ultimately, if you want to expose this as a service within the cluster or node then you have to create a service.
To answer your question, yes, it is enough if you just expose it as part of the Kubernetes service. It is good practice to mention it as part of the pod definition so that anyone who looks at the definition can understand the port where your container service is running.
This is very well explained here
Official kubernetes reference documentation
The port that the container exposes and the port of the service are different concepts in Kubernetes.
If you want to create a service for your app, your pod has to have a port. For example, here is a deployment yaml whose pod template defines a container port:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 8080
containerPort sets the port that the app exposes.
To access this app via a service you have to create a service object with such yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: my-nginx
In this yaml, the port keyword sets the port of the Service, while targetPort is the port of your app. So the port of the Service can differ from the port the container listens on.
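For instance, from another pod in the same namespace, a request to the Service on port 80 is forwarded to the container on port 8080 (assuming curl is available in that pod's image):
curl http://my-nginx:80   # Service port 80 -> containerPort 8080 on a matching pod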
Here is a good definition from official doc:
A Kubernetes Service is an abstraction which defines a logical set of
Pods and a policy by which to access them - sometimes called a
micro-service. The set of Pods targeted by a Service is (usually)
determined by a Label Selector (see below for why you might want a
Service without a selector).
Let's take an example and try to understand with the help of a diagram.
Consider a cluster with 2 nodes and one service. Each node has 2 pods, and each pod has 2 containers, say an app container and a web container.
NodePort: 30001 (cluster-level exposed port for each node)
Port: 80 (service port)
targetPort: 8080 (app container port; the same should be mentioned in the Docker EXPOSE)
targetPort: 80 (web container port; the same should be mentioned in the Docker EXPOSE)
The diagram in the article linked below should help make this clearer.
For reference and further details please refer to https://theithollow.com/2019/02/05/kubernetes-service-publishing/
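To tie those numbers together, here is a hedged sketch of a NodePort Service carrying the app traffic (the name and label are illustrative; the web container's port 80 would be a second entry under ports):
apiVersion: v1
kind: Service
metadata:
  name: example-app-service   # hypothetical name
spec:
  type: NodePort
  selector:
    app: example              # hypothetical pod label
  ports:
  - name: app
    nodePort: 30001           # cluster-level exposed port on each node
    port: 80                  # service port
    targetPort: 8080          # app container port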

Kubernetes Load balancer server connection refused:Default 80 port is working

After deploying a spring microservice ,Load balancer in Kubernetes is not connecting to the mentioned port in Google Cloud Platform.
Is there any firewall settings we need to change to connect to the deployed service ?
https://serverfault.com/questions/912734/kubernetes-connection-refused-during-deployment
Most likely this is an issue with your Kubernetes Service and/or Deployment. GKE will automatically provision the firewall rules required for the ports mapped to the Service resource.
Ensure that you have exposed port 80 on your Service and also mapped it to a valid port on your Deployment's Pods
Here is an example of using a Deployment and Service to expose an nginx pod:
deployment.yaml:
apiVersion: apps/v1           # API Version of this Object
kind: Deployment              # This Object Type
metadata:                     # Allows you to specify custom metadata
  name: nginx                 # Specifies the name of this object
spec:                         # The official specification matching object type schema
  selector:                   # Label selector for pods
    matchLabels:              # Must match these label(s)
      app: nginx              # Custom label with value
  template:                   # Template describes the pods that are created
    metadata:                 # Standard objects metadata
      labels:                 # Labels used to group/categorize objects
        app: nginx            # The name of this template
    spec:                     # Specification of the desired behaviour of this pod
      containers:             # List of containers belonging to this pod (cannot be changed/updated)
      - name: nginx           # Name of this container
        image: nginx          # Docker image used for this container
        ports:                # Port mapping(s)
        - containerPort: 80   # Number of port to expose on this pod's IP
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
To see which IP address (and ports) are being mapped you can run:
kubectl get services and kubectl describe pod <your pod name>
If you are still having problems please provide the outputs of the two kubectl commands above.
Good luck!

Kubernetes to find Pod IP from another Pod

I have the following pods hello-abc and hello-def.
And I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def?
And I want to do this programmatically.
What's the easiest way for hello-abc to find where hello-def is?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
      - name: hello-abc
        image: hello-abc:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-abc"]
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
      - name: hello-def
        image: hello-def:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-def"]
        ports:
        - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
  - port: 80
    targetPort: 5001
    protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a service that routes to each deployment, if you have deployed both services and deployments into the same namespace, you can in many modern kubernetes clusters take advantage of kube-dns and simply refer to the service by name.
Unfortunately if kube-dns is not configured in your cluster (although it is unlikely) you cannot refer to it by name.
You can read more about DNS records for services here
In addition, Kubernetes features "Service Discovery", which exposes the ports and IPs of your services as environment variables in any container deployed into the same namespace.
Solution
This means that to reach hello-def you can do so like this:
curl http://hello-def-service:${HELLO_DEF_SERVICE_SERVICE_PORT}
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: It's very possible that if the Service port changes, only pods created after the change (in the same namespace) will receive the new environment variables.
External Access
In addition, you can also reach your service externally since you are using the NodePort feature, as long as your NodePort range is accessible from outside.
This would require you to access your service by node-ip:nodePort
You can find out the NodePort which was randomly assigned to your service with kubectl describe svc/hello-def-service
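For example (the node IP and NodePort are values you would look up first):
kubectl get nodes -o wide                 # find a node's IP
kubectl describe svc/hello-def-service    # find the assigned NodePort
curl http://<node-ip>:<node-port>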
Ingress
To reach your service from outside you should implement an ingress service such as nginx-ingress
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your 2 services are tightly coupled, you can include both in the same pod using the Kubernetes Sidecar feature. In this case, both containers in the pod would share the same virtual network adapter and be accessible via localhost:$port.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
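A minimal sketch of that layout, assuming the two images from the question are placed into a single Pod (the Pod name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hello-combined       # hypothetical name
  labels:
    app: hello-combined
spec:
  containers:
  - name: hello-abc
    image: hello-abc:v0.0.1
    args: ["/hello-abc"]
    ports:
    - containerPort: 5000
  - name: hello-def
    image: hello-def:v0.0.1
    args: ["/hello-def"]
    ports:
    - containerPort: 5001
With this layout hello-abc can reach hello-def at http://localhost:5001.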
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. It supports both Docker links
compatible variables (see makeLinkVariables) and simpler
{SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the
Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
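For a Service named hello-def-service, that naming rule yields variables along these lines; a quick way to check from inside a hello-abc pod (the pod name is a placeholder):
kubectl exec <hello-abc-pod-name> -- sh -c 'env | grep HELLO_DEF_SERVICE'
# e.g. HELLO_DEF_SERVICE_SERVICE_HOST=10.0.0.11
#      HELLO_DEF_SERVICE_SERVICE_PORT=80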
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be configured/installed in your k8s cluster before DNS records can be utilized in your cluster.
Specifically, you should be able to reach hello-def-service via the DNS name http://hello-def-service for a service running in the same namespace as hello-abc-service.
And you should be able to reach hello-def-service running in another namespace other_namespace via the DNS record hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have DNS add-ons installed in your cluster, you still can find the virtual IP of the hello-def-service via environment variables in hello-abc pods. As is documented here.

expose kubernetes pod to internet

I created a pod with an api and a web docker container in kubernetes using a yml file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          5m
but I don't know how to expose it from here.
it seems odd I would have to run kubectl run .... again as the pod is already running. It does not show a deployment though.
if I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod, this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
# Name of our service
name: prodigy-service
spec:
# LoadBalancer type to allow external access to multiple ports
type: LoadBalancer
selector:
# Will deliver external traffic to the pod holding each of our containers
app: test_pod
ports:
- name: sentiment
protocol: TCP
port: 80
targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: The service will allocate a public IP address, which you can find using kubectl get services with the following output:
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
carbon-relay     ClusterIP      *.*.*.*      <none>        2003/TCP       78d
comparison-api   LoadBalancer   *.*.*.*      *.*.*.*       80:30920/TCP   15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply deployment, I can without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you have tried to expose a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
If you already have pod and service running, you can create an ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an ingress from an existing service. Go to the Services and Ingress tab, select the service, click Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a yaml file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}

Mapping incoming port in kubernetes service to different port on docker container

This is the way I understand the flow in question:
When requesting a kubernetes service (via http for example) I am using port 80.
The request is forwarded to a pod (still on port 80)
The pod forwards the request to the (docker) container that exposes port 80
The container handles the request
However my container exposes a different port, let's say 3000.
How can I make a port mapping like 80:3000 in step 2 or 3?
There are confusing options like targetPort and hostPort in the kubernetes docs which didn't help me. kubectl port-forward seems to forward only my local (development) machine's port to a specific pod for debugging.
These are the commands I use for setting up a service in the google cloud:
kubectl run test-app --image=eu.gcr.io/myproject/my_app --port=80
kubectl expose deployment test-app --type="LoadBalancer"
I found that I needed to add some arguments to my second command:
kubectl expose deployment test-app --type="LoadBalancer" --target-port=3000 --port=80
This creates a service which directs incoming http traffic (on port 80) to its pods on port 3000.
A nicer way to do this whole thing is with yaml files service.yaml and deployment.yaml and calling
kubectl create -f deployment.yaml
kubectl create -f service.yaml
where the files have these contents
# deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: user-app
        image: eu.gcr.io/myproject/my_app
        ports:
        - containerPort: 3000
and
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
Note that the selector of the service must match the label of the deployment.
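As a quick sanity check that the selector matched, you can inspect the Service's endpoints after creating both objects; an empty ENDPOINTS column means the labels do not match (the output below is illustrative):
kubectl get endpoints app-service
# NAME          ENDPOINTS                       AGE
# app-service   10.x.x.x:3000,10.x.x.x:3000     1m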