Can a service be linked to pods with different images?

In Kubernetes, a Service is linked to a Deployment via the Service's selector and the Deployment's labels.
Then, can a Service be linked to Deployments or pods with different images?

A Service selects every pod whose labels match the Service's selector; it does not matter which images those pods run.
As per the Kubernetes docs on Services:
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
So, as for your question, the answer is yes.
For example, take the Service YAML below: it selects all pods that carry the app: MyApp label, regardless of what runs inside them.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
You can read more about Services in the documentation to get a clearer picture.
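To make this concrete, here is a minimal sketch (deployment names and images are assumed for illustration) of two Deployments with different images whose pods all carry the app: MyApp label; the Service above will route to the pods of both:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-nginx        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp         # same label as the Service selector, image #1
    spec:
      containers:
      - name: web
        image: nginx       # assumed image
        ports:
        - containerPort: 9376   # informational; assumes the app listens here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-httpd        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp         # same label, image #2
    spec:
      containers:
      - name: web
        image: httpd       # assumed image
        ports:
        - containerPort: 9376   # informational; assumes the app listens here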

In Kubernetes, a Service is not linked to a Deployment as such; it is bound to all matching pods via its selector.
See the example below:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Here, the above YAML creates a new Service object named "my-service", which targets TCP port 9376 on any Pod carrying the app=my-app label.
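To check which pods a Service has actually selected, whatever their images, you can compare the label query with the Service's endpoints (a quick sketch using the names above):
kubectl get pods -l app=my-app -o wide    # pods carrying the label
kubectl get endpoints my-service          # pod IPs the Service routes to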

Related

Do services send traffic to local pods first?

I have a DaemonSet with a Service pointing to it.
When a pod accesses the ClusterIP of my Service, will it reach the local pod running on the same node, or any pod behind the Service?
Is there any way to achieve the former? My understanding is that it would be the same thing as externalTrafficPolicy: Local, but for internal traffic.
By default, traffic sent to a ClusterIP or NodePort Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route "external" traffic to the Pods running on the Node that received the traffic, but this is not supported for ClusterIP Services, and more complex topologies — such as routing zonally — have not been possible. The Service Topology feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.
You need to use the Service Topology feature.
An example service which prefers local pods:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  topologyKeys:
  - "kubernetes.io/hostname"
  - "*"
Update 1:
There is another option to ensure that requests sent to a port on a particular node are handled on that same node: hostPort.
An example:
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
The above pod exposes container port 80 on hostPort 55555. If you run such pods via a DaemonSet, you can be sure one runs on each node, so each request is handled on the node that received it.
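For completeness, here is a hedged sketch of the same pod spec wrapped in a DaemonSet (the metadata names are assumed):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test-api           # hypothetical name
spec:
  selector:
    matchLabels:
      app: test-api
  template:
    metadata:
      labels:
        app: test-api
    spec:
      containers:
      - name: testapicontainer
        image: myprivaterepo/testapi:latest
        ports:
        - name: web
          hostPort: 55555        # only one pod per node can bind this host port
          containerPort: 80
          protocol: TCP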
But please be careful using it, and read this first: Configuration Best Practices

Container port pods vs container port service

I would like to understand the mapping between the service port and pod container port.
Do I need to define the container port as part of my pod and also as part of my Service? Or is it OK to just expose it as part of the Service?
containerPort in the pod definition is only for informational purposes. Ultimately, if you want to expose the port as a service within the cluster or on a node, you have to create a Service.
To answer your question: yes, it is enough to expose it as part of the Kubernetes Service. It is still good practice to mention it in the pod definition, so that anyone reading the definition can tell which port your container serves on.
This is very well explained in the official Kubernetes reference documentation.
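As a sketch of the "informational only" point (all names here are hypothetical): a Service can target a port the pod never declared, as long as the container actually listens on it:
apiVersion: v1
kind: Pod
metadata:
  name: no-declared-port   # hypothetical name
  labels:
    app: quiet-app
spec:
  containers:
  - name: web
    image: nginx           # listens on 80, yet declares no containerPort
---
apiVersion: v1
kind: Service
metadata:
  name: quiet-app
spec:
  selector:
    app: quiet-app
  ports:
  - port: 80
    targetPort: 80         # still routes; kube-proxy does not consult containerPort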
The port that the container exposes and the port of the Service are different concepts in Kubernetes.
If you want to create a Service for your app, your pod has to expose a port. For example, here is a Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 8080
containerPort declares the port the app exposes.
To access this app via a Service, create a Service object with YAML like this:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: my-nginx
In this YAML, port sets the port of the Service, while targetPort is the port of your app; the two need not be the same.
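To make the port/targetPort distinction concrete, a hedged in-cluster check (run from any other pod in the same namespace, assuming curl is available there):
curl http://my-nginx:80   # Service port 80, forwarded to targetPort 8080 in a pod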
Here is a good definition from official doc:
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).
Let's take an example and try to understand it with the help of a diagram.
Consider a cluster with 2 nodes and one Service. Each node runs 2 pods, and each pod has 2 containers: an app container and a web container.
NodePort: 30001 (cluster-level port, exposed on each node)
Port: 80 (Service port)
targetPort: 8080 (app container port; the image should expose the same port, e.g. via EXPOSE in the Dockerfile)
targetPort: 80 (web container port; likewise)
[Diagram: two nodes, each running two pods (app + web containers), published through one Service on NodePort 30001.]
For further details, see https://theithollow.com/2019/02/05/kubernetes-service-publishing/
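Since each Service port maps to exactly one targetPort, a setup like the one described needs two port entries. Here is a hedged sketch (names are assumed, and only one entry can claim nodePort 30001; the other gets a random one):
apiVersion: v1
kind: Service
metadata:
  name: my-service         # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app            # assumed pod label
  ports:
  - name: app
    port: 80               # Service port
    targetPort: 8080       # app container port
    nodePort: 30001
  - name: web
    port: 81               # second Service port for the second container port
    targetPort: 80         # web container port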

Kubernetes to find Pod IP from another Pod

I have the following pods, hello-abc and hello-def, and I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def? And I want to do this programmatically.
What's the easiest way for hello-abc to find hello-def?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-abc
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
      - name: hello-abc
        image: hello-abc:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-abc"]
        ports:
        - containerPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-def
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
      - name: hello-def
        image: hello-def:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-def"]
        ports:
        - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
  - port: 80
    targetPort: 5001
    protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a Service that routes to each Deployment, and assuming both Services and Deployments are deployed into the same namespace, you can, in most modern Kubernetes clusters, take advantage of kube-dns and simply refer to the Service by name.
If kube-dns is not configured in your cluster (which is unlikely), you cannot refer to Services by name.
You can read more about DNS records for services here
In addition, Kubernetes features service discovery, which exposes the ports and IPs of your Services as environment variables in every container deployed into the same namespace.
Solution
This means that, to reach hello-def, you can do:
curl http://hello-def-service:${HELLO_DEF_SERVICE_SERVICE_PORT}
(note the environment variable name: the Service name hello-def-service is upper-cased and dashes become underscores)
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: if the Service port changes, only pods created after the change (in the same namespace) will receive the new environment variables.
External Access
In addition, you can reach your Service externally since you are using the NodePort type, as long as your NodePort range is accessible from outside.
This requires you to access your Service via node-ip:nodePort.
You can find out which NodePort was randomly assigned to your Service with kubectl describe svc/hello-def-service
Ingress
To reach your Service from outside, you can also implement an ingress controller such as nginx-ingress:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your two services are tightly coupled, you can include both in the same pod using the Kubernetes sidecar pattern. In that case, both containers in the pod share the same network namespace and can reach each other via localhost:$port
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
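A minimal sketch of such a two-container pod (the pod name is hypothetical; images and ports reuse the ones from your question):
apiVersion: v1
kind: Pod
metadata:
  name: hello-coupled      # hypothetical name
spec:
  containers:
  - name: hello-abc
    image: hello-abc:v0.0.1
    ports:
    - containerPort: 5000  # hello-def can reach this at localhost:5000
  - name: hello-def
    image: hello-def:v0.0.1
    ports:
    - containerPort: 5001  # hello-abc can reach this at localhost:5001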
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both Docker links compatible variables (see makeLinkVariables) and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be configured/installed in your k8s cluster before DNS records can be utilized in your cluster.
Specifically, you should be able to reach hello-def-service via the DNS name hello-def-service from pods running in the same namespace as hello-abc-service.
And you should be able to reach a hello-def-service running in another namespace, other_namespace, via the DNS name hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have a DNS add-on installed in your cluster, you can still find the virtual IP of hello-def-service via environment variables in the hello-abc pods, as documented here.
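Putting the DNS and environment-variable options side by side, a hedged sketch (run from inside a hello-abc pod; the Service port is 80 as defined above):
curl http://hello-def-service                                      # same namespace, via DNS
curl http://hello-def-service.other_namespace.svc.cluster.local    # cross-namespace, via DNS
echo "$HELLO_DEF_SERVICE_SERVICE_HOST:$HELLO_DEF_SERVICE_SERVICE_PORT"   # env-var fallback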

How to communicate between pods in a service?

Suppose I have a service containing two pods. One of the pods is an HTTP server, and the other pod needs to hit a REST endpoint on this pod. Is there a hostname that the second pod can use to address the first pod?
I'm assuming when you say "service" you aren't referring to the Kubernetes lexicon of a Service object, otherwise your two Pods in the Service would be identical, so let's start by teasing out what a "Service" means in Kubernetes land.
You will have to create an additional Kubernetes object called a Service to get your hostname for your HTTP server's Pod. When you create a Service you will define a .spec.selector that points to a set of labels on the HTTP service's Pod. For the sake of example, let's say the label is app: nginx. The name of that Service object will become the internal DNS record that can be queried by the second Pod.
A simplified example:
apiVersion: v1
kind: Pod
metadata:
  name: http-service
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: my-http-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Now your second Pod can make requests to the HTTP service by the Service name, my-http-service.
It's also worth mentioning that Kubernetes best practice dictates that these Pods be managed by controllers such as Deployments or ReplicaSets for all sorts of reasons, including high availability of your applications.
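For example, a minimal Deployment wrapping the same container (a sketch; the replica count is an assumption):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-service
spec:
  replicas: 2              # assumed; gives the Service more than one endpoint
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx         # my-http-service above keeps matching these pods
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80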
Note that a service is a different concept in Docker than in Kubernetes. The easiest way to get what you want is to create two pods, say pod-1 and pod-2, with a YAML file similar to this one:
apiVersion: v1
kind: Pod
metadata:
  name: NAME
  labels:
    app: LABEL
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Say NAME and LABEL are nginx and nginx-1 respectively, so you now have two pods called nginx and nginx-1, with labels app: nginx and app: nginx-1. Actually, since only one of them is going to be exposed, the other label is irrelevant.
Now you expose the pod, either with a YAML file or from the command line.
YAML file:
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
Command line:
kubectl expose pod nginx --port 80 --name server
If you now exec into the second pod (nginx-1) and curl the service directly, you end up hitting the pod behind it (nginx):
nerus:~/workspace $ kubectl exec -it nginx-1 bash
root@nginx-1:/# curl -I server
HTTP/1.1 200 OK
You can also expose your pod with kubectl expose deployment <deployment-name> --type=<service-type>, then run kubectl describe on the resulting Service; it will show you the port number, and you can then access your pod at that port. Hope it helps.
Ironically, you answered your own question: a Service is a stable name and IP that abstracts over the individual coming-and-going of the Pods to which it will route traffic, as described very well in the fine manual.
If the-http-pod needs to reach the-rest-pod, then create a Service that matches the labels on the PodSpec that created the-rest-pod; from that point forward, the-http-pod can always use ${serviceName}.${serviceNamespace}.svc.cluster.local to reach any Pod with matching labels

Kubernetes: Multiple containers that have to communicate + exposed nodePort

In my setup, there is a set of containers that were initially built to run with docker-compose. After moving to Kubernetes I'm facing the following challenges:
docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes? What I found so far:
they could all be part of one pod and therefore communicate via localhost
they could all have a common label with matching key:value pairs and a service, but how does one handle Ports?
I need to expose an internal port on a certain NodePort, as it has to be publicly available. What does such a service config look like? What I found so far:
something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-nodeport
spec:
  type: NodePort
  ports:
  - name: "3000-30001"
    port: 3000
    nodePort: 30001
  selector:
    app: frontend
status:
  loadBalancer: {}
Docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes?
As you found in your research, there are indeed two approaches:
If your containers are to be scaled together, then place them inside the same pod and communicate over localhost on separate ports. This is less likely to be your case, since this approach suits containerized apps that are more like processes on one physical box than separate services/servers.
If your containers are to be scaled separately, which is more probably your case, then use a Service. With Services, in place of localhost (from the previous point) you use either the service name as-is (if the pods are in the same namespace) or the FQDN (servicename.namespace.svc.cluster.local) if the Service is accessed across namespaces. As opposed to the previous point, where your containers had to use different ports (since they share localhost), here you can reuse the same port across multiple Services, since each Service has its own IP and each service:port pair is therefore unique. A Service can also remap container ports if you wish.
Since you asked this as an introductory question, two words of caution:
Service resolution works from the standpoint of a pod/container. To test it, you actually need to exec into an actual container (or proxy from a host); this is a common point of confusion. To be on the safe side, test service:port accessibility from within an actual container, not from the master, as shown in the sketch after this list.
Finally, to mimic a docker-compose setup for inter-container networking, you don't need to expose a NodePort or anything else; the Service layer in Kubernetes takes care of the DNS handling. NodePort has a different purpose.
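As mentioned above, a hedged sketch of that check (the service name and port come from your YAML; the pod name is a placeholder, and wget must exist in the image):
kubectl exec -it <some-frontend-pod> -- sh   # step into a real container first
wget -qO- http://frontend-nodeport:3000      # then test service:port from within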
I need to expose an internal port on a certain NodePort. What does such a service config look like?
You are on the right track; here is a nice overview to get you started, and the reference relevant to your question is given below:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP
Edit: Could you please provide an example of what a service.yaml would look like if the containers are scaled separately?
The first one is, say, an API server; we'll call it svc-my-api. It will select pod(s) labeled app: my-api, forward to the pods' port 80, and be reachable by other pods (in the same namespace) at host svc-my-api, port 8080:
apiVersion: v1
kind: Service
metadata:
  name: svc-my-api
  labels:
    app: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
The second one is, say, a MySQL server; we'll call it svc-my-database. Supposing that containers from the API pods (covered by the previous Service) want to access the database, they will use host svc-my-database and port 3306:
apiVersion: v1
kind: Service
metadata:
  name: svc-my-database
  labels:
    app: my-database
spec:
  selector:
    app: my-database
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306
1.- You can add some parameters to your pod resource (or any other resource that creates pods), as follows:
...
spec:
  hostname: foo-{1..4} # keep in mind this line
  subdomain: bar       # and this line
  containers:
  - image: busybox
...
Note: imagine you just created 4 pods, with hostnames foo-1, foo-2, foo-3 and foo-4. These are separate pods; you can't literally write foo-{1..4}, so this is just for demo purposes.
If you now create a service with the same name as the subdomain, you will be able to reach each pod from anywhere in the cluster as hostname.service-name.namespace.svc.cluster.local.
Example:
apiVersion: v1
kind: Service
metadata:
  name: bar # my subdomain is called "bar", so is this service
spec:
  selector:
    app: my-app
  ports:
  - name: foo
    port: 1234
    targetPort: 1234
Now, say I have the label app: my-app in my pods, so the service is targeting them correctly.
At this point, look what happens (from any pod, within the cluster):
/ # nslookup foo-1.bar.my-namespace.svc.cluster.local
Server:    10.63.240.10
Address 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local

Name:      foo-1.bar.my-namespace.svc.cluster.local
Address 1: 10.60.1.24 foo-1.bar.my-namespace.svc.cluster.local
2.- The second part of your question is almost correct. This is a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: my-app
  type: NodePort
This Service listens on port 80, so it is reachable on port 80 from within the cluster. Because it is a NodePort Service, it is also mapped to a port in the 30000-32767 range on every node, so the same Service is available from the outside world on, for example, port 30001 of any node. Finally, it forwards requests to port 8080 of the container.
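To summarize the path with a hedged usage sketch (the node IP is a placeholder, and 30001 stands in for whatever nodePort the cluster assigns):
curl http://svc-nodeport:80    # from inside the cluster: the Service port
curl http://<node-ip>:30001    # from outside: the nodePort on any node
Both requests end up on targetPort 8080 inside a matching container.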