From the official example on the Kubernetes documentation site on deploying a WordPress application with MySQL:
The Service definition for MySQL:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
The Deployment definition for MySQL:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
My question is the following:
The Deployment definition has a matchLabels selector, so that it will match the Pod template defined below that carries the app: wordpress and tier: mysql labels.
Why does the Service selector not require a matchLabels directive for the same purpose? What is the Service's "selection" performed upon?
According to the K8s documentation on Labels and Selectors:
The API currently supports two types of selectors: equality-based and set-based.
Newer resources, such as Job, Deployment, Replica Set, and Daemon Set, support set-based requirements as well.
Looks like newer resources such as Deployment support the more featured set-based requirements (with matchLabels and matchExpressions), while older resources such as Service follow the older equality-based style (without matchLabels).
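To make the difference concrete, here is a minimal side-by-side sketch, using the labels from the example above:

# Equality-based (Service): a plain map; a Pod must carry all of these labels.
selector:
  app: wordpress
  tier: mysql

# Set-based (Deployment): matchLabels, optionally combined with matchExpressions.
selector:
  matchLabels:
    app: wordpress
  matchExpressions:
    - key: tier
      operator: In
      values: ["mysql"]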
The Service is a concept that makes your container (in this case the one hosting WordPress) available on a given port. It maps an external port (the node's port) to an internal port (the container/Pod's port), and it does this by using the Pod's networking capabilities. The selector is the way of specifying in the Service which Pods the port should be opened on. The Deployment is really just a way of grouping things together: the Pod itself holds the WordPress container, and the port defined in the Service is made available through the Pod's networking.
This is a simplified explanation; there are different kinds of Services.
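For instance, a NodePort Service illustrates the external-to-internal port mapping described above. This is a hedged sketch; the nodePort value and the frontend tier label are illustrative, not from the example:

apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
  selector:
    app: wordpress
    tier: frontend
  ports:
    - port: 80          # the Service's own port inside the cluster
      targetPort: 80    # the container/Pod port traffic is forwarded to
      nodePort: 30080   # the external port opened on every node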
Related
I have two services deployed on the same k8s (minikube) cluster. What is the URL/approach I should use for one service to communicate with another? I tried searching the web a bit, but most results cover communicating with an external DB, which is not what I'm after. This is what my deployments look like. I am looking for the goclient to be able to communicate with the goserver. I know I need to go through the Service, but I'm not sure what the URL should look like. Is this dynamically discoverable? In addition to this, if I expose goserver through ingress, will this change?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goserver
  namespace: golang-ns
  labels:
    app: goserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: goserver
  template:
    metadata:
      labels:
        app: goserver
    spec:
      containers:
        - name: goserver
          image: goserver:1.0.0
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goclient
  namespace: golang-ns
  labels:
    app: goclient
spec:
  replicas: 1
  selector:
    matchLabels:
      app: goclient
  template:
    metadata:
      labels:
        app: goclient
    spec:
      containers:
        - name: goclient
          image: goclient:1.0.0
          imagePullPolicy: Never
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: goserver-service
  namespace: golang-ns
spec:
  selector:
    app: goserver
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: goclient-service
  namespace: golang-ns
spec:
  selector:
    app: goclient
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
  type: LoadBalancer
Note that the term Service can be quite ambiguous when used in the context of kubernetes.
Service in your question is used to denote one of your microservices, deployed as containerized applications, running in Pods, managed by 2 separate Deployments.
Service, as mentioned in David Maze's comment, refers to a specific resource type which is used for exposing your apps/microservices both inside and outside your kubernetes cluster. This resource type is called a Service. But I assume you already know that, as such Services are also included in your examples.
This is the reason why I prefer to use the term microservice if I really want to call one of the apps (clients, servers, whatever...) deployed on my kubernetes cluster "a service". And yes, this is a really important distinction, as talking about communication from one Service to another Service (the kubernetes resource type) doesn't make any sense at all. Your Pod can communicate with a different Pod via a Service that exposes that second Pod, but Services don't communicate with each other at all. I hope this is clear.
So in order to expose one of your microservices within your cluster and make it easily accessible for other microservices running on the same cluster, use a Service. But what you really need in your case is its simplest form. ❗There is no need for using the LoadBalancer type here. In your case you want to expose your Deployment named goserver to make it accessible by Pods from the second Deployment, named goclient, not by external clients sending requests from the public Internet.
Note that the LoadBalancer type you used in your Service's yaml manifests has a completely different purpose: it is used for exposing your app to clients reaching it from outside your kubernetes cluster, and is mainly applicable in cloud environments.
So again, what you need in your case is the simplest Service (often called ClusterIP, as it is the default Service type), which exposes a Deployment within the cluster. ⚠️ Remember that a ClusterIP Service also has load-balancing capabilities.
OK, enough of explanations, let's move on to the practical part. As I said, it's really simple and it can be done with one command:
kubectl expose deployment goserver --port 8080 --namespace golang-ns
Yes! That's all! It will create a Service named goserver (there is no reason to name it differently than the Deployment it exposes) which will expose the Pods belonging to the goserver Deployment within your kubernetes cluster, making them easily accessible (and discoverable) via its DNS name.
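To verify the result, a quick sketch (assuming the goserver image actually answers HTTP on 8080 and the goclient image ships wget; adjust to your images):

# Inspect the Service and the Pod IPs it selected:
kubectl get service goserver -n golang-ns
kubectl get endpoints goserver -n golang-ns

# Call it from a goclient Pod using just the Service name:
kubectl exec -n golang-ns deploy/goclient -- wget -qO- http://goserver:8080/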
If you prefer a declarative Service definition, here it is as well:
apiVersion: v1
kind: Service
metadata:
  name: goserver
  namespace: golang-ns
spec:
  selector:
    app: goserver
  ports:
    - port: 8080
Your golang client Pods need only the Service name, i.e. goserver, to access the goserver Pods, as they are deployed in the same namespace (golang-ns). If you need to access them from a Pod deployed in a different namespace, you need to use <servicename>.<namespace>, i.e. goserver.golang-ns. You can also use the fully qualified domain name (FQDN) (see the official docs here):
my-svc.my-namespace.svc.cluster-domain.example
which in your case may look as follows:
goserver.golang-ns.svc.cluster.local
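You can confirm the resolution from inside the cluster with a throwaway Pod; a sketch, assuming any image that ships nslookup (busybox used here):

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never \
  -- nslookup goserver.golang-ns.svc.cluster.local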
As to:
In addition to this, if I expose goserver through ingress, will this change?
❗Unless you want to expose your goserver to the external world, don't use an Ingress; you don't need it.
I have created a very simple Spring Boot application with only one REST service. This app is converted into a docker image ("springdockerimage:1") and deployed in the Kubernetes cluster with 3 replicas. The contents of my "Deployment" definition are as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springapp
  labels:
    app: distributiondemo
spec:
  selector:
    matchLabels:
      app: distributiondemo
  replicas: 3
  template:
    metadata:
      labels:
        app: distributiondemo
    spec:
      containers:
        - name: spring-container
          image: springdockerimage:1
I have created a Service for my above Deployment as follows:
apiVersion: v1
kind: Service
metadata:
  name: springservice
  labels:
    app: distributiondemo
spec:
  selector:
    app: distributiondemo
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      name: spring-port
      nodePort: 32000
  type: NodePort
After deploying both of the above YAML files (Deployment and Service), I noticed that everything was deployed as expected, i.e., 3 replicas were created and my Service has 3 endpoints as well.
Since I am using minikube for my local testing, I am port-forwarding and accessing the application with kubectl port-forward deployment.apps/springapp 40002:8080.
But one thing I noticed is that all my HTTP requests are getting redirected to only one pod.
while true ; do curl http://localhost:40002/docker-java-app/test ;done
I am not getting where exactly I am doing it wrong. Any help would be appreciated. Thank you.
Load balancing might not work with port-forwarded ports, as port-forwarding may redirect traffic directly to a Pod (read more here). The K8s Service is the feature that gives you that load-balancing capability.
So you can try one of the options below instead:
Use http://your_service_dns_name:8080/docker-java-app/test
Use http://service_cluster_ip:8080/docker-java-app/test
Use http://any_host_ip_from_k8s_cluster:32000/docker-java-app/test
Options 1 and 2 work only if you are accessing those URLs from a host that is part of the K8s cluster. Option 3 just needs connectivity to the target host and port from the host you are accessing the URL from.
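A couple of hedged commands to find those values (service name taken from the manifests above; minikube assumed for option 3):

# Option 2: the Service's cluster IP (reachable from inside the cluster):
kubectl get service springservice -o jsonpath='{.spec.clusterIP}'

# Option 3 on minikube: a host-reachable URL for the NodePort Service:
minikube service springservice --url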
I've deployed my Django/React app into K8s and exposed both deployments as a service (ClusterIP).
Whenever I try to call the API service through its ClusterIP:8000, it sometimes refuses the connection. So I checked its endpoints and only one out of the three existing endpoints returns what I expect. I understand that when calling the ClusterIP, it redirects to one of those three endpoints.
Is there any way to 'debug' an incoming service request? Can I modify the number of existing endpoints (so I could limit it to the only working endpoint)? Is there any other way to maybe see logs of the Service to find out why only one of the endpoints is working?
I was able to fix it:
I deployed a three-tier application (Django/React/DB) and used the same selector for every deployment, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-xxx-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
So when exposing this with "kubectl expose deployment/...", the Service created as many endpoints as there were Pods matching the selector across my Deployments. Since I have three deployments (DB/React/Django), three endpoints were created.
Changing the deployment .yaml like this fixed my error, and only one endpoint was created:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: myapp-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-web
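To debug this kind of issue, you can check which Pod IPs actually back a Service and which Pods a given selector matches. A sketch; the Service name myapp-service is hypothetical:

# Each listed IP is a Pod currently selected by the Service:
kubectl get endpoints myapp-service

# Show the Pods (with labels and IPs) a given selector matches:
kubectl get pods -l app=myapp-web -o wide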
ClusterIP:8000 does not seem right to use.
You could replace it with http://$(serviceName).$(namespace):8000/ to use the Service correctly.
I'm setting up active/passive routing for an application in kube that's outside of the typical K8s use case. I'm trying to find configuration related to routing or load balancing in headless services with multiple backends. So far I have managed to route traffic to my backends, however I need to ensure that the traffic is routed correctly. The application requires TCP connections, and the primary/secondary instances have differing configuration (requiring different Deployment objects). If fail-over occurs, routing is expected to return to the primary once it is restored.
The routing consistently behaves as desired, but no documentation or configuration would indicate as much. I've found documentation stating that it should be round-robin or random because of the order of the DNS entries. The crux of the question is: can I depend on this behavior? Since this is undocumented and not explicitly configured, I'm concerned that it will change in future versions or deployments.
I'm using Rancher with the canal networking layer.
I've read through both the calico and flannel docs.
Neither Endpoints/EndpointSlices nor the DNS entries indicate any order for routing.
Currently the setup has two Deployments that are selected by a headless Service. The deployed Pods have a hostname of input-primary in deployment 1 and input-secondary in deployment 2. I can access either of them by DNS as input-primary.myservice or input-secondary.myservice.
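One way to observe what the headless Service actually returns is to resolve it from inside the cluster; a headless Service answers with one A record per ready Pod, in no guaranteed order. A sketch; the namespace and image are illustrative:

kubectl run -it --rm dns-check --image=busybox:1.36 --restart=Never \
  -- nslookup myservice.default.svc.cluster.local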
The ingress controller's tcp-services ConfigMap has an entry for my service:
25252: default/myservice:9999
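For reference, a sketch of how that entry typically sits in the ingress-nginx tcp-services ConfigMap (the ConfigMap's name and namespace depend on your installation):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # namespace varies by installation
data:
  # external port 25252 -> port 9999 of Service "myservice" in namespace "default"
  "25252": default/myservice:9999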
Here is an abridged version of the k8s config:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  clusterIP: None
  ports:
    - name: input
      port: 9999
  selector:
    app: myapp
  type: ClusterIP
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: myapp
  name: input-primary
spec:   # abridged; hostname and containers live under template.spec in the full manifest
  hostname: input-primary
  containers:
    - ports:
        - containerPort: 9999
          name: input
          protocol: TCP
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: myapp
  name: input-secondary
spec:   # abridged; hostname and containers live under template.spec in the full manifest
  hostname: input-secondary
  containers:
    - ports:
        - containerPort: 9999
          name: input
          protocol: TCP
I have a number of Services running against Pods hosted within a cluster on Google Cloud K8s.
Service 1 is an Ingress - basic-ingress
Service 2 is a NodeJS API Gateway w/ 2 Pods - security-gateway-svc
Service 3 is a NodeJS API w/ 2 Pods - some-random-api-svc
and so on service 4 / 5 / 6 etc....
My Ingress allows me to access the exposed services via a subdomain, however I would like to move my external APIs behind my Gateway so I can handle auth etc. in the gateway.
What I'd like to do is allow security-gateway-svc to connect to some-random-api-svc without having to go via DNS or outside of my cluster.
I figured I could update my Ingress so all subdomains use the same service entry and let the Gateway figure out where the traffic should go.
I can configure this just fine locally as everything runs on localhost and I specify a port, so it's fairly straightforward.
Is it possible, however, to expose Pods to other Pods within a cluster via the service name instead of an actual domain / DNS lookup?
Your service should be accessible within your cluster via the service name.
Point your gateway entry for each API to the service name.
Something like http://some-random-api-svc should work.
The easiest way to make Pods reachable within your kubernetes cluster is to use Services (see the Services documentation). For this you need to create a YAML block that will create an internal hostname, bound by an Endpoints object to your Pod. In addition, a selector will allow you to bind one or multiple Pods to that internal hostname. Here is an example:
---
apiVersion: v1
kind: Service
metadata:
  name: $YOUR_SERVICE_NAME
  namespace: $YOUR_NAMESPACE
  labels:
    app: $YOUR_SERVICE_NAME
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    app: $YOUR_SERVICE_NAME
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $YOUR_SERVICE_NAME
  namespace: $YOUR_NAMESPACE
  labels:
    app: $YOUR_SERVICE_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $YOUR_SERVICE_NAME
  template:
    metadata:
      labels:
        app: $YOUR_SERVICE_NAME   # was "k2m" in the original; it must match the selector above
    spec:
      containers:
        - name: $YOUR_SERVICE_NAME
          image: alpine:latest
      restartPolicy: Always
Finally, use the service name in your ingress controller route to redirect traffic to your api-gateway.
Kubernetes uses CoreDNS to perform in-cluster DNS resolution. By default, every Service is assigned a DNS name of the (FQDN) form <service-name>.<namespace>.svc.cluster.local. So your security-gateway-svc will be able to forward requests to some-random-api-svc via some-random-api-svc.<namespace>, without routing the traffic outside of Kubernetes. Keep in mind that you shouldn't interact with Pods directly, because Pods are ephemeral; always go through Services.
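As a hedged sketch of how the gateway might be wired up, you could pass the target Service's DNS name to the gateway container via an environment variable (the variable name, namespace, and port are illustrative, not from the question):

# Fragment of the security-gateway Deployment's container spec:
env:
  - name: RANDOM_API_URL   # hypothetical variable the gateway reads at startup
    value: "http://some-random-api-svc.default.svc.cluster.local:8080"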