My application has two Deployments, each with one Pod.
Can I create a single Service to distribute load across these 2 Pods, even though they belong to different Deployments?
If so, how?
Yes, it is possible to achieve this. A good explanation of how to do it can be found in the Kubernetes documentation. However, keep in mind that both Deployments should provide the same functionality, since the responses need to have the same format.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
The example below is based on the one in the documentation.
1. nginx Deployment. Keep in mind that a Deployment can have more than one label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: dev
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
2. nginx-second Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-second
spec:
  selector:
    matchLabels:
      run: nginx
      env: prod
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: prod
    spec:
      containers:
      - name: nginx-second
        image: nginx
        ports:
        - containerPort: 80
Now, to pair Deployments with Services, you have to use a selector based on the Deployments' labels. Below you can find 2 Service YAMLs: nginx-service, which points to both Deployments, and nginx-service-1, which points only to the nginx-second Deployment.
## Both Deployments
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginx
---
### To nginx-second deployment
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-1
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    env: prod
You can verify that a Service is bound to the Deployments by checking its endpoints.
$ kubectl get pods -l run=nginx -o yaml | grep podIP
podIP: 10.32.0.9
podIP: 10.32.2.10
podIP: 10.32.0.10
podIP: 10.32.2.11
$ kubectl get ep nginx-service
NAME ENDPOINTS AGE
nginx-service 10.32.0.10:80,10.32.0.9:80,10.32.2.10:80 + 1 more... 3m33s
$ kubectl get ep nginx-service-1
NAME ENDPOINTS AGE
nginx-service-1 10.32.0.10:80,10.32.2.11:80 3m36s
Yes, you can do that.
Add a common label key/value pair to the Pod template of both Deployments and use that common label as the selector in the Service definition (see the sketch below).
With a Service defined this way, requests are load-balanced across all matching Pods.
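For illustration, here is a minimal sketch of that idea. The names backend-a, backend-b and myapp-svc, and the labels app: myapp / variant: a|b, are assumptions for the example, not taken from the question:
# Both Deployments carry the shared label app: myapp in their Pod templates
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      variant: a
  template:
    metadata:
      labels:
        app: myapp   # shared label used by the Service
        variant: a   # label unique to this Deployment
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      variant: b
  template:
    metadata:
      labels:
        app: myapp   # shared label used by the Service
        variant: b
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
# The Service selects only on the shared label, so it targets the Pods of both Deployments
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
Each Deployment keeps its own variant label so the two Deployments do not select each other's Pods, while the Service selects only on the shared app: myapp label and therefore balances across both.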
I have a simple web server that exposes the name of the Pod it is running on via the OUT env var.
The Deployment and Service look like this:
apiVersion: v1
kind: Service
metadata:
  name: simpleweb-service
spec:
  selector:
    app: simpleweb
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleweb-deployment
  labels:
    app: simpleweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simpleweb
  template:
    metadata:
      labels:
        app: simpleweb
    spec:
      containers:
        - name: simpleweb
          env:
            - name: OUT
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          imagePullPolicy: Never
          image: simpleweb
          ports:
            - containerPort: 8080
I deploy this on my local kind cluster
default simpleweb-deployment-5465f84584-m59n5 1/1 Running 0 12m
default simpleweb-deployment-5465f84584-mw8vj 1/1 Running 0 9m36s
default simpleweb-deployment-5465f84584-x6n74 1/1 Running 0 12m
and access it via
kubectl port-forward service/simpleweb-service 8080:8080
When I hit localhost:8080 I always reach the same pod.
Questions:
Is my Service not doing round-robin?
Is there some caching that I am not aware of?
Do I have to expose my service differently? Is this a kind issue?
kubectl port-forward only selects the first Pod matching the Service's selector and keeps forwarding to that single Pod. If you want round-robin you'd need to use a load balancer such as Traefik or nginx.
https://github.com/kubernetes/kubectl/blob/652881798563c00c1895ded6ced819030bfaa4d7/pkg/polymorphichelpers/attachablepodforobject.go#L52
To get round-robin and route to the different Pods I have to use a LoadBalancer Service. MetalLB implements a load balancer for kind. Unfortunately it currently does not support Apple M1 machines.
I assume that MetalLB + a LoadBalancer Service would work on a different machine.
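Independent of how you expose it externally, one way to confirm that the ClusterIP Service itself spreads requests is to call it repeatedly from a throwaway Pod inside the cluster. A sketch, assuming the Service name and port from the question and that the app responds with the Pod name:
kubectl run svc-probe --rm -it --restart=Never --image=busybox -- \
  sh -c 'for i in 1 2 3 4 5 6; do wget -qO- http://simpleweb-service:8080; echo; done'
Seeing different Pod names in the output indicates that the Service is balancing; with kubectl port-forward you will keep hitting the same one.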
If I have to deploy a workload on Kubernetes and also expose it as a service, I have to create a Deployment/Pod and a Service. If the Kubernetes offering is from a cloud provider and we are creating a LoadBalancer Service, it makes sense to create the Service before the workload, as provisioning the service's URL takes time. But if Kubernetes is deployed on a non-cloud platform, there seems to be no point in creating the Service before the workload.
So why do I have to create the Service first and then the workload?
There is no requirement to create a service before a deployment or vice versa. You can create the deployment before the service or the service before the deployment.
If you create the deployment before the service, then the application packaged in the deployment will not be accessible externally until you create the LoadBalancer.
Conversely, if you create the LoadBalancer first, the traffic for the application will not be routed to the application as it hasn't been created yet, giving 503s to the caller.
You are declaring to Kubernetes how you want the state of the infrastructure to be, e.g. "I want a deployment and a service". Kubernetes will go off and create those, but they may not necessarily end up being created in a predictable order. For example, LoadBalancers take a while to be assigned IPs by your cloud provider, so even though the resource is created in the cluster, it's not actually getting any traffic.
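For example, illustrative (made-up) output right after creating a LoadBalancer Service; the EXTERNAL-IP stays <pending> until the cloud provider finishes provisioning:
$ kubectl get svc my-service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service   LoadBalancer   10.0.171.239   <pending>     80:30007/TCP   8s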
As written in the official Kubernetes documentation, and contrary to what other users have told you, there is one specific case in which the Service needs to be created BEFORE the Pod.
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Note:
When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence. Otherwise, those client Pods won't have their environment variables populated.
If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this ordering issue.
For example:
Create some Pods, a Deployment and Services in a namespace:
namespace.yaml
# namespace
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: securedapp
spec: {}
status: {}
services.yaml
# expose api svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: api-svc
  namespace: securedapp
spec:
  ports:
  - port: 90
    protocol: TCP
    targetPort: 80
  selector:
    type: api
  type: ClusterIP
status:
  loadBalancer: {}
---
# expose frontend-svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: frontend-svc
  namespace: securedapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    type: secured
  type: ClusterIP
status:
  loadBalancer: {}
pods-and-deploy.yaml
# create the pod for frontend
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: secured-frontend
  namespace: securedapp
spec:
  containers:
  - image: nginx
    name: secured-frontend
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create the pod for the api
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: webapi
  namespace: securedapp
spec:
  containers:
  - image: nginx
    name: webapi
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create a deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sure-even-with-deploy
  name: sure-even-with-deploy
  namespace: securedapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sure-even-with-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sure-even-with-deploy
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
If you create all the resources at the same time, for example by placing all these files in one folder and applying them together:
kubectl apply -f .
and then read the environment variables from one pod
kubectl exec -it webapi -n securedapp -- env
you will obtain something like this:
# environment of api_svc
API_SVC_PORT_90_TCP=tcp://10.152.183.242:90
API_SVC_SERVICE_PORT=90
API_SVC_PORT_90_TCP_ADDR=10.152.183.242
API_SVC_SERVICE_HOST=10.152.183.242
API_SVC_PORT_90_TCP_PORT=90
API_SVC_PORT=tcp://10.152.183.242:90
API_SVC_PORT_90_TCP_PROTO=tcp
# environment of frontend
FRONTEND_SVC_SERVICE_HOST=10.152.183.87
FRONTEND_SVC_SERVICE_PORT=80
FRONTEND_SVC_PORT=tcp://10.152.183.87:80
FRONTEND_SVC_PORT_80_TCP_PORT=80
FRONTEND_SVC_PORT_80_TCP=tcp://10.152.183.87:80
FRONTEND_SVC_PORT_80_TCP_PROTO=tcp
FRONTEND_SVC_PORT_80_TCP_ADDR=10.152.183.87
Clear all the resources created:
kubectl delete -f .
Now, another try.
This time we will do the same thing but "slowly", creating things one by one.
First, the namespace:
kubectl apply -f ns.yaml
then the Pods:
kubectl apply -f pods.yaml
After a while, create the Services:
kubectl apply -f services.yaml
Now, read the environment variables from one pod again:
kubectl exec -it webapi -n securedapp -- env
This time you will NOT have the environment variables of the Services.
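As a quick check (using the same resource and file names as above), deleting and recreating the Pods now that the Services already exist repopulates the variables:
kubectl delete -f pods.yaml
kubectl apply -f pods.yaml
kubectl exec -it webapi -n securedapp -- env | grep API_SVC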
So, as mentioned in the k8s documentation, there is at least one case where it is necessary to create the Service BEFORE the Pods: when you need the environment variables of your Services.
Regards
I created a Redis Deployment and Service in Kubernetes.
I can access Redis from another Pod by the Service IP, but I can't access it by the Service name.
The Redis YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> -- /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> -- /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, you could look at /etc/resolv.conf inside your Pod to make sure it has the search domains svc.cluster.local and cluster.local.
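For reference, a quick way to inspect that from a running Pod (the Pod and namespace names are placeholders):
kubectl exec -it <your-pod> -n <your-namespace> -- cat /etc/resolv.conf
# typically contains something like (the nameserver IP varies per cluster):
# search <your-namespace>.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.96.0.10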
I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name
Keep in mind that you can use the bare Service name to access the backend Pods it exposes only within the same namespace. Looking at your Deployment and Service YAML manifests, we can see they're deployed in the myapp-ns namespace. This means you can reach your Service by its short name only from a Pod deployed within that namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns ### 👈
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns ### 👈
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
  namespace: myapp-ns ### 👈
spec:
  containers:
  - name: redis-client
    image: debian
    # keep the container running so you can exec into it
    command: ["sleep", "infinity"]
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), which is built according to the rule described here:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379
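To double-check how the name resolves from another namespace, one option is a throwaway busybox Pod (the Pod name dns-test is an arbitrary choice):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup redis.myapp-ns.svc.cluster.local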
I have a very simple Python app that works fine when I execute uvicorn main:app --reload. When I go to http://127.0.0.1:8000 on my machine, I'm able to interact with the API. (My app has no frontend, it is just an API built with FastAPI). However, I am trying to deploy this via Kubernetes, but am not sure how I can access/interact with my API.
Here is my deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
When I enter kubectl describe deployments my-deployment in the terminal, I get back a printout of the deployment, the namespace it is in, the pod template, a list of events, etc. So I am pretty sure it is properly deployed.
How can I access the application? What would the URL be? I have tried a variety of localhost + port combinations to no avail. I am new to Kubernetes, so I'm trying to understand how this works.
Update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: site
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
Again, when I use the k8s CLI, I'm able to see my deployment, yet when I hit localhost:30001, I get an Unable to connect message.
You have specified containerPort: 80, but if your app listens on port 8080, change it to 8080.
There are different ways to access an application deployed on Kubernetes (a port-forward sketch follows this list):
1. Port forward using kubectl port-forward deployment/my-deployment 8080:8080
2. Create a NodePort Service and use http://<NODEIP>:<NODEPORT>
3. Create a LoadBalancer Service. This works only in supported cloud environments such as AWS, GKE, etc.
4. Use an ingress controller such as nginx to expose the application.
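As a concrete illustration of option 1, a sketch assuming the Service and Deployment names from the question's update (app-entrypoint / app-deployment) and that the container really listens on port 80:
# forward local port 8080 to port 80 of the Service
kubectl port-forward service/app-entrypoint 8080:80
# in another terminal, the app should now answer on localhost
curl http://localhost:8080/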
By default, k8s applications are exposed only within the cluster. If you want to access one from outside the cluster, you can select any of the options below:
Expose the Deployment as a NodePort Service (kubectl expose deployment my-deployment --name=my-deployment-service --type=NodePort), describe the Service and get the node port assigned to it (kubectl describe svc my-deployment-service). Then try http://<node-IP>:<node-port>/
For a production-grade cluster the best practice is to use the LoadBalancer type (kubectl expose deployment my-deployment --name=my-deployment-service --type=LoadBalancer --target-port=8080); as part of this Service you get an external IP which can be used to access your service at http://<EXTERNAL-IP>:8080/
You can also see the details about the endpoints using kubectl get ep.
Thanks,
I've been trying to implement a service inside Kubernetes where each Pod needs to be accessible from outside the Cluster.
The topology of my service is simple: 3 members, one of them acting as master at any time (election based); writes go to the primary; reads go to the secondaries. This is a MongoDB replica set, by the way.
They work with no issues inside the Kubernetes cluster, but from outside all I have is a NodePort Service that load-balances incoming connections to one of them. I need to access each one of them separately, depending on what I want to do from my client (write or read).
What kind of Kubernetes resource should I use to give individual access to each one of the members of my service?
In order to access every Pod from outside, you can create a separate Service for each Pod and use the NodePort type.
Because a Service uses selectors to find the available backends, you can create just one Service for the master:
apiVersion: v1
kind: Service
metadata:
  name: my-master
  labels:
    run: my-master
spec:
  type: NodePort
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-master
spec:
  selector:
    matchLabels:
      run: my-master
  replicas: 1
  template:
    metadata:
      labels:
        run: my-master
    spec:
      containers:
      - name: mongomaster
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
Also, you can use one Service for all your read-only replicas and this service will balance requests between all of them.
apiVersion: v1
kind: Service
metadata:
  name: my-replicas
  labels:
    run: my-replicas
spec:
  type: LoadBalancer
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-replicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-replicas
spec:
  selector:
    matchLabels:
      run: my-replicas
  replicas: 2
  template:
    metadata:
      labels:
        run: my-replicas
    spec:
      containers:
      - name: mongoreplica
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
I also suggest you do not expose the Pods outside of your network, for security reasons. It would be better to create strict firewall rules to block any unexpected connections.
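As a sketch of the "separate Service per Pod" idea from the beginning of this answer: assuming you add a unique label such as member: mongo-0 to each member Pod (that label is an assumption, not something Kubernetes adds for you), a per-member NodePort Service could look like this:
# one such Service per replica-set member
apiVersion: v1
kind: Service
metadata:
  name: mongo-member-0
spec:
  type: NodePort
  selector:
    member: mongo-0    # unique label carried by exactly one member Pod
  ports:
  - port: 27017        # MongoDB default port
    targetPort: 27017
    protocol: TCP
    nodePort: 30017    # must fall within the cluster's NodePort range (default 30000-32767)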