Why are Kubernetes Services created before Deployments/Pods? - kubernetes

If I have to deploy a workload on Kubernetes and also have to expose it as a service, I have to create a Deployment/Pod and a Service. If the Kubernetes offering is from a cloud provider and we are creating a LoadBalancer Service, it makes sense to create the Service before the workload, as the URL creation for the Service takes time. But if Kubernetes is deployed on a non-cloud platform, there is no point in creating the Service before the workload.
So why do I have to create the Service first and then the workload?

There is no requirement to create a service before a deployment or vice versa. You can create the deployment before the service or the service before the deployment.
If you create the deployment before the service, then the application packaged in the deployment will not be accessible externally until you create the LoadBalancer.
Conversely, if you create the LoadBalancer first, the traffic for the application will not be routed to the application as it hasn't been created yet, giving 503s to the caller.
You are declaring to Kubernetes how you want the state of the infrastructure to be, e.g. "I want a deployment and a service". Kubernetes will go off and create those, but they may not necessarily end up being created in a predictable order. For example, LoadBalancers take a while to be assigned IPs by your cloud provider, so even though the resource is created in the cluster, it's not actually getting any traffic.
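For example, assuming a deployment.yaml and a service.yaml (illustrative filenames), applying them in either order converges to the same end state:
# Either order works; Kubernetes reconciles toward the declared state.
kubectl apply -f service.yaml
kubectl apply -f deployment.yaml
# ...ends up in the same state as:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml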

As written in the official Kubernetes documentation, and contrary to what other users have told you, there is one specific case in which the Service needs to be created BEFORE the Pod.
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Note:
When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence. Otherwise, those client Pods won't have their environment variables populated.
If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this ordering issue.
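By contrast, DNS-based discovery has no such ordering constraint; a client can resolve the Service name at any time after the Service exists (a sketch with illustrative names):
# Resolves via cluster DNS even if the client Pod predates the Service:
curl http://my-svc.my-ns.svc.cluster.local:80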
For example:
Create some Pods, a Deployment, and Services in a namespace:
namespace.yaml
# namespace
apiVersion: v1
kind: Namespace
metadata:
  name: securedapp
services.yaml
# expose api svc
apiVersion: v1
kind: Service
metadata:
  labels:
    type: api
  name: api-svc
  namespace: securedapp
spec:
  ports:
  - port: 90
    protocol: TCP
    targetPort: 80
  selector:
    type: api
  type: ClusterIP
---
# expose frontend-svc
apiVersion: v1
kind: Service
metadata:
  labels:
    type: secured
  name: frontend-svc
  namespace: securedapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    type: secured
  type: ClusterIP
pods-and-deploy.yaml
# create the pod for frontend
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: secured
  name: secured-frontend
  namespace: securedapp
spec:
  containers:
  - image: nginx
    name: secured-frontend
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
---
# create the pod for the api
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: api
  name: webapi
  namespace: securedapp
spec:
  containers:
  - image: nginx
    name: webapi
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
---
# create a deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sure-even-with-deploy
  name: sure-even-with-deploy
  namespace: securedapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sure-even-with-deploy
  template:
    metadata:
      labels:
        app: sure-even-with-deploy
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
If you create all the resources at the same time, for example by placing all these files in the same folder and then doing an apply on it like this:
kubectl apply -f .
then, if we get the envs from one pod
kubectl exec -it webapi -n securedapp -- env
you will obtain something like this:
# environment of api_svc
API_SVC_PORT_90_TCP=tcp://10.152.183.242:90
API_SVC_SERVICE_PORT=90
API_SVC_PORT_90_TCP_ADDR=10.152.183.242
API_SVC_SERVICE_HOST=10.152.183.242
API_SVC_PORT_90_TCP_PORT=90
API_SVC_PORT=tcp://10.152.183.242:90
API_SVC_PORT_90_TCP_PROTO=tcp
# environment of frontend
FRONTEND_SVC_SERVICE_HOST=10.152.183.87
FRONTEND_SVC_SERVICE_PORT=80
FRONTEND_SVC_PORT=tcp://10.152.183.87:80
FRONTEND_SVC_PORT_80_TCP_PORT=80
FRONTEND_SVC_PORT_80_TCP=tcp://10.152.183.87:80
FRONTEND_SVC_PORT_80_TCP_PROTO=tcp
FRONTEND_SVC_PORT_80_TCP_ADDR=10.152.183.87
Clear all the created resources:
kubectl delete -f .
Now, let's try again.
This time we will do the same thing but "slowly", creating things one by one.
First, the namespace:
kubectl apply -f namespace.yaml
then the pods:
kubectl apply -f pods-and-deploy.yaml
and, after a while, the services:
kubectl apply -f services.yaml
Now you will discover that if you get the envs from one pod
kubectl exec -it webapi -n securedapp -- env
this time you will NOT have the environment variables of the services.
So, as mentioned in the Kubernetes documentation, there is at least one case where it is necessary to create the Service BEFORE the Pods: when you need the environment variables of your Services.
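If you do end up in this situation, a simple workaround (a sketch, reusing the manifests above) is to recreate the client Pods now that the Services exist, so the kubelet injects the Service variables at Pod startup:
# Recreate the Pods; on restart they pick up the Service env vars:
kubectl delete -f pods-and-deploy.yaml
kubectl apply -f pods-and-deploy.yaml
kubectl exec -it webapi -n securedapp -- env   # now shows API_SVC_* etc.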
Regards

Related

Problems communicating Kubernetes pods with external endpoints (REST services, SQL Server, Kafka, Redis, etc...)

I have a single-node Kubernetes cluster. I have dockerized Java services that access REST services, SQL Server, Kafka, and other endpoints outside the Kubernetes cluster but in the same Google Cloud network.
The main reason I'm asking for help is that I can't connect the dockerized Java services inside the pods to the external endpoints mentioned above.
I tried flannel networking before, but now I've reset the cluster and installed Calico networking, without positive results.
Pods of the cluster running by default: (screenshot omitted)
Cluster nodes: (screenshot omitted)
I deploy some dockerized Java services as CronJobs, others as Deployments. To communicate these CronJobs and Deployments with external endpoints like Kafka, SQL Server, etc., I use Services.
An example of each of them:
Cronjob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-name
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cronjob1: cronjob-name
        spec:
          containers:
          - image: repository/repository-name:service-name:version
            imagePullPolicy: ""
            name: service-name
            resources: {}
          restartPolicy: OnFailure
      selector:
        matchLabels:
          cronjob1: cronjob-name
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    deployment1: deployment_name
  name: deployment_name
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment1: deployment_name
  strategy: {}
  template:
    metadata:
      labels:
        deployment1: deployment_name
    spec:
      containers:
      - image: repository/repository-name:service-name:version
        imagePullPolicy: ""
        name: service-name
        resources: {}
      imagePullSecrets:
      - name: dockerhub
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
Service:
apiVersion: v1
kind: Service
metadata:
  name: sqlserver
spec:
  type: ClusterIP
  selector:
    cronjob1: cronjob1
    deployment1: deployment1
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
My problem is that from the Java services I can't connect, for example, to the SQL Server instance. I've checked the DNS and Calico pod logs and there were no errors. I've tried to exec into the pods while they are running, and from inside a pod I can't telnet to the SQL Server instance.
Could you give me some idea of what the problem might be, or what tests I could run?
Thank you very much!
I resolved the problem by configuring the Kubernetes cluster again, but with Calico instead of flannel. Thanks for the replies. I hope this helps anyone else.
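As an aside, a Service selector only ever matches Pods, so a selector cannot point at endpoints outside the cluster. The usual way to give an external endpoint such as SQL Server an in-cluster DNS name is a selector-less Service paired with a manually managed Endpoints object. A minimal sketch (the name and IP are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: sqlserver-external
spec:
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sqlserver-external   # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.50            # placeholder: the external SQL Server IP
  ports:
  - port: 1433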

How to access app once deployed via Kubernetes?

I have a very simple Python app that works fine when I execute uvicorn main:app --reload. When I go to http://127.0.0.1:8000 on my machine, I'm able to interact with the API. (My app has no frontend, it is just an API built with FastAPI). However, I am trying to deploy this via Kubernetes, but am not sure how I can access/interact with my API.
Here is my deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
When I enter kubectl describe deployments my-deployment in the terminal, I get back a print out of the deployment, the namespace it is in, the pod template, a list of events, etc. So, I am pretty sure it is properly deployed.
How can I access the application? What would the url be? I have tried a variety of localhost + port combinations to no avail. I am new to kubernetes so I'm trying to understand how this works.
Update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: site
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
Again, when I use the k8s CLI, I'm able to see my deployment, yet when I hit localhost:30001, I get an Unable to connect message.
You have given containerPort: 80, but if your app listens on port 8080, change it to 8080.
There are different ways to access an application deployed on Kubernetes:
1) Port-forward using kubectl port-forward deployment/my-deployment 8080:8080 (see the sketch after this list)
2) Create a NodePort Service and use http://<NODE-IP>:<NODE-PORT>
3) Create a LoadBalancer Service. This works only in supported cloud environments such as AWS, GKE, etc.
4) Use an ingress controller such as nginx to expose the application.
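For instance, port-forwarding is the quickest check (a sketch; adjust the container port to whatever your app actually listens on):
# Forward local port 8080 to port 80 inside the deployment's Pods:
kubectl port-forward deployment/my-deployment 8080:80
# Then, from another terminal:
curl http://127.0.0.1:8080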
By default, Kubernetes applications are exposed only within the cluster. If you want to access them from outside the cluster, you can choose one of the options below:
Expose the Deployment as a NodePort Service (kubectl expose deployment my-deployment --name=my-deployment-service --type=NodePort), describe the Service, and get the node port assigned to it (kubectl describe svc my-deployment-service). Then try http://<node-IP>:<node-port>/
For a production-grade cluster the best practice is to use the LoadBalancer type (kubectl expose deployment my-deployment --name=my-deployment-service --type=LoadBalancer --target-port=8080); as part of this Service you get an external IP which can be used to access your service: http://<EXTERNAL-IP>:8080/
You can also see the details about the endpoints using kubectl get ep
Thanks,

Can we create a Service to link two Pods from different Deployments?

My application has two Deployments, each with a Pod.
Can I create a Service to distribute load across these 2 Pods, which are part of different Deployments?
If so, how?
Yes, it is possible to achieve this. A good explanation of how to do it can be found in the Kubernetes documentation. However, keep in mind that both Deployments should provide the same functionality, as the output should have the same format.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
Based on the example from the documentation:
1. nginx Deployment. Keep in mind that a Deployment can have more than one label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: dev
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
2. nginx-second Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-second
spec:
  selector:
    matchLabels:
      run: nginx
      env: prod
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: prod
    spec:
      containers:
      - name: nginx-second
        image: nginx
        ports:
        - containerPort: 80
Now, to pair Deployments with Services, you have to use a selector based on the Deployments' labels. Below you can find 2 Service YAMLs: nginx-service, which points to both Deployments, and nginx-service-1, which points only to the nginx-second Deployment.
## Both Deployments
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginx
---
### To nginx-second deployment
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-1
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    env: prod
You can verify that a Service binds to the Deployments by checking the endpoints.
$ kubectl get pods -l run=nginx -o yaml | grep podIP
podIP: 10.32.0.9
podIP: 10.32.2.10
podIP: 10.32.0.10
podIP: 10.32.2.11
$ kubectl get ep nginx-service
NAME            ENDPOINTS                                              AGE
nginx-service   10.32.0.10:80,10.32.0.9:80,10.32.2.10:80 + 1 more...   3m33s
$ kubectl get ep nginx-service-1
NAME              ENDPOINTS                     AGE
nginx-service-1   10.32.0.10:80,10.32.2.11:80   3m36s
Yes, you can do that.
Add a common label key-value pair to both Deployments' Pod specs and use that common label as the selector in the Service definition.
With the above defined Service, requests will be load-balanced across all the matching Pods.

Cannot apply Service as "type: ClusterIP" in my GKE cluster

I want to deploy my service as a ClusterIP but am not able to apply it; I get the following error message:
[xetra11#x11-work coopr-infrastructure]$ kubectl apply -f teamcity-deployment.yaml
deployment.apps/teamcity unchanged
ingress.extensions/teamcity unchanged
The Service "teamcity" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP'
This here is my .yaml file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teamcity
  template:
    metadata:
      labels:
        app: teamcity
    spec:
      containers:
      - name: teamcity-server
        image: jetbrains/teamcity-server:latest
        ports:
        - containerPort: 8111
---
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: ClusterIP
  ports:
  - port: 8111
    targetPort: 8111
    protocol: TCP
  selector:
    app: teamcity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: teamcity
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  backend:
    serviceName: teamcity
    servicePort: 8111
1) Apply the configuration to the resource by filename:
kubectl apply -f [.yaml file] --force
The resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
2) If the first one fails, you can force-replace, i.e. delete and then re-create, the resource:
kubectl replace --force -f grav-deployment.yml
With --force (equivalent to grace-period=0) the resource is immediately removed from the API, bypassing graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss.
On GKE the Ingress can only point to a Service of type LoadBalancer or NodePort. You can see the error output of the Ingress by running:
kubectl describe ingress teamcity
You will see an error there. As per your YAML: if you are using an nginx ingress controller, you have to use a Service of type NodePort, as sketched below.
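A minimal sketch of the same Service switched to NodePort (if nodePort is omitted, Kubernetes assigns one from the node-port range):
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: NodePort
  ports:
  - port: 8111
    targetPort: 8111
    protocol: TCP
  selector:
    app: teamcity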
Some documentation:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#gce-gke
Did you just recently change the Service description from NodePort to ClusterIP?
Then it might be this issue: github.com/kubernetes/kubectl/issues/221.
You need to use kubectl replace or kubectl apply --force.
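For example, a sketch using the manifest file from the question:
# Delete and re-create the objects in one step, dropping the stale
# nodePort field left over from the earlier NodePort spec:
kubectl replace --force -f teamcity-deployment.yaml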

How to expose an "election-based master and secondaries" service outside Kubernetes cluster?

I've been trying to implement a service inside Kubernetes where each Pod needs to be accessible from outside the cluster.
The topology of my service is simple: 3 members, one of them acting as master at any time (election based); writes go to the primary; reads go to the secondaries. This is a MongoDB replica set, by the way.
They work with no issues inside the Kubernetes cluster, but from outside the only thing I have is a NodePort-type Service that load-balances incoming connections to one of them, whereas I need to access each one of them separately, depending on what I want to do from my client (write or read).
What kind of Kubernetes resource should I use to give individual access to each one of the members of my service?
In order to access every Pod from outside, you can create a separate Service for each Pod and use the NodePort type.
Because a Service uses selectors to pick its available backends, you can create just one Service for the master:
apiVersion: v1
kind: Service
metadata:
  name: my-master
  labels:
    run: my-master
spec:
  type: NodePort
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-master
spec:
  selector:
    matchLabels:
      run: my-master
  replicas: 1
  template:
    metadata:
      labels:
        run: my-master
    spec:
      containers:
      - name: mongomaster
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
Also, you can use one Service for all your read-only replicas; this Service will balance requests between all of them.
apiVersion: v1
kind: Service
metadata:
  name: my-replicas
  labels:
    run: my-replicas
spec:
  type: LoadBalancer
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-replicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-replicas
spec:
  selector:
    matchLabels:
      run: my-replicas
  replicas: 2
  template:
    metadata:
      labels:
        run: my-replicas
    spec:
      containers:
      - name: mongoreplica
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
I also suggest that you do not expose Pods outside of your network, for security reasons. It would be better to create strict firewall rules to restrict any unexpected connections.
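For instance, once both Services exist, an external client would write through the master's NodePort and read through the replicas' load balancer (a sketch; hosts and ports are placeholders matching the manifests above):
# Writes: reach the single master through any node IP and its NodePort
mongo --host <node-ip> --port <my-master-nodeport>
# Reads: reach one of the secondaries through the my-replicas external IP
mongo --host <my-replicas-external-ip> --port <your-external-port>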