Kubernetes - can a Deployment contain a Service?

Just finished reading Nigel Poulton's The Kubernetes Book, but I am somewhat puzzled with Services.
Could a Service be added to the Deployment manifest below somehow? Or does the Service have to be POSTed on its own? Isn't the whole purpose of a Deployment to specify everything needed for the app to run?
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080

They're different objects and you have to submit them separately (HTTP POST, kubectl apply, ...).
There are a couple of tricks you can do to minimize the impact of this:
You can use a multi-document YAML file and submit that as a single thing, like
---
apiVersion: apps/v1
kind: Deployment
...
---
apiVersion: v1
kind: Service
...
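For example, a filled-in version of that file for the Deployment in the question could look like the sketch below (the Service name hello-svc and its port 80 are assumptions, not something defined in the book's manifest):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc           # assumed name
spec:
  selector:
    app: hello-world        # must match the Pod template labels above
  ports:
  - port: 80                # assumed Service port
    targetPort: 8080        # the containerPort from the Deployment
Submitting that single file (for example with kubectl apply -f) creates or updates both objects in one step.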
There is an undocumented kind: List that can embed multiple objects:
apiVersion: v1
kind: List
items:
- apiVersion: apps/v1
  kind: Deployment
  ...
- apiVersion: v1
  kind: Service
  ...
You can use a higher-level deployment manager such as Helm that lets you keep each object in a separate file, but deploy them in a single command.
It's perhaps unfortunate that a couple of Kubernetes objects have names that are different from their plain English meanings (a Deployment doesn't cover all of the steps or parts of deploying a whole application; a Service is just an IP/DNS pointer and not a service implementation) but that's the way it is. I tend to capitalize the Kubernetes object names when it will disambiguate things.

Isn't the whole purpose of a deployment to specify everything needed for the app to run?
The whole purpose of "Deployment" is to manage the deployment of pods/replicasets including replication, scaling, rolling update, rollbacks. The DeploymentController is part of the master node's controller manager, and it makes sure that the current state always matches the desired state.
does the Service have to be POSTed on its own?
If you are familiar with load-balancer terminology, a Service is the frontend and the Pods are its backends: the Service forwards requests to the Pods selected by its label selector.

Related

Kubernetes: Only one service endpoint working

I've deployed my Django/React app into K8s and exposed both deployments as a service (ClusterIP).
Whenever I try to call the API service through its ClusterIP:8000, it sometimes refuses the connection. So I checked its endpoints and only one out of the three existing endpoints returns what I expect. I understand that when calling the ClusterIP, it redirects to one of those three endpoints.
Is there any way to 'debug' an incoming service request? Can I modify the number of existing endpoints (so I could limit it to the only working endpoint)? Is there any other way to maybe see logs of the service to find out why only one of the endpoints is working?
I was able to fix it:
I deployed a three-tier application (Django/React/DB) and used the same selector for every deployment, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-xxx-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
So when exposing this with "kubectl expose deployment/...", the Service's selector matched the Pods of every Deployment that used the same label, and one endpoint was created per matching Pod. Since I have three Deployments (DB/React/Django), three endpoints were created.
Changing the Deployment .yaml like this, so each Deployment gets a unique label, fixed my error and only one endpoint was created:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: myapp-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-web
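With each Deployment carrying its own unique label, the corresponding Service's selector only matches that one workload's Pods, so it only gets the endpoints you expect; a minimal sketch for the web tier (the Service name and port here are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: myapp-web          # assumed name
spec:
  type: ClusterIP
  selector:
    app: myapp-web         # matches only this Deployment's Pods
  ports:
  - port: 8000             # assumed; use the port your container listens on
    targetPort: 8000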
Using ClusterIP:8000 directly does not seem right. You could replace it with http://$(serviceName).$(namespace):8000/ to address the Service correctly.

Autoscaling daemonsets in nodepool

I am fairly new to the Kubernetes engine and I have a use case that I can't seem to get working. I want to have each Pod run on its own dedicated node and then autoscale the cluster.
For now I have tried using a DaemonSet to run each Pod, and I have created a HorizontalPodAutoscaler targeting the node pool.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: actions
        image: image_link
      nodeSelector:
        cloud.google.com/gke-nodepool: test
  updateStrategy:
    type: RollingUpdate
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: test
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    name: test
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
I then use the stress utility to test the autoscaling process, but the number of nodes stays constant. Is there something I am missing here? Is there another component I can use for my use case?
A HorizontalPodAutoscaler scales Pods based on metrics. It is not applicable to a DaemonSet.
A DaemonSet deploys one Pod on each node in the cluster. If you want to scale a DaemonSet you need to scale your node pool.
A HorizontalPodAutoscaler is best used to autoscale Deployment objects. In your case, change the DaemonSet into a Deployment or scale out the node pool. Autoscaling of nodes is supported on Google Cloud Platform; for other cloud providers, check their documentation.
A DaemonSet is a controller that deploys one Pod on each node matching its selector; you can't have more than one of its Pods running on a given node. You should look at another controller. I can't tell what kind of app you want to deploy, but I would suggest:
Deployment: if you have a stateless application which can handle scaling up and down without needing consistency between the replicas
StatefulSet: if you have a stateful application which needs some care when scaling and also data consistency
One important thing to notice about the HPA is that you must have metrics enabled (for example via the metrics server), otherwise the reconciliation loop cannot determine when a scale action is needed.
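To make that concrete, here is a sketch of the same workload rewritten as a Deployment plus an HPA (the CPU request value is an assumption; CPU-utilization-based scaling needs resource requests and a working metrics pipeline):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: actions
        image: image_link
        resources:
          requests:
            cpu: 500m      # assumed; required for CPU-utilization scaling
      nodeSelector:
        cloud.google.com/gke-nodepool: test
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
With cluster autoscaling enabled on the node pool, new Pods that no longer fit on the existing nodes will then trigger new nodes.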

What is the purpose of a kubernetes deployment pod selector?

I fail to see why Kubernetes needs a pod selector in a Deployment that can only contain one pod template. Feel free to educate me on why the Kubernetes engineers introduced a selector statement inside the Deployment definition instead of automatically selecting the Pods from the template.
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
spec:
  type: LoadBalancer
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: grpc-test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
  selector:
    matchLabels:
      app: grpc-test
  template:
    metadata:
      labels:
        app: grpc-test
    spec:
      containers:
      ...
Why not simply define something like this?
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
spec:
  type: LoadBalancer
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: grpc-test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: grpc-test
    spec:
      containers:
      ...
Ah! Funny enough, I once tried wrapping my head around the concept of label selectors myself. So, here it goes...
First of all, what the hell are these labels used for? Labels within Kubernetes are the core means of identifying objects. A controller controls pods based on their label instead of their name. In this particular case they are meant to identify the pods belonging to the deployment’s replica set.
You actually didn't have to explicitly define .spec.selector when using extensions/v1beta1. In that case it would default from .spec.template.labels. However, if you don't, you can run into problems with kubectl apply once one or more of the labels used for selecting change, because kubectl apply will look at kubectl.kubernetes.io/last-applied-configuration when comparing changes, and that annotation will only contain the user input from when the resource was created and none of the defaulted fields. You'll get an error because it cannot calculate the diff, like:
spec.template.metadata.labels: Invalid value: {"app":"nginx"}: `selector` does not match template `labels`
As you can see, this is a pretty big shortcoming since it means you can not change any of the labels that are being used as a selector label or it would completely break your deployment flow. It was “fixed” in apps/v1beta2 by requiring selectors to be explicitly defined, disallowing mutation on those fields.
So in your example, you actually don’t have to define them! The creation will work and will use your .spec.template.labels by default. But yeah, in the near future when you have to use v1beta2, the field will be mandatory. I hope this kind of answers your question and I didn’t make it any more confusing ;)
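For reference, in the current apps/v1 API the selector field is required (and immutable after creation), so the explicit form from your first snippet is the one that carries forward; a trimmed sketch with an assumed container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-test             # required in apps/v1, must match the template labels
  template:
    metadata:
      labels:
        app: grpc-test
    spec:
      containers:
      - name: grpc-test                    # container name and image assumed for the sketch
        image: example/grpc-test:latest
        ports:
        - containerPort: 8080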
However, if you don't, you can run into problems with kubectl apply once one or more of the labels used for selecting change, because kubectl apply will look at kubectl.kubernetes.io/last-applied-configuration when comparing changes, and that annotation will only contain the user input from when the resource was created and none of the defaulted fields.
Quoting from Toon's answer.
My interpretation is that it's not logically necessary at all. It's only due to a limitation of the current implementation of Kubernetes: the functionality it uses to "compare" two deployments/objects does not take default values into account.
It is a method to decouple a replicaset type from a pod type. There are many similar answers here, but the crux of it is that a deployment/replicaset may be changed at a future point in time, but it won't know what the previous selector was for the last revision. It would have to look at the last revision's template.metadata.labels and then recursively apply those pod labels as the current revision selector. But wait! What if the template.metadata.labels in the current revision changes? Now how do you account for two template.metadata.labels label sets if the new spec doesn't include the same label(s) in the prior revision where the matchLabels was inferred?
Consider inferred matchLabels:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grpc-test
    spec:
      containers:
      ...
Now if I were to go and revise this deployment, my client-side doesn't have awareness of the inferred matchLabels, so my changes would need to account for existing pods. Server-side could do some magic to assume the context in a diff, but what if I changed my template.metadata.labels:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grpc-test-new
    spec:
      containers:
      ...
Now my Deployment would need to both infer the new template.metadata.labels and merge them with the existing server-side state, or else you end up orphaning a bunch of Pods.
I hope this helps illustrate a scenario where explicitly defining the selector allows you to be more flexible in your template updates while still retaining the revision history of previous selectors.
As far as I know, the selector in the deployment is an optional property.
The template is the only required field of spec.
So, you don't need the use the label selector in the deployment, and in you're example I don't see why you couldn't use the latter part?
Deployments are dynamic objects, for example when your system needs to scale up and add more Pods. The template section only defines the Pods that this Deployment creates when you run kubectl apply, while the selector section ensures that Pods newly created by scaling up are still managed by the already existing Deployment.
Generally speaking, the Deployment continuously watches all the Pods and checks, via the selector section, whether there are any Pods it should control.
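To make the decoupling concrete, the selector can stay narrow and stable while the Pod template carries extra labels that change from revision to revision; a small sketch (the version label and container details are only illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-test           # stable; never changes across revisions
  template:
    metadata:
      labels:
        app: grpc-test         # must include the selector labels
        version: "1.2.3"       # extra label, free to change on each rollout
    spec:
      containers:
      - name: grpc-test                    # assumed container details
        image: example/grpc-test:1.2.3
        ports:
        - containerPort: 8080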

Reusable Pod Templates

Is it possible in Kubernetes to create a pod template and reuse it later when specifying a pod within a deployment? For example:
Say I have pod template...
apiVersion: v1
kind: PodTemplate
metadata:
  name: my-pod-template
template:
  metadata:
    labels:
      app: "my-app"
  spec:
    containers:
    - name: my-app
      image: jwaldrip/my-app:latest
Could I then use it in a deployment as so?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    metadata:
      name: my-pod-template
This would be super helpful when deploying something like Jobs, where I want to own the creation of a job with the given template.
There is not.
Specifically in the case of Pods, there are PodPresets:
https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
But those don't apply to other objects.
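For what it's worth, a PodPreset is itself just another object; a minimal sketch, assuming the settings.k8s.io/v1alpha1 alpha API and its admission plugin are enabled in the cluster (names and values here are illustrative):
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: my-app-defaults          # assumed name
spec:
  selector:
    matchLabels:
      app: my-app                # applies to Pods carrying this label
  env:
  - name: LOG_LEVEL              # assumed env var, injected into matching Pods
    value: "info"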
One way to enforce the shape or attributes of arbitrary objects is to establish tooling that correctly creates those objects, then create credentials for that tooling, and use RBAC to only allow those credentials to create those objects.
https://kubernetes.io/docs/admin/authorization/rbac/
Another way would be to create an Admission Controller to watch the attempted creation of the desired objects, and verify/reject those that don't meet the criteria:
https://kubernetes.io/docs/admin/admission-controllers/
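A sketch of what registering such a check could look like, assuming you have already built and deployed a validating webhook server (every name below is an assumption):
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-template-policy              # assumed name
webhooks:
- name: pod-template-policy.example.com  # assumed webhook name
  rules:
  - apiGroups: ["apps", "batch"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments", "jobs"]
  clientConfig:
    service:
      namespace: kube-system             # assumed; wherever the webhook server runs
      name: pod-template-policy
      path: /validate
  failurePolicy: Fail                    # reject objects the webhook cannot vet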

Is it possible to move the running pods from ReplicationController to a Deployment?

We are using an RC to run our workload and want to migrate to a Deployment. Is there a way to do that without causing any impact to the running workload? I mean, can we move these running Pods under a Deployment?
Like @matthew-l-daniel answered, the answer is yes. But I am more than 80% certain about it, because I have tested it.
Now, what's the process we need to follow?
Let's say I have a ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Question: can we move these running pods under Deployment?
Let's follow these steps to see if we can.
Step 1:
Delete this RC with --cascade=false. This will leave the Pods running.
Step 2:
Create a ReplicaSet first, with the same label as the ReplicationController:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ...
So now these Pods are under the ReplicaSet.
Step 3:
Now create a Deployment with the same label:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ...
And the Deployment will find that a matching ReplicaSet already exists, and our job is done.
Now we can try increasing the replicas to see if it works.
And it works.
Which way it doesn't work:
After deleting the ReplicationController, do not create the Deployment directly. This will not work, because the Deployment will find no ReplicaSet and will create a new one with an additional label that will not match your existing Pods.
I'm about 80% certain the answer is yes, since they both use Pod selectors to determine whether new instances should be created. The key trick is to use --cascade=false (the default is true) in kubectl delete, whose help even speaks to your very question:
--cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
By deleting the ReplicationController but not its subordinate Pods, they will continue to just hang out (although be careful, if a reboot or other hazard kills one or all of them, no one is there to rescue them). Creating the Deployment with the same selector criteria and a replicas count equal to the number of currently running Pods should cause a "no action" situation.
I regret that I don't have my cluster in front of me to test it, but I would think a small nginx RC with replicas=3 should be a simple enough test to prove that it behaves as you wish.