Kubernetes - Exposing as a service

Since rolling updates are not a feature supported by StatefulSets, I thought of experimenting with hybrid pods where the seed nodes would be StatefulSets and the other non-seed nodes would be Deployments. I was trying out this link, as suggested in another question: Statfulsets - akka clustering. Is there a way I can expose the seed and the non-seed nodes as the same service so that they can be hit with a single external IP?

That's possible when using labels properly.
For the seed nodes use something like this:
apiVersion: apps/v1
kind: StatefulSet
...
spec:
  serviceName: akka-seed
  selector:
    matchLabels:
      run: akka-seed
  template:
    metadata:
      labels:
        run: akka-seed
        app: akka
For the worker nodes use something like this:
apiVersion: apps/v1
kind: Deployment
...
spec:
  selector:
    matchLabels:
      run: akka-worker
  template:
    metadata:
      labels:
        run: akka-worker
        app: akka
In the service you can then reference both through:
apiVersion: v1
kind: Service
metadata:
  name: akka
spec:
  ports:
  ...
  selector:
    app: akka
This would select pods from both groups.
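Note that serviceName: akka-seed on the StatefulSet refers to a governing headless Service, which has to exist alongside the combined Service above. A minimal sketch, assuming the labels from the snippets above (the port is a placeholder for whatever your Akka nodes actually use):
apiVersion: v1
kind: Service
metadata:
  name: akka-seed
spec:
  clusterIP: None          # headless: gives each seed pod a stable DNS name
  selector:
    run: akka-seed
  ports:
  - name: remoting
    port: 2551             # assumption: a typical Akka remoting port; adjust to your setup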

Kubernetes non-specific spec.selector does not prevent Kubernetes from working correctly

I've experienced a surprising behavior when playing around with Kubernetes and I wanted to know if there is any good explanation behind it.
I've noticed that when two Kubernetes Deployments are created with the same labels and the same spec.selector, the deployments still function correctly, even though using the same selector "should" cause them to be confused regarding which pods belong to which deployment.
Example configurations which present this -
example_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    extra_label: one
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
example_deployment_2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  labels:
    app: nginx
    extra_label: two
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I expected the deployments not to work correctly, since each would select the other's pods and assume they were its own.
The actual result is that the deployments seem to be created correctly, but entering either deployment in k9s returns all of the pods. This is true for both deployments.
Can anyone please shed light on why this is happening? Is there additional internal filtering in Kubernetes to prevent pods which were not really created by a deployment from being associated with it?
I'll note that I've seen this behavior in AWS and have reproduced it in Minikube.
When you create a K8s Deployment, K8s creates a ReplicaSet to manage the pods; this ReplicaSet then creates the pods based on the number of replicas provided (or patched by the HPA). In addition to the labels and annotations you provide, the ReplicaSet adds an ownerReferences entry containing its name and uid. So even if you have 4 pods with the same labels, each pair of pods will have a different ownerReferences entry, which is what the owning ReplicaSet uses to manage them:
apiVersion: v1
kind: Pod
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: <replicaset name>
    uid: <replicaset uid>
  ...
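To see this for yourself, you can print the owning ReplicaSet next to each pod. A quick check, assuming the app: nginx label from the examples above:
kubectl get pods -l app=nginx -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name
Pods from the two deployments will show two different owners, even though their labels are identical.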

collect metrics from different pods in prometheus

I want to collect metrics from a deployment (with multiple pods) in Kubernetes, and one of my metrics is the number of calls that my deployment received. My question is about Prometheus: how can I tell Prometheus to call all the pods that are part of the deployment and collect metrics from them? And what is the best practice to achieve this goal?
I would highly recommend using the prometheus-operator to do all the heavy lifting of configuring Prometheus monitoring for your applications.
For example, having the Deployment and Service like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
You may then configure a ServiceMonitor object, which will use the Service as a service-discovery endpoint to find all the pods of the Deployment. This assumes that your application exposes metrics using the HTTP path /metrics.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
This will make Prometheus scrape metrics for your application.
You may read more about ServiceMonitors here: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
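Note that the ServiceMonitor itself must be picked up by a Prometheus custom resource through its serviceMonitorSelector. A minimal sketch along the lines of the getting-started guide linked above (the service account name is an assumption; it needs the usual scrape RBAC permissions):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus   # assumption: a service account with scrape RBAC exists
  serviceMonitorSelector:
    matchLabels:
      team: frontend               # matches the ServiceMonitor label above
  resources:
    requests:
      memory: 400Mi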

How to get the pods into a Running state

I'm trying to set up Cassandra on a Kubernetes cluster made of three virtual machines using two different files (Deployment and Service). In order to do this I use the command
kubectl create -f file.yaml
The service file works perfectly, but when I start the other one with three replicas, the state of the pods is CrashLoopBackOff instead of Running.
The configuration of the deployment file is the following
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google_containers/cassandra:v5
        ports:
        - containerPort: 9042
And this is the service file
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
  - port: 9042
  selector:
    app: cassandra
I appreciate any help on this.
You shouldn't be using a Deployment for running stateful applications. StatefulSets are recommended for running databases like Cassandra.
Follow the link below for reference: https://kubernetes.io/docs/tutorials/stateful-application/cassandra/
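For orientation, a heavily abbreviated sketch of the StatefulSet shape from that tutorial (image tag and storage size may differ from the current version of the tutorial):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra           # requires a headless Service named cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        ports:
        - containerPort: 9042
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  volumeClaimTemplates:            # each replica gets its own persistent volume
  - metadata:
      name: cassandra-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi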

How to implement Canary deployment in kubernetes with different versions specified in deployment

I have two deployment files
1.
deployment-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: process
  labels:
    app: process
spec:
  replicas: 3
  selector:
    matchLabels:
      app: process
  template:
    metadata:
      labels:
        app: process
        version: v1
    spec:
      containers:
      - name: pull
        image: parma/k8s-php:red
        ports:
        - containerPort: 80
2.
deployment-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: process
  labels:
    app: process
spec:
  replicas: 3
  selector:
    matchLabels:
      app: process
  template:
    metadata:
      labels:
        app: process
        version: v2
    spec:
      containers:
      - name: pull
        image: parma/k8s-php:green
        ports:
        - containerPort: 80
As I have specified two different versions in spec.template.metadata.labels, it does not keep 6 pods running across both ReplicaSets; it only keeps the latest ReplicaSet up and running.
Is there any way to achieve a canary deployment by keeping both ReplicaSets up and running, with 3 pods from v1 and 3 pods from v2?
You can't have multiple deployments with the same name. Rename them to process-v1 and process-v2.
You need to have different selectors for each of them. The first one should have matchLabels: {app: process, version: v1}, the second one matchLabels: {app: process, version: v2}.
So technically these will be two completely separate deployments. What makes them "baseline" and "canary" is how you send traffic to them. If you specify a common selector (just {app: process}) in your service, then both of the deployments will see a fraction of the traffic.
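For example, a Service with the common selector could look like this (a sketch; the port matches the containers above):
apiVersion: v1
kind: Service
metadata:
  name: process
spec:
  selector:
    app: process     # matches pods from both process-v1 and process-v2
  ports:
  - port: 80
With 3 pods on each side, traffic splits roughly 50/50; you control the canary's share by adjusting the replica counts (e.g. 9 baseline pods and 1 canary pod for roughly 10%).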
The name of what you want to implement is Canary Deployment. It is a great feature for A/B testing and assists in continuous delivery and production testing. It does not have to live in a single Deployment; the trick is in the load balancer or at the gateway. There are options in the market for this (Spring Zuul or Istio's Envoy) that can route a certain percentage of the traffic to one deployment and the rest to the other.

Is it possible to move the running pods from ReplicationController to a Deployment?

We are using an RC to run our workload and want to migrate to a Deployment. Is there a way to do that without causing any impact to the running workload? I mean, can we move these running pods under a Deployment?
As @matthew-l-daniel answered, the answer is yes. And I am more than 80% certain about it, because I have tested it.
Now, what's the process we need to follow?
Let's say I have a ReplicationController.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Question: can we move these running pods under a Deployment?
Let's follow these steps to see if we can.
Step 1:
Delete this RC with --cascade=false. This will leave the Pods in place.
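With the RC above, that would be the following command (newer kubectl versions spell the flag --cascade=orphan):
kubectl delete rc nginx --cascade=false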
Step 2:
Create a ReplicaSet first, with the same labels as the ReplicationController:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ...
So, now these Pods are under the ReplicaSet.
Step 3:
Create a Deployment now with the same labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      ...
And the Deployment will find that a matching ReplicaSet already exists, and our job is done.
Now we can increase the replicas to see if it works.
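A quick check, assuming the names above:
kubectl scale deployment nginx --replicas=5
kubectl get pods -l app=nginx
The new pods appear alongside the original ones, all owned by the same ReplicaSet.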
And it works.
Which way it doesn't work:
After deleting the ReplicationController, do not create the Deployment directly. This will not work, because the Deployment will find no ReplicaSet and will create a new one with an additional pod-template-hash label, which will not match your existing Pods.
I'm about 80% certain the answer is yes, since they both use Pod selectors to determine whether new instances should be created. The key trick is to use --cascade=false (the default is true) in kubectl delete, whose help even speaks to your very question:
--cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
By deleting the ReplicationController but not its subordinate Pods, they will continue to just hang out (although be careful, if a reboot or other hazard kills one or all of them, no one is there to rescue them). Creating the Deployment with the same selector criteria and a replicas count equal to the number of currently running Pods should cause a "no action" situation.
I regret that I don't have my cluster in front of me to test it, but I would think a small nginx RC with replicas=3 should be a simple enough test to prove that it behaves as you wish.