kubernetes Service select multi labels

I have two StatefulSets named my-sts and my-sts-a, and I want to create a single Service that addresses same-indexed pods from the two StatefulSets, e.g. my-sts-0 and my-sts-a-0. But I found the k8s docs say:
Labels selectors for both objects are defined in json or yaml files using maps, and only equality-based requirement selectors are supported
My idea is to create a label for the pods of the two StatefulSets, like:
my-sts-0 has label abc:sts-0
my-sts-a-0 has label abc:sts-0
my-sts-1 has label abc:sts-1
my-sts-a-1 has label abc:sts-1
How can I get the index of those pods so that I can create a label like abc=sts-<index> to achieve this?
Is there any other way?

Kubernetes already gives you a DNS name to select individual StatefulSet pods. Say you have a Service my-sts that matches every pod in the StatefulSet, and the StatefulSet is set up with serviceName: my-sts; then you can access host names my-sts-0.my-sts.namespace.svc.cluster.local and so on.
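For reference, a minimal sketch of such a headless Service (the app: my-sts Pod label and the port are assumptions; the selector must match the labels in your StatefulSet's Pod template):

apiVersion: v1
kind: Service
metadata:
  name: my-sts
spec:
  clusterIP: None      # headless: each Pod gets its own DNS record
  selector:
    app: my-sts        # assumed Pod label
  ports:
  - port: 80           # assumed application port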
If you specifically want a Service to target one specific Pod, there is also a statefulset.kubernetes.io/pod-name label that gets added automatically, so you can select on that:
apiVersion: v1
kind: Service
metadata:
  name: my-sts-master
spec:
  selector:
    statefulset.kubernetes.io/pod-name: my-sts-0
  ports: [...]
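To verify that this selector picks up only the intended Pod, you can inspect the Service's endpoints:

kubectl get endpoints my-sts-master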

In a Kubernetes deployment yaml, why do we have to match the template labels to the deployment labels in the selector?

I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?
Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-backend
spec:
replicas: 2
selector:
matchLabels:
app: api-backend
template:
metadata:
labels:
app: api-backend
spec:
#...etc
I might be missing some key understanding of k8s or yamls.
I've tried having the template with no labels, and it seems to work, but I don't understand why. Maybe Kubernetes is auto-magically inserting the labels.
Technically, the parameter matchLabels decides which Pods belong to the given Deployment (and the underlying ReplicaSet). In practice, I have never seen a Deployment whose Pod labels differ from matchLabels. So the reason might be uniformity with other Kubernetes resources (like Service, where a label selector makes more sense).
I recommend reading the blog post matchLabels, labels, and selectors explained in detail, for beginners.
Let's simplify labels, selectors and template labels first.
The Labels in the metadata section are assigned to the deployment itself.
The Labels in the .spec.template section are assigned to the pods created by the deployment. These are actually called PodTemplate labels.
The selector provides uniqueness to your resource. It is used to identify the resources that match the labels in the .spec.selector.matchLabels section.
Now, it is not mandatory to have all the PodTemplate labels in the matchLabels section. A Pod can have many labels; the selector only needs to match the ones listed in matchLabels. Here's a use case to understand why it has to be used:
"Let's say you have deployment X which creates two pods with label nginx-pods and image nginx, and another deployment Y which applies to pods with the same label nginx-pods but uses image nginx:alpine. If deployment X is running and you run deployment Y afterwards, it will not create new pods; instead, it will replace the existing pods with the nginx:alpine image. Both deployments will identify the pods, as the labels on the pods match the labels in both deployments' .spec.selector.matchLabels."
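As an illustration (a sketch with made-up names), the Pod template below carries an extra tier label that is not part of matchLabels; the ReplicaSet still manages these Pods because the single app label matches:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web            # only this label is used to select the Pods
  template:
    metadata:
      labels:
        app: web          # must include the matchLabels
        tier: frontend    # extra label, not used by the selector
    spec:
      containers:
      - name: nginx
        image: nginx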
Because the Deployment.metadata.labels belong to the Deployment resource, and the Deployment.spec.template.metadata.labels to the Pods which are handled by the Deployment controller. The Deployment controller knows which Pods belong to which Deployment based on the labels on the Pod resources.
This is why you have to specify the labels this way.

how to get all replicaset names inside a container

Consider the following example provided in this doc.
What I'm trying to achieve is to see the 3 replicas' names from inside the container.
Following this guide I was able to get the current pod name, but I also need the pod names of the other replicas.
Ideally I would like to:
print(k8s.get_my_replicaset_names())
or
print(os.getenv("MY_REPLICASET"))
and have a result like:
[frontend-b2zdv,frontend-vcmts,frontend-wtsmm]
that is, the pod names of all the replicas (including the current one, of course), so I can eventually compare the current name against the list to get my index in it.
Is there any way to achieve this?
As you can read here, the Downward API is used to expose Pod and Container fields to a running Container:
There are two ways to expose Pod and Container fields to a running
Container:
Environment variables
Volume Files
Together, these two ways of exposing Pod and Container fields are
called the Downward API.
It is not meant to expose any information about other objects/resources, such as the ReplicaSet or Deployment that manages such a Pod.
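For the current Pod's own name, a minimal sketch of the environment-variable approach (Pod and container names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example          # placeholder name
spec:
  containers:
  - name: app                         # placeholder container
    image: busybox
    command: ["sh", "-c", "echo $MY_POD_NAME && sleep 3600"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name    # Downward API: the Pod's own name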
You can see exactly which fields the yaml manifest describing a running Pod contains by executing:
kubectl get pods <pod_name> -o yaml
The example fragment of its output may look as follows:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    <some annotations here>
    ...
  creationTimestamp: "2020-10-08T22:18:03Z"
  generateName: nginx-deployment-7bffc778db-
  labels:
    app: nginx
    pod-template-hash: 7bffc778db
  name: nginx-deployment-7bffc778db-8fzrz
  namespace: default
  ownerReferences:                       # 👈
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet                     # 👈
    name: nginx-deployment-7bffc778db    # 👈
  ...
As you can see, the metadata section contains ownerReferences, which in the example above holds one reference to the ReplicaSet object that manages this Pod. So you can get this particular ReplicaSet name pretty easily, as it is part of the Pod's yaml manifest.
However, you cannot get information this way about other Pods managed by this ReplicaSet.
Such information can only be obtained from the API server, e.g. by using the kubectl client or programmatically with direct calls to the API.
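For example, if you only need the names of the sibling Pods, one way (a sketch, assuming the Pods carry the pod-template-hash label shown above) is to query the API server by that label:

kubectl get pods -l pod-template-hash=7bffc778db -o jsonpath='{.items[*].metadata.name}'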

service selector vs deployment selector matchlabels

I understand that services use a selector to identify which pods to route traffic to by their labels.
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  ports:
  - name: tcp
    protocol: TCP
    port: 443
    targetPort: 443
  selector:
    app: nginx
That's all well and good.
Now, what is the difference between this selector and the spec.selector of the Deployment? I understand that it is used so that the deployment can match and manage its pods.
I don't understand, however, why I need the extra matchLabels declaration and can't just do it like in the service. What's the use of this semantically?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Thanks in advance
In the Service's spec.selector, you can identify which pods to route traffic to only by their labels.
In the Deployment's spec.selector, on the other hand, you have two ways to express which Pods the Deployment (and its underlying ReplicaSet) should manage: matchExpressions and matchLabels.
How Deployment uses spec.selector
When a Deployment's Pod template is changed, a new ReplicaSet is created. The ReplicaSet is responsible for managing the Pods, and it uses spec.selector to know which Pods it should manage.
Example:
If replicas: 1 is changed in the Deployment to e.g. replicas: 2, the ReplicaSet observes the Pods, using spec.selector to match Pods with matching labels. It only sees 1 replica initially, but its desired state is now replicas: 2, so it is responsible for creating one additional Pod from the template in the Deployment.
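For instance, the same scaling can be done imperatively with kubectl (using the nginx Deployment from the question above):

kubectl scale deployment nginx --replicas=2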
Selector syntax
There are two ways to declare the labels under spec.selector in a Deployment:
matchLabels - you declare the labels directly
matchExpressions - you write an expression over labels (see the sketch below)
See kubectl explain deployment.spec.selector for full explanation of spec.selector alternatives.
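For illustration, a fragment of a Deployment spec using matchExpressions, equivalent in effect to matchLabels with app: nginx:

spec:
  selector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - nginx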
Labels and Selectors
Labels and selectors are a generic concept in Kubernetes and are used in multiple places. Another example is how you can filter which resources you want to see or use with kubectl. E.g. you can select the Pods for an app with:
kubectl get pod -l=app=myappname
(if your Pods are labelled with app: myappname).
Why do I need the extra matchLabels declaration and can't just do it like in the service? What's the use of this semantically?
Because the Service spec only supports equality-based selectors, while Deployment is a newer resource that supports both syntaxes (equality-based and set-based).
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (&&) operator.
Reference
The Service spec uses just the "equality-based" label selector syntax.
Newer resources, such as Job, Deployment, ReplicaSet, and DaemonSet, support set-based requirements...
Reference
My understanding is that earlier the only supported syntax was the equality-based one, like we have in the Service spec, and that now, when the resource you are using supports the newer syntax, you are required to use matchLabels or matchExpressions.
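For comparison, both selector styles as kubectl label selectors (the label values are just examples):

kubectl get pods -l app=nginx                    # equality-based
kubectl get pods -l 'app in (nginx, frontend)'   # set-based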

how to ignore random kubernetes pod name in deployment file

Below is my kubernetes deployment file -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: boxfusenew
  labels:
    app: boxfusenew
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: boxfusenew
    spec:
      containers:
      - image: sk1997/boxfuse:latest
        name: boxfusenew
        ports:
        - containerPort: 8080
In this deployment file, the pod name boxfusenew is specified under the container tag. So I want the pod generated by the deployment file to have the name boxfusenew, but the deployment is attaching some random value to it, as in boxfusenew-5f6f67fc5-kmb7z.
Is it possible to avoid the random values in the pod name through the deployment file?
Not really, unless you create the Pod itself and not a deployment.
According to Kubernetes documentation:
Each object in your cluster has a Name that is unique for that type of resource. Every Kubernetes object also has a UID that is unique across your whole cluster.
For example, you can only have one Pod named myapp-1234 within the same namespace, but you can have one Pod and one Deployment that are each named myapp-1234.
For non-unique user-provided attributes, Kubernetes provides labels and annotations.
If you create a Pod with a specific unique label, you can use this label to query the Pod, so there is no need to know the exact name.
You can use a jsonpath to query the values that you want from your Pod under that specific deployment. I've created an example that may give you an idea:
kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.app=="boxfusenew")].metadata.name}'
This would return the name of the Pod that has the label app=boxfusenew. You can take a look at some other jsonpath examples here and here.
First, what kind of use case do you want to achieve? If you simply want to get the pods that belong to a certain deployment, you can use a label and selector. For example:
kubectl -n <namespace> get po -l <key>=<value>
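For instance, with the labels from the manifest in the question (the default namespace is an assumption):

kubectl -n default get po -l app=boxfusenew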

How to select a specific pod for a service in Kubernetes

I have a kubernetes cluster of 3 hosts where each host has a unique id label.
On this cluster, there is a piece of software that has 3 instances (replicas).
Each replica needs to talk to all other replicas. In addition, there is a service that contains all pods so that this application is permanently available.
So I have:
Instance1 (with labels run: theTool,instanceid: 1)
Instance2 (with labels run: theTool,instanceid: 2)
Instance3 (with labels run: theTool,instanceid: 3)
and
Service1 (selecting pods with label instanceid=1)
Service2 (selecting pods with label instanceid=2)
Service3 (selecting pods with label instanceid=3)
Service (selecting pods with label run=theTool)
This approach works, but I cannot scale or use the rolling-update feature.
I would like to define a deployment with 3 replicas, where each replica gets a unique generic label (for instance a replica id like 1/3, 2/3 and so on).
Within the services, I could use the selector to fetch this label which will exist even after an update.
Another solution might be to select the pod/deployment depending on the host it is running on. I could use a DaemonSet or just a pod/deployment with affinity to ensure that each host has exactly one replica of my deployment.
But I don't know how to select a pod based on a label of the host it runs on.
Using the hostname is not an option as hostnames will change in different environments.
I have searched the docs but didn't find anything matching this use case. Hopefully, someone here has an idea how to solve this.
The feature you're looking for is called StatefulSets, which just launched to beta with Kubernetes 1.5 (note that it was previously available in alpha under a different name, PetSets).
In a StatefulSet, each replica has a unique name that is persisted across restarts. In your example, these would be something like instance-0, instance-1, instance-2 (StatefulSet ordinals start at 0). Since the instance names are persisted (even if the pod is recreated on another node), you don't need a service per instance.
The documentation has more details:
Using StatefulSets
Scaling a StatefulSet
Deleting a StatefulSet
Debugging a StatefulSet
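For illustration, a minimal sketch of such a StatefulSet (container name and image are placeholders, written against the current apps/v1 API). Its Pods would be named instance-0, instance-1 and instance-2, each reachable at a stable DNS name through the headless Service referenced by serviceName:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: instance
spec:
  serviceName: instance          # a headless Service with this name must exist
  replicas: 3
  selector:
    matchLabels:
      run: theTool
  template:
    metadata:
      labels:
        run: theTool
    spec:
      containers:
      - name: the-tool
        image: the-tool:latest   # placeholder image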
You can map NodeIP:NodePort to PodIP:PodPort. Your pod is running on some node (instance/VM).
Assign a label to your nodes:
http://kubernetes.io/docs/user-guide/node-selection/
Write a service for your pod, for example
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    label: mysql-service
spec:
  type: NodePort
  ports:
  - port: 3306           # Port on which your service is running
    nodePort: 32001      # Node port on which you can access it statically
    targetPort: 3306
    protocol: TCP
    name: http
  selector:
    name: mysql-selector # bind pod here
Add a node selector to the Pod template spec in your deployment.yaml
deployment.yaml:
spec:
  template:
    spec:
      nodeSelector:
        nodename: mysqlnode # label key=value assigned in the first step
With this you will be able to access your pod's service at NodeIP:NodePort. If I labeled node 10.11.20.177 with
nodename=mysqlnode
I will add in the node selector:
nodeSelector:
  nodename: mysqlnode
Since I specified nodePort in the service, I can now access the pod's service (which is running in the container) at
10.11.20.177:32001
But you need to be on the same network as the node so you can reach the pod. For outside access, make port 32001 publicly accessible with your firewall configuration. The node port stays static; the label takes care of your dynamic pod IPs.
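For example, connecting from a machine that can reach that node (assuming a standard mysql client and valid credentials; <user> is a placeholder):

mysql -h 10.11.20.177 -P 32001 -u <user> -p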