kubed syncing secret to more than one namespace - kubernetes

I have kubed running in Kubernetes to sync a secret to multiple namespaces.
With
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev"
I was able to sync the secret to the dev namespace. Now I want to copy the same secret to more than one namespace. I tried the following:
1.
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev,cert-manager-tls=dev2"
2.
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev,dev2"
This didn't work at all.
3.
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev"
  kubed.appscode.com/sync: "cert-manager-tls=dev2"
This worked for namespace dev2, but not for namespace dev.
How can I get this working for two or more namespaces?

You may try kubed.appscode.com/sync: "" according to https://appscode.com/products/kubed/0.6.0-rc.0/guides/config-syncer/intra-cluster/
Say, you are using some Docker private registry. You want to keep its image pull secret synchronized across all namespaces of a Kubernetes cluster. Kubed can do that for you. If a ConfigMap or a Secret has the annotation kubed.appscode.com/sync: "", Kubed will create a copy of that ConfigMap/Secret in all existing namespaces. Kubed will also create this ConfigMap/Secret, when you create a new namespace.
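For example, if the image pull secret were called regcred and lived in kube-system (both names are placeholders here), the empty-valued annotation could be added in place with:
kubectl annotate secret regcred -n kube-system kubed.appscode.com/sync=""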

Generally, to replicate the secret to multiple (but not all) namespaces, you would need to add a label to the destination namespaces:
metadata:
  labels:
    cert-manager-tls: dev
So, the label is used by kubed to identify the destination namespaces.
You can see examples here:
https://appscode.com/products/kubed/v0.11.0/guides/config-syncer/intra-cluster/#namespace-selector
However, I can see that there is a typo in the explanation: it says to add an annotation, but this should be a label (as the code there also shows).
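For the case in the question, a sketch of a destination namespace (assuming the secret keeps its original kubed.appscode.com/sync: "cert-manager-tls=dev" annotation) would be:
apiVersion: v1
kind: Namespace
metadata:
  name: dev2                # repeat for every destination namespace, e.g. dev
  labels:
    cert-manager-tls: dev   # must match the value used in the sync annotation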

Use case: let's imagine we want to synchronize an image pull secret that is managed in kube-system to other namespaces. (Pull secrets are namespace-specific.)
Option 1 is to sync the secret by default to ALL namespaces. For that you need to add this annotation to the secret:
annotations:
  kubed.appscode.com/sync: ""
Option 2 is to sync the secret to one or more (!!) specific namespaces. In this case you need to add a custom value (it is up to you which value you use):
annotations:
  kubed.appscode.com/sync: "pullsecret=bitbucket-dev"
For option 1 you don't need to do anything else on the namespace side; the secret is simply copied to all of them.
For option 2 you need to label every namespace where this secret should be available with the value you defined in the annotation:
metadata:
  labels:
    pullsecret: bitbucket-dev
You can label multiple namespaces with this label; the secret is copied from kube-system to each of them.
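If the target namespaces already exist, the label can also be applied directly with kubectl (the namespace names here are illustrative):
kubectl label namespace team-a pullsecret=bitbucket-dev
kubectl label namespace team-b pullsecret=bitbucket-dev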
Edit: TechnoCowboy is correct. I clarified my answer to avoid any confusion.

Related

In a Kubernetes deployment yaml, why do we have to match the template labels to the deployment labels in the selector?

I am new to Kubernetes, so this might be obvious, but in a deployment YAML, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, and then match those separately in the selector?
Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend
    spec:
      #...etc
I might be missing some key understanding of k8s or yamls.
I tried having the template with no labels, and it seems to work, but I don't understand why. Kubernetes could be auto-magically inserting the labels.
Technically, the matchLabels parameter decides which Pods belong to the given Deployment (and the underlying ReplicaSet). In practice, I have never seen a Deployment with labels different from its matchLabels. So the reason might be uniformity with other Kubernetes resources (like Service, where matchLabels makes more sense).
I recommend reading the blog post matchLabels, labels, and selectors explained in detail, for beginners.
Let's simplify labels, selectors and template labels first.
The labels in the metadata section are assigned to the deployment itself.
The labels in the .spec.template section are assigned to the pods created by the deployment. These are actually called PodTemplate labels.
The selector provides uniqueness to your resource. It is used to identify the resources whose labels match the .spec.selector.matchLabels section.
Now, it is not mandatory to have all the PodTemplate labels in the matchLabels section. A pod can have many labels, but even a single label in matchLabels is enough to identify the pods. Here's a use case to understand why it has to be used:
"Let’s say you have deployment X which creates two pods with label nginx-pods and image nginx and another deployment Y which applies to the pods with the same label nginx-pods but uses images nginx:alpine. If deployment X is running and you run deployment Y after, it will not create new pods, but instead, it will replace the existing pods with nginx:alpine image. Both deployment will identify the pods as the labels in the pods matches the labels in both of the deployments .spec.selector.matchLabels"
Because Deployment.metadata.labels belong to the Deployment resource itself, while Deployment.spec.template.metadata.labels apply to the Pods that are handled by the Deployment controller. The Deployment controller knows which Pods belong to which Deployment based on the labels on the Pod resources.
This is why you have to specify the labels this way.
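As a quick way to see the same matching the controller performs, you can reuse the selector from the question's manifest on the command line (a sketch):
# lists exactly the Pods the api-backend Deployment considers its own
kubectl get pods -l app=api-backend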

how to get all replicaset names inside a container

Consider the following example provided in this doc.
What I'm trying to achieve is to see the names of the 3 replicas from inside the container.
Following this guide I was able to get the current pod name, but I also need the pod names of my replicas.
Ideally I would like to:
print(k8s.get_my_replicaset_names())
or
print(os.getenv("MY_REPLICASET"))
and have a result like:
[frontend-b2zdv,frontend-vcmts,frontend-wtsmm]
that is, the pod names of all the container's replicas (including the current container, of course), so I can eventually compare the current name against that list to get my index in it.
Is there any way to achieve this?
As you can read here, the Downward API is used to expose Pod and Container fields to a running Container:
There are two ways to expose Pod and Container fields to a running
Container:
Environment variables
Volume Files
Together, these two ways of exposing Pod and Container fields are
called the Downward API.
It is not meant to expose any information about other objects/resources, such as the ReplicaSet or Deployment that manages such a Pod.
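For reference, a minimal sketch of what the Downward API does cover, exposing the current Pod's own name as an environment variable (the Pod name, container name, image and variable name are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: app
    image: busybox                      # illustrative image
    command: ["sh", "-c", "echo $MY_POD_NAME && sleep 3600"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name      # only this Pod's own fields are available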
You can see exactly which fields the YAML manifest describing a running Pod contains by executing:
kubectl get pods <pod_name> -o yaml
An example fragment of its output may look as follows:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    <some annotations here>
  ...
  creationTimestamp: "2020-10-08T22:18:03Z"
  generateName: nginx-deployment-7bffc778db-
  labels:
    app: nginx
    pod-template-hash: 7bffc778db
  name: nginx-deployment-7bffc778db-8fzrz
  namespace: default
  ownerReferences: 👈
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet 👈
    name: nginx-deployment-7bffc778db 👈
...
As you can see, the metadata section contains ownerReferences, which in the above example holds one reference to the ReplicaSet object that manages this Pod. So you can get this particular ReplicaSet's name pretty easily, as it is part of the Pod's YAML manifest.
However, you cannot get information this way about other Pods managed by the same ReplicaSet.
Such information can only be obtained from the API server, e.g. by using the kubectl client or programmatically with direct calls to the API.
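For example, based on the output above, the sibling Pods of that ReplicaSet could be listed via the pod-template-hash label (a sketch; running this from inside the container additionally requires a ServiceAccount that is allowed to list pods):
kubectl get pods -n default \
  -l pod-template-hash=7bffc778db \
  -o jsonpath='{.items[*].metadata.name}'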

Kubernetes: Having same host name but different paths in ingresses in different namespaces in Kubernetes

I want to use the same hostname, e.g. example.com, in two different namespaces with different paths. For example, in namespace A I want example.com/clientA and in namespace B I want example.com/clientB. Any ideas on how to achieve this?
nginxinc has a Cross-Namespace Configuration feature that allows you to do exactly what you described.
You can also find prepared examples there with deployments, services, etc.
The only thing you most probably won't like: nginxinc is not free.
Also look here:
Cross-namespace Configuration: You can spread the Ingress configuration for a common host across multiple Ingress resources using Mergeable Ingress resources. Such resources can belong to the same or different namespaces. This enables easier management when using a large number of paths. See the Mergeable Ingress Resources example on our GitHub.
As an alternative to Mergeable Ingress resources, you can use VirtualServer and VirtualServerRoute resources for cross-namespace configuration. See the Cross-Namespace Configuration example on our GitHub.
If you do not want to change your default ingress controller (nginx-ingress), another option is to define a service of type ExternalName in your default namespace that points to the full internal service name of the service in the other namespace.
Something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-svc
  name: webapp
  namespace: default
spec:
  externalName: my-svc.my-namespace.svc # <-- put your service name with namespace here
  type: ExternalName
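An Ingress in the default namespace can then route both paths to such local ExternalName services. A sketch (host, paths, service names and port are illustrative; it assumes the ExternalName services also declare the port, and that a second service, webapp-b, is defined the same way for the other namespace):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-com
  namespace: default
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /clientA
        pathType: Prefix
        backend:
          service:
            name: webapp      # ExternalName service from the example above
            port:
              number: 80
      - path: /clientB
        pathType: Prefix
        backend:
          service:
            name: webapp-b    # second ExternalName service pointing at namespace B
            port:
              number: 80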

kubernetes Service select multi labels

I have two StatefulSets named my-sts and my-sts-a, and I want to create a single service that addresses same-indexed pods from the two different StatefulSets, like my-sts-0 and my-sts-a-0. But I found that the k8s docs say:
Labels selectors for both objects are defined in json or yaml files using maps, and only equality-based requirement selectors are supported
My idea is to create a label for the two sts pods like:
my-sts-0 has label abc:sts-0
my-sts-a-0 has label abc:sts-0
my-sts-1 has label abc:sts-1
my-sts-a-1 has label abc:sts-1
How can I get the index of those pods so that I can create a label like abc=sts-<index> to achieve this?
Is there any other way?
Kubernetes already gives you a DNS name to select individual StatefulSet pods. Say you have a Service my-sts that matches every pod in the StatefulSet, and the StatefulSet is set up with serviceName: my-sts; then you can access host names my-sts-0.my-sts.namespace.svc.cluster.local and so on.
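A minimal sketch of such a governing Service (the app: my-sts selector and the port are assumptions about the StatefulSet's pod template):
apiVersion: v1
kind: Service
metadata:
  name: my-sts            # must match the StatefulSet's serviceName
spec:
  clusterIP: None         # headless, so every pod gets its own DNS record
  selector:
    app: my-sts           # assumed label from the StatefulSet's pod template
  ports:
  - port: 80              # illustrative port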
If you specifically want a service to target one specific pod, there is also a statefulset.kubernetes.io/pod-name label that gets added automatically, so you can attach to that:
apiVersion: v1
kind: Service
metadata:
  name: my-sts-master
spec:
  selector:
    statefulset.kubernetes.io/pod-name: my-sts-0
  ports: [...]

What is the recommended way to get the pods of a Kubernetes deployment?

Especially considering all the asynchronous procedures involved with creating and updating a deployment, I find it difficult to reliably find the current pods associated with the current version of a given deployment.
Currently, I do:
1. Add unique labels to the deployment's template.
2. Get the revision number of the deployment.
3. Get all replica sets with the labels.
4. Filter them further to find the one with the correct revision number.
5. Extract the pod template hash from the replica set.
6. Get all pods with the labels plus the pod template hash.
This is awkward and complex. Besides, I am not sure that (4) and (6) are guaranteed to yield only the wanted objects. But I cannot filter by ownerReferences, can I?
Is there a more robust and simpler way?
When you create a Deployment, it creates a ReplicaSet, which creates Pods.
The ReplicaSet contains an "ownerReferences" path which includes the name and the UID of the parent Deployment.
Pods contain the same path with a link to their parent ReplicaSet.
Here is an example of ReplicaSet info:
# kubectl get rs nginx-deployment-569477d6d8 -o yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
...
  name: nginx-deployment-569477d6d8
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-deployment
    uid: acf5fe8a-5d0e-11e8-b14f-42010a8000fc
...
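Putting this together with kubectl, a sketch that reuses the names from the example above (the app=nginx label and the 569477d6d8 hash are assumptions about your setup):
# 1. Confirm which ReplicaSet the Deployment currently owns
kubectl get rs -l app=nginx \
  -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name

# 2. List the Pods of that revision via its pod-template-hash label
kubectl get pods -l app=nginx,pod-template-hash=569477d6d8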