Using namespaceSelector - kubernetes

I have a ValidatingAdmissionWebhook with a namespaceSelector and an objectSelector, plus a ConfigMap.
I'm trying to trigger the ValidatingAdmissionWebhook when the ConfigMap is updated (the 'UPDATE' operation).
This is part of the ValidatingAdmissionWebhook:
webhooks:
- name: myWebhook
  ***
  namespaceSelector:
    matchLabels:
      namespace-label: namespace
  objectSelector:
    matchLabels:
      object-label: object
  rules:
  - operations: ['UPDATE']
    apiGroups: ***
    apiVersions: ***
    resources: ['configmaps']
This is part of the configmap:
data:
  data1: 'somedata'
metadata:
  name: myConfigmap
  namespace: test
  labels:
    object-label: object
When I remove the namespaceSelector from the ValidatingAdmissionWebhook, it catches the UPDATE on the ConfigMap, which is fine.
But I can't figure out how/where to add a namespaceSelector label for the ConfigMap so that it gets caught.
I tried to put it as part of the labels, but with no success:
data:
  data1: 'somedata'
metadata:
  name: myConfigmap
  namespace: test
  labels:
    object-label: object
    namespace-label: namespace  # <----
If the namespaceSelector is a labelSelector kind, I'm not sure how to use it.
Many thanks.

According to the K8s documentation, this is how Namespace Selectors work:
The namespaceSelector decides whether to run the webhook on a request
for a namespaced resource (or a Namespace object), based on whether
the namespace's labels match the selector.
For your example to work, make sure to label the namespace your ConfigMap belongs to with namespace-label: namespace.
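For instance, since the ConfigMap above is in the test namespace, the label can be applied with kubectl:
kubectl label namespace test namespace-label=namespace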

Related

kubernetes pod deployment not updating

I have a pod egress-operator-controller-manager created from a Makefile by the command make deploy IMG=my_azure_repo/egress-operator:v0.1.
The pod's description was showing an unexpected status: 401 Unauthorized error, so I created an imagePullSecrets entry and am trying to update the pod's Deployment with that secret via its deployment YAML [egress-operator-manager.yaml] file. But when I apply the YAML file it gives the error below:
root@Ubuntu18-VM:~/egress-operator# kubectl apply -f /home/user/egress-operator-manager.yaml
The Deployment "egress-operator-controller-manager" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"moduleId":"egress-operator"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
egress-operator-manager.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-operator-controller-manager
  namespace: egress-operator-system
  labels:
    moduleId: egress-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      moduleId: egress-operator
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        moduleId: egress-operator
    spec:
      containers:
      - image: my_azure_repo/egress-operator:v0.1
        name: egress-operator
      imagePullSecrets:
      - name: mysecret
Can someone let me know how I can update this pod's Deployment?
Delete the Deployment once and try applying the YAML again.
This happens because the label selector of a Deployment is immutable: once the Deployment has been created, its spec.selector cannot be updated, so you have to delete the existing Deployment before re-applying it.
Changing selectors leads to undefined behaviors - users are not expected to change the selectors
https://github.com/kubernetes/kubernetes/issues/50808
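For example, using the names from the YAML above, the sequence would roughly be:
kubectl delete deployment egress-operator-controller-manager -n egress-operator-system
kubectl apply -f /home/user/egress-operator-manager.yaml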

Istio: sidecar EnvoyFilter workloadSelector not filtering

I'm having an issue where two EnvoyFilters with different workloadSelectors, which are supposed to apply to different workloads, are instead both being applied to both workloads.
More specifically, I'm using Istio 1.4.9 and I have two instances of the same deployment in two different namespaces, each with a sidecar. Each deployment has different labels applied.
kubectl get po --show-labels --all-namespaces -l app=myapp,namespace
NAMESPACE NAME ...truncated... LABELS
first myapp-58489c8fcd-kch9f ...truncated... app=myapp,namespace=first ...truncated...
second myapp-6f58dd65dd-tdjm7 ...truncated... app=myapp,namespace=second ...truncated...
I want to attach a different instance of a Lua EnvoyFilter to each workload in each namespace, so each has its own filter. So, for example, the filter for the first namespace looks like the following; the second is similar but with a different workloadSelector:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: first-myapp-filter
  ...truncated...
spec:
  workloadSelector:
    labels:
      app: myapp
      namespace: first
However, I'm seeing that a given workload is processing BOTH envoyfilters instead of only the filter that is matched by the selector. When I look at the listeners on the pod in the first namespace with istioctl, it has BOTH filters attached.
"httpFilters": [
{"name": "envoy.lua", "config": {"inlineCode": "function ...truncated... end\n" }},
{"name": "envoy.lua", "config": {"inlineCode": "function ...truncated... end\n" }}
The selector doesn't seem to be working the way I expect. Any ideas on how to debug this?
The only thing I can think of is that you defined your EnvoyFilters in the config root namespace, where the workloadSelector is ignored.
From the docs:
NOTE 3: To apply an EnvoyFilter resource to all workloads (sidecars and gateways) in the system, define the resource in the config root namespace, without a workloadSelector.
Try creating 2 EnvoyFilters, one in each namespace where your workloads exist, and delete the original EnvoyFilters. Like this:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: first-myapp-filter
  namespace: first
spec:
  workloadSelector:
    labels:
      app: myapp
      namespace: first
...
and
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: second-myapp-filter
  namespace: second
spec:
  workloadSelector:
    labels:
      app: myapp
      namespace: second
...
Note: you might also want to try different labels. For example, app: myapp1, app: myapp2.
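After moving the filters, you can re-inspect the listeners with istioctl to confirm that only one Lua filter is attached to each pod (the pod name below is taken from the output in the question):
istioctl proxy-config listeners myapp-58489c8fcd-kch9f -n first -o json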

create a custom resource in kubernetes using generateName field

I have a sample crd defined as
crd.yaml
kind: CustomResourceDefinition
metadata:
  name: testconfig.demo.k8s.com
  namespace: testns
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testconfigs
    singular: testconfig
    kind: TestConfig
I want to create a custom resource based on the above CRD, but I don't want to assign a fixed name to the resource; rather, I want to use the generateName field. So I wrote the cr.yaml below, but when I apply it, I get an error that the name field is mandatory.
kind: TestConfig
metadata:
  generateName: test-name-
  namespace: testns
spec:
  image: testimage
Any help is highly appreciated.
You should use kubectl create to create your CR with generateName.
"kubectl apply will verify the existence of the resources before take action. If the resources do not exist, it will firstly create them. If use generateName, the resource name is not yet generated when verify the existence of the resource." source

kubernetes - where is the official api documentation that says whether matchLabels is mandatory or not

Today I was going through some documentation and discussions about the matchLabels statement that is part of a Deployment (or other objects) in Kubernetes. Example below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
In some discussions, I saw that depending on the version of the API it could be optional or mandatory to use this selector.
Ref:
https://github.com/helm/charts/issues/7680
What is the purpose of a kubernetes deployment pod selector?
But I can't see any official documentation that states explicitly whether the usage of this selector is mandatory for a specific version of the Kubernetes API. Do you know of any official documentation that states whether or not the matchLabels selector is mandatory?
I have checked these links but did not find an official statement:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deploymentspec-v1beta2-apps
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
kubectl explain deploy.spec.selector --api-version=apps/v1
Label selector for pods. Existing ReplicaSets whose pods are selected by
this will be the ones affected by this deployment. It must match the pod
template's labels.
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/apps/v1/types.go#L276-L279
Selector *metav1.LabelSelector `json:"selector" protobuf:"bytes,2,opt,name=selector"`
The lack of +optional above this line tells you it's mandatory. It matches up with the error message you'll get if you try to make a deployment without one.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      dnsPolicy: ClusterFirst
      restartPolicy: Always
EOF
error: error validating "STDIN": error validating data: ValidationError(Deployment.spec): missing
required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these
errors, turn validation off with --validate=false
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/types.go#L1076-L1085
type LabelSelector struct {
    // matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
    // map is equivalent to an element of matchExpressions, whose key field is "key", the
    // operator is "In", and the values array contains only "value". The requirements are ANDed.
    // +optional
    MatchLabels map[string]string `json:"matchLabels,omitempty" protobuf:"bytes,1,rep,name=matchLabels"`
    // matchExpressions is a list of label selector requirements. The requirements are ANDed.
    // +optional
    MatchExpressions []LabelSelectorRequirement `json:"matchExpressions,omitempty" protobuf:"bytes,2,rep,name=matchExpressions"`
}
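So spec.selector itself is required, but matchLabels inside it is marked +optional because a selector can equivalently be written with matchExpressions. A minimal sketch (not from the answer above) of the same nginx selector expressed that way:
selector:
  matchExpressions:
  - key: app
    operator: In
    values: ["nginx"]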

Reusable Pod Templates

Is it possible in Kubernetes to create a pod template and reuse it later when specifying a pod within a deployment? For example:
Say I have a pod template...
apiVersion: v1
kind: PodTemplate
metadata:
  name: my-pod-template
template:
  metadata:
    labels:
      app: "my-app"
  spec:
    containers:
    - name: my-app
      image: jwaldrip/my-app:latest
Could I then use it in a deployment like so?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    metadata:
      name: my-pod-template
This would be super helpful when deploying something like Jobs, where I want to own the creation of a job with the given template.
There is not.
Specifically in the case of Pods, there are PodPresets:
https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
But those don't apply to other objects.
One way to enforce the shape or attributes of arbitrary objects is to establish tooling that correctly creates those objects, then create credentials for that tooling, and use RBAC to only allow those credentials to create those objects.
https://kubernetes.io/docs/admin/authorization/rbac/
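A minimal sketch of that RBAC piece, assuming a hypothetical tooling ServiceAccount named job-tooling in namespace my-namespace that should only be allowed to create Jobs:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator        # illustrative names; adapt to your tooling
  namespace: my-namespace
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-creator-binding
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: job-tooling
  namespace: my-namespace
roleRef:
  kind: Role
  name: job-creator
  apiGroup: rbac.authorization.k8s.io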
Another way would be to create an Admission Controller to watch the attempted creation of the desired objects, and verify/reject those that don't meet the criteria:
https://kubernetes.io/docs/admin/admission-controllers/