How to specify a not-equal selector in a Kubernetes service definition YAML?

I am trying to create a service for a set of pods based on certain selectors. For example, the below get pods command retrieves the right pods for my requirement -
kubectl get pods --selector property1=dev,property2!=admin
Below is an extract of the service definition YAML where I am attempting to use the same selectors as above -
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
    - name: port1
      port: 30303
      targetPort: 30303
  selector:
    property1: dev
    << property2: ???? >>
I have tried matchExpressions without realizing that service is not among the resources that support set-based filters. It resulted in the following error -
error: error validating "STDIN": error validating data: ValidationError(Service.spec.selector.matchExpressions): invalid type for io.k8s.api.core.v1.ServiceSpec.selector: got "array", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
I am running upstream Kubernetes 1.12.5

I've done some testing, but I am afraid it is not possible. As per the docs, the API supports two types of selectors:
Equality-based
Set-based
kubectl allows the equality-based operators =, == and !=. That is why it works when you run $ kubectl get pods --selector property1=dev,property2!=admin.
The configuration you want to apply would work with set-based selectors, which support in, notin and exists:
environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
Unfortunately, set-based selectors are supported only by newer resources such as Job, Deployment, ReplicaSet and DaemonSet; Services do not support them.
More information about this can be found here.
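For contrast, here is a minimal sketch of a resource that does support set-based selectors (the label values below are made up for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchExpressions:
      # set-based operators In / NotIn / Exists are allowed here
      - {key: property1, operator: In, values: [dev]}
      - {key: property2, operator: NotIn, values: [admin]}
  template:
    metadata:
      labels:
        property1: dev
        property2: user   # any value other than admin satisfies the NotIn
    spec:
      containers:
        - name: app
          image: nginx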
Even if you set the selector in the YAML as:
  property2: !value
the Service will end up with property2 carrying no value at all:
  Selector: property1=dev,property2=
As additional information, the comma (,) is interpreted as a logical AND in Service selectors.
As I am not aware of how you are managing your cluster, the only thing I can advise is to redefine your labels so that the Service only needs equality matches combined with AND.
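If relabeling is an option, a minimal sketch of the resulting Service could look like this (role: member is a made-up label that replaces the property2 != admin condition; every non-admin pod would need to carry it):

apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
    - name: port1
      port: 30303
      targetPort: 30303
  selector:
    property1: dev
    role: member   # equality-only replacement for property2 != admin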

Related

How to collect log data of a specific namespace in OpenShift?

I have a cluster with many namespaces.
I'm trying to collect log data from a specific namespace in my OpenShift cluster, but it is collecting the data from all namespaces. I tried to follow the OpenShift documentation on logging, but there is no mention of scoping the log data.
I followed this documentation:
https://docs.openshift.com/container-platform/4.7/logging/cluster-logging.html
I'm using fluentd as the log collector.
With Cluster Logging on OpenShift, you can forward logs from selected namespaces, or from Pods matching a label you choose. *1
A sample CR that forwards logs in the my-project namespace to the Elasticsearch instance deployed by Cluster Logging could look as follows:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
inputs:
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: my-app
inputRefs:
- my-app-logs
outputRefs:
- default
You can customize the inputs field as you want; it can also select specific Pods using a matchLabels expression. *2
The default entry under outputRefs means the logs are sent to the default Elasticsearch instance deployed by Cluster Logging.
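For example, a minimal sketch of an input that selects Pods by label instead of by namespace, following the doc in *2 (the app: my-app label is an assumption):

spec:
  inputs:
    - name: my-labelled-app-logs
      application:
        selector:
          matchLabels:
            app: my-app   # assumed label on the Pods you want to collect from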
*1: https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-external.html
*2: https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-external.html#cluster-logging-collector-log-forward-logs-from-application-pods_cluster-logging-external

service selector vs deployment selector matchlabels

I understand that services use a selector to identify which pods to route traffic to by their labels.
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  ports:
    - name: tcp
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: nginx
That's all well and good.
Now, what is the difference between this selector and the spec.selector of a Deployment? I understand that it is used so that the Deployment can match and manage its Pods.
What I don't understand is why I need the extra matchLabels declaration and can't just write it like in the Service. What is the use of this, semantically?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
Thanks in advance
In the Service's spec.selector, you can identify which Pods to route traffic to only by their labels.
In the Deployment's spec.selector, on the other hand, you have two ways to select which Pods the Deployment should manage: matchLabels and matchExpressions.
How Deployment uses spec.selector
When a Deployment's Pod template is changed, a new ReplicaSet is created. The ReplicaSet is responsible for managing the Pods, and it uses the spec.selector to know which Pods it should manage.
Example:
If replicas: 1 is changed in the Deployment to, e.g., replicas: 2, the ReplicaSet observes the Pods using spec.selector to find Pods with matching labels. It only sees 1 replica initially, but its desired state is now replicas: 2, so it is responsible for creating one additional Pod from the template in the Deployment.
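A quick way to observe this relationship, assuming the nginx Deployment from the question:

# List the ReplicaSet(s) the Deployment created, using the same label
kubectl get replicaset -l app=nginx
# Show the selector the Deployment actually uses
kubectl describe deployment nginx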
Selector syntax
There are two ways to declare the labels under spec.selector in a Deployment:
matchLabels - you declare the labels directly
matchExpressions - you write an expression over labels
See kubectl explain deployment.spec.selector for full explanation of spec.selector alternatives.
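For illustration, the two selector forms below match the same Pods; matchLabels is shorthand for a matchExpressions entry with the In operator and a single value:

selector:
  matchLabels:
    app: nginx

# ...is equivalent to:

selector:
  matchExpressions:
    - {key: app, operator: In, values: [nginx]}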
Labels and Selectors
Labels and selectors are a generic concept in Kubernetes and are used in multiple places. Another example is how you can filter which resources you want to see or act on with kubectl. E.g. you can select the Pods for an app with:
kubectl get pod -l app=myappname
(if your Pods are labelled with app: myappname).
why i need the extra matchLabels declaration and cant just do it like in the service. Whats the use of this semantically?
Because the Service spec only supports equality-based selectors, while the Deployment is a newer resource that supports both syntaxes (equality-based and set-based).
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (&&) operator.
Reference
The Service spec uses just the "equality-based" label selector syntax.
Newer resources, such as Job, Deployment, ReplicaSet, and DaemonSet, support set-based requirements...
Reference
My understanding is that earlier the only supported syntax was the equality-based one, as we have in the Service spec, and that nowadays, when the resource you are using supports the newer syntax, you are required to be explicit and use matchLabels or matchExpressions.

prometheus operator - enable monitoring for everything in all namespaces

I want to monitor a couple applications running on a Kubernetes cluster in namespaces named development and production through prometheus-operator.
Installation command used (as per Github) is:
helm install prometheus-operator stable/prometheus-operator -n production --set prometheusOperator.enabled=true,prometheus.service.type=NodePort,prometheusOperator.service.type=NodePort,alertmanager.service.type=NodePort,grafana.service.type=NodePort,grafana.service.nodePort=30906
What parameters do I need to add to above command to have prometheus-operator discover and monitor all apps/services/pods running in all namespaces?
With this, Service Discovery only shows some prometheus-operator related services, but not the app that I am running within the 'production' namespace, even though prometheus-operator is installed in the same namespace.
Anything I am missing?
Note - I am performing all actions as the same user (which uses the $HOME/.kube/config file), so I assume permissions are not an issue.
kubectl version - v1.17.3
helm version - 3.1.2
P.S. There are numerous articles on this on different forums, but I am still not finding a simple and direct answer.
I had the same problem. After some investigation, I am answering with more details.
I've installed the Prometheus stack via Helm charts, which include the Prometheus operator chart directly as a sub-project. The Prometheus operator monitors the namespaces specified by the following Helm values:
prometheusOperator:
  namespaces: ''
  denyNamespaces: ''
  prometheusInstanceNamespaces: ''
  alertmanagerInstanceNamespaces: ''
  thanosRulerInstanceNamespaces: ''
The namespaces value specifies the monitored namespaces for the ServiceMonitor and PodMonitor CRDs. Other CRDs have their own settings which, if not set, default to namespaces. Helm values are passed as command-line arguments to the operator. See here and here.
Prometheus CRs are picked up by the operator from the mentioned namespaces - by default, everywhere. However, as the operator is designed with multiple simultaneous Prometheus releases in mind, what a particular Prometheus app instance picks up is controlled by the corresponding Prometheus CRD. The CRD selectors and the corresponding namespace selectors are controlled via the following Helm values:
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: true
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}
Similar values are present for other CRDs: alertmanagerConfigXXX, ruleNamespaceXXX, podMonitorXXX, probeXXX. Setting XXXSelectorNilUsesHelmValues to true means looking only for CRs that carry a particular release label, e.g. release=myrelease. See here.
An empty selector (for a namespace, CRD, or any other object) means no filtering. So for the Prometheus object to pick up a ServiceMonitor from other namespaces, there are a few options:
Set serviceMonitorSelectorNilUsesHelmValues: false. This leaves serviceMonitorSelector empty.
Apply the release label, e.g. release=myrelease, to your ServiceMonitor CRD.
Set a non-empty serviceMonitorSelector that matches your ServiceMonitor.
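For example, a minimal values.yaml sketch for the first option (the keys follow the chart values quoted above):

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorNamespaceSelector: {}   # empty selector = no namespace filtering

For the second option you would instead add release: myrelease under metadata.labels of your ServiceMonitor, where myrelease is your Helm release name.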
For the curious, here are links to the operator sources:
Enqueue of Prometheus CRD processing
Processing of Prometheus CRD
I used the values.yaml from https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml, changed the *NilUsesHelmValues parameters to false, and it seems to work fine with that.
helm install prometheus-operator stable/prometheus-operator -n monitoring -f values.yaml
Also, as https://stackoverflow.com/users/7889479/anish-kumar-mourya stated, the services do show in the Grafana dashboard even though they don't appear in the Prometheus UI under Service Discovery or Targets.
Hope this helps other newbies like me.
It's fine as-is, but you could also create a new namespace for monitoring and install Prometheus there; that makes it easier to manage everything related to monitoring.
helm install prometheus-operator stable/prometheus-operator -n monitoring
You need to create a Service for the Pod and a ServiceMonitor custom resource to configure which Services in which namespaces should be discovered by Prometheus.
kube-state-metrics Service example
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kube-state-metrics
    k8s-app: kube-state-metrics
  annotations:
    alpha.monitoring.coreos.com/non-namespaced: "true"
  name: kube-state-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      targetPort: metrics
      protocol: TCP
  selector:
    app: kube-state-metrics
This Service targets all Pods with the label app: kube-state-metrics (its spec.selector).
Generic ServiceMonitor example
This ServiceMonitor targets all Services that carry the label k8s-app (spec.selector) with any value, in the namespaces kube-system and monitoring (spec.namespaceSelector).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: k8s-apps-http
  labels:
    k8s-apps: http
spec:
  jobLabel: k8s-app
  selector:
    matchExpressions:
      - {key: k8s-app, operator: Exists}
  namespaceSelector:
    matchNames:
      - kube-system
      - monitoring
  endpoints:
    - port: http-metrics
      interval: 15s
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md

Error Prometheus endpoint for checking AlertManager

I installed Prometheus (following this link: https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/).
But when checking the status of Targets, it shows "Down" for the AlertManager service; every other endpoint is up (please see the attached file).
Then I checked Service Discovery; the discovered labels show:
"address="192.168.180.254:9093"
__meta_kubernetes_endpoint_address_target_kind="Pod"
__meta_kubernetes_endpoint_address_target_name="alertmanager-6c666985cc-54rjm"
__meta_kubernetes_endpoint_node_name="worker-node1"
__meta_kubernetes_endpoint_port_protocol="TCP"
__meta_kubernetes_endpoint_ready="true"
__meta_kubernetes_endpoints_name="alertmanager"
__meta_kubernetes_namespace="monitoring"
__meta_kubernetes_pod_annotation_cni_projectcalico_org_podIP="192.168.180.254/32"
__meta_kubernetes_pod_annotationpresent_cni_projectcalico_org_podIP="true"
__meta_kubernetes_pod_container_name="alertmanager"
__meta_kubernetes_pod_container_port_name="alertmanager"
__meta_kubernetes_pod_container_port_number="9093"
But Target Labels show another port (8080), I don't know why:
instance="192.168.180.254:8080"
job="kubernetes-service-endpoints"
kubernetes_name="alertmanager"
kubernetes_namespace="monitoring"
First, if you want to install Prometheus and Grafana without getting sick, you need to do it through Helm.
First install Helm, and then:
helm install installationWhatEverName stable/prometheus-operator
I've reproduced your issue on GCE.
If you are using version 1.16+, you have probably changed the apiVersion, as the tutorial has the Deployment in extensions/v1beta1. Since K8s 1.16 you need to change it to apiVersion: apps/v1. Otherwise you will get an error like:
error: unable to recognize "STDIN": no matches for kind "Deployment" in version "extensions/v1beta1"
Second, in 1.16+ you need to specify a selector. If you do not, you will receive another error:
error: error validating "STDIN": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
It would look like:
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      ...
Regarding port 8080, please check this article with an example.
Port: Port is the port number which makes a service visible to
other services running within the same K8s cluster. In other words,
in case a service wants to invoke another service running within the
same Kubernetes cluster, it will be able to do so using port specified
against “port” in the service spec file.
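To illustrate the quoted distinction, here is a hypothetical Service where the two ports differ (the names and numbers are assumptions, not taken from your setup):

apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  selector:
    app: alertmanager
  ports:
    - name: alertmanager
      port: 8080        # port other workloads use to reach the Service
      targetPort: 9093  # port the alertmanager container listens on

A mismatch like this is one way a target can end up labelled with port 8080 while the container itself listens on 9093.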
It worked in my environment in GCE. Did you configure firewall rules for your endpoints?
In addition, in Helm 3 some hooks were deprecated. You can find this information here.
If you still have the issue, please provide your YAMLs with the changes for version 1.16+ applied.

Error from server (NotFound): replicationcontrollers "kubia-liveness" not found

I have created a Pod using the below YAML.
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
    - image: luksa/kubia-unhealthy
      name: kubia
      livenessProbe:
        httpGet:
          path: /
          port: 8080
Then I created the Pod using the below command.
$ kubectl create -f kubia-liveness-probe.yaml
It created a pod successfully.
Then I tried to create a LoadBalancer Service to access it from the external world.
For that I'm using the below command.
$ kubectl expose rc kubia-liveness --type=LoadBalancer --name kubia-liveness-http
For this, I'm getting the below error.
Error from server (NotFound): replicationcontrollers "kubia-liveness" not found
I'm not sure how to create ReplicationControllers. Could anybody please give me the command to do that?
You are mixing two approaches here. One is creating resources from a YAML definition, which is fine by itself (but bear in mind that it is really rare to create a bare Pod rather than a Deployment or ReplicationController). The other is exposing via the CLI, which makes some assumptions (i.e. it expects a replication controller) and creates the appropriate Service based on them. My suggestion would be to create the Service from a YAML manifest as well, so you can tailor it to fit your case.
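As a minimal sketch of that approach - assuming you first add a label to the Pod, since the Pod in your YAML has none (app: kubia-liveness below is a made-up label):

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
  labels:
    app: kubia-liveness   # added so a Service selector can match this Pod
spec:
  containers:
    - image: luksa/kubia-unhealthy
      name: kubia
      livenessProbe:
        httpGet:
          path: /
          port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kubia-liveness-http
spec:
  type: LoadBalancer
  selector:
    app: kubia-liveness
  ports:
    - port: 80
      targetPort: 8080

Once the Pod is labelled, kubectl expose pod kubia-liveness --type=LoadBalancer --port=8080 --name kubia-liveness-http would also work, since expose can then derive a selector from the Pod's labels.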