I am trying to monitor an external service (a Cassandra metrics exporter) with prometheus-operator. I installed prometheus-operator using Helm 2.11.0, with this YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
and these commands on my kubernetes cluster:
kubectl create -f rbac-config.yml
helm init --service-account tiller --history-max 200
helm install stable/prometheus-operator --name prometheus-operator --namespace monitoring
Next, based on the article
how to monitor an external service
I tried to follow the steps described in it. As suggested, I created an Endpoints, a Service and a ServiceMonitor with the label for the existing Prometheus. Here are my YAML files:
apiVersion: v1
kind: Endpoints
metadata:
  name: cassandra-metrics80
  labels:
    app: cassandra-metrics80
subsets:
  - addresses:
      - ip: 10.150.1.80
    ports:
      - name: web
        port: 7070
        protocol: TCP
apiVersion: v1
kind: Service
metadata:
  name: cassandra-metrics80
  namespace: monitoring
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  externalName: 10.150.1.80
  ports:
    - name: web
      port: 7070
      protocol: TCP
      targetPort: 7070
  type: ExternalName
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: cassandra-metrics80
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app: cassandra-metrics80
      release: prometheus-operator
  namespaceSelector:
    matchNames:
      - monitoring
  endpoints:
    - port: web
      interval: 10s
      honorLabels: true
On the Prometheus service discovery page I can see that this service is not active and all its labels are dropped.
I tried numerous things to fix this, like setting targetLabels and relabeling the labels that are discovered, as described here: prometheus relabeling.
But unfortunately nothing works. What could be the issue, or how can I investigate it further?
OK, I found out that the Service should be in the same namespace as the ServiceMonitor and the Endpoints; after that, Prometheus started to see some metrics from Cassandra.
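A minimal sketch of that arrangement, assuming a selector-less Service paired with a manually created Endpoints object of the same name, both in the monitoring namespace (the IP and port are taken from the manifests above; the ExternalName type is not needed for this pattern):
apiVersion: v1
kind: Service
metadata:
  name: cassandra-metrics80
  namespace: monitoring
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  ports:
    - name: web
      port: 7070
      targetPort: 7070
---
apiVersion: v1
kind: Endpoints
metadata:
  name: cassandra-metrics80   # must match the Service name
  namespace: monitoring
  labels:
    app: cassandra-metrics80
subsets:
  - addresses:
      - ip: 10.150.1.80
    ports:
      - name: web
        port: 7070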
When using the kube-prometheus-stack Helm chart, it can be done as follows:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: external
        metrics_path: /metrics
        static_configs:
          - targets:
              - <IP>:<PORT>
To be strict, only the Endpoints and the Service need to be in the same namespace.
Additionally, the Endpoints and the Service should have the same name, as Lucas mentioned before.
The ServiceMonitor can be placed anywhere; it finds and scrapes the Service/Endpoints inside the defined namespaces (namespaceSelector -> matchNames) that match all the labels (selector -> matchLabels):
spec:
  selector:
    matchLabels:
      app: cassandra-metrics80
      release: prometheus-operator
  namespaceSelector:
    matchNames:
      - my-namespace
Furthermore, there is now a much easier method to define additional scrape configuration:
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/additional-scrape-config.md
The only drawback of the second approach is that it requires a pod restart after a change. Configuration based on Endpoints/Service/ServiceMonitor seems to be discovered and applied automatically.
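A rough sketch of that method, assuming a secret named additional-scrape-configs holding a file prometheus-additional.yaml (the names are illustrative; the linked document is authoritative):
# prometheus-additional.yaml
- job_name: external-cassandra
  static_configs:
    - targets:
        - 10.150.1.80:7070
Create the secret and reference it from the Prometheus resource:
kubectl create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml -n monitoring
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml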
Related
I have a problem with the Kubernetes API and CRDs. While creating a deployment with a single nginx pod, I would like to access it on port 80 from a remote server, and locally as well. When I run kubectl get pods I see the pod in a Pending state; after around 40 seconds on average the pod disappears and a pod with a different nginx name starts up. This seems to be in a loop.
The error is:
W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
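For reference, the command that message suggests would take roughly this shape, using the ServiceAccount from the manifests below (the rolebinding name here is only a placeholder):
kubectl create rolebinding crd-auth-reader \
  -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=example:user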
I was following this article about service accounts and roles,
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created this correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve the error above?
All of my .yaml files for this are below,
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
      - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pod should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
        # Run this image
        - image: nginx:1.14
          name: nginx
          ports:
            - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: true
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              resource:
                kind: nginx-service
                name: nginx-deployment
              #service:
              #  name: nginx
              #  port: 80
              #serviceName: nginx
              #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
  - apiGroups: ["example.com"]
    resources: ["ResourceAll"]
    verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
  - kind: ServiceAccount
    name: user
    namespace: example
I have a ready-made Kubernetes cluster with Grafana + Prometheus (operator) monitoring configured.
I added the following labels to the pods running my app:
prometheus.io/scrape: "true"
prometheus.io/path: "/my/app/metrics"
prometheus.io/port: "80"
But the metrics don't get into Prometheus, even though Prometheus has all the default Kubernetes metrics.
What is the problem?
You should create ServiceMonitor or PodMonitor objects.
A ServiceMonitor describes the set of targets to be monitored by Prometheus. The Operator automatically generates the Prometheus scrape configuration based on the definition, and the targets will be the IPs of all the pods behind the service.
Example:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: web
A PodMonitor declaratively specifies how groups of pods should be monitored. The Operator automatically generates the Prometheus scrape configuration based on the definition.
Example:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
    - port: web
I've followed the documentation on how to enable IAP on GKE.
I've:
configured the consent screen
created OAuth credentials
added the universal redirect URL
added myself as an IAP-secured Web App User
And wrote my deployment like this:
data:
  client_id: <my_id>
  client_secret: <my_secret>
kind: Secret
metadata:
  name: backend-iap-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
        - env:
            - name: GF_SERVER_HTTP_PORT
              value: "3000"
          image: docker.io/grafana/grafana:6.7.1
          name: grafana
          ports:
            - containerPort: 3000
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: monitoring-tls
spec:
  domains:
    - monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
spec:
  backend:
    serviceName: grafana
    servicePort: 443
When I look at my ingress I see this:
$ k describe ingress
Name: grafana
[...]
Annotations: beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events: <none>
$
I can connect to the web page without any problem and Grafana is up and running, but I can also connect without being authenticated (which is a problem).
So everything looks fine, but IAP is not activated. Why?
Worse, if I enable it manually it works, but if I redo kubectl apply -f monitoring.yaml IAP is disabled again.
What am I missing?
Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secrets of containing glitches (spaces, \n, etc.), so I added a script to test them:
gcloud compute backend-services update \
--project=<my_project_id> \
--global \
$(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
--iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
And now IAP is properly enabled with the correct OAuth client, so my secrets are "clean".
By the way, I also tried renaming the secret keys like this (from client_id):
* oauth_client_id
* oauth-client-id
* clientID (like in the backend documentation)
I've also written the values directly in the BackendConfig like this:
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
      clientID: <value>
      clientSecret: <value>
But that doesn't work either.
Erratum:
The fact that IAP is disabled when I deploy again (after enabling it in the web UI) was caused by my deployment script in this test (it ran kubectl delete first).
Nevertheless, I can't enable IAP with my BackendConfig alone.
As suggested I've filed a bug report: https://issuetracker.google.com/issues/153475658
Solution given by Totem:
Change the given YAML to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
The backend config is associated with the Service and not the Ingress...
Now it works!
You did everything right, with just one small change:
the annotation should be added on the Service resource.
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
  name: grafana
Usually you need to associate it with a port, so I've added that in the example above, but make sure it works with 443 as expected.
This is based on an internal example I'm using:
beta.cloud.google.com/backend-config: '{"ports": { "3000": "be-cfg"}}'
I have the TICK stack deployed in my Kubernetes cluster for monitoring purposes. My application pushes its custom data to it.
I have tried horizontal pod autoscaling using custom metrics with the help of the Prometheus adapter. I was curious whether there is such an adapter for InfluxDB as well.
The popular Kubernetes custom metrics adapters do not include an InfluxDB one. Is there a way I can use my current infrastructure (containing InfluxDB) to autoscale pods using custom metrics from my application?
Why not use custom metrics from Prometheus with the influxdb_exporter? I don't see why it should not work.
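As a rough sketch of that route, a scrape job pointed at an influxdb_exporter instance could look like this (the job name, Service address and port are assumptions; check how the exporter is actually deployed in your cluster):
scrape_configs:
  - job_name: influxdb-exporter                       # hypothetical job name
    static_configs:
      - targets:
          - influxdb-exporter.monitoring.svc:9122     # assumed Service name and exporter port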
It is possible to use InfluxDB with Heapster; below are some files that I set up to make this easy.
First, run influxdb.yaml.
Second, run heapster-rbac.yaml.
Third, run heapster.yaml.
influxdb.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
        - name: influxdb
          image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
          volumeMounts:
            - mountPath: /data
              name: influxdb-storage
      volumes:
        - name: influxdb-storage
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
    - port: 8086
      targetPort: 8086
  selector:
    k8s-app: influxdb
heapster-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: heapster
    namespace: kube-system
heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
        - name: heapster
          image: k8s.gcr.io/heapster-amd64:v1.5.4
          imagePullPolicy: IfNotPresent
          command:
            - /heapster
            - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
            - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 8082
  selector:
    k8s-app: heapster
InfluxDB can be used with Heapster (as pointed out by LucasSales), but Heapster is deprecated in current versions of Kubernetes.
For the latest versions of Kubernetes there is the metrics server for basic CPU/memory metrics, and Prometheus is the accepted third-party monitoring tool, especially for things like custom metrics.
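For context, once an adapter exposes the custom metrics through the custom metrics API, a HorizontalPodAutoscaler that scales on them looks roughly like this (the Deployment name, metric name and target value are illustrative):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                           # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # hypothetical custom metric served by the adapter
        target:
          type: AverageValue
          averageValue: "100"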
I have a bare-metal Kubernetes (v1.11.0) cluster created with kubeadm, working fine without any issues. Networking is with Calico, and I made it a single-node cluster using the kubectl taint nodes command (a single node is a requirement).
I need to run mydockerhub/sampleweb static website image on host port 80. Assume the IP address of the ubuntu server running this kubernetes is 192.168.8.10.
How to make my static website available on 192.168.8.10:80 or a hostname mapped to it on local DNS server? (Example: frontend.sampleweb.local:80). Later I need to run other services on different port mapped to another subdomain. (Example: backend.sampleweb.local:80 which routes to a service run on port 8080).
I need to know:
Can I achieve this without a load balancer?
What resources do I need to create? (ingress, deployment, etc.)
What additional configuration is needed on the cluster? (network policy, etc.)
Sample YAML files would be much appreciated.
I'm new to the Kubernetes world. I got sample Kubernetes deployments (like sock-shop) working end-to-end without any issues. I tried a NodePort to access the service, but instead of running it on a different port I need to run it on exactly port 80 on the host. I tried many ingress solutions, but none of them worked.
I recently used traefik.io to configure a project with similar requirements to yours.
So I'll show a basic solution with traefik and ingresses.
I dedicated a whole namespace (you can use kube-system) called traefik, and created a Kubernetes ServiceAccount:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: traefik
  name: traefik-ingress-controller
The traefik controller, which is invoked by ingress rules, requires a ClusterRole and its binding:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    namespace: traefik
    name: traefik-ingress-controller
The traefik controller will be deployed as a DaemonSet (i.e. by definition one per node in your cluster), and a Kubernetes Service is dedicated to the controller:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - name: traefik-ingress-lb
          image: traefik
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: admin
              containerPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  namespace: traefik
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
The final part requires you to create a Service for each microservice in your project; here is an example:
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
and also the Ingress (a set of rules) that will forward requests to the proper service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: traefik
  name: ingress-ms-1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: my-address-url
      http:
        paths:
          - backend:
              serviceName: my-svc-1
              servicePort: 80
In this Ingress I wrote a host URL; this will be the entry point into your cluster, so you need to resolve that name to your master K8s node. If you have more nodes that could be master, then a load balancer is suggested (in that case the host URL will point to the LB).
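For a quick test without a DNS server, a hosts-file entry on the client machine is enough for that resolution (the IP and hostnames below reuse the example values from the question):
# /etc/hosts on the client machine
192.168.8.10   frontend.sampleweb.local backend.sampleweb.local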
Take a look at the kubernetes.io documentation to get the Kubernetes concepts clear. traefik.io is also useful.
I hope this helps you.
In addition to the answer of Nicola Ben, you have to define externalIPs in your traefik service. Just follow the steps of Nicola Ben and add an externalIPs section to the service "my-svc-1":
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - <IP_OF_A_NODE>
And you can define more than one externalIP.