Import custom dashboard template to grafana and update prometheus scrape config using istioctl - kubernetes

I'm using istioctl 1.6.8, and with the command istioctl install --set profile=demo --file istio-config.yaml I was able to deploy istio to my cluster with grafana and prometheus enabled. My istio-config.yaml file looks like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  values:
    grafana:
      security:
        enabled: true
I have some grafana dashboard json files which I need to import into the newly installed grafana, and for these dashboards to work I have to add some exporter details to my prometheus scrape config.
My question:
Apart from importing the dashboards via the grafana UI, is there any way I could do this by passing the relevant details to my istio-config.yaml? If not, can anyone suggest another approach?
(One approach I have in mind is to overwrite the existing resources with custom yaml using kubectl apply -f.)
Thanks in advance!

You shouldn't invest any further effort in this. With Istio 1.7, the Prometheus/Kiali/Grafana installation via istioctl was deprecated, and it will be removed in Istio 1.8.
See: https://istio.io/latest/blog/2020/addon-rework/
Going forward you will have to set up your own prometheus/grafana, e.g. with helm, so I would recommend working in that direction.
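If you go the Helm route, your actual question (importing dashboards declaratively instead of via the UI) also gets easier. As a sketch, assuming the kube-prometheus-stack chart with its Grafana dashboard sidecar enabled, which watches for ConfigMaps carrying a marker label (grafana_dashboard: "1" is the chart's default; the names below are placeholders):

```yaml
# A ConfigMap the Grafana dashboard sidecar can pick up automatically.
# The label key/value must match your chart's sidecar configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  my-dashboard.json: |
    { "title": "My Custom Dashboard", "panels": [] }
```

Extra scrape targets can similarly be passed to that chart declaratively, e.g. via its prometheus.prometheusSpec.additionalScrapeConfigs value; check the chart's documentation for your version.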

Related

istio v1.11.4 - install via helm chart; how to enable envoy proxy logging?

This is probably a very basic question. I am looking at Install Istio with Helm and Enable Envoy’s access logging.
How do I enable envoy access logging if I install istio via its helm charts?
The easiest, and probably the only, way to do this is to install Istio with an IstioOperator using Helm.
The steps are almost the same, but instead of the base chart, you need to use the istio-operator chart.
First create istio-operator namespace:
kubectl create namespace istio-operator
then deploy the IstioOperator using Helm (assuming you have downloaded Istio and changed your current working directory to the istio root):
helm install istio-operator manifests/charts/istio-operator -n istio-operator
Having installed IstioOperator, you can now install Istio. This is a step where you can enable Envoy’s access logging:
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
EOF
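Once the mesh is up, you can verify the access log is being written by tailing the Envoy sidecar of any injected workload (pod and namespace names below are placeholders):

```shell
# Access logs go to the istio-proxy container's stdout
kubectl logs <pod-name> -c istio-proxy -n <namespace> --tail=20
```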
I tried enabling Envoy’s access logging with base chart, but could not succeed, no matter what I did.

Enable metrics for bitnami/redis with prometheus-community/kube-prometheus-stack

I have already setup prometheus-community/kube-prometheus-stack in my cluster using helm.
I need to also deploy a redis cluster in the same cluster.
How can I provide options so that the metrics of this redis cluster go to prometheus and are fed to grafana?
On github page some options are listed.
Will it work with the configuration below?
$ helm install my-release \
    --set metrics.enabled=true \
    bitnami/redis
Do I need to do anything else?
I would assume that you're asking this question because the redis metrics didn't show up in prometheus for you.
Setting up prometheus using the "prometheus-community/kube-prometheus-stack" helm chart could look very different for you than for me, as it has a lot of configurable options.
As the helm chart comes with the "prometheus operator", we have used the PodMonitor and/or ServiceMonitor CRDs, as they provide far more configuration options. Here are some docs around that:
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors
So basically, deploy prometheus with "prometheus.prometheusSpec.serviceMonitorSelector.matchLabels" set to a label of your choice:
serviceMonitorSelector:
  matchLabels:
    monitoring-platform: core-prometheus
and thereafter deploy redis with "metrics.enabled=true", "metrics.serviceMonitor.enabled=true", and "metrics.serviceMonitor.selector" set to the label defined in the prometheus serviceMonitorSelector (monitoring-platform: core-prometheus in this case). Something like this:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    selector:
      monitoring-platform: core-prometheus
This setup works for us.
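Put together, the two installs could look like this on the command line (release names and the label are just the examples used above; verify the value paths against the chart versions you run):

```shell
# Prometheus that only selects ServiceMonitors carrying the chosen label
helm install core-prometheus prometheus-community/kube-prometheus-stack \
  --set prometheus.prometheusSpec.serviceMonitorSelector.matchLabels.monitoring-platform=core-prometheus

# Redis exposing metrics through a ServiceMonitor with a matching label
helm install my-release bitnami/redis \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.selector.monitoring-platform=core-prometheus
```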

RabbitMQ-Overview dashboard no metrics data shown

I have a working bitnami/rabbitmq server helm chart on my Kubernetes cluster, but the graphs displayed on Grafana are not enough. I found the RabbitMQ-Overview dashboard, which has enough detail to be very useful for my bitnami/rabbitmq server, but unfortunately no data shows up on it. The issue is that I cannot see my metrics on this dashboard; can someone please suggest a work-around for my case? Please note I am using the kube-prometheus-stack helm chart for the Prometheus+Grafana services.
Steps to solve this issue:
I enabled the rabbitmq_prometheus plugin on all my Rabbitmq nodes by entering the Pods and running:
rabbitmq-plugins enable rabbitmq_prometheus
Output:
:/$ rabbitmq-plugins list
Listing plugins with pattern ".*" ...
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status: * = running on rabbit@rabbitmq-0.broker-server
 |/
[E ] rabbitmq_management             3.8.9
[e*] rabbitmq_management_agent       3.8.9
[  ] rabbitmq_mqtt                   3.8.9
[e*] rabbitmq_peer_discovery_common  3.8.9
[E*] rabbitmq_peer_discovery_k8s     3.8.9
[E*] rabbitmq_prometheus             3.8.9
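Plugins enabled by exec-ing into a pod are lost when the pod is recreated, so it is worth enabling them through the chart instead. As a sketch, assuming the bitnami/rabbitmq chart's extraPlugins value (verify the value name against your chart version):

```shell
# Enable the prometheus plugin declaratively, keeping existing values
helm upgrade broker bitnami/rabbitmq --reuse-values \
  --set extraPlugins=rabbitmq_prometheus
```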
Made sure I was using the same data source for Prometheus used in my Grafana
I tried creating a prometheus-rabbitmq-exporter to get the metrics from Rabbitmq and feed them to the RabbitMQ-Overview dashboard, but no data was displayed.
(screenshot: my dashboard showing no data)
Please note, for anyone reading this to solve a similar issue: I already have metrics from Rabbitmq displayed on Grafana, but I need the RabbitMQ-Overview dashboard, which contains more details about my server.
To build on @amin's answer, the label can be added in the helm values like so:
Before helm chart version 9.0.0:
metrics:
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: prometheus
After helm chart version 9.0.0:
metrics:
  serviceMonitor:
    enabled: true
    labels:
      release: prometheus
Fixed the issue after including two options from this table. I also needed to edit the ServiceMonitor to include the label release: prometheus, as that is required for it to be visible in the prometheus targets since I am using the kube-prometheus-stack helm chart.
helm install -f values.yml broker bitnami/rabbitmq --namespace default \
  --set nodeSelector.test=rabbit \
  --set volumePermissions.enabled=true \
  --set replicaCount=1 \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true
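Instead of editing the ServiceMonitor by hand, the label can also be applied with kubectl (the ServiceMonitor name below is an assumption; list them first to find yours):

```shell
# Find the ServiceMonitor created by the chart, then attach the label
kubectl get servicemonitors -n default
kubectl label servicemonitor broker-rabbitmq release=prometheus -n default
```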
I hope it helps someone in the future!

Register new istioctl profile

I want to install istio with istioctl. My customizations will be numerous, so I don't think it would be helpful to use this syntax: istioctl manifest apply --set addonComponents.grafana.enabled=true
I figured that the best way to do it is to copy a profile from istio-1.5.1/install/kubernetes/operator/profiles
How can I add a custom profile to:
[root@localhost profiles]# istioctl profile list
Istio configuration profiles:
remote
separate
default
demo
empty
minimal
So that I can: istioctl manifest apply --set profile=mynewprofile
istioctl manifest apply --set addonComponents.grafana.enabled=true would translate to:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  addonComponents:
    grafana:
      enabled: true
Istio uses the CR IstioOperator to pass values for installation. All customization options listed here are available. Some examples can be found in their documentation under Installation Configuration Profiles and within their releases under install/kubernetes/operator/profiles.
You can make custom profiles by referencing a path to another CR. Let's say you turned the file above into a-new-profile.yaml.
istioctl manifest apply --set installPackagePath=< path to istio releases >/istio-1.5.1/install/kubernetes/operator/charts \
--set profile=path/to/a-new-profile.yaml
After that, you will have your new profile set and ready to use. See their install-from-external-charts section for more information.
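A convenient starting point for a-new-profile.yaml is to dump one of the built-in profiles and edit it, rather than writing the IstioOperator spec from scratch:

```shell
# Write the demo profile's full IstioOperator spec to a file for editing
istioctl profile dump demo > a-new-profile.yaml
```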

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weaveworks' Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine. I'm trying for the first time to deploy a Helm chart.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations:
    # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
    fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file 'jenkins/jenkins.yaml' at the root of the location in my git repo that Flux is monitoring. Adding this file does nothing... I get no new K8s objects, no HelmRelease object, and no new Helm release when I run "helm list -n jenkins".
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems that they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command at the top of this post working with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods? I would start there to see what error message you're getting. One thing I'm seeing is that you're using https for git. You may want to double-check, but I don't recall ever seeing any documentation configuring chart pulls via git with anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable helm repository anyway:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
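The log check suggested at the top can be done along these lines (the namespace and deployment names assume the default install from the fluxcd/helm-operator-get-started tutorial; adjust to your setup):

```shell
# Errors syncing the repo show up in the flux daemon's log;
# errors reconciling HelmRelease objects show up in the helm operator's log
kubectl -n flux logs deployment/flux --tail=50
kubectl -n flux logs deployment/flux-helm-operator --tail=50
```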