Grafana summary dashboards receive data but per-service dashboards do not - grafana

What have I done wrong?
I installed Istio last week on GKE and, when following the instructions step-by-step, everything appeared to work correctly including all the Grafana dashboards.
This week I attempted to recreate the configuration to share with my team. Everything appears to work correctly except the per-service (e.g. productpage) dashboards, which report "no datapoints".
I did delete and recreate some resources out of order, and perhaps that explains my error?
I would appreciate a heuristic that could help me diagnose where I've gone wrong and how to address it. The area I'm least familiar with is Prometheus. Clearly Grafana is connected to Prometheus. What could I check in Prometheus to ensure it's configured correctly?
Perhaps I should simply delete and recreate everything, but I'd like to learn from this experience.
istioctl version:
Version: 0.1.5
GitRevision: 21f4cb4
GitBranch: master
User: jenkins@ubuntu-16-04-build-de3bbfab70500
GolangVersion: go1.8.1
KubeInjectHub: docker.io/istio
KubeInjectTag: 0.1
apiserver version:
Version: 0.1.5
GitRevision: 21f4cb4
GitBranch: master
User: jenkins@ubuntu-16-04-build-de3bbfab70500
GolangVersion: go1.8.1

When we've seen this before, it is typically fixed by just refreshing the page in the browser. The metrics powering the summary dashboards are the same ones that are used to power the service graphs.
Can you try refreshing the page and seeing what happens?
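If a refresh doesn't bring them back, it's also worth checking directly in Prometheus that the per-service metrics are actually being scraped. A rough sketch, assuming the Prometheus add-on shipped with the Istio install (adjust the namespace to wherever you installed the add-ons; istio_request_count is the metric name as I recall it from the 0.1.x add-ons, so verify it against what your dashboards actually query):

# Find the Prometheus pod and forward its port locally
kubectl get pods -l app=prometheus
kubectl port-forward <prometheus-pod-name> 9090:9090

# In the UI at http://localhost:9090, check Status -> Targets for failing scrapes,
# or query the Istio request metric directly through the HTTP API
curl 'http://localhost:9090/api/v1/query?query=istio_request_count'

If the query returns no series, Prometheus isn't receiving the telemetry at all; if it returns series but the per-service dashboards stay empty, compare the label values on those series with the labels the per-service dashboard queries filter on.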

Related

Error in installing ingress-nginx helm chart

I am installing the NGINX ingress controller through its Helm chart and the pods are not coming up. There seems to be a permission issue.
Chart link - https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx
I am using the latest version, 4.2.1.
I did the debugging described here: https://github.com/kubernetes/ingress-nginx/issues/4061
I also tried running as the root user (runAsUser: 0).
I think this issue appeared after the cluster was upgraded from 1.19 to 1.22; previously it was working fine.
Any suggestions on what I need to do to fix this?
unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied
You clearly have a permission problem. Looking at the chart you specified, there are multiple runAsUser values for the different components:
controller.image.runAsUser: 101
controller.admissionWebhooks.patch.runAsUser: 2000
defaultBackend.image.runAsUser: 65534
I'm not sure why these are different, but if possible:
Try deleting your existing chart and doing a fresh install.
If the issue still persists, check the deployment / pod events and see if the cluster is alerting you about something.
Also worth noting, there were breaking changes to the Ingress resource in 1.22.
Check the relevant entries in the official Kubernetes release notes.
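For the reinstall and the event check, something along these lines (release name, namespace, and label selector are just examples):

# Remove the existing release and reinstall the chart cleanly
helm uninstall ingress-nginx -n ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace --version 4.2.1

# If the pods still fail, the events usually say why
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx describe pod -l app.kubernetes.io/name=ingress-nginx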
It turned out the issue occurred because not all worker nodes had been properly upgraded, so the ingress controller couldn't be set up properly. I installed it on a particular node with the same version as the cluster, and then it worked properly.

Grafana: how to automate user/team creation in Helm chart installation

I am using Grafana Helm chart to install Grafana on K8s cluster.
The procedure works quite well, including predefining dashboards so that they are accessible after installation. On the other hand, I haven't found a solution to automate the creation of users & teams so far.
How can I specify/predefine users + teams so that they are created on "helm install"-ing the chart?
Any hint highly appreciated.
PS: I am aware of the HTTP API, but I am more interested in a way to predefine the info and have "helm install ..." set up the whole stack.
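For reference, these are the kinds of HTTP API calls that would need to run once Grafana is up, for example wrapped in a Helm post-install hook Job; the hostname, credentials, and payloads below are placeholders:

# Create a user through the Grafana admin API (requires basic auth as the admin user)
curl -s -X POST "http://admin:<admin-password>@<grafana-host>:3000/api/admin/users" \
  -H 'Content-Type: application/json' \
  -d '{"name":"Jane Doe","email":"jane@example.com","login":"jane","password":"changeme"}'

# Create a team
curl -s -X POST "http://admin:<admin-password>@<grafana-host>:3000/api/teams" \
  -H 'Content-Type: application/json' \
  -d '{"name":"platform-team"}'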

Show more logs in kubernetes dashboard

I am using the official Kubernetes dashboard, version kubernetesui/dashboard:v2.4.0, to manage my cluster, and I've noticed that when I select a pod and look at the logs, the length of the displayed logs is quite short. It's like 50 lines or something?
If an exception occurs, the logs are pretty much useless because the original cause is hidden by lots of other lines. I would have to download the logs or shell into the Kubernetes server and use kubectl logs in order to see what's going on.
Is there any way to configure the dashboard in a way so that more lines of logs get displayed?
AFAIK, it is not possible with kubernetesui/dashboard:v2.4.0. On the list of dashboard arguments that allow for customization, there is no option to change the amount of logs displayed.
As a workaround you can use a Prometheus + Grafana combination or an ELK stack with Kibana as separate dashboards for logs/metrics, though depending on the size and scope of your k8s cluster that might be overkill. There are also alternative open-source k8s dashboards such as skooner (formerly known as k8dash), however I am not sure if it offers more workload-log visibility.
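Until that feature exists, the quickest workaround is the CLI route the question already mentions, for example:

# Show the last 500 lines instead of the dashboard's truncated view
kubectl logs <pod-name> -n <namespace> --tail=500

# Or follow everything from the last hour
kubectl logs <pod-name> -n <namespace> --since=1h -f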
If anyone is interested: as the feature I was looking for does not exist yet, I have submitted a feature request on GitHub. You can see it here: https://github.com/kubernetes/dashboard/issues/6700

Kiali can't see any topo of my services in the Graph view

I'm running Kubernetes v1.7.10 and Istio 1.0.4, along with Kiali v0.9, which is bundled with Istio 1.0.4.
I deployed the bookinfo example into a namespace, pointed the gateway at the bookinfo service, and accessed the productpage homepage from a browser; everything was fine.
But in Kiali's Graph view, only a diamond icon with the label "unknown" is displayed. What's wrong with it?
I can see all the services, workloads, and Istio configs in Kiali, just no topology.
Finally, I traced it back to the Prometheus metrics, something like:
istio_requests_total{connection_security_policy="none",destination_app="unknown",destination_principal="unknown",destination_service="details.test.svc.cluster.local",destination_service_name="details",destination_service_namespace="test",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",instance="172.22.178.111:42422",job="istio-mesh",reporter="destination",request_protocol="http",response_code="200",source_app="unknown",source_principal="unknown",source_version="unknown",source_workload="unknown",source_workload_namespace="unknown"}
I noticed that destination_app, destination_version, source_app, source_version, and so on were all "unknown"; I believe that's why no topology is displayed.
And the metrics from http://istio-telemetry:42422/metrics:
istio_requests_total{connection_security_policy="none",destination_app="unknown",destination_principal="unknown",destination_service="details.test.svc.cluster.local",destination_service_name="details",destination_service_namespace="test",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",reporter="destination",request_protocol="http",response_code="200",source_app="unknown",source_principal="unknown",source_version="unknown",source_workload="unknown",source_workload_namespace="unknown"} 32
Then I did another test: I set up a Kubernetes v1.10.3 cluster, installed Istio 1.0.4 into it, and deployed the bookinfo example; everything was fine, with a beautiful topology graph.
So I suspect there is something different between the Kubernetes versions that breaks Kiali's graph view?
Can someone give any hints?
thanks.
likun
I can't find clear information on the Istio website, but I believe Kubernetes below 1.9 isn't supported. This is sort of suggested on the Minikube setup page: https://istio.io/docs/setup/kubernetes/platform-setup/minikube/
Maybe you can try with an older version of Istio, but I wouldn't guarantee it's going to work either. You would also have to pick up an older version of Kiali.
Kiali builds its graph from labels in istio telemetry, so you're right to correlate with what you see in Prometheus. In particular, source_app, source_workload, destination_app and destination_workload are used by Kiali to detect graph relations.
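One quick check on the cluster side: Istio's telemetry fills in those app/version values from the app and version labels on the workload pods, so it's worth confirming the bookinfo pods actually carry them. A sketch, assuming bookinfo runs in the test namespace as in the metrics above:

# The bookinfo pods should show labels like app=details,version=v1
kubectl -n test get pods --show-labels

# Or print just the two labels Kiali relies on
kubectl -n test get pods -L app -L version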

Deployments not visible in Kubernetes Dashboard

I've created a deployment like this:
kubectl run my-app --image=ecr.us-east-1.amazonaws.com/my-app:v1 -l name=my-app --replicas=1
Now I go to the Kubernetes Dashboard:
https://172.0.0.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
But I don't see my-app listed there.
Is it possible to use the Kubernetes Dashboard to view Deployments? I'd like to use the dashboard to do things like view the deployment's mem/CPU usage, check logs, etc.
Kubernetes Dashboard is pretty limited at the moment, and only supports ReplicationControllers. If you create a ReplicationController then you will be able to see the Pods connected to it, check their memory and CPU usage, and view their logs.
Work is being done to improve Dashboard and in the future it should support other Kubernetes resources besides ReplicationControllers. You can see some mockups in the GitHub repo.
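In the meantime, the Deployment is still there and can be inspected from the CLI, for example:

# Confirm the Deployment and its Pods exist (matching the label used in kubectl run above)
kubectl get deployments,pods -l name=my-app

# View logs; kubectl top needs a metrics source such as Heapster running in the cluster
kubectl logs <my-app-pod-name>
kubectl top pod <my-app-pod-name>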
I'm one of the Dashboard UI maintainers.
Deployments will be shown in the UI in the next release (a few weeks from now). I'm sorry this wasn't done before, but we had a tight schedule. If you want to test the features sooner, use the v1.1.0-beta2 version of the UI, which will be released next week.