I use the TiDB database, but when I try to set up monitoring, Grafana can't display any data; it just shows "no data points". I checked the network tab in Chrome and the Docker logs, and both look fine. I don't know why.
Sorry, I wanted to ask the question here, but I don't have enough reputation to post images, so I wrote up the issue on GitHub:
https://github.com/pingcap/tidb/issues/7509#issuecomment-416438806
Thanks for your feedback. You need to use Pushgateway v0.4.0 instead of Pushgateway 0.5.1.
You could try https://github.com/pingcap/tidb-docker-compose or https://github.com/pingcap/tidb-operator directly.
We're working on removing Pushgateway so that Prometheus pulls metrics directly instead.
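If you run the monitoring stack by hand, pinning the image tag is one way to get the expected Pushgateway version. A minimal sketch, assuming the prom/pushgateway image from Docker Hub:

    # Run Pushgateway v0.4.0, the version this TiDB release expects
    docker run -d --name pushgateway -p 9091:9091 prom/pushgateway:v0.4.0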
I've seen the Kubernetes dashboard track some information in the form:
X happened 14 times and the last occurrence was at time T
Where is this data coming from? Can I query it using kubectl? Is there a Kubernetes API for this information as well? Or is the dashboard running some kind of aggregation internally?
X happened 14 times and the last occurrence was at time T
That is out-of-the-box dashboard functionality, and you would most probably have to dive into the dashboard's code to answer this question in full.
The thing is, of course, that the dashboard relies on open data that you can collect on your own using kubectl; the only question is what exactly you want as output. Combining kubectl with grep, sort, sed, etc. will give you the same information you just asked about, as shown in the sketch below. Maybe you want to create a new question and specify your exact task?
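Concretely, that "happened N times, last at time T" output most likely maps onto the count and lastTimestamp fields of Kubernetes Event objects, which you can query directly. A minimal sketch:

    # List events across all namespaces
    kubectl get events --all-namespaces

    # Pull out the fields behind "X happened N times, last at time T"
    kubectl get events -o custom-columns=REASON:.reason,COUNT:.count,LAST:.lastTimestamp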
Currently I am trying to fetch already-rotated logs on the node using the --since-time parameter.
Can anybody suggest a command or mechanism to fetch already-rotated logs in a Kubernetes cluster?
You can't. Kubernetes does not store logs for you; it just provides an API to access what's on disk. For long-term storage, look at things like Loki, Elasticsearch, Splunk, Sumo Logic, etc.
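For reference, the --since-time attempt from the question looks roughly like this (the pod name and timestamp are placeholders); it only reads the pod's current log file, so entries that have already been rotated away are simply not returned:

    # Only covers what is still in the pod's current log file
    kubectl logs my-pod --since-time=2019-01-01T00:00:00Z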
By default, the maximum number of open scroll contexts in Elasticsearch is 500, but I need to increase this number. There is no problem updating "search.max_open_scroll_context" on a local machine, but AWS Elasticsearch does not allow the change.
While trying the update given in this thread, configure-search-max-open-scroll-context, the response is: {"Message":"Your request: '/_cluster/settings' payload is not allowed."}. I can perform this operation on my local Elasticsearch, but AWS Elasticsearch doesn't seem to allow it. Does anyone have an answer to this for AWS Elasticsearch, or has anyone faced something similar?
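For reference, this is the kind of update that succeeds on a self-managed cluster but gets rejected on AWS (the value 1000 is just an example):

    curl -X PUT "localhost:9200/_cluster/settings" \
      -H 'Content-Type: application/json' \
      -d '{"persistent": {"search.max_open_scroll_context": 1000}}'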
This is restricted on the customer side in AWS Elasticsearch.
You need to reach out to the AWS support team for this. Just let them know the value of "search.max_open_scroll_context" that you are looking for, and they will update it from the backend.
Here is the link to the AWS-supported operations on Elasticsearch.
Currently, AWS doesn't support updating "search.max_open_scroll_context". You can definitely contact AWS support to increase the scroll context count. Alternatively, you can use the search_after API instead of scroll; see the sketch below.
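A minimal sketch of paginating with search_after (the index name and sort fields are hypothetical): each page is sorted, and the sort values of the last hit are fed into the next request, so no scroll context is kept open on the server:

    # First page: sort on a field plus a unique tiebreaker field
    curl -X GET "localhost:9200/my-index/_search" \
      -H 'Content-Type: application/json' \
      -d '{"size": 100, "sort": [{"timestamp": "asc"}, {"id": "asc"}]}'

    # Next page: pass the sort values of the previous page's last hit
    curl -X GET "localhost:9200/my-index/_search" \
      -H 'Content-Type: application/json' \
      -d '{"size": 100, "sort": [{"timestamp": "asc"}, {"id": "asc"}],
           "search_after": [1546300800000, "99"]}'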
We are using Grafana to visualize InfluxDB data, with multiple dashboards built on it. Because of a technical issue there may be downtime during which no new data lands in InfluxDB to display on the dashboards.
Is there a possibility of adding a panel to all the dashboards with an alert message about the downtime, so that dashboard users are notified right there and don't have to go anywhere else?
Thanks
I think it's not possible to configure a pop-up like you want with Grafana.
Find another notification channel (e-mail, Discord, Slack, ...).
If you really want a pop-up, it won't be configured in Grafana but in JavaScript. To do that, you'll have to customize your Grafana page yourself; with that, I can't help you.
We would like to collect some interesting user-related metrics on our website (e.g. "user edited profile", "user clicked on downloaded file", etc.) and are thinking about using the ELK stack for this.
Is it a good idea to use Elasticsearch to store such events? Or would it make more sense to log them in our RDBMS?
What would be the advantages of using either of those?
(Side note: we already use Elasticsearch and PostgreSQL in our stack.)
You could save your logs in any persistent solution out there and later decide what tool to use for analyzing them.
If you want to run queries on your data on the fly (in near real time), you could just parse the logs generated by your applications and pipe them directly into Elasticsearch. The flow would be something like:
(your app) --> Filebeat --> Elasticsearch <-- Kibana
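A minimal sketch of the Filebeat side of that flow (the log path and Elasticsearch host are placeholders for your setup):

    # filebeat.yml
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/myapp/events.log   # hypothetical path to your app's event log

    output.elasticsearch:
      hosts: ["localhost:9200"]         # your Elasticsearch endpoint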
Just keep in mind that the ELK stack is not "cheap", and depending on your setup it could become expensive to maintain in the long term.
In the end it depends on your use case: both solutions you mention can be used to store the data, but the way you extract and query it is what makes the difference.