Splunk: setting up real-time alerts

I need to configure real-time alerts in Splunk, but it only shows the scheduled option. How do I enable real-time alerts? Is it a licensing limitation? I can't find anything about it in the documentation.

Without knowing more about your Splunk environment or your search setup, the two most likely reasons you can't set up a real-time alert are:
#1: You are running an old version of Splunk.
Real-time alerting was introduced in Splunk v4.2. Upgrade to the current version to use real-time alerts.
#2: You're trying to set up a real-time alert from a historical search.
A real-time alert can't be set up on a historical search. See this solution on Splunk Answers for more information, including how to change your search to a real-time search if that's what it should be.
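For reference, the difference comes down to the time range modifiers: a historical search uses fixed earliest/latest values, while a real-time search uses rt-prefixed ones. A minimal sketch (the index and search terms here are just illustrative):

    Historical search (only scheduled alerts are offered):
        index=web status=500 earliest=-5m latest=now

    Real-time search (the real-time alert option appears):
        index=web status=500 earliest=rt-5m latest=rt

Alternatively, pick one of the real-time windows in the time range picker before saving the search as an alert.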

Related

Missing features in Grafana

I am using Grafana to visualize data from my InfluxDB database, which collects data from some sensors. That said, it's my first time working with both Grafana and InfluxDB, and I'm pretty new to coding, so my knowledge is very limited.
As I scroll through threads and forums on the web trying to find guidance, I find a lot of tutorials, mostly 2-4 years old, that seem to use features in Grafana that are simply not available for me.
For example, I tried to set an alert that tells me when my sensor is delivering flawed values (values that in my case cannot physically be true) too often. But when I'm using avg() from the classic condition operations, I can't select a time frame over which the average value should be monitored.
[Screenshot: the expression part of my alert settings]
Is it a problem that has to be configured via grafana.ini? Is it because these features cannot be used with InfluxDB?
For some background information: I'm running both the database and the Grafana server on an Ubuntu Server VM in VirtualBox, and I'm using a little Python script to feed the sensor data into the database.
If someone could help me out soon that would be great!

How to set up the timezone in the Firestore usage tab?

Not in the Analytics dashboard, but in the chart you see when you go to Database and then the Usage tab.
I'm not in GMT-7.
This is the daily quota period. It does not depend on where you are; it shows how the daily quota is accounted.
If you open the link "billing and quota usage", it takes you to GCP, and there is a link "Understanding Quotas". It points to the App Engine quotas, but I understand it's the same logic. According to the doc:
Daily quotas are refreshed daily at midnight Pacific time.
So this is information about which daily period you are currently in, and it's accounted in Pacific time (PST/PDT).
I hope it will help!
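If you just want to know when that daily window rolls over in your own time zone, here's a quick Python sketch (the midnight-Pacific reset comes from the doc quoted above; the rest is illustrative):

    from datetime import datetime, time, timedelta
    from zoneinfo import ZoneInfo  # Python 3.9+

    # Daily quotas reset at midnight Pacific time.
    pacific = ZoneInfo("America/Los_Angeles")
    today_pt = datetime.now(pacific).date()

    # Next midnight in Pacific time, converted to your local time zone.
    next_reset = datetime.combine(today_pt + timedelta(days=1), time.min, tzinfo=pacific)
    print("Next daily quota reset (your local time):", next_reset.astimezone())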
I raised this query with Google and received the following response:
Thanks for reaching out. This is Estefani from the Firebase Support team.
Unfortunately, there is no way to change the time zone on the usage tab. This is because it's an internal tool for Firebase to be able to monitor and should be in that time zone.
For now, I suggest using our tools to monitor your database performance, and if you want, I can file a feature request on your behalf for this feature to be considered for future releases. Just give me the green light and I'll do the rest.

Is it possible to track down very rare failed requests using linkerd?

Linkerd's docs explain how to track down failing requests using the tap command, but in some cases the success rate might be very high, with only a single failed request every hour or so. How is it possible to track down those requests that are considered "unsuccessful"? Perhaps a way to log them somewhere?
It sounds like you're looking for a way to configure Linkerd to trap failing requests and dump the request data somewhere, which Linkerd doesn't support at the moment.
You do have a couple of options with the current functionality to derive some of the information you're looking for. The Linkerd proxies record error rates as Prometheus metrics, which Grafana consumes to render the dashboards. When you observe one of these infrequent errors, you can use the time-window functionality in Grafana to find the precise time the error occurred, then check the service log for corresponding error messages. If the error is coming from the service itself, you can add as much logging about the request as you need to help solve the problem.
Another option, which I haven't tried myself, is to integrate linkerd tap into your monitoring system to collect the request info and save the data for the requests that fail. One caveat: be careful about leaving a tap command running, because it will continuously collect data from the tap control-plane component, which adds load to that service.
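If you do go the tap route, a low-effort variant is to run it for a bounded period and write the JSON output to a file that you filter afterwards. A sketch (deploy/myapp is a placeholder, and on recent versions the command is linkerd viz tap):

    # Capture tap events as JSON for offline filtering of the rare failures.
    linkerd tap deploy/myapp -o json >> tap-events.json

Just remember to stop it once you've caught a failure, for the load reason above.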
Perhaps a more straightforward approach would be to ensure that all the proxy logs and service logs are written to a long-term store like Splunk, an ELK stack (Elasticsearch, Logstash, and Kibana), or Loki. Then you can set up alerting (Prometheus Alertmanager, for example) to send a notification when a request fails, and match the time of the failure with the logs that have been collected.
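As a sketch of the alerting piece, a Prometheus rule on the proxy's response_total counter could look something like this (the rule name, threshold, and window are assumptions to adapt to your setup):

    groups:
      - name: linkerd-failures
        rules:
          - alert: RareRequestFailure
            # Fires if any proxy reported a failed response in the last 5 minutes.
            expr: sum(increase(response_total{classification="failure"}[5m])) > 0
            labels:
              severity: warning
            annotations:
              summary: "A request failed in the mesh; check the collected logs around this time."

classification="failure" is the label the Linkerd proxy uses to mark unsuccessful responses.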
You could also look into adding distributed tracing to your environment. Depending on the implementation that you use (Jaeger, Zipkin, etc.), I think the interface will allow you to inspect the details of the request for each trace.
One final thought: since Linkerd is an open source project, I'd suggest opening a feature request with specifics on the behavior that you'd like to see and work with the community to get it implemented. I know the roadmap includes plans to be able to see the request bodies using linkerd tap and this sounds like a good use case for having those bodies.

Flutter and Firestore: debug usage

Is there a way to easily debug the read and write requests a Flutter app makes to Firestore? I'm getting a high number of reads, but am battling to find where they originate in the app.
Have you tried using the Stackdriver Logging user interface?
It offers plenty of log-analysis tools that you can use to monitor the resources that are writing into your Firestore database.
You can read more about this here [1].
Once you have created logs-based metrics, you can create charts and alerts on said metrics.
[1] https://firebase.google.com/docs/functions/writing-and-viewing-logs
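To get a feel for what's in those logs before building logs-based metrics, here's a small sketch with the google-cloud-logging Python client (the filter string is only an example; adjust the resource type to your setup):

    from google.cloud import logging  # pip install google-cloud-logging

    client = logging.Client()

    # Pull a few recent entries matching an illustrative filter.
    entries = client.list_entries(
        filter_='resource.type="cloud_function"', max_results=10
    )
    for entry in entries:
        print(entry.timestamp, entry.payload)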

Does it make sense to use ELK to collect page metrics?

We would like to collect some interesting user-related metrics on our website (e.g. "user edited profile", or "user clicked on downloaded file", etc.) and are thinking about using the ELK stack for this.
Is it a good idea to use Elasticsearch to store such events? Or would it make more sense to log them in our RDBMS?
What would be the advantages of using either of those?
(Side note: we already use Elasticsearch and PostgreSQL in our stack.)
You could save your logs in any persistent solution out there and later decide what tool to use for analyzing them.
If you want to run queries (work with your data on the fly, in real time), you could directly parse/ship the logs generated by your applications and send them to Elasticsearch; the flow would be something like:
(your app) --> Filebeat --> Elasticsearch <-- Kibana
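For the "(your app)" part, that can be as simple as appending one JSON object per line to a file that Filebeat tails. A minimal sketch (the path and field names are just an illustration):

    import json
    import time

    EVENTS_LOG = "/var/log/myapp/events.log"  # the path Filebeat is configured to tail

    def log_event(name, user_id, **props):
        """Append one user-metric event as a single JSON line."""
        event = {"event": name, "user_id": user_id, "ts": time.time(), **props}
        with open(EVENTS_LOG, "a") as f:
            f.write(json.dumps(event) + "\n")

    log_event("profile_edited", user_id=42)
    log_event("file_downloaded", user_id=42, file="report.pdf")

With Filebeat's JSON decoding enabled, each line lands in Elasticsearch as a structured document you can chart and aggregate in Kibana.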
Just keep in mind that the ELK stack is not "cheap" and, depending on your setup, could become expensive to maintain in the long term.
In the end it depends on your use case: both solutions you mention can store the data, but the way you extract/query it is what makes the difference.