Datadog allows me to create facets but it does not show any values for them - tags

I am using Datadog to see my microservice metrics. When I go to the APM tab I can see the spans I created, and their corresponding tags are reaching the server correctly. The problem is that if I click a tag's "gear" to convert it to a facet, the operation completes correctly, but I cannot query for this value, nor do I see any value when I add it as a column to my metrics. Example below:
I can click that gear and convert "Headers-Received" to a string facet, and there is no error at all from Datadog, but I cannot query it or see any value being registered. Yet I CAN see the values in each trace of a request reaching my server.
What is going on here?

Not sure if it helps, but facets only apply to data ingested after the facet is created, which means it won't show values for logs or spans received prior to the facet's creation.
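Once the facet exists and new data has arrived, a quick sanity check is to search for spans that have any value for it at all. A hedged sketch of the search query (the @ prefix assumes Headers-Received was ingested as a custom span attribute; the syntax differs if it is a host-level tag):

```
@Headers-Received:*
```

If that returns nothing while new traffic is flowing, the facet is likely indexing a different attribute path than the one visible on the trace.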

Related

Grafana - How to set a default field value for a visualization with a cloudwatch query as the data source

I'm new to Grafana, so I might be missing something obvious. But I have a custom CloudWatch metric that records HTTP response codes into buckets (e.g. 2xx, 3xx, etc.).
My grafana visualization is using a query to pull and group data from cloudwatch and the resulting fields are dynamic: 2xx (us-east-1), 2xx (us-west-1), 3xx (us-east-1), etc.
I then use transformations to aggregate those values for a global view of the data:
The problem is, I can't create the transformation until the data exists. I'd like to have a 5xx field, but since that data is sporadic, it doesn't show up in the UI and I can't find a way to force "5xx (...)" to exist and have it get used when/if those response codes start occurring.
Is there a way to create placeholder fields somehow to achieve this?
You can't create it in the UI, but you can still edit the panel model directly. It's the JSON that represents the whole panel: in the panel menu click Inspect > Panel JSON, then create and customize another item in the transformations section. Editing the panel this way isn't very convenient, but it will achieve your goal.
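As a sketch of what such a hand-edited entry might look like (the `id` and `options` fields follow Grafana's transformation schema, but the exact option names vary by Grafana version and transformation type, so treat the field values and series names below as illustrative):

```json
{
  "transformations": [
    {
      "id": "calculateField",
      "options": {
        "mode": "reduceRow",
        "reduce": {
          "reducer": "sum",
          "include": ["5xx (us-east-1)", "5xx (us-west-1)"]
        },
        "alias": "5xx (all regions)"
      }
    }
  ]
}
```

Because the `include` list is typed by hand rather than picked from existing fields, it can reference series like "5xx (...)" that have not produced data yet; they are simply skipped until they appear.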

Create alert when Kubernetes pods "Does not have minimum availability" on Google Cloud Platform

I want to set an alert policy for when there aren't enough pods in my Deployment. There are tons of metrics in Kubernetes and I am not sure which to use.
Just choosing CPU utilisation might work as a hack, but that might still miss cases where the container crashes and backs off - I am not too sure.
Edit: the hack above doesn't really work - perhaps I should look at requested cores?
Edit 2: adding image to answer comment
Here is the step-by-step procedure for creating a log-based metric and then creating an alert based on it.
Create a Log Based Metric in console
a. Go to Logging -> Logs-based Metrics -> Create Metric
b. Select Counter in Metric type
c. In details, give any log name (e.g. user/creation)
d. In filter provide the following:
resource.type="k8s_pod"
severity>=WARNING
jsonPayload.message= "<error message>"
You can replace the filter with something more appropriate for
your case; refer to this documentation for the details of the query language
e. Let the other Fields be default
f. Then, Create the Metric
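The same metric can also be created from the command line; a sketch using gcloud (the metric name and filter mirror the console steps above, and "<error message>" is a placeholder you must replace with your own message):

```shell
gcloud logging metrics create user/creation \
  --description="Counts pod warnings matching the filter" \
  --log-filter='resource.type="k8s_pod" AND severity>=WARNING AND jsonPayload.message="<error message>"'
```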
Create an Alert Policy:
a. Go to Monitoring -> Alerting
b. Select Create Policy -> Add Condition. In the "Find resource type and metric"
field:
Resource type: gke_container
Metric: logging/user/user/creation (i.e. logging/user/<log name from step 1>)
(both Resource type & Metric go in the same field)
In Filter: project_id=<your project id>
In Configuration: Condition triggers if: All time series violate
Condition: is above, threshold: 0, for: most recent value
c. Let the other fields be Default
d. Add and click on NEXT
e. In Notification Channels, go to Manage notification channels; this will
redirect you to a new page. There, select Email -> Add new (provide the
email where you want to get notifications & a display name)
f. Refresh the previous tab; now you can see your display name under
Notification channels. Check the box next to it and click OK
g. Check the box for Notify on incident closure & click Next
h. Provide an alert name & save the changes.
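If you prefer to script the alert policy instead of clicking through the UI, Cloud Monitoring accepts a policy definition from a file. A hedged sketch (the metric type assumes the log-based metric was named user/creation as above, and the display names are illustrative):

```json
{
  "displayName": "Pod warning alert",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Log-based metric above 0",
      "conditionThreshold": {
        "filter": "metric.type=\"logging.googleapis.com/user/user/creation\" AND resource.type=\"gke_container\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0,
        "duration": "0s"
      }
    }
  ]
}
```

Saved as policy.json, this can be applied with `gcloud alpha monitoring policies create --policy-from-file=policy.json`.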

Weird "data has been changed" issue

I'm experiencing a very weird issue with "data has been changed" errors.
I use MS Access as a frontend and PostgreSQL as a backend. The backend used to be in MS Access and there were no issues; then it was moved to SQL Server and there were no issues there either. The problem started when I moved to PostgreSQL.
I have a table called Orders and a table called Job. Each order has multiple jobs. I have 2 forms: one parent form for the Order and one subform for the Jobs (a continuous form). I put the subform on a separate tab; the first tab contains general order information and the second tab has the Job information. Job is connected to Orders using a foreign key called OrderID: the Id of Orders is equal to OrderID in Job.
Here is my problem:
I enter some information in the first tab (customer name, dates, etc.), then move to the second tab, do nothing there, go back to the first one and change a date. I get the "The data has been changed" error.
I'm confused as to why this is happening. Now, why do I call this weird?
First, if I put the subform on the first tab, I can change all fields of Orders just fine. It's only if I put it on the second tab, add some info, change tabs, then go back and change an already existing value that I get the error.
Second, if I make the subform on the second tab unbound (so no Id - OrderID connection), I get the SAME error.
Third, the "usual" id for "The data has been changed" error is Runtime Error 440. But what I get is Runtime Error: "-2147352567 (80020009)". Searching online for this error didn't help because it can mean a lot of different things, including "The value you entered isn't valid for this field" like here:
Access Run time error - '-2147352567 (80020009)': subform
or many different results for code 80020009 but none for "the data has been changed"
MS Access 2016, PostgreSQL 12.4.1
I'm guessing you are using ODBC to connect Access to PostgreSQL. If so, do you have timestamp fields in the data you are working with? I have seen the above happen because a Postgres timestamp can have higher precision than Access. This means that when you go to UPDATE, Access uses a truncated version of the timestamp, can't find the record, and you get the error. For this and other possible causes see:
https://odbc.postgresql.org/faq.html#6.4 (Microsoft Applications section)
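One workaround along the lines of that FAQ is to reduce the fractional-second precision on the PostgreSQL side, so the value Access reads back matches the value it later sends in the UPDATE's WHERE clause. A sketch, with an illustrative table and column name:

```sql
-- Cap the column at whole seconds (0 fractional digits); existing values
-- are rounded on conversion. Pick a precision Access can round-trip.
ALTER TABLE orders
    ALTER COLUMN modified_at TYPE timestamp(0);
```

Repeat for each timestamp column the bound forms touch; alternatively, keep the column type and avoid exposing sub-second timestamps to Access at all.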

Grafana throw error after selecting more than 60 variables

I was trying to display the metrics for 64 nodes on my k8s cluster. I found out that whenever I select more than 60 nodes in the variable dropdown,
Grafana throws a query error that looks like this:
The exception message is not particularly helpful; could somebody give me more insight? Thanks!
I've had a similar problem after selecting too many variables. As long as the rest of your dashboard is able to pull the info successfully from Prometheus, you can disable the annotation query: go to the dashboard and remove the annotations under Settings.

Grafana - Graph with metrics on demand

I am using Grafana for my application, where I have metrics being exposed from my data source on demand, and I want to monitor such on-demand metrics in Grafana in a user-friendly graph. For example, until an exception has been hit by my application, the data source does NOT expose the metric named 'Exception'. However, I want to create a graph beforehand where I should be able to specify the metric 'Exception', and it should log it in the graph whenever my data source exposes the 'Exception' metric.
When I try to create a graph on Grafana using the web GUI, I'm unable to see these 'on-demand metrics' since they've not yet been exposed by my data source. However, I should be able to configure the graph such that in case these metrics are exposed then show them. If I go ahead and type out the non-exposed metric name in the metrics field, I get an error "Timeseries data request error".
Does Grafana provide a method to do this? If so, what am I missing?
It depends on what data source you are using (Graphite, InfluxDB, OpenTSDB?).
For Graphite you can enter raw query mode (the pen button) to specify whatever query you want; it does not need to exist yet. The same is true for InfluxDB: you find the raw query mode in the hamburger-menu dropdown to the right of each query.
You can also use wildcards in a Graphite query (or a regex in InfluxDB) to create generic graphs that will add series to the graph as they come in.
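For instance, a Graphite wildcard target along these lines (the metric path is hypothetical; substitute your own naming scheme) will pick up the exception series automatically once the application starts reporting it:

```
aliasByNode(myapp.*.exception.count, 1)
```

The `*` matches every series under that path, including ones that do not exist yet, so the 'Exception' counter is drawn as soon as its first data point arrives, with the host segment used as the series alias.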