What is the difference between severity and priority? Which of these can be set through rules or API? - pagerduty

I see alerts and incidents having two attributes which sound similar:
priority
severity
What is the difference between them? How are they set?

Alerts in PagerDuty can be generated with a severity field. These severity values can be directly provided from the triggering monitoring tool, or set using event rules.
When an incident is generated from an alert, the alert’s severity field is used to determine the urgency level. The values of this field must be one of the following: critical, error, warning, or info.
More info here: https://support.pagerduty.com/docs/dynamic-notifications#eventalert-severity-levels
More information on Priority: https://support.pagerduty.com/docs/incident-priority
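To make the severity constraint concrete, here is a minimal sketch of building a PagerDuty Events API v2 payload with a `severity` field, validating it against the four allowed values before sending. The routing key, summary, and source are placeholders; a real integration would POST this JSON to `https://events.pagerduty.com/v2/enqueue`.

```python
import json

# Allowed values for the severity field on a PagerDuty alert event
# (Events API v2); anything else is rejected by the API.
ALLOWED_SEVERITIES = {"critical", "error", "warning", "info"}

def build_event(routing_key, summary, source, severity):
    """Build an Events API v2 payload; raise if severity is invalid."""
    if severity not in ALLOWED_SEVERITIES:
        raise ValueError(f"invalid severity: {severity!r}")
    return {
        "routing_key": routing_key,   # integration key (placeholder)
        "event_action": "trigger",
        "payload": {
            "summary": summary,
            "source": source,
            "severity": severity,     # feeds the severity-to-urgency mapping
        },
    }

event = build_event("YOUR_ROUTING_KEY", "Disk usage above 90%", "db-01", "warning")
print(json.dumps(event, indent=2))
```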

Related

Create alert when Kubernetes pods "Does not have minimum availability" on Google Cloud Platform

I want to set an alert policy for when there aren't enough pods in my Deployment. There are tons of metrics in Kubernetes and I am not sure which to use.
Just choosing CPU utilisation might work as a hack, but that might still miss cases where a container crashes and backs off - I am not too sure.
Edit: the hack above doesn't really work - perhaps I should check requested cores instead?
Here is the step-by-step procedure for creating a log-based metric and then an alert based on it.
Create a Log-Based Metric in the console
a. Go to Logging -> Log-Based Metrics -> Create Metric
b. Select Counter as the Metric type
c. In Details, give any log name (e.g. user/creation)
d. In Filter, provide the following:
resource.type="k8s_pod"
severity>=WARNING
jsonPayload.message="<error message>"
You can replace the filter with something more appropriate for your
case; refer to the Logging query language documentation for details
e. Leave the other fields at their defaults
f. Then create the metric
Create an Alert Policy:
a. Go to Monitoring -> Alerting
b. Select Create Policy -> Add Condition. In the "Find resource type and
metric" field, enter:
Resource type: gke_container
Metric: logging/user/user/creation (i.e. logging/user/<log name from step 1>)
(both Resource type & Metric go in the same field)
In Filter: project_id=<your project id>
In Configuration: Condition triggers if: All time series violate,
Condition: is above, Threshold: 0, For: most recent value
c. Leave the other fields at their defaults
d. Click Add, then NEXT
e. Under Notification Channels, go to Manage notification channels; this
redirects you to a new page where you select Email -> Add new (provide
the email where you want to receive notifications & a display name)
f. Refresh the previous tab; your display name now appears under
Notification channels. Check its box and click OK
g. Check the box for Notify on incident closure & click Next
h. Provide an alert name & save the changes.
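The console filter from step 1d can also be assembled and sanity-checked programmatically before pasting it into the console (or into an equivalent `gcloud logging metrics create` invocation). This is just a string-building sketch; the resource type, severity, and error message below are the placeholders from the steps above.

```python
# Assemble the log filter used for the log-based metric (step 1d).
# Clauses on separate lines are implicitly ANDed in the Logging query language.
def build_log_filter(resource_type, min_severity, message):
    clauses = [
        f'resource.type="{resource_type}"',
        f"severity>={min_severity}",
        f'jsonPayload.message="{message}"',
    ]
    return "\n".join(clauses)

log_filter = build_log_filter("k8s_pod", "WARNING", "<error message>")
print(log_filter)
```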

Kubernetes: validating update requests to custom resource

I created a custom resource definition (CRD) and its controller in my cluster, now I can create custom resources, but how do I validate update requests to the CR? e.g., only certain fields can be updated.
The Kubernetes docs on Custom Resources has a section on Advanced features and flexibility (never mind that validating requests should be considered a pretty basic feature 😉). For validation of CRDs, it says:
Most validation can be specified in the CRD using OpenAPI v3.0 validation. Any other validations supported by addition of a Validating Webhook.
The OpenAPI v3.0 validation won't help you accomplish what you're looking for, namely ensuring immutability of certain fields on your custom resource. It's only helpful for stateless validations, where you look at one instance of an object and determine whether it's valid on its own; you can't compare it to a previous version of the resource and validate that nothing has changed.
You could use Validating Webhooks. It feels like a heavyweight solution, as you will need to implement a server that conforms to the Validating Webhook contract (responding to specific kinds of requests with specific kinds of responses), but you will have the required data at least to make the desired determination, e.g. knowing that it's an UPDATE request and knowing what the old object looked like. For more details, see here. I have not actually tried Validating Webhooks, but it feels like it could work.
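To illustrate the shape of that determination, here is a minimal sketch of the decision logic such a webhook would run for UPDATE requests, comparing the old and new objects from the AdmissionReview request. The field names (`spec`, `director`) are illustrative; a real webhook wraps this logic in an HTTPS server speaking the AdmissionReview request/response schema.

```python
# Hypothetical immutable field on the custom resource.
IMMUTABLE_FIELDS = ["director"]

def validate_update(admission_request):
    """Return (allowed, message) for an AdmissionReview request dict."""
    if admission_request.get("operation") != "UPDATE":
        return True, ""
    # The webhook receives both the previous and the proposed object,
    # which is exactly what OpenAPI validation cannot give you.
    old_spec = admission_request["oldObject"].get("spec", {})
    new_spec = admission_request["object"].get("spec", {})
    for field in IMMUTABLE_FIELDS:
        if old_spec.get(field) != new_spec.get(field):
            return False, f"spec.{field} is immutable"
    return True, ""

# Example AdmissionReview fragment where an immutable field was mutated:
req = {
    "operation": "UPDATE",
    "oldObject": {"spec": {"director": "vbox-admin"}},
    "object": {"spec": {"director": "bad-new-director-name"}},
}
allowed, msg = validate_update(req)
print(allowed, msg)  # False spec.director is immutable
```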
An alternative approach I've used is to store the user-provided data within the Status subresource of the custom resource the first time it's created, and then always look at the data there. Any changes to the Spec are ignored, though your controller can notice discrepancies between what's in the Spec and what's in the Status, and embed a warning in the Status telling the user that they've mutated the object in an invalid way and their specified values are being ignored. You can see an example of that approach here and here. As per the relevant README section of that linked repo, this results in the following behaviour:
The AVAILABLE column will show false if the UAA client for the team has not been successfully created. The WARNING column will display a warning if you have mutated the Team spec after initial creation. The DIRECTOR column displays the originally provided value for spec.director and this is the value that this team will continue to use. If you do attempt to mutate the Team resource, you can see your (ignored) user-provided value with the -o wide flag:
$ kubectl get team --all-namespaces -o wide
NAMESPACE   NAME   DIRECTOR     AVAILABLE   WARNING   USER-PROVIDED DIRECTOR
test        test   vbox-admin   true                  vbox-admin
If we attempt to mutate the spec.director property, here's what we will see:
$ kubectl get team --all-namespaces -o wide
NAMESPACE   NAME   DIRECTOR     AVAILABLE   WARNING                                               USER-PROVIDED DIRECTOR
test        test   vbox-admin   true        API resource has been mutated; all changes ignored    bad-new-director-name
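The reconcile logic behind that behaviour can be sketched in a few lines: on first reconcile, copy the user-provided value into the Status subresource; afterwards, always use the stored value and surface a warning when the Spec drifts. The field names are illustrative, matching the `director` example above.

```python
def reconcile(resource):
    """Status-based immutability: first write wins, later spec edits are ignored."""
    spec = resource.setdefault("spec", {})
    status = resource.setdefault("status", {})
    if "director" not in status:
        # First reconcile: persist the original value in the Status subresource.
        status["director"] = spec.get("director")
        status["warning"] = ""
    elif spec.get("director") != status["director"]:
        # Spec was mutated after creation: keep the stored value, warn the user.
        status["warning"] = "API resource has been mutated; all changes ignored"
    return resource

team = {"spec": {"director": "vbox-admin"}}
reconcile(team)
team["spec"]["director"] = "bad-new-director-name"  # user mutates the Spec
reconcile(team)
print(team["status"])
```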

Timestamp filter for Sentry API for events

I see that Sentry has an API to list the events in a project: https://docs.sentry.io/api/events/get-project-events/
But there doesn't seem to be any filter for start and end times here to get say, the events that occurred in the last one hour.
Is there any such filter available that I'm missing?
I think what you want is to first search by issues using https://docs.sentry.io/api/events/get-project-group-index/, and then get the latest event for each issue (if you need that level of detail) using https://docs.sentry.io/api/events/get-group-events-latest/
The first endpoint allows you to add ?query= at the end to use the same search queries you're used to from the UI. You can filter by time range over lastSeen there.
It is related to this feature request:
Show only events which match filter in Issues screen. (also event & user count)
https://github.com/getsentry/sentry/issues/15189
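As a concrete sketch of that first call, here is how the issues request could be built with a `lastSeen` time filter in the query string, using the same search syntax as the UI. The organization/project slugs and the one-hour window are placeholders, and `statsPeriod` only controls the stats returned per issue.

```python
from urllib.parse import urlencode

# Hypothetical org/project slugs; replace with your own.
base = "https://sentry.io/api/0/projects/my-org/my-project/issues/"

params = {
    "query": "lastSeen:-1h",   # issues last seen within the past hour
    "statsPeriod": "24h",      # time span for the per-issue stats counts
}
url = f"{base}?{urlencode(params)}"
print(url)
```

An authenticated GET on this URL (with a bearer token) would return the matching issues, whose latest events can then be fetched individually.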

Rejection of FIX order modification: what happens to the original order?

Could anyone point me to the relevant section of the FIX spec pertaining to rejected order modification?
Please consider the following scenario:
A limit order (NewOrderSingle: ClOrdID='blah.0') is placed and
confirmed as submitted by the broker
Modification request of the order (OrderCancelReplaceRequest:
ClOrdID='blah.1'; OrigClOrdID='blah.0') gets rejected due to, say,
limit violation
What happens to the original order (ClOrdID='blah.0')? Is it still considered valid and can be filled? Does the FIX specification define the expected behavior for such scenarios and the expected state of the original order?
TL;DR
You should consult your counterparty's FIX specification document(s) for the exact behavior to expect from that specific counterparty when an attempt to replace a working order is rejected.
Long answer
Assuming nothing has happened to the original order 11=blah.0 between the time it was placed and the OrderCancelReplaceRequest with 11=blah.1|41=blah.0 was sent and rejected (e.g., fill, partial fill(s), external cancel), the original order 11=blah.0 should still be working, and can be filled.
There is nothing in the FIX specification that states the exact expected outcome when an attempt to replace a working order is rejected. Since most exchanges/brokers use some flavor of FIX 4.2, I'll point to the documentation for that version:
Order Cancel Reject - The order cancel reject message is issued by the broker upon receipt of a cancel request or cancel/replace request message which cannot be honored. Requests to change price or decrease quantity are executed only when an outstanding quantity exists. Filled orders cannot be changed (i.e. quantity reduced or price change. However, the broker/sellside may support increasing the order quantity on a currently filled order).
In the message specification it has:
Tag | Field Name | Req'd | Comments
39 | OrdStatus | Y | OrdStatus value after this cancel reject is applied.
Whatever the counterparty provides for OrdStatus in the OrderCancelReject message is the state of the original order. I have never run into any counterparty that cancels the original order when a replace request is rejected, but I suppose it's possible. If a counterparty does handle the situation this way, any documentation provided by the counterparty should clearly state so.
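To make the tag-39 check concrete, here is a minimal sketch that parses an OrderCancelReject (35=9) and reads OrdStatus to learn the state of the original order. The sample message is illustrative, uses `|` in place of the SOH delimiter, and includes CxlRejResponseTo (434=2) to mark it as a reply to a cancel/replace request.

```python
# A few common OrdStatus (tag 39) values from FIX 4.2.
ORD_STATUS = {
    "0": "New",
    "1": "Partially filled",
    "2": "Filled",
    "4": "Canceled",
    "8": "Rejected",
}

def parse_fix(msg, delimiter="|"):
    """Split a FIX message into a tag -> value dict."""
    return dict(field.split("=", 1) for field in msg.split(delimiter) if field)

# Illustrative OrderCancelReject for the scenario in the question:
reject = "8=FIX.4.2|35=9|11=blah.1|41=blah.0|39=0|434=2|"
fields = parse_fix(reject)

# Tag 39 is the OrdStatus after the cancel reject is applied: here the
# original order blah.0 is still working ("New") and can be filled.
print(ORD_STATUS[fields["39"]])  # New
```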

Google Analytics API TotalEvents query

I'm trying to get a total count for a given event; however, I get a number much bigger (10-20 times bigger) than I see on the GA website. What am I doing wrong? (API v3)
Here is the segment:
metric:
ga:totalEvents
segment:
dynamic::ga:eventCategory==mycategory;ga:eventAction==myaction;ga:eventLabel==mylabel
Note that I get the wrong results with the Query Explorer as well.
You are using a segment instead of a filter.
Segments are session-based, so a segment will include all events in a session if the session matches your specification. So essentially, if I triggered the event you want plus other events, they would all be included in the totalEvents count.
What you need to do is remove the segment and add a filter; the filter will include only the data you request.
Hope that helps
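The same three conditions from the question, expressed as a filter instead of a dynamic segment, might look like this for a Core Reporting API v3 request. The view ID and dates are placeholders; the point is that `filters` restricts the individual hits counted, not whole sessions.

```python
from urllib.parse import urlencode

# ';' between conditions means AND in the v3 filters syntax.
filters = ("ga:eventCategory==mycategory;"
           "ga:eventAction==myaction;"
           "ga:eventLabel==mylabel")

params = {
    "ids": "ga:12345678",        # placeholder view (profile) ID
    "start-date": "2020-01-01",  # placeholder date range
    "end-date": "2020-01-31",
    "metrics": "ga:totalEvents",
    "filters": filters,          # hit-level restriction, unlike a segment
}
print(urlencode(params))
```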