Google Data Studio: Visualize Count of Distinct Events in a Pie Chart

We have a list of tasks on the website that all users need to perform before they are able to apply for a job; these are tracked by Events.
I'm able to create a table by user ID that shows all the tasks completed by that user, and I'm also able to create a table that shows a distinct count of all the tasks they've finished.
To count the task events, I've created a new metric:
COUNT_DISTINCT(Event Action)
Then I use a filter to show only the tasks I'm interested in.
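The filter itself is just a standard Data Studio filter on the event fields; the field and value below are placeholders for however your task events are labelled, not my exact setup:

Include | Event Category | Equal to (=) | Tasks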
Tasks completed by User:
User ID    Count of Tasks Completed
1          11
2          11
3          11
4          10
5          9
6          9
7          4
8          4
9          4
10         2
What I'd like to do is show a pie chart that displays the % of users that completed a certain number of tasks; for example, 30% completed 10 tasks, 20% completed 9 tasks, etc.

Took a while, but I finally figured this one out.
First, you have to create three new metrics: one to count distinct events, one to count distinct user IDs, and a third that converts the distinct event counts into a metric you can use to calculate a completion %.
Then I created filters to isolate the specific events I'm interested in.
Since you can't re-aggregate data or apply filters based on custom metrics in a data source, I created two separate tables with the necessary filters so that I could blend the data.
I then blended the data, created the additional filter based on the event count, and applied it to the blended source. In this case, I want to see all users who finished 11 tasks.
Blended table
Blended data source
The final step is to divide the total number of desired events by the total number of users to get a completion rate.
And here's the custom metric for that.
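The screenshot of the calculated field isn't reproduced here, but conceptually it is just the ratio of the two blended distinct counts. A rough sketch, assuming the blend exposes the user ID from the filtered table as User ID (Table 1) and the user ID from the unfiltered table as User ID (Table 2) (your field names will depend on how you named the tables in the blend):

COUNT_DISTINCT(User ID (Table 1)) / COUNT_DISTINCT(User ID (Table 2))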
This is pretty convoluted; it required taking techniques from several blog posts and combining them into one workflow.

Related

Is there a way to total estimates and completed work in Azure DevOps queries?

We've come across a number of issues with our Azure DevOps projects and are trying to surface relevant information to the management team with queries and dashboards on projects. Mainly it's just been counting the number of results for particular queries, e.g. when a status hasn't changed in 30 days, number of blocked items, total items in current sprint etc.
What we've been asked for, though, is to be able to roll up the original estimate total for all work items, and also roll up the completed work as another value. The queries and other things I've seen only seem to be able to count rather than sum, but some of the widgets I've seen do appear to sum things for graphs (I'm just looking for the values).
Can anyone suggest anything?
I imagine that you have two different options here. The first is that you could leverage the new roll-up columns and aggregate some of this information on the backlog view. Some of this makes assumptions about how you are grouping and about the hierarchy of your work items.
Add a rollup column
In the Column options dialog, choose Add a rollup column, select From quick list, and then choose from one of the options listed.
Choose from the menu provided.
Progress bar displays progress bars based on the percentage of associated descendant work items that have been completed or closed.
Total number displays the sum of descendant items or the associated fields of descendant items. Totals provide a measure of the size of a Feature or Epic based on the number of its child items. For example, Count of Tasks shows the sum of all tasks that are linked to parent items. The active or closed state is ignored.
Remaining Work of Tasks shows the sum of Remaining Work of tasks that are linked to the parent item.
If you wanted to instead see the summarized details on a dashboard, I'd recommend downloading the Query Tile PRO marketplace extension. Let's say you had a query already defined:
The options support sums based on query fields:
And so you have the tile with the summed value you want. Just swap in whatever other fields you need.

Creating Datadog alerts for when the percentage difference between two custom metrics goes over a specified percentage threshold

My current situation is that I have two different data feeds (Feed A & Feed B) and I have created custom metrics for both feeds:
Metric of Order counts from Feed A
Metric of Order counts from Feed B
The next step is to create alert monitoring for the agreed-upon threshold of difference between the two metrics. Say we have agreed that it is acceptable for Order Counts from Feed A to be within ~5% of Order Counts from Feed B. How can I go about creating that threshold and comparison between the two metrics that I have already developed in Datadog?
I would like to send alerts to myself when the % difference between the two data feeds is > 5% for a daily validation.
You might be able to get this if you...
Start creating a metric type monitor
To the far right of the metric definition, select "advanced"
Select "Add Query"
Input your metrics
In the field called "Express these queries as:", input (a-b)/b or some such
Trigger when the metric is above or equal to the threshold in total during the last 24 hours
Set Alert threshold >= 0.05
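Put together, the monitor query ends up looking roughly like the sketch below. The metric names are placeholders for whatever your custom metrics are called, the (a-b)/b form only fires when Feed A overshoots Feed B (use the absolute difference if you care about both directions), and you should swap avg for whatever time aggregation matches your daily validation:

avg(last_1d):(sum:orders.feed_a{*} - sum:orders.feed_b{*}) / sum:orders.feed_b{*} >= 0.05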
If you have trouble setting it up, you may want to reach out to support@datadoghq.com for assistance.

Identifying the last time customers ordered a particular item and having them enter a workflow

I have made a Customer search that identifies the customers that haven't purchased an item in six months or more. I've used group summaries and maximum summaries for the company name and the maximum transaction date, which refers to the latest sales order containing a certain item. The idea is to send them an email. However, the workflow is only executing on 20 records at a time. I even ran a search that was not summarized and the workflow still only executed on 20 records. I did use the "Execute Now" button in testing mode to see how many entered the workflow from the summary search, but each execution only yields 20 workflow instances. The searches yield about 213 and 300 records respectively. I appreciate any insight!
Workflows using searches will only process 20 records when in testing mode.
From SuiteAnswers Article 36738 (NetSuite Login required)
When you execute a workflow on demand, NetSuite only processes the first 20 records returned by the saved search. For example, if the saved search for a scheduled workflow returns 1000 records, the workflow only initiates on the first 20 records returned by the saved search.

Possible to load the latest available datapoint and discard the rest in Druid?

Consider raw events (alpha set in Druid parlance) of the form timestamp | compoundId | dimension 1 | dimension 2 | metric 1 | metric 2
Normally in Druid, data can be loaded onto realtime nodes and historical nodes based on some rules. These rules seem to be related to time ranges, e.g.:
load the last day of data on boxes A
load the last week (except last day) on boxes B
keep the rest in deep storage but don't load segments.
In contrast I want to support the use-case of:
load the last event for each given compoundId on boxes A, regardless of whether that last event happened today or yesterday.
Is this possible?
Alternatively, if the above is not possible, I figured it would perhaps be possible as a workaround to create a betaset (at the finest granularity level) as follows:
Given an alphaset with schema as defined above, create a betaset so that:
all events for a given compoundId are rolled-up.
metric1 and metric2 are set to the metrics from the last occurring (largest timestamp) event.
Any advice much appreciated.
I believe the first and last aggregators are what you are looking for.
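For example, a groupBy query over the full interval with a last aggregator per metric returns, for each compoundId, the metric values of its most recent event. The dataSource name and interval below are placeholders, and doubleLast assumes the metrics are doubles (use longLast for longs):

{
  "queryType": "groupBy",
  "dataSource": "alpha_set",
  "granularity": "all",
  "intervals": ["1000-01-01/3000-01-01"],
  "dimensions": ["compoundId"],
  "aggregations": [
    { "type": "doubleLast", "name": "metric1", "fieldName": "metric1" },
    { "type": "doubleLast", "name": "metric2", "fieldName": "metric2" }
  ]
}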

How to create a chart in Kibana from a set number

I have a task that sends Kibana the number of files it's supposed to process, and it then sends an event for each file that finishes. How can I configure Kibana to give me a pie chart of remaining files versus finished files? (If it's impossible to do with a pie chart, I'd like to hear about other charts that could do it.)
Ideally, if for example I have 20 files and 5 have finished, I want my pie to be three quarters one color (waiting files) and one quarter another color (finished files).
If you're going to use a pie chart, you might have to differentiate whether a file is a finished one or a waiting one by using a filters aggregation. You can have a look at this for more about using the filter.
So in your case, let's assume you have a field called status with the distinct values waiting and finished. What you can do is add two filters containing:
filter 1
status:'waiting'
filter 2
status:'finished'
The above would split your pie chart into two sections, one containing the waiting events and the other containing the finished events, with two different colors. This is just a thought so that you can reproduce it. Hope it helps!
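Under the hood, a pie chart split by filter buckets is effectively an Elasticsearch filters aggregation, so you can sanity-check the split with a request like the one below (the files-* index pattern and the status field name are assumptions based on your description):

POST files-*/_search
{
  "size": 0,
  "aggs": {
    "file_status": {
      "filters": {
        "filters": {
          "waiting":  { "query_string": { "query": "status:waiting" } },
          "finished": { "query_string": { "query": "status:finished" } }
        }
      }
    }
  }
}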