I want to create a graph that shows the total capacity for each week relative to the remaining availability across a series of specific dates. When I attempt this in Power BI, it calculates one of the values (remaining availability) correctly, but the total capacity comes out much higher than my manual calculation: it shows the total for the entire column rather than for each specific date.
Why is Power BI doing this, and how can I solve it?
So far, I have tried generating the graph like this:
(https://i.stack.imgur.com/GV3vk.png)
and as you can see, the capacity values are far too high; they should be 25 days.
The total availability values are correct (ranging from 0 to 5.5 days).
When I create matrices to inspect the sum breakdown, the values are correct; it is only when the two are combined in the chart that one of them changes to the total for the whole column.
If anyone could help me with this issue that would be great! Thanks!
I am using Power BI to produce several reports, one of which is the NPS score for support. However, I am running into an issue with the clustered column chart: it shows each value against the grand total rather than per row.
What I want to see is the following (within Excel):
The NPS score is shown as a percentage for each week.
e.g. Week 3 has the Promoter at 95.5% and Detractor at 4.5%
However, when using Power BI, I am shown the following, which is a percentage of the grand total instead of each week.
Using a Matrix, I could see the following as total numbers.
I can copy this Matrix and show it as a Percentage of each Row, which is also correctly showing the results.
I have the dates already set up using a feeder table, which lets me get the week number etc. from a date within the main raw data, so they sort in the correct order.
My Chart is using the following table entries
Cal Week and WeekNo are both from the feeder table (Fiscal)
Net Promoter and Count of Case Num are from the RawData table.
How can I get the chart to show the percentages per week instead of the total?
I am also planning to use slicers to filter down further, for example, Regions (which are in the RawData).
I believe I will need to add an extra column to the RawData, but I have no idea what to put in it, how to use it in the chart, or how to keep it working with the slicers.
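To make concrete what I'm after, here is the calculation sketched in Python with made-up case data (the week numbers and categories are invented, not from my report): the denominator for each bar must be that week's own case count, not the grand total.

```python
from collections import defaultdict

# Hypothetical survey rows: (week number, Net Promoter category)
rows = [
    (3, "Promoter"), (3, "Promoter"), (3, "Detractor"),
    (4, "Promoter"), (4, "Detractor"), (4, "Detractor"),
]

# Count cases per week and per (week, category)
week_totals = defaultdict(int)
week_cat = defaultdict(int)
for week, category in rows:
    week_totals[week] += 1
    week_cat[(week, category)] += 1

# Each bar should be a share of its *week's* total, not the grand total
pct = {(w, c): n / week_totals[w] * 100 for (w, c), n in week_cat.items()}
```

I assume the Power BI equivalent is a measure that fixes the denominator to the current week while keeping the slicer filters, but that measure is exactly what I don't know how to write.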
Any help would be greatly appreciated.
Thanks
DD
I'm looking for a way to display a line showing the average value of a parameter on a chart dedicated to that parameter's evolution.
I have a dataset; let's take the following structure as an example:
Product | Month   | Price
P1      | 2021-01 | 13.00
P1      | 2021-02 | 13.50
P1      | 2021-03 | 15.00
P1      | 2021-04 | 14.50
P2      | 2021-01 | 3.00
P2      | 2021-02 | 3.50
P2      | 2021-03 | 5.00
P2      | 2021-04 | 4.50
In a chart, I display the price's evolution for each selected product (multi-select filter upstream), and I would like to add, also for each product, a line showing the average over the displayed period.
So far I have tried two different approaches:
Use multiple series and add one dedicated to this average. But I did not manage to calculate this average: to display the initial chart (the evolution of my property), it looks like I must use an aggregation function, since each layer type requires defining series whose first parameter is an aggregation function.
Create a summary dataset with aggregate values for each product and the latest calculation date. It looks like this:
Product | Latest Month | Avg Price | Max Price | Min Price
P1      | 2021-04      | 14.00     | 15.00     | 13.00
P2      | 2021-04      | 4.00      | 5.00      | 3.00
But I'm not able to overlay these values, as there is no time series to define the same X-axis.
I considered a third solution, but it looks dirty to me: add the aggregate values to the first dataset. Each row would contain the avg/max/min values for the period, so that I can display these values the same way as any other property.
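For clarity, this third solution amounts to the following transformation, sketched here in Python on the sample data above (outside the tool, just to show the intent):

```python
# Sample rows: (product, month, price)
rows = [
    ("P1", "2021-01", 13.00), ("P1", "2021-02", 13.50),
    ("P1", "2021-03", 15.00), ("P1", "2021-04", 14.50),
    ("P2", "2021-01", 3.00), ("P2", "2021-02", 3.50),
    ("P2", "2021-03", 5.00), ("P2", "2021-04", 4.50),
]

# Group prices by product
by_product = {}
for product, month, price in rows:
    by_product.setdefault(product, []).append(price)

# Denormalise: each row carries the product-level aggregates so the
# average can be plotted on the same X-axis as the monthly prices
enriched = [
    (product, month, price,
     sum(by_product[product]) / len(by_product[product]),
     max(by_product[product]), min(by_product[product]))
    for product, month, price in rows
]
```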
Finally, writing this post made me wonder whether I have really understood how this tool works, as I feel that what I implemented should already have displayed the average values I'm looking for; yet it's the only way I found to display a "simple" property's evolution.
Thanks in advance for your help.
There are a few considerations at play here with regards to building your charts that lead to different approaches when it comes to deciding how to represent your data.
One point up front about charting in Workshop: as you've observed, the chart expects you to aggregate the granular per-object data to create each data point of your visualization. If you instead want to draw some feature of the chart (a dot or bar) per object, then you'll need to select an appropriately narrow bucket size. In this case, if you have one object per product per month, then choosing a granularity of monthly or finer should result in one data point per object.
As for options related to deriving the average, let's look at three approaches:
Temporal Metrics Schema
Creating an ontology with a primary object (i.e. the "Product") and then a linked object type for storing values of metrics about that object (i.e. the "Product Metrics") can be a flexible approach that works well with Workshop and Quiver charting expectations.
Consider a modification of your original granular schema like this:
Product | Timestamp | Value | Metric Type
P1      | 2021-01   | 13.00 | Monthly Price
P1      | 2021-02   | 13.50 | Monthly Price
P1      | 2021-03   | 15.00 | Monthly Price
P1      | 2021-04   | 14.50 | Monthly Price
P2      | 2021-01   | 3.00  | Monthly Price
P2      | 2021-02   | 3.50  | Monthly Price
P2      | 2021-03   | 5.00  | Monthly Price
P2      | 2021-04   | 4.50  | Monthly Price
P1      | 2021-03   | 14.00 | Quarterly Average
P2      | 2021-03   | 4.00  | Quarterly Average
...
This schema is quite flexible and robust; you can easily add new metrics later, or even use a tool like Taurus to let users define their own rules to generate metrics that fit into this schema. It has the advantage of storing the metric type as data itself, which means that in your Workshop app, for example, you can let the user choose, using a Filter List widget, which metrics to display on a chart.
This pattern also ensures consistency of what the date "means" when presented to the user. Having, for example, a quarterly average pre-calculated means that every user will get the same information from reviewing the chart, regardless of what time period they filter to, whereas a dynamic average based on the user's selection could lead two different users to quite different conclusions based on how they chose to filter the data.
And finally, for this pattern, it becomes quite easy to show the chart itself, since you simply choose to plot the filtered object set of metrics and choose the "Metric Type" as the series property, bucket by a small granularity (say "Day") and have the chart interpolate any gaps. This means that even aperiodic metrics along with metrics recorded at different periods can all render on the same chart.
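To make the derivation concrete, here is a small Python sketch of building that long-format metrics table (the products, values, and quarter boundary are illustrative, and the pipeline that produces this in practice would be a Foundry transform):

```python
# Hypothetical granular rows: (product, month, price)
monthly = [
    ("P1", "2021-01", 10.0), ("P1", "2021-02", 12.0), ("P1", "2021-03", 14.0),
    ("P2", "2021-01", 2.0), ("P2", "2021-02", 4.0), ("P2", "2021-03", 6.0),
]

# Granular values become "Monthly Price" metrics as-is
metrics = [(p, m, v, "Monthly Price") for p, m, v in monthly]

# Append one pre-calculated "Quarterly Average" row per product,
# stamped with the quarter's last month (Q1 here is 2021-01..2021-03)
for product in sorted({p for p, _, _ in monthly}):
    values = [v for p, _, v in monthly if p == product]
    metrics.append((product, "2021-03", sum(values) / len(values), "Quarterly Average"))
```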
This pattern is somewhat formalized with the nascent Time-Dependent Property feature of the Foundry Ontology. If the Time-Dependent Property feature is available on your Foundry instance, you can read more about it in the Ontology product documentation in the "How to create a Time-Dependent Property" section.
Dynamic Charts with Functions
Let's say you don't want to precompute the metrics for whatever reason, and instead want your chart to render a line based exactly on the average of the price values in the object set. One approach is to use a Function-backed Chart with a simple TypeScript function that takes an object set of price information as a parameter and returns a TwoDimensionalAggregation representing two data points: the first and last timestamp of the period covered by the input object set, each paired with the average value calculated across the price values. (Or a ThreeDimensionalAggregation, since you probably want these two data points for each product.)
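The real implementation would be a TypeScript Function using the Foundry aggregation types, but the shape of the computation is simple; here is the logic sketched in Python on hypothetical data:

```python
# Hypothetical object set: (product, timestamp, price)
prices = [
    ("P1", "2021-02", 13.5), ("P1", "2021-01", 13.0),
    ("P1", "2021-03", 15.0), ("P1", "2021-04", 14.5),
    ("P2", "2021-01", 3.0), ("P2", "2021-04", 4.5),
]

def average_line(rows):
    """For each product, return two (timestamp, value) points -- the
    first and last timestamp of the period, both carrying the period
    average -- which render as a horizontal line on the chart."""
    out = {}
    for product in {p for p, _, _ in rows}:
        mine = sorted(r for r in rows if r[0] == product)
        avg = sum(v for _, _, v in mine) / len(mine)
        out[product] = [(mine[0][1], avg), (mine[-1][1], avg)]
    return out
```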
You can find clear steps in the Workshop and Functions product documentation for producing Function-backed Charts as well as examples of various Typescript Function implementations in the Foundry Training and Resources project on your Foundry instance.
Dynamic Charts using Quiver
The Workshop XY Chart is still under active development and a number of features that might be useful are not yet available. In some circumstances creating the chart in Quiver and embedding it in your Workshop app with the Quiver Canvas widget can give you flexibility to build charts with "derived" values that you cannot currently accomplish directly with the Workshop chart.
I'm adding this for completeness; I don't actually think it'd be the best solution in this specific case. The power in this pattern comes from taking an object-backed bar or line chart in Quiver and using the "Convert to Timeseries" feature to unlock Quiver's timeseries plotting and transformation capabilities. You can check out the Quiver documentation for more guidance on how to create object-derived timeseries and how to turn a Quiver canvas into an Object Template to be embedded elsewhere.
As far as I understand, according to @Logan's answer, the documentation, and my tests, the solution rests on a key feature I had not really understood before: a chart always displays an aggregated value of a property.
It's something I noticed and mentioned in my question, and I have just understood how this aggregation is performed: there is no parameter to define an interval of any kind, but when your X-axis is a date you must select a bucket, and that bucket is in fact the aggregation interval. If your data is daily and you display values at a weekly level, the aggregation covers 7 days.
Thus, the solution I found was simply to add a second layer identical to my monthly one, changing only its bucket to yearly (which is actually what I was looking for; my sample data covered just one quarter because I did not realize the period would matter).
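To check my understanding, I reproduced the bucketing behavior in plain Python on made-up daily data, assuming an average aggregation: values falling in the same weekly bucket collapse into one point.

```python
from datetime import date, timedelta

# Hypothetical daily series starting on a Monday (2021-01-04)
daily = [(date(2021, 1, 4) + timedelta(days=i), float(i)) for i in range(14)]

# Bucketing by ISO week and averaging is, as I understand it, what the
# chart does implicitly when I pick a weekly bucket with an AVG series
buckets = {}
for day, value in daily:
    buckets.setdefault(day.isocalendar()[1], []).append(value)
weekly = {week: sum(vs) / len(vs) for week, vs in buckets.items()}
```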
But if I needed to visualize a quarterly average (@Logan's answer made me wonder), how should I proceed?
I assume I have several approaches: the ones described by @Logan, even if I'm still doubtful about the first one. At least it looks like a function-backed chart would work, but I do not know TypeScript at all, so I could not implement such functions. Otherwise, preparing the data in the same dataset, or in another one designed to be displayed on the same scale, might work as well.
How to deal with a rolling average?
Well, I'm not sure it's possible; the only solution I see is a function-backed chart, but again, I do not know TypeScript... I would probably use Slate instead, where I'm sure I'll be able to implement it.
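For reference, the logic I would need is just a trailing window average; a small Python sketch (the window size of 3 is arbitrary):

```python
def rolling_average(values, window=3):
    """Trailing rolling average; the first points use the shorter
    prefix that is available (window size is arbitrary here)."""
    result = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        result.append(sum(chunk) / len(chunk))
    return result
```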
Of course, any comment is welcome here, as I am still in a discovering phase of this tool.
I am trying to use Tableau's row total function but am running into a challenge. In the same widget I have Rows 1 - 4 with Numbers. Row 5 is a percentage.
What I would like to do is have Rows 1 - 4 use a Sum Total and Row 5 use an Average total.
Any suggestions on how I can do this?
Thanks,
I don't believe you can use different total metrics on the same worksheet.
What you can do is create two different worksheets, bring them side by side on a dashboard, and then use the proper total metric in each.
But beware of averaging percentages, because the result can be misleading. Usually a weighted average is required to accurately express the "average" of a percentage.
What you can do instead is actually calculate the percentage (with a calculated field) as the division of two metrics. That way, when you add totals, you will actually get a valid value for the "average" of the percentage.
As an exercise, suppose you have sales (in $) in first row, and # of clients in row 2. Now I create a calculated field called ticket, that is
SUM([sales]) / SUM([# of clients])
That way I can add it as a third row, and each column will show the right ticket value; and if I add a row grand total, I'll get the actual average ticket (that is, total sales / total # of clients), because Tableau will sum all sales, sum all # of clients, and then perform the calculation (the division).
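The same exercise run in Python (numbers invented) shows why the calculated-field approach gives the right grand total while a naive average of the per-column percentages does not:

```python
# Hypothetical per-column data: (sales in $, number of clients)
columns = [(1000.0, 10), (300.0, 30)]

# Average of the per-column tickets -- misleading as a grand total
avg_of_ratios = sum(s / c for s, c in columns) / len(columns)

# Ratio of the sums -- what the calculated field's grand total computes,
# and the correct weighted "average" ticket
ratio_of_sums = sum(s for s, _ in columns) / sum(c for _, c in columns)
```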
The problem I'm trying to solve is finding the right similarity metric, rescorer heuristic and filtration level for my data. (I'm using 'filtration level' to mean the amount of ratings that a user or item must have associated with it to make it into the production database).
Setup
I'm using Mahout's Taste collaborative filtering framework. My data comes in the form of triplets where an item's ratings are contained in the set {1,2,3,4,5}. I'm using an item-based recommender atop a log-likelihood similarity metric. I filter out users who rate fewer than 20 items from the production dataset. RMSE looks good (around 1.17) and there is no data capping going on, but there is an odd, undesirable behavior that borders on error-like.
Question
First Call -- Generate a 'top items' list with no info from the user. To do this I use what I call a centered sum:
center = (5 + 1) / 2  # midpoint of the scale, if you allow ratings from 1 to 5
for item in items:
    score[item] = sum(r - center for r in ratings[item])
I use a centered sum instead of average ratings to generate a top items list mainly because I want the number of ratings that an item has received to factor into the ranking.
Second Call -- I ask for 9 items similar to each of the top items returned by the first call. For each top item, 7 out of the 9 similar items returned are the same as those returned for the other top items!
Is it about time to try some rescoring? Maybe multiplying the similarity of two games by (number of co-rated items)/x, where x is tuned (starting around 50).
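Concretely, the rescoring I have in mind would look something like this Python sketch (the clamp at 1.0 is my own addition so well-supported pairs are left untouched; x is the tuning knob):

```python
def rescored(similarity, co_rated, x=50.0):
    """Down-weight the similarity of item pairs with few co-raters.
    Pairs with at least x co-rated items keep their full similarity;
    sparser pairs are scaled down proportionally."""
    return similarity * min(1.0, co_rated / x)
```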
Thanks in advance fellas
You are asking for 50 items similar to some item X. Then you look for 9 similar items for each of those 50. And most of them are the same. Why is that surprising? Similar items ought to be similar to the same other items.
What's a "centered" sum? Ranking by sum rather than by average still gives you a relatively similar output if the number of ratings in each sum is roughly similar.
What problem are you trying to solve? None of this seems to have a bearing on the recommender system you describe, which you say works. Log-likelihood similarity is not even based on ratings.